
Installing Oracle on Docker (Part 1)

Xebia Blog - Fri, 09/19/2014 - 13:56

I spent Xebia’s Innovation Day last August experimenting with Docker in a team with two of my colleagues and a guest. Docker sounded really cool to us, especially if your goal is to run software that doesn’t require lots of infrastructure and can be installed easily, e.g. because it runs from a jar file. We wondered, however, what would happen if we tried to run enterprise software like an Oracle database: software that is notoriously difficult to install and choosy about the infrastructure it runs on. Hence our aim for the day: install an Oracle database on CoreOS and Docker.

We chose CoreOS because of its small footprint and the fact that it is easily installed in a VM using Vagrant. We used the default Vagrantfile and CoreOS files with one modification: $vb_memory = 2024 in config.rb, which gives the VM enough memory for Oracle’s pre-installer to run. The config files we used can be found here:

Starting with a default CoreOS install we then implemented the steps described here:
Below is a snippet from the first version of our Dockerfile (tag: b0a7b56).
FROM centos:centos6
# Step 1: Setting Hostname
# Step 2
RUN yum -y install wget
RUN wget --no-check-certificate -O /etc/yum.repos.d/public-yum-ol6.repo
RUN wget --no-check-certificate -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
RUN yum -y install oracle-rdbms-server-11gR2-preinstall

Note that this takes a while because the pre-installer downloads a number of packages that are missing from the CentOS base image.
Execute this in a shell:
vagrant up
vagrant ssh core-01
cd ~/share/docker
docker build -t oradocker .

This seemed like a good time to do a commit to save our work in Docker:
docker ps # note the container ID and substitute it for 07f7389c811e in the commit command below
docker commit -m "executed pre installer" 07f7389c811e janv/oradocker

At this point we studiously ignored some of the advice listed under ‘Step 2’ in Tecmint’s install manual, namely adding the HOSTNAME to /etc/sysconfig/network, allowing access to the xhost (why would anyone want that?), and mapping an IP address to a hostname in /etc/hosts. Setting the hostname through ‘ENV HOSTNAME’ had no real effect as far as we could tell; we tried it, but it didn’t seem to work. Denying reality and substituting our own, we just ploughed on…

Next we added commands to the Dockerfile that create the oracle user, copy the relevant installer files and unzip them. Docker starts a build by sending the build context to the Docker daemon, which takes quite some time because the Oracle installer files are large. There is probably a way to avoid this, but we didn’t look into it. Unfortunately Docker copies the installer files each time you run docker build -t …, only to conclude later on that nothing changed.
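One way to shrink the build context (a guess on our part; we didn’t try it on the day) is a .dockerignore file next to the Dockerfile, which tells the Docker client not to send matching files to the daemon. Note that files the Dockerfile actually COPYs, such as the installer zips, must stay in the context; for those, fetching them inside the build with RUN wget would avoid the repeated upload. The patterns below are examples only:

```
# .dockerignore - hypothetical sketch; patterns are examples only
.git
*.log
# anything else in the build directory that the Dockerfile does not need
scratch/
```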

The next version of our Dockerfile sort of works, in the sense that it starts the installer. The installer then complains about missing swap space. We fixed this temporarily at the CoreOS level by running the following commands:
sudo su -
swapfile=$(losetup -f)   # find the first free loop device
truncate -s 1G /swap     # create a 1 GB backing file
losetup $swapfile /swap  # attach the file to the loop device
mkswap $swapfile         # format the device as swap
swapon $swapfile         # enable it

We found this fix online. It works, but it doesn’t survive a reboot.
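A possible way to make the swap permanent on CoreOS (one of our ToDo’s; we have not tested this, and the unit name and paths are our own invention) is a systemd unit that enables the swap file at boot:

```
# /etc/systemd/system/swap.service -- untested sketch
[Unit]
Description=Enable swap file at boot

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/swapon /swap
ExecStop=/sbin/swapoff /swap

[Install]
WantedBy=multi-user.target
```

swapon can also use the file directly, so the losetup step above may be unnecessary if /swap is preallocated (e.g. with dd instead of truncate, since some kernels reject sparse swap files).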
Now the installer continues only to conclude that networking is configured improperly (one of those IgnoreAndContinue decisions coming back to bite us):
[FATAL] PRVF-0002 : Could not retrieve local nodename

For this to work you need to change /etc/hosts, which our Docker version doesn’t allow. Apparently this is fixed in a later version, but we didn’t get around to testing that. Maybe changing /etc/sysconfig/network is even enough, but we didn’t try that either.
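If the later Docker versions we mention do indeed fix this, the workaround would presumably look something like this at container start time; the --add-host flag exists in Docker releases newer than the one we ran, and the hostname and IP below are made up for illustration:

```shell
# add an entry to the container's /etc/hosts at run time
docker run -it --add-host oradb.example:172.17.0.2 oradocker /bin/bash
```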

The latest version of our work is on GitHub (tag: d87c5e0). The repository does not include the Oracle installer files. If you want to try for yourself, you can download the files here and adapt the Dockerfile if necessary.

Below is a list of ToDo’s:

  1. Avoid copying the large installer files if they’re not going to be used anyway.
  2. Find out what calling ‘ENV HOSTNAME’ should do.
  3. Make the swap file setting permanent on CoreOS.
  4. Upgrade our Docker version so we can change /etc/hosts.

All in all this was a useful day, even though the end result was not a running database. We hope to continue working on the ToDo’s in the future.

Categories: Companies

How I spent $75,000 during a budget freeze

Derick Bailey - new ThoughtStream - Fri, 09/19/2014 - 13:00


$75K in Hardware/Software

That’s how much I spent during a budget freeze at the company where I worked!

It was 2008 – a bad year for a lot of companies in the United States (and beyond). The company for which I worked was no exception and we were hit with a budget freeze. No one was allowed to buy anything above a small amount ($500 or so, I think) without CEO approval. It hit us all, hard. There wasn’t a single department, team or project that went unaffected. No one was happy.

A few months into the budget freeze, though, I got the CIO and CEO to sign off on my purchase order of $75,000 for a virtualization environment. It took less than 1 week to go from me saying “we need this,” to the CEO saying, “where do I sign?” – all while everyone else looked on, mouths dropping to the floor as they couldn’t get a $750 request approved.

I would love to tell you how great I am and how easy this was… but that would be a lie. I got lucky with this – but I believe it’s repeatable luck, because I’m pretty good at listening, and applying principles and patterns to new situations… and it was 2 years earlier that I learned what allowed me to spend this money in 2008.

2006: A Lesson Learned In Time / Money

In 2006, the same company needed to build a fancy user interface for a project with accessibility requirements for disabled persons. The UI control suite I had been using failed miserably. I quickly found a new one and started using it on the project. A few months later, another project needed similar things, so I proposed buying a site license for the company.

After my request was repeatedly denied, I was about ready to give up. My manager sat down with me to discuss it and asked me to justify the cost to him. How would it save the company money, or help the company earn more money? I had no answer. It wouldn’t save money – only development time.

He quickly pointed out that time is money, and after some discussion we had a rough estimate of how many hours buying the UI control suite would save us versus building it ourselves. Multiply the average developer hourly rate by the hours to build, and it was obvious that the expense of the control suite was justified. I wrote down those numbers and reasons, and got the control suite purchased.

Back to 2008: Buying In A Budget Freeze

When the budget freeze hit, I knew we needed those virtualization servers, but I couldn’t justify the cost. Then something bad happened, and an opportunity presented itself. We had a server crash caused by software conflicts between projects on a single physical server. After nearly 2 weeks of work from the I.T. department moving projects around and re-installing software in production environments, everything was back up and I had my justification.

I spoke with the I.T. persons who were involved in the cleanup from the crash and got an estimate of how much time they spent on it. I talked to project managers for the affected projects and got an estimate of the time and revenue lost. I asked about replacement hardware and new hardware costs associated with the crash and fix.

I took all of the information I had gathered and compiled it into a document that very clearly outlined how much money we had spent, how much we had lost, the risk to similar projects in the future, and how much the virtualization setup I was requesting would cost. The $75,000 I wanted to spend would save us a lot of money very quickly, as the company continued to grow.

After a few days of gathering and documenting everything, the CIO signed the purchase order. The CEO was out that day but signed it as soon as he got back, on the advisement of the CIO. I had my servers in the middle of a budget freeze!

Lesson Learned: Speak Like Your Audience

In the end, I was able to justify $75,000 during a budget freeze, because I took the time to research and write a document that outlined the pain points that the CIO and CEO cared about: lost revenue, lost developer time, unnecessary money spent, etc.

Everyone talks about how you need to know your audience when doing public speaking, teaching, training, etc. But the same holds true in any job as well. Whether it’s a CEO, a customer in your store or whoever it is, you need to know how your audience thinks and what makes them tick. You need to understand how they see pain, what pain they currently see, and how you can address that pain with your proposed solutions. If you can do this, there’s a good chance you’ll be able to get the things you need.

– Derick

     Related Stories 
Categories: Blogs

Killing 7 Impediments in One Blow

Agile Tools - Fri, 09/19/2014 - 08:13

Have you heard the story of the Brave Little Tailor? Here’s a refresher:

So one day this little guy kills 7 flies with one mighty blow. He crafts for himself a belt with “7 in One Blow” sewn into it. He then proceeds through various feats of cleverness to intimidate or subdue giants, soldiers, kings and princesses. Each one, in their own ignorance, misinterpreting what “7 in One Blow” actually refers to. It’s a classic for a number of reasons:

  1. It’s a story about miscommunication: not a single adversary has the wit to ask just what he means by killing “7 in one blow.”
  2. It’s also a story about using one’s cleverness to achieve great things. You have to love the ingenuity of the little guy as he makes his way adroitly past each obstacle.
  3. It’s a story about blowing things way out of proportion. Each of the tailor’s adversaries manages to magnify the capabilities of the tailor to extraordinary, even supernatural levels.

I’m thinking I might have to get a belt like that and wear it around the office. A nice pair of khakis, a button-down shirt, and a big belt with the words “7 in One Blow”. Given how prone we all tend to be to each of the foibles above, I’m sure it would be a riot.
A QA guy might see my belt and say, “Wow! He killed 7 bugs in one blow!”
Maybe a project manager might see it and think, “This guy is so good he finished 7 projects all at once!” Or maybe the HR rep says, “Did he really fire 7 people in one day?” Or the Scrum Master who thinks, “That’s a lot of impediments to clear out at once!”
The point is that we make up these stories all the time. We have stories in our heads about our teammates (“Did you hear about Joe?”), our managers, and their managers. Sometimes it seems as though we all have these distorted visions of each other. And perhaps we do. We need to get better at questioning those stories. We need to cultivate more of a sense of curiosity about the incomplete knowledge we have of each other. That belt would be my reminder. I might have to buy one for each member of my team.
Of course, the other thing the belt can remind us of is to use our own innate cleverness to get what we need. When we are wrestling with corporate challenges, we all too often try to brute-force our problems and obstacles. We need to be a bit more like the Little Tailor and manipulate the world around us with some cleverness. We all have it to one degree or another, and Lord knows we need all the cleverness we can get. Good work is full of challenges, and you don’t want to take them all head on or you will end up like an NFL linebacker – brain damaged. Instead, we need to approach some things with subtlety. There is just as much value in staying out of a problem’s path as there is in tackling it head on. Like the Tailor, we need to recruit others to achieve our objectives.
Finally, we really must stop blowing things out of proportion. Nobody cares about our methodology. You want to know what my favorite kind of pairing is? Lunch! We need to lighten up a bit. Working your way through the dark corporate forest, you can either play with whatever it brings and gracefully dodge the risks, or… you can get stepped on.

Filed under: Agile, Coaching, impediment, Process, Teams Tagged: cleverness, fool, Process, Teams, wit
Categories: Blogs

The Agile Reader – Weekend Edition: 09/19/2014 - Kane Mar - Fri, 09/19/2014 - 06:16

You can get the Weekend Edition delivered directly to you via email by signing up.

The Weekend Edition is a list of some interesting links found on the web to catch up with over the weekend. It is generated automatically, so I can’t vouch for any particular link but I’ve found the results are generally interesting and useful.

  • #RT #AUTHORRT #scrum that works ** ** #agile #project #productowner
  • Even better – communicating while drawing! #Scrum #Agile
  • #Agile – How does Planning Poker work in Agile? –
  • Scrum Expert: Increasing Velocity in a Regulated Environment #agile #scrum
  • RT @yochum: Scrum Expert: Increasing Velocity in a Regulated Environment #agile #scrum
  • #Meetings! Huh! What are they good for? #scrum
  • @Dell is hiring #SDET, #Austin, TX #Iwork4Dell #ecommerce #agile #Scrum
    #dotnet #Automation #testing @CareersAtDell
  • Medical Mutual #IT #Job: Agile Scrum Master – Project Manager 14-266 (#Strongsville, OH) #Jobs
  • Has #Scrum Killed the Business Analyst? #scrumrocks #agile #yrustilldoingwaterfall
  • Scrum Master at Agile (Atlanta, GA): : Our client is focused on building a platform and related… #ATL
  • How to Plan an Agile Sprint Meeting? –
  • Agile Scrum isn’t a silver bullet solution for s/w development, but it can be a big help. #AppsTrans #HP
  • Now hiring for: Scrum Master in Gainesville, FL #job #agile #mindtree
  • Scaling Agile Your Way: SAFe vs. MAXOS (Part 2 of 4) #agile #scrum
  • RT @yochum: Scaling Agile Your Way: SAFe vs. MAXOS (Part 2 of 4) #agile #scrum
  • Agile Scrum #Master needed in #SanFrancisco, apply now at #Accenture! #job
  • RT @SpitFire_: How to Attract #Agile Development Talent @appvance #Tech #Scrum @lgoncalves1979 @kevinsurace #TechJo…
  • FREE SCRUM EBOOK based on the AMAZON BESTSELLER: #scrum #agile inspired by #kschwaber
  • You get more value from periodic “lessons learned” events rather than a big one at the end #agile #scrum #PMI
  • RT @tirrellpayton: You get more value from periodic “lessons learned” events rather than a big one at the end #agile…
  • A Quick, Effective Swarming Exercise for Scrum Development Teams #agile #projectmanagement
  • RT @yochum: Agile Tools: The Grumpy Scrum Master #agile #scrum
  • +1 The #agile mindset: It’s time to change our thinking, not #Scrum #agile #scrum (via @sdtimes)
  • Agile by McKnight, Scrum by Day is out! Stories via @dinwal @StratacticalCo
  • SCRUM EBOOK #Scrum #Agile inspired by #Ken Schwaber
  • RT @MRGottschalk: “Think Scrum is Only for Developers? Think Again.” by @MRGottschalk on @LinkedIn #Scrum #Agile
  • RT @MichaelNir: #RT #AUTHORRT #scrum that works ** ** #agile #project #productowner
  • Want to know more about Agile? Sign up to our free workshop #Scrum #agile
  • RT @boostagile: Want to know more about Agile? Sign up to our free workshop #Scrum #agile
  • Does your agile process look like this? via @joescii
  • #jobs4u #jobs Scrum Master / Agile Coach #RVA #richmond #VA
  • Why do we “think” we “need” estimates? Its worth thinking about. #agile #scrum #kanban #NoEstimates
  • Check it out – FREE SCRUM EBOOK: #Scrum #Agile inspired by #KenSchwaber
Categories: Blogs

    Scaling Agile Your Way: SAFe vs. MAXOS (Part 2 of 4)

    Agile Management Blog - VersionOne - Thu, 09/18/2014 - 23:03

    In Part 1 of this four-part blog series, I explained why a cookie-cutter approach will not work as you undertake large-scale agile initiatives.  Each agile project developing a product or a solution has a unique context:  assumptions, situation, team members and their skill sets, organizational structure, management understanding and support, maturity of practices, challenges, culture, etc.

    In Part 1, I proposed a fairly comprehensive list of 25 scaling agile parameters classified into six scaling aspects:

    1.  Teams
    2.  Customers/Users
    3.  Agile Methods and Environments
    4.  Product/Solution
    5.  Complexity
    6.  Value Chain (see Tables 1 through 4 of Part 1 of this blog series)

    Each scaling agile parameter can assume one or more values from a range of values.  This comprehensive (but by no means complete or exhaustive) list of 25 scaling agile parameters suggests that the agile scaling space is complex and rich with many choices.  Each organization or large-scale project is likely to select a different combination of these 25 scaling agile parameters that are relevant; moreover, the value or range of values for each scaling agile parameter for a project or an organization is likely to be unique.  However, in Part 1, I also clarified that “Scaling Agile Your Way” does not imply a license to optimize at the subsystem levels (teams or programs) at the expense of overall system-level optimization (portfolios and enterprise).  Systems thinking is important for Scaling Agile Your Way.

    In Part 1, I presented a brief overview of various popular Agile and Lean scaling frameworks: Scaled Agile Framework® (SAFe™), LeSS, DAD and MAXOS.  Although there are differences among SAFe, LeSS and DAD, they all are radically different from MAXOS.  In this part of the series, I will compare and contrast SAFe vs. MAXOS in some depth.

    Briefly, here are the key highlights of SAFe.  Details can be found at Scaled Agile Framework:

    • SAFe requires a “Portfolio, Program and Teams” hierarchy for a large-scale agile project.
    • Each team must be a cross-functional Scrum team and may follow many XP practices.
    • Epics at portfolio levels are managed as a Lean/Kanban flow.  Epics are broken down into features that can be completed in a single release cycle at the program level; each feature is broken down into stories that can be completed in a single sprint at the team level.
    • All teams in a release train of a program must follow the same lock-step sprint cadence (typically two weeks).
    • Release train planning requires all team members (typically up to 150) from all teams in a program to hold two-day-long release planning meetings in person, which entails a substantial effort and complex logistics.
    • Release cycles are typically eight to 12 weeks long.
    • Software is developed on sprint cadence, and released on demand (but cannot be released faster than the sprint cadence).
    • Considerable time and effort is spent in various ceremonies:  sprint planning, sprint review and sprint retrospectives, release train planning across multiple teams, release review and release retrospectives, etc.

    Briefly, here are the highlights of MAXOS.  Details can be found in Andy Singleton’s Agile 2014 presentation.

    • MAXOS is the scaling approach for “Continuous Agile.”  Continuous Agile combines Kanban Agile task management with continuous delivery code management. 
    • MAXOS requires a number of (almost) independent service teams.
    • Services have well-defined APIs, are loosely coupled, and have minimal dependencies among them.
    • Each team operates with Lean flow.  Applications are rapidly composed of a group or a network of services.
    • Each team is developer-centric (not cross-functional) and highly empowered.
    • Code is continually integrated in a single code base across all teams.
    • Code is continually tested with automated tests (unit, acceptance, regression, etc.) by firing off as many virtual machines as needed in a cloud-based environment.
    • Any dependency issues across teams are immediately resolved via rapid team-to-team communication.  For rapid team-to-team or member-to-member communication, tool support is essential. VersionOne provides excellent communication and collaboration among team members.
    • The typical ratio of developers to testers tends to be 10:1, as teams are developer-centric and developers do most automated testing.   There are no separate QA teams.  QA testers are called as needed for their expertise by developer-centric teams.
    • Each empowered, developer-centric team decides when to release its code (not decided by QA testers or product managers!).  MAXOS claims that this policy rationally aligns the interests of developers with consequences of their release decision; poorly written, poorly reviewed, or inadequately tested code may mean “no weekends” or “No Friday evening beer” for developers!
    • All features or stories have switches (togglers) that the product owner (called story owner) decides to turn on (unblock) or turn off (block) based on the market needs.
    • Code released in production is extensively supported by automated user feedback collection, measurements and analysis that result in actionable reports for product management.
    • Automated feedback from production environment is also used directly by developers to immediately fix problems.
    • Meeting time is minimized by “automating away” management meetings, and removing or reducing other Scrum ceremonies.  For example, sprint and release retrospectives are replaced by periodic “Happiness Surveys” and taking actions based on those surveys.

    Because of these fundamental differences between SAFe and MAXOS, they represent radically different approaches to scaling agile. The contrast between SAFe and MAXOS is breathtaking, and its implications are worth understanding.  Tables 5-10 present the differences between SAFe and MAXOS from the perspective of 25 scaling agile parameters covered in Tables 1-4 of Part 1.

    These six tables (Tables 5-10 below) follow a specific color legend distinguishing each framework’s Sweet Spot, Challenge Zone and Unfit Zone. [Tables 5-10 and the color legend are not reproduced in this feed.]
    Are your agile projects closer to the SAFe Sweet Spot or the MAXOS Sweet Spot? 

    Or are your projects closer to the SAFe Challenge Zone or the MAXOS Challenge Zone?  Or are you in a situation where neither SAFe nor MAXOS will serve your unique agile scaling needs?  If you are exploring the use of the LeSS or DAD framework, I would encourage you to use the list of 25 scaling agile parameters to identify the Sweet Spot, Challenge Zone and Unfit Zone for LeSS or DAD (as I have done in Tables 5-10 for SAFe and MAXOS). Then determine if your projects are closer to the Sweet Spot or the Challenge Zone of LeSS or DAD.

    I would love to hear from you either in the comments below, by email, or on Twitter @smthatte.

    Related posts:

    Part 1: Scaling Agile Your Way: Agile Scaling Aspects and Context Matter Greatly

    Stay tuned for these future parts of this series:

    Part 3: Scaling Agile Your Way – Sweet Spot, Challenge Zone and Unfit Zone for SAFe and MAXOS

    Part 4: Scaling Agile Your Way – How to develop and implement your custom approach

    Categories: Companies

    iOS 8: The biggest iOS release ever

    Derick Bailey - new ThoughtStream - Thu, 09/18/2014 - 20:22

    I updated my iPad to iOS 8 yesterday.

    IOS8 big release

    (and, yes… I drew this on my iPad)

         Related Stories 
    Categories: Blogs

    Increasing Velocity in a Regulated Environment

    Scrum Expert - Thu, 09/18/2014 - 20:10
    In regulated industries like health care you have to comply with standard operating procedures, heaps of paperwork and frequent audits. Do these requirements conflict with the core tenets of Agile? How do you increase velocity in such regulated environments? This presentation explains how PHT Corporation overcame these constraints and got to Agile. The presenter draws a picture of PHT’s operating environment and the applicable regulations for software development, demonstrates how the required documentation can become a byproduct of the everyday work of a Scrum team and describes changes he ...
    Categories: Communities

    Management Innovation is at the Top of the Innovation Stack

    J.D. Meier's Blog - Thu, 09/18/2014 - 18:13

    Management Innovation is at the top of the Innovation Stack.  

    The Innovation Stack includes the following layers:

    1. Management Innovation
    2. Strategic Innovation
    3. Product Innovation
    4. Operational Innovation

    While there is value in all of the layers, some layers of the Innovation Stack are more valuable than others in terms of overall impact.  I wrote a post that walks through each of the layers in the Innovation Stack.

    I think it’s often a surprise for people that Product or Service Innovation is not at the top of the stack.   Many people assume that if you figure out the ultimate product, then victory is yours.

    History shows that’s not the case, and that Management Innovation is actually where you create a breeding ground for ideas and people to flourish.

    Management Innovation is all about new ways of mobilizing talent, allocating resources, and building strategies.

    If you want to build an extremely competitive advantage, then build a Management Innovation advantage.  Management Innovation advantages are tough to copy or replicate.

    If you’ve followed my blog, you know that I’m a fan of extreme effectiveness.   When it comes to innovation, I’ve had the privilege and pleasure of playing a role in lots of types of innovation over the years at Microsoft.   If I look back, the most significant impact has always been in the area of Management Innovation.

    It’s the trump card.

    Categories: Blogs

    Agile Practitioners Conference, Kuala Lumpur, Malaysia, October 2-3 2014

    Scrum Expert - Thu, 09/18/2014 - 17:46
    Agile Practitioners Conference Malaysia is a two-day conference focused on Agile, Scrum, Lean, Kanban and UX that takes place in Kuala Lumpur. In the agenda you can find topics like “(The Rise and Fail of) Fake Agile”, “Describing the Elephant in the Room: User Experience (UX)”, “Overcoming Scrum Implementation Challenges in the Asian Business Environment” and “Agile Requirements: From Planning to Execution”. Web site: Location for the 2014 conference: KH Tower, Jalan Punchak, Off Jalan P.Ramlee, 50250 Kuala Lumpur, Malaysia.
    Categories: Communities

    STARWEST, Anaheim, USA, October 12-17 2014

    Scrum Expert - Thu, 09/18/2014 - 17:41
    STARWEST is a six-day software event and conference that features pre-conference training, in-depth half- and full-day tutorials and conference sessions covering major software testing issues and solutions. In the agenda you can find topics like “Real-World Software Testing with Microsoft Visual Studio”, “A Rapid Introduction to Rapid Software Testing”, “Test Automation Patterns: Issues and Solutions”, “Testing Ajax and Mobile Apps with Agile Test Management and Tools”, “A Tester’s Guide to Collaborating with Product Owners”, “Agile Development and Testing in a Regulated”, “Test Improvement in Our Rapidly Changing World”, “The Role of ...
    Categories: Communities

    Agile Advice Book Update

    Learn more about our Scrum and Agile training sessions on

    Well, last spring I announced that I was going to be publishing a collection of the best Agile Advice articles in a book.  I managed to get an ISBN number, got a great cover page design, and so it is almost done.  I’m still trying to figure out how to build an index… any suggestions would be welcome!!!  But… I’m hoping to get it published on iBooks and Amazon in the next month or two.  Let me know if you have any feedback on “must-have” Agile Advice articles – there’s still time to add / edit the contents.

    There are six major sections to the book:

    1. Basics and Foundations
    2. Applications and Variations
    3. Agile and Other Systems
    4. For Managers and Executives
    5. Bonus Chapters
    6. Agile Methods Quick Reference and Selection Guide

    The book will also have a small collection of 3 in-depth articles that have never been published here on Agile Advice (and never will be).  The three special articles are:

    1. Agile Mining at a Large Canadian Oil Sands Company
    2. Crossing the Knowing-Doing Gap
    3. Becoming a Professional Software Developer

    Again, any feedback on tools or techniques for creating a quick index section on a book would be great.  I’m using LibreOffice for my word processor on a Mac.  I’m cool with command-line tools if there’s something good!


    Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!
    Categories: Blogs

    Who owns the meetings in Scrum?

    Growing Agile - Thu, 09/18/2014 - 14:31

    We are coaching a new team to use Scrum, and a question has popped up about who owns the various meetings in Scrum. Many people think that because the ScrumMaster is responsible for the process, they own all the Scrum meetings. If that’s you, just pause and go on a thought journey with me.

    One of our favourite sayings is:

    You do it, you own it

    The ScrumMaster is a facilitator in the majority of Scrum meetings – but does that make them an owner? If the ScrumMaster is the one scheduling all the meetings, taking notes, etc., then they own the meeting. But Scrum is not the ScrumMaster’s process. They are merely there to guide and coach the teams and Product Owners. Let me explain each meeting.

    Backlog Grooming (or Backlog Refinement)

    The outcome of this meeting is that the team has a better understanding of the requirements and that stories are sized. The person who needs the estimates, and who will be explaining the requirements, is the Product Owner. The ScrumMaster ensures everyone is on the same page, checks the time box and perhaps structures the meeting – this is a facilitative role. The ScrumMaster should not be contributing to the content of the meeting.

    Sprint Planning Part 1

    This meeting is to check understanding and make a commitment for the sprint. Once again, the Product Owner is really interested in this outcome. The ScrumMaster is a facilitator.

    Sprint Planning Part 2

    This meeting is for the team to design and chat about how they are going to do each story. This meeting is for the team. The ScrumMaster can facilitate this, or they can ask the team to facilitate this. However, unless someone in the team has facilitation experience, I would suggest this be the ScrumMaster.

    Daily Scrum

    This is for the team to check in with each other and their commitment. So the team owns it. The ScrumMaster is there as a facilitator only, and should occasionally let the team do their own facilitation.

    Sprint Review

    This meeting is about the product/project and where you are as a team with the release. The Product Owner cares about this and therefore owns this meeting. The ScrumMaster facilitates. And remember facilitation means not adding any content!

    Sprint Retrospective

    This meeting is about the process of the team. It is for the team and should be owned by them. The ScrumMaster is the impartial facilitator.


    Should the facilitator or owner for any meeting not be available (maybe they are off sick or on training) then decide at the start of the meeting who will play which role. It is not a good idea for a facilitator (who should be impartial) to own and run a meeting.

    My personal preference is to have the owner of the meeting schedule the meeting and NOT the ScrumMaster. Let the people who need the meeting own it. If you’re a ScrumMaster and you currently schedule all the meetings, explain why you don’t want to do this anymore and ask someone else to schedule the meetings important to them. Remember “You do it, you own it!”.

    Categories: Companies

    Full width iOS Today Extension

    Xebia Blog - Thu, 09/18/2014 - 11:57

    Apple decided that the new Today Extensions of iOS 8 should not be the full width of the notification center. They state in their documentation:

    Because space in the Today view is limited and the expected user experience is quick and focused, you shouldn’t create a widget that's too big by default. On both platforms, a widget must fit within the width of the Today view, but it can increase in height to display more content.


    This means that developers who create Today Extensions can only use a width of 273 points instead of the full 320 points (for iPhones before the iPhone 6) and are left with an offset of the remaining 47 points. Yet with the release of iOS 8, several apps like Dropbox and Evernote do seem to have a Today Extension that uses the full width. This raises the question whether Apple noticed this and how it came through the approval process. Does Apple not care?

    Should you want to create a Today Extension with the full width yourself as well, here is how to do it (in Swift):

    override func viewWillAppear(animated: Bool) {
        super.viewWillAppear(animated)
        if let superview = view.superview {
            var frame = superview.frame
            frame = CGRectMake(0, CGRectGetMinY(frame), CGRectGetWidth(frame) + CGRectGetMinX(frame), CGRectGetHeight(frame))
            superview.frame = frame
        }
    }

    This changes the superview (the Today view) of your Today Extension's view. It doesn't use any private APIs, but Apple might reject it for not following their rules. So think carefully before you use it.

    Categories: Companies

    Compromises in setting up teams in a scaled framework

    Scrum 4 You - Thu, 09/18/2014 - 07:16

    Taking the first steps in setting up a scaled framework with agile teams in a complex environment does not always allow you to set up teams by the book. Rather than dragging people away from their current activities and having everybody dislike agile before we have even really started using and living it, I try to make compromises.

    By working in very short sprints of one week, I allow team members to switch between teams at the start of each new sprint. For that one sprint, however, they commit to being fully engaged in the sprint backlog and team activities of one single team. Product Owners can plan with the team members’ knowledge for some sprints. The team can find out, e.g. every other sprint, what they are capable of on their own, build confidence, and see where they still need more knowledge transfer and help. Multi-skilled team members can learn step by step to let go of their “baby” while staying engaged in that part of the product.

    An alternative in the same situation might be:
    If there is a huge overlap in the skills needed for the two teams or parts of the product, try merging them into one team. Build rather large teams in the beginning, covering two backlogs (it should not be more). Creating a team backlog rather than a product backlog lets members learn about the other parts of the product just by being in the team, and does not force team members to leave one or the other part of the product without their experience and knowledge.
    Forcing the teams into a decision for one part or the other might lead to the opposite: double work because only one part is made transparent, or frustrated “left alone” teams and multi-skilled team members with feelings of guilt.

    What else have you tried?

    Related posts:

    1. Sprint Planning with geographically dispersed teams located in different timezones
    2. Scrum Teams – No Part Time!
    3. How internationally distributed Teams can improve their Sprint Planning 2

    Categories: Blogs

    The Grumpy Scrum Master

    Agile Tools - Thu, 09/18/2014 - 06:54


    “Going against men, I have heard at times a deep harmony
    thrumming in the mixture, and when they ask me what
    I say I don’t know. It is not the only or the easiest
    way to come to the truth. It is one way.” – Wendell Berry

    I looked in the mirror the other day and guess what I saw? The grumpy scrum master. He comes by sometimes and pays me a visit. Old grumpy looked at me and I looked at him and together we agreed that perhaps, just this one time, he just might be right.

    We sat down and had a talk. It turns out he’s tired and cranky and seen this all before. I told him I can relate. We agreed that we’ve both done enough stupid to last a couple of lifetimes. No arguments there. He knows what he doesn’t like – me too! After a little debate, we both agreed we don’t give a damn what you think.

    So we decided it was time to write a manifesto.

    We grumps have come to value:

    Speaking our mind over listening to whiners

    Working hard over talking about it

     Getting shit done over following a plan

    Disagreeing with you over getting along

    That is, while the items on the right are a total waste of time, the stuff on the left is much more gratifying.


    Filed under: Coaching, Humor, Scrum Tagged: bad attitude, grumpy, Humor, Scrum, Scrum Master
    Categories: Blogs

    Become high performing. By being happy.  

    Xebia Blog - Thu, 09/18/2014 - 04:59

    The summer holidays are over. Fall is coming. Like the start of every new year, a good moment for new inspiration.

    Recently, I went twice to the Boston area for a client of Xebia. There I met (I dislike the word “assessments”...) a number of experienced Scrum teams. They had an excellent understanding of Scrum, but were not able to convert this into an excellent performance. Actually, they were somewhat frustrated and their performance was slightly going down.

    So, they were great teams with great team members, their agile processes were running smoothly, but still there was not a single winning team. Which left, in my opinion, only one option: a lack of spirit. Spirit is the fertilizer of Scrum and, actually, of every framework, methodology and innovation. But how do you boost the spirit?

    Until a few years ago, I would “just” organize teambuilding sessions to boost this, in parallel with fixing/escalating the root causes. Noble, but far from effective. It’s much more about mindset and happiness, and about taking your own responsibility there. Let me explain this a little bit more here.

    These are definitely awkward times. Terrible wars and epidemics we can’t turn our backs on anymore, an economic system that hardly survives, and a more and more accelerating, highly demanding society. In all of which we have to find “time” for our friends, family, ourselves and our job or study. The last ones are essential to regain balance in a more and more depressing world. But how?

    One of the most important building blocks of the agile mindset and life is happiness. Happiness is the fuel of acceleration and success. But what is happiness? Happiness is the ultimate moment when you’re not thinking, but enjoying the moment and forgetting the world around you. For example, craftsmanship will do this to you: losing track of time while exercising the craft you love.

    But too often we prevent ourselves from being happy. Why should I be happy in this crazy world? With this mentality you’re kept hostage by your worrying mind and ignore the ultimate state you were born in: pure, happy, ready to explore the world (and make mistakes!). It’s not a bad thing to be egocentric sometimes and switch off your dominant mind now and then. Every human being has the state of mind and ability to do this. But we do it too rarely.

    On the other hand, it’s also not a bad thing to be angry, frightened or sad sometimes. These emotions will help you enjoy your moments of happiness more. But often your mind will resist these emotions. They are perceived as a sign of weakness or as a negative thing you should avoid. A wrong assumption. The harder you try to resist these emotions, the longer they will stay in your system and prevent you from being happy.

    Being aware of the mechanisms I’ve explained above, you’ll be happier, more productive and better company for your family, friends and colleagues. Parties will no longer be a forced way of trying to create happiness, but a celebration of personal responsibility, success and happiness.

    Categories: Companies

    How to create an Agile Burn-Up Graph in Google Docs - Kane Mar - Wed, 09/17/2014 - 22:01

    A Burn-Up graph is simply a stack graph showing the total amount of work the team has in their product backlog over a number of Sprints. I’ve used a variety of different Agile Burn-Up graphs over the years. Here’s one of my favourites:


    Agile Burn-Up Graph



    I created this with Excel while working with an insurance company based in Mayfield, Ohio. In this article I’ll show you how to create something similar using Google Docs.

    Understanding the Burn-up Graph

    This graph (above) shows the total amount of work in the product backlog (top line of the graph), the amount of work completed (yellow) and the amount of work remaining (red and blue). The amount of work remaining is divided into estimated work (red) and un-estimated work (blue), which we guessed at using a very coarse scale. At the start you can see the total amount of work on the backlog increase until the fourth Sprint, as indicated by the rising top line of the graph.

    After the fourth Sprint the team decided that they needed to start breaking down the un-estimated work into small User Stories and so you can see an increase in the red area of the graph and a decline in the blue. We continued to complete work, so the yellow area continued to grow.

    By Sprint 12 we had completely broken down all the large bodies of work and had a well refined backlog.

    Creating the Graph in Google Spreadsheets

    The Google graph that I’ve created is a little bit simpler than the graph above. It shows the total amount of work in the product, the total amount of work added to the product backlog, and the total amount of work completed. You can get the Google Spreadsheet document to create this graph here.

    This is what it looks like:


    Agile Product Burn-up Graph



    The spreadsheet contains two tabs. The first tab contains the data necessary for the graph, and the second tab contains the graph. To start using this graph,

    1. Make a copy of the Google Spreadsheet
    2. Enter the total of the team’s estimates in the product backlog into the first column of Series A.
    3. Thereafter, all you need to record is the total number of the team’s estimates completed at the end of each Sprint, and
    4. the total number of the team’s estimates added to the Product Backlog (by the Product Owner) during the sprint.
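    The spreadsheet is doing nothing more than cumulative bookkeeping. As a rough sketch of the same arithmetic (all numbers below are made up purely for illustration), the series behind the graph can be computed like this in Python:

```python
# Burn-up bookkeeping sketch: scope and completed work accumulate per sprint.
initial_backlog = 100                    # team's estimates in the backlog at the start
completed_per_sprint = [8, 10, 12, 9]    # estimates completed each sprint
added_per_sprint = [5, 0, 7, 3]          # estimates added by the Product Owner each sprint

total_scope = []   # top line of the graph
total_done = []    # completed area
scope, done = initial_backlog, 0
for completed, added in zip(completed_per_sprint, added_per_sprint):
    scope += added
    done += completed
    total_scope.append(scope)
    total_done.append(done)

remaining = [s - d for s, d in zip(total_scope, total_done)]
print(total_scope)  # [105, 105, 112, 115]
print(total_done)   # [8, 18, 30, 39]
print(remaining)    # [97, 87, 82, 76]
```

    Charting total_done and remaining as stacked areas reproduces the burn-up shape: the top line rises whenever the Product Owner adds work, while the completed area climbs toward it.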


    Product Burn-up Graph Google Spreadsheet



    You can get the Google Spreadsheet document to create this graph here.

    Categories: Blogs

    Container Usage Guidelines

    Jimmy Bogard - Wed, 09/17/2014 - 21:25

    Over the years, I’ve used and abused IoC containers. While the different tools have come and gone, I’ve settled on a set of guidelines on using containers effectively. As a big fan of the Framework Design Guidelines book and its style of “DO/CONSIDER/AVOID/DON’T”, I tried to capture what has made me successful with containers over the years in a series of guidelines below.

    Container configuration

    Container configuration typically occurs once at the beginning of the lifecycle of an AppDomain: creating an instance of a container as the composition root of the application, and configuring any framework-specific service locators. StructureMap combines scanning for convention-based registration with Registries for component-specific configuration.

    X AVOID scanning an assembly more than once.

    Scanning is somewhat expensive, as scanning involves passing each type in an assembly through each convention. A typical use of scanning is to target one or more assemblies, find all custom Registries, and apply conventions. Conventions include generics rules, matching common naming conventions (IFoo to Foo) and applying custom conventions. A typical root configuration would be:

    var container = new Container(cfg =>
        cfg.Scan(scan => {
            scan.TheCallingAssembly();
            scan.LookForRegistries();
        }));
    Component-specific configuration is then separated out into individual Registry objects, instead of mixed with scanning. Although it is possible to perform both scanning and component configuration in one step, separating component-specific registration in individual registries provides a better separation of conventions and configuration.

    √ DO separate configuration concerning different components or concerns into different Registry classes.

    Individual Registry classes contain component-specific registration. Prefer smaller, targeted Registries, organized around function, scope, component etc. All container configuration for a single 3rd-party component organized into a single Registry makes it easy to view and modify all configuration for that one component:

    public class NHibernateRegistry : Registry {
        public NHibernateRegistry() {
            For<Configuration>().Singleton().Use(c => new ConfigurationFactory().CreateConfiguration());
            For<ISessionFactory>().Singleton().Use(c => c.GetInstance<Configuration>().BuildSessionFactory());
            For<ISession>().Use(c => {
                var sessionFactory = c.GetInstance<ISessionFactory>();
                var orgInterceptor = new OrganizationInterceptor(c.GetInstance<IUserContext>());
                return sessionFactory.OpenSession(orgInterceptor);
            });
        }
    }
    X DO NOT use the static API for configuring or resolving.

    Although StructureMap exposes a static API in the ObjectFactory class, it is considered obsolete. If a static instance of a composition root is needed for 3rd-party libraries, create a static instance of the composition root Container in application code.

    √ DO use the instance-based API for configuring.

    Instead of using ObjectFactory.Initialize and exposing ObjectFactory.Instance, create a Container instance directly. The consuming application is responsible for determining the lifecycle/configuration timing and exposing container creation/configuration as an explicit function allows the consuming runtime to determine these (for example, in a web application vs. integration tests).

    X DO NOT create a separate project solely for dependency resolution and configuration.

    Container configuration belongs in applications requiring those dependencies. Avoid convoluted project reference hierarchies (i.e., a “DependencyResolution” project). Instead, organize container configuration inside the projects needing them, and defer additional project creation until multiple deployed applications need shared, common configuration.

    √ DO include a Registry in each assembly that needs dependencies configured.

    In the case where multiple deployed applications share a common project, include inside that project container configuration for components specific to that project. If the shared project requires convention scanning, then a single Registry local to that project should perform the scanning of itself and any dependent assemblies.

    X AVOID loading assemblies by name to configure.

    Scanning allows adding assemblies by name, e.g. scan.Assembly("MyAssembly"). Since assembly names can change, reference a specific type in that assembly instead to have it registered.

    Lifecycle configuration

    Most containers allow defining the lifecycle of components, and StructureMap is no exception. Lifecycles determine how StructureMap manages instances of components. By default, instances within a single request are shared. Ideally, only singleton instances and per-request instances should be needed. There are cases where a custom lifecycle is necessary, for example to scope a component to a given HTTP request (HttpContext).

    √ DO use the container to configure component lifecycle.

    Avoid creating custom factories or builder methods for component lifecycles. Your custom factory for building a singleton component is probably broken, and lifecycles in containers have undergone extensive testing and usage over many years. Additionally, building factories solely for controlling lifecycles leaks implementation and environment concerns to services consuming lifecycle-controlled components. In the case where instantiation needs to be deferred or lifecycle needs to be explicitly managed (for example, instantiating in a using block), depending on a Func<IService> or an abstract factory is appropriate.

    √ CONSIDER using child containers for per-request instances instead of HttpContext or similar scopes.

    Child/nested containers inherit configuration from a root container, and many modern application frameworks include the concept of creating scopes for requests. Web API in particular creates a dependency scope for each request. Instead of using a lifecycle, individual components can be configured for an individual instance of a child container:

    public IDependencyScope BeginScope() {
        IContainer child = this.Container.GetNestedContainer();
        var session = new ApiContext(child.GetInstance<IDomainEventDispatcher>());
        var resolver = new StructureMapDependencyResolver(child);
        var provider = new ServiceLocatorProvider(() => resolver);
        child.Configure(cfg => {
            cfg.For<ApiContext>().Use(session);
            cfg.For<ServiceLocatorProvider>().Use(provider);
        });
        return resolver;
    }
    Since components configured for a child container are transient for that container, child containers provide a mechanism to create explicit lifecycle scopes configured for that one child container instance. Common applications include creating child containers per integration test, MVVM command handler, web request etc.

    √ DO dispose of child containers.

    Containers contain a Dispose method, so if the underlying service locator extensions do not dispose directly, dispose of the container yourself. Containers, when disposed, will call Dispose on any non-singleton component that implements IDisposable. This will ensure that any resources potentially consumed by components are disposed properly.

    Component design and naming

    Much of the negativity around DI containers arises from their encapsulation of building object graphs. A large, complicated object graph is resolved with single line of code, hiding potentially dozens of disparate underlying services. Common to those new to Domain-Driven Design is the habit of creating interfaces for every small behavior, leading to overly complex designs. These design smells are easy to spot without a container, since building complex object graphs by hand is tedious. DI containers hide this pain, so it is up to the developer to recognize these design smells up front, or avoid them entirely.

    X AVOID deeply nested object graphs.

    Large object graphs are difficult to understand, but easy to create with DI containers. Instead of a strict top-down design, identify cross-cutting concerns and build generic abstractions around them. Procedural code is perfectly acceptable, and many design patterns and refactoring techniques exist to address complicated procedural code. The behavioral design patterns, combined with refactorings for long or complicated code, can be especially helpful. Starting with the Transaction Script pattern keeps the number of structures low until the code exhibits enough design smells to warrant refactoring.

    √ CONSIDER building generic abstractions around concepts, such as IRequestHandler<T>, IValidator<T>.

    When designs do become unwieldy, breaking components down into multiple services often leads to service-itis, where a system contains numerous services, each used in only one context or execution path. Instead, behavioral patterns such as Mediator, Command, Chain of Responsibility and Strategy are especially helpful in creating abstractions around concepts. Common concepts include:

    • Queries
    • Commands
    • Validators
    • Notifications
    • Model binders
    • Filters
    • Search providers
    • PDF document generators
    • REST document readers/writers

    Each of these patterns begins with a common interface:

    public interface IRequestHandler<in TRequest, out TResponse>
        where TRequest : IRequest<TResponse> {
        TResponse Handle(TRequest request);
    }

    public interface IValidator<in T> {
        ValidationResult Validate(T instance);
    }

    public interface ISearcher {
        bool IsMatch(string query);
        IEnumerable<Person> Search(string query);
    }
    Registration for these components involves adding all implementations of an interface, and code using these components request an instance based on a generic parameter or all instances in the case of the chain of responsibility pattern.

    One exception to this rule is for third-party components and external, volatile dependencies.

    √ CONSIDER encapsulating 3rd-party libraries behind adapters or facades.

    While using a 3rd-party dependency does not necessitate building an abstraction for that component, if the component is difficult or impossible to fake/mock in a test, then it is appropriate to create a facade around that component. File system, web services, email, queues and anything else that touches the file system or network are prime targets for abstraction.

    The database layer is a little more subtle, as requests to the database often need to be optimized in isolation from any other request. Switching database/ORM strategies is fairly straightforward, since most ORMs use a common language already (LINQ), but have subtle differences when it comes to optimizing calls. Large projects can switch between major ORMs with relative ease, so any abstraction would limit the use of any one ORM into the least common denominator.

    X DO NOT create interfaces for every service.

    Another common misconception of SOLID design is that every component deserves an interface. DI containers can resolve concrete types without an issue, so there is no technical limitation to depending directly on a concrete type. In the book Growing Object-Oriented Software, Guided by Tests, these components are referred to as Peers, and in Hexagonal Architecture terms, interfaces are reserved for Ports.

    √ DO depend on concrete types when those dependencies are in the same logical layer/tier.

    A side effect of depending directly on concrete types is that it becomes very difficult to over-specify tests. Interfaces are appropriate when there is truly an abstraction to a concept, but if there is no abstraction, no interface is needed.

    X AVOID implementation names whose name is the implemented interface name without the “I”.

    StructureMap’s default conventions do match up IFoo with Foo, and this can be a convenient default behavior, but when you have implementations whose name is the same as their interface without the “I”, that is a symptom that you are arbitrarily creating an interface for every service, when just resolving the concrete service type would be sufficient. In other words, the mere ability to resolve a service type by an interface is not sufficient justification for introducing an interface.

    √ DO name implementation classes based on details of the implementation (AspNetUserContext : IUserContext).

    An easy way to detect excessive abstraction is when class names are directly the interface name without the prefix “I”. An implementation of an interface should describe the implementation. For concept-based interfaces, class names describe the representation of the concept (ChangeNameValidator, NameSearcher etc.). Environment/context-specific implementations are named after that context (WebApiUserContext : IUserContext).

    Dynamic resolution

    While most component resolution occurs at the very top level of a request (controller/presenter), there are occasions when dynamic resolution of a component is necessary. For example, model binding in MVC occurs after a controller is created, making it slightly more difficult to know at controller construction time what the model type is, unless it is assumed using the action parameters. For many extension points in MVC, it is impossible to avoid service location.

    X AVOID using the container for service location directly.

    Ideally, component resolution occurs once in a request, but in the cases where this is not possible, use a framework’s built-in resolution capabilities. In Web API for example, dynamically resolved dependencies should be resolved from the current dependency scope:

    var validationProvider = actionContext
        .Request
        .GetDependencyScope()
        .GetService(typeof(IValidatorProvider)); // IValidatorProvider shown for illustration

    Web API creates a child container per request and caches this scoped container within the request message. If the framework does not provide a scoped instance, store the current container in an appropriately scoped object, such as HttpContext.Items for web requests. Occasionally, you might need to depend on a service but need to explicitly decouple or control its lifecycle. In those cases, containers support depending directly on a Func.

    √ CONSIDER depending on a Func<IService> for late-bound services.

    For cases where known types need to be resolved dynamically, instead of trying to build special caching/resolution services, you can instead depend on a constructor function in the form of a Func. This separates wiring of dependencies from instantiation, allowing client code to have explicit construction without depending directly on a container.

    private readonly Func<IEmailService> _emailServiceProvider;

    public EmailController(Func<IEmailService> emailServiceProvider) {
        _emailServiceProvider = emailServiceProvider;
    }

    public ActionResult SendEmail(string to, string subject, string body) {
        using (var emailService = _emailServiceProvider()) {
            emailService.Send(to, subject, body);
        }
        return View();
    }
    In cases where this becomes complicated, or reflection code is needed, a factory method or delegate type explicitly captures this intent.

    √ DO encapsulate container usage with factory classes when invoking a container is required.

    The Patterns and Practices Common Service Locator defines a delegate type representing the creation of a service locator instance:

    public delegate IServiceLocator ServiceLocatorProvider();

    For code needing dynamic instantiation of a service locator, configuration code creates a dependency definition for this delegate type:

    public IDependencyScope BeginScope() {
        IContainer child = this.Container.GetNestedContainer();
        var resolver = new StructureMapWebApiDependencyResolver(child);
        var provider = new ServiceLocatorProvider(() => resolver);
        child.Configure(cfg => cfg.For<ServiceLocatorProvider>().Use(provider));
        return new StructureMapWebApiDependencyResolver(child);
    }

    This pattern is especially useful if an outer dependency has a longer configured lifecycle (static/singleton) but you need a window of shorter lifecycles. For simple instances of reflection-based component resolution, some containers include automatic facilities for creating factories.

    √ CONSIDER using auto-factory capabilities of the container, if available.

    Auto-factories in StructureMap are available as a separate package, and allow you to create an interface with an automatic implementation:

    public interface IPluginFactory {
        IList<IPlugin> GetPlugins();
    }
    The AutoFactories feature will dynamically create an implementation that defers to the container for instantiating the list of plugins.


    Categories: Blogs

    How to Enable Estimate-Free Development

    Practical Agility - Dave Rooney - Wed, 09/17/2014 - 20:06
    Most of us have been there... the release or sprint planning meeting that goes on and on and on and on. There is constant discussion over what a story means and endless debate over whether it's 3, 5 or 8 points. You're eventually bludgeoned into agreement, or simply too numb to disagree. Any way you look at it, you'll never get those 2, 4 or even 6 hours back - they're gone forever! And to what
    Categories: Blogs

    Continuous Delivery is about removing waste from the Software Delivery Pipeline

    Xebia Blog - Wed, 09/17/2014 - 16:44

    On October the 22nd I will be speaking at the Continuous Delivery and DevOps Conference in Copenhagen where I will share experiences on a very successful implementation of a new website serving about 20.000.000 page views a month.

    Components and content for this site were developed by five(!) different vendors, and for this project the customer took the initiative to work according to DevOps principles and implement a fully automated Software Delivery Process along the way. This was a big win for the project, as development teams could now focus on delivering new software instead of fixing issues within the delivery process itself, and I was the lucky one who got to implement this.

    This blog is about visualizing the 'waste' we addressed within the project; you might find the diagrams handy when communicating Continuous Delivery principles within your own organization.

    To enable yourself to work according to Continuous Delivery principles, an effective starting point is to remove waste from the Software Delivery Process. If you look at a traditional Software Delivery Process you'll find that there are probably many areas in your existing process that do not add any value for the customer at all.

    These areas should be seen as pure waste: they add no value for your customer and cost you time, money, or both, over and over again. Each time new features are developed and pushed to production, many people perform a lot of costly manual work and run into the same issues yet again. The diagram below provides an example of common areas where you might find waste in your existing Software Delivery Pipeline. Imagine this process repeating every time a development team delivers new software. In your own conversations, you might want to use a similar diagram to explain the pain points in your current Software Delivery Process.

    [diagram: a traditional software delivery process]

    Automation of the Software Delivery Process within this project was all about eliminating known waste as much as possible. This meant setting up an Agile project structure and starting to work according to DevOps principles, enabling the team to deliver software on a frequent basis. Next to that, we automated the central build with Jenkins CI, which checks out code from a Git version management system, compiles it using Maven, stores components in Apache Archiva, kicks off static, unit and functional tests covering both the JEE and PHP codebases, and creates Deployment Units for further processing down the line. Deployment automation itself was implemented by introducing XL Deploy. By doing so, every time a developer pushed new JEE or PHP code into Git, freshly baked deployment units were instantly deployed to the target landscape, which in turn was managed by Puppet. An abstract diagram of this approach and the chosen tooling is provided below.
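The essence of such a build chain is a fail-fast sequence of stages: each stage runs in order, and the first failure stops the run so broken code never reaches deployment. Below is a minimal sketch of that control flow in Python; the stage names are illustrative, not the project's actual Jenkins jobs.

```python
def run_pipeline(stages):
    """Run (name, action) pairs in order; stop at the first failure."""
    results = []
    for name, action in stages:
        ok = action()
        results.append((name, ok))
        if not ok:
            break  # fail fast: later stages never run
    return results


# Illustrative stages; each lambda stands in for a real build step.
stages = [
    ("checkout", lambda: True),
    ("compile", lambda: True),
    ("unit-tests", lambda: False),  # a failing stage halts the pipeline
    ("deploy", lambda: True),
]
print(run_pipeline(stages))
# [('checkout', True), ('compile', True), ('unit-tests', False)]
```

Note that "deploy" never appears in the results: gating deployment behind every earlier stage is precisely what makes a push-to-production pipeline safe to automate.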

    [diagram: overview of tooling for automating the software delivery process]

    When paving the way for Continuous Delivery, I often like to refer to this as working on the six A's: setting up Agile (product-focused) delivery teams, Automating the build, Automating tests, Automating deployments, Automating the provisioning layer, and clean, easy-to-handle software Architectures. The A for Architecture is about making sure that the software being delivered actually supports automation of the Software Delivery Process itself and puts the customer in the position to work according to Continuous Delivery principles. This A is not visible in the diagram.

    After automation of the Software Delivery Process, the customer's software development behaved like the optimized process below, giving the team the opportunity to push out a constant, fluent flow of new features to the end user. In your own conversations, you might want to use this diagram to explain the advantages to your organization.

    [diagram: an optimized software delivery process]

    By automating the Software Delivery Pipeline we positioned this customer to go live at the press of a button. And on the go-live date, it was just that: a press of the button, and 5 minutes later the site was completely live, making this the most boring go-live event I've ever experienced. The project itself was really good fun though! :)

    Needless to say, subsequent updates now move into the live state in a matter of minutes, as the whole process has become very reliable; deploying code is simply a non-event. I will happily share more details on how we made this project a success - the environment we implemented, the project setting, the chosen tooling and the technical details - at the Continuous Delivery and DevOps Conference in Copenhagen. But of course you can also contact me directly. For now, I just hope to meet you there.

    Michiel Sens.

    Categories: Companies

    Knowledge Sharing

    SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP and Kanban.