Feed aggregator

How to write an Amazon RDS service broker for Cloud Foundry

Xebia Blog - Mon, 03/23/2015 - 11:58

Cloud Foundry is a wonderful on-premise PaaS that makes it very easy to build and deploy applications while providing scalability and high availability for your stateless applications. But Cloud Foundry is really an application platform service and does not provide high availability and scalability for your data. Fortunately, there is Amazon RDS, which excels at providing exactly that as a service.

In this blog I will show you how easy it is to build, install and use a Cloud Foundry Service Broker for Amazon RDS.  The broker was developed in Node.JS using the Restify framework and can be deployed as a normal Cloud Foundry application. Finally,  I will point you to a skeleton service broker which you can use as the basis for your own.

Cloud Foundry Service Broker Domain

Before I race off into the details of the implementation, I would like to introduce you to the Cloud Foundry lingo. If you already know the lingo, just skip to the section 'AWS RDS Service Broker operations'.

Service - an external resource that can be used by an application. It can be a database, a messaging system or an external application.  Commonly provided services are mysql, postgres, redis and memcached.

Service Plan - a plan specifies the quality of the service and governs the amount of memory, disk space, nodes etc. provided with the service.

Service Catalog - a document containing all services and service plans of a service broker.

Service Broker - a program that is capable of creating services and providing the necessary information to applications to connect to the service.

Now a service broker can provide the following operations:

Describe Services - Show me all the services this broker can provide.

Create Service - Creating an instance of a service matching a specified plan. When the service is a database, it depends on the broker what this means: It may create an entire database server, or just a new database instance, or even just a database schema.   Cloud Foundry calls this 'provisioning a service instance'.

Binding a Service - providing a specific application with the necessary information to connect to an existing service. When the service is a database, it provides the hostname, port, database name, username and password. Depending on the service broker, the broker may even create specific credentials for each bind request/application. The Cloud Controller stores the returned credentials in a JSON document that is exposed to the application as a UNIX environment variable (VCAP_SERVICES); an example is shown after this list.

Unbind service - depending on the service broker, undo what was done on the bind.

Destroy Service - Easy, just deleting what was created. Cloud Foundry calls this 'deprovisioning a service instance'.
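
To give you an idea of what an application sees after a bind, here is a hedged example of the VCAP_SERVICES document for the MySQL service created later in this post. The overall shape (service label, instance name, plan, credentials) is standard Cloud Foundry; the exact credential field names depend on the broker:

{
  "mysql": [
    {
      "name": "mysql-844b1",
      "label": "mysql",
      "plan": "default",
      "credentials": {
        "host": "cfdb-3529e5764.c1ktcm2kjsfu.eu-central-1.rds.amazonaws.com",
        "port": 3306,
        "name": "mydb",
        "username": "root",
        "password": "e1zfMf7OXeq3"
      }
    }
  ]
}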

In the next paragraph I will map these operations to Amazon AWS RDS services.

AWS RDS Service Broker operations

Any service broker has to implement the REST API defined by the Cloud Foundry service broker specification. To create the Amazon AWS RDS service broker, I had to implement four out of the five methods:

  • describe services - returns available services and service plans
  • create service - call the createDBInstance operation and store generated credentials as tags on the instance.
  • bind service - call the describeDBInstances operation and return the stored credentials.
  • delete service - just call the deleteDBInstance operation.

I implemented these REST calls using the Restify framework and the Amazon AWS RDS API for JavaScript. The skeleton looks like this:

// get catalog
server.get('/v2/catalog', function(request, response, next) {
    response.send(config.catalog);
    next();
});

// create service
server.put('/v2/service_instances/:id', function(request, response, next) {
    response.send(501, { 'description' : 'create/provision service not implemented' });
    next();
});

// delete service
server.del('/v2/service_instances/:id', function(req, response, next) {
    response.send(501, { 'description' : 'delete/unprovision service not implemented' });
    next();
});

// bind service
server.put('/v2/service_instances/:instance_id/service_bindings/:id', function(req, response, next) {
    response.send(501, { 'description' : 'bind service not implemented' });
    next();
});

// unbind service
server.del('/v2/service_instances/:instance_id/service_bindings/:id', function(req, response, next) {
    response.send(501, { 'description' : 'unbind service not implemented' });
    next();
});

For the actual implementation of each operation on AWS RDS, I would like to refer you to the source code of aws-rds-service-broker.js on github.com.

Design decisions

That does not look too difficult, does it? Here are some of my design decisions:

Where do I store the credentials?

I store the credentials as tags on the instance. I wanted to create a service broker that was completely stateless, so that I could deploy it in Cloud Foundry itself. I did not want to depend on a complete database for a little bit of information. The tags seemed to fit the purpose.
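
To make this concrete, here is a minimal sketch of the idea, not the actual broker source (which you can find on GitHub): the create operation generates a password, passes it to createDBInstance and attaches it as tags, so the bind operation can later read it back with describeDBInstances and listTagsForResource. The function and variable names here are illustrative only:

var AWS = require('aws-sdk');
var crypto = require('crypto');
var rds = new AWS.RDS();

// hypothetical helper: create the RDS instance and keep the generated
// credentials with it as tags, so the broker itself stays stateless
function createService(instanceId, planParams, callback) {
    var params = planParams;                     // per-plan createDBInstance parameters
    params.DBInstanceIdentifier = instanceId;
    params.MasterUserPassword = crypto.randomBytes(12).toString('hex');
    params.Tags = [
        { Key: 'username', Value: params.MasterUsername },
        { Key: 'password', Value: params.MasterUserPassword }
    ];
    rds.createDBInstance(params, callback);
}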

Why does bind return the same credentials for every bind?

I wanted the bind service to be as simple as possible. I did not want to generate new user accounts and passwords, because if I did, I had even more state to maintain. Even more, I found that if I bind two applications to the same MySQL service, they could see each other's data. So why bother creating users for binds? Finally, making the bind service simple kept the unbind service even simpler, because there is nothing to undo.

How to implement different service plans?

The createDBInstance operation of the AWS RDS API takes a JSON object as input parameter that is basically the equivalent of a plan. I just had to add an appropriate JSON record to the configuration file for each plan. See the description of the params parameter of the createDBInstance operation.
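
For illustration, the plan records could look roughly like this. The parameter names are genuine createDBInstance parameters, but the surrounding structure is an assumption, not necessarily the actual configuration file format of the broker:

{
  "plans": {
    "default": {
      "Engine": "mysql",
      "DBInstanceClass": "db.t2.micro",
      "AllocatedStorage": 5,
      "MultiAZ": false
    },
    "10gb": {
      "Engine": "mysql",
      "DBInstanceClass": "db.m3.medium",
      "AllocatedStorage": 10,
      "MultiAZ": true
    }
  }
}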

How do I create an AWS RDS service within 60 seconds?

Well, I don't. The service broker API states that you have to create a service within the timeout of the Cloud Controller (which is 60 seconds), but RDS takes a wee bit more time. So the create request is initiated within seconds, but it may take a few minutes before you can bind an application to it. Nothing I can do about that.
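
In practice that means the bind operation has to fail gracefully until RDS has assigned an endpoint to the instance. A hedged sketch of that check (instanceId and buildCredentials are illustrative names, not taken from the actual source):

rds.describeDBInstances({ DBInstanceIdentifier: instanceId }, function(err, data) {
    if (err) { return next(err); }
    var instance = data.DBInstances[0];
    if (!instance.Endpoint) {
        // this is the error you will see in the usage section further down
        response.send(500, { 'description' : "No endpoint set on the instance '" + instanceId +
            "'. The instance is in state '" + instance.DBInstanceStatus + "'. please retry a few minutes later" });
    } else {
        response.send({ 'credentials' : buildCredentials(instance) });
    }
    next();
});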

Why store the service broker credentials in environment variables?

I want the service broker to be configured at deployment time. When the credentials are in a config file, you need to change the files of the application on each deployment.
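
A minimal sketch of reading that configuration at startup, using the environment variable names from the manifest.yml shown in the installation instructions below (the module layout itself is illustrative):

var AWS = require('aws-sdk');

// configure the AWS SDK from the deployment environment
AWS.config.update({
    accessKeyId:     process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    region:          process.env.AWS_REGION
});

module.exports = {
    dbSubnetGroup: process.env.AWS_DB_SUBNET_GROUP,
    brokerCredentials: {
        username: process.env.SERVICE_BROKER_USERNAME,
        password: process.env.SERVICE_BROKER_PASSWORD
    }
};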

Installation

In these instructions, I presume you have access to an AWS account and you have an installation of Cloud Foundry. I used Stackato, which is a Cloud Foundry implementation by ActiveState, and these instructions assume you are using it too!

  1. Create an AWS IAM user
    First create an AWS IAM user (cf-aws-service-broker) with at least the following privileges
  2. Assign privileges to execute AWS RDS operations
    The newly created IAM user needs the privileges to create RDS databases. I used the following permissions:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
             "rds:AddTagsToResource",
             "rds:CreateDBInstance",
             "rds:DeleteDBInstance",
             "rds:DescribeDBInstances",
             "rds:ListTagsForResource"
          ],
          "Resource": [
             "*"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
             "iam:GetUser"
          ],
          "Resource": [
              "*"
          ]
        }
      ]
    }
    
  3. Generate AWS access key and secret for the user 'cf-aws-service-broker'
  4. Create a Database Subnet Group
    Create a database subnet group 'stackato-db-subnet-group' in the AWS region where you want the databases to be created.
  5. Check out the service broker
    git clone https://github.com/mvanholsteijn/aws-rds-service-broker
    cd aws-rds-service-broker
    
  6. Add all your parameters as environment variables to the manifest.yml
    applications:
       - name: aws-rds-service-broker
         mem: 256M
         disk: 1024M
         instances: 1
         env:
           AWS_ACCESS_KEY_ID: <fillin>
           AWS_SECRET_ACCESS_KEY: <fillin>
           AWS_REGION: <of db subnet group,eg eu-west-1>
           AWS_DB_SUBNET_GROUP: stackato-db-subnet-group
           SERVICE_BROKER_USERNAME: <fillin>
           SERVICE_BROKER_PASSWORD: <fillin>
         stackato:
           ignores:
             - .git
             - bin
             - node_modules
    
  7. Deploy the service broker
    stackato target <your-service-broker> --skip-ssl-validation
    stackato login
    push
    
  8. Install the service broker
    This script is a cunning implementation which creates the service broker in Cloud Foundry and makes all the plans publicly available. In Stackato we use curl commands against the Cloud Controller API to achieve this; a rough sketch of these calls is shown below the command. This script requires you to have installed jq, the wonderful JSON command line processor by Stephen Dolan.

    bin/install-service-broker.sh
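
    Under the hood, registering a broker and making its plans public boils down to calls against the Cloud Controller v2 API, roughly like the following. This is an illustrative sketch only; the API endpoint, token and plan GUIDs are placeholders and the actual script may do this differently:

    # register the broker with the Cloud Controller
    curl -k https://api.<your-api-endpoint>/v2/service_brokers \
      -X POST -H "Authorization: bearer $TOKEN" \
      -d '{"name": "aws-rds-service-broker",
           "broker_url": "https://aws-rds-service-broker.<your-api-endpoint>",
           "auth_username": "<SERVICE_BROKER_USERNAME>",
           "auth_password": "<SERVICE_BROKER_PASSWORD>"}'

    # then, for every plan GUID listed by /v2/service_plans, make the plan public
    curl -k https://api.<your-api-endpoint>/v2/service_plans/<plan-guid> \
      -X PUT -H "Authorization: bearer $TOKEN" -d '{"public": true}'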
    

Now you can use the service broker!

Using the Service Broker

Now we are ready to use the service broker.

  1. Deploy a sample application
    $ git clone https://github.com/mvanholsteijn/paas-monitor
    $ stackato push -n 
    
  2. Create a service for the mysql services
    $ stackato create-service
    1. filesystem 1.0, by core
    2. mysql
    3. mysql 5.5, by core
    4. postgres
    5. postgresql 9.1, by core
    6. redis 2.8, by core
    7. user-provided
    Which kind to provision:? 2
    1. 10gb: 10Gb HA MySQL database.
    2. default: Small 5Gb non-HA MySQL database
    Please select the service plan to enact:? 2
    Creating new service [mysql-844b1] ... OK
    
  3. Bind the service to the application
    stackato bind-service mysql-844b1 paas-monitor
      Binding mysql-844b1 to paas-monitor ... Error 10001: Service broker error: No endpoint set on the instance 'cfdb-3529e5764'. The instance is in state 'creating'. please retry a few minutes later (500)
    

    retry until the database is actually created (3-10 minutes on AWS)

    stackato bind-service mysql-844b1 paas-monitor
     Binding mysql-844d1 to paas-monitor ...
    Stopping Application [paas-monitor] ... OK
    Starting Application [paas-monitor] ...
    OK
    http://paas-monitor.<your-api-endpoint>/ deployed
    
  4. Check the environment of the application
    curl -s http://paas-monitor.<your-api-endpoint>/environment | jq .DATABASE_URL
    "mysql://root:e1zfMf7OXeq3@cfdb-3529e5764.c1ktcm2kjsfu.eu-central-1.rds.amazonaws.com:3306/mydb"
    

    As you can see, the credentials for the newly created database have been inserted into the environment of the application.

Creating your own service broker

If you want to create your own service broker in Node.JS you may find the Skeleton Service Broker  a good starting point. It includes a number of utilities to test your broker in the bin directory.

  • catalog.sh - calls the catalog operation
  • provision.sh - calls the create operation
  • unprovision.sh - calls the delete operation
  • bind.sh - calls the bind operation on a specified instance
  • unbind.sh - calls the unbind operation on a specified instance and bind id.
  • list.sh - calls the list all service instances operation
  • getenv.sh - gets the environment variables of a CF application as sourceable output
  • install-service-broker.sh - installs the application and makes all plans public.
  • docurl.sh - calls the stackato CURL operation.

getenv.sh, install-service-broker.sh and provision.sh require jq to be installed.

Conclusion

As you can see, it is quite easy to create your own Cloud Foundry service broker!

Categories: Companies

The Status of the Agile Fixed Price

Scrum 4 You - Mon, 03/23/2015 - 08:52

Despite alerts, Twitter & Co., every now and then I still feel the need to trawl the Google search engine for "contracts for agile projects" or "agile fixed price", also known as "agile contracts". In doing so I keep stumbling over the usual suspects, or rather over the same pages that have stood for years like monuments in the expanses of the net, ringing out their message far and wide: "It's obvious!" I mean websites and blogs that show ways of designing a contract so as to capture more of the benefits of an agile approach, or to share the risk between customer and supplier. Blog posts such as one by Alistair Cockburn from 2006 admonish the still-unconverted with a list of 15 ways to construct an agile contract, most of the variants described in a terse 4 to 8 lines. So is the topic really that simple, or is it in fact very complex? In my conversations on the subject, however, I increasingly suspect that it is not about the complexity of the topic at all, but rather about practical experience and about not understanding how this new concept could be applied within the existing culture of one's own company.

At least after reading the book "Der Agile Festpreis" (The Agile Fixed Price), many people have understood the basics of the topic. Most also acknowledge that this is an important next step for an agile organisation. What is usually missing is the belief that it really is possible to work this way. In practice this very often shows in the fact that most discussions focus on credibility and applicability. Which companies have already done this? How exactly would you apply it in our situation? We work with internal service providers, so it's a bit different for us anyway, isn't it? The agile fixed price is not actually complex. What is often difficult is the concrete application of the new concept within an existing corporate culture, and gathering the details and the experience needed to introduce this new process.

Culture, though, is a tricky thing. You cannot simply throw a new term around, draw up a nice plan for a reorganisation and then flip the culture switch. No, it has to be lived and led by example, and if the lighthouse built this way spreads its light well, more and more ships will follow its example. That is, after all, an essential part of the agile approach: when you face a complex or unmanageably large task and are not sure how to tackle it, the best thing is simply to start and then inspect your progress.

Hence the appeal to all those who are still battling armies of time-and-material freelancers, or wasting their nerves and their money on the wishful thinking of the fixed-price contract: simply get started, and do not let an "it's obvious" put you off. The concept is clear, but implementing and rolling it out is its own challenge in every company. The agile organisation begins and ends at its interfaces to the outside world. That means every company that wants to become more agile will sooner or later have to deal with its supplier relationships as well. Get that right, and some of the advantages of agile methods can finally be used to their full extent. Adapting purchasing and contracting processes, and living the culture of partnership that goes with them, is an essential aspect of success. Many companies have taken this step in recent years – also with our help – so there is now plenty of experience on which other companies can thoughtfully build.

Categories: Blogs

Constellation Icebreaker - Getting to know you

How do you get a group of folks to quickly learn about each other’s common interests? Consider the Constellation icebreaker technique.  Constellation is used to identify to what degree people from a group agree or disagree with certain statements (which can be based on a belief, idea, or value). 

This icebreaker is an informal way to get people to share a bit about themselves at the beginning of a training session, workshop, or when a new team is forming. It is a non-confrontational way of learning people's opinions on a topic or statement. Within seconds of applying this technique, the participants will clearly tell you what they think of the statement. Here’s how it works:
The set up:  
  • Identify a "center" of the room (or constellation).  This is the Sun.  The location of the Sun represents the highest degree of agreement with the statement.
  • Optionally, use masking or blue painter's tape to create dashed lines around the Sun in 3 feet/1 meter increments away from the Sun.
The activity: 
  • Ask everyone to stand on/around the Sun (aka., the center) - don't crowd too much
  • Speak the statement, e.g., "I love the Red Sox" 
  • Ask folks to place themselves either close to or away from the sun according to how much they agree or disagree with this statement (each person becomes the planet)
  • Once everyone has placed themselves, ask some/many/all of the folks why they have placed themselves where they are

It's a great way to learn about people fairly quickly. Have you tried the Constellation icebreaker before? If so, what do you think? Do you have another icebreaker that you have found valuable for getting folks to learn about each other?
Categories: Blogs

Python: Equivalent to flatMap for flattening an array of arrays

Mark Needham - Mon, 03/23/2015 - 02:45

I found myself wanting to flatten an array of arrays while writing some Python code earlier this afternoon and being lazy my first attempt involved building the flattened array manually:

episodes = [
    {"id": 1, "topics": [1,2,3]},
    {"id": 2, "topics": [4,5,6]}
]
 
flattened_episodes = []
for episode in episodes:
    for topic in episode["topics"]:
        flattened_episodes.append({"id": episode["id"], "topic": topic})
 
for episode in flattened_episodes:
    print episode

If we run that we’ll see this output:

$ python flatten.py
 
{'topic': 1, 'id': 1}
{'topic': 2, 'id': 1}
{'topic': 3, 'id': 1}
{'topic': 4, 'id': 2}
{'topic': 5, 'id': 2}
{'topic': 6, 'id': 2}

What I was really looking for was the Python equivalent to the flatmap function which I learnt can be achieved in Python with a list comprehension like so:

flattened_episodes = [{"id": episode["id"], "topic": topic}
                      for episode in episodes
                      for topic in episode["topics"]]
 
for episode in flattened_episodes:
    print episode

We could also choose to use itertools in which case we’d have the following code:

from itertools import chain, imap
flattened_episodes = chain.from_iterable(
                        imap(lambda episode: [{"id": episode["id"], "topic": topic}
                                             for topic in episode["topics"]],
                             episodes))
for episode in flattened_episodes:
    print episode

We can then simplify this approach a little by wrapping it up in a ‘flatmap’ function:

def flatmap(f, items):
        return chain.from_iterable(imap(f, items))
 
flattened_episodes = flatmap(
    lambda episode: [{"id": episode["id"], "topic": topic} for topic in episode["topics"]], episodes)
 
for episode in flattened_episodes:
    print episode

I think the list comprehensions approach still works but I need to look into itertools more – it looks like it could work well for other list operations.

Categories: Blogs

Estimates or #noestimates… It’s All a Matter of Context

Leading Agile - Mike Cottmeyer - Sun, 03/22/2015 - 19:16

I think I’ve found myself (somewhat accidentally) at the beginning of a series of posts called ‘debates I find useless… let’s move on’. The latest round of discussion that seems to have spiked (at least for me) this week is the whole ‘to estimate or not to estimate’ conversation. The answer to this question clearly falls into the ‘it depends’ category, so if we are having an argument that involves any kind of absolute, we are probably wasting our time.

Even in a domain like commercial, non-governmental, software product development… the one LeadingAgile plays in most of the time… there is seldom any one, single way to do anything. I do believe that in this domain most estimates are functionally useless… but understanding what is estimateable and what isn’t… and more importantly what makes things un-estimateable and why they are un-estimateable… I find to be a way more useful conversation.

If we decide not to estimate, we better have a credible response to the question… when will you be done and what will I get for my money… because asking someone to spend a bucket of cash on the promise they might get something when the bucket runs out… is usually pretty much a non-starter. There are of course exceptions, but for most companies, the answer to this question is pretty important so we’ll need an alternative approach for solving this problem.

Almost always when someone calls my company… they aren’t really looking for advice on how to innovate or how to build the right product, that hasn’t historically been our brand… they are looking for help using agile to make and meet commitments, get product into market earlier, improve quality, or reduce costs. That’s typically been our sweet spot when it comes to introducing agile into large complex organizations and solving it requires more than just better estimating.

What’s the Problem with Estimating?

The companies that call us want to know how much they need to spend to get a particular outcome. They want to be able to make and meet commitments. They want to be able to manage customer expectations.

To have that conversation, you have to first start looking at why organizations aren’t predictable. Most of the time it’s not so much that they can’t estimate, it’s that they have way too many things going on at one time, they have way too many dependencies, and way too many non-instantly available resources. They believe in optimizing for individual production capacity, which causes them to matrix people across multiple initiatives, which just further exacerbates all the aforementioned problems.

Even once you get past the alignment issues, and you reduce much of the waste getting in the way of delivery, quite often companies don’t really know exactly what they want to build. They don’t really know exactly how they are going to build it. They might not even know who exactly is going to do the work (when they need to pull together the estimate) and we all know that we can see wild swings in throughput and productivity between any given developer on the team.

Even if companies can remove the waste, eliminate bottlenecks, and such… even if they know exactly what to build, how they are going to build it, and have a highly consistent stable of software engineers… developers able to deliver against estimates in a reliable way… quite often the code bases they are working in aren’t covered in tests, have a ton of technical debt and defects, and aren’t generally architected in a way that lends itself to stable delivery throughput.

Is Estimating the Right Problem to Solve?

Here is my take… having established our context and domain… I think that asking how to do better estimates is the wrong question to ask. I don’t think we really have an estimating problem as much as we have an organizational alignment problem, we have an investment strategy problem, and we definitely have a risk management problem. As companies, we are placing critical dollars on investments that have a very low probability of paying off… and we are relying on flawed estimates to mitigate that risk.

That is the problem that is killing us.

We want to use estimates to reduce uncertainty and end up increasing uncertainty.

We want to use estimates to reduce risk and end up increasing risk.

We want to believe that with enough up front planning we can know exactly what we need to build and how we are going to build it. That with enough historical information… or enough up-front analysis… we can determine how long the work is going to take. We want to believe that all developers are the same and that every developer can do everything in the estimate at the same rate as any other developer.

In practice, in my experience, this doesn’t work.

All it does is shift the perceived risk from the business to the development team and everyone loses.

So, What is the Right Problem to Solve?

We have an interesting paradox to deal with here. This is irrational… but make no mistake… this is the current reality in most software businesses today.

We live in a world where requirements are uncertain, technology is rapidly evolving, people are unpredictable… a world where technology is poorly architected, changes result in unintended consequences, and defects are rampant… AND we have to be able to make and meet commitments with some level of assurance that we can actually solve the business problem within the time and cost allocated to the project.

We figure this out or our companies fail.

Any solution that doesn’t answer the questions of when will we be done and what will we get for our money is a non-starter in most organizations.

#noestimates is a non-starter in most companies.

To solve for this irrationality, there is a ton of thinking that has to change, but at the highest level, you have to tackle this problem on two fundamental dimensions…

1) Do everything you can to optimize for throughput and stable delivery capacity, and…

2) Focus on budgeting and constraints rather than estimates.

What does this mean? Let me explain.

Optimize for Throughput and Stable Delivery Capacity

Well… given the uncertainty of requirements, technology, and people… you have to optimize the systems of delivery to eliminate as much of that variability as possible.

To me, that means creating complete cross functional teams, teams aligned toward a single set of products, features, or business capabilities. You have to give those teams as much clarity as possible regarding the problem they are trying to solve. They need to be given the tools necessary, and be held accountable for producing a measurable, working tested increment of product on regular intervals. This allows us a sampling frequency to assess progress.

To me, this is where agile really comes in. Well formed agile teams, working against a known backlog, and having the ability to produce a working tested increment of software at the end of every iteration will begin to stabilize delivery throughput and become predictable delivering on regular intervals. In a really stable, well formed team, it is often possible to estimate the backlog and establish a velocity against the backlog.

But even with well formed agile teams and stable delivery throughput… there is still variability in both the requirements space and the solutions space. In our problem domain, this isn’t going to be solved for easily, so what do you do?

Establish Budgets and Constraints Over Estimates

This is where I think the notion of estimates gets us in trouble. Often we don’t know exactly what to build, or even how we are going to build it, and believe it or not… in our domain that can be a good thing. If our domain is uncertain, we don’t want to pretend that uncertainty doesn’t exist, or worse, force a level of certainty that isn’t good for our product, or for our company, or that forces us to make early decisions that need to be deferred to when we have more information.

Given though that business won’t allow software production to be totally open ended… what do we do?

We generally recommend that folks look out across their product roadmap, consider where they need to be with their product in the next 6, 9, or 12 months, and do a very high level estimate on what they think it will take to get there. At this level of abstraction, you don’t simply consider what you believe it will cost to do the work, but also what you are willing to invest to get the return on investment you are expecting to get.

At this level of abstraction, it doesn’t have to be exactly right, just directionally correct.

Once you have a high level estimate, stop calling it an estimate. It is a budget. It is a constraint.

As you begin the process of elaborating the requirements, and maturing your emerging understanding of the product you are building, you are no longer asking yourself ‘what are the requirements’ or ‘how am I going to build this’… you are asking yourself what solution we can develop that can be delivered within the time and cost constraints the business has asked us to develop within. This is a VERY powerful thinking tool for constraining product development and meeting objectives.

You are adapting the requirements in real time to solve the business problem, and adapting the solution to something that can be built within the constraints that have been established. You invest much of your energy into requirements decomposition, getting the MVP done as early as possible, getting feedback from the delivery teams, aggressively managing the backlog to the constraints, and adjusting to meet business goals. We make tradeoffs at all levels to meet business outcomes.

This is the essence of agile risk management IMO.

Mitigating Risk and Managing Uncertainty

Just to drive a couple of these points home… in the commercial, non-governmental, software product development space… many organizations are not organized well, technology is not architected well, we are making critical high risk investments all the time, competition is fierce and speed to market is essential, and we are dealing with crushing uncertainty in requirements, technology, and people… and given these constraints, sometimes stuff isn’t knowable.

That said, not estimating isn’t an option. Not constraining development isn’t an option. The good news is that in the presence of a stable delivery organization… in the form of complete cross-functional agile teams… teams working against a known backlog and able to produce a working tested increment of product on regular intervals… and in the context of an investment strategy that is based on high-level estimates which quickly become budgets and constraints… we can begin to deliver on plan.

We can now start evaluating progress against our assumptions and actively manage risk to make sure we are optimizing our chances of being successful.

We can begin to enumerate the backlog, making relative size estimates as we go. Since the teams are stable, we can begin to correlate the estimates with what it actually cost to build them. We can use past performance data around the accuracy of the estimates to anticipate the accuracy of the future estimates. The further out we have the backlog, the better we can get at planning forward. Because we know our progress, we can see where we are at against our business objectives.

When things change, we have clear line of sight to our top level business objectives, we can assess how the changes will impact what we are trying to accomplish, and if the emerging size of our backlog is going to impact our ability to meet those objectives within the time and cost constraints that we have established. When we learn that things are getting too big, we can either take something out, agree to spend more and go longer, or we can kill the project.

Sometimes we have bitten off more than we can chew. Sometimes the investment we want to make isn’t really knowable until we get in and start building it. Sometimes we get started, and then we realize we are screwed, and our only viable option is to kill the project, cut our losses, and be thankful that we spent as little money as possible to figure out we were running a fool’s errand. Nothing (in any of this) guarantees success.

We just need to know when to get out as early as possible.

Separating Knowable Stuff and Unknowable Stuff

I think that much of the stuff that we think is unknowable is often more knowable than we think. Just because we don’t know exactly what to build, doesn’t mean we know nothing about what to build. Just because the technology is a mess, doesn’t mean that it’s such a mess that we can’t work with it at some level of abstraction. Just because developer throughput varies, doesn’t mean that we can’t make some planning level assumptions that are reasonably accurate.

That said, I have worked with teams that are inventing new mathematical algorithms for solving problems that have never been solved before. I’ve had conversations with developers where the estimate could be anywhere between 2 hours and 2 months… and maybe even the problem is unsolvable with the team we have available to solve it. Honestly, the team legitimately doesn’t know, and no amount of analysis is going to help them know. They just have to try.

All I want to do is acknowledge these kinds of problems exist, but they aren’t every problem. The approach I’ve taken for problems like these is to isolate them from the stuff that we are better able to understand and deliver. Now… we have a business decision to make… does the product have value even if the problem never gets solved? Is it still worth investing in? Is it worth spending money given the risk that this aspect of the solution could never be solved? Can I afford to assume this business risk?

A big part of our challenge is that we are placing big bets on unknowable things, we are making hard commitments on things that aren’t sufficiently understood. We have to get better at isolating R&D from the stuff that is more well understood product development. If we are going to invest in R&D, the payoff better be big enough that the risk of failure is worth it. We can’t pretend the uncertainty isn’t there and we can’t estimate that uncertainty away.

In Conclusion

To have a rational conversation about what to estimate or how to estimate, we have to look at the nature of the problem we are trying to solve, the nature of the organization that is chartered to solve it, the investment and risk profile of what we are trying to build. We need to assess our tolerances for uncertainty. We don’t want to pretend we can estimate things we can’t estimate, but we also want to recognize that some things under some conditions can be estimated.

My experience is that there is a ton which we can do with organizational alignment, program and portfolio management, investment rationalization, risk management, budgeting and constraints, limiting WIP and focusing on flow, that can dramatically help companies get better at making and meeting commitments… and yes, even better at estimating.

Likewise, I think there are problems that don’t lend themselves to estimating at all, where the real problem isn’t estimating or not estimating, but the allocation of essential dollars to high risk endeavors… which are masquerading as software product development… but profile more like experimental R&D.

You just have to know what business you’re in, build your organization to accommodate those realities, estimate and control what can be estimated and controlled, and stop demanding that those things which can’t be estimated or controlled conform to your sense of reality… unless you are really willing to invest in more advanced project modeling than most organizations I’m aware of have the capability to perform.

NOTE: Just realized this is the 600th post on LeadingAgile. That’s kind of a cool milestone. Thanks for being around and keeping up ;-)

The post Estimates or #noestimates… It’s All a Matter of Context appeared first on LeadingAgile.

Categories: Blogs

Python: Simplifying the creation of a stop word list with defaultdict

Mark Needham - Sun, 03/22/2015 - 03:51

I’ve been playing around with topics models again and recently read a paper by David Mimno which suggested the following heuristic for working out which words should go onto the stop list:

A good heuristic for identifying such words is to remove those that occur in more than 5-10% of documents (most common) and those that occur fewer than 5-10 times in the entire corpus (least common).

I decided to try this out on the HIMYM dataset that I’ve been working on over the last couple of months.

I started out with the following code to build a dictionary of words, their total occurrences and the episodes they’d been used in:

import csv
from sklearn.feature_extraction.text import CountVectorizer
from collections import defaultdict
 
episodes = defaultdict(str)
with open("sentences.csv", "r") as file:
    reader = csv.reader(file, delimiter = ",")
    reader.next()
    for row in reader:
        episodes[row[1]] += row[4]
 
vectorizer = CountVectorizer(analyzer='word', min_df = 0, stop_words = 'english')
matrix = vectorizer.fit_transform(episodes.values())
features = vectorizer.get_feature_names()
 
words = {}
for doc_id, doc in enumerate(matrix.todense()):
    for word_id, score in enumerate(doc.tolist()[0]):
        word = features[word_id]
        if not words.get(word):
            words[word] = {}
 
        if not words[word].get("score"):
            words[word]["score"] = 0
        words[word]["score"] += score
 
        if not words[word].get("episodes"):
            words[word]["episodes"] = set()
 
        if score > 0:
            words[word]["episodes"].add(doc_id)

This works fine, but the code inside the last for block is ugly and most of it is handling the case when parts of a dictionary aren’t yet initialised, which is defaultdict territory. You’ll notice I am using defaultdict in the first part of the code but not in the second, as I’d struggled to get it working.

This was my first attempt to make the ‘words’ variable based on it:

>>> words = defaultdict({})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: first argument must be callable

We can see why this doesn’t work if we try to evaluate ‘{}’ as a function which is what defaultdict does internally:

>>> {}()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'dict' object is not callable

Instead what we need is to pass in ‘dict':

>>> dict()
{}
 
>>> words = defaultdict(dict)
 
>>> words
defaultdict(<type 'dict'>, {})

That simplifies the first bit of the loop:

words = defaultdict(dict)
for doc_id, doc in enumerate(matrix.todense()):
    for word_id, score in enumerate(doc.tolist()[0]):
        word = features[word_id]
        if not words[word].get("score"):
            words[word]["score"] = 0
        words[word]["score"] += score
 
        if not words[word].get("episodes"):
            words[word]["episodes"] = set()
 
        if score > 0:
            words[word]["episodes"].add(doc_id)

We’ve still got a couple of other places to simplify though which we can do by defining a custom function and passing that into defaultdict:

def default_dict_function():
   return {"score": 0, "episodes": set()}
 
>>> words = defaultdict(default_dict_function)
 
>>> words
defaultdict(<function default_dict_function at 0x10963fcf8>, {})

And here’s the final product:

def default_dict_function():
   return {"score": 0, "episodes": set()}
words = defaultdict(default_dict_function)
 
for doc_id, doc in enumerate(matrix.todense()):
    for word_id, score in enumerate(doc.tolist()[0]):
        word = features[word_id]
        words[word]["score"] += score
        if score > 0:
            words[word]["episodes"].add(doc_id)

After this we can write out the words to our stop list:

with open("stop_words.txt", "w") as file:
    writer = csv.writer(file, delimiter = ",")
    for word, value in words.iteritems():
        # appears in > 10% of episodes
        if len(value["episodes"]) > int(len(episodes) / 10):
            writer.writerow([word.encode('utf-8')])
 
        # less than 10 occurences
        if value["score"] < 10:
            writer.writerow([word.encode('utf-8')])
Categories: Blogs

Python: Forgetting to use enumerate

Mark Needham - Sun, 03/22/2015 - 03:28

Earlier this evening I found myself writing the equivalent of the following Python code while building a stop list for a topic model…

words = ["mark", "neo4j", "michael"]
word_position = 0
for word in words:
   print word_position, word
   word_position +=1

…which is very foolish given that there’s already a function that makes it really easy to grab the position of an item in a list:

for word_position, word in enumerate(words):
   print word_position, word

Python does make things extremely easy at times – you’re welcome future Mark!

Categories: Blogs

PMI-ACP LinkedIn Study Group

Leading Answers - Mike Griffiths - Sun, 03/22/2015 - 01:24
I have created a LinkedIn group for readers of my PMI-ACP Exam Prep book. The group combines the features of a study group and Q&A forum along with exam taking tips. Once we have critical mass I will focus on... Mike Griffiths
Categories: Blogs

CI Saves My Bacon

Bobtuse Bobservations - Bob MacNeal - Sat, 03/21/2015 - 17:31
One production pattern I've found indispensable is Continuous Integration. CI saves my bacon. Recently I've found continuous delivery used on top of CI is a production pattern that helps me and my...

Bobtuse can be wildly informative
Categories: Blogs

Consul: the end of CNAMEs and PuppetDB?

Xebia Blog - Fri, 03/20/2015 - 17:24

Do you use CNAME records to identify services on your network? Do you feel life is impossible without PuppetDB and exported resources? In this blog I will explain how Consul can be used to replace both, and jump-start your transition towards container-based infrastructure in the process.

Categories: Companies

Let’s Acknowledge SAFe For What It Is… And Move On

Leading Agile - Mike Cottmeyer - Fri, 03/20/2015 - 15:28

It seems this week more SAFe related stuff than usual made it across my desk… some positive, some negative… some old, some new… but all asking the same fundamental questions. Is SAFe the savior of all things software development? Is SAFe really agile or merely the second coming of RUP? Will SAFe survive or be relegated to the ever growing list of failed approaches that have come and gone over the past 30 years?

Here is my take… it doesn’t matter.

Agile as it was defined 14 years ago is basically a lightweight framework for building software. Agile is predicated on the notion of small teams, colocation, onsite customers, lightweight documentation and frequent opportunities for feedback. These teams need to be sufficient for solving the problem they are formed to solve, they need to have clear backlog, and they need to be able to produce a working tested increment on some predetermined interval.

If you go one level deeper, agile teams need to work within a code base that is well architected, wrapped in tests, and safe to make changes. That code base needs to be supported by a team with autonomy to decide how best to solve the problems they are formed to solve. That team needs to be tightly aligned with the business objectives of the organization they are formed to support. They must have some degree of independence from the rest of the organization.

Let’s be clear…most organizations can’t form teams like this.

Most organizations have tightly coupled, non-autonomous teams….

Most organizations have poor alignment to the business…

Most organizations don’t have good tests…

Most code bases are not safe to make changes….

Most organizations are staffed as matrixes…

Should I go on?

So here is the deal. You either create the conditions to do agile well… agile as it was defined 14 years ago… or you do something else. SAFe is that something else. SAFe is a mechanism for wrapping the complexity of organizations that won’t reduce that complexity. SAFe encapsulates a larger, more enterprise focused value stream, a value stream that really does exist in most large organizations and can’t be ignored.

So… I want to say this one more time for emphasis… either you create the conditions to do agile well… or you do something else. SAFe is that something else.

We can say that SAFe is a cop out… or isn’t really agile… or that it’s the second coming of RUP… but don’t underestimate the complexity, the risk, or the cost of totally refactoring an enterprise to be the kind of organization that can really do agile at any kind of scale. Some organizations simply can’t or won’t invest in this. At the end of the day small batches are better than big batches. Iterative and incremental is better than waterfall, even if it isn’t agile.

I personally don’t think that SAFe is bad… or that SAFe undermines what we are trying to do with agile in the larger scheme of things… I do believe that SAFe will be better for some companies, some of the time. SAFe isn’t the way that I’ve chosen to tackle the enterprise problem. I want to work with companies that do want to fundamentally decouple themselves and have a shot at doing agile as it was originally envisioned…

But I’m pragmatic enough to know that can be a long road.

So… where does that leave us?

I think we should acknowledge SAFe for what it is and move on. Like I said, SAFe will work better for some companies, some of the time. It will be better than waterfall. I think it will be better than RUP as commonly practiced. I think it will be better than trying to do agile in companies that aren’t willing to create the conditions to do agile well. There will be some of us for which SAFe won’t be good enough, but that is okay too. SAFe will be better for lots of folks.

It all depends on what you value. Ron Jeffries said one time… ‘how good do you want to be, and how fast do you want to get there’?

Even though we don’t teach SAFe at LeadingAgile, I think for some SAFe can be a valid part of the journey toward greater agility… it might not get everyone as agile as we’d like them to be… but that’s probably okay too.

The post Let’s Acknowledge SAFe For What It Is… And Move On appeared first on LeadingAgile.

Categories: Blogs

Dates For Your Diary

AvailAgility - Karl Scotland - Fri, 03/20/2015 - 14:31

I’ve just updated my Calendar page with a couple of conferences coming up, as well as a new date for my Kanban Thinking course.

LeanUX15, New York, April 15-19

This is going to be a great event, with a fantastic line-up of speakers, covering a wide range of topics. I’m going to be talking about my current ideas on Strategy Deployment. If you use the code LeanUXSpeaker you’ll get 20% off. Prices go up on March 21st!

Agile Cymru, Cardiff, July 7-8

Another great event,  with another fantastic line-up of speakers! I’m particularly looking forward to this one because I get to go back to Cardiff, where I grew up and went to school.

Kanban Thinking Course, London, March 30-April 1

I’m running another training course with agil8 again in London. Here’s some feedback from the last one I ran:

  • Really engaging.
  • Found it fascinating and to be honest that surprised me.
  • It felt like a university lecture, in a good way! Very insightful and complete.
Categories: Blogs

Does your Definition of Done allow known defects?

Improving projects with xProcess - Fri, 03/20/2015 - 14:27
Is it just me or do you also find it odd that some teams have clauses like this in their definition of done (DoD)?
Done... or Done-But?

"...the Story will contain defects of level 3 severity or less only..."

Of course they don't mean you have to put minor bugs in your code - that really would be mad - but it does mean you can sign the Story off as "Done" if the bugs you discover in it are only minor (like spelling mistakes, graphical misalignment, faults with easy workarounds, etc.). I saw DoDs like this some time ago and was seriously puzzled by the madness of it. I was reminded of it again at a meet-up discussion recently - it's clearly a practice that's not uncommon.

Let's look at the consequences of this policy. 

Potentially for every User Story that is signed off as "Done" there could be several additional Defect Stories (of low priority) that will be created. It's possible that finishing a Story (with no additional user requirements) will result in an increase in the Product Backlog size! (Aaaagh...) You're either never going to finish or, more likely, never going to fix those Defects in spite of all the waste that will be generated around recording, estimating, prioritising and finally attempting to fix the defects (when the original developer has forgotten how he coded the Story, or has been replaced with someone who never knew it in the first place).

What should happen then? 

Clearly the simple answer is that if you find a bug (of whatever severity) before the Story is "Done", fix it. You haven't finished until it works - just avoid double-think like "I've finished it even though the product now contains new defects".

Can there be exceptions to this?

Those who think quality is "non-negotiable" would probably answer "No", but actually (whether acknowledged or not) we all work with a concept of "sufficient quality". It is inherent in ideas like "minimum viable product" and "minimum marketable feature". Zero defects is a slogan not a practicable policy for most product developments. Situations where we find defects that are hard to fix when working on a User Story, bring this issue to the fore.

So here's what I recommend Product Owners do. Firstly, don't sign off a Story if it contains defects! Secondly if defects are found choose to do one of the following:
  1. Insist it's fixed. Always preferred, and should nearly always be followed. Occasionally however it is too expensive, but unless the cost of fixing it is greater than the time already spent on the Story I would always recommend fixing. (We discuss below the problem of "deadlines".)
  2. Accept it's not a defect... at least not a defect that will ever get fixed (unless it's found and added to the Backlog by users). This doesn't feel right but it is more honest than adding items to the Product Backlog that will never be prioritised.
  3. Agree the defect is actually a different Story, functionality that will be covered elsewhere even though it is part of the same Epic or Feature. The original Story will not be released without all the functionality of that Epic/Feature, so it will be fixed before release. Note that this option depends on a well understood concept of Epic/Feature and appropriate release policies around it.
What I am arguing for here is that our Definition of Done trumps deadlines, Sprint boundaries and Sprint "commitments". I believe it is confusion in this area that leads teams to adopt misguided DoDs. That confusion in turn results in the need for "Maintenance Teams" that clear up after Development teams have scattered defects through the product, or the common practice of dumping defects into massive Defect logs that will never be cleared, even if the development continues for decades! As +Liz Keogh has observed, deadlines should really be renamed "sad-lines" - if they're missed nobody's dead; maybe a few are sad! It is not that such planned dates are unimportant, of course they are not. It is that agreed dates should not have greater importance than agreed quality.

These "Done-But" policies are most common in development departments where the concept of commitment ("Look me in the eye and tell me you will complete these Stories by this date") is considered more important than Done, i.e. that completing a Story means it will be at the quality agreed. The Scrum Guide replaced the word "commitment" with "forecast" in a recent revision for a reason - commitment should be what a team member brings to the overall goals of the organisation, not to a date that at best was derived from very limited information.

Of course in reality both commitment to dates and a particular Definition of Done must be subservient to the overall business goals. We can move a release date for an Epic/Feature to a later (or earlier) date if that will better fulfill the overall goals. Similarly changing the DoD or quality expectations up or down should always be considered in order to improve business outcomes.

Does your Definition of Done allow known defects? If so please come back to me and tell me why... or if you would change it, tell me how?
Categories: Companies

Partnerships & Possibilities, Episode 5, Season 6: Haven’t We Heard This Before?

Partnership & Possibilities - Diana Larsen - Fri, 03/20/2015 - 13:43

Partnerships & Possibilities: A Podcast on Leadership in Organizations
EPISODE 5: HAVEN’T WE HEARD THIS BEFORE?

Photo Credit: renaissancechambara via Compfight cc

[Introduction] Many of the issues being raised about gender in the workplace seem to be covering old ground – patterns made explicit since at least the 70s.

[02:45] We’re hoping that we can take these patterns as givens, and can then move the conversation forward.

[03:00] “Fostering Women Leaders: A Fitness test for your top team”. The author seems to not know much about the history of women and gender equity in organizations – the article states the obvious.

[04:50] The article offers “resilience, grit, and competence” as skills for women to build – as if men don’t need these same skills. Sharon believes women often have a surplus of “resilience”, as opposed to needing special training.

[7:00] Looking at the talent pipeline to make sure women are entering isn’t enough.

[9:30] Researchers looked at opinions of Gen Xers vs Gen Y about expectations around gender equity in life and work and found a sad picture of unmet expectations.

[14:00] Venture capitalists are not funding women-led start-ups at the same level as men.

[17:00] What if the reason more and more women are doing more childcare, rather than following their careers, is because their male partners have less obstacles and make more money due to bias, and they are just giving in to an unjust reality?

[24:00] Conversations about women in organizations need to happen – but we need to talk about new things, instead of having the same conversations over and over again and expecting different outcomes.

[26:40] Leaving this problem for the millennials to solve is a fantasy. We all need to work together to figure out newer solutions, including the organizations we work for.

Categories: Blogs

Badass: Making users awesome – Kathy Sierra: Book Review

Mark Needham - Fri, 03/20/2015 - 09:30

I started reading Kathy Sierra’s new book ‘Badass: Making users awesome‘ a couple of weeks ago and with the gift of flights to/from Stockholm this week I’ve got through the rest of it.

I really enjoyed the book and have found myself returning to it almost every day to check up exactly what was said on a particular topic.

There were a few things that I’ve taken away and have been going on about to anyone who will listen.


Paraphrasing, ‘help users acquire skills, don’t throw knowledge at them.’ I found this advice helpful both in my own learning of new things as well as for thinking how to help users of Neo4j get up and running faster.

Whether we’re doing a talk, workshop or online training, the goal isn’t to teach the user a bunch of information/facts but rather to help them learn skills which they can use to achieve their ‘compelling context‘.

Having said that, it’s very easy to fall into the information/facts trap as that type of information is much easier to prepare and present. You don’t have to spend much time thinking about how the user is going to use it; rather you hope that if you throw enough information at them, some of it will stick.

A user’s compelling context is the problem they’re trying to solve, regardless of the tools they use to solve it. The repeated example of this is a camera – we don’t buy a camera because we want to buy a camera, we buy it because we want to take great photographs.


There’s a really interesting section in the middle of the book which talks about expert performance and skill acquisition and how we can achieve this through deliberate practice.

My main take away here is that we have only mastered a skill if we can achieve 95% reliability in repeating the task within one to three 45-90 minute sessions.

If we can’t achieve this then the typical reaction is to either give up or keep trying to achieve the goal for many more hours. Neither of these is considered a useful approach.

Instead we should realise that if we can’t do the skill it’s probably because there’s a small sub skill that we need to master first. So our next step is to break this skill down into its components, master those and then try the original skill again.

Amy Hoy’s ‘doing it backwards‘ guide is very helpful for doing the skill breakdown as it makes you ask the question ‘can I do it tomorrow?‘ or is there something else that I need to do (learn) first.

I’ve been trying to apply this approach to my machine learning adventures which most recently has involved various topic modelling attempts on a How I met your mother data set.

I’d heard good things about the MALLET open source library but, having never used it before, I sketched out the goals/skills I wanted to achieve (a rough command-line sketch follows the list):

Extract topics for HIMYM corpus ->
Train a topic model with mallet ->
Tweak an existing topic model that uses mallet ->
Run an existing topic model that uses mallet -> 
Install mallet
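For reference, this is roughly what the “install mallet” and “run an existing topic model” steps look like on the command line – a minimal sketch based on MALLET’s standard import/train workflow, where the input directory, output file names and topic count are illustrative assumptions rather than anything from the post:

# import a directory of plain-text documents into MALLET's binary format
bin/mallet import-dir --input himym-transcripts/ --output himym.mallet \
  --keep-sequence --remove-stopwords

# train a topic model, writing the top words per topic and the per-document topic mix
bin/mallet train-topics --input himym.mallet --num-topics 20 --optimize-interval 10 \
  --output-topic-keys himym-topic-keys.txt --output-doc-topics himym-doc-topics.txt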

The idea is that you then start from the last action and work your way back up the chain – it should also act as a nice deterrent for yak shaving.

While learning about mallet I came across several more articles about topic modelling that I should read, and while these don’t directly contribute to learning a skill, I think they will give me good background to help pick up some of the intuition behind topic modelling.

My takeaway about gaining knowledge of a skill is that when we’re getting started we should spend more time on practical work rather than only reading; once we get further into it, we’ll naturally become more curious and do the background reading. I often find myself reading non-stop about things but never completely understanding them because I don’t go hands-on, so this was a good reminder.

One of the next things I’m working on is a similar skill breakdown for people learning Neo4j, and then we’ll look to apply this to make our meetup sessions more effective – should be fun!

The other awesome thing about this book is that I’ve come away with a bunch of other books to read as well.

In summary, if learning is your thing get yourself a copy of the book and read it over a few times – so many great tips, I’ve only covered a few.

Categories: Blogs

Agile and Beyond, Dearborn, USA, April 30 – May 1 2015

Scrum Expert - Thu, 03/19/2015 - 18:39
Agile and Beyond is a two-day conference focused on Agile software development and Scrum project management that takes place near Detroit in Dearborn, Michigan. It aimed at taking together software developers, designers, product owners and executives for workshops and presentations focused on Agile and Lean processes. In the agenda of Agile and Beyond you can find topics like “Expanding an Agile Culture in organizations with Design thinking”, “Spice Up Your Agile Everything”, “Agile Coach Activity Pack: Experience and Learn Through Four Simulations”, “The Business Analyst: How To Be More Than a ...
Categories: Communities

SAFe 4.0 Sneak Preview

Agile Product Owner - Thu, 03/19/2015 - 16:33

Our team has been hard at work building the new content for the 4.0 release this summer. Below is a sneak preview of the latest rendering of the new Big Picture, followed by key highlights of potential new changes:

[Image: SAFe 4.0 Big Picture, work-in-progress rendering (03/17)]

Portfolio Level:

  • Adding the immutable Lean-Agile principles on which SAFe is based;  they will be a key element on the Big Picture
  • Updating the House of Lean guidance and terminology. Adding a new pillar for Innovation to reflect its critical role in today’s modern business world.
  • Adding new Lean Systems Engineering (LSE) value stream to illustrate the integration to the new SAFe LSE framework and to show that a SAFe Portfolio can have Value Streams for both Software Systems and Cyber Physical Systems
  • Adding the “Customer” to the Big Picture. Customers are the reason why Value Streams exist and are the ultimate economic buyer of the subject solution
  • Introducing the concept of the Enterprise Portfolio to govern multiple instances of SAFe
  • Adding Software Capitalization guidance to the framework

Program/Team Level:

  • Adding an “ART Kanban” to make Feature WIP visible and to improve program execution, increase alignment and transparency
  • Renaming the majority of icons from “Program” to “ART” (Agile Release Train) to improve consistency of terminology and emphasize the release train concept (e.g. ART Epics, ART Backlog, ART PI Objectives, etc.)
  • Renaming Release Planning to PI Planning to further clarify the separation of concerns between Developing on Cadence and Releasing on Demand.
  • Agile Teams in SAFe will have the choice to use ScrumXP and/or Kanban to manage the flow of work. Kanban is particularly useful for managing WIP when the work of the team does not have a predictable arrival rate  (e.g. maintenance work, work of System Team and DevOps, etc.). You can learn more about using Kanban in SAFe now in the current Guidance article.

We are excited about the introduction of these new improvements to the Scaled Agile Framework. SAFe is an evolving work in process, capturing current best practices for implementing Lean-Agile practices at scale. Of course, that means you sometimes have to change things you’ve decided in the past. That’s called learning.

We are targeting release 4.0 of the framework around August,  and we are looking forward to your feedback and participation in improving the framework.  We’ll also be supporting our users of V3.0 for a year after this next release.

Our team would like to thank our Customers, SPCs/ SPCTs, SAFe Community, and SAFe partners who help us relentlessly improve the framework. Keep the feedback coming!

 

Be SAFe!
Richard Knaster

Categories: Blogs

3 steps to write good commit messages

tinyPM Team Blog - Thu, 03/19/2015 - 15:48


“If a thing is worth doing, it’s worth doing well” 

 

As you may know, tinyPM is integrated with SCM tools such as GitHub, Beanstalk, Bitbucket and Stash. Thanks to this integration you can group all commits in a logical structure that corresponds to your backlog structure in tinyPM.

This is actually an awesome thing because you can easily access not only the history of a particular user story, but also its development history.

And indeed, it’s all about communication. Almost all software projects are collaborative team projects. A well-written commit message is critical to communicate the context of a change to team members. Moreover, later on you can easily check and understand that context without having to dig it out again and again. This is how you save time and resources.

Now, this is not only a developer’s or project manager’s dream about having everything in the right order and place. Let’s stop dreaming – it can all be done properly here and now!
There is only one condition – the whole team needs to be willing to write great commit messages. And now it’s time to show you how to do that.

Here are 3 rules to follow:

 

1. KISS – Keep It Simple Stupid 
A good commit message should be concise and consistent.

If the change is trivial, such that no explanation is required, a single line is sufficient as a commit message.
It should be written in an imperative style like “fix a typo”:

If applied, this commit will fix the typo.
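For example (both subject lines below are made-up illustrations, not from the original post):

good: “Fix typo in signup form label”
      → “If applied, this commit will fix typo in signup form label.” (reads naturally)
bad:  “fixed some stuff”
      → “If applied, this commit will fixed some stuff.” (does not read as a sentence)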

 

If a commit is more complicated, we need to write a subject (in an imperative style) and provide an explanation.
Within this explanation, we should focus on describing why the change was made and how it benefits the project.

It’s very important that a commit should refer to one logical change at a time. What does that mean? It’s not correct to write a message like “fix the filtering bug, the searching bug, and add attachments to the user story”. Here we should create three separate commits (sketched with git after the list):

fix filtering bug 

fix searching bug

add attachments to user story
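As a rough git sketch of the same idea – the file paths here are hypothetical and assume the three changes touch separate files that can be staged independently:

git add src/filter.js
git commit -m "Fix filtering bug"

git add src/search.js
git commit -m "Fix searching bug"

git add src/user-story-attachments.js
git commit -m "Add attachments to user story"

If the changes are tangled in the same files, git add -p lets you stage them hunk by hunk so each commit still contains one logical change.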

 

2. Use three magic questions to provide sufficient context
Peter Hutterer wrote that a good commit message should answer three fundamental questions:

- Why is it necessary?
Concise description of the reason why a commit needs to be done. What does it fix? What feature does it add? How does it improve performance or reliability?

- How does it address the issue?
There should be a high-level description of the approach that was taken. For trivial patches this part can be omitted.

- What effects does the patch have?
(In addition to the obvious ones, this may include benchmarks, side effects, etc.)

 

These three questions will help you to provide all necessary information and context for future reference.
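Put together, a non-trivial commit message might look something like this (the feature and its details are invented purely for illustration):

Add server-side validation for attachment uploads

Uploads larger than the configured limit were silently dropped,
leaving user stories without their attachments.

Validate the file size before saving and return a clear error to
the client instead of failing silently.

No change for valid uploads; oversized uploads now receive an
explicit error response.

The subject is imperative, the first paragraph answers why the change is necessary, the second answers how it addresses the issue, and the last covers the effects.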

 

You can easily link commits to tasks or user stories in tinyPM. Simply include the user story #id in the commit message (e.g. #512) and the commit will show up on that particular story card:

Add a list of recommended products #512

For more instructions about commit syntax please check tinyPM documentation.

 

3. Do not make excuses!

We all know that sticking to rules can be a pain in the neck. However, this is often necessary in order to achieve certain goals. Our goal is to write great commit messages; therefore, we need to adhere to these rules.

And yes, there will be a lot of excuses like “but it works”, “we didn’t have time” or “I’m the only one working on this”, and a dozen others. Finding excuses is far easier than being consistent. Therefore, we need to think about the big picture – think how many benefits we can gain.

Just remember that good programmers can be recognized by their legacy. A good commit history will help others start working on such projects readily, without struggling to figure it all out.

This should become a habit that will constantly benefit the whole team in terms of productivity and efficiency, as well as a sense of satisfaction.

 
The last word belongs to Chris Beams: “(…) just think how much time the author is saving fellow and future committers by taking the time to provide this context here and now.”

Categories: Companies

March Newsletter: Managing Priorities, 10 Kanban Board Examples, Why It’s Time for Lean, and More

Here’s the March 2015 edition of the LeanKit monthly newsletter. Make sure you catch the next issue in your inbox and subscribe today. Kanban for DevOps: 3 Reasons IT Ops Uses Lean Flow (part 2 of 3) A top frustration for operations teams is the context switching caused by conflicting priorities. In part two of this three-part […]

The post March Newsletter: Managing Priorities, 10 Kanban Board Examples, Why It’s Time for Lean, and More appeared first on Blog | LeanKit.

Categories: Companies

Meetings, Meetings, and More Meetings

Leading Agile - Mike Cottmeyer - Thu, 03/19/2015 - 14:25

Why on earth do I need to spend so much of my time in a meeting? This is an absolutely sane question that most of the team members wind up asking at some point in time while I am coaching an organization towards more adaptive management techniques.

Regardless of the role, there are other things beyond meetings that we have traditionally declared to be a productive use of time. If you are a developer, then we declare productivity to be associated with time spent writing software. If you are a product manager, then we declare productivity to be associated with time spent defining the next version of a product or understanding the market’s demands. Whatever the role, it is rare for an organization or a profession to associate meeting time with high productivity.

From this perspective, it makes a ton of sense when people ask the question:

Why on earth do I need to spend so much of my time in a meeting?

Here’s my usual answer:

What defines a productive minute: is it one that is spent focusing on your craft, or one that is spent delivering value to the organization as quickly as possible?

I tend to think that a productive minute is one that is spent delivering value to the organization as quickly as possible. So, while the time spent practicing a craft is absolutely a critical part of getting value to the organization, it is wasted if the individual is not hyper-focused on the actual needs of the organization. And this is where meetings come into the picture.

Effective meetings will have a specific theme and will enable a team to establish high clarity around the needs of the organization and teach accountability. For most of the teams that I coach this involves a few specific themes:

(1) Daily Standup – This is a quick touchpoint that is oriented around maintaining accountability within a team as each member takes a minute to update the other team members about the progress made over the past 24 hours, progress that they expect to make over the next 24 hours, and any issues or concerns that they need help addressing.

(2) Tactical Meeting – This is an hour or more and has a very specific purpose, dealing with short term tactics such as creating clarity around near term market needs or ensuring that the team is successful in meeting their commitments.

(3) Strategic Meeting – This is usually a half day or more and is focused on creating clarity about how to move the organization forward with a focus on the longer term vision and strategies.

What’s your take – are meetings useful in your organization? Do your meetings have specific themes, or are they a mishmash of agenda topics?

The post Meetings, Meetings, and More Meetings appeared first on LeadingAgile.

Categories: Blogs

Knowledge Sharing


SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP and Kanban.