
Feed aggregator

Building Software Craftsmen

Agile Management Blog - VersionOne - Mon, 08/25/2014 - 17:54

I see Craftsmanship as the answer to an issue that has been rising in importance over the past several years. Agile, as a development methodology, has hit the mainstream. While in many ways this is a good thing, there are some drawbacks. The majority of attendees at the major conferences are now project managers, while in the past they were developers, or at least a fair mix of both. This has helped socialize the ideas around agile development, but we also need to seek balance in the Force. That need for balance has given rise to the Software Craftsmanship movement, which is focused on the Art of Writing Software and on what is required to create great software, especially in the agile world. In a previous post, I mentioned how important Craftsmen are to the agile world. But how does one become a Craftsman?

First, let's take a look at the state of affairs of our industry, job-wise. It is pretty well known that being a software developer is a lucrative career, but the numbers are still somewhat surprising. According to the U.S. Bureau of Labor Statistics, the median salary for software developers in the United States is around $93,000. The statistics further show that the number of jobs in the software development field will grow by 22% in the next 10 years. This is significantly higher than the rest of the job market. So how are we going to fill these jobs with high-quality developers?

Obviously, we need to train a lot of really talented people.  We need them to be able to create software well, and also to be able to work well together.  Currently, the primary source of education in software development is via the universities and colleges.  Unfortunately, not only is this failing to provide enough programmers for the jobs, it is also failing to provide the high level of quality and experience we will need.

Now, universities are great for a lot of things.  They provide a strong level of theory and understanding of the underlying science and logic.  What they don’t provide is real-world experience and practical applications of knowledge.  This needs to come from somewhere else.  My suggestion is to turn the clock back a few hundred years, and turn your team room into a workshop.  Let’s populate that workshop with Craftsmen.  We may not have all of the Craftsmen we need to begin with, so we need to build and grow them.  This can be done by applying an apprenticeship program and using the craftsmen’s model for further career development.  Let’s take a look at how this would work.

Apprenticeship

Let’s begin with the idea of hiring apprentices.  An apprentice is someone who may or may not have a formal education in software development.  What she will have is the desire to learn.  The question remains, “What will she learn and how will she learn it?”  For starters, we will focus on five main areas:

  1. Crafting Code – The art of using one or more programming languages to create clear, well-factored code.  We want our apprentices to be polyglots by the time they become journeymen, so we will do this in more than one language.
  2. Applied Principles – Well-written code isn’t enough.  An apprentice needs to understand principles like SOLID, and know how to apply them.
  3. Technologies and Tools – While programmers need to be able to practice activities like Refactoring by hand, they also need to know how to use certain tools, as well as which tool to choose for a particular task.
  4. Work Habits – Programming is about more than just showing up, slinging some code, and going home to play Minecraft.  Especially in an agile software development shop, we need to be able to build muscle memory around the activities that make good programmers great, such as TDD, Continuous Integration, etc.
  5. Soft Skills – The days of the socially inept programmer are over.  Software apprentices will learn how to work in a team, how to communicate with others, and other soft skills that tend to be forgotten in the traditional learning environment.

An apprentice will learn these foundational areas through working on real projects, under the tutelage of a mentor.  Ideally, that mentor might be a Master Craftsman, but if one isn’t available, then experienced, well-versed Journeymen will make good mentors as well.  At the beginning of her apprenticeship, the apprentice might do some basic things like creating and maintaining the continuous integration environment, some bug fixing, or other tasks.  Over time, she will work with her mentor and other Craftsmen on real-world activities and projects, continually adding value to the team.  When the apprentice has demonstrated her ability to move on to bigger and more challenging projects and tasks, it’s time for her to be recognized as a Journeyman.  We will explore more about the life of a Journeyman in another article.  In order to become a Journeyman, the apprentice must show her expertise by doing, not by taking tests.  Real work pieces that evidence her abilities in various areas and languages, coupled with having paired with everyone on the team, will determine an apprentice’s ability to move on.

Categories: Companies

Encapsulating Value Streams and the Object Oriented Enterprise

Agile Management Blog - VersionOne - Mon, 08/25/2014 - 17:51

Guest post by Mike Cottmeyer of LeadingAgile

When you get right down to it, a Scrum team is fundamentally a container designed to encapsulate the entire product delivery value stream into a single workgroup.

The value stream associated with software development typically goes something like this: analysis, design, build, test, and deploy. That’s pretty much everything you need to develop a working, tested increment of the product… and is, therefore, what defines the basic requirements for a Scrum team.

When you put analysts, designers, developers, and testers into a single workgroup; let them work across skill-set boundaries, self-organize to balance load; and have them collectively produce a working, tested increment of product on regular intervals, you can reduce a tremendous amount of planning complexity.

Your organization has to get past the belief that individual productivity and utilization are the measures of effectiveness. You have to focus more on team throughput and flow of value. In return, the construct allows you to move fast, change direction, and adapt as you learn more about the emerging product. Planning is simple, objectives are clear, and outcomes are measurable.

Why Scrum Breaks?

The problem with many Scrum implementations is that the team doesn’t actually encapsulate the entire value stream. In almost every real-life organization, someone who is necessary for the team to complete their work doesn’t actually live in the Scrum team. This is very simply what causes Scrum to break. Dependencies kill Scrum.

When this happens, we get into an agile project management mindset. We are running some of the work through the Scrum team, but we need extra coordination mechanisms to help us line up the Scrum team with the other parts of the value stream that live outside the team. We have external planning dependencies that have to be dealt with.

It’s my assertion that these planning dependencies are what result in teams going through the motions of Scrum without realizing the value Scrum promises. Last month I gave a talk at Agile 2014 that was all about why agile fails in large enterprises. The whole talk is about how to systematically break dependencies between teams.

That said, some organizations are simple enough that a Scrum of Scrums is sufficient to model and deal with the unavoidable coordination issues. Some organizations have to be more proactive in coordinating complex backlogs and use constructs like Product Owner Teams, Solutions Teams, and Requirements Clearinghouses.

The key takeaway here is that when you have a Scrum team where the entire value stream is not encapsulated, you need something outside the basic Scrum construct to coordinate across the teams. Pick your poison, but something outside the team almost always has to be present.

SAFe (Scaled Agile Framework) and Value Streams

I want to see if we can pull this up a level and talk a bit about SAFe. Coming off the Agile 2014 conference in Orlando, SAFe was everywhere… and for good reason. Everyone is trying to figure out how to take the concepts we’ve learned with Scrum and get the value at enterprise-level scale. Scaling Scrum is the topic du jour.

Honestly, I don’t keep up with SAFe in detail… I’ve never been in SAFe training… and I’m definitely not part of the inner circle of SAFe thought leaders. That said, I’ve read everything Dean (Leffingwell) has written since he released Scaling Software Agility, I have a ton of respect for his work, and I agree with the basic patterns he has introduced.

At this conference though, I heard something I hadn’t really heard before.  It seemed that everyone was talking about value streams relative to SAFe.  I’m sure the concept has been in SAFe for a while, but it caught my attention differently this time around.  It made me wonder if I should think about SAFe similarly to how I think about Scrum.

At LeadingAgile, we typically coach an explicit value stream in the middle-level program tier. We think about progressive elaboration and maximizing the flow of value in the middle tier. We usually encourage an explicit Kanban flow that respects some of the upstream and downstream work processes we see most often in product delivery organizations.

It occurred to me that SAFe isn’t modeling the value stream explicitly in the middle tier; it is managing the work through the PSI/PI planning meeting, fundamentally encapsulating the entire value stream within the planning construct. In short, SAFe is fundamentally operating like a big Scrum, just encapsulating a larger value stream.

Whereas I’ve been thinking most about breaking dependencies, SAFe appears content with managing dependencies and making them visible and explicit in the planning session. This is absolutely a necessary step in the process, but by not dealing with the root cause of dependencies directly, I believe this will limit your ultimate agility over time.

SAFe will struggle with dependencies at scale for the same basic reason Scrum struggles at the team level. Dependencies make it hard to move.

We’ve been giving a lot of thought lately to breaking dependencies, and our work with clients is fundamentally about forming complete cross-functional agile teams and systematically breaking dependencies between them over time. We believe that this is the only true way to scale agile indefinitely to the enterprise.

We believe this concept is methodology-independent and will make any methodology you choose better in the long run.

Why SAFe Breaks?

Scrum isn’t trying to break dependencies within the team; it is merely trying to encapsulate the value stream, allowing the team members to work across role boundaries, self-organize around the work and stabilize throughput, while holding them to producing a working, tested increment every couple of weeks.

SAFe isn’t trying to break dependencies within these larger value streams, either. It is merely trying to encapsulate the value stream similarly to Scrum, allowing team members to work across role boundaries, self-organize around the work, and stabilize throughput while producing a working, tested increment every PI.

There are clearly more constructs within SAFe than exist within Scrum to deal with the larger effective team size and integration tasks, but I think that the pattern fundamentally holds. I never really thought about it that way before. I’m curious if anyone else has ever thought of SAFe as kind of a big Scrum?

If we know that Scrum implementations struggle when the entire value stream can’t be encapsulated within a team of 6-8 people, do SAFe implementations struggle when the value stream can’t be contained within a team of 125? If my assumption about dependencies and value streams holds, I suspect they would.

If SAFe is ultimately going to struggle at scale beyond 125 people, does that mean that we are going to need the same constructs for coordinating value across teams that we need in Scrum? At some point will we find ourselves talking about ‘SAFes of SAFes’ or ‘SAFe Product Owner Teams’ or ‘SAFe Portfolio Solutions Teams?’

I suspect we might. I think we might also see evidence of this already.

Maybe there is some stuff in SAFe that already accommodates this, but I’m curious what’s out there to tactically coordinate across SAFe value streams? Remember, I’m not talking about investment decisions across a SAFe Portfolio… I’m talking about managing dependencies between value streams. I gotta figure some folks are dealing with this already, given how well SAFe is doing in the market.

If anyone has any insight or can point me in the right direction, I’d appreciate it. I’m interested to know how the insiders think about this. Is anyone inventing things outside the SAFe body of knowledge that are being written about? Where is the body of knowledge emerging outside of the official canon, and are people talking about this?

Ken and Jeff Got it Right

Back in 2006 Ken Schwaber put up a slide where he illustrated a team-of-teams structure, one where lower-level Scrum teams were encapsulated in a higher-order Scrum of Scrum construct. Back in 2006 I was thinking that there was no way this would work in the absence of very loosely coupled teams (and at that time, that was NOT my reality).

A few years ago, I heard Jeff Sutherland and Jim Coplien give a talk at the Scrum Gathering in Orlando. The one line I vividly remember from that talk was that, “teams were never expected to self-organize across class boundaries.” They were implicitly saying that encapsulation was the expectation for a Scrum team to form.

Jeff Sutherland, as we speak, is talking over at Scruminc.com about Object Oriented Design (OOD) in Scaled Scrum implementations. He is basically making the case that Scrum teams are intended to be formed around Objects in an organization. It is a prerequisite for Scrum to work.

I think this one concept has been wholly misunderstood and largely ignored by organizations adopting Scrum. Many people implementing Scrum nowadays don’t have any idea about OOD principles, let alone as they relate to organizational design and team structure.

When you read Craig Larman and Bas Vodde’s stuff around LeSS (Large Scale Scrum) and consider the structures they’ve put into place, you have to view those structures through the lens of an Object based organizational structure. Their work is built on the same foundation that Ken and Jeff laid 25 years ago, but that is seldom implemented.

I find myself fundamentally in alignment with Ken, Jeff, Bas, and Craig… and I think theirs is the best end-state for a scaled agile enterprise. The problem is that their underlying operational structure for a scaled Scrum implementation to work… the Object Oriented Enterprise… doesn’t exist in most companies adopting Scrum.

SAFe is a Compromise

I think I’m coming to the conclusion that SAFe is a reasonable compromise given the operational constraints in many organizations. If you aren’t going to form teams around Objects in your organization, you probably shouldn’t bother implementing any of the Scaled Scrum variants. You’ll just get frustrated.

That said, I do believe that SAFe is going to suffer from many of the same problems that Scrum deals with organizationally in the presence of incomplete or dependent value streams and a lack of organizational object orientation. It’s just a matter of time and scale.

At this point in the evolution of my thinking, I find myself in a place where I don’t believe the scaled Scrum stuff will work in most companies in their current state. I think SAFe will work to a point, but if it’s sold as the final destination, we are going to limit our ability to scale and ultimately limit our ability to be agile.

We can only be as agile as the size of the team, the independence of the value streams, and the length of the PI boundary allow. I think organizations will soon find they are going through the motions of SAFe without really solving the problem. Again, it sounds just like where we are with Scrum in most companies.

Refactoring Your Enterprise

The only real, long-term sustainable approach to Scaled Enterprise Agile is to take the long, hard road toward refactoring your enterprise into an organization based around the OOD principles that were implied, but maybe not explicit, when agile was formally articulated 13 years ago. The problem is that this message doesn’t fill CSM classes and has to be sold all the way at the top of the organization. It will require a significant investment on the part of the executives.

The cool thing is that in the presence of this kind of organization, everything else starts to make sense. You can see a place where Scrum and Kanban live side-by-side in peaceful harmony. You can see where the technical practices fit in at scale. SAFe, Disciplined Agile Delivery (DAD), and LeSS all have a place in the pantheon of ideas. No matter which path you take, the Object Oriented enterprise makes everything else make sense. It’s the right context.

With the Object Based Enterprise as a sort of backdrop to sort out all the different perspectives on agile, we can begin to see that the conversation around potential future state starts to get WAY less interesting than what it takes to get there. I think the interesting conversation is around how we do the refactoring in the first place, and what the possible transition patterns look like which help us get there, while still running our businesses effectively.

If I think about it… that was really what my talk last week was about. It’s up on my blog, and was recorded by the conference, but that might take some time to publish. I think I’ll do my next post as an overview of the content and rationale behind the material in my presentation. Let me see if I can make that happen this weekend ;-)

See more at: http://www.leadingagile.com/2014/08/encapsulating-value-streams-object-oriented-enterprise/

Categories: Companies

Secret Recipe for Building Self-Organizing Teams

Agile Management Blog - VersionOne - Mon, 08/25/2014 - 17:49

Guest post by Venkatesh Krishnamurthy, Advisor and Curator, Cutter, Techwell, Zephyr

Some time back I noticed something odd about an agile team. The team temperature was consistently 10 out of 10, and each team member expressed their happiness working on this project. I was curious to find the secret behind an “always happy team.” A bit of interaction with the team and the ScrumMaster revealed some disturbing secrets. Here are the key ones:

  1. The team is self-organizing, and individuals can pick the story of their choice and deliver at their discretion!!
  2. The team has neither time pressure nor delivery timelines

I thought to myself that this is not a self-organizing team, but a directionless team.

As Esther Derby points out, there are several myths and misconceptions about self-organizing teams. I covered some of these myths during my talk at the Lean Agile Systems Thinking (LAST) conference in Melbourne, which is available on YouTube (toward the end, around the 1:03 mark).

I understand it is not easy to build a self-organizing team, but there are principles that help leaders build such agile teams.

One of the best analogies that I have heard so far about self-organizing teams is from Joseph Pelrine. As Joseph puts it, building self-organizing teams is like preparing soup. I thought it would be easier for readers to understand the self-organizing concept if I map the soup preparation steps to the steps of building a self-organizing team. Yes, soup preparation involves many more steps, but the key ones below should give readers clues for further analysis.

The table below illustrates the mapping:

[Table image in the original post: soup preparation steps mapped to the steps of building a self-organizing team]

To conclude,

  • A self-organizing team needs a leader and the right amount of pressure, along with the right set of constraints and goals, to succeed.
  • The true test of a self-organizing team is its ability to collaborate during war time, not during peace time.
  • There is a difference between a team organizing themselves and a self-organizing team.  Don’t ignore the “self” part.
Categories: Companies

How Can Enterprise Architects Drive Business Value the Agile Way?

J.D. Meier's Blog - Mon, 08/25/2014 - 17:48

An Enterprise Architect can have a tough job when it comes to driving value to the business.   With multiple stakeholders, multiple moving parts, and a rapid rate of change, delivering value is tough enough.   But what if you want to accelerate value and maximize business impact?

Enterprise Architects can borrow a few concepts from the Agile world to be much more effective in today’s world.

A Look Back at How Agile Helped Connect Development to Business Impact …

First, let’s take a brief look at traditional development and how it evolved.  Traditionally, IT departments focused on delivering value to the business by shipping big bang projects.   They would plan it, build it, test it, and then release it.   The measure of success was on time, on budget.   

Few projects ever shipped on time.  Few were ever on budget.  And very few ever met the requirements of the business.

Then along came Agile approaches and they changed the game.

One of the most important ideas was a shift away from thick requirements documentation to user stories.  Developers got customers telling stories about what they wanted the future solution to do.  For example, a user story for a sales representative might look like this:

“As a sales rep, I want to see my customer’s account information so that I can identify cross-sell and upsell opportunities.” 

The use of user stories accomplished several things.  First, user stories got the development teams talking to the business users.  Rather than throwing documents back and forth, people started having face-to-face communication to understand the user stories.  Second, user stories helped chunk bigger units of value down into smaller units of value.  Rather than a big bang project where all the value is promised at the end of some long development cycle, a development team could now ship the solution in increments, where each increment was a prioritized set of stories.  The user stories effectively created a shared language for value.

Third, it made it easier to test the delivery of value.  Now the user and the development team could test the solution against the user stories and acceptance criteria.  If the story met acceptance criteria, the user would acknowledge that the value was delivered.  In this way, the user stories created both a validation mechanism and a feedback loop for delivering and acknowledging value.

In the Agile world, bigger stories are called epics, and collections of stories are called themes.  Often a story starts off as an epic until it gets broken down into multiple stories.  What’s important here is that the collections of stories serve as a catalog of potential value.   Specifically, this catalog of stories reflects potential value with real stakeholders.  In this way, Agile helps drive customer focus and customer connection.  It’s really effective stakeholder management in action.

Agile approaches have been used in software projects large and small.  And they’ve forever changed how developers and project managers approach projects.

A Look at How Agile Can Help Enterprise Architecture Accelerate Business Value …

But how does this apply to Enterprise Architects?

As an Enterprise Architect, chances are you are responsible for achieving business outcomes.  You do this by driving business transformation.   The way you achieve business transformation is through driving capability change including business, people, and technical capabilities.

That’s a tall order.   And you need a way to chunk this up and make it meaningful to all the parties involved.

The Power of Scenarios as Units of Value for the Enterprise

This is where scenarios come into play.  Scenarios are a simple way to capture pains, needs and desired outcomes.   You can think of the desired outcome as the future capability vision.   It’s really a story that helps articulate the art of the possible.   More precisely, you can use scenarios to help build empathy with stakeholders for what value will look like, by painting a conceptual scene of the future.

An Enterprise scenario is simply a chunk of organizational change, typically about 3-5 business capabilities, 3-5 people capabilities, and 3-5 technical capabilities.

If that sounds like a lot of theory, let’s step into an example to show what it looks like in practice.

Let’s say you’re in a situation where you need to help a healthcare provider change their business.  

You can come up with a lot of scenarios, but it helps to start with the pains and needs of the business owner.  Otherwise, you might start going through a bunch of scenarios for the patients or for the doctors.  In this case, the business owner would be the Chief Medical Officer or the doctor of doctors.

Scenario: Tele-specialist for Healthcare

If we walk the pains, needs, and desired outcomes of the Chief Medical Officer, we might come up with a scenario that looks something like this, where the CURRENT STATE reflects the current pains and needs, and the FUTURE STATE reflects the desired outcome.

CURRENT STATE

Here is an example of the CURRENT STATE portion of the scenario:

The Chief Medical Officer of Contoso Provider is struggling with increased costs and declining revenues. Costs are rising due to Affordable Care Act regulatory compliance requirements and increasing malpractice insurance premiums. Revenue is declining due to decreasing medical insurance payments per claim.

FUTURE STATE

Here is an example of the FUTURE STATE portion of the scenario:

Doctors can consult with patients, peers, and specialists from anywhere. Contoso provider's doctors can see more patients, increase accuracy of first time diagnosis, and grow revenues.
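To make the shape of such a scenario concrete, here is one possible way to capture it as structured data. This is purely an illustrative sketch in Clojure/EDN; the map keys and the capability entries are assumptions of mine, not part of the original scenario:

(def tele-specialist-scenario
  {:name          "Tele-specialist for Healthcare"
   :current-state "Rising compliance costs, rising malpractice premiums, declining insurance payments per claim"
   :future-state  "Doctors consult with patients, peers, and specialists from anywhere"
   ;; typically about 3-5 entries per capability list; these entries are invented examples
   :business-capabilities  ["Remote consultations" "Specialist referrals" "Utilization analytics"]
   :people-capabilities    ["Tele-consultation skills" "New clinical workflows" "Change champions"]
   :technical-capabilities ["Video conferencing" "Shared patient records" "Secure mobile access"]})

Kept this small, the scenario works as the “flashcard” for value described below: one named chunk of organizational change with its pains, desired outcome, and building blocks.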



Storyboard for the Future Capability Vision

It helps to be able to picture what the Future Capability Vision might look like.   That’s where storyboarding can come in.  An Enterprise Architect can paint a simple scene of the future with a storyboard that shows the Future Capability Vision in action.  This practice lends itself to whiteboarding, and the beauty of a whiteboard is you can quickly elaborate where you need to, without getting mired in details.

[Storyboard image: the Future Capability Vision for the Tele-specialist scenario]

As you can see in this example storyboard of the Future Capability Vision, we listed out some business benefits, which we could then break down into relevant KPIs and value measures.  We’ve also outlined some building blocks required for this Future Capability Vision in the form of business capabilities and technical capabilities.

Now this simple approach accomplishes a lot.  It helps ensure that any technology solution actually connects back to business drivers and pains that a business decision maker actually cares about.  This gets their fingerprints on the solution concept.  And it creates a simple “flashcard” for value.  If we name the Enterprise scenario well, then we can use it as a handle to get back to the story of a better future that we created with the business.

The obvious thing this does, aside from connecting IT to the business, is it helps the business justify any investment in IT.

And all we did was walk through one Enterprise Scenario.  

But there is a lot more value to be found in the Enterprise.   We can literally explore and chunk up the value in the Enterprise if we take a step back and add another tool to our toolbelt:  the Scenario Chain.

Scenario Chain:  Chaining the Industry Scenarios to Enterprise Scenarios

The Scenario Chain is another powerful conceptual visualization tool.  It helps you quickly map out what’s happening in the marketplace in terms of industry drivers or industry scenarios.  You can then identify potential investment objectives.   These investment objectives lead to patterns of value or patterns of solutions in the Enterprise, which are effectively Enterprise scenarios.   From the Enterprise scenarios, you can then identify relevant usage scenarios.  The usage scenarios effectively represent new ways of working for the employees, or new interaction models with customers, which is effectively a change to your value stream.

[Diagram: Scenario Chain linking industry scenarios, investment objectives, Enterprise scenarios, and usage scenarios]

At a glance, the Scenario Chain gives you a bird’s-eye view of how you can respond to the changing marketplace and how you can transform your business.  And, by using Enterprise scenarios, you can chunk up the change into meaningful units of value that reflect pains, needs, and desired outcomes for the business.  And, because you have the fingerprints of stakeholders from both business and IT, you’ve effectively created a shared vision for the future that has business impact and a justification for investment, and that creates a pull-through mechanism for additional value by driving adoption of the usage scenarios.

Let’s elaborate on adoption and how scenarios can help accelerate business value.

Using Scenarios to Drive Adoption and Accelerate Business Value

Driving adoption is a key way to realize the business value.  If nobody adopts the solution, then that’s what Gartner would call “Value Leakage.”  Value Realization really comes down to governance, measurement, and adoption.

With scenarios at your fingertips, you have a powerful way to articulate value, justify business cases, drive business transformation, and accelerate business value.   The key lies in using the scenarios as a unit of value, and focusing on scenarios as a way to drive adoption and change.

Here are three ways you can use scenarios to drive adoption and accelerate business value:

1.  Accelerate Business Adoption

One of the ways to accelerate business value is to accelerate adoption.    You can use scenarios to help enumerate specific behavior changes that need to happen to drive the adoption.   You can establish metrics and measures around specific behavior changes.   In this way, you make adoption a lot more specific, concrete, intentional, and tangible.

This approach is about doing the right things, faster.

2.  Re-Sequence the Scenarios

Another way to accelerate business value is to re-sequence the scenarios.   If your big bang is way at the end (way, way at the end), no good.  Sprinkle some of your bangs up front.   In fact, a great way to design for change is to build rolling thunder.   Put some of the scenarios up front that will get people excited about the change and directly experiencing the benefits.  Make it real.

The approach is about putting first things first.

3.  Identify Higher Value Scenarios

The third way to accelerate business value is to identify higher-value scenarios.  One of the things that happens along the way is that you start to uncover potential scenarios that you may not have seen before, and these scenarios represent orders of magnitude more value.  This is the space of serendipity.  As you learn more about users and what they value, and stakeholders and what they value, you start to connect more dots between the scenarios you can deliver and the value that can be realized (and therefore accelerated).

This approach is about trading up for higher value and more impact.

As you can see, Enterprise Architects can drive business value and accelerate business value realization by using scenarios and storyboarding.   It’s a simple and agile approach for connecting business and IT, and for shaping a more Agile Enterprise.

I’ll share more on this topic in future posts.   Value Realization is an art and a science and I’d like to reduce the gap between the state of the art and the state of the practice.

You Might Also Like

3 Ways to Accelerate Business Value

6 Steps for Enterprise Architecture as Strategy

Cognizant on the Next Generation Enterprise

Simple Enterprise Strategy

The Mission of Enterprise Services

The New Competitive Landscape

What Am I Doing on the Enterprise Strategy Team?

Why Have a Strategy?

Categories: Blogs

Journée Agile, Liege, Belgium, September 11 2014

Scrum Expert - Mon, 08/25/2014 - 15:39
The Journée Agile is a one-day conference focused on agile software development approaches like Scrum that takes place in Belgium every year. All the presentations and workshops are in French. The keynote of the 2014 edition will be given by Jurgen Appelo. In the agenda you can find topics like “Des outils du monde de la psychologie pour les equipes Scrum et Agile”, “L’attitude de Testing Agile”, “Spécifications Agiles”, “Passer de Scrum à Scrumban, pour quoi faire?” or “Real options – Prises de décisions”. Web site: http://www.journeeagile.be/ Location for the 2014 conference: HEC-ULg, ...
Categories: Communities

Capacity Planning and the Project Portfolio

Johanna Rothman - Mon, 08/25/2014 - 15:17

I was problem-solving with a potential client the other day. They want to manage their project portfolio. They use Jira, so they think they can see everything everyone is doing. (I’m a little skeptical, but, okay.) They want to know how much the teams can do, so they can do capacity planning based on what the teams can do. (Red flag #1)

The worst part? They don’t have feature teams. They have component teams: front end, middleware, back end. You might, too. (Red flag #2)

Problem #1: They have a very large program, not a series of unrelated projects. They also have projects.

Problem #2: They want to use capacity planning, instead of flowing work through teams.

They are setting themselves up to optimize at the lowest level, instead of optimizing at the highest level of the organization.

If you read Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, you understand this problem. A program is a strategic collection of projects where the business value of all of the projects together is greater than that of any one project by itself. Each project has value. Yes. But all together, as a program, they have much more value. You have to consider the program as a whole.

Don’t Predict the Project Portfolio Based on Capacity

If you are considering doing capacity planning on what the teams can do based on their estimation or previous capacity, don’t do it.

First, you can’t possibly know based on previous data. Why? Because the teams are interconnected in interesting ways.

When you have component teams, not feature teams, their interdependencies are significant and unpredictable. Your ability to predict the future based on past velocity? Zero. Nada. Zilch.

This is legacy thinking from waterfall. Well, you can try to do it this way. But you will be wrong in many dimensions:

  • You will make mistakes because of prediction based on estimation. Estimates are guesses. When you have teams using relative estimation, you have problems.
  • Your estimates will be off because of the silent interdependencies that arise from component teams. No one can predict these if you have large stories, even if you do awesome program management. The larger the stories, the more your estimates are off. The longer the planning horizon, the more your estimates are off.
  • You will miss all the great ideas for your project portfolio that arise from innovation that you can’t predict in advance. As the teams complete features, and as the product owners realize what the teams do, the teams and the product owners will have innovative ideas. You, the management team, want to be able to capitalize on this feedback.

It’s not that estimates are bad. It’s that estimates are off. The more teams you have, the less your estimates are normalized between teams. Your t-shirt sizes are not my Fibonacci numbers, are not that team’s swarming or mobbing. (It doesn’t matter if you have component teams or feature teams for this to be true.)

When you have component teams, you have the additional problem of not knowing how the interdependencies affect your estimates. Your estimates will be off, because no one’s estimates take the interdependencies into account.

You don’t want to normalize estimates among teams. You want to normalize story size. Once you make story size really small, it doesn’t matter what the estimates are.

When you  make the story size really small, the product owners are in charge of the team’s capacity and release dates. Why? Because they are in charge of the backlogs and the roadmaps.

The more a program stops trying to estimate at the low level and uses small stories and manages interdependencies at the team level, the more the program has momentum.

The part where you gather all the projects? Do that part. You need to see all the work. Yes, that part works and helps the program see where they are going.

Use Value for the Project Portfolio

Okay, so you try to estimate the value of the features, epics, or themes in the roadmap of the project portfolio. Maybe you even use the cost of delay as Jutta and I suggest in Diving for Hidden Treasures: Finding the Real Value in Your Project Portfolio (yes, this book is still in progress). How will you know if you are correct?

You don’t. You see the demos the teams provide, and you reassess on a reasonable time basis. What’s reasonable? Not every week or two. Give the teams a chance to make progress. If people are multitasking, not more often than once every two months, or every quarter. They have to get to each project. Hint: stop the multitasking and you get tons more throughput.

Categories: Blogs

Vert.x with core.async. Handling asynchronous workflows

Xebia Blog - Mon, 08/25/2014 - 13:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and Javascript promises.

Clojure's answer comes in the form of core.async: An implementation of CSP for both Clojure and Clojurescript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event-handler.
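For readers who have not used core.async before, here is a minimal sketch of the primitives the rest of this post relies on. It is only a sketch and assumes nothing beyond org.clojure/core.async being on the classpath:

(require '[clojure.core.async :refer [chan go put! <! <!!]])

(def ch (chan 1))         ;; a channel with a buffer of one value

(put! ch :hello)          ;; asynchronously put a value onto the channel

(go (println (<! ch)))    ;; inside a go block, <! parks (without blocking a thread) until a value arrives

(<!! (go 42))             ;; a go block itself returns a channel; <!! blocks a plain thread and yields 42

Channels decouple producers from consumers, and the go macro rewrites seemingly blocking code into a state machine that parks instead of tying up a thread. That is what the examples below exploit.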

Vert.x is a young, light-weight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow into a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan hardness of the rock
  3. Use the hardness to calibrate and fire the laser. Report back status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            [clojure.core.async :refer [go chan put! <!]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function, which returns a channel on which the result of eb/send will be put. Apart from this helper, the go block reads much like a plain, sequential version of the same steps.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we can use composition, this is not very performant, as we do not need to wait for a reply from service-A in order to call service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data will include the temperature, humidity and wind speed, which are requested from three different independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback is fired? We need to keep some memory (mutable state) which is updated when each of the callbacks fires, and process the data when the last one returns.

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))

3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result on the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request finished, there are no more references to it and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels cannot be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.
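As a small illustrative sketch of that behaviour (assuming a REPL with core.async loaded; the 100 ms duration is arbitrary):

(require '[clojure.core.async :refer [timeout >!! <!!]])

(def t (timeout 100))   ;; a channel that closes itself after roughly 100 ms

(<!! t)                 ;; blocks for ~100 ms, then returns nil because the channel has closed
(>!! t :too-late)       ;; returns false: a closed channel refuses new writes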

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives us the ability to wait on a channel up to a maximum amount of time before giving up. By adjusting example 2 a bit:

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go-loop [n 3 data {}]
        (if (pos? n)
          (if-some [result (first (alts! [ch t-ch]))] ;; alts! returns [value channel]; a nil value means the timeout channel closed
            (recur (dec n) (merge data result))
            (eb/fail 408 "Request timed out"))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish; otherwise we reply with a timeout failure. Notice that we did not put the timeout parameter in the vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts much more easily than adding timeout parameters and failure-callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me, and the original motivation for this blog post, is whether core.async can play nicely with Vert.x. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go blocks or state machine callbacks. Since the dispatched go blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.

Categories: Companies

Docker on a raspberry pi

Xebia Blog - Mon, 08/25/2014 - 08:11

This blog describes how easy it is to use docker in combination with a Raspberry Pi. Because of docker, deploying software to the Raspberry Pi is a piece of cake.

What is a raspberry pi?
The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It is a capable little computer which can be used in electronics projects and for many things that your desktop PC does, like spreadsheets, word-processing and games. It also plays high-definition video. A Raspberry Pi runs Linux, has a 700 MHz ARM processor and 512 MB of internal memory. Last but not least, it only costs around 35 euros.

[Image: a Raspberry Pi, model B]

Because of its price, size and performance, the Raspberry Pi is a step toward the 'Internet of Things'. With a Raspberry Pi it is possible to control and connect everything to everything. For instance, my home project is a Raspberry Pi controlling a robot.

 

[Image: Raspberry Pi in action]

What is docker?
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere. A dockerized app contains the application, its environment, dependencies and even the OS.

Why combine docker and raspberry pi?
It is nice to work with a Raspberry Pi because it is a great platform to connect devices. Deploying anything, however, is kind of a pain. With dockerized apps we can develop and test our application on our own home machine; when it works, we can deploy it to the Raspberry Pi. We can do this without any pain or worries about corruption of the underlying operating system and tools. And last but not least, you can easily undo your tryouts.

What is better than I expected
First of all, it was relatively easy to install Docker on the Raspberry Pi. When you use the Arch Linux operating system, Docker is already part of the package manager! I expected to do a lot of cross-compiling of the Docker application, because the Raspberry Pi uses an ARM architecture (instead of the default x86 architecture), but someone had already done this for me!

Second of all, there are a bunch of ready-to-use Docker images especially for the Raspberry Pi. To run dockerized applications on the Raspberry Pi you depend on base images, and these base images must also support the ARM architecture. For each situation there is an image, whether you want to run Node.js, Python, Ruby or just Java.

The thing that worried me most was the performance of running virtualized software on a Raspberry Pi. But it all went well and I did not notice any performance reduction. Docker requires far fewer resources than running virtual machines. A Docker process runs straight on the host, giving native CPU performance. Using Docker only adds a small overhead for memory and network.

What I don't like about docker on a raspberry pi
The slogan of Docker, 'build, ship and run any app anywhere', is not entirely valid. You cannot develop your Dockerfile on your local machine and deploy the same application directly to your Raspberry Pi. This is because each Dockerfile starts from a base image. For running your application on your local machine, you need an x86-based Docker image. For your Raspberry Pi you need an ARM-based image. That is a pity, because this means you can only build your Docker image for the Raspberry Pi on the Raspberry Pi itself, which is slow.

I tried several things.

  1. I used the emulator QEMU to emulate the Raspberry Pi on a fast MacBook. But, because of the inefficiency of the emulation, it is just as slow as building your Dockerfile on a Raspberry Pi.
  2. I tried cross-compiling. This wasn't possible, because the commands in your Dockerfile are replayed on a running image, and the running Raspberry Pi image can only be run on ... a Raspberry Pi.

How to run a simple node.js application with docker on a raspberry pi  

Step 1: Installing Arch Linux
The first step is to install Arch Linux on an SD card for the Raspberry Pi. The preferred OS for the Raspberry Pi is a Debian-based OS, Raspbian, which is nicely configured to work with a Raspberry Pi. But in this case, Arch Linux is better because we use the OS only to run Docker on it. Arch Linux is a much smaller and more barebones OS. The best way is by following the steps at http://archlinuxarm.org/platforms/armv6/raspberry-pi. In my case, I use version 3.12.20-4-ARCH. In addition to the tutorial:

  1. After downloading the image, install it on a sd-card by running the command:
    sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
  2. When there is no HDMI output at boot, remove the config.txt on the SD-card. It will magically work!
  3. Login using root / root.
  4. Arch Linux will use 2 GB by default. If you have a SD-card with a higher capacity you can resize it using the following steps http://gleenders.blogspot.nl/2014/03/raspberry-pi-resizing-sd-card-root.html

Step 2: Installing a wifi dongle
In my case I wanted to connect a wireless dongle to the Raspberry Pi, by following these simple steps:

  1. Install the wireless tools:
        pacman -Syu
        pacman -S wireless_tools
        
  2. Set up the configuration by running:
    wifi-menu
  3. Autostart the wifi with:
        netctl list
        netctl enable wlan0-[name]
    

Because the raspberry pi is now connected to the network you are able to SSH to it.

Step 3: Installing docker
The actual install of Docker is relatively easy. There is a Docker version compatible with the ARM processor (that is used within the Raspberry Pi). This Docker package is part of the package manager of Arch Linux, and the version it provides is 1.0.0. At the time of writing this blog, the latest Docker release is version 1.1.2. The features missing from 1.0.0 are:

  1. Enhanced security for the LXC driver.
  2. .dockerignore support.
  3. Pause containers during docker commit.
  4. Add --tail to docker logs.

You install Docker and start it as a service on system boot with the commands:

pacman -S docker
systemctl enable docker
[Screenshot: installing Docker with pacman]

Step 4: Run a single nodejs application
After we've installed Docker on the Raspberry Pi, we want to run a simple Node.js application. The application we will deploy is inspired by the Node.js web app in the tutorial on the Docker website: https://github.com/enokd/docker-node-hello/. This Node.js application prints a "hello world" in the web browser. We have to change the Dockerfile to:

# DOCKER-VERSION 1.0.0
FROM resin/rpi-raspbian

# install required packages
RUN apt-get update
RUN apt-get install -y wget dialog

# install nodejs
RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb
RUN dpkg -i node_latest_armhf.deb

COPY . /src
RUN cd /src; npm install

# run application
EXPOSE 8080
CMD ["node", "/src/index.js"]

And it works!

[Screenshot: the web page served by Node.js from a Docker container on a Raspberry Pi]

 

Just by following four little steps, you are able to use Docker on your Raspberry Pi. Good luck!

 

Categories: Companies

Scrum in Medical Technology: How Do I Put Together a Successful Team?

Scrum 4 You - Mon, 08/25/2014 - 07:30

Even in heavily regulated industries such as medical technology it is possible to develop products with Scrum. By now, word of this has spread from the IT departments of German service companies through the innovative Mittelstand all the way to the large corporations*. But where do you start once the decision for Scrum has been made? The success of your project stands and falls with the team.

Scrum, practiced properly, gives you the opportunity to bundle all the know-how you need for the development of your product in a single team. In this way, additional effort and redundancy at hand-over points can be minimized, while urgently needed knowledge is spread across several heads almost automatically.


Challenges for medical technology

For manufacturers of medical devices, however, this means more than just seating application developers and testers together. On top of the already complex task of combining hardware and software, designed parts have to be ordered, risk management checklists have to be worked through and, above all, regulatory requirements have to be met. Besides hardware and software developers and design engineers, your team therefore also needs a buyer, someone from product documentation, and a person who knows the regulatory requirements.

Don't misuse pilot teams as weather forecasters!

Many companies decide to try Scrum in pilot projects first, so as not to overwhelm the organization and to get a feeling for whether it can work or not. The idea of not taking the organization by surprise is understandable, and setting up a pilot group is also advisable.

But: every pilot team will sooner or later hit structural limits, especially if the departments that feed work to the teams are not trained accordingly. Particularly when it comes to requirements "from the field" or from the regulatory authorities, integrating the relevant know-how carriers into a Scrum team can save a great deal of time. Does that mean my QM employee is now in the Scrum team 100% of the time? Not necessarily.

Creating a sense of responsibility

The mere commitment of colleagues further removed from development to show up regularly at meetings such as the Sprint Planning, Daily or Review will already make a big contribution to the efficiency of your Scrum teams.

At one of our customers in laboratory automation, for example, there is a project buyer who regularly attends the Dailies of several Scrum teams and is therefore available to the team as a contact at reliable times, should there be questions about delivery times, for instance. The immediacy also brings many advantages for the project buyer's own work: he quickly gets a feeling for the urgency of individual orders and for how they may be connected.

When writing user or service manuals, too, a large number of delays and duplicated work steps can be avoided by involving the relevant colleagues early. Create an awareness that your project can only succeed through the cooperation of all parties, and that Scrum provides the necessary framework for this. Once you have staked out and communicated the framework conditions, your teams will set themselves up the way they need to for user-friendly, standards-compliant product development.

*Both the Technical Information Report TIR 45:2012 of the AAMI (Association for the Advancement of Medical Instrumentation) and the process standard IEC 62304 explicitly give manufacturers the freedom to develop their products as they see fit, as long as product safety and quality remain assured.

Related posts:

  1. Wann ist ein ScrumMaster erfolgreich?
  2. Portfolio
  3. Auch wenn’s mal wieder länger dauert: Pull die wichtigsten Themen zuerst

Categories: Blogs

"How Thin is Thin?" An Example of Effective Story Slicing

Practical Agility - Dave Rooney - Sun, 08/24/2014 - 19:00
Graphene is pure carbon in the form of a very thin, nearly transparent sheet, one atom thick. It is remarkably strong for its very low weight and it conducts heat and electricity with great efficiency. (Wikipedia) If you have spent any time at all working in an Agile software development environment, you've heard the mantra to split your Stories as thin as you possibly can while still
Categories: Blogs

Tips on Writing: A Sunday Experience

Scrum 4 You - Sun, 08/24/2014 - 16:30

I am asked again and again (mostly by my colleagues) how I manage to write books and blog posts on top of my trainings and consulting engagements. It is quite simple: I write. Let me tell you what such a day can look like.

Today – Sunday – I accompanied my wife and a friend of hers to a riding tournament. We got up at 06:00, at 07:00 I groomed our horse Rübe, then we drove for an hour, and at some point I got a 20-minute break. I bought myself a coffee, sat down, started my current favourite writing program, Writer, a Google Chrome plugin, and wrote.

Twenty minutes later my wife came by; her friend needed help with her horse. So I closed the laptop, packed it away and spent the next two hours watching the two of them being very successful. Then there were another 10 minutes: I got the laptop out, sat down under a tree and carried on where I had left off. Of course I always have to rewrite the previous paragraph to get back into it, but I was able to write another few hundred words. My wife came by and asked me to hold the horse. I closed the laptop again. Then we were done, brought the horses back to the stable, looked after the two other horses and drove home. We showered, ate something, my wife went back to the horses once more (that is an exception today), and I have now been sitting here at the kitchen table writing for 75 minutes.

Admittedly, a day like this is an exception for me too. I am simply caught up in writing again at the moment; otherwise I would not write during the "time-outs". In the last 8 weeks, after I handed the manuscript of "Selbstorganisation braucht Führung" over to Dolores, my editor, I was done with writing for a while. Written out. But the latest blog posts, some of which you have been able to read over the past few days, show that there is so much worth noting that it simply has to be put on paper, or rather into a file. Normally I write in the morning, shortly after getting up, or in the evening in the hotel, at the airport while waiting for my flight, or on the train to somewhere.

"But how does he do it?", I hear people ask. The same thing happens to a good friend of mine with photography. He takes photos. Constantly. Another friend paints. I write; I do not ponder it, I just write. Sometimes it turns out good, sometimes very good. By now it is always usable, but that is practice. Is it fun? Endlessly.

Try it yourselves; I can only recommend it. Simply stop thinking about what you want to write, and write.

Related posts:

  1. Führung ist?
  2. Über das Schreiben: Leidenschaft | Passion | Freewriting
  3. Bin ich am Arbeitsplatz zufrieden?

Categories: Blogs

Listen, Test, Code, Design OVER Sprints, Scrums, and ScrumMasters

"Back to basics" is Scrum?I've been noticing people talk about getting "back to the basics" and then proceed to talk about Scrum roles and rituals.

This annoys me for 2 main reasons:
  1.  Scrum was never "basics" for me and I've typically been doing this longer than the person who suggests this
  2. The more important reason is that if we think about this carefully, Scrum cannot be the "basics"
"Back to basics" should be about the essence of what we are doing"Back to basics", "focusing on the fundamentals", etc. is about getting back to the essence of an activity.  I touched upon this when I was exploring the concept of doctrine but let's think about this using the frame of "basics" or "fundamentals".

If we look at the context of developing software for a purpose, as opposed to as a hobby, what is the essence of what needs to happen?
  1. You need a shared understanding of what problem the software is intended to solve.  We have learned that the best way to do this is to engage directly with the relevant situation and people.
  2. You need a shared understanding of what the solution needs to do to solve the problem.  We have learned that the best way to do this is through conversations leading to agreed examples and then iterating.
  3. You need to build the solution.  We have learned that the best way to do this is in a thoughtful, collaborative, disciplined way.
  4. You need to manage the growing complexity of the system to ensure that it continues to be easy to change.  We have learned that the best way to do this is as an ongoing exercise reflecting the best knowledge we have at each point.
A more compact version of this might be: Listen, Test, Code, Design.
If you don't get good at these basics, all your Sprints, Scrums, and ScrumMasters won't matter much.
Categories: Blogs

Measuring Business value in Agile projects

Agile World - Venkatesh Krishnamurthy - Sun, 08/24/2014 - 01:44

Because the first principle of the Agile Manifesto talks about delivering valuable software to the customer, many agile practitioners are constantly thoughtful of the value in each step of the software-development lifecycle.

At the thirty-thousand-foot level, value creation starts with gathering requirements and continues with backlog creation, backlog grooming, writing user stories, and development, finally ending with integration, deployment, and support. Even with knowledge of all these moving parts, it is common to see organizations only measuring value during development and ignoring the rest of the steps.

What’s the fix? During backlog creation, user stories need to be compared and contrasted in order to promote maximum value delivery. The product owner might need to use different techniques, such as T-shirt sizing, in order to better prioritize the project’s stories.

An alternate approach to measuring the business value of user stories is to use a three-dimensional metric that incorporates complexity, business value, and ROI. Creating value can often require a change in perspective from the normal project tasks and functions. Thinking outside the box and identifying business value before writing the user stories is much better than writing them first and then trying to evaluate.

Read the complete article about measuring business value on TechWell

Picture courtesy https://flic.kr/p/8E7Dr5

Categories: Blogs

Xebia IT Architects Innovation Day

Xebia Blog - Sat, 08/23/2014 - 18:51

Friday, August 22nd was Xebia’s first Innovation Day. We spent a full day experimenting with technology. I helped organize the day for XITA, Xebia’s IT Architects department (hmm, “department” doesn’t feel quite right to describe what we are, but anyway). Innovation days are intended to inspire as well as educate. We split up into small teams and each focused on a particular technology. Below is a list of project teams:

• Docker-izing enterprise software
• Run a highly available web application across multiple CoreOS nodes using Kubernetes
• Application architecture (team 1)
• Application architecture (team 2)
• Replace Puppet with Salt
• Scale "infinitely" with Apache Mesos

In the coming weeks we will publish what we learned in separate blogs.

First Xebia Innovation Day

Categories: Companies

New Foundations 3.0 Webinar

Agile Product Owner - Sat, 08/23/2014 - 16:26

Hi,

We’ve just posted an updated introductory webinar: SAFe Foundations: Be Agile. Scale Up. Stay Lean. at ScaledAgileFramework/foundations. “Foundations” is the free PowerPoint briefing (download from the same page) that you can use in almost any context to describe SAFe to your intended audience.

In this 45-minute webinar, I walk through the Foundations PPT and describe:

  • The rationale for Agile and SAFe
  • A bit of SAFe history
  • SAFe core values
  • Business benefits enterprises have achieved with SAFe
  • Lean Thinking overview
  • A brief overview of SAFe Team, Program, and Portfolio levels
  • Introduction to the Principles of Lean Leadership
  • Next Steps and Implementation 1-2-3 Guidance

Thanks to Jennifer Fawcett for hosting the event.

Categories: Blogs

Why Iterative Planning?

Leading Agile - Mike Cottmeyer - Fri, 08/22/2014 - 17:40

First, I would like to credit Eric Ries in his 2010 Web 2.0 speech for giving me the idea for these awesome graphics. If you have never seen the speech then I highly recommend the version found on YouTube. I have always admired people with creative slides who can capture ideas with elegant simplicity. Since my artistic ability peaked in the first grade, the images in this post demonstrate my foray into abstract expressionism and hopefully convey the point of why we in software need iterative planning.

Unknown Problem | Unknown Solution

Most software changes start life in the state of an unknown problem with an unknown solution. Now the product managers reading this may beg to differ, but most of the time a vague idea about having the software do something is not a known problem space. Say for instance I want to allow uninsured people to buy insurance at a government-subsidized rate. Most of us can imagine that this is a huge problem space and truly we would have no idea how to make this happen. In this case both the problem space and the solution space are unknown. In order to plan a software delivery that will solve the want above, I need to clearly understand the problem that needs to be solved. To do this in agile software delivery we create something called a roadmap. The roadmap is a way of breaking this big unknown problem into smaller chunks that we can estimate (“guess wrong”) as to how long it will take to implement them. It is usually at this stage that these chunks of work are funded.

Known Problem | Unknown Solution

Now a software release is ready to be planned with some chunk of the roadmap. In order to do that, the problem should be fairly well known and able to be broken into pieces. These pieces can be estimated (“guessed wrong”) and slotted into delivery iterations. Let’s say we want to allow people to log into a website and sign up for insurance. This is a relatively well-known problem space: there are security concerns, 3rd-party integrations, databases, platforms and deployments. Maybe this will not all fit in one release, but with more elaboration and planning a reasonable release plan with a list of risks will emerge. It is usually at this stage that the guess of the size of the thing in the roadmap is known to be wrong and changes must be made to the roadmap.

Known Problem | Known Solution

Finally, we are ready to plan an iteration. Take a chunk of the release plan and break it into pieces; as a team, there needs to be some certainty that these pieces of work can be completed in the sprint. If there are still things that don’t have a clear solution, then don’t take those into the sprint; take a spike or research item instead. It is now that the wrongness of the guess made during release planning is known and adjustments can be made both to the release plan and the roadmap.

Planning and elaboration go hand in hand as items move from unknown problem / unknown solution to known problem / unknown solution to known problem / known solution.

The post Why Iterative Planning? appeared first on LeadingAgile.

Categories: Blogs

GOAT14 – Call for Speakers

Notes from a Tool User - Mark Levison - Fri, 08/22/2014 - 15:41

This year’s Gatineau Ottawa Agile Tour (#GOAT14) will take place on Monday, November 24th 2014, and Agile Pain Relief Consulting is once again a proud sponsor. Organizers are looking for engaging and inspirational speakers for this year’s conference. If you are interested in participating, please submit a proposal by completing the online form at http://confengine.com/gatineau-ottawa-agile-tour-2014. The organizing committee will select speakers based on the following criteria:

  • Learning potential for and appeal to participants
  • Practicality and usefulness/applicability of content to the workplace
  • Overall program balance
  • Speaker’s experience and reputation
  • Interactive elements (i.e. exercises, simulations, questions…)

Deadline for proposals: Sunday, September 15th at 23:59

About the Gatineau – Ottawa Agile Tour
The Gatineau – Ottawa Agile Tour (#GOAT14) is a full day of conferences around the theme of Agility applied to software development, but also to management, marketing, product management and other areas of today’s businesses.

Categories: Blogs

Neo4j: LOAD CSV – Handling empty columns

Mark Needham - Fri, 08/22/2014 - 14:51

A common problem that people encounter when trying to import CSV files into Neo4j using Cypher’s LOAD CSV command is how to handle empty or ‘null’ entries in said files.

For example let’s try and import the following file which has 3 columns, 1 populated, 2 empty:

$ cat /tmp/foo.csv
a,b,c
mark,,

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
SET p.b = row.b, p.c = row.c
RETURN p

When we execute that query we’ll see that our Person node has properties ‘b’ and ‘c’ with no value:

==> +-----------------------------+
==> | p                           |
==> +-----------------------------+
==> | Node[5]{a:"mark",b:"",c:""} |
==> +-----------------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 3
==> Labels added: 1
==> 26 ms

That isn’t what we want – we don’t want those properties to be set unless they have a value.

To achieve this we need to introduce a conditional when setting the ‘b’ and ‘c’ properties. We’ll assume that ‘a’ is always present as that’s the key for our Person nodes.

The following query will do what we want:

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
FOREACH(ignoreMe IN CASE WHEN trim(row.b) <> "" THEN [1] ELSE [] END | SET p.b = row.b)
FOREACH(ignoreMe IN CASE WHEN trim(row.c) <> "" THEN [1] ELSE [] END | SET p.c = row.c)
RETURN p

Since there are no if or else statements in Cypher, we create our own conditional statement by using FOREACH. If there’s a value in the CSV column then we’ll loop once and set the property, and if not we won’t loop at all and therefore no property will be set.

==> +-------------------+
==> | p                 |
==> +-------------------+
==> | Node[4]{a:"mark"} |
==> +-------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 1
==> Labels added: 1
Categories: Blogs

You shall not pass – Control your code quality gates with a wizard – Part III

Danube - Fri, 08/22/2014 - 13:25
You shall not pass – Control your code quality gates with a wizard – Part III

If you read the previous blog post in this series, you should already have a pretty good understanding of how to design your own quality gates with our wizard. When you finish reading this one, you can call yourself a wizard too. We will design a very powerful policy consisting of quite complex quality gates. All steps are first performed within the graphical quality gate wizard. For those of you who are interested in what is going on under the hood, we will also show the corresponding snippets of the XML document which is generated by the wizard. You can safely ignore those details if you do not intend to develop your own tooling around our quality gate enforcing backend. If you do play with that thought, though, we will also show you how to deploy quality gates specified in our declarative language without using our graphical wizard.

Your reward – The Power Example

Power example with six quality gates

Before we reveal the last secrets of our wizard and the submit rule evaluation algorithm, you probably want to know the reward for joining us. The policy we are going to design consists of the following steps:

1. At least one user has to give Code-Review +2; authors cannot approve their own commits (their votes will be ignored)

2. Code-Review -2 blocks submit

3. Verified -1 blocks submit

4. At least two CI users (belonging to Gerrit group CI-Role) have to give Verified +1 before a change can be submitted

5. Only team leads (a list of Gerrit users) can submit

6. If a file called COPYRIGHT is changed within a commit, a Gerrit group called Legal has to approve (Code-Review +2) the Gerrit change

The final policy can be downloaded from here. Please note that it will not work out of the box for you, as your technical group ids for the Legal and CI groups as well as the concrete user names for team leads will differ. We will guide you step by step so you can come up with a result that fits your specific situation.

Starting with something known – Gerrit’s Default Submit Policy

Looking at steps 1, 2 and 3, you probably realized that they are quite similar to Gerrit’s Default Submit policy. Because of that, let’s start by loading the template Default Gerrit Submit Policy. Once you see the first tab of the editor that opens, adjust name and description as shown in the screenshot below.

If you now switch to the Source tab (the third one), you can see the XML the wizard generated for the default policy:

The XML based language you can see here is enforced by our Gerrit Quality Gate backend. We believe that this language is way easier to learn than writing custom Prolog snippets (the default way of customizing Gerrit’s submit behavior). Furthermore, it exposes some features of Gerrit (like user group info) which are not exposed as Prolog facts. Our Quality Gate backend is implemented as a Gerrit plugin that contributes a custom Prolog predicate which in turn parses the XML based language and instructs Gerrit’s Prolog engine accordingly. This amount of detail is probably only relevant to you if you intend to mix your own Prolog snippets with policies generated by our wizard.

The schema describing our language can be found here. Looking at the screenshot above, you can clearly see that the XML top element GerritWorkflow contains all settings of the first tab of our wizard. You have probably spotted the attributes for name, description, enableCodeReview and enableVerification. The latter two store the info whether to present users with the ability to vote on the Code-Review/Verified categories (given appropriate permissions).

The only child elements accepted by the GerritWorkflow element are SubmitRules. You can clearly see the three submit rules of the default policy, which we covered in detail in our second blog post. Let’s examine the first submit rule, named Code-Review+2-And-Verified-To-Submit. If all its voting conditions are satisfied, it will be evaluated to allow, making submit possible if no other rule gets evaluated to block. As this rule has not specified any value for its actionIfNotSatisfied attribute, it will evaluate to ignore if not all of its voting conditions are satisfied. Talking about voting conditions, you can see two VotingCondition child elements. The first one is satisfied if somebody gave Code-Review +2, the second one if somebody gave Verified +1. The second SubmitRule element maps directly to step 2 of our power example (Code-Review -2 blocks submit), the third one directly to step 3 (Verified -1 blocks submit).
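The original post shows the generated XML only as a screenshot. As a rough, hedged reconstruction based purely on the element and attribute names described above (the namespace declaration is omitted, and the name, description and verdict values are illustrative assumptions rather than the literal wizard output), the default policy might look roughly like this:

<cn:GerritWorkflow name="Power-Example-Policy" description="Power example policy for this blog post"
                   enableCodeReview="true" enableVerification="true">
  <!-- Rule 1: evaluates to allow once somebody gave Code-Review +2 and somebody gave Verified +1 -->
  <cn:SubmitRule displayName="Code-Review+2-And-Verified-To-Submit" actionIfSatisfied="allow">
    <cn:VotingCondition votingCategory="Code-Review" value="+2"/>
    <cn:VotingCondition votingCategory="Verified" value="+1"/>
  </cn:SubmitRule>
  <!-- Rule 2: a Code-Review -2 veto blocks submit (step 2 of the power example) -->
  <cn:SubmitRule displayName="Code-Review-Veto-Blocks-Submit" actionIfSatisfied="block">
    <cn:VotingCondition votingCategory="Code-Review" value="-2"/>
  </cn:SubmitRule>
  <!-- Rule 3: a Verified -1 veto blocks submit (step 3 of the power example) -->
  <cn:SubmitRule displayName="Verified-Veto-Blocks-Submit" actionIfSatisfied="block">
    <cn:VotingCondition votingCategory="Verified" value="-1"/>
  </cn:SubmitRule>
</cn:GerritWorkflow>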

Ignore author votes by introducing a voting filter

Let’s modify the first submit rule so that it matches the first step of our power example policy:

“At least one user has to give Code-Review +2; authors cannot approve their own commits (their votes will be ignored)”

For this, we first switch to the second tab of our wizard (Submit Rules) and double click on the first submit rule. Right after, we double click on the first voting condition (Code-Review) and check the Ignore author votes checkbox in the dialog that opens, see screenshot below.

Once we save this change (press Finish in the two dialogs) and switch back to the Source tab, we can see that the XML of the first submit rule has changed:

The first VotingCondition element now has a VoteAuthorFilter child element. This one has its ignoreAuthorVotes attribute set to true, which in turn will make sure that only votes of non-authors will be taken into consideration when this voting condition gets evaluated. You also notice the ignoreNonAuthorVotes attribute. With that one, it would be possible to turn the condition around (if set to true) and ignore all but the author’s votes. If both attributes are set to true, all votes will be ignored. Voting conditions always apply to the latest change set of the Gerrit change in question.
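As a sketch of what the changed condition might look like in the XML (same caveats as above; the attribute values are assumptions for illustration):

<cn:VotingCondition votingCategory="Code-Review" value="+2">
  <!-- ignore votes cast by the author of the change; non-author votes still count -->
  <cn:VoteAuthorFilter ignoreAuthorVotes="true" ignoreNonAuthorVotes="false"/>
</cn:VotingCondition>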

Adding a group filter to the verified voting condition

Now that we have realized step 1 of our power example, and steps 2 and 3 can simply be left unmodified from the default policy, let’s focus on step 4:

“At least two CI users (belonging to Gerrit group CI-Role) have to give Verified +1 before a change can be submitted”.

This can be achieved by modifying the second voting condition (Verified) of the first submit rule. This time we do not ignore Verified votes from authors (we could, by just checking the same box again); instead, we add a group filter and a count filter.

As shown in the screenshot above, enter 2 into the Vote Count Min field and add the Gerrit group of your choice that represents your CI users. The wizard allows you to select TeamForge groups, TeamForge project roles and internal Gerrit groups.

If we finish the dialogs and switch back to the Source tab, we can see that the second voting condition of our first submit rule has changed:

Two filters appeared, one VoteVoterFilter and one VoteCountingFilter. The first one makes sure that only votes cast by the CI_ROLE (we chose TeamForge project role role1086 here) will be recognized when evaluating the surrounding VotingCondition.

The second filter is a counting filter. Counting and summing filters are applied after all other filters within the same VotingCondition have already been applied. In our case, it will be applied after all votes which

a) do not fit into voting category Verified (votingCategory attribute of parent element)

b) do not have verdict +1 (value attribute of parent element)

c) have not been cast by a user who is part of the CI_ROLE (see paragraph above)

have been filtered out.

After that, our VoteCounting filter will only match if at least two (minCount attribute) votes are left. If this is not the case, the surrounding VotingCondition will not be satisfied and as a consequence, its surrounding SubmitRule will not be satisfied either.
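Putting the two filters together, the Verified condition might now look roughly like the following sketch; the attribute name used on VoteVoterFilter is a guess for illustration, and role1086 is just the example id mentioned above:

<cn:VotingCondition votingCategory="Verified" value="+1">
  <!-- only consider votes cast by members of the CI role (attribute name assumed) -->
  <cn:VoteVoterFilter group="role1086"/>
  <!-- applied after the other filters: at least two matching votes must remain -->
  <cn:VoteCountingFilter minCount="2"/>
</cn:VotingCondition>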

Introducing SubmitRule filters

So far, we have only talked about voting conditions and their child filter elements. Sometimes you do not want an entire submit rule to be evaluated if a certain condition is not fulfilled. Our second blog post already used a submit rule filter for a rule that should only be evaluated if a commit was targeted for the experimental branch.

Step 5 of our power policy is another example: “Only team leads (a list of Gerrit users) can submit”

We will add a filter to our first submit rule that will make sure that it only gets evaluated if a team lead looks at the Gerrit change. As we only have three submit rules so far and the first one is the only one which can potentially be evaluated to allow, it is sufficient to add this filter only to the first one. To do that, we switch back to the Submit Rules tab, double click on the first submit rule and click on the Next button in the dialog that opens. After that, you can see four tabs, grouping all available submit rule filters. You probably remember those tabs from the second blog post where the values for those filters have been automatically populated based on the characteristics of an existing Gerrit change (more precisely, its latest change set).

This time, we will manually enter the filter values we need. Let’s switch to the User tab and select the accounts of your team leads. In the screenshot below you can see that we chose the accounts of eszymanski and dsheta as team leads.

Once you select your team leads instead (our wizard makes it possible to interactively select any TeamForge user or internal Gerrit account), let’s click on Back and finally adjust the display name of our submit rule to its new meaning: Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit

If we finish the dialog and switch back to the Source tab, you can see that our first submit rule has not only changed its displayName but also got a new child element:

The UserFilter element makes sure that the surrounding submit rule will only be evaluated if at least one of its CurrentUser child elements matches the user currently looking at the Gerrit change.
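A hedged sketch of the first submit rule with its new filter could look like this; the username attribute on CurrentUser is an assumption, and the voting conditions from the earlier sketches are elided:

<cn:SubmitRule displayName="Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit"
               actionIfSatisfied="allow">
  <!-- submit rule filter: only evaluate this rule if a team lead looks at the change -->
  <cn:UserFilter>
    <cn:CurrentUser username="eszymanski"/>
    <cn:CurrentUser username="dsheta"/>
  </cn:UserFilter>
  <!-- Code-Review and Verified voting conditions from steps 1 and 4 go here -->
</cn:SubmitRule>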

If there are multiple submit rule filters, all of them have to match for their surrounding submit rule to be evaluated. You may ask what happens if no submit rule can be evaluated because none of them has matching filters. In that case, submit will be blocked and a corresponding message displayed in Gerrit’s Web UI. The same will happen if you have not defined any submit rule at all. As always, you can test your submit rules directly in the wizard against existing changes before deploying.

Providing guidance to your users with display only rules

Before we design a submit rule for the final step (6), let’s try to remember the submit rule evaluation algorithm and what will happen when someone who is not a team lead looks at a Gerrit change with our current policy. Quoting from blog post two:

a) For every submit rule that can be evaluated, figure out whether its voting conditions are satisfied (if a submit rule does not have a voting condition, it is automatically satisfied)

b) If all voting conditions are satisfied for a submit rule, the rule gets evaluated to the action specified in the actionIfSatisfied field (ignore if no value is set), otherwise the rule gets evaluated to the action specified in the actionIfNotSatisfied field

c) If any of the evaluated submit rules got evaluated to block, submit will be disabled and the display name of all blocking rules displayed in Gerrit’s UI as reason for this decision

d) If no evaluated submit rule got evaluated to block but at least one to allow, submit will be enabled

e) If all evaluated rules got evaluated to ignore, submit will be disabled and the display names of all potential submit rule candidates displayed

As our first submit rule (Code-Review+2-Verified-From-2-CI-And-Project-Lead-To-Submit) has a submit rule filter which will not match if you are not a team lead, this rule will not be evaluated. This leaves us with submit rules two (Code-Review-Veto-Blocks-Submit) and three (Verified-Veto-Blocks-Submit). Neither of those submit rules has a submit rule filter, so they will always be evaluated. Both rules have one voting condition, checking whether there is any Code-Review -2 or Verified -1 vote. If the corresponding voting condition is satisfied, the surrounding submit rule will be evaluated to block, blocking submit and showing its display name as the reason within Gerrit’s Web UI.

Let’s pretend nobody has vetoed our Gerrit change so far. In that case, all evaluated rules will be evaluated to ignore and the final step (e) of our algorithm will kick in. Submit will be disabled and the display names of all potential submit rule candidates, in other words all evaluated submit rules which could potentially be evaluated to allow, will be shown. In our case, there are no potential submit rule candidates though, as the only submit rule which could potentially evaluate to allow is submit rule one. That rule was not evaluated because its submit rule filter did not match (no team lead was looking at the change). As a result, Gerrit can only show a very generic message about why submit is not possible, leaving non-team leads confused about what to wait for.

How to give guidance under those circumstances? Should we just modify our algorithm and also display the display names of submit rules that did not get evaluated? Probably not. Imagine you have a secret quality gate for a group called Apophenia whose members can bypass other quality gates if they commit to the enigma branch and the number of lines added to the commit is 23 (for anybody who does not know what I am talking about, I can really recommend this movie).

The corresponding submit rule would have submit rule filters making sure that the rule only gets evaluated for that particular branch, commit stats and user group. As long as those filters are not matched, the display name of surrounding submit rule must not be revealed under any circumstances. We are sure you can imagine a more business like scenario with similar characteristics.

Fortunately, there is a way to guide users under those circumstances: display only rules

Display only rules are submit rules without any voting conditions and without any submit rule filters. Consequently, they are always evaluated and will always be satisfied. They do not have any value (or ignore for that matter) set for their actionIfSatisfied attribute though. Hence, they will never influence whether submit is enabled or not (that’s why they are called display only after all). Their actionIfNotSatisfied attribute is set to allow. This makes them potential submit rule candidates. In other words, their display names will always be shown whenever no other submit rule allows or blocks submit, providing perfect guidance.

In our particular example, we will create a display only rule with display name Team-Lead-To-Submit which will give all non team leads guidance why they cannot submit although nobody vetoed the change.

At this point, we would like to demonstrate another cool feature of the Source tab. It is bidirectional, so you can also modify the XML and your changes will be reflected in the first and second tab of our wizard. Let’s paste our display only rule as one child element of the GerritWorkflow element:

<cn:SubmitRule actionIfNotSatisfied="allow" displayName="Team-Lead-To-Submit"/>

If you switch back to the Submit Rule tab, it should look like this:

You probably recognized that this is the first time we used the Not Satisfied Action field, admittedly for a quite exotic use case, namely display only rules. The final step in our power policy will hopefully demonstrate a more common use case to use this field.

Not Satisfied Action for Exception Driven Rules

Step 6 of our power policy is an example of what we call exception driven rule:

“If a file called COPYRIGHT is changed within a commit, a Gerrit group called Legal has to approve (Code-Review +2) the Gerrit change”

Why exception driven? Well, having somebody from Legal approving a change is not sufficient by itself to enable submit, so having a separate submit rule with actionIfSatisfied set to allow is not the answer. Should we then just add legal approval as voting condition to all submit rules which can potentially enable submit? This is probably not a good idea either. Not every commit has to be approved by legal, only the ones changing the COPYRIGHT file.

Hence the best idea is to keep the existing submit rules unmodified and add a new submit rule which will

I) if evaluated, checks whether legal has approved the change and if not blocks submit (exception driven)

II) only be evaluated if legal has to actually approve the change (if the COPYRIGHT file changed)

Let’s tackle I) first by creating a new submit rule (push the Adding Rule Manually button) with display name Legal-To-Approve-Changes-In-Copyright-File and setting Not Satisfied Action to block.

If we kept our new submit rule like this, it would not block a single change, as it does not have any voting condition (and hence would always evaluate to satisfied). So let’s add a voting condition that requires a Gerrit group called Legal to give Code-Review +2. The screenshot below shows how this condition should look. In our case, Legal is a TeamForge user group (group1008).

In the current state, all changes which do not satisfy our new voting condition would be blocked.

Implementing II) will make sure we only evaluate this submit rule (and its voting condition) if the corresponding commit changed the COPYRIGHT file. To do that, we have to click on Next, and switch to the Commit Detail tab which contains all submit rule filters which match characteristics of the commit associated with the evaluated change. The only field to fill in is the Commit delta file pattern. Its value has to be set to ^COPYRIGHT as shown in the screenshot below.

Why ^COPYRIGHT and not just COPYRIGHT? If a filter name does not end with Pattern, it only matches exact values. If a filter ends with Pattern though, it depends on the field value.

If the field value starts with ^, the field value is treated as a regular expression. ^COPYRIGHT will match any file change list that contains COPYRIGHT somewhere. If the field value does not start with ^, it is treated as an exact value. If we entered just COPYRIGHT, this would have only matched commits where only the COPYRIGHT (and no other file) got changed. Keep this logic in mind whenever you deal with pattern filters. Branch filters and commit message filters are other prominent examples where using a regular expression is probably better than an exact value.

If we finish the dialogs and switch to the Source tab, you can see the XML for our new submit rule:

The actionIfNotSatisfied attribute is set to block; we have one submit rule filter (CommitDetailFilter) and one voting condition with a filter (VoteVoterFilter).
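Again as a rough sketch (the attribute name for the commit delta file pattern is an assumption based on the field label in the wizard, and group1008 is just the example id mentioned above), the exception driven rule might look like this:

<cn:SubmitRule displayName="Legal-To-Approve-Changes-In-Copyright-File"
               actionIfNotSatisfied="block">
  <!-- only evaluate this rule if the commit touches the COPYRIGHT file -->
  <cn:CommitDetailFilter commitDeltaFilePattern="^COPYRIGHT"/>
  <!-- satisfied once a member of the Legal group gives Code-Review +2 -->
  <cn:VotingCondition votingCategory="Code-Review" value="+2">
    <cn:VoteVoterFilter group="group1008"/>
  </cn:VotingCondition>
</cn:SubmitRule>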

Congratulations, you have successfully designed the power policy and can now test and deploy it!

Learning more about the XML based quality gate language

Although you have seen quite a bit of our XML based language so far, we fully realize that we have not shown you every single feature. We do not believe this is necessary though, as our graphical wizard supports all features of the language. If you are unsure how a certain filter works, just create one example with the wizard, switch to the Source tab and find out how to do it properly. Our schema is another great resource as it is fully documented and will make sure that you do not come up with any invalid XML document. Last but not least, our wizard ships with many predefined templates. We tried to cover every single feature of the language within those templates.

For those of you who are familiar with Gerrit’s Prolog cookbook, we turned all Prolog examples into our declarative language and were able to cover the entire functionality demonstrated. The results can be found here.

As always, if you have any questions regarding the language, also feel free to drop a comment on this blog.

How to deploy quality gates without the graphical wizard

As explained before, our Quality Gate enforcing plugin ties into Gerrit’s Prolog based mechanism to customize its submit behavior. Gerrit expects the current submit rules in a Prolog file called rules.pl in a special ref called refs/meta/config. The deployment process for rules.pl is explained here.

Whenever our wizard generates a rules.pl file, it makes use of a custom Prolog predicate called cn:workflow/2 which is provided by our Quality Gate enforcing plugin. This predicate has two arguments. The first one takes the XML content as is, the second one will be bound to the body of Gerrit’s submit_rule/1 predicate. In a nutshell, the generated rules.pl looks like this:

submit_rule(Z) :- cn:workflow('<XML document describing your quality gate policy>', Z).

Our wizard does not use any other Prolog predicates. You can use our predicate as part of your own Prolog programs if you decide to come up with your own tooling and generate rules.pl by yourself. While passing the XML content, make sure it does not contain any characters that would break Prolog quoting (no ' characters and no newlines, or XML-encode them). Our graphical wizard takes care of this step.

Final words and Call for Participation

If you made it through the entire third blog post, you can proudly call yourself a wizard too.

Designing quality gates from scratch can be a complex matter. Fortunately, our wizard comes with many predefined templates you can just deploy. In addition, we turned every example from the Prolog cookbook into our format. If you are unsure how to match a certain state of a Gerrit change, just use the built-in functionality of our wizard to turn it into a submit rule and adapt it to your needs. Before you deploy, you can always simulate your quality gates within the wizard. It will follow the submit rule evaluation algorithm step by step and show the evaluation result for every rule. If you do not like our wizard and do not like Prolog either, feel free to use our XML based language independently. This blog post has demonstrated how to do that.

Talking about the XML based language, its specification is Open Source. We encourage you to build your own wizard or other frontends and will happily assist if you have any questions regarding its functionality. Gerrit’s functionality to customize submit behavior is unmatched in the industry. We hope that with our contributions we made it a little easier to tap into it.

Coming up with the wizard, the language and our backend was a team effort. About half a dozen people worked for two months to get to the current state. We would like to hear from you whether it is worth investing further in this area. Want more examples? Better documentation? A tutorial video? A Web UI based wizard? Performance is not right? Cannot express the rules you would like to express? Want to use the feature with vanilla Gerrit?

Please, spread the word about this new feature and give us feedback!

The post You shall not pass – Control your code quality gates with a wizard – Part III appeared first on blogs.collab.net.

Categories: Companies

R: Rook – Hello world example – ‘Cannot find a suitable app in file’

Mark Needham - Fri, 08/22/2014 - 13:05

I’ve been playing around with the Rook library and struggled a bit getting a basic Hello World application up and running so I thought I should document it for future me.

I wanted to spin up a web server using Rook and serve a page with the text ‘Hello World’. I started with the following code:

library(Rook)
s <- Rhttpd$new()
 
s$add(name='MyApp',app='helloworld.R')
s$start()
s$browse("MyApp")

where helloworld.R contained the following code:

function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Unfortunately that failed on the ‘s$add’ line with the following error message:

> s$add(name='MyApp',app='helloworld.R')
Error in .Object$initialize(...) : 
  Cannot find a suitable app in file helloworld.R

I hadn’t realised that you actually need to assign that function to a variable ‘app’ in order for it to be picked up:

app <- function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Once I fixed that, everything seemed to work as expected:

> s
Server started on 127.0.0.1:27120
[1] MyApp http://127.0.0.1:27120/custom/MyApp
 
Call browse() with an index number or name to run an application.
Categories: Blogs

Knowledge Sharing


SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP and Kanban.