
Feed aggregator

Team Kanban in SAFe 3.0 and 4.0; Sneak Peek at SAFe BP 4.0

Agile Product Owner - Sat, 01/17/2015 - 00:17

Hi Folks,

Alex just finished a major update to the kanban guidance article, which describes how SAFe teams can apply kanban in the context of an ART.

In addition, maintenance teams, DevOps, and System Teams often prefer kanban, as it generally has less planning, tasking, and estimating overhead than Scrum, and gives better insight into flow. We’ve typically been coaching these teams toward kanban anyway. Moreover, in the context of SAFe for Lean Systems Engineering, we’ll have lots of newly agile teams, many not doing software at all, and it seems only fair to offer them a choice of methods.

So long as the interfaces to the ART are managed appropriately, and teams can participate in Release (Quantum in 4.0) Planning, dependency management, System Demos, and economic estimating, and can establish a velocity (readily derived from throughput), it really shouldn’t matter much how they go about their daily affairs of building value. One small downside is the potential for different ways of working in a single ART, but I think we are all mature enough to handle that at this point in the evolution of the industry. After all, what serious agile developer shouldn’t understand Scrum, XP and Kanban by now?
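
As for the parenthetical point about deriving a velocity from throughput, here is one simple way to think about it; the numbers and the averaging approach below are purely illustrative assumptions, not official SAFe guidance.

  // Derive a velocity proxy from throughput: average items finished per
  // timebox, multiplied by an assumed average item size. Numbers are made up.
  object VelocityFromThroughput extends App {
    val completedPerIteration = List(11, 9, 12, 10) // items finished per two-week timebox
    val averageItemSize       = 3.0                 // assumed average size in points
    val throughput            = completedPerIteration.sum.toDouble / completedPerIteration.size
    println(f"Throughput: $throughput%.1f items/iteration, velocity proxy: ${throughput * averageItemSize}%.1f points")
  }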

To that end, SAFe 4.0, which will be released this summer, will have a balanced treatment that includes team-level kanban as a method choice. A sneak peek at BP 4.0 is included below, just for kicks.

Sneak peek at SAFe 4.0 with kanban

 

Categories: Blogs

3 Things to Observe in a Sprint Review

Illustrated Agile - Len Lagestee - Fri, 01/16/2015 - 21:00

For many leaders, the sprint review (or demonstration) is one of the few chances they have to see their direct reports functioning with their Agile team and to witness the outcome of their work. My recommendation would be to attend as many sprint review sessions as you can. Everything you need to know about the health of an agile team can often be revealed by observing this session.

Building on the post, “A Managers Guide to Attending Agile Team Events,” when you do attend a sprint review, what should you be looking for? What are the characteristics of an effective sprint review? How should you respond if something seems amiss?

From my experience, effective sprint reviews have the following attributes:

Connection. One can observe if the team is moving as one unit and just how well the members of the team are interacting with each other. Is one role or person particularly quieter than another? Does the relationship between developers and testers feel collaborative or contentious? Many of the best sprint reviews I participated in have a celebratory feel to them. The team collectively feels they are working on something meaningful and making significant progress towards the product vision. Hopefully, you sense this vibe.

Conversation. The conversations in a sprint review should revolve around the product backlog, and specifically around the value users should be receiving from finishing items in the backlog. This happens best when the team focuses on the acceptance criteria while they are demonstrating delivered value. The acceptance criteria are often the “script” for the demo. Otherwise, the demonstration tends to wander, making it quite challenging to accept the user story as complete. Healthy conversation from the product owner about what’s on the horizon for their product is also a good sign.

Completion. Similarly, I have experienced some painful sprint reviews when the team is obviously not ready to demonstrate a completed story. For a story to be done, it should be in a state where it could be released into production today, should the product owner deem it ready. If this is not the case, something is amiss.

If you are not seeing these attributes during the sprint review, I would suggest, with a spirit of growth and empathy, the following conversations (in this order):

  1. Talk to the Scrum Master. Ask about their perception of team dynamics and health. Ask about the last team retrospective and what they decided to improve for the next sprint (Is the team able to self-heal or not?). Ask if the team is working towards a “definition of done.”
  2. Talk to the Product Owner. Ask about their vision and if they feel the right team is in place to bring the vision to reality. The product owner role can be quite challenging and without an amazing team, near impossible.
  3. Talk to your direct report. In private, ask their thoughts on how things are going and what they would suggest to improve the current condition of the sprint review. Ask how you can support them and if they have everything they need to build and test with craftsmanship.

The post 3 Things to Observe in a Sprint Review appeared first on Illustrated Agile.

Categories: Blogs

Bandita Joarder on How Presence is Something You Can Learn

J.D. Meier's Blog - Fri, 01/16/2015 - 18:22

Bandita is one of the most amazing leaders in the technology arena.

She’s not just technical, but she also has business skills, and executive presence.

But she didn’t start out that way.

She had to learn presence from the school of hard knocks.   Many people think presence is something that either you have or you don’t.

Bandita proves otherwise.

Here is a guest post by Bandita Joarder on how presence is something you can learn:

Presence is Something You Can Learn

It’s a personal story. It’s an empowering story. It’s a story of a challenge and a change, and how learning the power of presence helped Bandita move forward in her career.

Enjoy.

Categories: Blogs

Retired Rally Laptops Find New, Happy Homes

Rally Agile Blog - Fri, 01/16/2015 - 18:19

At Rally, we love our Apple laptops. Typically we use them day in and day out for three years or more, after which they enjoy their next phase of life.

In 2014, we started a donation program to place them into good use at “retirement homes”—organizations that would appreciate them until their useful life is over. Through the course of last year, we donated 95 MacBooks valued at nearly $30,000 to Colorado nonprofits and educational institutions. I’m happy to share a few of their stories.

Boulder Emergency Squad

At Rally, Cliff Rosell is a Senior Financial Analyst; but at Boulder Emergency Squad (BES) he’s a professional rescuer, IT supervisor, Resource Planning Supervisor, and Board Member—roles that add up to about 1,000 annual volunteer hours.

In 2014, Rally donated two iMacs and three MacBook Pros to support the work of this life-saving community organization. According to Cliff, “Prior to Rally’s donation, we had one desktop computer. Now we’ve upgraded our radio dispatch computer, have a dedicated system for training presentations, enabled two officers to work remotely, and have added a dedicated laptop for any member to use. Rally’s donation has really helped modernize BES in a way that could not have been done otherwise.”

Boulder Emergency Squad volunteer Nicolas Venot looks at map and call data on the donated computer used for radio dispatch at the organization’s headquarters

Brighton High School

Rally IT Director Jesse Brouillette spends his 1% paid volunteer time helping out at Brighton High School (BHS), where he uses his technology skills and expertise to help the small IT staff. The school has only two computer labs to serve nearly 2,000 students, so access to technology is limited.

Jesse and his wife, Emerald—who is Dean of Students at BHS—brainstormed affordable and feasible ways to increase technology access at the school, and decided to build a self-contained mobile lab that could be wheeled into classrooms. Rally donated 42 retired laptops for the project, and this mobile cart has now effectively doubled the number of computers available for student use.

Jesse Brouillette (right), Rally’s IT Director, volunteers at Brighton High School’s IT department and helped create a mobile lab with Rally-donated laptops.

In recognition of the contribution, Rally was honored with the school district’s “Reaching In” award, given to a business or other organization from the community that makes a significant impact.

Open Media Foundation

The Open Media Foundation (OMF) is an innovative media and technology nonprofit dedicated to putting the power of the media in the hands of the people, enabling everyone to engage in their community and bring about the change they wish to see in the world. OMF accomplishes its mission by providing access to affordable, high-end media and technology services. The staff and volunteers offer training and tools that enable everyone to represent their own voice in the media conversation.

With 10 laptops Rally donated in September, OMF was able to begin offering members of Denver Open Media (DOM) the ability to work on projects remotely. Explains Liz Wuster, Member and Donor Relations Manager, “In fewer than 60 days, the laptops have been checked out 34 times, for a total of 2,591 hours! They have been used individually by our members, and in bulk by organizations, such as the Denver Film Society, in its efforts to teach editing and animation to youths. We offer classes around tech capacity and editing, which give our members a better grasp of how to most effectively use these laptops.”

“The MacBooks donated by Rally have been very useful in terms of doing edit jobs on Adobe Premiere and making graphics on Adobe Photoshop. They've allowed me to get things done in the video production department when I can't come in to Denver Open Media during office hours.” - Brian Nemeth, a DOM member

I Have a Dream Foundation of Boulder County

In October, the I Have a Dream Foundation of Boulder County (IHAD) put 10 Rally-donated MacBook Pros to immediate good use. Each year, the organization selects a cohort of 50-60 low-income “Dreamers,” at-risk kids who are partnered with community members. These Dreamers are encouraged in elementary school to believe that college is an attainable goal, and as they grow up they receive extensive career and college preparation guidance, sponsored campus visits, and assistance with the application process.

The Dreamer students are using the laptops to complete coursework, projects, and internships as they prepare to enter college. IHAD staff members use the laptops to raise needed funds for programs through grant writing, as well as to recruit, interview, and train tutors and mentors. IHAD’s program director will use her laptop to manage the caseload for 60 new Dreamers—tracking attendance, grades, and case notes as they begin their journey toward high school graduation.

“Before this MacBook, I didn't have access to a laptop in order to conduct off-site volunteer recruitment presentations, interviews, and trainings. I am very grateful for the new laptop from Rally!” commented Ashley, Volunteer Director at I Have a Dream Foundation

We’re delighted to hear the stories of how our “old” laptops are helping others in their next stage of life. We now have more requests than we can fulfill, so 2015 is likely to be another busy year for Rally laptops to find new, happy homes.

Geri Mitchell-Brown
Categories: Companies

How Do I Prioritize Work?

Leading Agile - Mike Cottmeyer - Fri, 01/16/2015 - 15:25

Value is a funny thing. In enterprise agile coaching, I frequently encounter teams that are either (1) trying to complete every single project at the same time because they are all supposedly equally valuable, or (2) using a nebulous unit of sorts (say, a 100-point scale) to indicate how valuable a work item may be.

In the first case, it’s usually pretty straightforward to identify that the projects are not all truly equal in importance, and we limit the number of work items to the actual capacity of the delivery system.

In the second case, a team has typically taken my advice and is now trying to figure out what work items are really the next most valuable to the business.  Frequently these teams will ask for a value scale, one that helps them to figure out if they need to build item ‘A’ or item ‘B’ next.  My answer in this case is almost always a question.  It goes something like this:

Me: “Which of the two items has the higher cost of delay?”
Team: “I don’t know; I think item ‘B’. But if we ask person A or B from some other part of the organization, they would probably say item ‘A’.”
Me: “Well, do you know the cost of delay for both items? It should be simple math to choose between them if you do.”
Team: “No, we have a guess, but we really don’t know.”

It is usually around this time that the teams will start asking for a method or system that can be used to help establish value for work. My answer is usually a bit oversimplified; but then again, perhaps it isn’t. My answer is usually ‘currency’. If the business exists to make money, then the value we place on work should always tie back to the potential for currency.

If a team is making tradeoffs based on value, I would like to see how that work item will turn into real money for the business.
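
To illustrate what that “simple math” can look like once a cost of delay is known, here is a minimal sketch using the common CD3 heuristic (cost of delay divided by duration). The items, figures and the heuristic itself are illustrative assumptions, not something prescribed in this post.

  // Rank work items by CD3 (cost of delay divided by duration).
  // All names and figures below are made up for illustration.
  case class WorkItem(name: String, costOfDelayPerWeek: Double, durationWeeks: Double) {
    def cd3: Double = costOfDelayPerWeek / durationWeeks
  }

  object Prioritize extends App {
    val backlog = List(
      WorkItem("Item A", costOfDelayPerWeek = 20000, durationWeeks = 4),
      WorkItem("Item B", costOfDelayPerWeek = 12000, durationWeeks = 1)
    )

    // Highest CD3 first: Item B wins despite its lower weekly cost of delay,
    // because it frees up the team much sooner.
    backlog.sortBy(-_.cd3).foreach { item =>
      println(f"${item.name}: CD3 = ${item.cd3}%.0f")
    }
  }

Even with rough guesses for the currency figures, ranking candidates this way usually sparks a more useful conversation than a 100-point value scale does.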

What are your thoughts? Have you found any other approaches that work equally well?

The post How Do I Prioritize Work? appeared first on LeadingAgile.

Categories: Blogs

Acceptance Test-Driven Development and Test-Driven Development - How They Are the Same and How They Are Different

NetObjectives - Fri, 01/16/2015 - 10:41
There is often some confusion between Acceptance Test-Driven Development (ATDD) and Test-Driven Development (TDD). Here’s a short description of their similarities, their differences, and their relationship. Let’s start with their similarities. They both have three words in common (1). They both are aimed at creating a quality system. They involve writing things called tests, but which act as...

Categories: Companies

Scrum Coaching Retreats—a personal story

ScrumSense.com - Peter Hundermark - Thu, 01/15/2015 - 20:19
Who this is for

You may be considering attending the upcoming Scrum Coaching Retreat (SCR) to be held in Franschhoek, near Cape Town on 09-11 February 2015. And you may be wondering what the value will be to you. You may also need to find a convincing argument for your boss to sign off on the registration fee and, perhaps, travel costs from Jhb or elsewhere.

What is a Scrum Coaching Retreat?

Scrum Coaching Retreats are so called because the international Scrum Alliance is the title sponsor, which enables the organisers to offer astonishing value for money to the participants. In practice anyone who finds him- or herself in a role where they need or want to help other individuals or teams get better is a “coach” and qualifies as a participant! This certainly includes agile team coaches, Scrum Masters, Kanban coaches, project managers, product owners/managers and many more.

SCRs are regional events, organised by local agile coaches who have participated in at least one prior event and have some track record in organising agile events.

I’ve had the privilege of participating in two prior international SCRs (as well as a SUGSA-organised weekend coaching retreat, which was also great).

Boulder, Colorado (December 2011)

The inaugural SCR took place in Boulder, Colorado in December 2011. It was freezing cold with lots of snow. Around 75 participants from the US, Canada and even further afield gathered in the historic Boulderado Hotel with the sole aim of going on a learning journey together. Experience levels ranged from Pete Behrens and Roger Brown, who established the Certified Scrum Coach programme in 2007, to Scrum Masters with just a few months of agile experience. Hardly anyone had heard of Kanban back then!

The idea was born of a limited number of people self-organising into teams who would together go on a learning journey around a chosen topic. On the first day topics were generated open-space style and teams formed. On day two each team demonstrated to everyone else in a “joint review” session what they had done for the past day-and-a-half.

Old friendships were rekindled and new ones forged, both during the team work and in the various cosy bars and restaurants in Boulder. For example, I remember Sigi Kaltenecker and me having a fine meal and chat with Lyssa Adkins. Boulder is comparable in some ways with Franschhoek: good food married with grand scenery; just colder!

London (June 2014)

Fast forward more than two years to June 2014 and we find ourselves in London during spring. As an aspirant organiser I was invited to participate. My good friend and former business partner Marius de Beer made the journey across the pond from Vancouver and we had an awesome time together.

Marius and I chose to explore the topic of how to help novice Scrum Masters to become awesome team coaches. We cunningly conspired to recruit some quite inexperienced participants to join our team: we deliberately wanted to have their “beginner’s mind” and understand their specific needs. We chose Geoff Watts, an experienced CSC (and CST), as our team’s product owner and set to work to create our vision, all on the first evening. Over the next two days our team was able to deliver tangible results in the form of a working “Scrum Master Exchange” and a “Mentoring Programme”.

The new and improved structure for the London event included “two sleeps”. This meant a late afternoon start with topic generation, team selection and vision creation on day 1. Day 2 and 3 mornings were both fully devoted to working in these teams. Moreover, teams ran two sprints with a review and retrospective after each, enabling them to inspect and adapt. For some teams this resulted in delivering two shippable product increments. For at least one it meant abandoning their current direction after sprint 1 and reformulating their vision for sprint 2!

The new format also gave participants time to interact with people outside their team in structured ways: more shallow dives into a greater variety of topics of interest.

The retreat closed with a large joint retrospective: around 75 people participated in the closing circle!

Learnings

The day after London closed, Marius and I spent some hours reflecting on the event, capturing what we believed should be carried forward to the Franschhoek event, and what new experiments we would like to run. So you can expect something quite similar to London, yet with some interesting new tweaks!

More SCRs have been held since London in the US and Asia. The “two sleeps, deep dive” structure with a limit of about 75 participants has persisted.

Le Franschhoek Hotel & Spa

Our theme in Franschhoek is “Small Improvements”. This reflects our strong belief that lasting change happens slowly but surely.

What I can promise is that you and your organisation will be the better for your participation!

At a very pragmatic level for just $600 (under R7000) you will get:

  • A pre-retreat workshop of ~4 hours on Monday 09 Feb for participants who feel they are “starting out” as coaches and want a little extra guidance and confidence. This will be facilitated by some of the most experienced local and international agile coaches. Lunch will be provided. Attendance at this workshop is optional, but its full cost and lunch are included in your registration fee.
  • The two-and-a-half day main event starting with topic generation and team formation on Monday evening and continuing on Tuesday and Wednesday.
  • A new, high-energy, problem-solving coaching clinic on Tuesday afternoon. It will be unlike anything you have seen or done before!
  • Two nights’ accommodation (Monday & Tuesday) at the comfortable (if not luxurious—don’t tell your boss!) Le Franschhoek Hotel & Spa.
  • All breakfasts and lunches from Monday lunch through to Wednesday.
  • Alfresco dinner on Monday (braai) at the hotel for all participants.
  • For dinner on Tuesday you are free to grab a few (new) friends and head off to one of the many, great restaurants in Franschhoek, which range from the finest dining in the country to a simple pizzeria.
  • The venue is set in the Franschhoek valley and winelands, one of the most awesome regions in our beautiful country. We have great indoor and outdoor spaces to choose from, depending on the weather, which is likely to be hot and dry. Bring your bathing costume for use in the pool during down time.

Logistically it will be quite possible for you to fly in from Jhb on Monday morning and return home on Wednesday evening. However, why not spoil your better half with a weekend in Franschhoek before the event? Le Franschhoek Hotel & Spa, or one of the many other fine establishments in the region, will love to pamper you both!

The post Scrum Coaching Retreats—a personal story appeared first on ScrumSense.

Categories: Blogs

NeuroAgile Quick Links #9

Notes from a Tool User - Mark Levison - Thu, 01/15/2015 - 19:19

Original infographic designed by Freepik.com
A collection of links to interesting research from the world of neuroscience and behavioural psychology that can be applied (or not) to Agile/Scrum Teams.

Categories: Blogs

Organizational Debt Cycle

Agilitrix - Michael Sahota - Thu, 01/15/2015 - 17:56

Many consider the modern workplace inhumane and uninhabitable. People are not fully engaged. It is killing our bottom lines. It is putting our organizations at risk. With our prevailing management system we have created a vast organizational debt that inhibits growth and performance.

We define organizational debt as the baggage that prevents people from delivering astonishing results. The diagram below shows the key problems that impact each human being and ultimately the effectiveness of our whole organization.

Joint post with Olaf Lewitz.

Organizational Debt Cycle

This is how the organizational debt cycle works:

Organizational Debt Cycle

We learn not to trust people we don’t know well, so our first principle is not to trust anyone. We act as if we’re afraid, and sometimes we are. We want to be safe, to avoid being noticed, to avoid standing out. The more we cover our *ss, the less we connect. The less we connect, the more we feel alone… and can’t build any trust. The cycle continues.

The cycle works the other way, too: We start being afraid, not trusting people, feeling alone, so we make sure we don’t get hurt…

Impact

There is a direct connection between each of these problems and our organization’s effectiveness. For example:

  • No trust → I will try to do this myself rather than cooperate with you. I don’t believe in my leaders.
  • Fear → I will not ask for help when I need it. I will not take any risks that might improve things.
  • Cover your *ss → I will not report important information. I will not speak up, even to avoid disaster.
  • Alone → I will disengage. I will be powerless and not valued.

Henry Ford reportedly said that “every pair of hands comes with its own brain”. Many people got used to leaving their brains at the door when they came to work. In many organisations people additionally leave their hearts outside. We don’t fully show up at work.
Prominent words like “work-life balance” only make sense if we leave our lives outside when we go to work. This cycle describes how we do that and why we keep doing it. We don’t feel we have a choice.

How We Create Organizational Debt

In organizations we create structures that support this cycle:

  • RACI matrices have single responsibilities → alone
  • Performance reviews → fear
  • Reports are expected to match plans → cover your *ss
  • Don’t rock the boat → cover your *ss
  • Gap between what we say and do → Trust no one

As leaders, we may unwittingly create or support an organizational culture where mistrust and fear impair performance.

How to Use this Model

The purpose of this model is to create awareness. When we choose to acknowledge and accept what is actually going on, then new behaviours and choices automatically emerge. Once we decide that we no longer wish to operate in this cycle, we may then ask, “What do we wish for ourselves?”

The post Organizational Debt Cycle appeared first on Catalyst - Agile & Culture.


Categories: Blogs

Combating the lava-layer anti-pattern with rolling refactoring

Jimmy Bogard - Thu, 01/15/2015 - 17:37

Mike Hadlow blogged about the lava-layer anti-pattern, describing the nefarious issue (one I have ranted about in nearly every talk I do) of opinionated but lazy tech leads introducing new concepts into a system but never really seeing the idea through all the way to the end. Mike’s story was about different opinions on the correct DAL tool to use, none of which ever actually goes away:


It’s not just with DALs that I see this occur. Another popular set of strata is database naming conventions, starting from:

  • ORDERS
  • tblOrders
  • Orders
  • Order
  • t_Order

And on and on – none of which add any value, but it’s not a long-lived codebase without a little bike shedding, right?

That’s a pointless change, but I’ve seen others, especially in places where design is evolving rapidly. Places where the refactorings really do add value. I called the result long-tail design, where we have a long tail of different versions of an idea or design in a system, and each successive version occurs less and less often:

Long-tail and lava-layer design destroy productivity in long-running projects. But how can we combat it?

Jimmy’s rule of 2: There can be at most two versions of a concept in an application

In practice, what this means is we don’t move on to the next iteration of a concept until we’ve completely refactored all existing instances. It starts like this:


A set of functionality we don’t like all exists in one version of the design. We don’t like it, and want to make a change. We start by carving out a slice to test out a new version of the design:


We poke at our concept, get input, refine it in this one slice. When we think we’re on to something, we apply it to a couple more places:


It’s at this point where we can start to make a decision: is our design better than the existing design? If not, we need to roll back our changes. Not leave it in, not comment it out, but roll it all the way back. We can always do our work in a branch to preserve our work, but we need to make a commitment one way or the other. If we do commit, our path forward is to refactor V1 out of existence:


We never start V3 of our concept until we’ve completely eradicated V1 – and that’s the law of 2. At most two versions of our design can be in our application at any one time.

We’re not discouraging refactoring or iterative/evolutionary design, but putting in parameters to discipline ourselves.
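
One lightweight way to keep the rule visible in a codebase is to mark the old version as deprecated the moment the new one appears, so every remaining call site shows up as a compiler warning until V1 is gone. The sketch below is only an illustration under that assumption; the names, and the choice of Scala, are mine rather than anything from the post.

  // A sketch of the rule of 2 (names are illustrative, not from the post):
  // V1 still exists while V2 is rolled out, but V1 is marked deprecated so
  // every remaining usage produces a compiler warning. V3 is not started
  // until the last V1 call site has been refactored away.

  @deprecated("Being replaced by OrderLookupV2; refactor remaining call sites", "2015-01")
  trait OrderLookup {
    def find(id: Long): Option[String]
  }

  trait OrderLookupV2 {
    // The improved design, applied slice by slice.
    def find(id: Long): Either[String, String]
  }

The warnings then act as a running count of how much of V1 is left, which makes “have we eradicated V1 yet?” an answerable question rather than a matter of opinion.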

In practice, our successive designs become better than they could have been in our long-tail/lava-layer approach. The more examples we have of our idea, the stronger our case becomes that our idea is better. We wind up having a rolling refactoring result:


A rolling refactoring is the only way to have a truly evolutionary design; our original neanderthal needs to die out before moving on to the next iteration.

Why don’t we apply a rolling refactoring design? Lots of excuses, but ultimately it requires courage and discipline, backed by tests. Doing this without tests isn’t courage – it’s recklessness and developer hubris.


Categories: Blogs

A Retrospective of 2014 and Futurespective of 2015

Ben Linders - Thu, 01/15/2015 - 15:17
2014 was a great year for me. I've helped organizations to effectively deploy Agile and Lean and improve their ways of working, my first book became a bestseller, and I've inspired professionals all around the world by sharing useful knowledge and experience on my blog and via InfoQ. Let's reflect on what 2014 has brought and do a futurespective to visualize the opportunities of 2015. Continue reading →
Categories: Blogs

Monitoring Akka with Kamon

Xebia Blog - Thu, 01/15/2015 - 14:49

Kamon is a framework for monitoring the health and performance of applications based on akka, the popular actor system framework often used with Scala. It provides good quick indicators, but also allows in-depth analysis.

Tracing

Beyond just collecting local metrics per actor (e.g. message processing times and mailbox size), Kamon is unique in that it also monitors message flow between actors.

Essentially, Kamon introduces a TraceContext that is maintained across asynchronous calls: it uses AOP to pass the context along with messages. None of your own code needs to change.

Because of convenient integration modules for Spray/Play, a TraceContext can be automatically started when an HTTP request comes in.

If nothing else, this can be easily combined with the Logback converter shipped with Kamon: simply logging the token is of great use right out of the gate.

Dashboarding

Kamon does not come with a dashboard by itself (though some work in this direction is underway).

Instead, it provides 3 'backends' to post the data to (4 if you count the 'LogReporter' backend that just dumps some statistics into Slf4j): 2 on-line services (NewRelic and DataDog), and statsd (from Etsy).

statsd might seem like a hassle to set up, as it needs additional components such as grafana/graphite to actually browse the statistics. Kamon fortunately provides a correctly set-up docker container to get you up and running quickly. We unfortunately ran into some issues with the image uploaded to the Docker Hub Registry, but building it ourselves from the definition on github resolved most of these.

Implementation

We found the source code of Kamon to be clear and to-the-point. While we're generally no great fans of AspectJ, for this purpose the technique seems to be quite well suited.

'Monkey-patching' a core part of your stack like this can of course be dangerous, especially with respect to performance considerations. Unless you enable the heavier analyses (which are off by default and clearly marked), it seems this could be fairly light - but of course only real tests will tell.

Getting Started

Most Kamon modules are enabled by adding their respective akka extension. We found the quickest way to get started is to:

  • Add the Kamon dependencies to your project as described in the official getting started guide
  • Enable the Metrics and LogReporter extensions in your akka configuration
  • Start your application with AspectJ run-time weaving enabled. How to do this depends on how you start your application. We used the sbt-aspectj plugin.

Enabling AspectJ weaving can require a little bit of twiddling, but adding the LogReporter should give you quick feedback on whether you were successful: it should start periodically logging metrics information.
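
As an illustration of the first two steps, a build.sbt along the following lines should be close. The artifact names, version and extension ids below are assumptions from the Kamon 0.x era and should be checked against the official getting started guide.

  // build.sbt sketch - dependency coordinates are assumptions; verify them
  // against the Kamon getting started guide for the version you are using.
  libraryDependencies ++= Seq(
    "io.kamon" %% "kamon-core"         % "0.3.5",
    "io.kamon" %% "kamon-log-reporter" % "0.3.5"
  )

  // The Metrics and LogReporter extensions are then enabled in application.conf,
  // e.g. akka.extensions = ["kamon.metric.Metrics", "kamon.logreporter.LogReporter"],
  // and the application is started with the AspectJ weaver (we used sbt-aspectj).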

Next steps are:

  • Enabling Spray or Play plugins
  • Adding the trace token to your logging
  • Enabling other backends (e.g. statsd)
  • Adding custom application-specific metrics and trace points
Conclusion

Kamon looks like a healthy, useful tool that not only has great potential, but also provides some great quick wins.

The documentation that is available is of great quality, but there are some parts of the system that are not so well covered. Luckily, the source code is very approachable.

It is clear the Kamon project is not very popular yet, judging by some of the rough edges we encountered. These, however, seem to be mostly superficial: the core ideas and implementation seem solid. We highly recommend taking a look.

 

Remco Beckers

Arnout Engelen

Categories: Companies

Scaling Scrum Isn't Even the Point!

Scrum 4 You - Thu, 01/15/2015 - 09:00

The topic of "scaling Scrum" is slowly starting to annoy me. Why? Quite simply: we are constantly asked to prove that large projects are possible with Scrum too. As if Scrum – which is nothing more than a process model – had to show how everything will turn out fine. Can traditional methods do that? By now we know from countless studies that the probability of failure rises as a project grows. So why run large projects at all? Why try again and again something that will most likely not succeed?

Yes, but … can large projects still succeed with Scrum? Certainly. Today we can easily run Scrum projects with more than 100, 1,000 or even 10,000 people. The mechanisms are well known; our Scrum Checklist alone shows the most important elements needed for it. However, this is often just an attempt to put the old into a new coat. Everything else remains as inefficient as before. The question is simply asked the wrong way. It is not "How do you scale Scrum?" but rather: "How do we manage to carry out highly complex, large undertakings?"

Great things are always accomplished by a few

On closer inspection you can see that some projects are inflated unnecessarily. For political reasons they become more expensive than they would need to be. Many of these projects could be delivered more cheaply and effectively with fewer people, less budget and less pressure. All of this can be read in the Standish Group's Chaos Manifesto 2014. Dan Ward also shows clearly in his book "F.I.R.E." that most large military projects succeed precisely when simple solutions are sought while strictly sticking to the budget.

Yet although practically everyone knows this, companies now engaging with agility are asking themselves the same question I asked as an early Scrum adopter 10 years ago. In 2004 I had interrupted my budding career as a Certified Scrum Trainer to go back into a large German company as head of software development. The question "How do you scale Scrum?" would not let go of me. I wanted to know how 40 developers at one site and another team in Berlin could scrum together successfully. Between 2004 and 2008 I developed the mechanisms for scaling together with others in the Scrum community. And we tried them out: after projects with 200 and more developers, the necessary best practices had crystallised. We had proven that it works, but we had also worked out why it was so hard to put into practice.

In other words: it was possible to manage large projects, but at the same time they were not particularly effective. Far too much management overhead was required.

Gradually it became clear that – although we were successful – the question "How do you scale Scrum?" had been posed wrongly. Even more successful were those companies that had used agile process models to cut large projects into small ones and had put many small Scrum teams on the problem. Their projects were therefore no longer "large" projects in the classical sense at all. Their paradigm was: the decision about what to do, and how, is left to the teams – and as a result, new leadership systems established themselves over the years. It is only logical: humanity's most important inventions, the scientific breakthroughs, the cash cows of companies were very rarely developed by 200+ people. No, they were developed by a few, usually a small group of people. That is, by small teams that wanted to achieve something great.

Scaling happens through leadership, architectures and infrastructures

I can already hear the run-of-the-mill counterargument: "But there are tasks that simply cannot be done by a small group. You don't build a football stadium, an airport or Photoshop with 7 people."

Correct! At none of our clients are we working on projects that could be handled by seven people. Of course we have to think about how to synchronise more than 7 people, indeed more than 7 teams. One of our clients even wants its entire company to swing to the rhythm of Scrum. How does that work?
We scale

  1. through the leadership of teams, not through first-order process models,
  2. through an adapted architecture and a decoupling of the individual components or product groups, and
  3. by putting in place infrastructures that allow the finished product parts to be continuously integrated into a whole.

At the same time: the teams must give themselves their processes, their checklists, their workflows. In other words, they define for themselves the way they work. After some time they are then able to synchronise across teams on their own and, when problems arise, to notice immediately that these problems exist.

New recipes? Not necessary.

None of this is rocket science. You don't need new Scrum frameworks for it, like SAF or LaaS or XYZ. On closer inspection such templates are actually a hindrance. In my eyes they are steps backwards compared to the actual frameworks of Scrum or Kanban. With them we are dealing with overly elaborate first-order process models. First-order process models are comparable to recipes in cookbooks, such as Jamie Oliver's "30 Minute Meals". Although all of us – myself included – would of course love to have such recipes, we have to accept that the world and the projects in it are too complex to be pressed into recipes.

So here is what we cannot use when it comes to scaling:

  1. Even more bureaucracy in the form of even more first-order process prescriptions.
  2. Even more administrative acts.
  3. Even more delegation of decisions.
  4. Even more colleagues who are only there to steer and do not know how the actual work gets done.

Although this is obvious, I mention it because we can currently see the first signs of a Scrum bureaucracy. What we most certainly do not need are Company ScrumMasters who watch over

  1. whether all the documents that Scrum prescribes are in place,
  2. whether the team ScrumMasters meet, or
  3. whether the quality standards are being observed. A spot check may be necessary in one case or another to show one ScrumMaster or another something once more. But it should not get to the point where Company ScrumMasters become administrators or guardians of "holy Scrum".

Basically we do not even need improved variants for writing user stories or ever better electronic administration machines. (Which is not to say that good ideas for even better tools cannot be helpful.)

What is really necessary, on the other hand, are tools that, within management frameworks ("models" sounds too much like toys to me),

  1. deliver feedback to everyone involved even faster,
  2. force a conversation between everyone involved at the decisive moment, and
  3. help people work together when they are not in the same place.

When we are dealing with large teams, we need frameworks that can generate processes themselves. Frameworks that then also help managers change the generated processes again and again, so that they can be adapted to the demands of the large group or of internationalisation. This kind of framework is called a second-order process model. Scrum, Kanban, Theory U, Appreciative Inquiry, the Open Space method and the 14 points of the Toyota Production System all belong to this category.

Wanted: Company ScrumMasters who lead

Company ScrumMasters are there to create the conditions for the other ScrumMasters in the company. They should be people who keep driving Scrum forward in the company and find new ways of solving problems, so that the organisation can work even more effectively and productively. That is why these Company ScrumMasters also have to lead – namely to the point where the teams, and eventually the whole organisation, organise themselves while always keeping the customer in view. These Company ScrumMasters enable self-organisation on a grand scale.

Which brings us to the core of scaling in Scrum: Scrum scales. Period.

A Company ScrumMaster can make it easier by bringing four – let me call them "dimensions" – into harmony and aligning them with each other so that the organisation can master its tasks.

1. Enablement. The actors must be enabled to lead their teams into self-organisation. They should receive guidance in self-organising and in making decisions, so that they in turn can lead their teams to work autonomously.

2. Rhythm. Scrum means cadence and flow. The Company ScrumMaster must map the value chain, often establish it in the first place by making it visible, and defend it. Then he must shape it so that not only the individual team but the entire company can find its rhythm (as Nonaka calls it).

3. Architecture. The structure of the products or of the organisation (which can be a team, a department, a large team, etc.) should be chosen so that the individual teams can deliver as independently of one another as possible. Whether that has to be a holacracy, or whether other forms fit better, is something every organisation has to find out for itself. 3M, for instance, holds the clear view that an organisational unit should not comprise more than 150 people.

4. Infrastructure. Technological aids should help the organisation make production progress constantly visible and thereby tangible (several times per week, better still per day), for example through integration servers, simulations, models and prototypes.

If these four dimensions are taken into account in the given order, scaling has a chance. Then every template becomes superfluous.

Categories: Blogs

Scripting the configuration of your CI server

Putting the tea into team - Ivan Moore - Thu, 01/15/2015 - 01:24
How do you configure your CI server?

Most people configure their CI server using a web based UI. You can confirm this by searching for "setting up Jenkins job", "setting up TeamCity build configuration", "setup ThoughtWorks Go pipeline" etc. The results will tell you to configure the appropriate CI server through a web based UI, probably with no mention that this is not the only way.

One of my serial ex-colleagues, Nick Pomfret, describes using these web based UIs as "clicky-clicky". In this article I will use the Jenkins term "job" (aka "project") to also mean TeamCity build configuration or GoCD pipeline. In this article, I'm calling GoCD a CI server; get over it.
What is wrong with clicky-clicky?

Clicky-clicky can be useful for quick experiments, or maybe if you only have one job to set up, but it has some serious drawbacks.
It works - don't change it

Once a job has been set up using clicky-clicky, one problem is that it is difficult to manage changes to it. It can be difficult to see who has changed what, and to restore a job to a previous configuration. Just version controlling the complete CI server configuration file (which some people do) does not do this well, because such files are difficult to diff, particularly when there are changes to other jobs.
Lovingly hand crafted, each one unique

Another problem with clicky-clicky is that when you have a lot of jobs you would like to set up in the same way, it is both time consuming and inevitably leads to unintended inconsistencies between jobs, which can cause them to behave in slightly different ways, causing confusion and taking longer to diagnose problems.
Can't see the wood for the tabs

Furthermore, web UIs often don't make it easy to see everything about the configuration of a job in a compact format - some CI servers are better than others for that.
The right way - scripting

If you script the setup of jobs, then you can version control the scripts. You can then safely change jobs, knowing that you can recreate them in the current or previous states, and you can see who changed what. If you need to move the CI server to a new machine, you can just rerun the scripts.

In some cases a script for setting up a job can be much more readable than the UI because it is often more compact and everything is together rather than spread over one or more screens.
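
As a taste of what scripted job setup can look like, here is a minimal sketch that creates a Jenkins job by POSTing a config.xml to Jenkins' createItem endpoint. The job name, shell command and the stripped-down config.xml are illustrative assumptions, and authentication/CSRF handling is omitted; in practice a dedicated tool is usually a better starting point than raw HTTP.

  import java.net.{HttpURLConnection, URL}
  import java.nio.charset.StandardCharsets

  object CreateJenkinsJob extends App {
    // Illustrative values; Jenkins' createItem endpoint accepts a POSTed config.xml.
    val jenkinsUrl = "http://localhost:8080"
    val jobName    = "example-build"

    // A minimal freestyle-project config; Jenkins fills in defaults for omitted elements.
    val configXml =
      """<?xml version='1.0' encoding='UTF-8'?>
        |<project>
        |  <builders>
        |    <hudson.tasks.Shell>
        |      <command>./gradlew clean build</command>
        |    </hudson.tasks.Shell>
        |  </builders>
        |</project>""".stripMargin

    val connection = new URL(s"$jenkinsUrl/createItem?name=$jobName")
      .openConnection().asInstanceOf[HttpURLConnection]
    connection.setRequestMethod("POST")
    connection.setRequestProperty("Content-Type", "application/xml")
    connection.setDoOutput(true)
    val out = connection.getOutputStream
    out.write(configXml.getBytes(StandardCharsets.UTF_8))
    out.close()
    println(s"Jenkins responded with HTTP ${connection.getResponseCode}")
  }

The point is not the HTTP plumbing but that the job definition now lives in a file you can diff, review and replay; tools such as gomatic (for GoCD) or Jenkins' own job-generation plugins wrap the same idea in a friendlier API.
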
Fully automated configuration of jobs

It can be very useful to script the setup of jobs so it is totally automatic; i.e. when a new project is created (e.g. a new repo is created, or a new directory containing a particular file, e.g. a build.gradle file, is created), then a job can be created automatically. If you take that approach, it saves time because nobody needs to manually set up the jobs, it means that every project that needs a job gets one and none are forgotten, and it means that the jobs are consistent so it is easy to know what they do.

There are some subtleties about setting up fully automated jobs which I won't go into here - maybe a future blog article.
Tools for scripting

For GoCD, see gomatic. For other CI servers, please add a comment if you know of anything that is any good!

Copyright ©2015 Ivan Moore
Categories: Blogs

COBOL is… Alive!

Sonar - Wed, 01/14/2015 - 20:20

Most C, Java, C++, C#, JavaScript… developers reading this blog entry might think that COBOL is dead and that SonarSource would do better to focus its attention on more hyped languages like Scala, Go, Dart, and so on. But in 1997, the Gartner Group reported that 80 percent of the world’s business ran on COBOL, with more than 200 billion lines of code in existence and an estimated 5 billion lines of new code annually. COBOL is mainly used in the banking and insurance markets, and according to what we have seen in the past years, the erosion of the number of COBOL lines of code used in production is pretty low. So not only is COBOL not YET dead, but several decades will be required to see this death really happen. We released the first version of the COBOL plugin at the beginning of 2010, and this language plugin was in fact the first one to embed our own source code analysis technology, even before Java, C, C++, PL/SQL, … So at SonarSource, COBOL is a kind of leading technology :).

Multiple vendor extensions and lack of structure

The COBOL plugin embeds more than 130 rules, but before talking about those rules, let’s talk about the wide range of different COBOL dialects that are supported by the plugin. Indeed, since 1959 several specifications of the language and preprocessor behavior have been published, and most COBOL compilers have extended those specifications. So providing an accurate COBOL source code analyser means supporting most of those dialects: IBM Enterprise Cobol, HP Tandem, Bull GCos, IBM Cobol II, IBM Cobol 400, IBM ILE Cobol, Microfocus AcuCobol, OpenCobol, … which is the case for our plugin. Moreover, for those of you who are not familiar with COBOL source code: let’s imagine a C source file containing 20,000 lines of code, no functions, and just some labels to group statements and to make it possible to “emulate” the concept of a function. Put like this, I guess everyone can understand how easy it can be to write unmaintainable and unreliable COBOL programs.

Need for tooling

Starting from this observation, managing a portfolio of thousands of COBOL programs, each one containing thousands of COBOL lines of code, without any tooling to automatically detect quality defects and potential bugs is a bit risky. The SonarSource COBOL plugin makes it possible to continuously analyse millions of lines of COBOL code to detect such issues. Here are several examples of the rules provided by the plugin:

  • Detection of unused paragraphs, sections and data items.
  • Detection of incorrect PERFORM ... THRU ... control flow, where the starting procedure is located after the ending one in the source code, thus leading to unexpected behavior.
  • Tracking of GO TO statements that transfer control outside of the current module, leading to unstructured code.
  • Copy of a data item (variable) into another, smaller data item, which can lead to data loss.
  • Copy of an alphanumeric data item to a numeric one, which can also lead to data loss.
  • Tracking of EVALUATE statements not having the WHEN OTHER clause (similar to an if without an else).
  • Detection of files which are opened but never closed.

And among those 130+ rules, 30+ target the SQL code which can be embedded into COBOL programs. One such rule tracks LIKE conditions starting with *. Another tracks the use of arithmetic expressions and scalar functions in WHERE conditions. And last but not least, here are some other key features of this SonarSource COBOL plugin:

  • Copybooks are analysed in the context of each COBOL program and issues are reported directly on those copybooks.
  • Remediation cost to fix issues is computed with help of the SQALE method: www.sqale.org.
  • Even on big COBOL applications containing thousands of COBOL programs and so potentially millions of lines of code and thousands of issues, tracking only new issues on new or updated source code is easy.
  • Duplications in PROCEDURE DIVISION and among all COBOL programs can also be tracked easily.
  • To make sure that code complies with internal coding practices, a Java API allows the development of custom rules.

How hard is it to evaluate this COBOL plugin?

So YES, Cobol is alive, and the SonarSource COBOL plugin helps make it even more maintainable and reliable.

Categories: Open Source

Exploring Akka Stream's TCP Back Pressure

Xebia Blog - Wed, 01/14/2015 - 16:48

Some years ago, when Reactive Streams lived in utopia, we got the assignment to build a high-volume message broker. A considerable amount of the code in the solution we delivered back then was dedicated to preventing this broker from being flooded with messages in case an endpoint became slow.

How would we have solved this problem today with the shiny new Akka Reactive Stream (experimental) implementation just within reach?

In this blog we explore Akka Streams in general and TCP Streams in particular. Moreover, we show how much easier we can solve the challenge we faced back then using Streams.

A use-case for TCP Back Pressure

The high-volume message broker mentioned in the introduction basically did the following:

  • Read messages (from syslog) from a TCP socket
  • Parse the message
  • Forward the message to another system via a TCP connection

For optimal throughput multiple TCP connections were available, which allowed delivering messages to the endpoint system in parallel. The broker was supposed to handle about 4000 - 6000 messages per second. What follows is a schema of the noteworthy components and message flow:


Naturally we chose Akka as framework to implement this application. Our approach was to have an Actor for every TCP connection to the endpoint system. An incoming message was then forwarded to one of these connection Actors.

The biggest challenge was related to back pressure: how could we prevent our connection Actors from being flooded with messages in case the endpoint system slowed down or was not available? With 6000 messages per second an Actor's mailbox is flooded very quickly.

Another requirement was that message buffering had to be done by the client application, which was syslog. Syslog has excellent facilities for that. Durable mailboxes or the like were out of the question. Therefore, we had to find a way to pull only as many messages into our broker as it could deliver to the endpoint. In other words: provide our own back pressure implementation.

A considerable amount of code of the solution we delivered back then was dedicated to back pressure. During one of our re-occurring innovation days we tried to figure out how much easier the back pressure challenge would have been if Akka Streams would have been available.

Akka Streams in a nutshell

In case you are new to Akka Streams, here is some basic information to help you understand the rest of the blog.

The core ingredients of a Reactive Stream consist of three building blocks:

  • A Source that produces some values
  • A Flow that performs some transformation of the elements produced by a Source
  • A Sink that consumes the transformed values of a Flow

Akka Streams provide a rich DSL through which transformation pipelines can be composed using the mentioned three building blocks.

A transformation pipeline executes asynchronously. For that to work it requires a so-called FlowMaterializer, which will execute every step of the pipeline. A FlowMaterializer uses Actors for the pipeline's execution, even though from a usage perspective you are unaware of that.

A basic transformation pipeline looks as follows:


  import akka.stream.scaladsl._
  import akka.stream.FlowMaterializer
  import akka.actor.ActorSystem

  implicit val actorSystem = ActorSystem()
  implicit val materializer = FlowMaterializer()

  val numberReverserFlow: Flow[Int, String] = Flow[Int].map(_.toString.reverse)

  numberReverserFlow.runWith(Source(100 to 200), ForeachSink(println))

We first create a Flow that consumes Ints and transforms them into reversed Strings. For the Flow to run we call the runWith method with a Source and a Sink. After runWith is called, the pipeline starts executing asynchronously.

The exact same pipeline can be expressed in various ways, such as:


    //Use the via method on the Source to pass in the Flow
    Source(100 to 200).via(numberReverserFlow).to(ForeachSink(println)).run()

    //Directly call map on the Source.
    //The disadvantage of this approach is that the transformation logic cannot be re-used.
    Source(100 to 200).map(_.toString.reverse).to(ForeachSink(println)).run()

For more information about Akka Streams you might want to have a look at this Typesafe presentation.

A simple reverse proxy with Akka Streams

Let's move back to our initial quest. The first task we tried to accomplish was to create a stream that accepts data from an incoming TCP connection and forwards it to a single outgoing TCP connection. In that sense this stream was supposed to act as a typical reverse proxy that simply forwards traffic to another connection. The only remarkable quality compared to a traditional blocking/synchronous solution is that our stream operates asynchronously while preserving back-pressure.

import java.net.InetSocketAddress
import akka.actor.ActorSystem
import akka.stream.FlowMaterializer
import akka.stream.io.StreamTcp
import akka.stream.scaladsl.ForeachSink

implicit val system = ActorSystem("on-to-one-proxy")
implicit val materializer = FlowMaterializer()

val serverBinding = StreamTcp().bind(new InetSocketAddress("localhost", 6000))

val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
      println(s"Client connected from: ${connection.remoteAddress}")
      connection.handleWith(StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow)
}
val materializedServer = serverBinding.connections.to(sink).run()

serverBinding.localAddress(materializedServer)

First we create the mandatory instances every Akka reactive Stream requires, which is an ActorSystem and a FlowMaterializer. Then we create a server binding using the StreamTcp Extension that listens to incoming traffic on localhost:6000. With the ForeachSink[StreamTcp.IncomingConnection] we define how to handle the incoming data for every StreamTcp.IncomingConnection by passing a flow of type Flow[ByteString, ByteString]. This flow consumes ByteStrings of the IncomingConnection and produces a ByteString, which is the data that is sent back to the client.

In our case the flow of type Flow[ByteString, ByteString] is created by means of the StreamTcp().outgoingConnection(endpointAddress).flow. It forwards a ByteString to the given endpointAddress (here localhost:7000) and returns its response as a ByteString as well. This flow could also be used to perform some data transformations, like parsing a message.

Parallel reverse proxy with a Flow Graph

Forwarding a message from one connection to another will not meet our self-defined requirements. We need to be able to forward messages from a single incoming connection to a configurable number of outgoing connections.

Covering this use case is slightly more complex. To make it work we use the flow graph DSL.


  import java.net.InetSocketAddress
  import akka.util.ByteString
  import akka.stream.io.StreamTcp
  import akka.stream.scaladsl._
  import akka.stream.scaladsl.FlowGraphImplicits._

  private def parallelFlow(numberOfConnections:Int): Flow[ByteString, ByteString] = {
    PartialFlowGraph { implicit builder =>
      val balance = Balance[ByteString]
      val merge = Merge[ByteString]
      UndefinedSource("in") ~> balance

      1 to numberOfConnections map { _ =>
        balance ~> StreamTcp().outgoingConnection(new InetSocketAddress("localhost", 7000)).flow ~> merge
      }

      merge ~> UndefinedSink("out")
    } toFlow (UndefinedSource("in"), UndefinedSink("out"))
  }

We construct a flow graph that uses the junction vertices Balance and Merge, which allow us to fan the stream out into several other streams. For each of the parallel connections we want to support, we create a branch starting at the Balance vertex, followed by an outgoing-connection flow, which is then merged back with the Merge vertex.

From an API perspective we faced the challenge of how to connect this flow to our IncomingConnection. Almost all flow graph examples take a concrete Source and Sink implementation as their starting point, whereas the IncomingConnection exposes neither a Source nor a Sink; it only accepts a complete flow as input. Consequently, we needed a way to abstract over the Source and Sink our fan-out flow requires.

The flow graph API offers the PartialFlowGraph class for that, which allows you to work with abstract Sources and Sinks (UndefinedSource and UndefinedSink). It took us quite some time to figure out how they work: simply declaring an UndefinedSource/Sink without a name won't do. You have to give the UndefinedSource/Sink a name, and it must be identical to the name used in the UndefinedSource/Sink passed to the toFlow method. A bit more documentation on this topic would help.

Once the fan-out flow is created, it can be passed to the handleWith method of the IncomingConnection:

...
val sink = ForeachSink[StreamTcp.IncomingConnection] { connection =>
      println(s"Client connected from: ${connection.remoteAddress}")
      val parallelConnections = 20
      connection.handleWith(parallelFlow(parallelConnections))
    }
...

As a result, this implementation delivers all incoming messages to the endpoint system in parallel while still preserving back-pressure. Mission completed!

Testing the Application

To test our solution we wrote two helper applications:

  • A blocking client that pumps as many messages as possible into a socket connection to the parallel reverse proxy
  • A server that delays its responses by a configurable latency in order to mimic a slow endpoint; the parallel reverse proxy forwards messages via one of its connections to this endpoint (a minimal sketch of such a delaying endpoint follows below)
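The sources of these helpers are not shown in this post. As a rough idea of what the second one can look like, here is a minimal sketch of a delaying echo endpoint built with plain blocking sockets; the object name, the port 7000 and the 50 ms latency are made up for illustration and are not the values used for the measurements below.

  import java.net.ServerSocket

  // Hypothetical stand-in for the slow endpoint: echoes every received chunk back
  // after an artificial delay, so the reverse proxy in front of it is slowed down
  // and back-pressure kicks in.
  object DelayingEchoServer extends App {
    val latencyMillis = 50                  // made-up latency to mimic a slow endpoint
    val server = new ServerSocket(7000)

    while (true) {
      val socket = server.accept()          // handle each connection on its own thread
      new Thread(new Runnable {
        def run(): Unit = {
          val in  = socket.getInputStream
          val out = socket.getOutputStream
          val buffer = new Array[Byte](1024)
          var read = in.read(buffer)
          while (read != -1) {
            Thread.sleep(latencyMillis)     // simulate processing latency
            out.write(buffer, 0, read)      // echo the chunk back to the proxy
            out.flush()
            read = in.read(buffer)
          }
          socket.close()
        }
      }).start()
    }
  }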

The following chart depicts how throughput increases with the number of connections. Due to the nondeterministic concurrent behavior there are some spikes in the results, but the trend shows a clear correlation between throughput and the number of connections:

[Figure: performance chart showing throughput versus the number of connections]

End-to-end solution

The end-to-end solution can be found here.
By changing the numberOfConnections variable you can see the impact on performance yourself.

Check it out! ...and go with the flow ;-)

Information about TCP back pressure with Akka Streams

At the time of this writing there was not much information available about Akka Streams, as it is one of the newest toys from the Typesafe factory. Below are some valuable resources that helped us get started:

Categories: Companies

solve your own problems, first

Derick Bailey - new ThoughtStream - Wed, 01/14/2015 - 13:00

I do a lot of work on code that is open source and I consider myself very lucky in being able to do that. I try to open source as much stuff as I can, but not everything makes the cut. There are a lot of things that never see the light of day beyond the project in which they were created. That’s ok, though. Not every piece of code I write should be written so that others can use it – even if the idea is reusable.


I Don’t Care About Your Needs… Yet

One of the most important lessons that I’ve learned in software development, is that I need to solve my own problems first. This sounds extremely selfish and completely counter to the idea of open source and giving back to the community – and it is.

But once again, that’s ok.

Very few of us are paid to work on open source projects. It is a rare thing, indeed. Even if we are able to work on open source for a company or client project, it is usually done as a means to solve the current project’s needs.

When I sit down to work on my client projects, for example, I don’t set out to write open source things that are unrelated. Rather, I intend to solve the problems and implement features and solutions that my client needs. When I am solving my client project needs, I usually don’t care about how the code I am writing will affect other developers that might want to use the same code. I am concerned with getting my client project right. Other people and their needs will have to take a seat and wait their turn.

I Might Care About Your Needs, But Not Your Code

Sometimes the open source world and the client project overlap. When these worlds do overlap, I may take the time to write code in a way that is re-usable by other people. I try to open source this kind of code whenever I can, and when I do this I do try to keep other people in mind.

Keeping other people in mind usually doesn’t change how the code looks, though – at least, not at first. The single largest change that this causes in the initial stages of writing open source, is documentation.

People need to know how to use my code – how to get started and what methods to call, when. This has a much larger impact on others than, say, having the most elegant or flexible API to work with. If no one knows how to use it, the API design doesn’t matter much.

Ok, Now I Care About Your Code

When (if… and that’s a big if – I release a lot of open source code that zero other people ever use) someone else finds something useful in what I’ve written but it doesn’t quite meet their current needs, then I start to care about what their needs are. If it is something that I can change and it would make sense within the focus of my project, then I may add it or accept a pull request for it. Documentation changes are always welcome, too, as getting other people’s perspectives on what needs to be known will help others down the line.

Eventually, I will care about other people’s needs in my open source projects – but it takes a while to get there.

I’m Solving My Own Problems, Not Yours

If you look at the code that I have released as open source recently, there is very little that is new. Sure, it may be a new project or a new spin on an idea when one particular person looks at it. But the truth is, most of the ideas that I am open sourcing have been baked into my systems for a long time – but baked in as part of that system, not as something re-usable.

I wrote a MongoDB database migration framework, for example. This is definitely not a new idea to the community or to me. I wrote the first version of this more than a year ago, but rewrote it when I needed it in new projects recently.

Authentication and authorization for NodeJS web apps? Again, not a new idea. I used other people’s code for a long time, until the frameworks I used no longer met my needs. I wrote the first version of my authorization library over a year ago and baked it into my systems. The authentication library was new to me in NodeJS, but something I had done a dozen times in .NET and Ruby. Both of these got fresh rewrites when I needed them in multiple places.

All of my most successful open source projects have come from me solving my own problems, first. Once I know how to solve a problem and I see that I am doing the same thing in multiple places / projects, then I look at making the solution re-usable. I’m lucky in that I get to open source most of my solutions like this, but not everyone has that luxury (or wants it).

You Should Solve Your Own Problems, Not Mine

Solve your own problems, first, for the current applications and systems.

Don’t set out to write frameworks and libraries the first time you run into a problem – or even the second time. Wait until you see the need for the same solution in multiple projects before you try to extract something reusable. Your extracted libraries and frameworks will be much better for it.

– Derick

Categories: Blogs

Taming the Cost Curve with Tests

Scrum 4 You - Wed, 01/14/2015 - 08:30

During a conversation among colleagues, the topic of "software quality" came up once again. Each of us had brought along a story from his own project and shared it with the others. We were amazed at how hard it still is for software teams to deliver working software on a regular basis. To lead by example, we decided to write this post through so-called "pair blogging". What does that mean? Two colleagues sit next to each other with their notebooks in a Google document and write for all they are worth. And with that, on to the actual topic.

We have the impression that even today many companies, or rather their management, shy away from the initial investment in a good software delivery process. With legacy systems, the fear of a long and expensive migration project is often even greater. The temptation to develop software quickly without any real regard for quality principles takes its revenge at the latest when customers request their first changes. At that point the ratio of the initial investment to the ever-increasing cost of each individual change tips over. The following diagram, which shows the "cost of change" curves by Barry Boehm, Alistair Cockburn, Scott Ambler and Kent Beck, makes this phenomenon clearly visible.

[Figure: cost-of-change curves by Barry Boehm, Alistair Cockburn, Scott Ambler and Kent Beck]

It is neither necessary nor sensible to launch a large project just to introduce test automation or continuous delivery. A promising iterative approach is characterized by a quick return on investment and low risk. In small steps, always improve exactly the part where the pain is greatest. Which component causes the most defects in testing? Cover that one with automated tests, and for now only that one. Where does the deployment keep going wrong? Automate the deployment for exactly that system. Such investments pay off immediately, and at the same time we learn how to deal with the next problem. By always solving the currently biggest problem, a framework for agile software development emerges piece by piece.

For this, teams need the following three conditions:

  1. A sense of responsibility for quality and delivery
  2. The freedom and the space to take care of it in a self-organized way
  3. The necessary skills and knowledge

Management is called upon to create these conditions. The experts for establishing agile development practices, however, are you in the teams – just start with the first small step and take pleasure in the result of your work.

This post was written in pair blogging with Frank Janisch.

Categories: Blogs

Help! My Company is Stuck… (Part 2)

Illustrated Agile - Len Lagestee - Wed, 01/14/2015 - 03:00

In Part 1 of Help! My Company is Stuck, we discussed a few of the stagnating scenarios found when organizations are attempting transformative movement towards greater agility and improved culture. In part 2, we’ll dig into a few of the more challenging situations you may be experiencing.

Admittedly, many of these will require a coordinated, herculean, and multi-year effort to fix. If the will of the people to change is not there, sadly, the organization may not be able to become “unstuck.” This doesn’t mean it can’t be successful but it will probably be a painful experience. I also realize some of the following suggestions are simplistic but perhaps you can use them to get the right conversations started.

Lingering systemic dysfunctions. These are the hard, often controversial, things such as forced rankings, archaic performance management systems, and legacy yearly project budgeting procedures. The biggest of these, in my opinion, is the covert comparison or ranking of the people in the workforce. I have written about this in the past (I’m not alone) and until this approach to performance measurement is radically changed or abolished, organizations will continue to struggle to create an atmosphere conducive to full employee productivity and satisfaction.

Begin conversations early and often with Senior Leadership and Human Resources. Jurgen Appelo and Management 3.0 are a good place to start for cutting-edge thinking on building revolutionary systems around the needs of your employees – instead of hindering and demoralizing them.

One area or department is “more agile” than another. Agile is often introduced in one department while others are left to adjust to this new way of thinking and working. When this occurs, two or more “ecosystems” emerge with each out of sync with the other. Tension often builds between these ecosystems until ultimately, walls are raised and greater dysfunction emerges.

Expand transformation efforts into sales, business development, finance, marketing, operations, and human resources as soon as possible. Initially, this may just be a communication of transformation progress or a few initial ideas for new ways of working together. Find catalysts and advocates in other departments to begin experimenting and co-creating new ways of working together. They are out there.

Leaders defaulting to obsolete leadership styles. Many leaders continue to exhibit command-and-control (and other damaging) behaviors. Industrial-age management styles and techniques will often clash with a movement toward greater agility, and the casualty of this conflict is the people being led. In many organizations, a new generation of leadership thinking will be necessary.

Deliberate focus on developing and transforming leadership roles. I recently posted a PDF on Servant Leadership and Agility. While Servant Leadership is certainly not the only leadership style to focus on, it is a good place to start. Here is a link to a view of how Servant Leadership principles connect with other principles (such as agile and lean) to support an ecosystem of agility. Work with your training department or develop your own leadership training materials but get started by putting a strong emphasis on radically changing the leadership message at your organization.

Managers in an abyss. Even those people considered to be strong servant leaders aren’t sure what their purpose is in an agile organization. They were the ones assigning work and determining how best to build things. Now, they aren’t sure where they belong or what their purpose is. Often, the people who report to them are working on teams where their manager has no visibility into their work.

Connect with and coach your managers. Coach managers on their importance in developing and supporting people. Guide them to the importance of their presence in sprint review sessions and how they can be amazing encouragers of their people. Transformations gather a tremendous boost when your managers are active and engaged in the movement. They are often the most eager to change but the last to be invited.

The trigger for this post (and part one) is the number of people who have recently been sharing stories with me of how their workplace is leaving them emotionally and physically drained. High blood pressure, sleeplessness, an unsustainable workload, and family tensions are just a few of the side effects I am hearing about. I believe we need a few passionate catalysts to begin moving organizations out of the current arrangement many people are in. Steve Denning says it best.

If an organization is expecting any transformation (agile or otherwise) to be the spark for a workplace renaissance or a dramatic change in culture, the desire to overcome these scenarios (and many more not covered here) must be overwhelming.

Becoming a Catalyst - Scrum Master Edition

The post Help! My Company is Stuck… (Part 2) appeared first on Illustrated Agile.

Categories: Blogs

Test Management in Agile Teams

Danube - Tue, 01/13/2015 - 20:25

CollabNet TeamForge provides continuous feedback loops through all phases of the software life cycle, from plan, code, build and test to deploy. TestLink, a widely adopted open source product for test management, is now tightly integrated with TeamForge and provides the ability to create test case trackers and associate them with requirements. With TeamForge, users can execute test cases and store test results. TestLink uses a tracker to store test cases and also ties Test Plans to builds. With this integration, the test management features are available in TeamForge, providing a comprehensive end-to-end ALM solution. For more information, see the blog “Test Management in TeamForge”.

This article describes how the traditional practice of managing test cases and plans has shifted in the agile world. Here is an illustration of how agile teams can benefit from adopting the Test Management feature as they manage the full software development life cycle. Imagine a company X that is building a mobile product in a release cycle planned for five sprints. A sprint typically spans two weeks, and the last sprint is a "hardening sprint" in which the team stabilizes the product. There is a planning week to start with, followed by four feature development sprints and a final hardening sprint. Here is a picture that shows the release plan in a timeline view.

[Figure: release plan in timeline view]

The Scrum team works on one story per sprint. Each story has a Test Suite, which is a container for Test Cases. The Test Cases cover different scenarios and are attached to the Test Suite; the scenarios can be happy-path or negative test cases. In the diagram below you can see Story A, Story B, Story C and Story D with Test Suite A, Test Suite B, Test Suite C and Test Suite D respectively. Individual test cases are attached to Test Suites based on the complexity of the stories.

[Figure: stories with their test suites and attached test cases]

In this example, the team uses Jenkins with TestLink for Continuous Integration (CI) to build and run tests multiple times a day and provide early feedback to the Scrum teams. The team then creates a Test Plan for every sprint; in our case the plans are named Sprint 1, Sprint 2, Sprint 3 and Sprint 4. There will be multiple builds per day and several builds during the two-week sprint. The picture below illustrates the process.

[Figure: CI builds feeding the per-sprint test plans]

While a sprint is in progress there are test cases that need to be executed for the stories in the current sprint, but sometimes a few test cases from a past release or a past sprint also need to be executed for regression. The test cases are assigned to a plan, and the tests in the plan are executed during the two-week sprint. Consider both scenarios: Sprint 1 has four test cases assigned to its test plan, whereas Sprint 4 has two test cases that belong to Sprint 4 and one test case from Sprint 1. This means you can reuse test cases from prior sprints in any test plan. During the hardening sprint there are test cases specific to that sprint plus one test case from a prior release. The diagram below shows how plans are composed of Test Cases.

[Figure: test plans composed of current and reused test cases]

The Scrum teams can easily create new plans sprint over sprint, pull in the test cases they need to execute, and see the results by sprint. Initially the Scrum team executed all test cases manually and marked them PASS/FAIL. Later, the team decided to automate most of the test cases, which was accomplished using automated tools as shown in the following illustration.

[Figure: automated and manual test execution]

If we pull a report by Test Plan, we get a consolidated report of automated and manual test results.

[Figure: consolidated report by Test Plan]

Some agile teams like to change test cases many times within a sprint, and for them a single Test Plan per sprint may feel too traditional. In that case you can have multiple Test Plans within a sprint, such as Sprint 1.1, Sprint 1.2, Sprint 1.3 and Sprint 1.4, as shown below. The test cases can be unique to each of these plans, reused between plans, or cumulative from plan 1.1 to 1.4.

[Figure: multiple Test Plans within a single sprint]

Summary

The CollabNet TeamForge-TestLink integration builds on TeamForge's capability to support the software development life cycle from plan, code, build and test to deploy. With the TestLink integration, TeamForge now has the extended capability to create test case trackers, associate them with requirements, and provide traceability and test management right from requirements through release. This article illustrated how tests can be managed in an agile team and how automation is embedded in the execution of Test Cases.

I’d like to hear what you think and what features you would like to see added. Please let me know at venkatj@collab.net.

Follow CollabNet on Twitter and LinkedIn for more insights from our industry experts #AskCollabNet.

The post Test Management in Agile Teams appeared first on blogs.collab.net.

Categories: Companies
