
Feed aggregator

Do you know your stakeholders?

Growing Agile - Tue, 07/28/2015 - 13:08
If you are lucky, you have direct access to your customers. In our experience very few Product Owners have this luxury – especially if they are in large organisations. Most Product Owners we come across have various business stakeholders and face the daunting task of managing their often conflicting demands and schedules. Product Owners know […]
Categories: Companies

Developer Alumni

It’s not unusual for a developer to change jobs every two or three years, or even more often. Up until 2004 I had never thought of a company keeping up with you after you left, or having an ‘Alumni’ concept. The consulting outfit I left had instituted an alumni program and sent me a tiny flash stick with the company logo on it about a year after I left. It was a small thing, but I was impressed they bothered with any effort at all. In addition, they sent out an email every month or so with an update on how their business was going. About six years later I ended up back at the same company, partially because they had maintained a good relationship with me, and I realized I had left not so much because of them, but more for an opportunity to manage a development team.

I came across this practice again in the most recent episode of The Front Side Podcast, where they explained they were trying to build a company culture that allowed you to simply tell your manager you were going on an interview. This is exactly the sort of thing that gets you escorted out of many big companies, but they seem to be experimenting with a very open and honest culture that admits, especially in consulting, that many of your developers will leave at some point, sometimes even to your clients. They wanted an environment where a developer could admit they wanted a new experience the consulting business couldn’t provide, and they left the door open down the road to welcome them back as Alumni.

I think it’s a great experiment and I’m honestly impressed if they can produce a company culture where an employee feels safe admitting they’re going on an interview. I love that companies are experimenting and blogging/podcasting about how it’s going. I also think the alumni idea is a great way to keep people motivated and maintain a good relationship with your local development community. Some of the best recruiters I had were former employees who could explain why they left the company, but also why it was a great opportunity if you were a good fit.

Categories: Blogs

Transparency - Two Way Visibility

Agile Complexification Inverter - Mon, 07/27/2015 - 21:44
What does the value of Transparency really mean?
Nextgov: How do you define transparency?
Fung: My definition is quite a bit different from the conventional wisdom about transparency. A transparency system is designed to allow people to improve the quality of decisions they make in some way, shape or form, and it enables them to improve their decisions to reduce the risks they face or to protect their interests. Some of those decisions are about political accountability but some are in private life, like what food to buy or what doctor to go to.
-- Archon Fung, professor at Harvard University's John F. Kennedy School of Government who studies government transparency.

Does your company practice fair pay? Here's what one worker brought to Google that made a difference in transparency at the search giant.
Tell Your Co-Workers How Much You Make!  There's no law against it and it increases the chances you'll be paid fairly.
Does the Agile Manifesto imply some form of organizational transparency? I believe so, yes.  Here's what Jeff Sutherland has to say about the topic, look for the Individual and Interaction section.  Agile Principles and Values by Jeff Sutherland on MSDN.
Scrum's 5 core values list the concept of Openness.  Is this not very similar to Transparency?
There are lots of synonyms - visibility, openness, observable, apparent, etc.
Does this value of transparency imply that the information flows in both directions, up and down the organizational hierarchy: from line workers to managers and directors, as well as from the CEO to directors and wage earners?

See Also:
Elements of an Effective Backlog
Good Journalism is Transparent and Objective by John Zhu

Categories: Blogs

The monolithic frontend in the microservices architecture

Xebia Blog - Mon, 07/27/2015 - 17:39

When you are implementing a microservices architecture you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can’t be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each piece of functionality. Your former monolith has been carved down to the bare essentials for providing the user interface, which is your public-facing web frontend. This microservice has only one function: providing the user interface. It can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However, you're not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

Single frontend

With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Backend teams can't deliver business value without the frontend being updated, since an API without a user interface doesn't do much. More backend teams mean more new features, and therefore more pressure on the frontend team(s) to integrate them. To compensate for this it is possible to make the frontend team bigger or to have multiple teams working on the same project. But because the frontend still has to be deployed in one go, the teams cannot work independently: changes have to be integrated into the same project, and the whole project needs to be tested since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend themselves and submit a pull request. This helps in dividing the work, but to do it effectively a lot of knowledge has to be shared across the teams to keep the code consistent and at the same quality level. That would basically mean the teams are not working independently after all.

Besides not being able to scale, there is also the classic overhead of separate backend and frontend teams. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated before your customers can even use the feature. If your frontend is small enough, it can be maintained by a team that is also responsible for one or more of the services coupled to it. Then there is no cross-team communication overhead. But because the frontend and the backend cannot be worked on independently, you are not really doing microservices. For an application that is small enough to be maintained by a single team, it is probably a good idea not to do microservices at all.

If you do have multiple teams working on your platform, having multiple smaller frontend applications would avoid this problem. Each frontend would act as the interface to one or more services, and each of those services would have its own persistence layer. This is known as vertical decomposition. See the image below.

frontend-per-service

When splitting up your application you have to make sure you are making the right split, which is the same concern as for the backend services. First you have to recognize the bounded contexts into which your domain can be split. A bounded context is a partition of the domain model with a clear boundary: within the bounded context there is high coupling, and between different bounded contexts there is low coupling. These bounded contexts will be mapped to microservices within your application. This way the communication between services is also limited; in other words, you limit your API surface. This in turn limits the need to make changes to the API and makes truly independently operating teams possible.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained and the application should behave as a single application. However, the application and the development team are big enough to justify a microservices architecture. Examples of such big client-facing applications can be found in online retail, news, social networks and other online platforms.

Although a total split of your application might not be possible, it might be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely, you split it up into components which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However then you risk coupling services via the common code. This could even result in not being able to deploy and release separately. It will also require some coordination regarding the shared code.

Therefore, when you are going to share code it is generally a good idea to think about the API that it is going to provide. Calling your shared library “common”, for example, is generally a bad idea. The name suggests developers should put any code which can be shared by some other service into the library. Common is not a functional term but a technical one, which means that the library doesn’t focus on providing a specific functionality. This results in an API without a specific goal, which will be subject to frequent change. That is especially bad for microservices, because multiple teams have to migrate to a new version whenever the API breaks.

Although sharing code between microservices has disadvantages, practically all microservices will share code by using open source libraries. Because such code is used by a lot of projects, special care is taken not to break compatibility. When you are going to share code, it is a good idea to hold your shared code to the same standards. When your library is not specific to your business, you might as well release it publicly; that encourages you to think twice about breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independent of each other. Again it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

composite-design

Admittedly this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be able to deploy fully independently and you want to make sure you do a clean separation which ensures there is no or only limited two way communication needed between the components.

It is possible to integrate at development time, at deployment time or at runtime. At each of these integration stages there are different trade-offs between flexibility and consistency. If you want to have separate deployment pipelines for your components, you want a more flexible approach like runtime integration. If it is likely that different versions of components might break functionality, you need more consistency, which you get with development-time integration. Integration at deployment time could give you the same flexibility as runtime integration, provided you are able to integrate different versions of components on the different environments of your build pipeline. However, this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development-time integration. It doesn't give you much flexibility with regard to separate deployment; it is still a classical integration technique. But since software architecture should never be a goal, but a means to an end, it can still be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load the HTML and other dependencies of a component. The main application then only needs to know where to retrieve the component from, which is a good example of a small API surface. Of course, doing a request after page load means that users might see components loading. It also means that clients that don’t execute JavaScript will not see the content at all: bots and spiders that don’t execute JavaScript, real users who block JavaScript, or users of a screen reader that doesn’t execute JavaScript.
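As an illustration, here is a minimal sketch of runtime integration in plain JavaScript. The component URL and element id are made-up examples, and it assumes a browser that supports the fetch API (older browsers would use XMLHttpRequest or a library instead).

// Fetch a component's HTML at runtime and inject it into the page.
// The only contract between the main application and the component
// is the URL it is served from.
function loadComponent(url, targetElement) {
  return fetch(url)
    .then(function (response) {
      if (!response.ok) {
        throw new Error('Failed to load component: ' + response.status);
      }
      return response.text();
    })
    .then(function (html) {
      // Scripts and stylesheets inside the fragment need separate handling.
      targetElement.innerHTML = html;
    });
}

// Usage: the main page only knows where to retrieve the component from.
loadComponent('/components/recommendations',
              document.getElementById('recommendations'));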

When runtime integration via JavaScript is not an option, it is also possible to integrate components using a middleware layer. This layer fetches the HTML of the different components and composes it into a full page before returning the page to the client. This means that clients always retrieve all of the HTML at once. An example of such middleware is the Edge Side Includes feature of Varnish. To get more flexibility it is also possible to implement such a server yourself; an open source example is Compoxure.
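To make the composition idea concrete, below is a rough Node.js sketch of such a layer. This is not Compoxure's actual API, the fragment URLs are hypothetical, and it assumes a Node version that provides a global fetch.

// compose.js - assembles a page from component fragments before
// sending it to the client (illustration only).
const http = require('http');

const fragmentUrls = [
  'http://header-service/fragment',
  'http://product-list-service/fragment'
];

http.createServer(function (req, res) {
  // Fetch all fragments in parallel.
  Promise.all(fragmentUrls.map(function (url) {
    return fetch(url).then(function (r) { return r.text(); });
  })).then(function (fragments) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<html><body>' + fragments.join('\n') + '</body></html>');
  }).catch(function (err) {
    res.writeHead(502);
    res.end('Failed to compose page: ' + err.message);
  });
}).listen(8080);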

Once you have your composite frontend up and running you can start to think about the next step: optimization. Having separate components from different sources means that many resources have to be retrieved by the client. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine resources. Again, this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: split along bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on a single big frontend it might be difficult to make this decomposition, but if you want to deliver faster with multiple teams in a microservices architecture, you cannot exclude the frontend from decomposition.

Resources

Sam Newman - From Macro to Micro: How Big Should Your Services Be?
Dan North - Microservices: software that fits in your head

Categories: Companies

Is It Ready?

Leading Agile - Mike Cottmeyer - Mon, 07/27/2015 - 17:16

In an agile transformation, one of the first things we work towards is to create an ability to deliver in a predictable manner. As described in our compass and our journey, there is a clear path for organizations that embark on an agile transformation. By becoming predictable, an organization can make and keep promises. In essence, we are working to stabilize the value delivery system.

There are many things that can contribute to an unstable value delivery system. Ready backlog is one of these. The following diagram illustrates three fundamental elements of agile.

LeadingAgile_Clarity_Accountability_Measurable_Progress

Clarity means that we have a system in which the work we’re asking delivery teams to deliver is clear and well understood. Structure means we have created cross-functional teams that have everything necessary to create an increment of working tested software. The ability to see working tested code is how we demonstrate measurable progress.

Our goal is to provide backlogs that are well understood by delivery teams. When we have ready backlog, delivery teams are able to make and meet their commitments. One contributor to instability is that backlog items are not truly ready but the team chooses to pull them into a sprint anyway. Unready work creates churn for the delivery team: they spend more time trying to figure out what they are supposed to do than actually doing the work to deliver. They may guess and build based on their understanding, which may be, and often is, flawed.

The Definition of Ready

How do we know the backlog items are ready? Through a definition of ready. Most people are familiar with the term “definition of done”, but many organizations I’ve worked with are not as familiar with the definition of ready.

Just as a definition of done is used to create alignment across a delivery team on what it means to be done with a backlog item, the definition of ready provides clarity to a product owner or product owner team on what it means to create ready backlog items. What might be in a definition of ready? This varies, but here is a list of potential items:

  • All stories must meet the INVEST criteria
  • A clearly defined user
  • Story has been estimated
  • Acceptance criteria with examples
  • Acceptance criteria in a certain format such as “given…, when…, then…”
  • External information that can provide context such as:
    • User journeys that provide context for the backlog item
    • Screen mockups
    • Architectural drawings
    • Business rules

The definition of ready will be specific to an organization and may even vary across the organization. The definition of ready may need to be more inclusive when working with teams that have little domain knowledge. Maybe it is lighter for teams that have deep product knowledge and a long working relationship with the product owner. The real test is to make sure the backlog items have enough clarity so that the delivery teams are able to make a commitment for delivering items within a sprint.

The definition of ready can evolve over time and is a good mechanism to instill new behaviors. Maybe UI mockups are something that isn’t normally provided but a delivery team believes is needed: add it to the definition of ready. If at some point this is just the way work is done and the new behavior now exists, then remove it. It is up to you; just make sure it creates clarity on what is expected.

Having clarity in the backlog is a critical element of a value delivery system. It certainly isn’t the only thing that is needed but we find that backlog clarity is necessary for creating a predictable delivery system. Give it a try if you don’t have one and see if it helps.

By the way, here is a job aid that might be useful for crafting your definition of ready. This job aid is designed to provide guidance on how to create clear user stories.

The post Is It Ready? appeared first on LeadingAgile.

Categories: Blogs

#NoEstimates Unplugged

TV Agile - Mon, 07/27/2015 - 17:05
Woody Zuill and Vasco Duarte share their experiences with #NoEstimates and describe how #NoEstimates is just one small part of getting back to Agile-as-if-you-meant-it. This presentation tackles #NoEstimates implications on teams, stakeholders, product management and even business. But most of all we talk about the implications to us as human beings. Video producer: http://oredev.org/ Further […]
Categories: Blogs

Scaling Agile at Spotify

Scrum Expert - Mon, 07/27/2015 - 16:54
Spotify has been growing quickly as a company and is continuously experimenting with ways of making the company work as effectively as possible. Creating strong team autonomy and making sure everybody is aligned requires effective and strong servant leadership. Spotify has attracted a lot of attention for its way of organizing into squads, chapters, tribes and guilds. This presentation explains how management and leadership work in this organization. Video producer: http://oredev.org/
Categories: Communities

Video: The Myths of Scrum – Scrum is for IT Projects

Learn more about our Scrum and Agile training sessions on WorldMindware.com

Subscribe to our BERTEIG / WorldMindware YouTube channel – we have over 20 more videos planned on the Myths of Scrum.

Try out our Virtual Scrum Coach with the Scrum Team Assessment tool - just $500 for a team to get targeted advice and great how-to information. Please share!

The post Video: The Myths of Scrum – Scrum is for IT Projects appeared first on Agile Advice.

Categories: Blogs

Is your Scrum User Group really a user group?

Scrum Breakfast - Mon, 07/27/2015 - 15:30
Ken Schwaber recently chided the Scrum Alliance for having a trademark on the term "Scrum User Group." Man! There is something about the way divorced couples can fight forever through their kids. I'd like to ignore the couple for a moment, and talk about the children: What is the state of user groups in the Scrum and Agile community?

After a period of working too hard, I have been able to attend a few user group meetings this year. A couple of patterns have stood out for me. Patterns that don't feel right, but maybe I am exaggerating? Do you recognise any of these?
  • The user group is associated with an agile consultancy.
  • The user group event starts with some promotion of said consultancy.
  • The user group doesn't have a website or even a meet-up page.
  • The user group website is mostly promoting high-value courses.
  • Membership is free (just sign up for our mailing list).
  • The user group event is mostly attended by beginners.
  • The only really experienced Agilist present is the group's host.
  • The user group events are mostly talks by managers, coaches or trainers (in other words, they are a form of self-promotion).
Is a user group more than a marketing arm of a consultancy? Or am I being too harsh? What is your experience with Scrum user groups? What do you think a good user group should be?



Categories: Blogs

Super fast unit test execution with WallabyJS

Xebia Blog - Mon, 07/27/2015 - 12:24

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to have a coverage percentage near 100%, which has led to 4000+ unit tests, including service specs and view specs. You may know that AngularJS - when abused a bit - is not suited for super large applications, but since we tamed the beast and have an application with more than 16,000 lines of high-performing AngularJS code, we want to stay in control of the total development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8Ghz MacBook Pro.

We are testing our code continuously, so we came up with a solution: split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization on this blog later. Sharding helps us a lot when we want to run the full unit test suite, i.e. when using it in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).

With such a long unit test cycle, even when running in parallel, many of our developers fdescribe the specs they are working on, so that the feedback is instant. However, this is quite labor intensive, and sometimes an fdescribe is pushed accidentally.

And then.... we discovered WallabyJS. It is just an ordinary test runner like Karma. Even the configuration file is almost a copy of our karma.conf.js.
The difference is in the details. Out of the box it runs the unit test suite in 50 secs, thanks to the extensive use of Web Workers. Then the fun starts.
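For a rough idea of what the configuration mentioned above looks like, here is a minimal wallaby.js sketch based on its files/tests/testFramework options; the file patterns are assumptions for a typical AngularJS project and will differ per codebase.

// wallaby.js - minimal sketch; adjust the file patterns to your project.
module.exports = function (wallaby) {
  return {
    // Application sources (instrumented by Wallaby for coverage).
    files: [
      'app/**/*.js',
      { pattern: 'app/**/*.spec.js', ignore: true }
    ],
    // The specs themselves.
    tests: [
      'app/**/*.spec.js'
    ],
    testFramework: 'jasmine'
  };
};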

Screenshot of Wallaby In action (IntelliJ). Shamelessly grabbed from wallaby.com

I use Wallaby as an IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange indicates partly covered code, and grey means "please write a test for this functionality or I will introduce hard-to-find bugs". Colorblind people see just kale-green squares on every line, since the default colors are not chosen very well, but the colors are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that induce the coverage. When a test fails, it also tells me why.

dialog

A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage go a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby runs your tests continuously and re-runs only the tests relevant to the parts you are working on. In real time.

Categories: Companies

New support to start with iceScrum

IceScrum - Mon, 07/27/2015 - 11:57
Hello everybody, In this blogpost we are going to present you two new features we put at your disposal to start using iceScrum in the good way and understand it better. Starting from scratch with iceScrum can be difficult if you are not familiar at all with the Scrum methodology. That is why we added…
Categories: Open Source

5 Reasons why you should test your code

Xebia Blog - Mon, 07/27/2015 - 10:37

It is just like in mathematics class: when I had to write a proof for Thales’ theorem I wrote “Can’t you see that B has a right angle?! Q.E.D.”, but my teacher still gave me an F.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. Ok, let’s fix it, keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks when you are looking the other way.
If you had tests with 100% code coverage, a red error would have appeared in the console, or - even better - a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.

coverage

100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs make a fair amount of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified - mainly conditions flipped, loops made shorter or longer, etc. - and the test suite is run a second time. If tests fail after a modification has been made, there is an assertion covering that case, which is good.
However, 100% coverage does feel really good if you are an OCD person.
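A tiny made-up Jasmine example of how a mutant gets caught: the code under test checks a boundary, and a mutation tool flips the operator.

// Implementation under test (example).
function isAdult(age) {
  return age >= 18;   // a mutation tool might change this to `age > 18`
}

// The boundary assertion fails for the mutated version, so the mutant
// is "killed" - proof that the spec really asserts something.
describe('isAdult', function () {
  it('treats 18 as adult (boundary case)', function () {
    expect(isAdult(18)).toBe(true);
  });
  it('treats 17 as not adult', function () {
    expect(isAdult(17)).toBe(false);
  });
});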

The better your test coverage and assertion density are, the higher the probability of catching regression bugs. Especially when an application grows, you may catch a lot of regression bugs during development, which is good.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006, and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may make changes to this line - not because he is not funny, but because he just does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal two years later.

Still, every application contains bugs you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, i.e. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces it, then fix the bug. That way it will never happen again unnoticed. You win.
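For instance, a regression spec for the broken users//edit link above might look like this in Jasmine; editUserUrl is a hypothetical helper, shown only to illustrate the pattern.

// Hypothetical link builder that once produced "users//edit"
// because it read a property that did not exist on the user object.
function editUserUrl(user) {
  return 'users/' + user.id + '/edit';
}

// The regression spec reproduces the bug first (it failed before the fix)
// and keeps guarding against it afterwards.
describe('editUserUrl', function () {
  it('includes the user id in the link', function () {
    expect(editUserUrl({ id: 24 })).toBe('users/24/edit');
  });
});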

2. Improve the implementation via new insights

“Premature optimization is the root of all evil” is something you hear a lot. This does not mean that you have to implement your solution pragmatically without code reuse.

Good software craftsmanship is not only about solving a problem effectively, but also about maintainability, durability, performance and architecture. Tests can help you with this. They force you to slow down and think.

If you start writing your tests and have trouble doing so, this may be an indication that your implementation can be improved. Furthermore, your tests make you think about input and output, corner cases and dependencies. So, do you think that you understand all aspects of the super method you wrote that can handle everything? Write tests for this method and better code is guaranteed.

Test Driven Development even helps you optimizing your code before you even write it, but that is another discussion.

3. It saves time, really

Number one excuse not to write tests is that you do not have time for it or your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate code elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you would probably say yes. A total codebase refactoring is a no-go because you cannot oversee which parts of your application will fail. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time, which you could have used for writing tests. But you had no time for writing tests, right? So your crappy implementation stays.

Dilbert bugfix

A bug can always be introduced, even in well-refactored code. How many times did you say to yourself, after a day of hard work, that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs introduced are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests. It saves you time, and developing becomes more fun as you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, and some read the tests. What I like about tests, for example when you are using a framework like Jasmine, is that they give a structured overview of all of a method's features. When you have a separate documentation file, it can be as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation, they forget to update it when a method signature changes, and eventually they stop writing docs.

Developers also do not like to write tests, but tests at least serve more purposes than docs. If you use the test suite as documentation, your documentation is always up to date with no extra effort!
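As a small illustration of that structured overview, nested Jasmine describe/it blocks read almost like documentation; the calculator example below is made up.

// Example implementation whose behaviour the specs describe.
function add(a, b) {
  return a + (b || 0);
}

// The spec names double as always-up-to-date documentation
// of the method's features.
describe('calculator', function () {
  describe('add', function () {
    it('returns the sum of two numbers', function () {
      expect(add(2, 3)).toBe(5);
    });
    it('treats a missing second argument as zero', function () {
      expect(add(2)).toBe(2);
    });
  });
});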

5. It is fun

Nowadays there are no separate testers and developers: the developers are the testers. People who write good tests are also the best programmers. Actually, your test is also a program, so if you like programming, you should like writing tests.
The reason why writing tests may feel non-productive is that it gives you the idea that you are not producing something new.


Is the build red? Fix it immediately!

However, with a modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp. They may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production is done automatically when the tests pass and everything else is OK. With tests you have more confidence that your code is production ready.

A lot of measurements can be generated as well, like coverage and mutation testing, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, fixing it is the first priority, to keep the codebase in good shape. It takes some discipline, but when you get used to it, you have more fun developing new features and making cool stuff.

Categories: Companies

Story Telling with Story Mapping

Once upon a time, a customer had a great buying experience on a website. The customer loved how, from the moment they were on the site to the moment they checked out a product, the process was intuitive and easy to use. The process and design of the customer experience was not by accident. In fact, it was done very methodically using story mapping.

What is Story Mapping
Story Mapping is a practice that provides a framework for grouping user stories based on how a customer or user would use the system as a whole. Established by Jeff Patton, it helps the team understand the customer experience by imagining what the customer process might be. This prompts the team to think through the elements that the customer finds valuable.

Benefits of Story Mapping
Story Mapping is a way to bridge the gap between an idea and the incremental work ahead. It's a great way to decompose an idea into a number of unique user stories. What are some additional benefits of story mapping?
  • It moves away from thinking of functionality first and toward the customer experience first. 
  • It provides the big picture and end-to-end view of the work ahead
  • It's a decomposition tool from idea to multiple user stories. 
  • It asks the team to identify the highest value work from a customer perspective and where you may want the most customer feedback.
  • It advocates cutting only one increment of work at a time, recognizing that feedback from the current increment will help shape subsequent increments.
Getting started with Story Mapping
How might you get started in establishing a story map? It starts by having wall space available on which to lay out the customer experience. Next you educate the team on the story mapping process (see below). It's best to keep to a Scrum team size (e.g., 7 +/- 2), where everyone participates in the process. Then, as a team, follow these high-level steps:
  • Create the “backbone” of the story map.  These are the big tasks that the users engage with. Capture the end-to-end customer experience.  Start by asking “what do users do?”  You may use a quiet brainstorming approach to get a number of thoughts on the wall quickly
  • Then start adding steps that happen within each backbone.  
  • From there explore activities or options within each step.  Ask, what are the specific things a customer would do here?  Are there alternative things they could do?  These activities may be epics and even user stories. 
  • Create the “walking skeleton”. This is where you slice a set of activities or options that gives you the minimum end-to-end value of the customer experience. Only cut enough work to be completed within one to three sprints while still representing customer value.
As you view the wall, the horizontal axis defines the flow along which you place the backbone and steps. The vertical axis under each contains the activities or options, represented by epics and user stories for that particular area. Use short verb/noun phrases to capture the backbones, steps, and activities (e.g. capture my address, view my order status, receive invoice).

Next time your work appears to represent a customer experience, consider story mapping as a way to embrace the customer perspective. Story mapping provides a valuable tool for the team to understand the big picture while decomposing the experience into more bite-size work that allows for optionality when cutting an increment of work. It's another tool in your requirements decomposition toolkit.
Categories: Blogs

Neo4j: From JSON to CSV to LOAD CSV via jq

Mark Needham - Sun, 07/26/2015 - 01:05


In my last blog post I showed how to import a Chicago crime categories & sub categories JSON document using Neo4j’s cypher query language via the py2neo driver. While this is a good approach for people with a developer background, many of the users I encounter aren’t developers and favour using Cypher via the Neo4j browser.

If we’re going to do this we’ll need to transform our JSON document into a CSV file so that we can use the LOAD CSV command on it. Michael pointed me to the jq tool which comes in very handy.

To recap, this is a part of the JSON file:

{
    "categories": [
        {
            "name": "Index Crime",
            "sub_categories": [
                {
                    "code": "01A",
                    "description": "Homicide 1st & 2nd Degree"
                },
            ]
        },
        {
            "name": "Non-Index Crime",
            "sub_categories": [
                {
                    "code": "01B",
                    "description": "Involuntary Manslaughter"
                },
            ]
        },
        {
            "name": "Violent Crime",
            "sub_categories": [
                {
                    "code": "01A",
                    "description": "Homicide 1st & 2nd Degree"
                },
            ]
        }
    ]
}

We want to get one row for each sub category which contains three columns – category name, sub category code, sub category description.

First we need to pull out the categories:

$ jq ".categories[]" categories.json
 
{
  "name": "Index Crime",
  "sub_categories": [
    {
      "code": "01A",
      "description": "Homicide 1st & 2nd Degree"
    },
  ]
}
{
  "name": "Non-Index Crime",
  "sub_categories": [
    {
      "code": "01B",
      "description": "Involuntary Manslaughter"
    },
  ]
}
{
  "name": "Violent Crime",
  "sub_categories": [
    {
      "code": "01A",
      "description": "Homicide 1st & 2nd Degree"
    },
  ]
}

Next we want to create a row for each sub category with the category alongside it. We can use the pipe function to combine the two selectors:

$ jq ".categories[] | {name: .name, sub_category: .sub_categories[]}" categories.json
 
{
  "name": "Index Crime",
  "sub_category": {
    "code": "01A",
    "description": "Homicide 1st & 2nd Degree"
  }
}
...
{
  "name": "Non-Index Crime",
  "sub_category": {
    "code": "01B",
    "description": "Involuntary Manslaughter"
  }
}
...
{
  "name": "Violent Crime",
  "sub_category": {
    "code": "01A",
    "description": "Homicide 1st & 2nd Degree"
  }
}

Now we want to un-nest the sub category:

$ jq ".categories[] | {name: .name, sub_category: .sub_categories[]} | [.name, .sub_category.code, .sub_category.description]" categories.json
 
[
  "Index Crime",
  "01A",
  "Homicide 1st & 2nd Degree"
]
 
[
  "Non-Index Crime",
  "01B",
  "Involuntary Manslaughter"
]
 
[
  "Violent Crime",
  "01A",
  "Homicide 1st & 2nd Degree"
]

And finally let’s use the @csv filter to generate CSV lines:

$ jq ".categories[] | {name: .name, sub_category: .sub_categories[]} | [.name, .sub_category.code, .sub_category.description] | @csv" categories.json
"\"Index Crime\",\"01A\",\"Homicide 1st & 2nd Degree\""
"\"Index Crime\",\"02\",\"Criminal Sexual Assault\""
"\"Index Crime\",\"03\",\"Robbery\""
"\"Index Crime\",\"04A\",\"Aggravated Assault\""
"\"Index Crime\",\"04B\",\"Aggravated Battery\""
"\"Index Crime\",\"05\",\"Burglary\""
"\"Index Crime\",\"06\",\"Larceny\""
"\"Index Crime\",\"07\",\"Motor Vehicle Theft\""
"\"Index Crime\",\"09\",\"Arson\""
"\"Non-Index Crime\",\"01B\",\"Involuntary Manslaughter\""
"\"Non-Index Crime\",\"08A\",\"Simple Assault\""
"\"Non-Index Crime\",\"08B\",\"Simple Battery\""
"\"Non-Index Crime\",\"10\",\"Forgery & Counterfeiting\""
"\"Non-Index Crime\",\"11\",\"Fraud\""
"\"Non-Index Crime\",\"12\",\"Embezzlement\""
"\"Non-Index Crime\",\"13\",\"Stolen Property\""
"\"Non-Index Crime\",\"14\",\"Vandalism\""
"\"Non-Index Crime\",\"15\",\"Weapons Violation\""
"\"Non-Index Crime\",\"16\",\"Prostitution\""
"\"Non-Index Crime\",\"17\",\"Criminal Sexual Abuse\""
"\"Non-Index Crime\",\"18\",\"Drug Abuse\""
"\"Non-Index Crime\",\"19\",\"Gambling\""
"\"Non-Index Crime\",\"20\",\"Offenses Against Family\""
"\"Non-Index Crime\",\"22\",\"Liquor License\""
"\"Non-Index Crime\",\"24\",\"Disorderly Conduct\""
"\"Non-Index Crime\",\"26\",\"Misc Non-Index Offense\""
"\"Violent Crime\",\"01A\",\"Homicide 1st & 2nd Degree\""
"\"Violent Crime\",\"02\",\"Criminal Sexual Assault\""
"\"Violent Crime\",\"03\",\"Robbery\""
"\"Violent Crime\",\"04A\",\"Aggravated Assault\""
"\"Violent Crime\",\"04B\",\"Aggravated Battery\""

The only annoying thing about this output is that all the double quotes are escaped. We can sort that out by passing the ‘-r’ flag when we call jq:

$ jq -r ".categories[] | {name: .name, sub_category: .sub_categories[]} | [.name, .sub_category.code, .sub_category.description] | @csv" categories.json
"Index Crime","01A","Homicide 1st & 2nd Degree"
"Index Crime","02","Criminal Sexual Assault"
"Index Crime","03","Robbery"
"Index Crime","04A","Aggravated Assault"
"Index Crime","04B","Aggravated Battery"
"Index Crime","05","Burglary"
"Index Crime","06","Larceny"
"Index Crime","07","Motor Vehicle Theft"
"Index Crime","09","Arson"
"Non-Index Crime","01B","Involuntary Manslaughter"
"Non-Index Crime","08A","Simple Assault"
"Non-Index Crime","08B","Simple Battery"
"Non-Index Crime","10","Forgery & Counterfeiting"
"Non-Index Crime","11","Fraud"
"Non-Index Crime","12","Embezzlement"
"Non-Index Crime","13","Stolen Property"
"Non-Index Crime","14","Vandalism"
"Non-Index Crime","15","Weapons Violation"
"Non-Index Crime","16","Prostitution"
"Non-Index Crime","17","Criminal Sexual Abuse"
"Non-Index Crime","18","Drug Abuse"
"Non-Index Crime","19","Gambling"
"Non-Index Crime","20","Offenses Against Family"
"Non-Index Crime","22","Liquor License"
"Non-Index Crime","24","Disorderly Conduct"
"Non-Index Crime","26","Misc Non-Index Offense"
"Violent Crime","01A","Homicide 1st & 2nd Degree"
"Violent Crime","02","Criminal Sexual Assault"
"Violent Crime","03","Robbery"
"Violent Crime","04A","Aggravated Assault"
"Violent Crime","04B","Aggravated Battery"

Excellent. The only thing left is to write a header and then direct the output into a CSV file and get it into Neo4j:

$ echo "category,sub_category_code,sub_category_description" > categories.csv
$ jq -r ".categories[] |
         {name: .name, sub_category: .sub_categories[]} |
         [.name, .sub_category.code, .sub_category.description] |
         @csv " categories.json >> categories.csv
$ head -n10 categories.csv
category,sub_category_code,sub_category_description
"Index Crime","01A","Homicide 1st & 2nd Degree"
"Index Crime","02","Criminal Sexual Assault"
"Index Crime","03","Robbery"
"Index Crime","04A","Aggravated Assault"
"Index Crime","04B","Aggravated Battery"
"Index Crime","05","Burglary"
"Index Crime","06","Larceny"
"Index Crime","07","Motor Vehicle Theft"
"Index Crime","09","Arson"
Then we can get it into Neo4j with LOAD CSV:

LOAD CSV WITH HEADERS FROM "file:///Users/markneedham/projects/neo4j-spark-chicago/categories.csv" AS row
MERGE (c:CrimeCategory {name: row.category})
MERGE (sc:SubCategory {code: row.sub_category_code})
ON CREATE SET sc.description = row.sub_category_description
MERGE (c)-[:CHILD]->(sc)

And that’s it!

Categories: Blogs

Do you rotate Scrum masters in teams?

Ben Linders - Sat, 07/25/2015 - 17:54
Do you have the same person acting as a Scrum master for every iteration in your team(s)? Or do different team members take the role on turns? I'd like to hear how you do it, what works for you, and why. Continue reading →
Categories: Blogs

See you at Agile 2015!

Agile Product Owner - Sat, 07/25/2015 - 00:48

Hi Folks,

The Scaled Agile team has been busy getting ready for Agile2015.

In order to support our growing community, we are a conference sponsor and will have our own booth for the first time, which we think of as “SPC and Partner Central.” Please stop by the booth and say hello.

Drew and I will both be speaking at Agile2015, as follows:

  • I’ll also be participating in the “Stalwarts” session on Tuesday, Aug 3 at 2pm – Stalwarts 411 and why?

agile2015_speakers

It’s sure to be a fun-filled and action-packed week. We all look forward to sharing experiences, seeing old friends, and making new ones.

SAFe travels, see you at Agile 2015!

Categories: Blogs

Scrum Immersion workshop at GameStop - Case Study

Agile Complexification Inverter - Fri, 07/24/2015 - 23:27
Here's an overview of a Scrum Immersion workshop done at GameStop this month, presented as a case study example.

Normally these workshops start with the leadership (the stakeholders or shareholders) who have a vision for a product (or project). This time we skipped this activity.

The purpose of the workshop is to ensure alignment between the leadership team and the Agile Coaches with regard to the upcoming scrum workshop for the team(s), to set expectations for a transition from current (ad-hoc) practices to Scrum, and to explain and educate on the role of the Product Owner.

Expected Outcomes:
  • Create a transition plan/schedule
  • Set realistic expectations for transition and next release
  • Overview of Scrum & leadership in an Agile environment
  • Identify a Scrum Product Owner – review role expectations
  • Alignment on Project/Program purpose or vision
  • Release goal (within context of Project/Program & Scrum transition)

Once we have alignment on the Product Owner role and the Project Vision we typically do a second workshop for the PO to elaborate the Product Vision into a Backlog. This time we skipped this activity.

The purpose of the workshop is to educate the Product Owner (one person), prepare a product backlog for the scrum immersion workshop, and include the various consultants, SMEs, BAs, developers, etc. in the backlog grooming process. Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Overview of Scrum & Product Owner role (and how the team supports this role)
  • Set PO role responsibilities and expectations
  • Alignment of Release goal (within context of Project/Program & Scrum transition)
  • Product Backlog ordered (prioritized) for the first 2 sprints
  • Agreement to Scrum cadence for planning meetings and grooming backlog and sprint review meetings

Once we have a PO engaged and we have a Product Backlog it is time to launch the team with a workshop - this activity typically requires from 2 to 5 days. This is the activity we did at GameStop this week.
The primary purpose of the workshop is to teach just enough of the Scrum process framework and the Agile mindset to get the team functioning as a Scrum team and working on the product backlog immediately after the workshop ends (begin Sprint One). Expected Outcomes:
  • Set realistic expectations for transition and next release
  • Basic mechanics of Scrum process framework
  • Understanding of additional engineering practices required to be an effective Scrum team
  • A groomed / refined product backlog for 1 – 3 iterations
  • A backlog that is estimated for 1 – 3 iterations
  • A Release plan, and expectations of its fidelity – plans to re-plan
  • Ability to start the very next day with Sprint Planning

Images from the workshop

The team brainstormed and then prioritized the objectives and activities of the workshop.


Purpose and Objectives of the Workshop

The team then prioritized the Meta backlog (a list of both work items and learning items and activities) for the workshop.

Meta Backlog of workshop teams - ordered by participants
Possible PBI for Next Meta Sprint
Possible PBI for Later Sprints
Possible PBI for Some Day
Possible PBI for Another Month or Never
A few examples of work products (outcomes) from the workshop.

Affinity grouping of Persona for the user role in stories
Project Success Sliders activity
Team Roster (# of teams a person is on)
A few team members working hard
Three stories written during the elaboration activity
A few stories after Affinity Estimation
Release Planning: using the concept of deriving duration based upon the estimated effort, we made some assumptions about the desired business outcome, which was to finish the complete product backlog by a fixed date.

The 1st iteration of a Release Plan

That didn't feel good to the team, so we tried a different approach: fix the scope and cost, but let the timeframe vary.

The 2nd iteration of a Release Plan

That didn't feel good to the PO, so we tried again. This time we fixed the cost and time, but varied the features, and broke the product backlog into milestones of releasable, valuable software.

The 3rd iteration of a Release Plan

This initial release plan felt better to both the team and the PO, so we start here. Ready for sprint planning tomorrow.



Categories: Blogs

Android: Custom ViewMatchers in Espresso

Xebia Blog - Fri, 07/24/2015 - 17:03

Somehow it seems that testing is still treated like an afterthought in mobile development. The introduction of the Espresso test framework in the Android Testing Support Library improved the situation a little bit, but the documentation is limited and it can be hard to debug problems. And you will run into problems, because testing is hard to learn when there are so few examples to learn from.

Anyway, I recently created my first custom ViewMatcher for Espresso and I figured I would like to share it here. I was building a simple form with some EditText views as input fields, and these fields should display an error message when the user entered an invalid input.

Android TextView with Error Message

Android TextView with error message

In order to test this, my Espresso test enters an invalid value in one of the fields, presses "submit" and checks that the field is actually displaying an error message.

@Test
public void check() {
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .perform(ViewActions.typeText("foo"));
  Espresso
      .onView(ViewMatchers.withId(R.id.submit))
      .perform(ViewActions.click());
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .check(ViewAssertions.matches(
          ErrorTextMatchers.withErrorText(Matchers.containsString("email address is invalid"))));
}

The real magic happens inside the ErrorTextMatchers helper class:

public final class ErrorTextMatchers {

  /**
   * Returns a matcher that matches {@link TextView}s based on text property value.
   *
   * @param stringMatcher {@link Matcher} of {@link String} with text to match
   */
  @NonNull
  public static Matcher<View> withErrorText(final Matcher<String> stringMatcher) {

    return new BoundedMatcher<View, TextView>(TextView.class) {

      @Override
      public void describeTo(final Description description) {
        description.appendText("with error text: ");
        stringMatcher.describeTo(description);
      }

      @Override
      public boolean matchesSafely(final TextView textView) {
        return stringMatcher.matches(textView.getError().toString());
      }
    };
  }
} 

The main details of the implementation are as follows. We make sure that the matcher will only match children of the TextView class by returning a BoundedMatcher from withErrorText(). This makes it very easy to implement the matching logic itself in BoundedMatcher.matchesSafely(): simply take the getError() method from the TextView and feed it to the next Matcher. Finally, we have a simple implementation of the describeTo() method, which is only used to generate debug output to the console.

In conclusion, it turns out to be pretty straightforward to create your own custom ViewMatcher. Who knew? Perhaps there is still hope for testing mobile apps...

You can find an example project with the ErrorTextMatchers on GitHub: github.com/smuldr/espresso-errortext-matcher.

Categories: Companies

SonarLint brings SonarQube rules to Visual Studio

Sonar - Fri, 07/24/2015 - 14:33

We are happy to announce the release of SonarLint for Visual Studio version 1.0. SonarLint is a Visual Studio 2015 extension that provides on-the-fly feedback to developers on any new bug or quality issue injected into C# code. The extension is based on and benefits from the .NET Compiler Platform (“Roslyn”) and its code analysis API to provide a fully-integrated user experience in Visual Studio 2015.

Features

There are lots of great rules in the tool. We won’t list all 76 of the rules we’ve implemented so far, but here are a couple that show what you can expect from the product:

  • Defensive programming is a good practice, but in some cases you shouldn’t simply check whether an argument is null or not. For example, value types (such as structs) can never be null, and as a consequence comparing a non-restricted generic type parameter to null might not make sense (S2955), because it will always return false for structs.
  • Did you know that a static field of a generic class is not shared among instances of different closed constructed types (S2743)? So how many instances of DefaultInnerComparer do you think will be created with the following class? It is static, so you might have guessed one, but actually there will be an instance for each type parameter used to instantiate the class.
  • We’ve been using SonarLint internally for a while now, and are running it against a few open source libraries too. We’ve already found bugs in both Roslyn and NuGet with the “Identical expressions used on both sides of a binary operator” rule (S1764).
  • Also, as shown below, some cases of null pointer dereferencing can be detected as well (S1697):

This is just a small selection of the implemented rules. To find out more, go and check out the product.

How to get it?

SonarLint is packaged in two ways:

  • Visual Studio Extension
  • Nuget package

To install the Visual Studio Extension download the VSIX file from Visual Studio Gallery. Optionally, you can download the complete source code and build the extension for yourself. Oh, and you might have already realized: this product is open source (under LGPLv3 license), so you can contribute if you’d like.

By the way, internally the SonarQube C# plugin also uses the same code analyzers, so if you are already using the SonarQube platform for C# projects, from now on you can also get the issues directly in the IDE.

What’s next?

In the following months we’ll increase the number of supported rules, and as with all our SonarQube plugins, we are moving towards bug-detection rules. Alongside this effort, we’re continuously adding code fixes to the Visual Studio extension, so that the issues identified by the tool can be automatically fixed inside Visual Studio. In the longer run, we aim to bring the same analyzers we’ve implemented for C# to VB.NET as well. Updates are coming frequently, so keep an eye on the Visual Studio Extension Update window.

SonarLint for Visual Studio is one piece of the puzzle we’ve been working on since the beginning of the year: providing the .NET community with a tight, easy and native integration of the SonarQube ecosystem into the Microsoft ALM suite. This 1.0 release of SonarLint is a good opportunity to warmly thank, once again, Jean-Marc Prieur, Duncan Pocklington and Bogdan Gavril from Microsoft. They have been contributing daily to the effort to make SonarQube a central piece of any .NET development environment.

For more information on the product go to http://vs.sonarlint.org or follow us on Twitter.

Categories: Open Source

Personal message from Net Objectives' new COO, Marc Danziger

NetObjectives - Fri, 07/24/2015 - 08:26
Two months ago, I was contentedly leading my life as a singleton consultant down in Los Angeles. I’d managed to get to work for some pretty cool companies – Amgen, CarsDirect.com, the Academy of Motion Pictures Arts and Sciences (the Oscars folks), NBC, International Lease Finance Corp. (AIG), Lynda.com and Toyota. But, as the cliché goes, something was missing. I was finding that I could get a...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies
