
Feed aggregator

Are You Experimenting?

I came across another interesting idea in Change By Design. It was presented as Toyota's ideas around training and had 4 principles:

  • There is no substitute for direct observation
  • Proposed changes should always be structured as experiments
  • Workers and managers should experiment as frequently as possible 
  • Managers should coach, not fix


This idea of experimenting caught my attention. So often as I'm designing a solution for a customer, they want the perfect solution the first time out of the box. When they don't know what perfect is, they struggle to make decisions. I can recall one client: after 3 weeks of developing a complex interface, others looked at it and we spent another 3 weeks revising it. This was all in development; the real end users never had a chance to try it out. No experimentation at all, just managers trying to guess what was needed. No direct observation.

This is just one example of something I see a lot: organizations not being open to experimenting. Smart organizations are figuring it out, though. The push for a DevOps approach and employing Microservices is a move in the right direction. With DevOps, we can deploy something and if it doesn't work, we can have a different solution out in weeks or maybe months, not quarters or years. With microservices, we get away from the monolithic applications and have a bunch of small, independent components. If one doesn't work quite right, the whole system doesn't fail.

So how open is your organization to experimenting? Are failures discouraged or recognized as the first step towards success? Are you trying to get everything perfect or do you recognize that everything is a prototype, even if it's in production?

Categories: Blogs

What is Strategy Deployment?

AvailAgility - Karl Scotland - Fri, 02/05/2016 - 12:19

Japanese Ship in a Storm

I’ve been writing about Strategy Deployment a lot recently but realised that I haven’t properly defined what I mean by the term. Actually, I sort of did in my last post, so I’m going to repeat, expand and build on that here.

In a nutshell, Strategy Deployment is any form of organisational improvement in which solutions emerge from the people closest to the problem.

The name comes from the Japanese term, Hoshin Kanri, the literal translation of which is “Direction Management”, which suggests both setting direction and steering towards it. A more metaphorical translation is “Ship in a storm going in the right direction”. This brings to my mind the image of everyone using all their skills and experience to pull together, with a common goal of escaping turbulence and reaching safety.

Let's look at the two elements, strategy and deployment, separately.

Strategy

Wikipedia defines strategy as

“a high level plan to achieve one or more goals under conditions of uncertainty”.

The emphasis is mine as these are the two key elements which indicate that a strategy is not a detailed plan with a known and predictable outcome.

Strategy to me is about improving and making significant breakthroughs in certain key competitive capabilities. I like Geoffrey Moore’s Hierarchy of Powers from Escape Velocity as a guide for exploring what those capabilities might be. This hierarchy is nicely summarised as

“Category Power (managing the portfolio of market categories in which a company is involved), Company Power (your status relative to competitors), Market Power (market share in your target segments), Offer Power (differentiation of your offering), and Execution Power (your ability to drive strategic transformation within your enterprise).”

As an aside, in this context, Agility as a Strategy can be thought of as primarily (although not exclusively) focussed on improving Execution and Offer Powers.

Determining Strategy as “a high level plan to achieve one or more goals under conditions of uncertainty”, therefore, involves setting enabling rather than governing constraints. Strategy should guide the creation of new solutions, and not control the implementation of existing solutions. It defines the how and not the what, the approach and not the tools.

Deployment

Merriam-Webster defines deploy as

“to spread out, utilize, or arrange for a deliberate purpose”.

In this context it is the strategy that is being utilised for the deliberate purpose of achieving organisational improvement. Given that the strategy is “a high level plan to achieve one or more goals under conditions of uncertainty”, this means that the deployment is not the orchestration and implementation of a detailed plan.

Instead it requires a shift in the way organisations operate, from a mindset where management knows best, and tells employees what to do without thinking or asking questions, to one where they propose direction and ask for feedback and enquiry. Instead of assuming that managers know the right answers as facts, the deployment of strategy assumes that any suggestions are simply opinions to be explored and challenged. Employees are allowed, and encouraged, to think for themselves, allowing for the possibility that they may turn out to be wrong, and making it acceptable for people to change their mind.

As another aside, this brings to mind a great Doctor Who quote from the latest season:

"Do you know what thinking is? It's just a fancy word for changing your mind." #DoctorWho pic.twitter.com/Qf8AONNuVw

— Doctor Who BBCA (@DoctorWho_BBCA) November 8, 2015

Strategy Deployment

Back to my original definition of “any form of organisational improvement in which solutions emerge from the people closest to the problem.”

Strategy Deployment is the creation of a high level plan for organisational improvement under conditions of uncertainty (the strategy), and the utilisation of that strategy by employees for a deliberate purpose (to achieve one or more goals). Clear communication of both the goals and the strategy, and constant collaboration across the whole organisation to use all the skills, knowledge and experience available, allows the appropriate tactics to emerge. In this way Strategy Deployment enables autonomy of teams and departments while maintaining alignment to the overall strategy and goals.

Note that I say any form. I don’t see Strategy Deployment as a specific method or framework, but more as a general approach or style. My preferred approach at the moment uses the X-Matrix, but I would also describe David Snowden’s Cynefin, David Anderson’s Kanban Method, Mike Burrows’ Agendashift and Jeff Anderson’s Lean Change Method as forms of Strategy Deployment. I’m hoping to explore the synergies more at Lean Kanban North America and the Kanban Leadership Retreat.

Categories: Blogs

Build your Product Backlog with Story Mapping

Scrum Expert - Thu, 02/04/2016 - 19:24
Story mapping is a technique invented by Jeff Patton that orders user stories along two independent dimensions. The “map” arranges user activities along the horizontal axis in rough order of priority. On the vertical axis, it represents increasing sophistication of the implementation. In his blog post, Sunit Parekh explains how you can apply story maps to build your product backlog in a visual way. Sunit Parekh ...
Categories: Communities

Final retrospective for a team

Growing Agile - Thu, 02/04/2016 - 15:48
Usually a retrospective is to look at the past and then think of a way to improve your team’s process going forward. But what if this is your team’s last sprint together? Do you have a retrospective? We were posed with this problem at a client. We had trained them a year before and they […]
Categories: Companies

Agile Data Governance

TV Agile - Thu, 02/04/2016 - 13:54
Agile development efforts and Data Governance efforts are at odds with each other. Even though they both have sponsorship at the highest level of the organization, there is a disconnect when it comes to understanding how the two disciplines interact. Supporters of both disciplines swear by their trade and leave little wiggle room when it […]
Categories: Blogs

Saved Search Improvements and Search Sharing

Pivotal Tracker Blog - Thu, 02/04/2016 - 00:39

The new year is a good time to take stock of where you are, identify any issues holding you back, and make the necessary changes. To that end, we’ve drawn dotted lines with a Sharpie around minor problem areas in our interface and tightened them up to make them glow brighter than ever.

Here are the changes you’ll notice in our latest update:

Panel updates

We’ve made it easier to find panel actions by moving them to the top of the panel. All the things you could do before in the Settings menu at the bottom of the panel can now be done up in the Panel actions menu, along with a few extras.

Editing a saved search

You can now edit the name or the search criteria of a saved search. Click the heart (or the Panel actions menu) to open a saved search for editing, then tweak the criteria or rename it to your heart’s content.

Sharing a search

We’ve also added the ability to share a search with other project members, so if you’ve crafted a complex search, with one click you can show someone exactly the stories you want them to see. Click the Share Search option in the Panel actions menu to copy the URL and share it over email, chat, or on a wiki.

Try out these new tweaks and let us know how they’re working out for you. As always, please use the Provide Feedback link under Help & Updates in any project, or email tracker@pivotal.io with your thoughts.


The post Saved Search Improvements and Search Sharing appeared first on Pivotal Tracker.

Categories: Companies

A Day in the Life: Testing on the Tracker Team

Pivotal Tracker Blog - Wed, 02/03/2016 - 20:56

The Tracker team as a whole takes responsibility for building in quality and making sure that necessary testing activities are done along with other development tasks. But we testers bring our own special value to the party, and we’re seeking another great tester to help us as we work to deliver the best possible product to our customers. Let’s walk you through a typical day of testing on the Tracker team.

Kicking off the day

Breakfast!

Every day begins with a catered breakfast, followed by standups. Our Tracker team is getting pretty large, around 30 people as I write. To work in optimally sized teams, we’re divided into “pods” of up to four developer pairs each, plus designers, a tester, the product owner, and help from Customer Support and Marketing. A quick all-hands standup is followed by pod standups to balance communication and collaboration within and between pods.

In the five-minute all-hands standup this morning, one of the other pods announced they plan a release today. Sara, one of our designers, reminded us about the design critique scheduled for tomorrow.

Next, in another short standup for the pod I’m on, I mentioned a failing build that’s blocking testing. A developer pair volunteered to look into the failure. I didn’t need any other testing help today, but it was available if I did. Our Tracker team currently has only two testers, but the POs, developers, Customer Support specialists, and designers all help with various testing activities.

Later, in our quick Test/Support team standup, I offered to help with the regression testing for the other pod’s release. There weren’t too many support tickets coming in, so Nate, our Support Lead, also volunteered to help.

Digging in

After standups, I realized that the release branch hadn’t been deployed for final regression testing yet, so I checked my pod’s Tracker projects to see what was ready for testing. Each of our features is usually represented by an epic in Tracker, made up of multiple small stories. I write exploratory testing charters for each feature. Today, enough stories were done for our new shared search feature that I could start exploring. I clicked the Start button on the first charter to begin.

An exploratory testing charter in Tracker

I explored the user experience of the new feature and compared it for consistency to other parts of the UI. I asked Sara, the designer, to pair with me to verify that it looks correct in the different browsers. She decided to tweak some images and went to talk to a developer about it. I did another charter to explore using different roles and personas. I think we missed a use case, and after discussing it with Matt, our product owner, I wrote a new story for it. I made notes in the charter story about what I learned while exploring.

I needed a break, so I joined some teammates in a doubles ping pong match. After that, it was time to start on regression testing for the other pod’s release. We put regression test checklists from a template into a Tracker chore to make it easy for a few of us to share the work. Our CI has extensive automated tests from the unit level up to the UI level, so our confidence level is high, but we like to make doubly sure. Depending on which part of our app we’re releasing, we do additional automated checks as well as some manual checks of our UI, integrations with third-party products, and other testing as needed. Today’s release was for our Platform. I ran a Postman collection to double-check that the API’s functionality and performance looked correct. I then checked that the release didn’t cause issues with our iOS push notifications or Google integration. Nate and Jo, our Test and Support Manager, completed the rest of the checklist, and the release was ready to go.
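
As a rough sketch of what such an API spot check does (shown here in Python with the requests library rather than Postman; the base URL, endpoint, and threshold are invented for illustration):

```python
import time
import requests

# Hypothetical base URL; the real checks run as a Postman collection.
BASE_URL = "https://api.example.com/v5"

def check_endpoint(path, max_seconds=2.0):
    """Assert that an endpoint responds successfully and quickly enough."""
    start = time.time()
    response = requests.get(BASE_URL + path, timeout=max_seconds)
    elapsed = time.time() - start
    assert response.status_code == 200, "Unexpected status: %d" % response.status_code
    assert elapsed <= max_seconds, "Too slow: %.2fs" % elapsed
    return response.json()

check_endpoint("/projects")
```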

Helping customers get the most from Tracker

New Help menu

After lunch, I worked on some articles for our new Knowledge Base. We’re writing it from scratch to replace our existing Help pages with searchable, easily navigated information. I made some changes suggested by our Content Manager, Steve, and added in some illustrations from our Senior Designer, Monique.

While working on the articles, I helped one of our support specialists reproduce a problem that a customer reported with one of our API endpoints. Later, a pair of developers asked me to review some Behavior-Driven Development (BDD) scenarios with them to see if I could think of any additional cases to cover.


Building shared understanding

Next, we had our pre-Iteration Planning Meeting (pre-IPM). I got together with our pod’s PO, designer, and development anchor to discuss stories that the whole pod will estimate in a couple of days when we hold our weekly IPM. We use a “Three Amigos” approach (using the term coined by George Dinwiddie), only we have four amigos! We example map each story, specifying rules and examples of desired and undesired behavior, and note any questions that come up.

Amigos collaborating

I learned about example mapping from Matt Wynne at a recent conference. We started trying it several months ago as an experiment. The resulting rules and examples help make sure we all share a basic understanding of each story as we start discussing it in the IPM. The developer pair who works on the story later will use the rules and examples to come up with scenarios for the behavior-driven acceptance tests that guide their coding along with the Test-Driven Development (TDD) they do at the unit level. They use Cucumber and Capybara to write the BDD tests. When all the TDD and BDD tests are green, the story should meet all the acceptance criteria. The developer pair will also do some manual exploratory testing on each story. Example mapping, BDD, and exploring are contributing to a significant decrease in our story rejection rate and cycle time.
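
For readers new to example mapping, here is a hedged sketch of one story's map captured as plain data (the story, rules, and examples are invented, not Tracker's actual ones); each rule and its examples would later become BDD scenarios:

```python
# One story's example map: rules, concrete examples per rule, and
# open questions for the amigos to resolve before the IPM.
example_map = {
    "story": "Share a saved search with project members",
    "rules": {
        "Only project members can open a shared search": [
            "member opens the link -> search results are shown",
            "non-member opens the link -> access is denied",
        ],
        "A shared link reflects the current search criteria": [
            "owner edits the criteria -> the link shows updated results",
        ],
    },
    "questions": ["What happens if the search is deleted after sharing?"],
}

for rule, examples in example_map["rules"].items():
    print(rule)
    for example in examples:
        print("  example:", example)
```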

Wrapping up the day

Our Test/Support team retrospective is scheduled for tomorrow. I looked at the action items assigned to me from the last retro and the measurements we came up with to see how much progress we made. One goal from our last retro was to improve the visibility of when and where the code for any given story is deployed, so we know where we can test it. I checked the recently finished stories and the test environments where they were deployed, and saw there were still a few “gray areas.” The PO and I discussed this with one of the developers on our Toolsmiths team to see what could be improved in the automatic deploy process, and wrote a story to capture this.

To be fair, not every day goes this smoothly or is as productive. Stuff happens! We might get sidetracked by a production issue, or I might spend an entire day on one activity such as working on the Knowledge Base. There are always new experiments to try. For example, we’d like to have testers and developers pair more often, but it’s a bit difficult right now with only two testers. We’re planning to automate more of our manual acceptance and regression testing to limit the amount of time needed for manual regression, and free up more time for exploring. We’d also like to do more to transfer testing skills to developers. I recently facilitated an exploratory testing workshop at our weekly Tech Talks office lunch, but we could do much more to have POs, designers, and developers writing and exploring their own test charters.

Our approach to testing

Our mindmap for testing in 2016 reflects our team culture around testing, along with the practices, tools, and infrastructure that support it. The heart of testing in Tracker is the commitment that our team—and indeed, our company—has to quality and customer happiness. A key part is our direct involvement with our customers through our email support and user feedback process.


We brainstormed where we are and where we’d like to go this year.

We captured our mind map in MindMup, an online collaboration tool, for further discussions.

Does the idea of this work day sound exciting to you? Great! We hope to find another adventurous tester to join us in our journey to learn more and keep improving the quality of Tracker. As you can see from the mind map, we have endless areas to explore!

The post A Day in the Life: Testing on the Tracker Team appeared first on Pivotal Tracker.

Categories: Companies

I Know I Don’t Need This Test, But …

Derick Bailey - new ThoughtStream - Wed, 02/03/2016 - 19:07

I was all set to npm publish the next version of my Rabbus library. All my tests were passing, and everything was good to go.

Then I decided to write just one more test… just in case.

Debugging

I mean, the code works fine. I don’t need to write this test. I know I don’t need it now, at least.

But I want to future-proof the library, and make sure no one can (*ahem* … make sure *I* don’t) break things as the code changes later.

So I wrote the test, in spite of knowing I didn’t need to.


I’m glad I wrote that test – the one I *knew* I didn’t need, because the code worked fine.

Right.

Pardon me while I crush this bug, real quick.

Categories: Blogs

Robot Framework and the keyword-driven approach to test automation - Part 2 of 3

Xebia Blog - Wed, 02/03/2016 - 19:03

In part 1 of our three-part post on the keyword-driven approach, we looked at the position of this approach within the history of test automation frameworks. We elaborated on the differences, similarities and interdependencies between the various types of test automation frameworks. This provided a first impression of the nature and advantages of the keyword-driven approach to test automation.

In this post, we will zoom in on the concept of a 'keyword'.

What are keywords? What is their purpose? And what are the advantages of utilizing keywords in your test automation projects? And are there any disadvantages or risks involved?

As stated in an earlier post, the purpose of this first series of introductory-level posts is to prevent all kinds of intrusive expositions in later posts. These later posts will be of a much more practical, hands-on nature and should be concerned solely with technical solutions, details and instructions. However, for those that are taking their first steps in the field of functional test automation and/or are inexperienced in the area of keyword-driven test automation frameworks, we would like to provide some conceptual and methodological context. By doing so, those readers may grasp the follow-up posts more easily.

Keywords in a nutshell

A keyword is a reusable test function

The term ‘keyword’ refers to a callable, reusable, lower-level test function that performs a specific, delimited and recognizable task. For example: ‘Open browser’, ‘Go to url’, ‘Input text’, ‘Click button’, ‘Log in’, 'Search product', ‘Get search results’, ‘Register new customer’.

Most, if not all, of these are recognizable not only for developers and testers, but also for non-technical business stakeholders.

Keywords implement automation layers with varying levels of abstraction

As can be gathered from the examples given above, some keywords are more atomic and specific (or 'specialistic') than others. For instance, ‘Input text’ will merely enter a string into an edit field, while ‘Search product’ will be comprised of a chain (sequence) of such atomic actions (steps), involving multiple operations on various types of controls (assuming GUI-level automation).

Elementary keywords, such as 'Click button' and 'Input text', represent the lowest level of reusable test functions: the technical workflow level. These often do not have to be created, but are provided by existing, external keyword libraries (such as Selenium WebDriver) that can be made available to a framework. A situation that could require the creation of such atomic, lowest-level keywords would be automating at the API level.

The atomic keywords are then reused within the framework to implement composite, functionally richer keywords, such as 'Register new customer', 'Add customer to loyalty program', 'Search product', 'Add product to cart', 'Send gift certificate' or 'Create invoice'. Such keywords represent the domain-specific workflow activity level. They may in turn be reused to form other workflow activity level keywords that automate broader chains of workflow steps. Such keywords then form an extra layer of wrappers within the layer of workflow activity level keywords. For instance, 'Place an order' may be comprised of 'Log customer in', 'Search product', 'Add product to cart', 'Confirm order', etc. The modularization granularity applied to the automation of such broader workflow chains is determined by trading off various factors against each other - mainly factors such as the desired levels of readability (of the test design), of maintainability/reusability and of coverage of possible alternative functional flows through the involved business process. The eventual set of workflow activity level keywords forms the 'core' DSL (Domain Specific Language) vocabulary in which the highest-level specifications/examples/scenarios/test designs/etc. are to be written.

The latter (i.e. scenarios/etc.) represent the business rule level. For example, a high-level scenario might be:  'Given a customer has joined a loyalty program, when the customer places an order of $75,- or higher, then a $5,- digital gift certificate will be sent to the customer's email address'. Such rules may of course be comprised of multiple 'given', 'when' and/or 'then' clauses, e.g. multiple 'then' clauses conjoined through an 'and' or 'or'. Each of these clauses within a test case (scenario/example/etc.) is a call to a workflow activity level, composite keyword. As explicated, the workflow-level keywords, in turn, are calling elementary, technical workflow level keywords that implement the lowest-level, technical steps of the business scenario. The technical workflow level keywords will not appear directly in the high-level test design or specifications, but will only be called by keywords at the workflow activity level. They are not part of the DSL.

Keywords thus live in layers with varying levels of abstraction, where, typically, each layer reuses (and is implemented through) the more specialistic, concrete keywords from lower levels. Lower level keywords are the building blocks of higher level keywords, and at the highest level your test cases will also consist of keyword calls.
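
To make the layering concrete, here is a minimal Python sketch of the three levels (keyword libraries are often written in Python); the driver object and locators are hypothetical, purely for illustration:

```python
# Technical workflow level: atomic keywords (normally provided by an
# existing library such as Selenium WebDriver; shown here for illustration).
def input_text(driver, locator, text):
    driver.find(locator).set_text(text)

def click_button(driver, locator):
    driver.find(locator).click()

# Domain-specific workflow activity level: composite keywords that
# reuse the atomic ones.
def log_customer_in(driver, user_name, password):
    input_text(driver, "id=username", user_name)
    input_text(driver, "id=password", password)
    click_button(driver, "id=login")

def search_product(driver, product_name):
    input_text(driver, "id=search", product_name)
    click_button(driver, "id=search-button")

# A broader workflow chain, built purely from other workflow keywords
# and atomic keywords; business rule level scenarios would call this.
def place_an_order(driver, user_name, password, product_name):
    log_customer_in(driver, user_name, password)
    search_product(driver, product_name)
    click_button(driver, "id=add-to-cart")
    click_button(driver, "id=confirm-order")
```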

Of course, your automation solution will typically contain other types of abstraction layers, for instance a so-called 'object-map' (or 'gui-map') which maps technical identifiers (such as an xpath expression) onto logical names, thereby enhancing maintainability and readability of your locators. Of course, the latter example once again assumes GUI-level automation.

Keywords are wrappers

Each keyword is a function that automates a simple or (more) composite/complex test action or step. As such, keywords are the 'building blocks' for your automated test designs. When having to add a customer as part of your test cases, you will not write out (hard code) the technical steps (such as entering the first name, entering the surname, etc.), but you will have one statement that calls the generic 'Add a customer' function which contains or 'wraps' these steps. This wrapped code, as a whole, thereby offers a dedicated piece of functionality to the testers.

Consequently, a keyword may encapsulate sizeable and/or complex logic, hiding it and rendering it reusable and maintainable. This mechanism of keyword-wrapping entails modularization, abstraction and, thus, optimal reusability and maintainability. In other words, code duplication is prevented, which dramatically reduces the effort involved in creating and maintaining automation code.

Additionally, the readability of the test design will be improved upon, since the clutter of technical steps is replaced by a human readable, parameterized call to the function, e.g.: | Log customer in | Bob Plissken | Welcome123 |. Using so-called embedded or interposed arguments, readability may be enhanced even further. For instance, declaring the login function as 'Log ${userName} in with password ${password}' will allow for a test scenario to call the function like this: 'Log Bob Plissken in with password Welcome123'.

Keywords are structured

As mentioned in the previous section, keywords may hide rather complex and sizeable logic. This is because the wrapped keyword sequences may be embedded in control/flow logic and may feature other programmatic constructs; a short code sketch follows the list. For instance, a keyword may contain:

  • FOR loops
  • Conditionals (‘if, elseIf, elseIf, …, else’ branching constructs)
  • Variable assignments
  • Regular expressions
  • Etc.
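
For instance, a small Python sketch of a keyword wrapping a loop, a conditional and variable assignments (the driver API and locator are invented for illustration):

```python
import time

def wait_until_element_visible(driver, locator, timeout=10, poll=0.5):
    """Loop-and-conditional logic hidden behind a single keyword,
    so the high-level test design never sees this complexity."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        element = driver.find(locator)                    # variable assignment
        if element is not None and element.is_visible():  # conditional
            return element
        time.sleep(poll)                                  # retry loop
    raise AssertionError("Element '%s' not visible within %ss" % (locator, timeout))
```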

Of course, keywords will feature such constructs more often than not, since encapsulating the involved complexity is one of the main purposes for a keyword. In the second and third generation of automation frameworks, this complexity was an integral part of the test cases, leading to automation solutions that were inefficient to create, hard to read & understand and even harder to maintain.

Being a reusable, structured function, a keyword can also be made generic, by taking arguments (as briefly touched upon in the previous section). For example, ‘Log in’ takes arguments: ${user}, ${pwd} and perhaps ${language}. This adds to the already high levels of reusability of a keyword, since multiple input conditions can be tested through the same function. As a matter of fact, it is precisely this aspect of a keyword that enables so-called data-driven test designs.

Finally, a keyword may also have return values, e.g.: ‘Get search results’ returns: ${nrOfItems}. The return value can be used for a myriad of purposes, for instance to perform assertions, as input for decision-making or for passing it into another function as an argument. Some keywords will return nothing, but only perform an action (e.g. change the application state, insert a database record or create a customer).
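
A companion Python sketch of a parameterized keyword with a return value (again with invented names); the returned count feeds an assertion in a higher-level keyword:

```python
def get_search_results(driver):
    """Keyword with a return value: callers can assert on it,
    branch on it, or pass it into another keyword."""
    rows = driver.find_all("css=.search-result")
    return len(rows)

def verify_search_returns_results(driver, minimum=1):
    """Higher-level keyword using the returned value for an assertion;
    the 'minimum' argument makes it reusable across test cases."""
    nr_of_items = get_search_results(driver)
    assert nr_of_items >= minimum, (
        "Expected at least %d search results, got %d" % (minimum, nr_of_items))
```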

Risks involved

With great power comes great responsibility

The benefits of using keywords have been explicated above. Amongst other advantages, such as enhanced readability and maintainability, the keyword-driven approach provides a lot of power and flexibility to the test automation engineer. Quasi-paradoxically, it is in harnessing this power and flexibility that the primary risk of the keyword-driven approach is introduced. To see why this risk is of topical interest, let us digress briefly into the subject of 'the new testing'.

In many agile teams, both 'coders' and 'non-coders' are expected to contribute to the automation code base. The boundaries between these (and other) roles are blurring. Despite the current (and sometimes rather bitter) polemic surrounding this topic, it seems to be inevitable that the traditional developer role will have to move towards testing (code) and the traditional tester role will have to move towards coding (tests). Both will use testing frameworks and tools, whether it be unit testing frameworks (such as JUnit), keyword-driven functional test automation frameworks (such as RF or Cucumber) and/or non-functional testing frameworks (such as Gatling or Zed Attack Proxy).

To this end, the traditional developer will have to become knowledgeable and gain experience in the field of testing strategies. Test automation that is not based on a sound testing strategy (and attuned to the relevant business and technical risks) will only result in a faster and more frequent execution of ineffective test designs and will thus provide nothing but a false sense of security. The traditional developer must therefore make the transition from the typical tool-centric approach to a strategy-centric approach. Of course, since everyone needs to break out of the silo mentality, both developer and tester should also collaborate on making these tests meaningful, relevant and effective.

The challenge for the traditional tester may prove to be even greater and it is there that the aforementioned risks are introduced. As stated, the tester will have to contribute test automation code. Not only at the highest-level test designs or specifications, but also at the lowest-level-keyword (fixture/step) level, where most of the intelligence, power and, hence, complexity resides. Just as the developer needs to ascend to the 'higher plane' of test strategy and design, the tester needs to descend into the implementation details of turning a test strategy and design into something executable. More and more testers with a background in 'traditional', non-automated testing are therefore entering the process of acquiring enough coding skills to be able to make this contribution.

However, by having (hitherto) inexperienced people authoring code, severe stability and maintainability risks are being introduced. Although all current (i.e. keyword-driven) frameworks facilitate and support creating automation code that is reusable, maintainable, robust, reliable, stable and readable, code authors will still have to actively realize these qualities, by designing for them and building them into their automation solutions. Non-coders though, in my experience, (at least initially) have quite some trouble understanding and (even more dangerously) appreciating the critical importance of applying design patterns and other best practices to their code. That is, most traditional testers seem to be able to learn how to code (at a sufficiently basic level) rather quickly, partially because, generally, writing automation code is less complex than writing product code. They also get a taste for it: they soon get passionate and ambitious. They become eager to apply their newly acquired skills and to create lots of code. Caught in this rush, they often forget to refactor their code, downplay the importance of doing so (and the dangers involved) or simply opt to postpone it until it becomes too large a task. Because of this, even testers who have been properly trained in applying design patterns may still deliver code that is monolithic, unstable/brittle, non-generic and hard to maintain. Depending on the level at which the contribution is to be made (lowest-level in code or mid-level in scripting), these risks apply to a greater or lesser extent. Moreover, this risky behaviour may be incited by uneducated stakeholders, as a consequence of them holding unrealistic goals, maintaining a short-term view and (to put it bluntly) being ignorant with regard to the pitfalls, limitations, disadvantages and risks that are inherent to all test automation projects.

Then take responsibility ... and get some help in doing so

Clearly then, the described risks are not so much inherent to the frameworks or to the approach to test automation, but rather flow from inexperience with these frameworks and, in particular, from inexperience with this approach. That is, to be able to (optimally) benefit from the specific advantages of this approach, applying design patterns is imperative. This is a critical factor for the long-term success of any keyword-driven test automation effort. Without applying patterns to the test code, solutions will not be cost-efficient, maintainable or transferable, amongst other disadvantages. The costs will simply outweigh the benefits in the long run. What's more, essentially the whole purpose and added value of using keyword-driven frameworks are lost, since these frameworks had been devised precisely to this end: to counter the severe maintainability/reusability problems of the earlier generation of frameworks. Therefore, of all the approaches to test automation, the keyword-driven approach depends to the greatest extent on the disciplined and rigorous application of standard software development practices, such as modularization, abstraction and genericity of code.

This might seem a truism. However, since typically the traditional testers (and thus novice coders) are nowadays directed by their management towards using keyword-driven frameworks for automating their functional, black-box tests (at the service/API- or GUI-level), automation anti-patterns appear and thus the described risks emerge. To make matters worse, developers remain mostly uninvolved, since a lot of these testers are still working within siloed/compartmented organizational structures.

In our experience, a combination of a comprehensive set of explicit best practices, training and on-the-job coaching, and a disciplined review and testing regime (applied to the test code) is an effective way of mitigating these risks. Additionally, silos need to be broken down, so as to foster collaboration (and create synergy) on all testing efforts as well as to be able to coordinate and orchestrate all of these testing efforts through a single, central, comprehensive and shared overall testing strategy.

Of course, the framework selected to implement a keyword-driven test automation solution is an important enabler as well. As will become apparent from this series of blog posts, the Robot Framework is the platform par excellence to facilitate, support and even stimulate these counter-measures and, consequently, to very swiftly enable and empower seasoned coders and beginning coders alike to contribute code that is efficient, robust, stable, reusable, generic, maintainable as well as readable and transferable. That is not to say that it is the platform to use in any given situation, just that it has been designed with the intent of implementing the keyword-driven approach to its fullest extent. As mentioned in a previous post, the RF can be considered the epitome of the keyword-driven approach, bringing that approach to its logical conclusion. As such it optimally facilitates all of the mentioned preconditions for long-term success. Put differently, using the RF, it will be hard to fall into the pitfalls inherent to keyword-driven test automation.

Some examples of such enabling features (that we will also encounter in later posts):

  • A straightforward, fully keyword-oriented scripting syntax, that is both very powerful and yet very simple, to create low- and/or mid-level test functions.
  • The availability of dozens of keyword libraries out-of-the-box, holding both convenience functions (for instance to manipulate and perform assertions on xml) and specialized keywords for directly driving various interface types. Interfaces such as REST, SOAP or JDBC can thus be interacted with without having to write a single line of integration code.
  • Very easy, almost intuitive means to apply a broad range of design patterns, such as creating various types of abstraction layers.
  • And lots and lots of other great and unique features.
Summary

We now have an understanding of the characteristics and purpose of keywords and of the advantages of structuring our test automation solution into (various layers of) keywords. At the same time, we have looked at the primary risk involved in the application of such a keyword-driven approach and at ways to deal with that risk.

Keyword-driven test automation is aimed at solving the problems that were instrumental in the failure of prior automation paradigms. However, for a large part it merely facilitates the involved solutions. That is, to actually reap the benefits that a keyword-driven framework has to offer, we need to use it in an informed, professional and disciplined manner, by actively designing our code for reusability, maintainability and all of the other qualities that make or break long-term success. The specific design as well as the unique richness of powerful features of the Robot Framework will give automators a head start when it comes to creating such code.

Of course, this 'adage' of intelligent and adept usage is true for any kind of framework that may be used or applied in the course of a software product's life cycle.

Part 3 of this series will go into the specific implementation of the keyword-driven approach by the Robot Framework.

Categories: Companies

FitNesse in your IDE

Xebia Blog - Wed, 02/03/2016 - 18:10

FitNesse has been around for a while. The tool was created by Uncle Bob back in 2001. It’s centered around the idea of collaboration. Collaboration within a (software) engineering team and with your non-programmer stakeholders. FitNesse tries to achieve that by making it easy for non-programmers to participate in the writing of specifications, examples and acceptance criteria. It can be launched as a wiki web server, which makes it accessible to basically everyone with a web browser.

The key feature of FitNesse is that it allows you to verify the specs against the actual application: the System Under Test (SUT). This means that you have to make the documentation executable. FitNesse considers tables to be executable. When you read ordinary documentation you’ll find that requirements and examples are often outlined in tables, hence this makes for a natural fit.

There is no such thing as magic, so the link between the documentation and the SUT has to be created. That’s where things become tricky. The documentation lives in our wiki server, but code (that’s what we require to connect documentation and SUT) lives on the file system, in an IDE. What to do? Read a wiki page, remember the class and method names, switch to IDE, create classes and methods, compile, switch back to browser, test, and repeat? Well, so much for fast feedback! When you talk to programmers, you’ll find this to be the biggest problem with FitNesse.

Imagine, as a programmer, you're about to implement an acceptance test defined in FitNesse. With a single click, a fixture class is created and adding fixture methods is just as easy. You can easily jump back and forth between the FitNesse page and the fixture code. Running the test page is as simple as hitting a key combination (Ctrl-Shift-R comes to mind). You can set breakpoints, step through code with ease. And all of this from within the comfort of your IDE.

Acceptance test and BDD tools, such as Cucumber and Concordion, have IDE plugins to cater for that, but for FitNesse this support was lacking. Was lacking! Such a plugin is finally available for IntelliJ.


Over the last couple of months, a lot of effort has been put in building this plugin. It’s available from the Jetbrains plugin repository, simply named FitNesse. The plugin is tailored for Slim test suites, but also works fine with Fit tables. All table types are supported. References between script, decision tables and scenarios work seamlessly. Running FitNesse test pages is as simple as running a unit test. The plugin automatically finds FitNesseRoot based on the default Run configuration.

The current version (1.4.3) even has (limited) refactoring support: renaming Java fixture classes and methods will automatically update the wiki pages.

Feel free to explore the new IntelliJ plugin for FitNesse and let me know what you think!

(GitHub: https://github.com/gshakhn/idea-fitnesse)

Categories: Companies

Accelerate Your Delivery with Two New Reports

New Planned Percent Complete Reports and Burndown Charts help you create an environment of continuous improvement for your lean and agile teams.

The post Accelerate Your Delivery with Two New Reports appeared first on Blog | LeanKit.

Categories: Companies

Nine Product Management lessons from the Dojo

Xebia Blog - Wed, 02/03/2016 - 00:00
Are you kidding? a chance to add the Matrix to a blogpost?

As I am gearing up for the belt exams next Saturday, I couldn’t help but notice the similarities between what we learn in the dojo (it’s where the martial arts are taught) and how we should behave as Product Managers. Here are 9 lessons, straight from the Dojo, ready for your day job:

1.) Some things are worth fighting for

In Judo we practice Randori, which means ground wrestling. You will find that there are some grips that are worth fighting for, but some you should let go in search of a better path to victory.

In Product Management, we are the heat shield of the product, constantly between engineering striving for perfection, sales wanting something else, marketing pushing the launch date and management hammering on the P&L.

You need to pick your battles: some you deflect, some you disarm, and some you accept, because you are maneuvering yourself so you can make the move that counts.

Good product managers are not those who win the most battles, but those who know which ones to win.

2.) Preserve your partners

It’s fun to send people flying through the air, but the best way to improve yourself is to improve your partner. You are in this journey together, just as in Product Management. Ask yourself the following question today: “whom do I need to train as my successor” and start doing so.

"I was delayed to the airport because of the taxi strike, but saved by the strike of the air traffic controllers"

3.) There is no such thing as fair

It’s a natural reaction when someone changes the rules of the game. We protest, we go on strike, we say it’s not fair, but in a market-driven environment, what is fair? Disruption, changing the rules of the game, has become the standard (24% of companies experience it already, 58% expect it, 42% are still in denial). We can go on strike or adapt to it.

The difference between Kata and free sparring is that your opponents will not follow a prescribed path. Get over it.

4.) Behavior leads to outcome

I’m heavily debating the semantics with my colleague from South Africa (you know who you are), so it’s probably wording, but the gist of it is: if you want more of something, you should start doing it. Positive brand experiences will drive people to your products; hence one bad product affects all other products of your brand.

It’s not easy to change your behavior, whether it is in sport, health, customer interaction or product philosophy, but a different outcome starts with different behaviour.

Where did my product go?

5.) If it’s not working try something different

Part of Saturday’s exams will be what in Jujitsu is called “indirect combinations”. This means that you will be judged on your ability to move from one technique to another when the first one fails. Brute force is also an option, but not one that is likely to succeed, even if you are very strong.

Remember Microsoft pouring over a billion marketing dollars into Windows Phone? Brute-forcing its position by buying Nokia? Blackberry doing something similar with QNX and only now switching to Android? Indirect combinations are not a lack of perseverance but adaptability: achieving results without brute force and with a higher chance of success.

This is where you tap out

6.) Failure is always an option

Tap out! Half of the stuff in Jujitsu is originally designed to break your bones, so tap out if your opponent has got a solid grip. It’s not the end, it’s the beginning. Nobody gets better without failing.

Two thirds of all product innovations fail; the remaining third takes about five iterations to get it right. Test your idea thoroughly but don’t be afraid to try something else too.

7.) Ask for help

There is no way you know it all. Trust your team, peers and colleagues to help you out. Everyone has something to offer; they may not always have the solution for you, but in explaining your problem you will often find the solution.

8.) The only way to get better is to show up

I’m a thinker. I like to get the big picture before I act. This means that I can also overthink something that you just need to do. Though it is okay to study and listen, don’t forget to go out there and start doing it. Short feedback loops are key in building the right product, even if the product is not built right. So talk to customers, show them what you are working on, even in an early stage. You will not get better at martial arts or product management if you wait too long to show up.

9.) Be in the moment

Don’t worry about what just happened, or what might happen. Worry about what is right in front of you. The technique you are forcing is probably not the one you want.


This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.


Categories: Companies

Q: What is an Agile Transition Guide?

Agile Complexification Inverter - Tue, 02/02/2016 - 21:55
I was at the Dallas Tech Fest last week and was asked several times what an Agile Transition Guide was (it was a title on my name tag)... it's surprising to me how many people assume they know what an Agile Coach is, yet there is no good definition or professional organization (with a possible exception coming: Agile Coaching Institute).

So naturally the conversation went something like this:

Inquisitive person:  "Hi David, what's an Agile Transition Guide?  Is that like a coach?"

David:  "Hi, glad you asked.  What does a coach do in your experience?"

Inquisitive person: "They help people and teams improve their software practices."

David:  "Yes, I do that also."

Inquisitive person: "Oh, well then why don't you call yourself a coach?"

David:  "Great question:  Let's see...  well one of the foundational principles of coaching (ICF) is that the coachee asks for and desires an interaction with the coach; there is no authority assigning the relationship, or the tasks of coaching.  So do you see why I don't call myself a coach?"

Inquisitive person: "Well no, not really.  That's just semantics.  So you're not a coach... OK, but what's is a guide?"

David:  "Have you ever been fishing with a guide, or been whitewater rafting with a guide, or been on a tour with a guide?  What do they do differently than a coach?  Did you get to choose your guide, or were they assigned to your group?"

Inquisitive person: "Oh, yeah.  I've been trout fishing with a guide, they were very helpful, we caught a lot of fish, and had more fun than going on our own.  They also had some great gear and lots of local knowledge of where to find the trout."

David:  "Well, there you have it... that's a guide - an expert, a person that has years of experience, has techniques to share and increase your JOY with a new experience."

Inquisitive person: "Yes, I'm starting to see that difference, but can't a coach do this also?"

David:  "No, not unless the coach is willing to switch to a different modality - to one of mentoring, teaching, consulting, or protecting.  Sometimes a guide must take over for the participant and keep the person/group within the bounds of safety - think about a whitewater river guide.  A coach - by strict interpretation of the ethics - is not allowed to protect the person from their own decisions (even if there are foreseen consequences of this action)."

And now the conversation starts to get very interesting; the whys start to flow and we can go down the various paths to understanding.  See Richard Feynman's dialogue about "Why questions"

So, I'm not a Coach

I've been hired as a coach (largely because the organization didn't truly understand the label, role, and ethics of coaching).  This relationship was typically dysfunctional from the standpoint of being a coach.  So I decided to study the role of coaching.  I've done a few classes and seminars, worked with a personal one-on-one coach, read a lot, and drawn some conclusions from my study - I'm not good at coaching within the environment and situation in which Agile Coaches are hired.  I've learned that regardless of the title an organization uses (Agile Coach, Scrum Master, etc.), it doesn't mean coaching.  It intends the relationship to be vastly different.  Since I'm very techie, I appreciate using the correct words and phrases for a concept.  (Paraphrasing Phil Karlton: in software there are two major challenges: cache invalidation and naming things.  Two Hard Things)

So to stop the confusion and the absurd use of the terms, I quit referring to my role and skills as coaching.  Then I needed a new term.  Having lots of friends who have been Outward Bound instructors, and understanding their roles, the concept of a river guide appealed to me for this Agile transformational role.  Therefore I coined the term Agile Transformation Guide.  But many organizations do not wish to transform their organization; they do wish for some type of transition, perhaps from traditional development to a more agile or lean mindset.  So a transition guide is more generic, capable of adapting to the situation and the desires of the organization.

See Also:

The Difference Between Coaching & Mentoring

Scrum Master vs Scrum Coach by Charles Bradley

Agile Coach -or- Transition Guide to Agility by David Koontz; the whitewater guide analogy to agile coaching.

Academic paper:  Coaching in an Agile Context by David Koontz

Interesting Twitter conversation about the nature of "coaching" with Agile42 group.



Categories: Blogs

3 Simple Productivity Metrics for Agile or Waterfall

Rally Agile Blog - Tue, 02/02/2016 - 19:00
“I want a number, a metric, that tells me how productive our teams are,”

challenged my former Head of IT some years ago. Certainly, it’s a reasonable request to ask how productive a team (or a whole system) is.  

But first let’s look at the why behind the question. Why do we measure productivity?  Because we should? Because we can? For transparency? Accountability? To drive behaviors and influence culture? Should we measure it at all?

There are three simple, impactful metrics (for Agile or Waterfall workplaces) that I informally collect (conversations are a good start) to quickly gauge productivity and how healthy and high-performing an organization is.

Lead Time 

Lead time is the queen of metrics. Lead time tells me how long it takes work to travel through the whole system, from concept to cash:

lead time from concept to cash

How quickly is value delivered to customers? Long lead times indicate waste. Lean experts Mary and Tom Poppendieck describe how over 80 percent of time-to-market can be “waste” in the form of delays and non-value-added work. That quickly snowballs into a double-pronged, multi-million dollar cost of delay that directly hits a company’s profits via the delay in value delivered to customers and thousands of wasted person hours.

What do long lead times look like in the real world?

This week I’m working with the largest company of its type in this state. It regularly takes up to six months to get a business case approved, and another 12 months to deliver a project. That’s an 18-month lead time to deliver value to customers. Given that requirements change at an average rate of around 2 percent per month (based on Capers Jones’ empirical research in the 20th century, and it’s probably higher in 2016), this means a project that goes live today is delivering requirements that were signed off on 12 months ago and have changed (degraded?) by over 24 percent (12 months at 2 percent is 24 percent linear drift; compounded monthly, (1.02)^12 − 1 ≈ 26.8 percent).

This company’s volume of work is expected to increase at least 30 percent in the near future (with no headcount increase). What happens when we add 30 percent more volume to an already chockablock freeway? It reduces our speed by an order of magnitude.  

traffic jam

This company is adding risk to its portfolio by having such long lead times. Are the teams productive? Not nearly as productive as they could be. What actions should they take to reduce lead times? Just reducing the batch size of work (e.g. from 12-month projects into small, discrete features) and setting work in progress (WiP) limits will often double throughput (i.e. halve lead time), as described by Lean management guru Don Reinertsen. These are things you can start doing sooner rather than later.  
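
A back-of-the-envelope illustration of why WiP limits move lead time, using Little's Law (average lead time = WiP / throughput); the numbers below are invented:

```python
def lead_time_weeks(wip_items, throughput_per_week):
    """Little's Law: average lead time = work in progress / throughput."""
    return wip_items / throughput_per_week

# A portfolio with 120 items in flight, completing 5 per week:
print(lead_time_weeks(120, 5))  # 24.0 weeks

# Capping WiP at 30 items at the same throughput:
print(lead_time_weeks(30, 5))   # 6.0 weeks
```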

But, by itself, lead time doesn’t tell me how productive a team is.

Predictability

Predictability complements lead time and has an equal seat at the head of the table as the king of metrics. Not only do I want a short lead time, I want to reliably know when work will be done and when value will be delivered to customers. Predictability is not boring—it’s the new black. And it’s sure better than 50 shades of grey, so to speak, in terms of guessing when something might be delivered.  

The city I’m working in suffered floods not so long ago. I asked my client, whose offices overlook the river, whether the council knows the volume of water in the river and its rate of flow, i.e. how much water flows into the nearby sea every day. “Of course,” he replied.  “So, what about your portfolio? What volume of work can it handle and how quickly will that work flow out to customers?” My client didn’t even pause.

“We don’t know. We don’t really know what our capacity is at the portfolio level or how quickly we can deliver work.”  

That’s not unusual in this type of organization. It would be unusual in manufacturing, where every widget is a physical item and easily traceable. But where work is less tangible it’s easy for “invisible waste” to significantly erode capacity.  

Predictable delivery not only increases profits and reduces bottlenecks, it has a more important outcome: it creates trust, trust that teams will deliver on time and that the portfolio can and will deliver the number of features (or requirements) promised. I give my business to companies I can trust—that deliver when they say they will—over companies that don’t deliver when they say they will.

What actions can you take to increase predictability? You need to know the capacity and velocity of your portfolio. Once your requirements are logically grouped into features (see above), use relative sizing (starting with a small-ish and well-understood feature) to quickly get a view of how much work is in-flight and in the pipeline.

relative story sizing

T-shirt sizing is fine if your stakeholders are new to story points (which you can later map onto the t-shirt sizes). It will probably be “way too much” work, which is where prioritization comes in (a topic for another time). Then, populate “just enough” features to be assigned to the next program increment (say, 12 weeks). And do this activity with the people close to the work, not far-removed stakeholders.  
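
A minimal sketch of mapping t-shirt sizes onto story points to gauge how much work is in flight (the point values and feature names are invented; calibrate against your own well-understood reference feature):

```python
# Hypothetical t-shirt-to-story-point mapping.
TSHIRT_POINTS = {"XS": 1, "S": 3, "M": 5, "L": 8, "XL": 13}

features = [("Feature A", "S"), ("Feature B", "M"), ("Feature C", "XL")]

total = sum(TSHIRT_POINTS[size] for _, size in features)
print("Work in the pipeline: %d points" % total)  # 21 points
```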

When I find a company (or a team) with short lead times and high predictability, it’s a good indication that it is productive (although it doesn’t tell me that they are delivering the right things—another topic for another time). But there’s one other metric that trumps both lead time and predictability.

Happiness

Happiness is the most important metric because in a knowledge economy, talented people are the competitive advantage. Are our people (and customers) happy? Simplistically, happy employees deliver good products, which lead to happy customers and good profits. And, the reverse is usually true: an unhappy employee is more likely to deliver a poorer product, leading to unhappy customers and poorer profits. "People, products and profits—in that order,” as our own CA Agile Business Unit GM, Angela Tucci, reiterates. I want to know if my employees are happy or unhappy and why, because it’s closely linked to motivation. As Dan Pink’s now cult classic video explains, give your people autonomy, mastery and purpose and they will be motivated to change the world.


How do we find out whether our people are happy? Ask. Not (just) via an anonymous, annual, online tick-box survey. Ask via team retrospectives. Ask via one-on-one or small-group sessions. Use a simple 1-5 Likert scale if you want an easy way to quantify the qualitative data. Ask what’s making people happy and unhappy. Improving what’s making people and teams unhappy frequently improves our other two metrics, lead time and predictability, as well.
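If you want an easy way to roll those Likert scores up, a minimal sketch might look like this; the team names and scores are invented for illustration.

```python
# Hypothetical 1-5 Likert happiness scores gathered in retrospectives.
from statistics import mean

retro_scores = {
    "Team Red":  [4, 5, 3, 4, 4],
    "Team Blue": [2, 3, 2, 1, 3],  # unhappy -- worth asking why
}

for team, scores in retro_scores.items():
    print(f"{team}: average happiness {mean(scores):.1f} / 5")
```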

For example, my client is generally happy but anxious because the organisation needs to pull 30 percent more work through the “system” as part of its growth objectives. My client’s teams perform reasonably well but are frustrated because there are bottlenecks around key roles, and these delays generate significant non-value-added workarounds. Fixing these problems would make these people happier, improve lead times and predictability, and lead to happier customers and greater profits.

Let’s return to my former Head of IT and the quest for a single metric for productivity: this may be a holy grail for another explorer. But, armed with metrics for lead time, predictability and happiness, I can reasonably and efficiently infer sustainable productivity—not only at a team level, but at a portfolio and company level.

And so can you.

Suzanne Nottage
Categories: Companies

Value of Burndown and Burnup Charts

Johanna Rothman - Tue, 02/02/2016 - 16:58

I recently met a team concerned about their velocity. They were always “too late,” according to their manager.

I asked them what they measured and how. They measured the burndown for each iteration. They calculated the number of points they could claim for each story. Why? Because they didn’t always finish the stories they “committed” to for each iteration.

[Image: burndown chart of story points]

This is what their burndown chart looked like.

A burndown chart measures what you have finished. If you look at their burndown, you can see there are times when not much is done. Then, near the end of the iteration, they finish more. However, they don’t finish “everything” before they run out of time.

An iteration is a timebox, by definition. In this case, having to “declare victory” and assess what they were doing should have helped them. But when this team saw the burndown, a few interesting things happened. They beat themselves up for not finishing. When they didn’t finish everything, they didn’t always do a retrospective. In addition, the product owner often took the unfinished work and added it to the next iteration’s work. Yes, added, not replaced. That meant they never caught up.

[Image: burndown chart with ideal line]

They tried this burndown chart next, to see if they could meet their ideal.

They realized they were “late,” off the ideal line from Day 2. They felt worse about themselves.

They stopped doing retrospectives, which meant they had no idea why they were “late.”

A burndown emphasizes what you have completed. A burndown with the “ideal” line emphasizes what you have done and what you “should” be doing. I have used story points here; you could instead plot available hours or person-days against time.

For me, a burndown is interesting, but not actionable. Think about what happens when you take a trip. You plug your destination into your favorite GPS (or app), and it calculates how much longer it will take to get to your destination. You know you have driven some number of miles, but to be honest, that’s done. What’s interesting to you is what you have remaining. That’s what a burnup chart helps you see.

For me, a burnup is a way to see what we have accomplished and what’s remaining. I can learn more from a burnup than I can from a burndown. That’s me. Here’s a burnup of the same data:
[Image: burnup chart of story points]

I made these charts from exactly the same data. Yet, I have a different feeling when I see the burnups.
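To make “exactly the same data” concrete, here is a minimal sketch that derives a burndown, a burnup and the ideal line from one list of daily completions; the scope and daily numbers are hypothetical.

```python
# One data set, three views: burnup (done), burndown (remaining), ideal line.
total_scope = 30                               # story points in the iteration
completed_per_day = [0, 1, 2, 2, 3, 5, 8]      # hypothetical daily completions
days = len(completed_per_day)

cumulative_done = []
done = 0
for points in completed_per_day:
    done += points
    cumulative_done.append(done)

burnup = cumulative_done                               # what we have accomplished
burndown = [total_scope - d for d in cumulative_done]  # what remains
ideal = [total_scope * (day + 1) / days for day in range(days)]

for day in range(days):
    status = "behind" if burnup[day] < ideal[day] else "on track"
    print(f"Day {day + 1}: done={burnup[day]:2d} "
          f"remaining={burndown[day]:2d} ideal={ideal[day]:4.1f} ({status})")
```

Plotted, the burnup of this data shows the hockey stick early, which is exactly what makes it actionable.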

When I see the Story points Done without the ideal line, I see a hockey stick. It’s not as bad a stick as the image in Is the Cost of Continuous Integration Worth the Value on Your Program?, Part 1, but it’s still a significant hockey stick.

[Image: burnup chart with ideal line]

When I see this burnup, I can tell by Day 3 that we are “behind” where we want to be. By Day 5, I know we cannot “make up” the time. As a team member, I can raise this as an impediment in the daily standup. As a leader of any sort, I will put it on my list to discuss in the retrospective, if not problem-solve it before then.

Maybe that’s just the way my mind works. I like seeing where we are headed and what it will take to get there. I’m interested in what we’ve done, but that’s in the past. I can’t address the past except to retrospect. I can address the future and say, “Is there something we can do now to help us accomplish what we thought we could in this timebox?”

George Dinwiddie has a great article on burndown charts: Feel the Burn: Getting the Most out of Burndown Charts.

Oh, and the team I discussed earlier? They decided they were trying to cram too much into an iteration. They made their stories smaller. That put more pressure on their product owner, but then they realized lack of PO time was an impediment. They thought they were to blame when they looked at a burndown; they saw their data more easily with a burnup. Maybe we all had a mind-meld going on.

It doesn’t matter which chart you generate. It matters how the chart makes you feel: what action will you take from your chart? If it’s not prompting you to act early, maybe you need a different chart. One project truism is: you cannot “make up” time. You can choose actions based on what your data tells you. Can you hear your data?

Categories: Blogs

Mob Programming (Pair Squared)

I came across the idea of mob programming on an Elixir podcast, Elixir Fountain. It’s pair programming on steroids: the whole team sits and works on one problem together. There’s still just one driver at a time, though the role rotates. There’s already a web site and a conference based on it.

I like the concept, but I’m not sure how effective it would be as a constant practice. I’ve done exercises like mob programming in small doses, on particularly hard problems that involve architecture choices, and occasionally as an exercise at user group meetups. Anecdotally, I don’t see myself doing it most of the time, but it’s possible it works as a regular practice. It’s different enough that it might need some further experiments.

Categories: Blogs

1st Conference, Melbourne, Australia, February 15 2016

Scrum Expert - Mon, 02/01/2016 - 18:55
The 1st Conference is a one-day event aimed at people starting out with Agile, run by practitioners from the Agile Melbourne community. The format is a 3-track conference with Team, Technical and Management themes. In the agenda of the 1st Conference you can find topics like “The first 18 months of our Agile transformation”, “Infrastructure for Agile teams”, “Agile Governance”, “Large Scale Agile – LESS”, ...
Categories: Communities

Trends for 2016

J.D. Meier's Blog - Mon, 02/01/2016 - 17:08

Our world is changing faster than ever before. It can be tough to keep up. And what you don’t know can sometimes hurt you.

Especially if you get disrupted.

If you want to be the disruptor rather than the disrupted, it helps to know what’s going on around the world. There are amazing people, amazing companies, and amazing discoveries changing the world every day. Or at least giving it their best shot.

  • You know the Mega-Trends: Cloud, Mobile, Social, and Big Data.
  • You know the Nexus-Of-Forces, where the Mega-Trends (Cloud, Mobile, Social, Big Data) converge around business scenarios.
  • You know the Mega-Trend of Mega-Trends: Internet-Of-Things (IoT)

But do you know how Virtual Reality is changing the game? …

Disruption is Everywhere

Are you aware of how the breadth and depth of diversity is changing our interactions with the world?  Do you know how “bi-modal” or “dual-speed IT” are really taking shape in the 3rd Era of IT or the 4th Industrial Revolution?

Do you know what you can print now with 3D printers? (and have you seen the 3D printed car that can actually drive? … and did you know we have a new land speed record with the help of the Cloud, IoT, and analytics? … and have you seen what driverless cars are up to?)

And what about all of the innovation that’s happening in and around cities? (and maybe a city near you.)

And what’s going on in banking, healthcare, retail, and just about every industry around the world?

Trends for Digital Business Transformation in a Mobile-First, Cloud-First World

Yes, the world is changing, and it’s changing fast.  But there are patterns.  I did my yearly trends post to capture and share some of these trends and insights:

Trends for 2016: The Year of the Bold

Let me warn you now – it’s epic.  It’s not a trivial little blog post of key trends for 2016.  It’s a mega-post, packed full with the ideas, terms, and concepts that are shaping Digital Transformation as we know it.

Even if you just scan the post, you will likely find something you haven’t seen or heard of before.  It’s a bird’s-eye view of many of the big ideas that are changing software and the tech industry as well as what’s changing other industries, and the world around us.

If you are in the game of Digital Business Transformation, you need to know the vocabulary and the big ideas that are influencing the CEOs, CIOs, CDOs (Chief Digital Officers), COOs, CFOs, CISOs (Chief Information Security Officers), CINOs (Chief Innovation Officers), and the business leaders that are funding and driving decisions as they make their Digital Business Transformations and learn how to adapt for our Mobile-First, Cloud-First world.

If you want to be a disruptor, Trends for 2016: The Year of the Bold is a fast way to learn the building blocks of next-generation business in a Digital Economy in a Mobile-First, Cloud-First world.

10 Key Trends for 2016

Here are the 10 key trends at a glance from Trends for 2016: The Year of the Bold to get you started:

  1. Age of the Customer
  2. Beyond Smart Cities
  3. City Innovation
  4. Context is King
  5. Culture is the Critical Path
  6. Cybersecurity
  7. Diversity Finds New Frontiers
  8. Reputation Capital
  9. Smarter Homes
  10. Virtual Reality Gets Real

Perhaps the most interesting trend is how culture is making or breaking companies, and cities, as they transition to a new era of work and life.  It’s a particularly interesting trend because it’s like a mega-trend.  It’s the people and process part that goes along with the technology.  As many people are learning, Digital Transformation is a cultural shift, not a technology problem.

Get ready for an epic ride and read Trends for 2016: The Year of the Bold.

If you read nothing else, at least read the section up front titled “The Year of the Bold” to get a quick taste of some of the amazing things happening to change the globe.

Who knows, maybe we’ll team up on tackling some of the Global Goals and put a small dent in the universe.

You Might Also Like

10 Big Ideas from Getting Results the Agile Way

10 Personal Productivity Tools from Agile Results

Agile Results for 2016

How To Be a Visionary Leader

The Future of Jobs

The New Competitive Landscape

What Life is Like with Agile Results

Categories: Blogs

There Are No “Buts” in Progressive Enhancement

TV Agile - Mon, 02/01/2016 - 17:00
Progressive enhancement sounds practical, but not for your current software development project, right? Good news: you’re wrong! This presentation debunks the myths that often preclude individuals and organizations from embracing progressive enhancement and demonstrates solid techniques for applying progressive enhancement in your work. You will get * a better sense of the devices people are […]
Categories: Blogs
