
Feed aggregator

Product Storytelling

Agile Thinks and Things - Oana Juncu - Sat, 02/13/2016 - 21:50
Telling A Product Story @ BetterSoftware, Florence 2015

Products are stories, and users are their heroes. User experience is a story. If the customers we long for can project themselves into a narrative user experience, magic will happen. What is this magic? Customer loyalty beyond any rationale (gosh, how I dislike this word!). What is this magic about? It's about our social need to be useful, have a specific purpose and make a difference. Every company and organisation can grow wild if it holds that magic. Money is a collateral benefit. So what's your product story?
Vision - what is the story about?

Storytelling craftsmen (and women, of course!) say that before building a story we should answer this question: "What is the story about?". In product development, being able to answer this question means holding a vision. I usually ask three questions that anyone in a company should be able to answer:

"What is your product story about?" - Translated into business slang, this would be: "What context and problems does your product address?"

"Whose story is it?" - Basically this means: "For whom are you building your product(s)?" This is a fundamental question. We, as individuals, make sense in connection with other individuals. We create these connections to build family, business, culture and society. I read a smart remark today that said that smartphones are not that far away from telephones. A wide majority of apps serve to communicate, just like our old telephone does. So I believe that "Who is this for?" is a more important question than the very popular "Why?". "Golden circles" are second. People are first. Still, "Why?" is an important structural question, and I like to ask it with a twist:

"Why is this story a good (the best?) one?" - In user experience language this might be put as: "Why do we believe our product(s) make our customers' lives better?"
If anyone in a company can answer these questions, that company has a vision. If anyone in a team can answer these questions, that team has a vision. Alignment happens when those stories are in sync. Leadership happens when those stories are told over and over.
Why Story? - A matter of neuroscience
Because we trigger empathy through stories people can project themselves into. Empathy is one of the most important qualities of the human brain, one that Cartesian and industrial thinking unfortunately underestimated. We are still largely in this mental model, where emotions and empathy are not serious matters, because there is no... rationale (gosh, how I dislike this word!). But you know, there is one: scientific research shows that our brain is designed to be connected to other brains. This is where empathy comes from. Why story? Because it is the only activity that helps us have an "integrated mind", which means aligning the "rational brain" that gives us a sense of time, sequence and causality, and helps us differentiate ourselves from others, with the "emotional brain". We can give a sense of purpose and meaning only with an "integrated mind". Therefore, creating a product like telling a story gives our work a sense of usefulness. And it has high chances to hook customers more and more.
Quality of Product, Quality of the Story
I host a workshop called "Why Agile Product Development Hooks Us - Demo for Our Brain" and I was asked: "How do you think that your technique of storytelling improves the quality of the development?". This is an interesting question, because "quality of development" can be so many things that you can lose your soul in it. But as long as we agree that fundamental quality is that special characteristic of a product that helps a customer achieve something useful for her, staying focused on "What story does my product tell?" and "Whose story is it?" gives us a high chance of achieving the fundamental quality of a product.
Related posts 
Purposeful Agile
Why Agile Development Hooks Us
Storymapping is the Plot of Your Product Story

Categories: Blogs

The Misaligned Middle: Getting IT Managers On Board with Change

Middle managers are drifting out of alignment with organizational objectives. Learn 2 paths IT shops are taking to dig their way out of the misalignment mess.

The post The Misaligned Middle: Getting IT Managers On Board with Change appeared first on Blog | LeanKit.

Categories: Companies

Lean Product Management

TV Agile - Thu, 02/11/2016 - 18:16
Product Management is an art of balancing customer needs with creating business value. Unfortunately, many of the tools and values we have as Product Managers do not focus on building products our customers need, but building what we “think” they will want. This presentation looks at traditional Product Management and explains how we can adapt […]
Categories: Blogs

Beautiful Teams

Scrum Expert - Thu, 02/11/2016 - 18:11
Let’s challenge some of the commonly accepted patterns for software development teams. A high degree of autonomy doesn’t turn into anarchy but rather helps to keep intrinsic motivation high. Participatory leadership means that every team member is a leader, yet it doesn’t mean competition. The decision-making process has nothing to do with power structures. Culture is paramount and it goes ahead of technical skills. Collaboration is ...
Categories: Communities

Agile Open Camp Argentina, Bariloche, Argentina, March 3-6 2016

Scrum Expert - Thu, 02/11/2016 - 12:00
The Agile Open Camp Argentina conference is a four-day event that aims to gather Agile and Scrum practitioners from Argentina and Latin America to share knowledge and experiences. All the talks will be in Spanish. The Agile Open Camp Argentina conference follows the open space format for conferences. Open space is a simple methodology for self-organizing conference tracks. It relies on participation by people who have ...
Categories: Communities

Nonfunctional Requirements – Who needs them?

Agile Estimator - Thu, 02/11/2016 - 06:13


First, what are nonfunctional requirements? The easiest way to explain that is to compare them to functional requirements. Functional requirements describe what an application must do. For example, if we were building a contact management system, then a typical functional requirement might be embodied in the story “as a user of the system, I should be able to add a new contact.” Another requirement might be “as a user of the system, I should be able to take a picture of a contact with my cell phone when adding the contact.” This requirement is non-functional. The first requirement talks about adding a contact, while the second one gives details about how to add the contact. The second requirement is also more involved with the technical solution. Of course, as every developer knows, both are real requirements. The technical requirements must be followed exactly. You cannot go to a user and tell him or her that you decided to substitute a really good paint program for the ability to take pictures. Functionally, they accomplish the same thing. Most users would not or could not use the paint program to enter the picture of a contact. If requirements could be sized, both the functional and non-functional requirements would add to that size.

In the early days of software development estimating, everything was driven by source lines of code (SLOC). Project managers and organizations first decided how many SLOC were in the application they were developing. Sometimes they estimated this based on similar projects they had done. Sometimes they had a different heuristic to arrive at this. For example, shops that developed satellite support software often said that they could estimate the number of lines of code that would be necessary based on the weight of the satellite. In any case, this tended to hide the impact of all of the requirements, both functional and non-functional. Once the SLOC was estimated, there were guidelines that project managers used for estimating, planning and monitoring. They would plan on completing a certain number of SLOC per team member for the entire project. They would plan for the coding phase to produce SLOC at twice the overall rate of the project as a whole. For example, if they thought that a team member would develop 1,200 SLOC in a year, that came to 100 SLOC per month overall; during a 6-month coding phase, that developer would be expected to write 200 SLOC per month.

Between the seventies and the eighties, three things happened that changed the way many people estimated software development. The first was that Alan Albrecht developed the function point measure. This was an attempt to estimate software size based on the requirements that were being implemented, both functional and non-functional. The second was Barry Boehm’s development of the Constructive Cost Model (COCOMO). By the way, Barry and I have discussed the matter and concluded that we are probably not related. COCOMO used SLOC to estimate software development time, but also incorporated some non-functional application characteristics which were referred to as cost drivers. The third, and final, piece of the estimating puzzle was supplied by Capers Jones. Capers Jones was interested in programmer productivity and the impact that programming languages had on that productivity. He developed a technique called backfiring to study how many SLOC were required to implement a function point. It was the interaction of these three developments that changed the way software development time was estimated. Each will be elaborated on below. This is not simply a history lesson. Conceptually, little has changed in the way estimating is done since then. Tools like SLIM and SEER-SEM, which are in common use today, continue to use the same principles.

In the seventies, IBM tasked Alan Albrecht with developing a method of estimating that did not depend on SLOC. IBM considered itself to be a multiple-programming-language shop, since it developed systems in both COBOL and PL/1. Albrecht devised a way to arrive at the size of a piece of software. First, he considered the functional requirements of the application. He looked at the data an application would read and maintain. He had weights to assign to these clusters of data. Then all of the data flows that entered and exited the system were considered. This included interface files, screens, reports and communications between applications. Weights were applied to these and the unadjusted function point count was calculated. Next, he considered the nonfunctional requirements. He identified 14 general systems characteristics (GSCs) and allowed them to be graded between 0 and 5. A typical GSC would ask if the system was designed to optimize the user interface. Once these 14 technical attributes were graded, the function point count would be adjusted. Theoretically, a 1,000 unadjusted function point count would take on a value between 650 and 1,350 adjusted function points, depending on these nonfunctional attributes of the application.
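
To make the arithmetic concrete, here is a small sketch of Albrecht's adjustment (the 0.65 + 0.01 × sum-of-grades formula is the published one; the function name and the sample grades are just illustrative):

// Value adjustment factor (VAF): each of the 14 general systems
// characteristics (GSCs) is graded 0..5, and
// VAF = 0.65 + 0.01 * sum(grades), i.e. a range of 0.65..1.35.
function adjustedFunctionPoints(unadjusted: number, gscGrades: number[]): number {
  if (gscGrades.length !== 14) throw new Error("expected 14 GSC grades");
  const vaf = 0.65 + 0.01 * gscGrades.reduce((sum, g) => sum + g, 0);
  return unadjusted * vaf;
}

// 1,000 unadjusted FP lands anywhere between 650 and 1,350 adjusted FP.
console.log(adjustedFunctionPoints(1000, new Array(14).fill(0))); // ~650
console.log(adjustedFunctionPoints(1000, new Array(14).fill(5))); // ~1350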

In 1981, Barry Boehm published “Software Engineering Economics.” It covered several topics, but primarily it described COCOMO. COCOMO came in three or four different flavors: basic, intermediate (there were two variations of this) and detailed. They were all driven by SLOC. For basic COCOMO, SLOC was the only independent variable. It would calculate effort and schedule. This was a leap beyond the SLOC delivery rates that many people used. COCOMO could estimate 10K SLOC applications or 100K SLOC applications. It knew that the larger one did not simply cost 10 times as much as the former. It had the history built into it to predict the impact that larger projects had on delivery rates. In addition, intermediate and detailed COCOMO introduced the impact of cost drivers. There were two types of cost drivers: nonfunctional requirements and project-related variables. The nonfunctional requirements were like the function point GSCs. In fact, they occasionally overlapped. An example of a nonfunctional cost driver was to decide whether the application would be accessing large databases. Project-related variables included assessments of the experience and capabilities of project managers, analysts and coders. There was also a driver that allowed the delivery of the project to be accelerated.
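
The shape of basic COCOMO is simple enough to sketch (the coefficients below are the published COCOMO 81 organic-mode values; treat the snippet as an illustration, not an estimating tool):

// Basic COCOMO 81, organic mode: effort = 2.4 * KLOC^1.05 person-months,
// schedule = 2.5 * effort^0.38 months. The exponent above 1 is what
// captures the non-linear cost of larger projects.
function basicCocomoOrganic(kloc: number): { effortPM: number; scheduleMonths: number } {
  const effortPM = 2.4 * Math.pow(kloc, 1.05);
  const scheduleMonths = 2.5 * Math.pow(effortPM, 0.38);
  return { effortPM, scheduleMonths };
}

console.log(basicCocomoOrganic(10));  // ~27 person-months over ~9 months
console.log(basicCocomoOrganic(100)); // ~302 person-months: more than 10x the 10 KLOC effort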

During the nineties, Capers Jones developed a Programming Languages Table. The table took the dozens of programming languages in use and predicted how many SLOC would be required to implement a function point of functionality. For example, Java was rated at 53 lines per function point. This meant that if someone had developed an application using 53,000 SLOC of Java, then they had probably implemented a 1,000 function point application. Capers was the chairman of Software Productivity Research (SPR). They used this table in several ways. For example, if they were trying to estimate how many function points an organization had in their portfolio of applications, they could have them counted. However, Jones made the argument that counting function points was a laborious process that had to be done by an (IFPUG) Certified Function Point Specialist (CFPS). As an alternative, SPR would count the total SLOC by language and estimate the function points based on that. For example, if an organization had 5.3 million SLOC of Java in their application portfolio, SPR could estimate that they had 100,000 function points in the portfolio. This process was called backfiring. The table could also be used to advise clients on productivity improvements from using different programming languages.
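
Backfiring itself is a one-line division; here is a sketch using the figures quoted above (the two-entry table is a stand-in for Jones's full Programming Languages Table):

// Backfiring: estimate function points from counted SLOC using a
// language's SLOC-per-function-point ratio.
const slocPerFunctionPoint: Record<string, number> = { java: 53, cobol: 107 };

function backfire(sloc: number, language: string): number {
  const ratio = slocPerFunctionPoint[language];
  if (ratio === undefined) throw new Error(`no ratio for ${language}`);
  return sloc / ratio;
}

console.log(backfire(5_300_000, "java")); // 100,000 FP, as in the example above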

The Programming Languages Table could also be used in a forward fashion. This became the key to developing estimates based on requirements. Function points for an application were estimated using the rules that Alan Albrecht had established. The Programming Languages Table could then be used to estimate how many lines of code might be needed to implement the application. In the example above, 1,000 function points could be developed in 53,000 lines of Java. There were schemes for mixed-language development. For example, if COBOL was being used to develop the reports, then a mix of 53 SLOC per function point and 107 SLOC per function point would be used. The resulting line count would then be fed into COCOMO to produce the estimate. Pretty straightforward, except for one thing. You guessed it: it has to do with nonfunctional requirements. Should unadjusted function points be used to generate SLOC, with COCOMO cost drivers used to cover the nonfunctional requirements? Or should we use the GSCs to account for the nonfunctional requirements, use the adjusted function points to generate SLOC and then feed that into COCOMO? I took the latter approach. Of course, I had to make sure that there was consistency between certain GSCs and certain COCOMO cost drivers.
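
Strung together, the forward pipeline looks something like this (a sketch only, reusing the organic-mode COCOMO coefficients from above; which function point count to start from is exactly the open question):

// Forward estimation: function points -> SLOC (languages table) -> COCOMO.
function estimateFromFunctionPoints(functionPoints: number, slocPerFP: number) {
  const kloc = (functionPoints * slocPerFP) / 1000;   // COCOMO works in KLOC
  const effortPM = 2.4 * Math.pow(kloc, 1.05);        // basic COCOMO, organic mode
  const scheduleMonths = 2.5 * Math.pow(effortPM, 0.38);
  return { kloc, effortPM, scheduleMonths };
}

// 1,000 FP of Java at 53 SLOC per FP -> 53 KLOC, then effort and schedule.
console.log(estimateFromFunctionPoints(1000, 53));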

In 2000, Barry Boehm and his team published “Software Cost Estimation with COCOMO II.” Like COCOMO 81, this is both a book and a model. COCOMO 81 was primarily for estimating development using waterfall methodologies. COCOMO II addressed spiral methodologies, evolutionary development models and software being developed using commercial off-the-shelf (COTS) application-composition tools. It does not specifically address agile development.  However, the book describes some extensions to the model.  These include phase distribution of schedule and effort (COPSEMO) and rapid application development effort and schedule adjustments (CORADMO). I have gotten reasonable estimates of agile development by using them. In COCOMO II, Boehm took a stand on the issue of whether to use adjusted or unadjusted function points. He said to use unadjusted function points. It could have been that he felt that his cost drivers accounted for nonfunctional requirements better than the function point GSCs did. Of course, this left unanswered one of the questions that often came up with either version of COCOMO. If my estimate is driven by SLOC, the lines of code should reflect code that is written to implement non-functional requirements. If it is driven by unadjusted function points, does this understate the SLOC that are derived from them?

In 2007, IFPUG started to take a fresh look at non-functional system requirements. The GSC approach had both mathematical and technological problems. The mathematical problem resulted from coupling the functional and non-functional measures. It was particularly visible when measuring enhancements. For example, suppose an enhancement project required that 40 reports have functional changes as well as increased complexity, a non-functional requirement. The count for the functional change would be 5 x 40 = 200 unadjusted function points. If the value adjustment factor had been 1.00 before the increase, then it would be 1.01 after it. The adjusted function point count would be 202. The non-functional requirements would have added 2 function points. However, imagine the same project with 40 reports changing in complexity but only one of them with a functional change. The function point counts would be 5 unadjusted and 5.05 adjusted function points. The non-functional requirement that had added 2 function points before now contributes an infinitesimal 0.05 function points. Technologically speaking, the GSCs asked questions such as whether tape mounts and special forms handling were being minimized. These were references to technology that had not been used in more than a decade. IFPUG’s Software Non-functional Assessment Process (SNAP) generates a measure that is independent of the functional size and is technologically up-to-date.
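
The coupling distortion is easy to reproduce with the numbers from the example (each changed report counted at 5 function points, and the value adjustment factor moving from 1.00 to 1.01):

// The same non-functional change (VAF +0.01) contributes wildly different
// amounts depending on how large the functional change happens to be.
const nonFunctionalContribution = (unadjustedFP: number) => unadjustedFP * 0.01;

console.log(nonFunctionalContribution(200)); // ~2 FP when 40 reports change functionally
console.log(nonFunctionalContribution(5));   // ~0.05 FP when only 1 report does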

Now, all we need is for Capers Jones to develop a SNAP-based backfiring approach. If we could say something like “it takes 18 lines of Java code to implement a SNAP point,” then we could estimate by taking the function points and the SNAP points and turning them into SLOC. The SLOC could then be used to derive a COCOMO II estimate of schedule and effort. Of course, there is no guarantee we would ever be able to make that type of statement. The functional and nonfunctional requirements might interact in a more complex fashion. Dr. Charley Tichenor has found a dollar amount he can associate with each SNAP point for development work done in his organization. He says that your mileage will vary. At an IFPUG conference in 2015, Dr. George Mitwasi talked about some experiences he had using SNAP. He was using a factor of 3 SNAP points per function point for estimating. In other words, an application that had 100 function points and 300 SNAP points would be estimated as if it were 200 function points in size. He admits that his work is still preliminary. It is only a matter of time until SNAP points are integrated into existing estimating models.
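
If a factor like Mitwasi's ever firms up, it would slot into an estimating pipeline as simply as this (a hypothetical sketch; 3 SNAP points per function point is his preliminary figure, not a published standard):

// Hypothetical: fold SNAP points into an equivalent functional size.
const SNAP_POINTS_PER_FP = 3; // preliminary figure, not a standard

function equivalentFunctionPoints(functionPoints: number, snapPoints: number): number {
  return functionPoints + snapPoints / SNAP_POINTS_PER_FP;
}

console.log(equivalentFunctionPoints(100, 300)); // 200, as in the example above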

Categories: Blogs

[AgileScotland] Lean Agile Scotland Conference Dates Announced - 6, 7, 8 OCTOBER 2016

Agile Scotland - Wed, 02/10/2016 - 16:15
Hello!  I just noticed Chris has announced the dates for this year's Lean Agile Scotland conference - 6th, 7th, 8th October.
http://leanagile.scot/tickets/
Clarke
Categories: Communities

A Conversation on Patterns, Practices and System Architecture in Node.js

Derick Bailey - new ThoughtStream - Wed, 02/10/2016 - 14:30

In my last post, I talked about how I see folding code regions as an anti-pattern and a design error in code. That post was inspired by a conversation I’ve been having via email, and that conversation has taken a rather interesting turn.

It started as a question about what editor to use, and a desire to find a modern editor with code folding. But since then, the conversation has turned into a discussion on the realization that the “old” way we built large-scale software in .NET, Java and other “enterprise” languages, may not be so “old” after all.

With permission, I’m posting a slightly edited version of the conversation. There’s a lot of truth and shared experience here for developers that are jumping away from “enterprise” languages and heading toward the “green pastures” of Node.

On Bloated Express Router Methods

Them:

I’m noticing I’m bloating my routes with a lot of logic that should be in the model … validation logic for custom fields, create logic with subdocument logic (preventing duplicates), and logic to convert the model into lighter-weight DTOs.  

Any chance you can think of a few posts and/or videos that hone in on these topics?

Me:

These topics are epic-length articles and books, each :P

I don’t have a lot written about them, and the only recordings that generally touch this are the more recent WatchMeCode episodes on Express.

In general, you’re talking about the business of your application, the logic to do what you need. That layer will most likely live in a library of modules, outside of your web app.

Your routes, then, would gather information from the request and call out to the library modules as quickly as possible.

The goal is to reduce the number of calls in a single route handler, to keep your real business logic out of the web app and living in other modules where it can be called from the web app, background services, or wherever else it needs to be used.
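
A minimal sketch of that shape (the orders module and its placeOrder function are invented for illustration; the point is how thin the route stays):

import express from "express";
// Hypothetical domain module that lives outside the web app; a background
// service or CLI could call it just as easily as this route does.
import { placeOrder } from "./lib/orders";

const app = express();
app.use(express.json());

// The route only gathers request data, delegates, and shapes the response.
app.post("/orders", async (req, res) => {
  try {
    const order = await placeOrder(req.body); // business logic lives elsewhere
    res.status(201).json(order);
  } catch (err) {
    res.status(400).json({ error: (err as Error).message });
  }
});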

The only exception to this, off-hand, is validation.

On Validation

Me (continued):

In my experience, validation is something that lives in the application layer – the web app, the API, or the whatever-application-host-you-are-using layer.

I find that it’s rare for a model to have a single set of validation rules that apply to every single use of that model. The web app may expect one set of rules to run at a certain time, while the API may expect all of the rules to run together, or have a different set of rules, entirely.

When it comes to sharing validation rules (which will happen – you don’t want to write the same “name is required” validation more than once), I’ve found composition of shared rules to work best. If you have a “user” model, you may have a separate “user validations” library from which you can pick and choose which validations are used, in what scenario.
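
Here is a sketch of that composition idea (all the names are invented; the point is picking rules per scenario instead of baking one set into the model):

// A validation rule takes a value and returns an error message or null.
type Rule<T> = (value: T) => string | null;

interface User { name?: string; email?: string; }

// Shared rules, written once...
const nameRequired: Rule<User> = (u) => (u.name ? null : "name is required");
const emailRequired: Rule<User> = (u) => (u.email ? null : "email is required");

// ...composed differently per application layer.
const webAppRules = [nameRequired];
const apiRules = [nameRequired, emailRequired];

function validate<T>(value: T, rules: Rule<T>[]): string[] {
  return rules.map((rule) => rule(value)).filter((e): e is string => e !== null);
}

console.log(validate({ name: "Ada" }, webAppRules)); // []
console.log(validate({ name: "Ada" }, apiRules));    // ["email is required"]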

Everything Old Is New Again

Me (continued):

None of this is new ground, of course. It’s all just the same questions over and over and over again, now applied to Node instead of C# or Java or Ruby or whatever.

It would be good to reach outside of Node, then, to find the answers to these questions in other languages.

For example, on my old LosTechies blog, I wrote a post with questions about validation and collected a list of links in response. It’s all about C# and domain driven design (“DDD”), but the concepts are still valid. You might read through those to get an idea.

Ultimately, these can be long and difficult subjects to tackle. My best advice is to try and read between the lines in my videos.

Notice how my routes continue to remain small, and notice where I am putting code that you would have previously put in the routers. Even if the type of code is not the same as what you’re doing, the underlying principle is.

Them:

[sigh] … yeah, I basically went through that same thought process when I hit “send.”

In a surprising twist, the move from .NET to MEAN Stack… and trying to do things the “MEAN Stack way”… has actually made me make a lot of 101-level mistakes.

The world of OOP/OOD and .NET is chock full of concepts that end up becoming instinctive over time… separation of layers, ACID, whatever… that I’ve almost forgone while trying to “simplify” the apps I’m working on.

Take validation, for example. I can’t stand it when developers craft custom types in SQL Server and would almost never write a field-level validation rule in a trigger.  However, that’s essentially what is happening when you allow Mongoose to validate the model.

The service layer seems to suffer from the same type of ambiguity.  Had I been working in .NET, I’d have formal business libraries, service libraries, and the service host (WCF, ServiceStack, etc.) would only be exposing those operations relevant for the service layer.  

However, in MEAN, it almost feels customary to blur the business and service logic into the routes and flatten the back-end stack.

It’s odd, but I’ve tried so hard to create that flattened / simplified stack that I’ve almost gone too far.

Those Who Cannot Remember The Past…

Me (continued):

We certainly have gone too far – almost all of us, as a JavaScript community.

We tried to throw out the “old” way of doing things because we were on a “new” platform, and it turns out we’re making the same mistake as Ruby on Rails when it made fun of Java for all those years:

Patterns? Enterprise scale?! HAHAHAHAHA! That’s so DUMB!

And all of this eventually turned into:

“Oh… wait… maybe these ‘patterns’ things were a good idea. And hey… Java and .NET? Umm… How did you grow your apps with the enterprise needs, again?”

I did the same thing when I moved from .NET to Rails, years ago. I thought, “It’s a Rails app! Everything goes in here and I don’t have to worry about it anymore!”

How wrong I was.

… Are Condemned To Repeat It

Me (continued):

My journey into Node was the same story at first.

“It’s just Javascript. How hard can this be?!”

It turns out it can be very hard when you bloat your routers and embed your business code into your web application.

I learned a lot of these lessons the hard way when I was building SignalLeaf, and as I have been building apps for my one client.

In the end, I’ve realized that the “old” way is still the “right” way for the most part.

Large systems are not to be giant monoliths. They are to be composed of smaller applications that each have a specific focus.

Applications are not to be monolithic. They are to be composed of smaller modules and libraries, with the core business logic separated from the application shell as much as is reasonable.

Modules and libraries are not to be monolithic. They are to serve one purpose and serve it well…

…it’s turtles all the way down.

On Timeless Principles and Patterns

Me (continued):

All the principles, patterns and practices of large scale .NET systems that I used to build, still apply to Node.

And I completely understand what you mean about using Mongo / Mongoose to validate the model. This is what drove the giant beast of a model that I showed in my post on code folding.

My love of Mongoose has quickly diminished, as I’ve seen the damage I can do with it. I don’t dislike it… but I’m certainly questioning a lot of what I’ve done with it and a lot of what it allows (or wants) me to do.

I am seeing a need to look at the old principles and patterns of my .NET days. I am seeing a need to take the modular and component-based architectures that I’ve built with Backbone and Marionette, and do the same on the back-end again.

The implementation may look different when you get into the weeds, but a large scale system still looks like a large scale system, whether it’s built in .NET, Rails, Node, Rust, Go, Java, Erlang, Haskell, or all of the above.

You Are Not Alone

Them:

It sure is refreshing to know that I’m not alone.

Categories: Blogs

SonarLint for Visual Studio: Let’s Fix Some Real Issues in Code!

Sonar - Wed, 02/10/2016 - 10:38

As part of the development process of SonarLint for Visual Studio we regularly check a couple of open source projects, such as Roslyn, to filter out false positives and to validate our rule implementations. In this post we’ll highlight a couple of issues found recently in the Roslyn project.

Short-circuit logic should be used to prevent null pointer dereferences in conditionals (S1697)

This rule recognizes a few very specific patterns in your code. We don’t expect any false positives from it, so whenever it reports an issue, we know that it found a bug. Check it out for yourself; here is the link to the problem line.

When body is null, the second part of the condition will be evaluated and throw a NullReferenceException. You might think that the body of a method can’t be null, but even in syntactically correct code it is possible. For example, method declarations in interfaces, abstract or partial methods, and expression-bodied methods or properties all have null bodies. So why hasn’t this bug shown up yet? This code is only called in one place, on a method declaration with a body.
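
The safe shape the rule checks for, reduced to a language-neutral sketch (TypeScript here for brevity, not Roslyn's actual C#):

interface Block { statements: unknown[]; }

// With ||, the right-hand side runs exactly when body IS null, so the
// dereference throws. With &&, the right-hand side is skipped when body
// is null: the short circuit makes the dereference safe.
function hasStatements(body: Block | null): boolean {
  return body !== null && body.statements.length > 0;
}

console.log(hasStatements(null));                // false, no crash
console.log(hasStatements({ statements: [1] })); // true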

The ternary operator should not return the same value regardless of the condition (S2758)

We’re not sure if this issue is a bug or just the result of some refactoring, but it is certainly confusing. Why would you check isStartToken if you don’t care about its content?

“IDisposables” should be disposed (S2930)

Lately we’ve spent some effort on removing false positives from this rule. For example, we’re not reporting on MemoryStream uses anymore, even though it is an IDisposable. SonarLint only reports on resources that should really be closed, which gives us high confidence in this rule. Three issues ([1], [2], [3]) are found on the Roslyn project, where a FileStream, a TcpClient, and a TcpListener are not being disposed.

Method overloads with default parameter values should not overlap (S3427)

Mixing method overloads and default parameter values can result in cases when the default parameter value can’t be used at all, or can only be used in conjunction with named arguments. These three cases ([1], [2], [3]) fall into the former category: the default parameter values can’t be used at all, so it is perfectly safe to remove them. In each case, whenever only the first two arguments are supplied, another constructor will be called. Additionally, in this special case, if you call the method like IsEquivalentTo(node: myNode), then the default parameter value is used, but if you use IsEquivalentTo(myNode), then another overload is being called. Confusing, isn’t it?

Flags enumerations should explicitly initialize all their members (S2345)

It is good practice to explicitly set a value for your [Flags] enums. It’s not strictly necessary, and your code might function correctly without it, but still, it’s better safe than sorry. If the enum has only three members, then the automatic 0, 1, 2 field initialization works correctly, but when you have more members, you most probably don’t want to use the default values. For example, here FromReferencedAssembly == FromSourceModule | FromAddedModule. Is this the desired setup? If so, why not add it explicitly to avoid confusion?
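
TypeScript numeric enums auto-increment the same way C# enums do, so the pitfall is easy to show outside Roslyn (a sketch; these are not the real declarations):

// Auto-initialized members get 0, 1, 2, 3... -- fine for three members, but
// the fourth (3 == 1 | 2) silently equals the second and third combined.
enum LookupSource {
  None,                   // 0
  FromSourceModule,       // 1
  FromAddedModule,        // 2
  FromReferencedAssembly, // 3 == FromSourceModule | FromAddedModule -- intended?
}

// Explicit powers of two remove the ambiguity.
enum LookupSourceFlags {
  None = 0,
  FromSourceModule = 1 << 0,
  FromAddedModule = 1 << 1,
  FromReferencedAssembly = 1 << 2,
}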

“async” methods should not return “void” (S3168)

As you probably know, async void methods should only be used in a very limited number of scenarios. The reason for this is that you can’t await async void method calls. Basically, these are fire-and-forget methods, such as event handlers. So what happens when a test method is marked async void? Well, it depends. It depends on your test execution framework. For example, NUnit 2.6.3 handles them correctly, but the newer NUnit 3.0 dropped support for them. Roslyn uses xUnit 2.1.0 at the moment, which does support running async void test methods, so there is no real issue with them right now. But changing the return value to Task would probably be advisable. To sum up, double check your async void methods; they might or might not work as you expect. Here are two occurrences from Roslyn ([1], [2]).

Additionally, here are some other confusing pieces of code that are marked by SonarLint. Rule S2275 (Format strings should be passed the correct number of arguments) triggers on this call, where the formatting arguments 10 and 100 are not used, because there are no placeholders for them in the format string. Finally, here are three cases ([1], [2], [3]) where values are bitwise OR-ed (|) with 0 (Rule S2437).

We sincerely hope you already use SonarLint daily to catch issues early. If not, you can download SonarLint from the Visual Studio Extension Gallery or install it directly from Visual Studio (Tools/Extensions and Updates). SonarLint is free and already trusted by thousands of developers, so start using it today!

Categories: Open Source

3 Questions Every IT Ops Person Hears (Again and Again)

Visualizing the IT Ops workflow changes the way teams work, and helps teams communicate the value of IT Operations in the organization.

The post 3 Questions Every IT Ops Person Hears (Again and Again) appeared first on Blog | LeanKit.

Categories: Companies

Tools to Assess and Manage Technical Debt

Scrum Expert - Tue, 02/09/2016 - 17:49
Technical debt is a metaphor coined by Ward Cunningham in 1992. This concept refers to the work that needs to be done so that a software development project can be considered “complete”. Could you try to measure your amount of technical debt? Could you use some tools to do this? These are some of the questions that this article explores. In a brief informal survey ...
Categories: Communities

Thinking About Servant Leadership and Agile Project Management

Johanna Rothman - Tue, 02/09/2016 - 15:57

For many people, agile means an end to all project management. I disagree. I find value in servant leadership in project management.

I explain how you can think about servant leadership and agile project management in my projectmanagement.com column this month: Servant Leadership: The Agile Way.

If you are looking to increase your servant leadership and help your project team (or program), check out the Influential Agile Leader.

Categories: Blogs

What to do when safety is low in a retrospective

Ben Linders - Tue, 02/09/2016 - 11:38
At the start of an agile retrospective you can do a safety check by asking people to write down how safe they feel in the retrospective. If the score indicates that people feel unsafe, then that will have a serious impact on the retrospective. Here are some suggestions on how you can deal with this when facilitating retrospectives. Continue reading →
Categories: Blogs

How to correctly fill in a Story?

IceScrum - Tue, 02/09/2016 - 11:27
Hello everybody and welcome to this first blog post of 2016! In this article I am going to present quite a simple topic, which is nevertheless mandatory to make your Scrum project a success: “How to correctly fill in your stories?”. Story fields: first, let’s review the fields in their order of appearance. Name…
Categories: Open Source

The Dark Side of Javascript Fatigue

Javascript fatigue is a real experience for many developers who don’t spend their day-to-day in Node.js bashing out javascript. For many developers, javascript is an occasional concern. The thing I can’t figure out about the javascript development world is the incredible churn. Churn is often a disaster for a programming community. It frustrates anyone trying to build a solid application that will have a shelf life of a decade or more. Newcomers are treated to overwhelming choices without enough knowledge to choose. Then they find what they’ve learned is no longer the new and shiny tool only a few months later. And anyone on the outside feels validated in not jumping in.

Many in the javascript community attempt to couch all the churn as a benefit. It’s the incredible pace of innovation. I see sentiments like this:

The truth is, if you don’t like to constantly be learning new things, web development is probably not for you. You might have chosen the wrong career!
Josh Burgess

Even if we accept that all the ‘innovation’ is moving things forward more quickly, there is rarely any reflection on the consequences. I’ve worked on an approximately 9-year-old Rails app for about 5 years now and I’m still shocked by the number of different frameworks and styles of javascript that litter the app:

  • Hand rolled pre JQuery javascript
  • Javascript cut and paste style
  • RJS (an attempt to avoid writing javascript altogether in early rails)
  • YUI
  • Prototype
  • Google Closure
  • JQuery
  • Angular

Eight different frameworks in about as many years. And though we adopted Angular about 2 years ago, we’re already dealing with a non-backwards-compatible rewrite, Angular 2.0. This is a large burden on maintenance, and it costs us very real time to spin up on each one when we have to enhance the app or fix a bug.

This is a monolithic app that’s been built over quite a few years, but the big difference is that the Rails app was opinionated and stuck to a lot of default conventions. The framework churn of Rails has been much more gradual and generally backwards compatible. The largest pain we experienced was going from Rails 2 to 3, when Rails was merged with Merb. The knowledge someone built up in their first few years working in Ruby and Rails still applies. The churn certainly exists, but at a measured pace.

In phone screens when I describe our main app, I list off the myriad javascript frameworks we use as a negative they should know about. And almost none of the candidates have heard of Google Closure, even though a critical piece of the app was written in it. They often assume I must be talking about the JVM Clojure.

Javascript has never been popular because of elegance or syntax. Rants like the following are not hard to find:

You see the Node.js philosophy is to take the worst fucking language ever designed and put it on the server.
Drew Hamlett

A large majority of developers would rather avoid it completely, focus on any modern language, and hopefully use a transpiler if they have to touch Javascript. In this environment, it might do the javascript community some good to settle down and focus on some stability.

Categories: Blogs

Automated UI Testing with React Native on iOS

Xebia Blog - Mon, 02/08/2016 - 22:30

React Native is a technology to develop mobile apps on iOS and Android that have a near-native feel, all from one codebase. It is a very promising technology, but the documentation on testing can use some more depth. There are some pointers in the docs but they leave you wanting more. In this blog post I will show you how to use XCUITest to record and run automated UI tests on iOS.

Start by generating a brand new react native project and make sure it runs fine:
react-native init XCUITest && cd XCUITest && react-native run-ios
You should now see the default "Welcome to React Native!" screen in your simulator.

Let's add a textfield and display the results on screen by editing index.ios.js:

class XCUITest extends Component {

  constructor(props) {
    super(props);
    this.state = { text: '' };
  }

  render() {
    return (
      <View style={styles.container}>
        <TextInput
          testID="test-id-textfield"
          style={{borderWidth: 1, height: 30, margin: 10}}
          onChangeText={(text) => this.setState({text})}
          value={this.state.text}
        />
        <View testID="test-id-textfield-result" >
          <Text style={{fontSize: 20}}>You typed: {this.state.text}</Text>
        </View>
      </View>
    );
  }
}

Notice that I added testID="test-id-textfield" and testID="test-id-textfield-result" to the TextInput and the View. This causes React Native to set an accessibilityIdentifier on the native view. This is something we can use to find the elements in our UI test.

Recording the test

Open the Xcode project in the ios folder and click File > New > Target. Then pick iOS > Test > iOS UI Testing Bundle. The defaults are ok, click Finish. Now there should be an XCUITestUITests folder with an XCUITestUITests.swift file in it.

Let's open XCUITestUITests.swift and place the cursor inside the testExample method. At the bottom left of the editor there is a small red button. If you press it, the app will build and start in the simulator.

Every interaction you now have with the app will be recorded and added to the testExample method, just like in the looping gif at the bottom of this post. Now type "123" and tap on the text that says "You typed: 123". End the recording by clicking on the red dot again.

Something like this should have appeared in your editor:

      let app = XCUIApplication()
      app.textFields["test-id-textfield"].tap()
      app.textFields["test-id-textfield"].typeText("123")
      app.staticTexts["You typed: 123"].tap()

Notice that you can pull down the selectors to change them. Change the "You typed" selector to make it more specific, change the .tap() into .exists and then surround it with XCTAssert to do an actual assert:

      XCTAssert(app.otherElements["test-id-textfield-result"].staticTexts["You typed: 123"].exists)

Now if you run the test it will show you a nice green checkmark in the margin and say "Test Succeeded".

In this short blogpost I showed you how to use the React Native testID attribute to tag elements and how to record and adapt an XCUITest in Xcode. There is a lot more to be told about React Native, so don't forget to follow me on twitter (@wietsevenema).

Recording UI Tests in Xcode

Categories: Companies

The practice of reflection in action

thekua.com@work - Mon, 02/08/2016 - 20:53

In a previous article, I explained how the most essential agile practice is reflection. In this article, I outline examples how organisations, teams and people use reflection in action.

Reflection through retrospectives

Retrospectives are powerful tools that whole teams use to reflect on their current working practices to understand what they might do to continuously improve. As the author of “The Retrospective Handbook“, I am clearly passionate about the practice because retrospectives explicitly give teams permission to seek ways to improve and, when executed well, create a safe space to talk about issues.

Reflection through coaching

Effective leaders draw upon coaching as a powerful skill that helps individuals reflect on their goals and actions to help them grow. Reflective questions asked by a coach to a coachee uncover barriers or new opportunities for a coachee to reach their own goals.

Coaching is a skill in itself and requires time for both the person doing the coaching, and for the people being coached. When done well, coaching can massively improve the performance and satisfaction of team members by helping coachees reach their own goals or find ways to further develop themselves.

Reflection through daily/weekly prioritisation

I have run a course for Tech Leads for the past several years and in this course, I teach future Tech Leads to make time during their week to reflect and prioritise. I see many people in leadership positions fall into a reactive trap, where they are too busy “doing” without considering if it is the most important task they should be doing.

Effective leaders build time into their schedules to regularly review all their activities and to prioritise them. In this process, leaders also determine the best way of accomplishing these activities, which often means involving and enabling others rather than doing the work themselves.

Reflection through 1 to 1 feedback

When I work with teams, I teach team members the principles of giving and receiving effective feedback. I truly believe in the Prime Directive – that everyone is trying to do the best that they can, given their current skills and the situation at hand. A lot of conflict in working environments is due to different goals or different perspectives, and it is easy for people to be frustrated with each other.

When team members do not know how to give and receive feedback, being on either side can be a really scary prospect. 1 to 1 feedback gives people opportunities to reflect on themselves, make space for becoming personally more effective, and strengthen the trust and relationships of the people involved.

Reflection through refactoring

Refactoring is an essential skill for the agile software developer and a non-negotiable part of development.

Three strikes and you refactor – Refactoring: Improving the Design of Existing Code (Martin Fowler)

Developers should be making tiny refactorings as they write and modify software, as it forces developers to reflect on their code and think explicitly about better designs or ways of solving problems, one bit at a time.

Reflection through user feedback

In more recent years I have seen the User Experience field better integrated with agile delivery teams through practices such as user research, user testing, monitoring actual usage and collecting user feedback to constantly improve the product.

While good engineering practices help teams build systems right, only through user feedback can teams reflect on whether they are building the right system.

Conclusion

Reflection is the most powerful way that teams can become agile. Through reflection, teams can better choose the practices they want and gain value immediately because they understand why they are adopting different ways of working.

Categories: Blogs

Agile Game Development: The Essential Gems

Agile Game Development - Mon, 02/08/2016 - 19:44

In a week, I'll be launching my first online training course on Agile Game Development. This first course is an overview of Scrum and Kanban for game development, with a focus on the values and principles (gems) of *why* we do it. My aim was to provide broad training to many of the developers who don't get a chance to attend onsite or offsite training.
The training is hosted through FrontRowAgile.com, which hosts training for other areas of agile (such as agile estimating and planning training by Mike Cohn).
Members of the mailing list will receive discounts for training.  http://www.clintonkeith.com/mailing_list.html
Check out the free portions of the training below: https://youtu.be/DQzrFiMDoko

 https://youtu.be/nqPZnzh9680 

Categories: Blogs

Implement Scrum Using Team Foundation Server

TV Agile - Mon, 02/08/2016 - 18:42
This presentation explains how to use the new Agile project management tools of Team Foundation Server (TFS). Learn how to create and manage a product backlog, forecast and plan work for a sprint and manage the Sprint tasks using the new tools in TFS. Video producer: http://tv.ssw.com/
Categories: Blogs

Code Folding Is A Design Error

Derick Bailey - new ThoughtStream - Mon, 02/08/2016 - 16:53

Call it “code collapse”, “code folding”, “regions” or whatever your editor says it is called… just don’t call it a feature when it does little more than hide the monsters lurking in your code base.

You Shouldn’t Need Code Folding

The most common use case for code folding – well, more likely the reason it was created, in the first place – is to hide large chunks of code in your file. 

[Screenshot: a single 747-line file collapsed behind a code folding region]

Yes, that’s a single file with 747 lines of code, comments and formatting in it. I added a code fold just for this post, to illustrate the common use case: hiding monsters.

It’s a nightmare to work in this file, and that is ultimately the problem that code folding allows us to ignore.

The File Is Too Large

If you find yourself needing code collapse, your code is probably too large and needs to be split up into smaller chunks, potentially across more files. 

The screenshot above is a perfect example of this. 

In my haste to get things working, I didn’t bother to examine the reasons for the existence of certain pieces of code, early on. I didn’t realize that I was putting a lot of view-model formatting and similar code into this file. I was just putting things in there because it was easy and convenient to have everything in a single model / file. 

But over time, this file became more and more difficult to work with. I found myself needing to create specific view-models for specific screens, but unable to do so. I had so much code baked into this one file, that I couldn’t see a way to split it up when I needed to. 

I have dozens of view-specific models, multiple paths of code execution, features from different parts of the app and more, all crammed into this one file. 

And this one file has become a nightmare; a garbage dump for anything related to this model.

You Won’t Fix Problems That You Don’t See

It’s in our nature to ignore problems that we don’t see, even when we know they are there.

Code folding allows us to do exactly that – to have files with hundreds upon hundreds of lines of code, and not think about it because we’re not seeing it. 

But what problems are hiding in that massive file? What is in there that you don’t know about because you don’t want to open it up and look? 

Stop Code Folding And Face Your Nightmare Head On

If you find yourself looking at a file like I’ve shown above, turn off your code folding regions. Just disable the feature entirely, in your IDE. 

When you face the problem without hiding it behind a folded region, it becomes important; it becomes a problem to be solved, and it should be solved. It may not happen right away (probably won’t – we all have deadlines). But having the problem exposed will give you a constant reminder of the need for a better solution.

The hope of having the massive beast of code exposed is that it will eventually be cleaned up. One day, you’ll be so tired of looking at it and sifting through hundreds or thousands of lines of code, that you’ll actively seek ways to fix the problem. 

Face the nightmare of code in broad daylight. Force yourself to find a better way forward.

Categories: Blogs
