Feed aggregator

Cheating and building secure iOS games

Xebia Blog - Fri, 04/21/2017 - 08:53

You probably have one of the millions of games where you earn achievements and unlock specials on your iPad or iPhone. If you develop games, you've probably wondered whether people cheat at your games. In this blog we're going to show you how to try cheating yourself and how to build secure iOS games. The actual question […]

The post Cheating and building secure iOS games appeared first on Xebia Blog.

Categories: Companies

Lean Tool Roundup: Kanban is King

Enjoy this excerpt from the Lean Business Report. Download the full report here.
Part of the beauty...

The post Lean Tool Roundup: Kanban is King appeared first on Blog | LeanKit.

Categories: Companies

Remote Working – Recommended Books

Growing Agile - Thu, 04/20/2017 - 15:05

More and more people want to work from home. Here are some books we’d recommend on the topic of working remotely:

Categories: Companies

Waggle Dance -or- Standup Meeting

Agile Complexification Inverter - Thu, 04/20/2017 - 14:34
Bees do a dance that beekeepers refer to as the waggle dance...

It is with great pleasure that you can watch this dance and, using the power of science, have it translated into English.

Bee Dance (Waggle Dance) by Bienentanz GmbH
What does this have to do with Scrum?  The power of a metaphor was well known to the creators of Extreme Programming (XP) - so much so that it is one of only 12 "rules" that those really smart people decided to enshrine in their process.  It is also the rule least likely to be mentioned in any survey of software development practices.  Unless you happen to be chatting with Eric Evans, who might agree that he captured the underlying principle in Domain-Driven Design's Ubiquitous Language pattern.


Have you ever observed a great Scrum team using a classic tool of many innovative company environments - the physical visual management board (Scrum task board)? The typical behavior for a small group of people (say around 7 plus/minus 2) is to discover a form of dance: the speaker moves to the board and manipulates objects on it as they speak, giving everyone else the context of which story they are working on and which task they are telling us they have completed. Then they exit stage left - so to speak - and the next dancer approaches from stage right to repeat the dance segment. Generally speaking, one circuit of the group is a complete dance for the day. The team is then in sync with all their teammates, and may have negotiated last-minute changes to their daily plan as the dance proceeded. In my observation of this dance, great teams complete the ritual in about 15 minutes. They appear to need to perform this dance early in the morning to have productive days. And groups that practice this dance ritual well outperform groups that are much larger and groups that don't dance.


So going all honey bee meta for a moment...  Let's use our meta-cognition ability to think about the patterns.  We love to recognize patterns - our brain is well designed for that (one of the primary reasons a physical visualization of work is so much more productive, as an accelerator of happiness, than a virtualization of the same work items).

When do we use great metaphors?  In designing great NEW experiences for people who are reluctant to change, and in communicating the desired behaviors and the exciting new benefits of adopting something new.  I'm thinking of the 1984 introduction of the graphical user interface by the Apple pirate team that produced the GUI, the mouse, the pointer, the drop-down menu, etc.

Can you see a pattern in this... a pattern that relates to people changing systems and behaviors, disrupting the status quo?  It is resonating in my neurons. I'm having a heck of a time translating these waves of intuition from firing neurons, through the motor cortex, into my stupid fingers pounding out the purposefully slowing movements on a QWERTY keyboard to communicate with you across space-time.  If only we could dance!

See Also:

The Waggle Dance of the Honeybee by Georgia Tech College of Computing
How can honeybees communicate the locations of new food sources? Austrian biologist Karl von Frisch devised an experiment to find out! By pairing the direction of the sun with the flow of gravity, honeybees are able to explain the distant locations of food by dancing. "The Waggle Dance of the Honeybee" details the design of von Frisch's famous experiment and explains the precise grammar of the honeybee's dance language with high-quality visualizations.
This video is a design documentary developed by scientists at Georgia Tech's College of Computing in order to better understand, and share with others, the complex behaviors that can arise in social insects. Their goal at the Multi-Agent Robotics and Systems (MARS) Laboratory is to harness new computer vision techniques to accelerate biologists' research in animal behavior. This behavioral research is then used, in turn, to design better systems of autonomous robots.


I was just reminded of @davidakoontz's wonderful metaphor for the daily #Scrum: waggle dance :) pic.twitter.com/h3c1B49mkC

— Tobias Mayer (@tobiasmayer) April 7, 2017


Categories: Blogs

Agile Portugal, Lisbon, Portugal, 2-3 June 2017

Scrum Expert - Thu, 04/20/2017 - 10:30
Agile Portugal is a two-day international conference that gathers practitioners from the Agile software development and Scrum project management community with invited international leading experts...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Agile Coach Camp Denmark, Dragor, Denmark, May 18-20 2017

Scrum Expert - Thu, 04/20/2017 - 09:00
Agile Coach Camp Denmark is a three-day event that serves as an unconference for Agile coaches of Denmark and other countries. The Agile Coach Camp Denmark is a free, not-for-profit, practitioner-run...

Categories: Communities

The Joy (and Pain) of Leaving

Illustrated Agile - Len Lagestee - Wed, 04/19/2017 - 21:12

An Agile Coach Says Farewell

This day always comes and it always will. But it never gets any easier.

In the post “An Exit Strategy for the Agile Coach,” I discuss the journey from the time a coach arrives until the time has come for them to go. Well, this coach is experiencing the latter, as my time in the trenches with those I most recently served has ended.

Leaving never comes as a surprise in my line of work, and you would think I would start getting used to it. But since I haven’t, and I would rather not spend the money on therapy, I’ll do the next best thing…write about it.

So I’ve spent the past few weeks capturing my thoughts about what makes these opportunities so special and one word keeps bubbling up…relationships.

While producing better, faster, or more valuable outcomes is always something to strive for, what will often be remembered is the relationships formed and the bonds created. Creating an environment where a collection of unique and diverse humans connect and thrive will always be complicated. Guiding those relationships to innovate and build something together, even more so.

From the moment we arrive until the moment we leave, intentionally focusing on the behaviors we bring to our new relationships provides the opportunity for our bonds to strengthen. Those bonds form the foundation for designing and shaping our new work culture.

Attempting to create a new work culture without these bonds usually results in temporary improvements or a few quick wins but meaningful, long-lasting change will be elusive as we never talk about the things that really need talking about.

To help build those bonds (and they will come in handy later), here are a few of those behaviors I always need to focus on when I arrive:

Energy. Many relationships launch with energy and purpose. This is often because of the “newness” of our experience together. The trick is not only to keep this energy going but to intensify it when things get rough. When things do get tough, without strong bonds of relationship, whatever energy remains typically turns negative.

When organizations have lost their energy (or that energy has turned negative), it requires a few brave souls to lead the way by bringing a new level of vibrancy. Honestly, this usually doesn’t take much…it could be as simple as smiling a little more and starting to say good morning to each other.

Mostly though, it’s just a little sacrificial energy that is required at first, subtly shifting focus from what we need to what others need. This small but never easy shift signals we are serious about changing and that we are in this for the long haul.

Energy brings momentum to relationships.

Openness. Purposeful energy will always meet resistance. In fact, if you’re not meeting resistance, change isn’t happening. To build a sustaining movement strong enough to overcome this resistance, we need to know the resistance. Knowing our resistance requires people to be vocal about their experiences and brave enough to share what they know and who they are.

Change initiatives fail, I believe, for one reason: lack of bravery. Any reduction in bravery (openness) by any one person shrinks our ability to build on our initial energy and smash through our resistance by just that much.

Openness brings bravery to relationships.

Listening. Creating an environment of openness is meaningless unless we listen to what others are opening up about. People lose bravery because their voice has been, or is being, silenced.

Each day there is a vacuum of silence waiting to be filled. It often feels as if we are in a race to fill this vacuum with our own words as quickly as we can (he writes while looking in the mirror).

Bravery requires space for the “unbrave” to become brave. Silence creates this space and provides an invitation for people to step in when they otherwise wouldn’t. If you are already brave, maybe a season of silence would be appropriate to allow space for others to become bigger.

Listening provides the space for growth in our relationships.

Growth. The expansion of radical thinking, fresh ideas, and personal bravery can only happen if openness (not being afraid to share) and listening (not being afraid to receive) are fully present.

And when this happens, magic ensues. The strengths of each individual blend together to create something no one person would ever have believed possible.

Would you like to test whether the opportunity for growth is present? Continuously ask yourself and others this question: “Do others have more confidence (feel braver)?” If the answer is yes, a little smile just crept onto your face when you read the question. You’ve felt and experienced what a growing-confidence environment feels like. If the answer is no, check your energy, check your openness, and check your amount of listening. One or all is missing.

Growth creates something magical from our relationships.

Community. There is beauty in a reality where we can be ourselves and express true feelings. We laugh. We cry. We talk. We debate. We celebrate. We grieve. We know about each other’s life outside of work. Some days we may not really like each other, but we know we need each other. This is real life. This is community.

Organizations don’t need another framework, and they don’t need to “scale” the one they have. They need the collective ability to sense when that nasty old resistance is reappearing and to collectively, instinctually overwhelm and destroy it. This happens through the strength of community…can’t wait to share more in the next post and podcast.

Community multiplies our relationships.

As this current chapter closes and a new one begins, my feelings are bittersweet. While there is pain in knowing we won’t get to experience everyday life together there is much more joy. This joy comes from knowing we had this time together in the first place and we truly experienced life together. All the highs and all the lows. Perfect.

I mentioned in “8 Ways to Measure Your Impact as an Agile Coach” that there is no way of really knowing if this was a “successful” coaching experience. Only time will tell if the seeds planted will take root.

But I do know this…

We will always have our community. I can’t wait to see what the future holds for each of you. I appreciate you all.

Becoming a Catalyst - Scrum Master Edition

The post The Joy (and Pain) of Leaving appeared first on Illustrated Agile.

Categories: Blogs

Agile Alliance Announces AGILE2017 Program

Scrum Expert - Wed, 04/19/2017 - 18:12
The Agile Alliance has announced the program for AGILE2017, the largest international gathering of Agilists. The conference is widely considered the premier global event for the advancement of Agile...

Categories: Communities

Scrum Alliance Creates Partnership With LeSS Company

Scrum Expert - Wed, 04/19/2017 - 17:34
The Scrum Alliance has announced a partnership with LeSS Company to support widespread adoption of Large-Scale Scrum (LeSS). Scrum Alliance interim CEO Lisa Hershman said, “Recognizing that...

Categories: Communities

Storytelling: the Big Picture for Agile Efforts

Scrum Expert - Wed, 04/19/2017 - 16:32
Agile reminds us that the focus of any set of requirements needs to be on an outcome rather than a collection of whats and whos. Storytelling is a powerful tool to elevate even the most diehard...

Categories: Communities

Breaking the SonarQube Analysis with Jenkins Pipelines

Sonar - Wed, 04/19/2017 - 15:14

One of the most requested features regarding SonarQube scanners is the ability to fail the build when quality is not at the expected level. We have this built-in concept of a quality gate in SonarQube, and we used to have a BuildBreaker plugin for this exact use case. But starting from version 5.2, aggregation of metrics is done asynchronously on the SonarQube server side. This means the build/scanner process finishes successfully just after publishing raw data to the SonarQube server, without waiting for the aggregation to complete.

Some people tried to resurrect the BuildBreaker feature by implementing active polling at the end of the scanner execution. We never supported this solution, since it defeats one of the benefits of having asynchronous aggregation on the SonarQube server side: it means your CI executors/agents are occupied “just” for a wait.

The cleanest pattern to achieve this is to release the CI executor and have the SonarQube server send a notification when aggregation is completed. The CI job is then resumed and takes the appropriate actions (not only marking the job as failed; it could also send email notifications, for example).

All of this is now possible thanks to the webhook feature introduced in SonarQube 6.2. We also take advantage of the Jenkins Pipeline feature that allows part of a job’s logic to be executed without occupying an executor.

Let’s see it in action.

First, you need SonarQube server 6.2+. In your Jenkins instance, install the latest version of the SonarQube Scanner for Jenkins (2.6.1+). You should, of course, configure the credentials to connect to the SonarQube server in the Jenkins administration section.

In your SonarQube server administration page, add a webhook entry:

https://<your Jenkins instance>/sonarqube-webhook/


Now you can configure a pipeline job using the two SonarQube keywords ‘withSonarQubeEnv’ and ‘waitForQualityGate’.

The first one should wrap the execution of the scanner (which will occupy an executor), and the second one will ‘pause’ the pipeline in a very lightweight way, waiting for the webhook payload.

node {
  stage('SCM') {
    git 'https://github.com/foo/bar.git'
  }
  stage('build & SonarQube Scan') {
    withSonarQubeEnv('My SonarQube Server') {
      sh 'mvn clean package sonar:sonar'
    } // SonarQube taskId is automatically attached to the pipeline context
  }
}
 
// No need to occupy a node
stage("Quality Gate") {
  timeout(time: 1, unit: 'HOURS') { // Just in case something goes wrong, pipeline will be killed after a timeout
    def qg = waitForQualityGate() // Reuse taskId previously collected by withSonarQubeEnv
    if (qg.status != 'OK') {
      error "Pipeline aborted due to quality gate failure: ${qg.status}"
    }
  }
}

Here you are:


That’s all Folks!

Categories: Open Source

With ES7 And Beyond, Do You Need To Know ES6 Generators?

Derick Bailey - new ThoughtStream - Wed, 04/19/2017 - 13:00

A few years ago, JavaScript introduced the idea of generators to the language. It’s a tool that was absolutely needed in the JavaScript world, and I was very happy to see it added, even if I didn’t like the name at the time.

But now, after a few years of seeing generators in the wild and using them in my code, it’s time to answer the big question.

Do you need to learn generators?

Before I get to the big secret … two secrets, really … it’s important to understand what generators are, which informs why they are so important.

What Is A Generator?

The .NET world has called them “iterators” for 12 years. But I think JavaScript was following Python’s lead with “generators”. You could also call them “coroutines”, which may be the fundamental concept on which generators and iterators are built.

Ok, enough computer science lessons… what is a generator, really?

A generator will halt execution of a function for an indefinite period of time when you call “yield” from inside a function.

The code that invoked the generator can then control exactly when the generator resumes… if at all.

And since you can return a value every time execution halts (you “yield” a value from the function), you can effectively have multiple return values from a single generator.

That’s the real magic of a generator – halting function execution and yielding values – and this is why generators are so incredibly important, too.

But a generator isn’t just one thing.

There Are Two Parts To A Generator

There’s the generator function, and the generator iterator (I’ll just call that an iterator from here on out).

A generator function is defined with an * (asterisk) near the function keyword or name. This function is responsible for “yield”-ing control of its execution – with a yielded value if needed – to the iterator.

An iterator is created by invoking the generator function.

Once you have an iterator, you can … you guessed it, iterate over the values that the generator function yields.

The result of an iterator’s “.next()” call has multiple properties to tell you whether or not the iterator is completed, provide the value that was yielded, etc.

But the real power in this is that when you call “it.next();”, the function continues executing from where it left off, pausing at the next yield statement.

This means you can execute a method partway, pause it by yielding control to the code that made the function call, and then later decide if you want to continue executing the generator or not.
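As a small illustration of that pause-and-resume flow (the function name here is made up for the example):

```javascript
// A generator function is marked with an asterisk. Each call to the
// iterator's next() resumes the function from the last yield, and the
// value passed to next() becomes the result of that yield expression.
function* greeter() {
  const first = yield "ready for first word";
  const second = yield "ready for second word";
  return first + " " + second;
}

const it = greeter();
console.log(it.next());        // { value: 'ready for first word', done: false }
console.log(it.next("hello")); // { value: 'ready for second word', done: false }
console.log(it.next("world")); // { value: 'hello world', done: true }
```

Note that the function body doesn't start running until the first `next()` call, and the generator decides nothing about *when* it resumes: that's entirely up to the calling code.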

For more detail on this, I’ll list some great resources for what generators can do, below.

Right now, though, you should know about the real value and power of generators: how they make async functions a possibility.

Secret #1: Async Functions Use Generators

The truth is, async functions wouldn’t exist without generators. They are built on top of the same core functionality in the JavaScript runtime engine and internally, may even use generators directly (I’m not 100% sure of that, but I wouldn’t be surprised).

In fact, with generators and promises alone, you can create async/await functionality on your own.

It’s true! I’ve written the code, myself. I’ve seen others write it. And there’s a very popular library called co (as in coroutines) that will do it for you.
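As a rough sketch of that pattern (the `run` helper below is hypothetical, written for illustration, not co's actual API), a generator that yields promises can be driven like this:

```javascript
// Hypothetical co-style runner: drives a generator that yields promises,
// resuming it with each resolved value. This is the core trick behind
// building async/await-like syntax out of generators + promises.
function run(genFn) {
  const it = genFn();
  return new Promise((resolve, reject) => {
    function step(advance) {
      let result;
      try {
        result = advance(); // resume the generator
      } catch (err) {
        return reject(err); // uncaught throw inside the generator
      }
      if (result.done) return resolve(result.value);
      // Wait for the yielded promise, then feed its value back in.
      Promise.resolve(result.value).then(
        value => step(() => it.next(value)),
        err => step(() => it.throw(err))
      );
    }
    step(() => it.next());
  });
}

// Usage: reads like synchronous code, but each yield waits on a promise.
run(function* () {
  const a = yield Promise.resolve(20);
  const b = yield Promise.resolve(22);
  return a + b;
}).then(sum => console.log(sum)); // logs 42
```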

Compare that to the same async function calls using the syntax you saw yesterday.

With only a few minor syntax changes, co enables the same level of abstraction in calling asynchronous methods that look synchronous. It even uses promises under the hood to make this work, like async functions.

Clearly, the ingenuity of co influenced the final specification of async/await.

Secret #2: You Don’t Need To Learn Generators

Async functions are built on the same underlying technology as generators. They encapsulate generators into what co provided for you, as a formal syntax instead of a 3rd party library.

But do you need to learn generators?

No.

Make no mistake. You absolutely need generators.

Without them, async functions wouldn’t work. But you do not need to learn how to use them, directly.

They’re complex, compared to the way you’ve been working. They aren’t just a new way to write iteration and asynchronous code; they represent a fundamental shift in how code is executed, and the API to manage that is not how a developer building line-of-business applications wants to think.

Sure, there are some use cases where generators can do really cool things. I’ll show you those in the resources below. But your code will not suffer one iota if you don’t learn how to use generators, directly.

Let the library and framework developers deal with generators to optimize the way things work. You can just sit back, focus on the awesomeness that is async functions, and forget about generators.

One Use Case For Generators

Ok, there is one case where I would say you do need generators.

If you want to support semi-modern browsers or Node.js v4 / v6 with async/await functionality…

And if you can’t guarantee the absolute latest versions of these will be used… Node v7.6+, MS Edge 15, Chrome 57, etc…

Then generators are your go-to option with the co library, to create the async function syntax you want.

Other than that, you’re not missing much if you decide to not learn generators.

So I say skip it.

Just wait for async/await to be readily available and spend your time learning promises instead (you absolutely need promises to effectively work with async functions).

Generator Resources

In spite of my advice that you don’t need to learn generators, there are some fun things you can do with them if you can completely wrap your head around them.

It took me a long time to do that.

But here are the resources I used to get my head into a place where generators made sense.

And some of my own blog posts on fun things I’ve done with generators.

The post With ES7 And Beyond, Do You Need To Know ES6 Generators? appeared first on DerickBailey.com.

Categories: Blogs

Agile Testing Manifesto – More Translations

Growing Agile - Wed, 04/19/2017 - 09:00

Our Agile Testing Manifesto has been translated by Victoria Slinyavchuk into both Ukrainian and Russian!

Here are the new images with the translations, feel free to use them and share them.

Ukrainian:

Russian: 

 

Categories: Companies

Domain Command Patterns - Validation

Jimmy Bogard - Tue, 04/18/2017 - 21:36

While I don't normally like to debate domain modeling patterns (your project won't succeed or fail because of what you pick), I do still like to have a catalog of available patterns. And one thing that comes up often is "how should I model commands?":

https://twitter.com/Mystagogue/status/853045473115987969

In general, apps I build follow CQRS, where I split my application architecture into distinct commands and queries. However, no two applications are identical in terms of how they've applied CQRS. There always seem to be some variations here and there.

My applications also tend to have explicit objects for external "requests", which are the types bound to the HTTP request variables. This might be a form POST, or it might be a JSON POST, but in either case, there's a request object.

The real question is - how does that request object finally affect my domain model?

Request to Domain

Before I get into different patterns, I like to make sure I understand the problem I'm trying to solve. In the above picture, from the external request perspective, I need a few questions answered:

  • Was my request accepted or rejected?
  • If rejected, why?
  • If accepted, what happened?

In real life, there are no fire-and-forget requests; you want some sort of acknowledgement. I'll keep this in mind when looking at my options.

Validation Types

First up is to consider validation. I tend to look at validation with at least a couple different levels:

  • Request validation
  • Domain validation

Think of request validation as "have I filled out the form correctly". These are easily translatable to client-side validation rules. If it were 100 years ago, this would be a desk clerk just making sure you've filled in all the boxes appropriately. This sort of validation you can immediately return to the client and does not require any sort of domain-specific knowledge.

A next-level validation is domain validation, or as I've often seen referred, "business rule validation". This is more of a system state validation, "can I affect the change to my system based on the current state of my system". I might be checking the state of a single entity, a group of entities, an entire collection of entities, or the entire system. The key here is I'm not checking the request against itself, but against the system state.

While you can mix request validation and domain validation together, it's not always pretty. Validation frameworks don't mix the two together well, and these days I generally recommend against using validation frameworks for domain validation. I've done it a lot in the past and the results...just aren't great.

As a side note, I avoid as much as possible any kind of validation that changes the state of the system and THEN validates. My validation should take place before I attempt to change state, not after. This means no validation attributes on entities, for example.

Validation concerns

Next, I need to concern myself with how validation errors bubble up. For request validation, that's rather simple. I can immediately return 400 Bad Request and a descriptive body of what exactly is off with the request. Typically, request validation happens in the UI layer of my application - built in with the MVC framework I'm using. Request validation doesn't really affect the design of my domain validation.

Domain Validation

Now that we've split our validation concerns into request validation and domain validation, I need to decide how I want to validate the domain side, and how that information will bubble up. Remember - it's important to know not only that my request has failed, but why it failed.

In the domain side, understanding the design of the why is important. Can I have one reason, or multiple reasons for failure? Does the reason need to include contextual data? Do I need to connect a failure reason to a particular input, or is the contextual data in the reason enough?

Next, how are the failures surfaced? When I pass the request (or command) to the domain, how does it tell me this command is not valid? Does it just return back a failure, or does it use some indirect means, like an exception?

public void DoSomething(SomethingRequest request) {  
    if (stateInvalid) {
        throw new DomainValidationException(reason);
    }
    entity.DoSomething();
}

or

public bool DoSomething(SomethingRequest request) {  
    if (stateInvalid) {
        return false;
    }
    entity.DoSomething();
    return true;
}

In either case, I have some method that is responsible for affecting change. Where this method lives we can look at in the next post, but it's somewhere. I've gotten past the request-level validation and now need domain-level validation - can I affect this change based on the current state of the system? Two ways I can surface this back out - directly via a return value, or indirectly via an exception.

Exceptional domain validation

At first glance, it might seem that using exceptions is a bad choice for surfacing validation. Exceptions should be exceptional, and not part of a normal operation. But exceptions would let me adhere to the CQS principle, where methods either perform an action or return data, but not both.

Personally, I'm not that hung up on CQS for these outer portions of my application, which is more of an OOP concern. Maybe if I were trying to follow OOP to the letter it would be important. But I'm far more concerned with clean code than OOP.

If I expect the exceptional case to be frequent, that is, the user frequently tries to do something that my domain validation disallows, then this wouldn't be a good choice. I shouldn't use exceptions just to get around the CQS guideline.

However, I do try to design my UX so that the user cannot get themselves into an invalid state. Even with validations - my UX should guide the user so that they don't put in invalid data. The HTML5 placeholder attribute or explanatory text helps there.

But what about domain state? This is a bit more complex - but ideally, if a user isn't allowed to perform a state change, for whatever reason, then they are not presented with an option to do so! This can be communicated either with a disabled link/button, or simply removing the button/link altogether. In the case of REST, we just wouldn't return links and forms that were not valid state transitions.

If we're up-front designing our UX to not allow the user to get themselves into a bad state, then exceptions would truly be exceptional, and then I believe it's OK to use them.

Returning success/failure

If we don't want to use exceptions, but directly return the success/failure of our operation, then at this point we need to decide:

  • Can I have one or multiple reasons for failure?
  • Do I need contextual information in my message?
  • Do I need to correlate my message to input fields?

I don't really have a go-to answer for any of these; it really depends on the nature of the application. But if I just need a single reason, then I can have a very simple CommandResult:

public class CommandResult  
{
   private CommandResult() { }

   private CommandResult(string failureReason)
   {
       FailureReason = failureReason;
   }

   public string FailureReason { get; }
   public bool IsSuccess => string.IsNullOrEmpty(FailureReason);

   public static CommandResult Success { get; } = new CommandResult();

   public static CommandResult Fail(string reason)
   {
       return new CommandResult(reason);
   }

   public static implicit operator bool(CommandResult result)
   {
       return result.IsSuccess;
   }
}

In the above example, we just allow a single failure reason. And for simplicity's sake, an implicit operator to bool so that we can do things like:

public IActionResult DoSomething(SomethingRequest request) {  
    CommandResult result = service.DoSomething(request);
    return result ? Ok() : BadRequest(result.FailureReason);
}

We can of course make our CommandResult as complex as we need it to be to represent the result of our command, but I like to start simple.

Between these two options, which should you use? I've gone back and forth between the two and they both have benefits and drawbacks. At some point it becomes what your team is more comfortable with and what best fits their preferences.

With request and command validation, let's next turn to handling the command itself inside our domain.

Categories: Blogs

Introduction to DevOps with Chocolate, LEGO and Scrum Game

Scrum Expert - Tue, 04/18/2017 - 16:15
If one of the first aims of Scrum was to break the silos between business analysis, development and testing, you can consider improving cooperation with the operations side of IT as the next...

Categories: Communities

What United Can Teach Us About Building Systems

Notes from a Tool User - Mark Levison - Tue, 04/18/2017 - 13:35

United Airlines, by Altair78, CC BY-SA 2.0, via Wikimedia Commons

In April 2017, United Airlines made global headlines when they forcibly removed one passenger (David Dao) who had already boarded the flight, to accommodate crew flying to another destination. The flight staff called the airport police and Dao was hurt as he was dragged off the plane, suffering a concussion, loss of teeth and other injuries.

In the same week, another United passenger flying from Hawaii to Los Angeles was forced off a plane despite having paid full fare for a first class ticket. “That’s when they told me they needed the seat for somebody more important who came at the last minute,” Geoff Fearns said. “They said they have a priority list and this other person was higher on the list than me. They said they’d put me in cuffs if they had to.”

Neither of these cases should have ever happened. In both cases, crew on the ground and then customer service handled the situation badly. However, I don’t think that the real issue is the airline staff.

There are at least three major issues at play:

  1. Working at 100% of Capacity
  2. Too many conflicting rules
  3. Culture
Working at 100% of Capacity

David Dao was removed from his flight because four crew showed up after the passengers had already boarded. The crew were required in Kentucky the next day, and their absence would have delayed several hundred passengers on other flights. United offered up to $800 to get people to leave the flight, and then when that didn’t work, they simply selected four passengers at random to be removed.

There are two significant problems here:

  • By attempting to have every flight 100% full, there is no capacity to respond to small things going wrong in the system, e.g. extra people showing up at the last minute and creating a shortage of seats. This shows up time and time again: when utilization of capacity in a system exceeds a certain point, the system becomes fragile or breaks completely. In highways and road systems, it happens at around 65-70% of the designed capacity of the road.

    (Image: Stochastic Queueing, by David Levinson. CC BY-SA 3.0)

  • By having only enough crew in each city to survive that day’s flights, United is again running at 100% capacity with respect to crew. There is additional risk in that a small problem in Chicago would escalate and become a big problem in Kentucky 24 hours later.

The issue in both cases is the attempt to squeeze every last dollar out of a situation. When we design organizations or systems (e.g. highways, hospitals, airlines) we need to build in enough slack to deal elegantly with small problems.
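That fragility is basic queueing behavior. As a rough illustration (a textbook M/M/1 queue, my own sketch rather than anything from the article), average wait time grows nonlinearly as utilization approaches 100%:

```javascript
// Average time in system for a simple M/M/1 queue: W = 1 / (mu - lambda),
// with service rate mu, arrival rate lambda, and utilization rho = lambda / mu.
function avgWait(mu, rho) {
  const lambda = rho * mu;
  return 1 / (mu - lambda);
}

const mu = 1; // serve one unit of demand per time unit
for (const rho of [0.5, 0.7, 0.9, 0.99]) {
  console.log(`utilization ${rho}: avg wait ${avgWait(mu, rho).toFixed(1)}`);
}
// Wait grows slowly up to moderate utilization, then explodes:
// at 99% utilization it is 50x the wait at 50% utilization.
```

The exact threshold differs by system (the article cites 65-70% for roads), but the shape of the curve is the same: the last few percent of utilization buy enormous fragility.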

United could avoid this problem in the future. On all flights to key destinations, set aside 3-4 seats for deadheading crew, and offer these seats to standby passengers only when it can be guaranteed no crew will show up. Another alternative would be to establish backup crews in each city the airline services – the challenge there would be ensuring the backup crews have sufficient cross-skilling to deal with the variety of aircraft that United fly from that airport.

When we’re building teams for knowledge work (e.g. Software Development, Marketing, etc.) that slack time can be used for learning and growing skill.

More on Capacity and Slack:

https://qz.com/312075/why-the-second-busiest-airport-in-the-world-is-often-nightmarishly-fragile/
https://en.wikibooks.org/wiki/Fundamentals_of_Transportation/Queueing
http://brodzinski.com/2012/05/slack-time.html

Too Many Conflicting Rules

The more rules that a system has, the more unanticipated effects, and the less freedom the people doing the work have to make sensible decisions. Between the two incidents, we can see that United has some complicated rules about what it actually means to have bought a fare on the airline:

  • more important First Class passengers trump other First Class passengers
  • disabled people, children travelling alone and high status passengers trump deadheading crew
  • deadheading crew[1] have priority over Economy passengers
  • United Ground Staff are authorized to offer $400 and then $800 to get passengers to disembark, and cannot deviate from that

This list of United’s policies is, of course, incomplete as it’s just gleaned from news reports following the incident. The complete list is undoubtedly much longer.

With all these rules in place, the ground staff didn’t have enough room to explore other options. Perhaps if they had offered $1000 out of the gate, they would have got volunteers more quickly. Strangely, having to offer $400 – an insultingly low sum for many travellers – made it less likely that people would accept any subsequent offer.

As we learned in Simplicity, when we design systems with complicated rules, they will break down. When we build systems with simple heuristics, people have more capacity to make better decisions in their current situation. In most cases, they use the heuristic as expected, and only occasionally do they break it where local circumstance justifies.

[1] Deadheading crew – flight staff who aren’t working on the flight but are instead flying to the destination in a passenger seat, to work a flight from there.

Culture

Under the control of Jeff Smisek, United went from an airline with some customer focus to one totally focused on costs and the bottom line. Companies that focus on cost reduction might do well in the stock market for a few quarters because profits increase, but in the long run the focus on saving pennies hurts: it leaves an organization with no flexibility.

Build an organization that is focused on delighting the customer, and give the team real decision-making capacity. Then even if we’ve made mistakes, those people still have the ability to identify a problem as it happens and make the situation right.

By all accounts Oscar Munoz, United’s CEO, is attempting to turn the culture around. He has only been CEO for 20 months, which is not enough time to make a significant impact on a company employing nearly 90,000 people.

The real question: is Munoz attempting to make significant changes to affect the culture, or will the cost focus of his predecessor live on?

Culture References at United:
http://onemileatatime.boardingarea.com/2017/04/11/united-denied-boarding-fiasco/
https://www.forbes.com/sites/stevedenning/2011/04/01/is-delighting-the-customer-profitable/
Categories: Blogs

You Need ES2017’s Async Functions. Here’s Why …

Derick Bailey - new ThoughtStream - Tue, 04/18/2017 - 13:00

If you’ve ever written code like this, you know the pain that is asynchronous workflow in JavaScript.

Nested function after nested function. Multiple redundant (but probably necessary) checks for errors.

It’s enough to make you want to quit JavaScript… and this is a simple example!
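A minimal sketch of that kind of nested-callback workflow (hypothetical createEmployee, assignDepartment and notifyEmployee steps, simulated with setTimeout; illustrative, not the post's actual sample):

```javascript
// Hypothetical async steps, simulated with setTimeout and Node-style
// (err, result) callbacks. The names are illustrative assumptions.
function createEmployee(data, cb) {
  setTimeout(() => cb(null, { ...data, id: 1 }), 0);
}
function assignDepartment(employee, cb) {
  setTimeout(() => cb(null, { ...employee, dept: "eng" }), 0);
}
function notifyEmployee(employee, cb) {
  setTimeout(() => cb(null, { ...employee, notified: true }), 0);
}

// The workflow: each step nests inside the previous callback,
// and every level repeats the same error check.
function onboard(data, done) {
  createEmployee(data, (err, employee) => {
    if (err) { return done(err); }
    assignDepartment(employee, (err, assigned) => {
      if (err) { return done(err); }
      notifyEmployee(assigned, (err, notified) => {
        if (err) { return done(err); }
        done(null, notified);
      });
    });
  });
}
```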

Now imagine how great it would be if your code could look like this.
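A sketch of that wished-for shape, using hypothetical steps (createEmployee, assignDepartment, notifyEmployee; synchronous stand-ins so the example runs):

```javascript
// Imagine the async steps could be called as if they were synchronous.
// These synchronous stand-ins (illustrative names) make the shape runnable:
function createEmployee(data) { return { ...data, id: 1 }; }
function assignDepartment(e) { return { ...e, dept: "eng" }; }
function notifyEmployee(e) { return { ...e, notified: true }; }

// No nesting, no repeated error checks: the workflow reads top to bottom.
function onboard(data) {
  const employee = createEmployee(data);
  const assigned = assignDepartment(employee);
  return notifyEmployee(assigned);
}

console.log(onboard({ name: "Jo" }).notified); // true
```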

Soooo much easier to read… as if the code were entirely synchronous! I’ll take that any day, over the first example.

Using Async Functions

With async functions, that second code sample is incredibly close to what you can do. It only takes a few additional keywords to mark these function calls as async, and you’re golden.
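A sketch, assuming hypothetical promise-returning steps (createEmployee, assignDepartment, notifyEmployee; illustrative names):

```javascript
// Hypothetical promise-returning steps (illustrative names).
const createEmployee = (data) => Promise.resolve({ ...data, id: 1 });
const assignDepartment = (e) => Promise.resolve({ ...e, dept: "eng" });
const notifyEmployee = (e) => Promise.resolve({ ...e, notified: true });

// "async" on the outer function lets us "await" each step, so the
// asynchronous workflow reads top to bottom like synchronous code.
async function onboard(data) {
  const employee = await createEmployee(data);
  const assigned = await assignDepartment(employee);
  return await notifyEmployee(assigned);
}

onboard({ name: "Jo" }).then((r) => console.log(r.notified)); // logs true
```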

Did you notice the difference, here?

With the addition of “async” to the outer function definition, you can now use the “await” keyword to call your other async functions.

By doing this, the JavaScript runtime will now invoke the async functions in a manner that allows you to wait for a response without using a callback. The code is still asynchronous where it needs to be, and synchronous where it can be.

This code does the same thing, has the same behavior from a functionality perspective. But visually, this code is significantly easier to read and understand.

The question now, is how do you create the async functions that save so much extra code and cruft, allowing you to write such simple workflow?

Writing Async Functions

If you’ve ever used a JavaScript Promise, then you already know how to create an async function.

Look at how the “createEmployee” function might be written, for example.
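A sketch of how it might be written (the saveToDatabase helper and its behavior are illustrative assumptions, not from the original post):

```javascript
// Stand-in for the real work (database call, HTTP request, etc.).
// Both the helper and its behavior are illustrative assumptions.
function saveToDatabase(data, cb) {
  setTimeout(() => cb(null, { ...data, id: 42 }), 0);
}

// An async function that explicitly creates and returns a promise:
// resolve with the employee on success, reject on failure.
async function createEmployee(data) {
  return new Promise((resolve, reject) => {
    saveToDatabase(data, (err, employee) => {
      if (err) { return reject(err); }
      resolve(employee);
    });
  });
}
```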

This code immediately creates and returns a promise. Once the work to create the employee is done, it then checks for some level of success and resolves the promise with the employee object. If there was a problem, it rejects the promise.

The only difference between this function and any other function where you might have returned a promise, is the use of the “async” keyword in the function definition.

But it’s this one keyword that solves the nested async problem that JavaScript has suffered with, forever.

Async With Flexibility

Beyond the simplicity of reading and understanding this code, there is one more giant benefit that needs to be stated.

With the use of promises in the async functions, you have options for how you handle them. You are not required to “await” the result. You can still use the promise that is returned.
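For example, a hypothetical sketch (illustrative function name and shape):

```javascript
// Hypothetical async function; it always returns a promise.
const createEmployee = async (data) => ({ ...data, id: 1 });

// No "await" here: we consume the returned promise directly with .then().
createEmployee({ name: "Jo" })
  .then((employee) => console.log(employee.id)) // logs 1
  .catch((err) => console.error(err));
```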

This code is just as valid as the previous code.

Yes, this code still calls the same async createEmployee function. But we’re able to take advantage of the promises that are returned when we want to.

And if you look back at the 3rd code sample above, you might remember that I was calling async functions but ultimately using a callback to return the result. Yet again, we see more flexibility.

Reevaluating My Stance On Promises

In the past, I’ve made some pretty strong statements about how I never look to promises as my first level of async code. I’m seriously reconsidering my position.

If the use of promises allows me to write such easily readable code, then I’m in.

Of course, now the challenge is getting support for this in the majority of browsers, as I’m not about to drop a ton of terrible pre-compiler hacks and 3rd party libraries into a browser to make this work for the web.

Node.js on the other hand? Well, it’s only a matter of time before v8.0 is stable for release.

For now, though, I’ll play with v7.6+ in a Docker container and get myself prepared for the new gold standard in asynchronous JavaScript.

The post You Need ES2017’s Async Functions. Here’s Why … appeared first on DerickBailey.com.

Categories: Blogs

Visual Report Improvements

TargetProcess - Edge of Chaos Blog - Tue, 04/18/2017 - 09:33
Period scale for date axis

Dates are now scaled on a continuous axis by default. If you need a periodic scale for dates, you can switch the scale type from the field popup.


Legend

Legend filtering has been improved. Now, several categories in the legend can be selected, and changes will be reflected on the chart.


Tooltip

The tooltip mechanics have been improved. A projection to the axis was added for stacked bars and areas, so you can see the total value of the stacked items.


We would really appreciate your feedback on our reports editor. What do you like about it? What could be improved? Let us know what you think at ux@targetprocess.com.

Categories: Companies

Leadership re-envisioned in the 21st Century

Agile Complexification Inverter - Mon, 04/17/2017 - 22:38
Is there a new form of leadership being envisioned in the 21st Century?  Is there someone challenging the traditional form of organizational structure?

Leading Wisely - a podcast with Ricardo Semler.
"Join organizational changemaker Ricardo Semler in conversation with leaders challenging assumptions and changing how we live and work."
S1E01: Killing the Dinosaur Business Model (Part 1) with Basecamp’s Jason Fried & DHH

S1E02: Killing the Dinosaur Business Model (Part 2) with Basecamp’s Jason Fried & DHH
S1E03: Reinventing Organizations with Frederic Laloux

S1E04: Self-organization with Zappos' Tony Hsieh
S1E05: Busting Innovation Myths with David Burkus

S1E06: Merit and Self-Management with Jurgen Appelo

S1E07: Letting Values Inform Organizational Structure with Jos de Blok

S1E08: Corporate Liberation with Isaac Getz

S1E09: The Police & Self-Management with Erwin van Waeleghem

S1E10: Season Finale: The Common Denominator with Rich Sheridan of Joy Inc.


I ran across this series of 10 talks because I'm a fan of Joy, Inc. author and Menlo Innovations leader Richard Sheridan.  I saw a tweet about his talk and found a bucket of goodness.
The Common Denominator with Rich Sheridan of Joy Inc.

Richard Sheridan on the podcast Leading Wisely

See Also:
A Review of Leadership Models
Examples of 21st C. Companies
Safety - the prerequisite for Leadership
A Leadership Paradox

Book List:
Maverick!: The Success Story Behind the World's Most Unusual Workplace by Ricardo Semler
Joy, Inc.: How We Built a Workplace People Love by Richard Sheridan
ReWork: Change the Way You Work Forever by David Heinemeier Hansson and Jason Fried
Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage in Human Consciousness by Frederic Laloux
Categories: Blogs

Certified Agile Leadership course in San Diego April 25-27

Agile Game Development - Mon, 04/17/2017 - 15:01
The key to successful agile adoption and growth lies not only with developers, but with studio leadership as well. We all know that cross-discipline teams iterating on features create a benefit, but to achieve the far greater (and rarer) reward of developer engagement and motivated productivity, you need deeper cultural change.  This requires a shift in the mindset of leadership.
The Certified Agile Leadership (CAL) course provides this shift.  It distills the wisdom of decades of experience applying agile successfully and leads to true leadership transformation.  In taking the course, I personally found that not only were my leadership approaches transformed, but it altered how I engaged with family, friends and my own life.
I will be joining the CAL course taught by my friend and occasional co-trainer Peter Green in San Diego on April 25th through the 27th.  Please join us!
http://agileforall.com/course/cal1/
Categories: Blogs
