
Feed aggregator

The SAFe glossary goes international with nine new translations

Agile Product Owner - Tue, 05/02/2017 - 00:31

The SAFe community is very diverse, with nearly 200,000 new visitors to the SAFe website coming from non-English speaking countries in the last six months alone. And more SAFe classes are popping up all over the globe. Today we counted nearly 400 non-U.S. classes on the public training calendar.

Although most local concerns are supported by our Global Partner Network, the Framework itself has been—with a few exceptions—available exclusively in English. This is because adopting SAFe effectively relies on a common language that supports the Lean-Agile mindset and complements its common behaviors and practices. Without it, you can end up with another version of the Tower of Babel, which can impede the effectiveness of your implementation. But we also understand that a common ‘language of SAFe’ can be challenging when an enterprise is distributed geographically, or headquartered in countries where English is not the primary language.

Bridging the Language Gap

We’ve taken a major step toward bridging the language gap by introducing nine translations of the SAFe glossary. Each version is downloadable and provides definitions of the 90+ foundational terms used in SAFe, as well as 50 of the most common acronyms and abbreviations. Available translations include:

  • Arabic
  • Brazilian Portuguese
  • Chinese
  • Dutch
  • Finnish
  • German
  • Italian
  • Japanese
  • Spanish

Still in the backlog are Russian, Swedish, Korean, Hindi, and Bengali.

A big thank you to the growing list of volunteers who are contributing to this effort. Your talent and time are helping to make SAFe more accessible for everyone.

Rimanga sicuro! (Stay SAFe!)
—Dean and the Framework team

Categories: Blogs

Why or Why Not Agile? – Meet Up

Agile Ottawa - Mon, 05/01/2017 - 23:47
  “Sorry, the new Champlain Bridge cannot be built using Agile…” – Mathieu Boivert’s presentation attracted many participants at the Agile Ottawa MeetUp! This MeetUp brought together Agile practitioners and students who have just begun a journey in Agile. The topic … Continue reading →
Categories: Communities

Domain Command Patterns - Handlers

Jimmy Bogard - Mon, 05/01/2017 - 23:45

In the last post, we looked at validation patterns in domain command handlers in response to a question, "Command objects should [always/never] have return values". This question makes an assumption - that we have command objects!

In this post, I want to look at a few of our options for handling domain commands.

Request to Domain

When I look at command handling, I'm really talking about the actual "meat" of the request handling: the part that mutates state. In very small or very simple applications, I can put this "mutation" directly in the request handling (i.e., the controller action or event handler for stateful UIs).

But for most of the systems I build, it's too much to shove all of this into the edge of my application. This raises the question: where should this logic go? We can look at a number of design patterns (including the Command Object pattern). Ultimately, I have a block of code that mutates state, and I need to decide where to put it.

Static Helper/Manager/Service Functions

A very simple option would be to create some static class to host mutation functions:

public static class SomethingManager {  
    public static void DoSomething(SomethingRequest request,
        MyDbContext db) {
        // Domain logic here
    }
}

If our method needed to work with any other objects to do its work, these would all be passed in as method arguments. We wouldn't use static service location, as we do have some standards. But with this approach, we can use all sorts of functional tricks at our disposal to build richness around this simple pattern.

How you break up these functions into individual separate classes is up to you. You might start off with a single static class per project, static class per domain object, per functional area, or per request. The general idea is that although C# doesn't support the full functional richness of F#, static functions provide a reasonable alternative.

The advantage of this approach is that it's completely obvious exactly what the logic is. The return type above is "void", but as we saw with the validation options, it could be some sort of return object as well.

DDD Service Classes

Slightly different from the static class is the DDD Service Pattern. The big difference is that the service class is instance-oriented and often uses dependency injection. The other big difference is that in the wild I typically see service classes that are more entity- or aggregate-oriented:

public class SomethingService : ISomethingService {  
    private readonly MyDbContext _db;

    public SomethingService(MyDbContext db) {
        _db = db;
    }

    public void DoSomething(SomethingRequest request) {
        // Domain logic here
    }
}

Services in the DDD world should be designed around a coordination activity. After all, the original definition was that services coordinate between aggregates, or between aggregates and external services. But that's not what I typically see. Instead, I see Java Spring-style DDD services where we have an entity Foo, and then:

  • FooController
  • FooService
  • FooRepository

I would highly discourage these kinds of services, as we've introduced arbitrary layering without much value. If we're doing DDD right, services would be a bit rarer, and therefore not needed for every single command in our system.

Request-Specific Handlers

With both the service and manager options, we typically see multiple requests handled by multiple methods inside the same class. Although there's nothing stopping you from creating a service per request, the request-specific handler achieves this same end goal: a single class and method handling each individual request.

I copied this pattern enough times that I finally extracted the code into a library, MediatR. We create a class to encapsulate the handling of each individual request:

public class SomethingRequestHandler : IRequestHandler<SomethingRequest> {  
    public void Handle(SomethingRequest request) {
    }
}

There are variants for handling a request: sync/async, return value/void and combinations thereof.

This tends to be my default choice for handling domain commands, as it encourages me to isolate the logic for each request from any other request.

But sometimes the logic in my handler gets complicated, and I want to push that behavior down.

Domain Aggregate Functions

Finally, we can push our behavior down directly into our aggregates:

public class SomethingAggregate {  
    public void DoSomething(SomethingRequest request) {
    }
}

Or, if we don't want to couple our aggregates to the external request objects, we can destructure our request object into individual values:

public class SomethingAggregate {  
    public void DoSomething(string value1, int value2, decimal value3) {
    }
}

In my systems, I tend to start with simple, procedural code inside a handler. When that code exhibits code smells, I push the behavior down into my domain objects. Of course, we could also do that by default and reserve procedural code only for the CRUD areas of the application.

This certainly isn't an exhaustive list of domain command patterns, but it's 99% of what I typically see. I can also mix multiple choices here: a handler for the load/save logic, and a domain function for the actual "business logic".

I'm ignoring the actual Command Object pattern because, while I find it can fit well with UI-level commands, it doesn't fit well with domain-level commands.

We can mix our validation choices too, and have field validation done by a framework, domain validation done by our aggregates, and use domain aggregate functions that return "result" objects.

So which way is "best"? I can't really say; a lot of this is a judgement call for your team. But with several options on the table, we can at least make an informed decision.

Categories: Blogs

PostgreSQL: ERROR: argument of WHERE must not return a set

Mark Needham - Mon, 05/01/2017 - 22:42

In my last post I showed how to load and query data from the Strava API in PostgreSQL. After executing some simple queries, my next task was to query a more complex part of the JSON structure.


Strava allows users to create segments, which are portions of road or trail where athletes can compete for time.

I wanted to write a query to find all the times that I’d run a particular segment, e.g. the Akerman Road segment, which covers a road running north to south through Kennington/Stockwell in South London.

This segment has the id ‘6818475’ so we’ll need to look inside segment_efforts and then compare the value segment.id against this id.

I initially wrote this query to try and find the times I’d run this segment:

SELECT id, data->'start_date' AS startDate, data->'average_speed' AS averageSpeed
FROM runs
WHERE jsonb_array_elements(data->'segment_efforts')->'segment'->>'id' = '6818475'

ERROR:  argument of WHERE must not return a set
LINE 3: WHERE jsonb_array_elements(data->'segment_efforts')->'segmen...

This doesn’t work because jsonb_array_elements returns a set of rows (one per array element), so the expression in the WHERE clause evaluates to a set of boolean values rather than a single one, as Craig Ringer points out on Stack Overflow.

Instead we can use a LATERAL subquery to achieve our goal:

SELECT id, data->'start_date' AS startDate, data->'average_speed' AS averageSpeed
FROM runs r,
LATERAL jsonb_array_elements(r.data->'segment_efforts') segment
WHERE segment ->'segment'->>'id' = '6818475'

    id     |       startdate        | averagespeed 
-----------+------------------------+--------------
 455461182 | "2015-12-24T11:20:26Z" | 2.841
 440088621 | "2015-11-27T06:10:42Z" | 2.975
 407930503 | "2015-10-07T05:18:34Z" | 2.985
 317170464 | "2015-06-03T04:44:59Z" | 2.842
 312629236 | "2015-05-27T04:46:33Z" | 2.857
 277786711 | "2015-04-02T05:25:59Z" | 2.408
 226351235 | "2014-12-05T07:59:15Z" | 2.803
 225073326 | "2014-12-01T06:15:21Z" | 2.929
 224287690 | "2014-11-29T09:02:46Z" | 3.087
 223964715 | "2014-11-28T06:18:29Z" | 2.844
(10 rows)

Perfect!
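The same query can also be run from Python rather than psql. Here’s a minimal psycopg2 sketch, reusing the runs table and the connection string from the import script in the previous post (adjust the database name and user to match your setup):

import psycopg2

query = """
SELECT id, data->'start_date' AS startDate, data->'average_speed' AS averageSpeed
FROM runs r,
LATERAL jsonb_array_elements(r.data->'segment_efforts') segment
WHERE segment->'segment'->>'id' = %s
"""

with psycopg2.connect("dbname=strava user=markneedham") as conn:
    with conn.cursor() as cur:
        cur.execute(query, ("6818475",))  # the Akerman Road segment
        for row in cur.fetchall():
            print(row)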

The post PostgreSQL: ERROR: argument of WHERE must not return a set appeared first on Mark Needham.

Categories: Blogs

With Agile, No Warnings Needed

Johanna Rothman - Mon, 05/01/2017 - 21:20

Have you ever worked on a project where the management and/or sponsors felt it necessary to provide you warnings: “This release better do this or have that. Otherwise, you’re toast.”

I have, once. That’s when I started to use release criteria and check with the sponsors/management to make sure they agreed.

I happen to like release criteria. Even better is when you use agile on your projects. You might get feedback before the release. Here’s what a client did on a recent project:

  • They had release criteria and the sponsors agreed to the criteria.
  • They released internally every two weeks and asked people to come to the demos.
  • They asked the product managers and product owners to review the finished work and to make sure the managers/sponsors liked where the roadmap was going.
  • The team worked in ways that promoted technical excellence, so they could (relatively) easily change the code base when people changed their minds.

The project didn’t fulfill all the wishes of the managers and sponsors. Those folks wanted to stuff the proverbial 15 pounds of project into a 5-pound bag. On the other hand, the team is on the verge of delivering a terrific product. (They have one more week to finish.) They are all proud of their effort and the way they’ve worked.

This morning, the project manager emailed me. “I’m so angry I could spit,” she said. “One of our sponsors, who couldn’t be bothered to see any demos, just told me that if he doesn’t like it, he’s going to send us back to the drawing board. Do you have time for a quick call so I don’t get myself fired?”

This is a culture clash between the agile project’s transparency and request for frequent feedback vs. the controlling desires of management.

We spoke. She realized it was a difference in expectations and culture that will take a while to go away. There are probably reasons for it, and that doesn’t make it any easier for the team.

These kinds of situations are why I recommend new agile teams have a servant leader. I don’t care if you call that person an agile project manager or some other term, but the person’s role is to run interference between the two cultures.

The worst part? With the project’s transparency and interim delivery of value, no one needed to warn anyone about anything. The data this guy was looking for was in the demos and the meeting minutes, and it was easily accessible.

I don’t know why people think they need to provide dire warnings. It’s not clear what effect they want to create. Dire warnings make even less sense when the team uses agile and provides interim value and demos.

If you’re using agile approaches, and you see this happening, decide what you want from this relationship. If you think you’ll have to work with this person again and again, it might make sense to have a conversation and see what they really want. What are their concerns? What are their pressures? Can you help them with information at other times instead of a week before the end of the project?

Don’t be surprised if you see this kind of a culture clash in your organization as teams start their transformation. Managers have a lot to do with culture (you might say they are the holders of the culture) and we’re asking them to use different measurements and act differently. A huge change. (Yes, after the agile project book, I’m writing an agile management book. I know, you’re not surprised.)

Categories: Blogs

Loading and analysing Strava runs using PostgreSQL JSON data type

Mark Needham - Mon, 05/01/2017 - 21:11

In my last post I showed how to map Strava runs using data that I’d extracted from their /activities API, but the API returns a lot of other data that I discarded because I wasn’t sure what I should keep.

The API returns a nested JSON structure so the easiest solution would be to save each run as an individual file but I’ve always wanted to try out PostgreSQL’s JSON data type and this seemed like a good opportunity.

Creating a JSON ready PostgreSQL table

First up we need to create a database in which we’ll store our Strava data. Let’s name it appropriately:

create database strava;
\connect strava;

Now we can create a table with one field of the JSON (jsonb) data type:

CREATE TABLE runs (
  id INTEGER NOT NULL,
  data jsonb
);

ALTER TABLE runs ADD PRIMARY KEY(id);

Easy enough. Now we’re ready to populate the table.

Importing data from the Strava API

We can partially reuse the script from the last post, except rather than saving to a CSV file we’ll save to PostgreSQL using the psycopg2 library.


The script relies on a TOKEN environment variable. If you want to try this on your own Strava account you’ll need to create an application, which will give you an access token.

extract-runs.py

import requests
import os
import json
import psycopg2

token = os.environ["TOKEN"]
headers = {'Authorization': "Bearer {0}".format(token)}

with psycopg2.connect("dbname=strava user=markneedham") as conn:
    with conn.cursor() as cur:
        page = 1
        while True:
            r = requests.get("https://www.strava.com/api/v3/athlete/activities?page={0}".format(page), headers = headers)
            response = r.json()

            if len(response) == 0:
                break
            else:
                for activity in response:
                    r = requests.get("https://www.strava.com/api/v3/activities/{0}?include_all_efforts=true".format(activity["id"]), headers = headers)
                    json_response = r.json()
                    cur.execute("INSERT INTO runs (id, data) VALUES(%s, %s)", (activity["id"], json.dumps(json_response)))
                    conn.commit()
                page += 1
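Before moving on to the more interesting queries, a quick sanity check (a sketch reusing the same connection string as above) confirms that the activities were actually inserted:

import psycopg2

with psycopg2.connect("dbname=strava user=markneedham") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM runs")
        print(cur.fetchone()[0])  # number of imported activities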
Querying Strava

We can now write some queries against our newly imported data.

My quickest runs

SELECT id, data->>'start_date' as start_date, 
       (data->>'average_speed')::float as speed 
FROM runs 
ORDER BY speed DESC 
LIMIT 5

    id     |      start_date      | speed 
-----------+----------------------+-------
 649253963 | 2016-07-22T05:18:37Z | 3.736
 914796614 | 2017-03-26T08:37:56Z | 3.614
 653703601 | 2016-07-26T05:25:07Z | 3.606
 548540883 | 2016-04-17T18:18:05Z | 3.604
 665006485 | 2016-08-05T04:11:21Z | 3.604
(5 rows)
My longest runs

SELECT id, data->>'start_date' as start_date, 
       (data->>'distance')::float as distance
FROM runs
ORDER BY distance DESC
LIMIT 5

    id     |      start_date      | distance 
-----------+----------------------+----------
 840246999 | 2017-01-22T10:20:33Z |  10764.1
 461124609 | 2016-01-02T08:42:47Z |  10457.9
 467634177 | 2016-01-10T18:48:47Z |  10434.5
 471467618 | 2016-01-16T12:33:28Z |  10359.3
 540811705 | 2016-04-10T07:26:55Z |   9651.6
(5 rows)
Runs this year

SELECT COUNT(*)
FROM runs
WHERE data->>'start_date' >= '2017-01-01 00:00:00'

 count 
-------
    62
(1 row)
Runs per year
SELECT EXTRACT(year from to_date(data->>'start_date', 'YYYY-mm-dd')) AS year, 
       count(*) 
FROM runs 
GROUP BY year 
ORDER BY year

 year | count 
------+-------
 2014 |    18
 2015 |   139
 2016 |   166
 2017 |    62
(4 rows)
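All of these queries can equally be run from Python rather than psql. Here’s a minimal psycopg2 sketch for the runs-per-year query, again reusing the connection string from the import script:

import psycopg2

query = """
SELECT EXTRACT(year from to_date(data->>'start_date', 'YYYY-mm-dd')) AS year,
       count(*)
FROM runs
GROUP BY year
ORDER BY year
"""

with psycopg2.connect("dbname=strava user=markneedham") as conn:
    with conn.cursor() as cur:
        cur.execute(query)
        for year, count in cur.fetchall():
            print(int(year), count)  # e.g. 2014 18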

That’s all for now. Next I’m going to learn how to query segments, which are stored inside a nested array inside the JSON document. Stay tuned for that in a future post.

The post Loading and analysing Strava runs using PostgreSQL JSON data type appeared first on Mark Needham.

Categories: Blogs

Only trigger a release when the build changed

Xebia Blog - Mon, 05/01/2017 - 18:15

Back in the early days, when we used XAML builds in TFS (wow that seems like ages ago!), we had the possibility to NOT execute a build when nothing changed in the source code repository. This checkbox “Build even if nothing has changed” does not exist anymore in VSTS. For me this is not a real […]

The post Only trigger a release when the build changed appeared first on Xebia Blog.

Categories: Companies

Leading to Real Agility – Video Series

Learn more about transforming people, process and culture with the Real Agility Program

I have recently published all 16 videos for the Leading to Real Agility series on YouTube.  The videos cover leadership topics including:

  • Organizational Change
  • Dealing with Laggards
  • Leadership Responsibilities
  • and many others…

The videos are short (typically 2 or 3 minutes each) and focus on introducing the basics of each topic.  Further depth can be gained through our Leading to Real Agility one-on-one coaching service.


Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Leading to Real Agility – Video Series appeared first on Agile Advice.

Categories: Blogs

SonarJS 3.0: Being Lean and Mean in JavaScript

Sonar - Mon, 05/01/2017 - 12:40

All through 2016 SonarJS has become richer and more powerful thanks to new rules and its new data flow engine, to the point of being able to find pretty interesting stuff like this:

[Screenshot: an example issue detected and annotated by SonarJS]

That’s cool, isn’t it? Yet, there’s such a thing as being blinded by coolness and, as Pirelli was fond of saying, power is nothing without control. What good is pointing out a very nasty and hidden bug if you have long since stopped listening to what SonarJS has to tell you?

There are two main reasons a developer stops listening to the analyzer:

  1. The analyzer is noisy, stacking issue on top of issue because you insist on having more than one statement per line.
  2. The analyzer says something that is really dumb, so, the developer presumes, the analyzer is dumb. Life is too short to listen to dumb tools.

Unless we tackled both these points we risked having our oh-so-powerful analyzer be perceived as a “the end is nigh” lunatic.


Kill the noise

We don’t want to spam the developer with potentially true but ultimately irrelevant messages. But what is relevant?

We do want to provide value out of the box, so all SonarSource analysers provide a default rule-set, called the “Sonar way” profile, that does represent what it means for us to write good <insert language here> code. This means that we don’t have the luxury of saying “the users will setup the profile with the rules they prefer”, we have to take a stance on which rules are activated by default.

Guess what? Nobody knew that defining what is good JavaScript could be so complicated!

We thus embarked on a deep review of our default “Sonar way” profile to see if we could indeed find a meaningful, useful, common ground. We knew we needed an external point of view and we were very lucky to find a very knowledgeable and critical one: Alexander Kamushkin.

Alexander worked with us for a month and did an amazing job, if a somewhat painful one for us, pointing out which rules provided the most value regardless of team culture and idioms, which could become idiom-neutral with some work, and which were by definition optional conventions.

After the first few rounds of discussion he put everything in what we have come to refer to as “Alexander’s Report”, of which this is a very small excerpt:

[Screenshot: an excerpt from Alexander’s Report]

Of course this was not the end of it; we kept on refining these findings, prioritizing and adding some more all through the development of SonarJS 3.0, and we have more in the pipe for later on.

We improved dozens of rules, split others to separate the unarguable bug-generating cases from maintenance-related ones, and added heuristics to kill false positives; almost no rule previously part of “Sonar way” was left untouched. We also further evolved the data flow engine itself to make sure it was not making assumptions that might lead rules to be overconfident in reporting an issue.

We now feel that the default profile of SonarJS 3.0 is a carefully trimmed set of high-value/low-noise rules useful in almost any JS development context.

We also created a new profile: “Sonar way Recommended”. This profile includes all the rules found in “Sonar way”, plus rules which we have evolved to be high-value/low-noise for JS developments that mandate high code readability and long-term evolution.

Things we learned

The issue is in the eye of the beholder

Take for instance one excellent rule: “Dead stores should be removed”. This rule says that if you set a value to a variable and then you set a new value without ever using the previous one you have probably made a mistake.

let x = 3;
x = 10;
alert("Value : " + x);

We can hardly be more confident that something is wrong here; you probably wanted to do something with that “3”. But what if you are in the habit of initialising all your number variables to 0, or all your strings to the empty string?

function bye(p) {
  let answer = "";
  switch(p) {
    case "HELLO" : answer = "GOODBYE";
      break;
    case "HI" : answer = "BYE";
      break;
    default : answer = "HASTA LA VISTA BABY";
  }
  return answer;
}

Do we want to raise a dead-store issue on that first initialisation? If we did, we could kind of excuse ourselves by saying that it is indeed a dead store, but since the developer did it on purpose, the analyser is at best perceived as pedantic.

After all, when raising issues we are not addressing machines but human beings. We want them to read and care about these issues, so we cannot hide behind technical correctness; we must be correct and also try to guess when something is done on purpose and when it is a genuine mistake.

It’s not a bug until it is

Before SonarJS 3.0 every issue we detected which could potentially lead to a bug was classified as a bug.

This was done out of a coherent approach to issue criticality, and it does draw a clear distinction between potential mistakes and more readability- or maintenance-related code smells.

Still, there’s something very alarming in getting a report saying that your project contains 1,542 bugs.

A classic example of this is NaN. If you don’t expect NaN to be a possible value you can introduce some very nasty bugs because, let’s not forget, NaN == NaN is false!

Still, nothing might happen, because you are careful in other ways and as such playing with NaNs is at worst suspicious, not a bug.

Also, we found out that, as the analyser improved, many potentially dangerous things could be resolved into being either certainly bugs or certainly not bugs. There’s no need to scream about an undefined variable if we can track its value and only raise an issue when you try to access a property on that undefined variable.

If you can’t analyse it, don’t make assumptions

The data flow analysis engine is pretty good, but it still cannot analyse everything. We learnt that if we cannot follow the whole life of a variable’s value, we are better off assuming no knowledge rather than partial knowledge.

let x;
if(p) x = 2;
if(isPositive(x)) {
  return 10/x;
}

Should we warn you of a possible NaN? It depends on whether we were able to resolve the declaration of isPositive and go through its implementation. If we don’t know what happens within isPositive, then even if we know that it is possible for x to be undefined, we can’t be sure that x can be undefined when 10/x is executed. To avoid raising an issue based on our partial understanding, it’s safer not to presume we know anything about x.

And more

There would be much more to say, but for the time being suffice it to say that SonarJS 3.0 inaugurates a focus on minimalistic usefulness or, as Marko Ramius would say: we have engaged the silent drive.

Categories: Open Source

Scrum Day Germany, Filderstadt, Germany, May 30-31 2017

Scrum Expert - Mon, 05/01/2017 - 10:00
Scrum Day Germany is a two-day conference about Scrum and Agile project management that proposes an international line-up of Agile experts. It provides multi-track sessions and full-day workshops....

Categories: Communities

Agile Development Conference, Las Vegas, USA, June 4–9 2017

Scrum Expert - Mon, 05/01/2017 - 09:15
The Agile Development Conference West is an event that takes place in Las Vegas, focused on agile methods, technologies, tools, and leadership principles from thought leaders. You will find at this...

Categories: Communities

Unconf, Kracow, Poland, June 2-3 2017

Scrum Expert - Mon, 05/01/2017 - 09:00
Unconf, the unexpected conference, is a two-day event dedicated to Agile and Scrum. It is run in a non-typical way that offers both open space discussions and workshops. Talks will be in...

Categories: Communities

Some Laws of Software Development

NetObjectives - Sun, 04/30/2017 - 13:28
This blog continues my series on Going Beyond Practices to Achieve Objectives. There is a significant difference between a law and a principle. In Values, Practices and Principles Are Not Enough we defined them as:

  • Principle – a fundamental truth or proposition that serves as the foundation for a system of belief or behavior or for a chain of reasoning
  • Law – (natural laws) a statement of fact,...

Categories: Companies

Leaflet: Mapping Strava runs/polylines on Open Street Map

Mark Needham - Sat, 04/29/2017 - 17:36

I’m a big Strava user and spent a bit of time last weekend playing around with their API to work out how to map all my runs.


Strava API and polylines

This is a two step process:

  1. Call the /athlete/activities/ endpoint to get a list of all my activities
  2. For each of those activities call /activities/[activityId] endpoint to get more detailed information for each activity

That second API returns a ‘polyline’ property which the documentation describes as follows:

Activity and segment API requests may include summary polylines of their respective routes. The values are string encodings of the latitude and longitude points using the Google encoded polyline algorithm format.

If we navigate to that page we get the following explanation:

Polyline encoding is a lossy compression algorithm that allows you to store a series of coordinates as a single string.

I tried out a couple of my polylines using the interactive polyline encoder utility which worked well once I realised that I needed to escape backslashes (“\”) in the polyline before pasting it into the tool.
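You can also decode polylines programmatically rather than via the web tool. Here’s a minimal Python sketch, assuming the third-party polyline package from PyPI is installed; the encoded string below is the example from Google’s documentation, not one of my runs:

import polyline

encoded = "_p~iF~ps|U_ulLnnqC_mqNvxq`@"
points = polyline.decode(encoded)  # list of (latitude, longitude) tuples
print(points)                      # [(38.5, -120.2), (40.7, -120.95), (43.252, -126.453)]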

Now that I’d figured out how to map one run it was time to automate the process.

Leaflet and OpenStreetMap

I’ve previously had a good experience using Leaflet so I was keen to use that and luckily came across a Stack Overflow answer showing how to do what I wanted.

I created an HTML file and manually pasted in a couple of my runs (not forgetting to escape those backslashes!) to check that they worked:

blog.html

<html>
  <head>
    <title>Mapping my runs</title>
  </head>
  <body>
    <!-- Leaflet's CSS and JS, plus a plugin that provides L.Polyline.fromEncoded
         (e.g. Leaflet.encoded), need to be included here; the map div also needs
         a width and height set via CSS -->
    <div id="map"></div>

    <script>
    var map = L.map('map').setView([55.609818, 13.003286], 13);
    L.tileLayer(
        'http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
            maxZoom: 18,
        }).addTo(map);

    var encodedRoutes = [
      "{zkrIm`inANPD?BDXGPKLATHNRBRFtAR~AFjAHl@D|ALtATj@HHJBL?`@EZ?NQ\\Y^MZURGJKR]RMXYh@QdAWf@[~@aAFGb@?j@YJKBU@m@FKZ[NSPKTCRJD?`@Wf@Wb@g@HCp@Qh@]z@SRMRE^EHJZnDHbBGPHb@NfBTxBN|DVbCBdA^lBFl@Lz@HbBDl@Lr@Bb@ApCAp@Ez@g@bEMl@g@`B_AvAq@l@    QF]Rs@Nq@CmAVKCK?_@Nw@h@UJIHOZa@xA]~@UfASn@U`@_@~@[d@Sn@s@rAs@dAGN?NVhAB\\Ox@@b@S|A?Tl@jBZpAt@vBJhATfGJn@b@fARp@H^Hx@ARGNSTIFWHe@AGBOTAP@^\\zBMpACjEWlEIrCKl@i@nAk@}@}@yBOWSg@kAgBUk@Mu@[mC?QLIEUAuAS_E?uCKyCA{BH{DDgF`AaEr@uAb@oA~@{AE}AKw@    g@qAU[_@w@[gAYm@]qAEa@FOXg@JGJ@j@o@bAy@NW?Qe@oCCc@SaBEOIIEQGaAe@kC_@{De@cE?KD[H[P]NcAJ_@DGd@Gh@UHI@Ua@}Bg@yBa@uDSo@i@UIICQUkCi@sCKe@]aAa@oBG{@G[CMOIKMQe@IIM@KB]Tg@Nw@^QL]NMPMn@@\\Lb@P~@XT",
      "u}krIq_inA_@y@My@Yu@OqAUsA]mAQc@CS@o@FSHSp@e@n@Wl@]ZCFEBK?OC_@Qw@?m@CSK[]]EMBeAA_@m@qEAg@UoCAaAMs@IkBMoACq@SwAGOYa@IYIyA_@kEMkC]{DEaAScC@yEHkGA_ALsCBiA@mCD{CCuAZcANOH@HDZl@Z`@RFh@\\TDT@ZVJBPMVGLM\\Mz@c@NCPMXERO|@a@^Ut@s@p@KJAJ    Bd@EHEXi@f@a@\\g@b@[HUD_B@uADg@DQLCLD~@l@`@J^TF?JANQ\\UbAyABEZIFG`@o@RAJEl@_@ZENDDIA[Ki@BURQZaARODKVs@LSdAiAz@G`BU^A^GT@PRp@zARXRn@`BlDHt@ZlAFh@^`BX|@HHHEf@i@FAHHp@bBd@v@DRAVMl@i@v@SROXm@tBILOTOLs@NON_@t@KX]h@Un@k@\\c@h@Ud@]ZGNKp@Sj@KJo@    b@W`@UPOX]XWd@UF]b@WPOAIBSf@QVi@j@_@V[b@Uj@YtAEFCCELARBn@`@lBjAzD^vB^hB?LENURkAv@[Ze@Xg@Py@p@QHONMA[HGAWE_@Em@Hg@AMCG@QHq@Cm@M[Jy@?UJIA{@Ae@KI@GFKNIX[QGAcAT[JK?OVMFK@IAIUKAYJI?QKUCGFIZCXDtAHl@@p@LjBCZS^ERAn@Fj@Br@Hn@HzAHh@RfD?j@TnCTlA    NjANb@\\z@TtARr@P`AFnAGfBG`@CFE?"
  ]

    for (let encoded of encodedRoutes) {
      var coordinates = L.Polyline.fromEncoded(encoded).getLatLngs();

      L.polyline(
          coordinates,
          {
              color: 'blue',
              weight: 2,
              opacity: .7,
              lineJoin: 'round'
          }
      ).addTo(map);
    }
    </script>
  </body>
</html>

We can spin up a Python web server over that HTML file to see how it renders:

$ python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...

And below we can see both runs plotted on the map.

Automating Strava API to Open Street Map

The final step is to automate the whole thing so that I can see all of my runs.

I wrote the following script to call the Strava API and save the polyline for every run to a CSV file:

import requests
import os
import sys
import csv

token = os.environ["TOKEN"]
headers = {'Authorization': "Bearer {0}".format(token)}

with open("runs.csv", "w") as runs_file:
    writer = csv.writer(runs_file, delimiter=",")
    writer.writerow(["id", "polyline"])

    page = 1
    while True:
        r = requests.get("https://www.strava.com/api/v3/athlete/activities?page={0}".format(page), headers = headers)
        response = r.json()

        if len(response) == 0:
            break
        else:
            for activity in response:
                r = requests.get("https://www.strava.com/api/v3/activities/{0}?include_all_efforts=true".format(activity["id"]), headers = headers)
                polyline = r.json()["map"]["polyline"]
                writer.writerow([activity["id"], polyline])
            page += 1

I then wrote a simple script using Flask to parse the CSV file and send a JSON representation of my runs to a slightly modified version of the HTML page that I described above:

from flask import Flask
from flask import render_template
import csv
import json

app = Flask(__name__)

@app.route('/')
def my_runs():
    runs = []
    with open("runs.csv", "r") as runs_file:
        reader = csv.DictReader(runs_file)

        for row in reader:
            runs.append(row["polyline"])

    return render_template("leaflet.html", runs = json.dumps(runs))

if __name__ == "__main__":
    app.run(port = 5001)

I changed the following line in the HTML file:

var encodedRoutes = {{ runs|safe }};

Now we can launch our Flask web server:

$ python app.py 
 * Running on http://127.0.0.1:5001/ (Press CTRL+C to quit)

And if we navigate to http://127.0.0.1:5001/ we can see all of my runs that went near Westminster.


The full code for all the files I’ve described in this post is available on GitHub. If you give it a try you’ll need to provide your Strava token in the ‘TOKEN’ environment variable before running extract_runs.py.

I hope this was helpful. If you have any questions, ask me in the comments.

The post Leaflet: Mapping Strava runs/polylines on Open Street Map appeared first on Mark Needham.

Categories: Blogs

Leading with Respect

Leading with respect is the key to the success of any transformation. Learn how to create a culture of transparency, trust, and teamwork in your org.

The post Leading with Respect appeared first on Blog | LeanKit.

Categories: Companies

Agile Coach Camp Canada, Cornwall, Canada, June 9-11 2017

Scrum Expert - Fri, 04/28/2017 - 10:00
Agile Coach Camp Canada is a three-day conference that creates opportunities for the Agile coaches community to share successes, learning, questions and unresolved dilemmas. All this happens in an...

Categories: Communities

Agile Open Canada, Vancouver, Canada, May 11-12 2017

Scrum Expert - Fri, 04/28/2017 - 09:00
Agile Open Canada is a two-day conference that creates opportunities for the Agile practitioners and enthusiasts of Canada to share successes, learning, questions and unresolved dilemmas. All this...

Categories: Communities

Synergic Reading Lessons

Agile Complexification Inverter - Fri, 04/28/2017 - 06:17



Wondering what other books I should read concurrently with the philosophy of this book, Other Minds, on the mind of our alien ancestors. In chapter one Peter is already mashing up Ismael and Darwin, so I feel it appropriate to do a bit of mix-in myself. I'm thinking Seven Brief Lessons on Physics will add spice. Too bad I recycled How to Create a Mind at Half Price Books.




I've also got to read Coaching Agile Teams by Lyssa Adkins for work's book club. And I may mix-in a bit of LEGO Serious Play, because I cannot get serious about coaching - seems like a play activity to me.




Maybe I will devise a quadrant model of these books. A Venn diagram of their overlapping topics.



Squid Communicate With a Secret, Skin-Powered Alphabet

Categories: Blogs

New PMI-ACP Workbook

Leading Answers - Mike Griffiths - Thu, 04/27/2017 - 23:00
I am pleased to announce the availability of my new PMI-ACP Workbook. This new workbook focusses on a smaller subset of 50 key topics. My original PMI-ACP Exam Prep book distilled all the relevant content from the 11 books on... Mike Griffiths
Categories: Blogs

Python: Flask – Generating a static HTML page

Mark Needham - Thu, 04/27/2017 - 22:59

Whenever I need to quickly spin up a web application, Python’s Flask library is my go-to tool, but I recently found myself wanting to generate a static HTML page to upload to S3, and wondered if I could use Flask for that as well.

It’s actually not too tricky. If we’re in the scope of the app context then we have access to the template rendering that we’d normally use when serving the response to a web request.

The following code will generate an HTML file based on a template file, templates/blog.html:

from flask import render_template
import flask

app = flask.Flask('my app')

if __name__ == "__main__":
    with app.app_context():
        rendered = render_template('blog.html', \
            title = "My Generated Page", \
            people = [{"name": "Mark"}, {"name": "Michael"}])
        print(rendered)

templates/blog.html

<html>
  <head>
    <title>{{ title }}</title>
  </head>
  <body>
    <h1>{{ title }}</h1>
    <ul>
    {% for person in people %}
      <li>{{ person.name }}</li>
    {% endfor %}
    </ul>
  </body>
</html>

If we execute the Python script it will generate the following HTML:

$ python blog.py 

<html>
  <head>
    <title>My Generated Page</title>
  </head>
  <body>
    <h1>My Generated Page</h1>
    <ul>
      <li>Mark</li>
      <li>Michael</li>
    </ul>
  </body>
</html>

And we can finish off by redirecting that output into a file:

$ python blog.py  > blog.html

We could also write to the file from Python but this seems just as easy!
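If you did want to write the file from Python instead, a minimal sketch (the output filename is just an example) looks like this:

from flask import render_template
import flask

app = flask.Flask('my app')

if __name__ == "__main__":
    with app.app_context():
        rendered = render_template('blog.html',
            title = "My Generated Page",
            people = [{"name": "Mark"}, {"name": "Michael"}])

        # write the rendered HTML straight to a file instead of printing it
        with open("blog.html", "w") as html_file:
            html_file.write(rendered)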

The post Python: Flask – Generating a static HTML page appeared first on Mark Needham.

Categories: Blogs

Knowledge Sharing


SpiraTeam is an agile application lifecycle management (ALM) system designed specifically for methodologies such as Scrum, XP and Kanban.