
Feed aggregator

De-mystifying Jest Snapshot Test Mocks

Xebia Blog - Mon, 04/10/2017 - 13:48

So, let’s say you have a nice React Native setup with the Jest testing library. You want to snapshot-test all your components of course! But you’re getting seemingly unrelated errors when you try to mock a third-party module in your snapshots, and you’re lost in all that API documentation. Let’s dig into an example […]

The post De-mystifying Jest Snapshot Test Mocks appeared first on Xebia Blog.

Categories: Companies

TrumpCare in its Infancy January 2017

Agile Complexification Inverter - Sun, 04/09/2017 - 23:47
I'm extremely concerned today for my country and this planet.  It appears that history is repeating.
    January 27th -- International Holocaust Remembrance Day.

President Trump bars refugees and citizens of Muslim nations entry into the U.S.A.

The New York Times
By Bundesarchiv, Bild 183-N0827-318 / CC-BY-SA 3.0, CC BY-SA 3.0 de
Four score and four years ago a dictator brought forth on the European continent an evolving plan to rule the world and subjugate the masses.

Now we are engaged in a great resistance, testing whether our nation, or any nations conceived from the learning of our mothers and fathers and so dedicated to liberty, can long endure.  We are met on a great social square of technologic creation.  We have come to dedicate a portion of our wealth, wisdom, and life to those in history that have offered their lives and wisdom so that we may learn and prosper.  It is altogether fitting and proper that we should do this.

But, in a larger sense, we can not dedicate -- we can not consecrate -- we can not hallow -- this square.  The brave women and men, living and dead, who struggle here, have consecrated it, far above our poor power to add or detract.  The world will little note, nor long remember what we say here, but it can never forget what they did here in the commons.  It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced.  It is rather for us to be here dedicated to the great task remaining before us -- that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion -- that this nation, ruled by law, shall have a new birth of freedom -- and that government of the people, by the people, for the people, shall not perish from this planet.

-- David A. Koontz, human patriot


President Abraham Lincoln's address, delivered on Thursday, November 19, 1863, to dedicate Soldiers' National Cemetery in Gettysburg, Pennsylvania, four and a half months after the Union armies defeated those of the Confederacy at the Battle of Gettysburg:

Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.

"Abraham Lincoln's carefully crafted address, secondary to other presentations that day, was one of the greatest and most influential statements of national purpose. In just over two minutes, Lincoln reiterated the principles of human equality espoused by the Declaration of Independence[6] and proclaimed the Civil War as a struggle for the preservation of the Union sundered by the secession crisis,[7] with "a new birth of freedom"[8] that would bring true equality to all of its citizens.[9] Lincoln also redefined the Civil War as a struggle not just for the Union, but also for the principle of human equality.[6]".

"Lincoln's address followed the oration by Edward Everett, who subsequently included a copy of the Gettysburg Address in his 1864 book about the event (Address of the Hon. Edward Everett At the Consecration of the National Cemetery At Gettysburg, 19th November 1863, with the Dedicatory Speech of President Lincoln, and the Other Exercises of the Occasion; Accompanied by An Account of the Origin of the Undertaking and of the Arrangement of the Cemetery Grounds, and by a Map of the Battle-field and a Plan of the Cemetery)."
 -- Wikipedia, Gettysburg Address
The book's title is indicative of the author's ability to thoroughly cover a topic. Everett's 2-hour oration had 13,607 words.



See Also:
     The Address by Ken Burns - PBS. Did you hear the story about the person that would give $20 bucks to grandkids that learned the Gettysburg Address? It encouraged me to learn it and its history. History has an interesting emergent property... it appears to repeat; this is an emergent property of a complex system. It is the complex system practicing and learning... Humans, as part of this universe's system, are so far (as we know) its fastest-learning sub-system. Our apparent loop duration is currently around four score years.
     Why President Obama Didn't Say 'Under God' While Reading the Gettysburg Address
     Lincoln's 272 Words, A Model Of Brevity For Modern Times by Scott Simon

    Germany's Enabling Act of 1933. "The Enabling Act gave Hitler plenary powers. It followed on the heels of the Reichstag Fire Decree, which abolished most civil liberties and transferred state powers to the Reich government. The combined effect of the two laws was to transform Hitler's government into a de facto legal dictatorship."
     Women's March 2017 "A series of worldwide protests on January 21, 2017, in support of women's rights and related causes. The rallies were aimed at Donald Trump, immediately following his inauguration as President of the United States, largely due to his statements and positions which had been deemed as anti-women or otherwise reprehensible."
     Reichstag Fire Decree - Germany 1933  According to Rudolf Diels, Hitler was heard shouting through the fire "these sub-humans do not understand how the people stand at our side. In their mouse-holes, out of which they now want to come, of course they hear nothing of the cheering of the masses."[1].   Seizing on the burning of the Reichstag building as the supposed opening salvo in a communist uprising, the Nazis were able to throw millions of Germans into a convulsion of fear at the threat of Communist terror. The official account stated:  The burning of the Reichstag was intended to be the signal for a bloody uprising and civil war. Large-scale pillaging in Berlin was planned for as early as four o’clock in the morning on Tuesday. It has been determined that starting today throughout Germany acts of terrorism were to begin against prominent individuals, against private property, against the lives and safety of the peaceful population, and general civil war was to be unleashed…[2]
     TrumpCare: In the Beginning by Bill Frist - Nov. 2016, Forbes.  "Yesterday Americans woke up to news of a new president-elect: Donald J. Trump. The immediate question for those whose lives focus around lifting the health of individual Americans is, “What does this mean for health care in America?”"
Categories: Blogs

Automated Tests for Asynchronous Processes

thekua.com@work - Sun, 04/09/2017 - 15:31

It’s been a while since I’ve worked on a server-side application that had asynchronous behaviour that wasn’t already an event-driven system. Asynchronous behaviour is always an interesting challenge to design and test. In general, asynchronous behaviour should not be hard to unit test – after all, the behaviour of an action shouldn’t necessarily be coupled temporally (see forms of coupling).

TIP: If you are finding the need for async testing in your unit tests, you’re probably doing something wrong and need to redesign your code to decouple these concerns.

If your testing strategy only includes unit testing, you will miss a whole bunch of behaviours that are often caught at higher levels of testing like integration, functional or system tests – which is where I need asynchronous testing.

Asynchronous testing, conceptually, is actually pretty easy. Like synchronous testing, you take an action and then look for a desired result. However, unlike synchronous testing, your test cannot guarantee that the action has completed before you check for the side-effect or result.

There are generally two approaches to testing asynchronous behaviour:

  1. Remove the asynchronous behaviour
  2. Poll until you have the desired state
Remove the asynchronous behaviour

I used this approach when TDD-ing a thick client application many years ago, when writing applications in Swing was still a common approach. Doing this required isolating the action-invoking behaviour in a single place so that, during testing, it would occur in the same thread as the test instead of in a different thread. I even gave a presentation on it in 2006, and wrote this cheatsheet talking about the process.

This required a disciplined approach to design, where toggling this behaviour was isolated in a single place.
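The idea can be sketched in Python (a hypothetical illustration, not the author's Swing-era code; the `Worker` and `ImmediateExecutor` names are invented here): production code receives its executor as a dependency, and the test substitutes one that runs tasks on the calling thread, so the action has always completed before the assertion runs.

```python
from concurrent.futures import ThreadPoolExecutor

class ImmediateExecutor:
    """Test double: runs submitted tasks synchronously on the calling thread."""
    def submit(self, fn, *args, **kwargs):
        fn(*args, **kwargs)

class Worker:
    """Production code takes its executor as a dependency, so tests can swap it."""
    def __init__(self, executor):
        self._executor = executor
        self.results = []

    def process(self, item):
        # In production this runs on a pool thread; in tests, inline.
        self._executor.submit(self.results.append, item)

# Production wiring would be: Worker(ThreadPoolExecutor(max_workers=4))
# Test wiring: no polling or sleeping needed, the work is already done.
worker = Worker(ImmediateExecutor())
worker.process("done")
assert worker.results == ["done"]
```

The discipline the post mentions is exactly this: the thread hand-off must live in one injectable place, or the substitution trick stops working.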

Poll until you have the desired state

Polling is a much more common approach to this problem; however, it involves the usual problems of waiting and timeouts. Waiting too long increases your overall test time and extends the feedback loop. Polling too frequently might also be quite costly, depending on the operation you have (e.g. hammering some integration point unnecessarily).

Timeouts are another curse of asynchronous behaviour because you don’t really know when an action is going to take place, but you don’t really want a test going forever.
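The shape of such a polling hook can be sketched in Python (a minimal hypothetical helper, not the Awaitility API that the Java example below uses): both the timeout and the poll interval are explicit, so the trade-offs just described are visible in one place.

```python
import time

def await_until(condition, timeout=10.0, poll_interval=0.1):
    """Poll `condition` until it returns True, or fail after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        # Short enough for fast feedback, long enough not to hammer the target.
        time.sleep(poll_interval)
    raise AssertionError("condition not met within {0}s".format(timeout))

# Example: a condition that becomes true after a couple of polls.
state = {"ticks": 0}
def flag_eventually():
    state["ticks"] += 1
    return state["ticks"] >= 3

await_until(flag_eventually, timeout=2.0, poll_interval=0.01)
```

Libraries like Awaitility wrap exactly this loop behind a fluent API, with sensible defaults for the timeout and interval.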

The last time I had to do this, we ended up writing our own polling and timeout hook; while relatively simple, that kind of helper is now available as a library. Fortunately other people in Java-land have also encountered this problem and contributed a library to make testing this easier, in the form of Awaitility.

Here is a simple test that demonstrates how easy the library can make testing asynchronous behaviour:

package com.thekua.spikes.aysnc.testing;

import com.thekua.spikes.aysnc.testing.FileGenerator;
import org.junit.Before;
import org.junit.Test;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;
import static org.hamcrest.Matchers.startsWith;
import static org.junit.Assert.assertThat;

public class FileGeneratorTest {

    private static final String RESULT_FILE = "target/test/resultFile.txt";
    private static final String STEP_1_LOG = "target/test/step1.log";
    private static final String STEP_2_LOG = "target/test/step2.log";
    private static final String STEP_3_LOG = "target/test/step3.log";

    private static final List<String> FILES_TO_CLEAN_UP = Arrays.asList(STEP_1_LOG, STEP_2_LOG, STEP_3_LOG, RESULT_FILE);


    @Before
    public void setUp() {
        for (String fileToCleanUp : FILES_TO_CLEAN_UP) {
            File file = new File(fileToCleanUp);
            if (file.exists()) {
                file.delete();
            }
        }
    }


    @Test
    public void shouldWaitForAFileToBeCreated() throws Exception {
        // Given I have an async process to run
        String expectedFile = RESULT_FILE;

        List<FileGenerator> fileGenerators = Arrays.asList(
                new FileGenerator(STEP_1_LOG, 1, "Step 1 is complete"),
                new FileGenerator(STEP_2_LOG, 3, "Step 2 is complete"),
                new FileGenerator(STEP_3_LOG, 4, "Step 3 is complete"),
                new FileGenerator(expectedFile, 7, "Process is now complete")
        );

        // when it is busy doing its work
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (final FileGenerator fileGenerator : fileGenerators) {
            executorService.execute(new Runnable() {
                public void run() {
                    fileGenerator.generate();
                }
            });
        }

        // then I get some log outputs
        await().atMost(2, SECONDS).until(testFileFound(STEP_1_LOG));
        await().until(testFileFound(STEP_2_LOG));
        await().until(testFileFound(STEP_3_LOG));

        // and I should have my final result with the output I expect
        await().atMost(10, SECONDS).until(testFileFound(expectedFile));
        String fileContents = readFile(expectedFile);
        assertThat(fileContents, startsWith("Process"));

        // Cleanup
        executorService.shutdown();
    }

    private String readFile(String expectedFile) throws IOException {
        return new String(Files.readAllBytes(Paths.get(expectedFile)));

    }


    private Callable<Boolean> testFileFound(final String file) {
        return new Callable<Boolean>() {
            public Boolean call() throws Exception {
                return new File(file).exists();
            }
        };
    }
}

You can explore the full demo code on this public git repository.

Categories: Blogs

Docker for Developers – An Interview on JavaScript Jabber

Derick Bailey - new ThoughtStream - Fri, 04/07/2017 - 13:30

On March 28th, 2017 I made an appearance on the JS Jabber podcast with a great panel of software developers, talking about Docker for software developers and JavaScript


In addition to the basics of “what is Docker?” we talk about why a developer would want to use it, including a lot of misconceptions and misunderstandings around the tooling and technologies, and more, including:

  • What’s the ultimate benefit that Docker provides?
  • Isn’t it a DevOps tool?
  • Why bother learning it, as a JavaScript developer? 
  • How does it compare to virtual machines?
  • Are you coding directly in the container, or ?

From the show notes:

As a JavaScript developer, learning Docker is going to have the same pay-off as it does for other kinds of developers. There are times when your code works well on one machine, but not on another. You then ask yourself why things are going that way when you are sure that you have tested it already.

The reasons you come up with boil down to a few basic categories. It’s either a different operating system, configuration bits for the software itself, or libraries and runtimes that need to be installed and configured. These cause the machine-to-machine issues that Docker solves.

Check out episode 255 of JS Jabber and learn more about Docker for JavaScript developers!

The post Docker for Developers – An Interview on JavaScript Jabber appeared first on DerickBailey.com.

Categories: Blogs

Blog Series: TDD and Process, Part 3

NetObjectives - Thu, 04/06/2017 - 11:52
Part 3: Reusing the Specification

I’ve said that the notion of automation should follow the TDD specification process, not lead it.  We choose the “test automation framework” (TAFW) based on the nature of our specification and how we have chosen to represent it to stakeholders. That said, once we have settled on a TAFW, we can then determine how best to use it to bind the specification to the...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Groundhog Day at the Agile Transition Initiative

Agile Complexification Inverter - Thu, 04/06/2017 - 03:10
Now that everyone knows about Bill Murray's movie Groundhog Day - I love February 2nd.  It's my favorite, most enjoyable, beloved, cherished, esteemed day of the year.  And I don't need to tell you again how many LIKES I give this redundant day... so on to the story.

Bill & Groundhog
Well this happened about ten years ago, and about 6 years ago, or maybe it was 4 years past, and seems like we did this about 24 months ago...  or it could be today!

The Agile Transition Initiative at the company has come upon an inflection point (do ya' know what that is...  have you read Tipping Point?).  I'm not exactly sure of its very precise date... but Feb. 2nd would be the perfect timing.   The inflection has to do with which direction your Agile Transition Initiative takes from this point into the future.   Will it continue on its stated mission to "transform" the organization?  Or will it stall out and revert slowly to the status quo?

How do I recognize this perilous point in the agile trajectory?  Well there are several indications.  But first we must digress.


[We must Digress.]
Punxsutawney Phil Says more Winter in 2017

In this story we will use germ theory as a metaphor.  Germ theory came about in about ... (wait - you guess - go ahead ...  I'll give you a hundred-year window... guess...). That's right! "The germ theory was proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762."  Wow, we've known about these little buggers for a long time.  And we started washing our hands ... (when...  correct - again).  "The year was 1846, and our would-be hero was a Hungarian doctor named Ignaz Semmelweis."  So right away business (society) started using a new discovery - a better way to treat patients... or well, it took a while - maybe a few months, or maybe more than 300 years.

But back to the metaphor - in this metaphor the organization will be like a human body and the change initiative will take the role of a germ.  The germ is a change introduced to the body by some mechanism we are not very concerned with - maybe the body rubbed up against another body.  I hear that's a good way to spread knowledge.

We are interested in the body's natural process when a new factor is introduced.  What does a body do?  Well, at first it just ignores this new thing - heck, it's only one or two little germs, they can't hurt anything - (there are a shit load of germs in your body right now).  But the germs are there to make a home - they consume energy and reproduce (at this point let's call it a virus - meh - what's the difference?).  So the virus reproduces rapidly and starts to cause ripples... the body notices this and starts to react.  It sends in the white-blood cells - with antibodies.  Now I don't understand the biological responses - but I could learn all about it... but this is a metaphor, and the creator of a metaphor may have artistic license to bend the truth a bit to make the point.  Point - WHAT IS THE POINT?

The point is that the body (or organization) will have a natural reaction to the virus (change initiative), and when the body recognizes this change its reaction is natural - maybe call it subconscious, involuntary.  Well, let's just say it's been observed multiple times - the body tries very hard to rid itself of the unwanted bug (change).  It may go to unbelievable lengths to get rid of it - like tossing all its cookies back up - or squirting all its incoming energy into the waste pit.  It could even launch a complete shutdown of all communication to a limb and allow it to fester and die, hopefully to fall off and not kill the complete organism.  Regaining the status quo is in the fundamental wiring of the human body.  Anything that challenges that stasis requires great energy to overcome this fundamental defense mechanism.

[Pop the stack.]
So back to the indicators of the tipping point in agile transitions.  Let's see if our metaphor helps us to see these indications.  The tossing of cookies - check.  That could be new people, hired to help with the change, being tossed right back out of the organization.  The squirts - check.  That is tenured people who have gotten on board with the change being challenged by others to just water it down... make it look like the things we used to do.  Heck, let's even re-brand some of those new terms with our own meanings - customized for our unique situation - which only we have ever seen, and therefore only we can know the solutions.  Folks, this is called the Bull Shit Reaction.

Now imagine a limb of the organization that has adopted the new way - they have caught the virus.  There is a high likelihood that someone in the organization is looking at them as "special".  A bit jealous of their new status, they will start hoarding information flow from that successful group.  Now, true, that group was special - they attempted early transition and have had (in this organization's realm) success.  Yet there was some exception to normal business process that made that success possible.  How could we possibly reproduce that special circumstance across the whole org-chart?  Maybe we just spin them off and let them go it alone - good luck, now back to business.

What's a MIND to do with this virus ridden body and all these natural reactions?

Well we are at an inflection point... what will you do?
Which curve do you want to be on?  - by Trail Ridge Consulting
[What Should You Do?]
Say you are in the office of the VP of some such important silo, and they are introducing themselves to you (they are new at the org).  They ask you how it's going.  You reply: well, very well.  [That was the appropriate social response, wasn't it?] Then they say, no - how's the agile transformation going?  BOOM!  That is a bit of a shocking first question in a get-to-know-each-other session - or is it that type of session?  What should you do?

I will skip to the option I chose ...  because the other options are for crap - unless you have a different motive than I do... and that is a very real possibility; if so, definitely DON'T DO THIS:

Ask the VP if this is a safe space where you can tell the truth.  Be sincere and concerned - then listen.  Their response is the direction you must now take; you have ceded control of your action to them.  Listen, and listen to what is not said - decide if they want the truth or if they want to be placated.  Then give them what they desire.  For example (an obviously easy example - perhaps); imagine that the VP said:  I want the truth, you should always tell the truth.

Don't jump too fast to telling the truth... how can you ascertain how much of the truth they can handle?  You should definitely have an image of Nicholson as Colonel Nathan R. Jessep as he addresses the court on "Code Red".


You might ask about their style: is it bold and blunt, or soft and relationship-focused?  You could study their DiSC profile to see what their nature may tell you about how to deliver the truth.

Imagine you determine that they want it blunt (I've found that, given a choice, most people say this, and only 75% are fibbing). So you suggest that it's not going well.  The transformation has come to an inflection point (pause to see if they understand that term).  You give some archeology - the organization has tried to do an agile transformation X times before.  The VP is right with you: "and we wouldn't be trying again if those had succeeded."  Now that was a nice hors d'oeuvre, savory.  The main course is served - the VP asks why.

Now you could offer your opinion, deliver some fun anecdote or two or 17, refer to some data, write a white paper, or give them a Let Me Google That For You link. Or you could propose that they find the answer themselves.

Here's how that might go down:  Ask them to round up between 8.75 and 19.33 of the most open-minded tenured (5 - 20 yrs) people, up and down the hierarchy: testers, developers, delivery managers, directors, administrators (always include them - they are key to this process - 'cause they know everything that has happened for the last 20 years).  Invite them to join the VP in a half-day discovery task - to find out why this Agile thing gets ejected before it takes hold of our organization. If you come away from this workshop with anything other than culture at the root of the issue, then congratulations - your organization is unique.  Try the Journey Line technique with the group.  It's a retrospective of the organization's multi-year, multi-attempt effort to do ONE THING, multiple times.  Yes, kinda like Groundhog Day.

See Also:

The Fleas in the Jar Experiment. Who Kills Innovation? The Jar, The Fleas or Both? by WHATSTHEPONT


Categories: Blogs

Dash off a Fiver to the ACLU

Agile Complexification Inverter - Thu, 04/06/2017 - 03:09
What can you do to save the world with an Amazon Dash Button?

Has a new era of enablement reached the hockey stick curve of exponential growth?  I think it has.  I've been picking up this vibe, and I may not be the first to sense things around me.  I've got some feedback that I'm very poor at it in the personal sphere.  However, on a larger scale, on an abstract level, in the field of tech phenomena, I've got a bit of a streak going.  Mind you, I'm not rich on a Zuckerberg level... and my general problem is actualizing the idea as opposed to just having the brilliant idea - or recognizing the opportunity.

A colleague told me I would like this tinkerer's Dash Button hack.  It uses the little hardware IoT button Amazon built to sell more laundry soap - a bit of imaginative thinking outside of the supply-chain problem domain and a few hours of coding.  Repurposing the giant AWS Cloud Mainframe, that the Matrix Architect has designed to enslave you, to give the ACLU a Fiver ($5) every time you feel like one of the talking heads (#45) in Washington DC has infringed upon one of your civil liberties.


Now I think this is the power of a true IoT: the fact that an enabling technology could allow an emergent property that was not conceived of in its design.  No one has really tried to solve the problem of the democratic voice of the people.  We use the power of currency to proxy for so many concepts in our society, and it appears that the SCOTUS has accepted that currency and its usage is a form of speech (although not free - do you see what I did there?).  What would the Architect of our Matrix learn if he/she/it could collect all the thoughts of people when they had a visceral reaction to an event, correlate that reaction to the event, measure the power of the reaction over a vast sample of the population, and feed that reaction into the decision-making process via a stream of funding for or against a proposed policy?  The real power of this feedback system will occur when the feedback message may mutate the proposal (the power of Yes/AND).

I can see this enabling a real trend toward democracy - and of course this disrupts the incumbent power structure of the representative government (federal republic).  Imagine a hack-a-thon where all the political organizations and the charities and the religions came together in a convention center.  There are tables and spaces and boxes upon boxes of Amazon Dash Buttons.  We ask the organizations what they like about getting a Fiver every time the talking head mouths off, and what data they may also need to capture to make the value stream most effective in their unique organization.  And we build and test this into an eco-system on top of the AWS Cloud.
"You know, if one person, just one person does it they may think he's really sick and they won't take him."What would it take to set this up one weekend...  I've found that I'm not a leader.  I don't get a lot of followers when I have an idea... but I have found that I can make one heck of a good first-follower!

"And three people do it, three, can you imagine, three people walking in singin a bar of Alice's Restaurant and walking out. They may think it's an organization. And can you, can you imagine fifty people a day, I said fifty people a day walking in singin a bar of Alice's Restaurant and walking out. And friends they may thinks it's a movement."I will just through this out here and allow the reader to link up the possibilities.


Elmo From ‘Sesame Street’ Learns He's Fired Because Of Donald Trump’s Budget Cuts.  Would this be a good test case for a Dash Button mash-up to donate to Sesame Workshop?

See Also:

GitHub Repo Donation Button by Nathan Pryor
Instructables Dash Button projects
Coder Turns Amazon Dash Button Into ACLU Donation Tool by Mary Emily O'Hara
Life With The Dash Button: Good Design For Amazon, Bad Design For Everyone Else by Mark Wilson
How to start a movement - Derek Sivers TED Talk
Categories: Blogs

AWS Lambda: Programmatically scheduling a CloudWatchEvent

Mark Needham - Thu, 04/06/2017 - 01:49

I recently wrote a blog post showing how to create a Python ‘Hello World’ AWS lambda function and manually invoke it, but what I really wanted to do was have it run automatically every hour.

To achieve that in AWS Lambda land we need to create a CloudWatch Event. The documentation describes them as follows:

Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams.


This is actually really easy from the Amazon web console as you just need to click the ‘Triggers’ tab and then ‘Add trigger’. It’s not obvious that there are actually three steps involved, as they’re abstracted away from you.

So what are the steps?

  1. Create rule
  2. Give permission for that rule to execute
  3. Map the rule to the function

I forgot to do step 2 initially, and then you just end up with a rule that never triggers, which isn’t particularly useful.

The following code creates a ‘Hello World’ lambda function and runs it once an hour:

import boto3

lambda_client = boto3.client('lambda')
events_client = boto3.client('events')

fn_name = "HelloWorld"
fn_role = 'arn:aws:iam::[your-aws-id]:role/lambda_basic_execution'

fn_response = lambda_client.create_function(
    FunctionName=fn_name,
    Runtime='python2.7',
    Role=fn_role,
    Handler="{0}.lambda_handler".format(fn_name),
    Code={'ZipFile': open("{0}.zip".format(fn_name), 'rb').read(), },
)

fn_arn = fn_response['FunctionArn']
frequency = "rate(1 hour)"
name = "{0}-Trigger".format(fn_name)

rule_response = events_client.put_rule(
    Name=name,
    ScheduleExpression=frequency,
    State='ENABLED',
)

lambda_client.add_permission(
    FunctionName=fn_name,
    StatementId="{0}-Event".format(name),
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_response['RuleArn'],
)

events_client.put_targets(
    Rule=name,
    Targets=[
        {
            'Id': "1",
            'Arn': fn_arn,
        },
    ]
)

We can now check if our trigger has been configured correctly:

$ aws events list-rules --query "Rules[?Name=='HelloWorld-Trigger']"
[
    {
        "State": "ENABLED", 
        "ScheduleExpression": "rate(1 hour)", 
        "Name": "HelloWorld-Trigger", 
        "Arn": "arn:aws:events:us-east-1:[your-aws-id]:rule/HelloWorld-Trigger"
    }
]

$ aws events list-targets-by-rule --rule HelloWorld-Trigger
{
    "Targets": [
        {
            "Id": "1", 
            "Arn": "arn:aws:lambda:us-east-1:[your-aws-id]:function:HelloWorld"
        }
    ]
}

$ aws lambda get-policy --function-name HelloWorld
{
    "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"HelloWorld-Trigger-Event\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"events.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1:[your-aws-id]:function:HelloWorld\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:events:us-east-1:[your-aws-id]:rule/HelloWorld-Trigger\"}}}]}"
}

All looks good, so we’re done!
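If you need to undo all of this while iterating on the script, the pieces come apart in reverse order: unmap the target, revoke the permission, then delete the rule. A sketch assuming the same names as above (the boto3 import is deferred so the function can be defined without AWS credentials):

```python
def remove_hello_world_trigger(fn_name="HelloWorld"):
    """Tear down the trigger created above, in reverse order."""
    import boto3  # deferred so the sketch loads without AWS credentials

    events_client = boto3.client('events')
    lambda_client = boto3.client('lambda')
    rule_name = "{0}-Trigger".format(fn_name)

    # Unmap the rule from the function
    events_client.remove_targets(Rule=rule_name, Ids=["1"])
    # Revoke the rule's permission to invoke the function
    lambda_client.remove_permission(
        FunctionName=fn_name,
        StatementId="{0}-Event".format(rule_name),
    )
    # Delete the rule itself
    events_client.delete_rule(Name=rule_name)
```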

The post AWS Lambda: Programmatically scheduling a CloudWatchEvent appeared first on Mark Needham.

Categories: Blogs

Certified ScrumMaster Training Workshop in Ottawa—June 26-27

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:50
Agile Pain Relief presents a two-day Certified ScrumMaster Workshop in Ottawa— June 26-27, 2017 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

Certified Scrum Product Owner (CSPO) in Edmonton—June 22-23

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:49
Agile Pain Relief presents a two-day Certified Scrum Product Owner (CSPO) workshop in Edmonton—June 22-23 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

Certified ScrumMaster Training Workshop in Edmonton—June 20-21

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:47
Agile Pain Relief presents a two-day Certified ScrumMaster Workshop in Edmonton— June 20-21, 2017 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

Certified Scrum Product Owner (CSPO) in Ottawa—June 8-9

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:41
Agile Pain Relief presents a two-day Certified Scrum Product Owner (CSPO) workshop in Ottawa—June 8-9 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

Certified ScrumMaster Training Workshop in Toronto—June 6-7

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:05
Agile Pain Relief presents a two-day Certified ScrumMaster Workshop in Toronto—June 6-7 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

4 Benefits of Kanban Project Management

Kanban project management can be used at the team, project, and portfolio levels, to help teams deliver on time, on budget and on value.

The post 4 Benefits of Kanban Project Management appeared first on Blog | LeanKit.

Categories: Companies

What I Learned By Deleting All Of My Docker Images And Containers

Derick Bailey - new ThoughtStream - Wed, 04/05/2017 - 17:49

A few days ago I deleted all of my Docker containers, images and data volumes on my development laptop… wiped clean off my hard drive.

By accident.

And yes, I panicked!


But after a moment, the panic stopped; gone instantly after I realized that when it comes to Docker and containers, I’ve been doing it wrong.

Wait, You Deleted Them … Accidentally?!

If you build a lot of images and containers, like I do, you’re likely going to end up with a very large list of them on your machine.

Go ahead and open a terminal / console window and run these two commands:

docker ps -a
docker images

Chances are, you have at least half a dozen containers with random names and more than a few dozen images, many of them with no tag info to tell you what they are. It’s a side effect of using Docker for development efforts, rebuilding images and rerunning new container instances on a regular basis.

No, it’s not a bug. It’s by design, and I understand the intent (another discussion for another time).

But, the average Docker developer knows that most of these old containers and images can be deleted safely. A good Docker developer will clean them out on a regular basis. And great Docker developers… well, they’re the ones who automate cleaning out all the old cruft to keep their machines running nice and smooth, without Docker-related artifacts taking up the entire hard drive.
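That automation can be as simple as a scheduled script. A sketch using the prune subcommands added in Docker 1.13 (the -f flag skips the confirmation prompt; the volume line is destructive, so only include it if your data really is ephemeral):

```shell
docker_cleanup() {
  # remove all stopped containers
  docker container prune -f
  # remove dangling (untagged) images
  docker image prune -f
  # remove unused volumes -- destructive, only if nothing you care about lives there
  docker volume prune -f
}
```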

Then, there’s me.

DANGER, WILL ROBINSON

For whatever reason, I realized it had been a while since I had cleaned out my Docker artifacts. So I did what I always do: hit Google and the magic answers of the internet for all my shell-scripting needs.

My first priority was to remove all untagged images. A quick search and click later, I had a script that looked familiar pasted into my terminal window and I was hitting the enter button gleefully.

It wasn’t until a moment later – when I ran “docker images” again, and saw that I still had a dozen untagged images – that I figured out something was wrong.

Looking back at the page from which I copied the script, I saw the commands sitting under a heading that I had previously ignored. It read,

“Remove all stopped containers.”

Well, good news! All of my containers were already stopped, so guess what happened?

The panic hit hard as I quickly re-ran “docker ps -a” to find an empty list.


The Epiphany And The Evanescent Panic

As fast as my panic had set in, it left. Only a mild annoyance with myself for making such a simple mistake remained. And the only reason for even that mild annoyance was knowing that I would have to recreate the container instances I need.

That only takes a moment, though, so it’s not a big deal.

In the end, the panic was gone due to my realization of something that I’ve read and said dozens of times.

From the documentation on Dockerfile best practices:

Containers should be ephemeral

The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.

I’ve used the word ephemeral, when talking about Docker containers, at least a dozen times in the last month.

But it wasn’t until this accidental moment of panic that I realized just how true it should be, and how wrong I was in my use of containers.

The Not-So Nuclear Option

The problem I had was the way in which I was using and thinking about containers, and this stemmed from how I viewed the data and configuration stored in them.

Basically, I was using my containers as if they were full-fledged installations on my machine or in a virtual machine. I was stopping and starting the same container over and over to ensure I never lost my data or configuration.

Sure, some of these containers used host-mounted volumes to read and write data to specific folders on my machine. For the most part, however, I assumed I would never lose the data in my containers because I would never delete them.

Well, that clearly wasn’t the case anymore…

I see now that what I once told a friend was “the nuclear option” – deleting all stopped containers – is really more like a dry-erase marker.

I’m just cleaning the board so I can use it again.

A Defining Moment

My experience, moment of panic, and realization generated this post on Twitter:

idea: if deleting all of your #Docker containers would cause you serious headache and hours of work to rebuild, you’re doing Docker wrong

— Derick Bailey (@derickbailey)

March 24, 2017


And honestly, in reflection, this was a very defining experience.

Reading and talking about how a Docker container is something that I can tear down, stand up again and continue from where I left off is one thing.

But having gone through this, I can see it directly applied to my own efforts, now.

Now the only minor annoyance that I have is rebuilding the container instances when I need them. The data and configuration are all easily re-created with scripts that I already have for my applications. At this point, I’m not even worried anymore.

That’s how Docker should be done.

The post What I Learned By Deleting All Of My Docker Images And Containers appeared first on DerickBailey.com.

Categories: Blogs

Thinking About Cadence vs. Iterations

Johanna Rothman - Wed, 04/05/2017 - 17:42


Many people use an iteration approach to agile. They decide on an iteration duration, commit to work for that iteration and by definition, they are done at the end of the timebox.

I like timeboxing many things. I like timeboxing work I don’t know how to start. I find short timeboxes help me focus on the first thing of value. Back when I used staged-delivery as a way to organize projects, we had a monthly milestone (timebox) to show progress and finish features. The teams and I found that a cadence of one month was good for us. The timebox focused us and allowed us to say no to other work.

A cadence is a pulse, a rhythm for a project. In my example above, you can see I used a timebox as a cadence and as a way to focus on work. You don’t have to use timeboxes to provide a cadence.

A new reader for the Pragmatic Manager asked me about scaling their agile transformation. They are starting and a number of people are impatient to be agile already. I suggested that instead of scaling agile, they think about what each team needs for creating their own successful agile approach.

One thing many teams (but not all) need is a cadence for delivery, retrospectives, and more planning. Not every team needs the focus of a timebox to do that. One team I know delivers several times during the week. They plan weekly, but not the same day each week. When they’ve finished three features, they plan for the next three. It takes them about 20-30 minutes to plan. It’s not a big deal. This team retrospects every Friday morning. (I would select a different day, but they didn’t ask me.)

Notice that they have two separate cadences for planning: once a week, but not the same day; and once a week for retrospectives on the same day each week.

Contrast that with another team new to agile. They have a backlog refinement session that often takes two hours (don’t get me started) and a two-hour pre-iteration planning session. Yes, they have trouble finishing the work they commit to. (I recommended they timebox their planning to one hour each and stop planning so much. Timeboxing that work to a shorter time would force them to plan less work. They might deliver more.)

A timebox can help a team create a project cadence, a rhythm. And, the timebox can help the team see their data, as long as they measure it.

A project cadence provides a team a rhythm. Depending on what the team needs, the team might decide to use timeboxes or not.

For me, one of the big problems in scaling is that each team often needs their own unique approach. Sometimes, that doesn’t fit with what managers new to agile think. I find that when I discuss cadence and iterations and explain the (subtle) difference to people, that can help.

Categories: Blogs

Building an Experimentation Culture at Spotify

Scrum Expert - Wed, 04/05/2017 - 17:03
Running an experiment is trivial: Make a change and see what happens. Running experiments at scale, however, is a different story. It is not trivial to simultaneously run hundreds of experiments...

Categories: Communities

Agility, Scalability & Autonomy

TV Agile - Wed, 04/05/2017 - 16:54
HMRC, the tax and revenue authority in the UK has a stated goal of becoming one of the most digital tax administrations in the world by 2020. The Department is in the midst of a digitally-enabled transformation and having a flexible infrastructure in place to underpin this is crucial – one that can support its […]
Categories: Blogs

A Nifty Workshop Technique

James Shore - Wed, 04/05/2017 - 10:00

It's hard to be completely original. But I have a little trick for workshops that I've never seen others do, and participants love it.

It's pretty simple: instead of passing out slide booklets, provide nice notebooks and pass out stickers. Specifically, something like Moleskine Cahiers and 3-1/3" x 4" printable labels.

Closeup of a workshop participant writing on a notebook page, with a sticker on the other page

I love passing out notebooks because they give participants the opportunity to actively listen by taking notes. (And, in my experience, most do.) Providing notebooks at the start of a workshop reinforces the message that participants need to take responsibility for their own learning. And, notebooks are just physically nicer and more cozy than slide packets... even the good ones.

The main problem with notebooks is that they force participants to copy down material. By printing important concepts on stickers, participants can literally cut and paste a reference directly into their notes. It's the best of both worlds.

There is a downside to this technique: rather than just printing out your slides, your stickers have to be custom-designed references. It's more work, but I find that it also results in better materials. Worth it.

People who've been to my workshops keep asking me if they can steal the technique. I asked them to wait until I documented my one original workshop idea. Now I have. If you use this idea, I'd appreciate credit. Other than that, share and enjoy. :-)

Picture of a table at the Agile Fluency Game workshop showing participants writing in their notebooks

Categories: Blogs

Swagger, the REST Kryptonite

Jimmy Bogard - Tue, 04/04/2017 - 23:08

Swagger, a tool to help design, build, document, and consume RESTful APIs, is ironically kryptonite for building actual RESTful APIs. The battle over the term "REST" has been lost: "RESTful" simply means "an API over HTTP", which these days 99% of the time means "RPC over HTTP".

In a post covering the problems with Swagger, the author outlines some familiar issues I've seen with it (and its progenitors such as apiary.io):

  • Using YAML as the new XSD
  • Does not support Hypermedia (!!!!)
  • URI-centric
  • YAML-generation from code

Some of these are well-known issues, but the biggest one for me is the lack of hypermedia support. Those that know REST understand that REST includes a hypertext constraint. No hypermedia - you're not REST.

And that's OK for plenty of situations. I've blogged and given talks in the past about when REST is appropriate. I've shipped actual REST APIs as well as plenty of plain Web APIs. Each has its place, and I still stick to each name simply because it's valuable to distinguish between APIs with hypermedia and APIs without.

When not to use REST

In my client applications, I rarely actually need REST. If my server has only one client, and that client is developed and deployed in lockstep with the server, there's no value in the decoupling that REST brings. Instead, I embrace the client/server coupling and use HTTP as merely the transport for client/server RPC. And that's perfectly fine for a wide variety of scenarios:

  • Single Page Applications (SPAs)
  • JS-heavy applications (but not full-blown SPAs)
  • Hybrid mobile applications
  • Native mobile applications where you force updates based on server

When you have a client and server that you're able to upgrade at the same time, hypermedia can hold you back. When I build clients alongside the server - and with ASP.NET Core, these both live in the exact same project - you can take advantage of this coupling to embrace this knowledge of the server. I even go so far as compiling my templates/views for Angular/Ember on the server side through Razor to get super-intelligent components that know exactly the shape of my DTOs.

In those cases, you're perfectly fine using RPC-over-HTTP, and Swagger.

When to use REST

When you have a client and server that deploy independently of each other, the coupling risk of RPC greatly increases. And in those cases, I start to look at REST as a means of decoupling my client and my server. The hypermedia constraint of REST goes a long way of helping to decouple, to the point where my clients can react to the existence of links, new form elements, labels, translations and more.

REST clients are more difficult to build, but it's a coupling tradeoff. But if I have server and client deployed independently, perhaps in situations like these:

  • I don't control server API deployment
  • I don't control client consumer deployment
  • Mobile applications where I can't control upgrades
  • Microservice communication

Since Swagger doesn't support REST, and in fact encourages RPC-over-HTTP APIs, I wouldn't touch it for cases where my client's and server's deployments aren't in lockstep.

REST and microservices

This decoupling is especially important for (micro)services, where often you'll see HTTP APIs exposed as a means of exposing service capabilities. Whether or not it's a good idea to expose temporal coupling this way is another question altogether.

If you expose RPC HTTP APIs, you're encouraging a new level of coupling with your microservice, leading down the same monolith path as before but now with 100-10K times more latency.

So if you decide to expose an HTTP API from your microservice for other services to consume, highly consider REST as then at least you'll only have temporal coupling to worry about and not the other forms of coupling that come along with RPC.

Documenting REST APIs

One of the big issues I have with Swagger documentation is that it's essentially no different from API documentation for libraries: Java/Ruby/.NET-style documentation of a list of classes, a list of methods, and a list of parameters. When I've had to consume an API that only had Swagger documentation, I was lost. Where do I start? How do I achieve a workflow of activities when I'm only given API endpoints?

My only savior was that I knew the web app also consumed the API, so I could reverse engineer the correct sequence of API calls by following the workflow of the app.

The ironic part was that the web application included links and forms - providing me a guided user experience and workflow for accomplishing a task. I looked at an item, saw links to related actions, followed them, clicked buttons, submitted forms and so on. The Swagger-based "REST" API was missing all of that, and the docs didn't help.

Instead, I would have preferred a markdown document describing the overall workflows, and the responses just include links and forms that I could follow myself. I didn't need a list of API calls, I needed a user experience applied to API.
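To make that concrete, here's a sketch of what such a hypermedia response might look like (the resource, link names, and URIs are hypothetical, not from any real API): the client discovers what it can do from the links in the response itself, rather than from a hard-coded list of endpoints.

```python
import json

# A hypothetical HAL-style order resource. The _links section carries the
# "user experience" of the API: which actions are available right now.
order = json.loads("""
{
    "id": 42,
    "status": "pending",
    "_links": {
        "self":   { "href": "/orders/42" },
        "cancel": { "href": "/orders/42/cancel", "method": "POST" },
        "pay":    { "href": "/orders/42/payments", "method": "POST" }
    }
}
""")

def available_actions(resource):
    """A client reacts to the presence (or absence) of links."""
    return sorted(rel for rel in resource.get("_links", {}) if rel != "self")

print(available_actions(order))  # prints ['cancel', 'pay']
```

If the server stops offering "cancel" for paid orders, the link simply disappears from the response and the client adapts, with no redeploy and no out-of-band documentation required.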

Swagger, the tool for building RPC-over-HTTP APIs

Swagger has a rich ecosystem and support for a variety of platforms. If I were building a new SPA, I'd take a look at Swagger, especially for its ability to spit out TypeScript models, clients and the like.

However, if I'm building a protocol that demands decoupling with REST, Swagger would lock me in to a highly coupled RPC-over-HTTP API that would cripple my ability to deliver down the road.

Categories: Blogs
