
Certified ScrumMaster Training Workshop in Edmonton—June 20-21

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:47
Agile Pain Relief presents a two-day Certified ScrumMaster Workshop in Edmonton—June 20-21, 2017 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

Certified Scrum Product Owner (CSPO) in Ottawa—June 8-9

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:41
Agile Pain Relief presents a two-day Certified Scrum Product Owner (CSPO) workshop in Ottawa—June 8-9 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

Certified ScrumMaster Training Workshop in Toronto—June 6-7

Notes from a Tool User - Mark Levison - Wed, 04/05/2017 - 22:05
Agile Pain Relief presents a two-day Certified ScrumMaster Workshop in Toronto—June 6-7 taught by certified ScrumMaster Trainer Mark Levison.
Categories: Blogs

What I Learned By Deleting All Of My Docker Images And Containers

Derick Bailey - new ThoughtStream - Wed, 04/05/2017 - 17:49

A few days ago I deleted all of my Docker containers, images and data volumes on my development laptop… wiped clean off my hard drive.

By accident.

And yes, I panicked!

[Image: "Do not erase"]

But after a moment, the panic stopped; gone instantly after I realized that when it comes to Docker and containers, I’ve been doing it wrong.

Wait, You Deleted Them … Accidentally?!

If you build a lot of images and containers, like I do, you’re likely going to end up with a very large list of them on your machine.

Go ahead and open a terminal / console window and run these two commands:
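
$ docker ps -a
$ docker images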

Chances are, you have at least half a dozen containers with random names and more than a few dozen images, many of them with no tag info to tell you what they are. It's a side effect of using Docker for development: rebuilding images and re-running new container instances on a regular basis.

No, it’s not a bug. It’s by design, and I understand the intent (another discussion for another time).

But the average Docker developer knows that most of these old containers and images can be deleted safely. A good Docker developer will clean them out on a regular basis. And great Docker developers... well, they're the ones who automate cleaning out all the old cruft to keep their machines running nice and smooth, without Docker-related artifacts taking up the entire hard drive.
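
For reference, a minimal version of that cleanup might look something like this (a sketch, not the script from the post; "docker ps -aq" lists the IDs of all containers, and the "dangling=true" filter selects untagged images):

$ docker rm $(docker ps -aq)
$ docker rmi $(docker images -f "dangling=true" -q)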

Then, there’s me.

DANGER, WILL ROBINSON

For whatever reason, I realized it had been a while since I had cleaned out my Docker artifacts. So I did what I always do: hit Google and the magic answers of the internet for all my shell-scripting needs.

My first priority was to remove all untagged images. A quick search and click later, I had a script that looked familiar pasted into my terminal window and I was hitting the enter button gleefully.

It wasn’t until a moment later – when I ran “docker images” again, and saw that I still had a dozen untagged images – that I figured out something was wrong.

Looking back at the page from which I copied the script, I saw the commands sitting under a heading that I had previously ignored. It read,

“Remove all stopped containers.”

Well, good news! All of my containers were already stopped, so guess what happened?

The panic hit hard as I quickly re-ran “docker ps -a” to find an empty list.


The Epiphany And The Evanescent Panic

As fast as my panic had set in, it left. Only a mild annoyance with myself for making such a simple mistake remained. And the only reason for even that was knowing I would have to recreate the container instances I needed.

That only takes a moment, though, so it’s not a big deal.

In the end, the panic was gone due to my realization of something that I’ve read and said dozens of times.

From the documentation on Dockerfile best practices:

Containers should be ephemeral

The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.

I’ve used the word ephemeral, when talking about Docker containers, at least a dozen times in the last month.

But it wasn’t until this accidental moment of panic that I realized just how true it should be, and how wrong I was in my use of containers.

The Not-So-Nuclear Option

The problem I had was the way in which I was using and thinking about containers, and this stemmed from how I viewed the data and configuration stored in them.

Basically, I was using my containers as if they were full-fledged installations on my machine or in a virtual machine. I was stopping and starting the same container over and over to ensure I never lost my data or configuration.

Sure, some of these containers used host-mounted volumes to read and write data to specific folders on my machine. For the most part, however, I assumed I would never lose the data in my containers because I would never delete them.

Well, that clearly wasn’t the case anymore…

I see now that what I once told a friend was "the nuclear option" – deleting all stopped containers – is really more like a dry-erase marker.

I’m just cleaning the board so I can use it again.

A Defining Moment

My experience, moment of panic, and realization generated this post on Twitter:

idea: if deleting all of your #Docker containers would cause you serious headache and hours of work to rebuild, you’re doing Docker wrong

— Derick Bailey (@derickbailey), March 24, 2017


And honestly, in reflection, this was a very defining experience.

Reading and talking about how a Docker container is something that I can tear down, stand up again and continue from where I left off is one thing.

But having gone through this, I can now see how it applies directly to my own efforts.

Now the only minor annoyance that I have is rebuilding the container instances when I need them. The data and configuration are all easily re-created with scripts that I already have for my applications. At this point, I’m not even worried anymore.

That’s how Docker should be done.

The post What I Learned By Deleting All Of My Docker Images And Containers appeared first on DerickBailey.com.

Categories: Blogs

Thinking About Cadence vs. Iterations

Johanna Rothman - Wed, 04/05/2017 - 17:42


Many people use an iteration approach to agile. They decide on an iteration duration, commit to work for that iteration, and, by definition, they are done at the end of the timebox.

I like timeboxing many things. I like timeboxing work I don’t know how to start. I find short timeboxes help me focus on the first thing of value. Back when I used staged-delivery as a way to organize projects, we had a monthly milestone (timebox) to show progress and finish features. The teams and I found that a cadence of one month was good for us. The timebox focused us and allowed us to say no to other work.

A cadence is a pulse, a rhythm for a project. In my example above, you can see I used a timebox as a cadence and as a way to focus on work. You don’t have to use timeboxes to provide a cadence.

A new reader of the Pragmatic Manager asked me about scaling their agile transformation. They are just starting out, and a number of people are impatient to be agile already. I suggested that instead of scaling agile, they think about what each team needs to create its own successful agile approach.

One thing many teams (but not all) need is a cadence for delivery, retrospectives, and planning. Not every team needs the focus of a timebox to do that. One team I know delivers several times during the week. They plan weekly, but not on the same day each week. When they've finished three features, they plan the next three. It takes them about 20-30 minutes to plan. It's not a big deal. This team retrospects every Friday morning. (I would select a different day, but they didn't ask me.)

Notice that they have two separate cadences for planning: once a week, but not the same day; and once a week for retrospectives on the same day each week.

Contrast that with another team new to agile. They have a backlog refinement session that often takes two hours (don’t get me started) and a two-hour pre-iteration planning session. Yes, they have trouble finishing the work they commit to. (I recommended they timebox their planning to one hour each and stop planning so much. Timeboxing that work to a shorter time would force them to plan less work. They might deliver more.)

A timebox can help a team create a project cadence, a rhythm. And, the timebox can help the team see their data, as long as they measure it.

A project cadence provides a team a rhythm. Depending on what the team needs, the team might decide to use timeboxes or not.

For me, one of the big problems in scaling is that each team often needs their own unique approach. Sometimes, that doesn’t fit with what managers new to agile think. I find that when I discuss cadence and iterations and explain the (subtle) difference to people, that can help.

Categories: Blogs

Agility, Scalability & Autonomy

TV Agile - Wed, 04/05/2017 - 16:54
HMRC, the tax and revenue authority in the UK, has a stated goal of becoming one of the most digital tax administrations in the world by 2020. The Department is in the midst of a digitally-enabled transformation, and having a flexible infrastructure in place to underpin this is crucial – one that can support its […]
Categories: Blogs

A Nifty Workshop Technique

James Shore - Wed, 04/05/2017 - 10:00

It's hard to be completely original. But I have a little trick for workshops that I've never seen others do, and participants love it.

It's pretty simple: instead of passing out slide booklets, provide nice notebooks and pass out stickers. Specifically, something like Moleskine Cahiers and 3-1/3" x 4" printable labels.

[Photo: closeup of a workshop participant writing on a notebook page, with a sticker on the facing page]

I love passing out notebooks because they give participants the opportunity to actively listen by taking notes. (And, in my experience, most do.) Providing notebooks at the start of a workshop reinforces the message that participants need to take responsibility for their own learning. And notebooks are just physically nicer and cozier than slide packets... even the good ones.

The main problem with notebooks is that they force participants to copy down material. By printing important concepts on stickers, participants can literally cut and paste a reference directly into their notes. It's the best of both worlds.

There is a downside to this technique: rather than just printing out your slides, your stickers have to be custom-designed references. It's more work, but I find that it also results in better materials. Worth it.

People who've been to my workshops keep asking me if they can steal the technique. I asked them to wait until I documented my one original workshop idea. Now I have. If you use this idea, I'd appreciate credit. Other than that, share and enjoy. :-)

[Photo: a table at the Agile Fluency Game workshop, with participants writing in their notebooks]

Categories: Blogs

Swagger, the REST Kryptonite

Jimmy Bogard - Tue, 04/04/2017 - 23:08

Swagger, a tool to help design, build, document, and consume RESTful APIs, is ironically kryptonite for building actual RESTful APIs. The battle over the term "REST" is lost: "RESTful" now simply means "an API over HTTP," which 99% of the time means "RPC over HTTP."

In a post covering the problems with Swagger, the author outlines some familiar issues I've seen with it (and its progenitors such as apiary.io):

  • Using YAML as the new XSD
  • Does not support Hypermedia (!!!!)
  • URI-centric
  • YAML-generation from code

Some of these are well-known issues, but the biggest one for me is the lack of hypermedia support. Those who know REST understand that REST includes a hypertext constraint. No hypermedia - you're not REST.

And that's OK for plenty of situations. I've blogged and given talks in the past about when REST is appropriate. I've shipped actual REST APIs as well as plenty of plain Web APIs. Each has its place, and I still stick to each name simply because it's valuable to distinguish between APIs with hypermedia and APIs without.

When not to use REST

In my client applications, I rarely actually need REST. If my server has only one client, and that client is developed and deployed in lockstep with the server, there's no value in the decoupling that REST brings. Instead, I embrace the client/server coupling and use HTTP as merely the transport for client/server RPC. And that's perfectly fine for a wide variety of scenarios:

  • Single Page Applications (SPAs)
  • JS-heavy applications (but not full-blown SPAs)
  • Hybrid mobile applications
  • Native mobile applications where you force updates based on server

When you have a client and server that you're able to upgrade at the same time, hypermedia can hold you back. When I build clients alongside the server - and with ASP.NET Core, these both live in the exact same project - you can take advantage of this coupling to embrace this knowledge of the server. I even go so far as compiling my templates/views for Angular/Ember on the server side through Razor to get super-intelligent components that know exactly the shape of my DTOs.

In those cases, you're perfectly fine using RPC-over-HTTP, and Swagger.

When to use REST

When you have a client and server that deploy independently of each other, the coupling risk of RPC greatly increases. And in those cases, I start to look at REST as a means of decoupling my client and my server. The hypermedia constraint of REST goes a long way of helping to decouple, to the point where my clients can react to the existence of links, new form elements, labels, translations and more.
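
To make the contrast concrete, here's a sketch (mine, not the post's; the URLs and the HAL-style "_links" field are illustrative assumptions): an RPC-style client hard-codes every endpoint, while a hypermedia client discovers available actions from the response itself.

import requests

# RPC-over-HTTP style: the client knows every URL up front, so any
# server-side change to routes or workflow breaks it.
requests.post("https://api.example.com/orders/42/cancel")

# Hypermedia style: the client knows only the entry point and link
# names; the server decides whether and where a "cancel" action exists.
order = requests.get("https://api.example.com/orders/42").json()
cancel = order.get("_links", {}).get("cancel")
if cancel:
    requests.post(cancel["href"])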

REST clients are more difficult to build, but it's a coupling tradeoff. If I have a server and client deployed independently, perhaps in situations like these:

  • I don't control server API deployment
  • I don't control client consumer deployment
  • Mobile applications where I can't control upgrades
  • Microservice communication

Since Swagger doesn't support REST, and in fact encourages RPC-over-HTTP APIs, I wouldn't touch it for cases where my client's and server's deployments aren't lockstep.

REST and microservices

This decoupling is especially important for (micro)services, where often you'll see HTTP APIs exposed as a means of exposing service capabilities. Whether or not it's a good idea to expose temporal coupling this way is another question altogether.

If you expose RPC HTTP APIs, you're encouraging a new level of coupling with your microservice, leading down the same monolith path as before but now with 100-10K times more latency.

So if you decide to expose an HTTP API from your microservice for other services to consume, strongly consider REST: then at least you'll only have temporal coupling to worry about, and not the other forms of coupling that come along with RPC.

Documenting REST APIs

One of the big issues I have with Swagger documentation is that it's essentially no different from API documentation for libraries: Java/Ruby/.NET-style documentation with a list of classes, a list of methods, and a list of parameters. When I've had to consume an API that only had Swagger documentation, I was lost. Where do I start? How do I achieve a workflow of activities when I'm only given API endpoints?

My only savior was that I knew the web app also consumed the API, so I could reverse-engineer the correct sequence of API calls by following the workflow in the app.

The ironic part was that the web application included links and forms - providing me a guided user experience and workflow for accomplishing a task. I looked at an item, saw links to related actions, followed them, clicked buttons, submitted forms and so on. The Swagger-based "REST" API was missing all of that, and the docs didn't help.

Instead, I would have preferred a markdown document describing the overall workflows, with responses that include links and forms I could follow myself. I didn't need a list of API calls; I needed a user experience applied to the API.

Swagger, the tool for building RPC-over-HTTP APIs

Swagger has a rich ecosystem and support for a variety of platforms. If I were building a new SPA, I'd take a look at Swagger, especially for its ability to spit out TypeScript models, clients and the like.

However, if I'm building a protocol that demands decoupling with REST, Swagger would lock me in to a highly coupled RPC-over-HTTP API that would cripple my ability to deliver down the road.

Categories: Blogs

New Case Study: Northwestern Mutual Delivers 18 Months Ahead of Schedule with SAFe

Agile Product Owner - Mon, 04/03/2017 - 20:26

“We had been challenged a number of times in changing our underlying CRM platform. After implementing SAFe, our overall effort actually came in $12M less than originally estimated and 18 months sooner than predicted.”

Bryan Kadlec, Director, Client Digital Experience

How do you change a deeply ingrained Waterfall culture? For a 160-year-old life insurance company, it wasn't easy, but it was ultimately worth it.

Our latest case study from Northwestern Mutual (NWM) tells the story. In 2012, a company-wide push for continuous learning and improvement led the organization to consider Agile in earnest. At the time, it took more than 300 days and many iterations to deliver value to customers. Efforts to improve had been stymied by an entrenched waterfall culture.

Ironically, for a company that helps clients manage risk, the business realized that it had to take some risk to move forward. Business unit leaders found the platform they needed in SAFe, and became the first large company in Wisconsin to deploy the Framework.

Prior to the first Program Increment (PI) planning event, transformation leaders trained as SAFe Program Consultants (SPCs) and additionally tapped SAFe Fellow Jennifer Fawcett to facilitate. At that first event, they launched four Agile Release Trains (ARTs).

At NWM, training was key, and in fact served as the first Sprint for some. By the second PI event, again with Jennifer facilitating, Release Train Engineers had a sense of ownership. As for changing the longtime waterfall culture, coaching proved essential, especially at the beginning.

Since deploying SAFe, Northwestern Mutual has seen a number of benefits that contribute toward the bottom line:

  • Collection Feature Cycle Time improved 30-50%
  • IT delivers requested capabilities 80-90% of the time
  • The overall effort on a project came in $12 million less than originally estimated and 18 months sooner than predicted

Now in year three of their implementation, and with 12 PIs behind them, the company has 14 ARTs in progress across a wide range of product areas. Northwestern Mutual also provides leadership for SAFe in Wisconsin, starting a Scaling Agile Meetup group that has drawn as many as 300 attendees.

Check out the full case study here.

Many thanks to Jill Schindler, IT Manager, Client Digital Experience, SPC; Bryan Kadlec, IT Director, Client Digital Experience, SPC; and Sarah Scott, Agile Lean Organization Coach, SPC4, for sharing their SAFe story.

Stay SAFe,
—Dean

Categories: Blogs

Learning About Kanban


From Essential Kanban Condensed by David J Anderson & Andy Carmichael

Kanban is a method for defining, managing, and improving services that deliver knowledge work, such as professional services, creative endeavors, and the design of both physical and software products. It may be characterized as a “start from what you do now” method—a catalyst for rapid and focused change within organizations—that reduces resistance to beneficial change in line with the organization’s goals.

The Kanban Method is based on making visible what is otherwise intangible knowledge work, to ensure that the service works on the right amount of work—work that is requested and needed by the customer and that the service has the capability to deliver. To do this, we use a kanban system—a delivery flow system that limits the amount of work in progress (WiP) by using visual signals.

http://leankanban.com/wpcontent/uploads/2016/06/Essential-Kanban-Condensed.pdf

I’ve been reading the above book on Kanban (the alternative path to agility) to familiarize myself with the method before taking the Kanban course by Accredited Kanban Trainer Travis Birch.

Two points from my learning are the principles of “Change Management” and “Service Delivery.”

Kanban regards "Change Management" as an incremental, evolutionary process. Kanban starts "with what you do now": a business agrees to pursue improvement through evolutionary change, which happens over a period of time, based on experience and understanding. If one is using Kanban for the first time, there may be some awkwardness at the beginning, with a number of people trying to understand the principles and how the visual board works. As the work goes on, understanding increases, and with that new learning, change occurs in a very organic way. Acts of leadership are encouraged at every level. Changes can occur in all sectors: within individuals, within the environment, and in the cumulative outcomes of the work.

"Service Delivery" in Kanban requires an understanding of, and focus on, the customer's needs and expectations. The work is managed by people self-organizing around it, and by limiting work-in-progress (WIP). This can help people feel that they have the right amount of work to accomplish in the right amount of time. WIP limits are policies that need to be made explicit in order to establish flow. Work on the board is "pulled" into the in-progress section only as people become available to do it. An employee can focus on bringing higher quality to the work, without feeling threatened by a crushing backlog. Policies are evolved to improve outcomes for the customers.

Of the nine values outlined in Kanban, three are directly related to change management and service delivery. The first is "respect": by limiting the work-in-progress, respect is shown for the employee's time and efforts, along with respect for the customer's expectations. "Flow" refers to an ordered and timely movement of the work being done, one that is not overwhelming. "Transparency" occurs because everything is visible on the Kanban board, and it becomes clear what is being done, when, and by whom.

It's been proposed that Scrum is for teams and Kanban is for services. In that way, they are both essential to the improvement of many organizations, especially those in which pure Scrum is not enough. They are complementary from the perspective of improving business.

If you're interested in the training with Travis Birch, AKT, go to http://www.worldmindware.com/TeamKanbanPractitioner.

"Kanban has principles and general practices, but these must be applied in context, where different details will emerge as we pursue the common agendas of sustainability, service-orientation, and survivability. As a result, the journey is an adventure into unknown territory rather than a march over familiar ground" (from Essential Kanban Condensed).

Learn more about our Scrum and Agile training sessions on WorldMindware.com. Please share!

The post Learning About Kanban appeared first on Agile Advice.

Categories: Blogs

Regulatory and Industry Standards Compliance with SAFe

Agile Product Owner - Mon, 04/03/2017 - 17:03

Many systems in aerospace, defense, automotive, medical, banking, and other industries have an unacceptable social or economic cost of failure. In order to protect the public, these systems are also subject to extensive regulatory oversight and rigorous compliance standards. Historically, organizations building these systems have relied on comprehensive quality management systems and stage-gate based waterfall life-cycle models to reduce risk and ensure compliance. These same organizations are now adopting Lean-Agile methods, and are struggling to understand how their existing stage-gate compliance activities participate in a Lean-Agile flow of value.

Recently I've had the opportunity to collaborate with Harry Koehnemann of 321Gang, one of our SPCT-Gold Partners, on an update to our guidance on how to use SAFe for implementing Lean-Agile practices at scale in these high-assurance contexts. Harry has helped guide organizations in the aerospace, automotive, medical device, and electronics industries through Agile adoptions. Prior to coming to Scaled Agile, I (Steve) worked for many years with a variety of Federal agencies, including a program in the Department of Homeland Security that was the first full implementation of SAFe. Each of us has seen the difficulties leaders of these organizations have faced trying to "go SAFe" while still meeting the rigors of their regulatory and compliance processes, which frequently assume a waterfall product development model.

As more and more organizations in these high-assurance industries pursue a SAFe implementation, we felt it was time to revisit our recommendations and provide some practical suggestions for how to overcome this challenge by using Lean-Agile practices to actually produce BETTER compliance and safety outcomes. Our recommendations focus on four key approaches:

  • Taking an incremental approach to creating and assessing compliance information
  • Including compliance teams and their concerns in the product development ecosystem to collaborate on planning, executing, assessing, and adapting
  • Incorporating compliance in agile quality practices – automating, adapting, continuously improving, etc.
  • Integrating V&V and compliance activities into iterative development flow

We recently had the opportunity to share this information in an hour-long webinar. That recording, along with a PDF of our slides, can be found in our updated Guidance Article on this topic. We will also soon be coming out with a white paper and other enablement materials as part of a toolkit that SPCs can use to help organizations in these industries adopt the patterns that we have seen work successfully in similar SAFe implementations.

If you are planning to attend any of the following conferences, you can also stop by our sessions on this topic at these events:

Stay SAFe!

Steve Mayner and Harry Koehnemann

Categories: Blogs

How does backlog refinement work?

Scrum Breakfast - Mon, 04/03/2017 - 09:43
Last month, at the Scrum Breakfast Club, we looked at backlog refinement, so I had an opportunity to explain the product backlog iceberg, a popular metaphor for the process. All about stories and features, TFB and NFC, and everything else you need to know on a story's voyage from epic to grain of sand.
Categories: Blogs

AWS Lambda: Encrypted environment variables

Mark Needham - Mon, 04/03/2017 - 07:49

Continuing on from my post showing how to create a 'Hello World' AWS Lambda function, I wanted to pass encrypted environment variables to my function.

The following function takes in both an encrypted and unencrypted variable and prints them out.

Don't print out decrypted values in a real function – this is just so we can see the example working!

import boto3
import os

from base64 import b64decode

def lambda_handler(event, context):
    encrypted = os.environ['ENCRYPTED_VALUE']
    decrypted = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted))['Plaintext']

    # Don't print out your decrypted value in a real function! This is just to show how it works.
    print("Decrypted value:", decrypted)

    plain_text = os.environ["PLAIN_TEXT_VALUE"]
    print("Plain text:", plain_text)

Now we’ll zip up our function into HelloWorldEncrypted.zip, ready to send to AWS.

zip HelloWorldEncrypted.zip HelloWorldEncrypted.py

Now it’s time to upload our function to AWS and create the associated environment variables.

If you're using a Python editor, you'll need to install boto3 locally to keep the editor happy, but you don't need to include boto3 in the code you send to AWS Lambda – it comes pre-installed.
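
If boto3 isn't already installed locally, that's one command away:

$ pip install boto3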

Now we write the following code to automate the creation of our Lambda function:

import boto3
from base64 import b64encode

fn_name = "HelloWorldEncrypted"
kms_key = "arn:aws:kms:[aws-zone]:[your-aws-id]:key/[your-kms-key-id]"
fn_role = 'arn:aws:iam::[your-aws-id]:role/lambda_basic_execution'

lambda_client = boto3.client('lambda')
kms_client = boto3.client('kms')

encrypt_me = "abcdefg"
encrypted = b64encode(kms_client.encrypt(Plaintext=encrypt_me, KeyId=kms_key)["CiphertextBlob"])

plain_text = 'hijklmno'

lambda_client.create_function(
        FunctionName=fn_name,
        Runtime='python2.7',
        Role=fn_role,
        Handler="{0}.lambda_handler".format(fn_name),
        Code={ 'ZipFile': open("{0}.zip".format(fn_name), 'rb').read(),},
        Environment={
            'Variables': {
                'ENCRYPTED_VALUE': encrypted,
                'PLAIN_TEXT_VALUE': plain_text,
            }
        },
        KMSKeyArn=kms_key
)

The tricky bit for me here was figuring out that the value I needed to pass was the base64-encoded output of the KMS client's encrypt call. The KMS client relies on a KMS key that we need to set up. We can see a list of all our KMS keys by running the following command:

$ aws kms list-keys

The format of these keys is arn:aws:kms:[zone]:[account-id]:key/[key-id].
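
If you don't have a KMS key yet, one can be created from the CLI as well (a sketch; the description and alias name are just examples):

$ aws kms create-key --description "lambda-env-vars"
$ aws kms create-alias --alias-name alias/lambda-env-vars --target-key-id [your-kms-key-id]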

Now let's run that script to create our Lambda function:

$ python CreateHelloWorldEncrypted.py

Let’s check it got created:

$ aws lambda list-functions --query "Functions[*].FunctionName"
[
    "HelloWorldEncrypted", 
]

And now let’s execute the function:

$ aws lambda invoke --function-name HelloWorldEncrypted --invocation-type RequestResponse --log-type Tail /tmp/out | jq ".LogResult"
"U1RBUlQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkgVmVyc2lvbjogJExBVEVTVAooJ0RlY3J5cHRlZCB2YWx1ZTonLCAnYWJjZGVmZycpCignUGxhaW4gdGV4dDonLCAnaGlqa2xtbm8nKQpFTkQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkKUkVQT1JUIFJlcXVlc3RJZDogOWJjZTNhNTAtMTgzMC0xMWU3LWIxZTYtYWY0MWQ2MzM2MWQ5CUR1cmF0aW9uOiAzNjAuMDQgbXMJQmlsbGVkIER1cmF0aW9uOiA0MDAgbXMgCU1lbW9yeSBTaXplOiAxMjggTUIJTWF4IE1lbW9yeSBVc2VkOiAyNCBNQgkK"

That's a bit hard to read; some decoding is needed:

$ echo "U1RBUlQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkgVmVyc2lvbjogJExBVEVTVAooJ0RlY3J5cHRlZCB2YWx1ZTonLCAnYWJjZGVmZycpCignUGxhaW4gdGV4dDonLCAnaGlqa2xtbm8nKQpFTkQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkKUkVQT1JUIFJlcXVlc3RJZDogOWJjZTNhNTAtMTgzMC0xMWU3LWIxZTYtYWY0MWQ2MzM2MWQ5CUR1cmF0aW9uOiAzNjAuMDQgbXMJQmlsbGVkIER1cmF0aW9uOiA0MDAgbXMgCU1lbW9yeSBTaXplOiAxMjggTUIJTWF4IE1lbW9yeSBVc2VkOiAyNCBNQgkK" | base64 --decode
START RequestId: 9bce3a50-1830-11e7-b1e6-af41d63361d9 Version: $LATEST
('Decrypted value:', 'abcdefg')
('Plain text:', 'hijklmno')
END RequestId: 9bce3a50-1830-11e7-b1e6-af41d63361d9
REPORT RequestId: 9bce3a50-1830-11e7-b1e6-af41d63361d9	Duration: 360.04 ms	Billed Duration: 400 ms 	Memory Size: 128 MB	Max Memory Used: 24 MB	

And it worked, hoorah!

The post AWS Lambda: Encrypted environment variables appeared first on Mark Needham.

Categories: Blogs

AWS Lambda: Programmatically create a Python 'Hello World' function

Mark Needham - Mon, 04/03/2017 - 00:11

I’ve been playing around with AWS Lambda over the last couple of weeks and I wanted to automate the creation of these functions and all their surrounding config.

Let’s say we have the following Hello World function:

def lambda_handler(event, context):
    print("Hello world")

To upload it to AWS we need to put it inside a zip file so let’s do that:

$ zip HelloWorld.zip HelloWorld.py
$ unzip -l HelloWorld.zip 
Archive:  HelloWorld.zip
  Length     Date   Time    Name
 --------    ----   ----    ----
       61  04-02-17 22:04   HelloWorld.py
 --------                   -------
       61                   1 file

Now we’re ready to write a script to create our AWS lambda function.

import boto3

lambda_client = boto3.client('lambda')

fn_name = "HelloWorld"
fn_role = 'arn:aws:iam::[your-aws-id]:role/lambda_basic_execution'

lambda_client.create_function(
    FunctionName=fn_name,
    Runtime='python2.7',
    Role=fn_role,
    Handler="{0}.lambda_handler".format(fn_name),
    Code={'ZipFile': open("{0}.zip".format(fn_name), 'rb').read(), },
)

[your-aws-id] needs to be replaced with the identifier of our AWS account. We can find that out by running the following command against the AWS CLI:

$ aws ec2 describe-security-groups --query 'SecurityGroups[0].OwnerId' --output text
123456789012

Now we can create our function:

$ python CreateHelloWorld.py

[Screenshot: the HelloWorld function in the AWS Lambda console]

And if we test the function we’ll get the expected output:

[Screenshot: test output in the AWS Lambda console showing "Hello world"]
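
Alternatively, the function can be invoked from the CLI and its log output decoded in one go (a sketch; assumes the AWS CLI and jq are installed):

$ aws lambda invoke --function-name HelloWorld --invocation-type RequestResponse --log-type Tail /tmp/out | jq -r ".LogResult" | base64 --decode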

The post AWS Lambda: Programmatically create a Python 'Hello World' function appeared first on Mark Needham.

Categories: Blogs

Article 11 in SAFe Implementation Roadmap series: Extend to the Portfolio

Agile Product Owner - Sun, 04/02/2017 - 22:58

If you’ve been following the SAFe Implementation Roadmap series—or you’re engaged in a real world transformation—you’ll appreciate the effort and commitment it takes to reach the 11th ‘critical move,’ Extend to the Portfolio.

At this stage in the rollout, the new behaviors are becoming second nature to all the players, and the measurable benefits of time to market, quality, productivity, and employee engagement have become tangible and are demonstrating real progress.

The door is now open to expand across the entire Portfolio. This is a telling phase in the rollout, as it tests the authenticity of the organization’s commitment to transforming the business at all levels. As scrutiny is turned to the higher-level practices in the business, and the Portfolio feels pressure to address the remaining legacy challenges, there is a fork in the road. One road leads to business as usual, approaches are not modernized, and the enterprise is unable to escape the inertia of tradition. This leads to “Agile in name only,” and as you can imagine, the results are seriously compromised.

The other road follows the original intent behind adopting SAFe, which is to dig in and do the work necessary to complete the move from traditional approaches to the Lean-Agile way of working and thinking. That requires leadership. While much of the heavy lifting is handled by SAFe Program Consultants (SPCs) and Lean-Agile Leaders, we increasingly see an emerging Lean-Agile Program Management Office (PMO) leading the charge. In doing so, they establish exemplary Lean-Agile principles, behaviors, and practices, which are covered in the latest article in the Roadmap series, Extend to the Portfolio. They include:

  • Lead the change and foster relentless improvement
  • Align value streams to enterprise strategy
  • Establish enterprise value flow
  • Implement Lean financial management and budgeting
  • Align portfolio demand to implementation capacity and Agile forecasting
  • Evolve leaner and more objective governance practices
  • Foster a leaner approach to contracts and supplier relationships

Read the full article here. As always, we welcome your thoughts, so if you'd like to provide feedback on this new series of articles, you're invited to leave your comments here.

Stay SAFe!
—Dean and the Framework team

Categories: Blogs

Invert Time Management; Schedule Energy

Agile Complexification Inverter - Sun, 04/02/2017 - 19:44
One cannot manage time. Talking as though we could may be exactly what feeds a billion-dollar self-help industry. Or we could invert the way we talk and think…

Scheduling Your Energy, Not Your Time by Scott Adams. Yes, that Scott Adams!
In that short article Scott gives you his secret to success – it's basically free. Now you could go out and buy a book like one of these to get other advice about your time usage. Or you could start by taking his (free) advice… the decision is yours, but it's past time to make it.


  • The Time Of Your Life | RPM Life Management System ($395) by Tony Robbins
  • 100 Time Savers (2016 Edition) [obviously time-sensitive information]
  • Tell Your Time: How to Manage Your Schedule So You Can Live Free by Amy Lynn Andrews


See Also:
I'm Dysfunctional, You're Dysfunctional by Wendy Kaminer. "The book is a strong critique of the self-help movement, and focuses criticism on other books on the subject matter, including topics of codependency and twelve-step programs. The author addresses the social implications of a society engaged in these types of solutions to their problems, and argues that they foster passivity, social isolation, and attitudes contrary to democracy."



Categories: Blogs

Effective carbon offsetting – what we’ve learned and what we’re doing

Henrik Kniberg's blog - Fri, 03/31/2017 - 12:29

Flying causes global warming. That sucks. But nevertheless, we fly sometimes. Conferences, vacations, business trips. So what can we do? Well, here's a simple rule of thumb:

  1. Fly as little as possible. Reduce the frequency & distance. Consider train for shorter trips.
  2. When you do fly, make sure you carbon offset. From Wikipedia: "A carbon offset is a reduction in emissions of carbon dioxide or greenhouse gases made in order to compensate for or to offset an emission made elsewhere."

The obvious question then is – HOW do you carbon offset? I was surprised when I dug into it.  “Traditional” carbon offsetting (buying emission credits and things like that) seems pretty useless! I couldn’t find any credible evidence that it makes a real difference! Almost like a scam.

So is there another way to carbon offset? Yes! This chart summarizes some of what I’ve learned so far. Read on for details. Got any more suggestions? Add comments. But please quantify.

(see this spreadsheet for the underlying numbers)

At Crisp we recently made a policy decision, unanimously:

  • For every flight organized by Crisp, we set aside SEK 100 per passenger-flight-hour to a carbon offset account.
  • That carbon offset account is managed by the Climate Crisplet – a subset of people in Crisp who are interested in this kind of stuff and make sure the money is spent wisely. We try to maximize ROI in terms of CO2 reduction per krona invested.

Why 100kr? Because flying emits about 0.23 tons of CO2eq per passenger hour <ref1, ref2, ref3>. It varies a bit depending on length of flight, speed, height, aircraft model, etc. But 0.23 tons per passenger-hour is a pretty reliable average (including radiative forcing).

CO2eq (Carbon dioxide equivalent) is the official unit of measurement for greenhouse gases. It is a way of aggregating different types of gases (such as CO2, methane, and others) into a single unit. We emit about 50 billion tons of CO2eq per year worldwide, and that’s the key driver of climate change.

So a 4-hour flight results in about 1 ton of greenhouse gas per passenger. We can't take that specific ton back; it will stay in the atmosphere for many decades. But can we somehow pay to stop a DIFFERENT ton of greenhouse gases from being emitted somewhere else? If so, we're fine, right?

I've dug deep into this and concluded that yes, we can! And 400kr per ton, if wisely spent, should be more than enough. Since an hour of flying emits roughly a quarter ton per passenger, that works out to 100kr per passenger-hour. We just need to be picky about HOW we spend it.

Our last conference trip involved flying 35 people to Marbella and back (9 hours of flying, there and back). So 315 passenger-hours, or 71 tons of greenhouse gases. Thanks to our carbon offsetting policy, the Climate Crisplet got 31,500kr to do something wise with. After some research we ended up doing this:

  • 3150kr (10% of the total) to Flygreenfund. They invest in aviation biofuel. This is jet fuel made from things like recycled frityrolja (how the heck do you translate that… it's recycled oil from deep-frying). Their estimate is that about 400kr compensates for 1 hour of flying.
    • Impact: about 8 tons (very loose calculation)
  • 28350kr (90% of the total) to Trine. They run a crowdfunding platform for rental solar installations in sub-Saharan Africa. The climate impact is measurable because they can calculate how much less kerosene and diesel needs to be burned when they install solar panels in a village. In this case we invested in Gamba, Zanzibar. Our investment is estimated to give 700 people clean electricity, reduce CO2 emissions by 260 tons, and give Crisp a 5.4% annual rate of return. Triple win! We'll then reinvest that money to further reduce greenhouse gases.
    • Impact: About 260 tons (pretty specific calculation).

So our flight caused an increase of about 70 tons, and our investment will cause a decrease of about 270 tons. That’s a net win of almost 200 tons!
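
For the curious, here's the arithmetic behind those figures as a small sketch (using the post's own numbers; 0.225 tons per passenger-hour is just the ~0.23 average from above, rounded down a touch):

passengers = 35
hours = 9                      # there and back
tons_per_hour = 0.225          # tons CO2eq per passenger-hour (rounded average)
rate_sek = 100                 # offset budget, SEK per passenger-hour

passenger_hours = passengers * hours        # 315
emitted = passenger_hours * tons_per_hour   # ~71 tons
budget = passenger_hours * rate_sek         # 31,500 SEK
reduced = 8 + 260                           # Flygreenfund + Trine estimates
print(round(emitted), budget, reduced - round(emitted))  # 71 tons, 31500 SEK, net ~197 tons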

 


But wait, doesn’t that mean our price tag of 100kr per passenger-hour is too high?

Not really, because the numbers are approximate. There are different ways of calculating this stuff, and each number comes with an uncertainty. But a 70-ton increase vs. a 270-ton reduction means we have a lot of margin for error! Even if the reductions were optimistic by a factor of 3, we still win!

There's also another very important factor to keep in mind: investment vs. cost. Buying carbon offsets is money gone; you don't get it back. Same with things like Flygreenfund. Trine, however, is an investment with an expected return (but with some risk, of course). That means we are likely to get the money back, so we can invest it again and again! There are other companies offering similar types of services, basically handling the "how can I invest money and help the climate while getting a return on investment?" question. For example, Bright Sunday.

So why did we decide to spend 10% of our carbon offset money on Flygreenfund? Their impact is not as easily quantified as Trine's, and there is no return on investment. But they are addressing the root cause – fossil fuel emissions from flying! And we want to support that.

Our carbon offsetting recommendation

So what’s the moral of this story? Don’t buy traditional carbon offsets? Invest in Trine? No, the learning goes deeper.

  • Have a clear and simple policy. In our case: 100kr per passenger-hour, and a team that is entrusted to manage the money.
  • Do the math. Stay clear of fluffy things like emission rights, unless someone can show you how it (physically!) causes CO2 reduction. Even when you see specific numbers, find out where those numbers come from. For example, Trine publishes their specific CO2 reduction estimates, but I asked them to walk through the underlying calculation with me (which they did willingly). Note that the climate impact varies quite a lot across their different projects.
  • Distinguish between costs and investments. A cost is only justifiable if it has a VERY clear and concrete impact, since that money can’t be reinvested later.
  • Trust is everything. Before investing, find out who is handling the money and what their motives are. Or follow in the tracks of someone you trust who has done the research for you. Feel free to follow us if you like.
Categories: Blogs

PMI EMEA – Rome – PMI’s Agile Future

Leading Answers - Mike Griffiths - Thu, 03/30/2017 - 18:11
I will be presenting at the PMI EMEA Congress May 1-3 in Rome on “PMI’s Agile Future”. 2017 marks an important year for embracing agile approaches by the PMI. The PMBOK® v6 Guide, set to be released in Q3 will... Mike Griffiths
Categories: Blogs

The Gift of Feedback (in a Booklet)

thekua.com@work - Sun, 03/19/2017 - 20:00

Receiving timely, relevant feedback is an important element of how people grow. Sports coaches do not wait until the new year to start giving feedback to sportspeople, so why should people working in organisations wait until their annual review? Leaders are responsible for creating the right atmosphere for feedback, and for ensuring that individuals receive useful feedback that helps them amplify their effectiveness.

I have given many talks and written a number of articles on this topic to help you.

However, today I want to share some brilliant work from colleagues of mine, Karen Willis and Sara Michelazzo (@saramichelazzo), who have put together a printable guide to help people collect feedback and to structure writing effective feedback for others.

[Image: the feedback booklet]

The booklet is intended to be printed in A4 format, and I personally love the hand-drawn style. You can download the current version of the booklet here. Use it to collect effective feedback more often, and share it to help others benefit too.

Categories: Blogs