Long ago, I spun off my focus on Personal Effectiveness into Sources of Insight.
I mentor a lot of teams and leaders for high-performance, and I needed a place to consolidate and share principles, patterns, and practices for personal effectiveness. In fact, the tag-line is:
Proven Practices for Personal Effectiveness
I’ve significantly revamped the user experience, the content, and the collections of resources on Sources of Insight to reflect the latest feedback that users have shared with me (thank you all).
Sources of Insight is for people with a passion for more from work and life. The goal of Sources of Insight is to be your source of insight, inspiration, and impact to help you achieve more in work and life. Whether you are an achiever or a high-performer, or simply want to work on your personal effectiveness, Sources of Insight will help you accelerate your results.
Better Insights, Better Results
One of the slogans I use is “Better Insights, Better Results” because the big idea is to “empower people with skill for work and life.”
Sources of Insight is now a success library of more than 1,300 articles with a focus on helping you “create a smarter, more creative, more capable you.”
You can think of this as applying patterns & practices to work and life, as well as making “Agile for Life” real.
The idea here is to give you the tools and techniques that will help you rise above the noise, get a better vantage point, and make better decisions to change any situation you find yourself in, or to create better situations from the start.
Stand on the Shoulders of Giants
As part of building out Sources of Insight, I draw from great books, great people, and great quotes, to help you “Stand on the shoulders of giants.”
As my friend puts it, what I really do is help you “Run with the titans.”
I’m a big believer in finding the “best of the best” from various disciplines and experts around the world and synthesizing into actionable guidance.
I periodically have featured guests on Sources of Insight. I try to find the people that are the best in the world at what they do, or that have some interesting insight that will help you think differently or make more impact.
Many of my guest posts are by best-selling authors, but I also include comedians, and others.
Some of my guests include Al Ries – the father of brand positioning in the mind, Guy Kawasaki – who is all about empowering people, Gretchen Rubin – author of The Happiness Project, Jairek Robbins – author of Live It (and Tony Robbins’s son), Jim Kouzes – author of The Leadership Challenge, Marie Forleo – some say she is a female Tony Robbins, Michael Michalko – author of Thinkertoys, and a former Disney imagineer, Rick Kirschner – author of Dealing with People You Can’t Stand, Tim Ferriss – author of The 4-Hour Workweek, and many more. (You can see featured guests at a glance on the Featured Guests page on Sources of Insight.)
Realize Your Potential
What makes the articles a bit different on Sources of Insight is they usually reflect problems that I am helping people with, so they are real-world scenarios and solutions.
In this way, Sources of Insight is more than a clearinghouse of the world’s best insight and action for work and life – it’s also a self-paced, virtual mentor where you can learn how to be YOUR best.
Some readers say that the key thing for them is that Sources of Insight helps you “realize your potential.”
As Ralph Waldo Emerson would say, “Make the most of yourself… for that is all there is of you.”
And I would add, nobody else is going to do it for you.
Popular Topics for Personal Effectiveness
On Sources of Insight, I cover key topics for personal effectiveness. The most popular topics are:
I also have collections of Great Books, Great People, Great Quotes:
Great Books – These are hand-crafted indexes of interesting and insightful books that you can use to improve all aspects of your work and life. I spend a lot of money on books every single month and I read a lot of books each and every week. In fact, many of my blog posts are what I call “Book Nuggets” which are like the best needles in the haystack. My Great Books collection reflects a heavy investment in my quest to find the best wisdom of the world that is spread across hundreds, and thousands of books. Some of my more popular collections of books include Business Books, Career Books, Leadership Books, Personal Development Books, and Productivity Books.
Great People – This is where I share and scale my best lessons learned and key insights from all walks of life. What I do is try to compact and distill the best insights from various people into lessons learned. You can think of it as “Greatness Distilled.” Some people are famous and others are unsung heroes. What I focus on is the interesting insights that you can use to get better at work and life. Here are a few of my more popular great people pages: Bruce Lee, Chalene Johnson, Oprah Winfrey, Stephen Covey, Tony Robbins, and Steve Jobs.
Great Quotes – This is my attempt to organize collections of the world’s best wisdom of the ages and modern sages at your fingertips. The right words can spark the right ideas, or the right thinking or the right feeling or taking the right action. The right words help us live better, and they help us do better, and they help us be better. The right words help us build better vocabularies and better mental models and better ways of doing and being, living, and even breathing. Here are some of the more popular quotes collections: Focus Quotes, Happiness Quotes, Inspirational Quotes, Leadership Quotes, Motivation Quotes, Personal Development Quotes, and Productivity Quotes.
As I mentioned earlier, Sources of Insight is a Success Library of more than 1,300 Articles for Personal Greatness. You can start at the Articles page.
Some of the most popular articles include: 7 Habits of Happiness, 25 Inspirational Movies, 50 Life Hacks for Your Future Self, 101 of the Greatest Insights and Actions for Work and Life, 101 Questions that Empower You, How To Get Whatever You Want, How To Think Like Bill Gates, Inspirational Quotes, Lessons Learned from Bruce Lee, The Exponential Results Formula, You 2.0.
Books
On Sources of Insight, the Books page is where I share free eBooks as well as feature the books I author. At this point, my main featured book is Getting Results the Agile Way, which is a personal results system for work and life.
Getting Results the Agile Way is where I introduce my simple productivity system: Agile Results.
Agile Results is really a simple system for meaningful results. It helps you create more moments that matter. It also helps you work on the right things, at the right time, the right way, with the right energy, to amplify your influence and impact.
Most importantly, it helps you spend more time in your strengths, less time in your weaknesses, and give your best where you have your best to give.
On Sources of Insight, the Courses page is where I share training to help you realize your potential and bring out your best.
My favorite way to provide training is through what I call “Monthly Improvement Sprints” or “30 Day Improvement Sprints” or just “30 Day Sprints.”
They are effectively 30 Day Challenges where you practice a little each day, to get better over time.
I find 30 Day Challenges or 30 Day Sprints are a great way to build better habits, learn new things, and improve your skills and abilities at whatever you focus on.
On the Courses page, you’ll find 30 Days of Getting Results, which was my attempt to share the absolute best principles, patterns, and practices for personal productivity.
Best of all, it’s free. It could well be the best self-paced training you ever take for high-performance and for mastering productivity, time management, and work-life balance.
Plus, it’s a powerful way to learn Agile Results in a simple way, with one mini-lesson each day that includes an exercise to put it into practice.
Resources
On Sources of Insight, the Resources page is effectively a library of helpful resources at your fingertips. Here are a few of the key resources:
Book Reviews – My Book Reviews are like mini movie trailers of books, where I include key highlights from the book as well as my key takeaways. I don’t do traditional book reviews that weigh pros and cons. Instead, I look for the most interesting or the most insightful parts of the book and focus on those. I always ask the question, “How can I use this?” and I apply those “Book Nuggets” and those key takeaways to real-world scenarios.
Cheat Sheets – Cheat Sheets put key information at your fingertips. The only Cheat Sheet I have so far is a Blogging Resources Cheat Sheet. It’s actually a very powerful Cheat Sheet, though, if you happen to be a blogger. I get asked a lot about blogging, everything from how to get started to how to create a successful blog. People ask me what the connection of blogging is to Personal Effectiveness, and to me it’s simple: Working on your blog is working on your life. By building a blog, you build a personal platform for learning and growth. Blogging is still one of the most effective ways I know to focus on personal development, while giving your best where you have your best to give, and sharing your unique expertise with the world. I plan to add some very special Cheat Sheets of hard-core knowledge, so this page is more of a placeholder for now.
Checklists – Checklists are a quick way to provide lists of “one-liner reminders.” In general, I try to focus on creating actionable checklists that inspire and trigger the right thinking or the right actions. Currently, I provide a Focus Checklist, Leadership Checklist, Time Management Checklist, and The Charge Checklist, which is a checklist I created based on the best-selling book, The Charge.
How Tos – How Tos are a great way to turn insight into action. My most popular How Tos include How To Achieve Any Goal, How To Avoid Breaking Under Pressure, How To Change Any Habit, How To Find Your Strengths, and How To Find Your Values.
Product Recommendations – This is my roundup of the best products I’ve used for personal development and improving personal effectiveness. The big deal here is The Greatest Personal Development Gifts Ever. Not only are these the personal development programs that have served me well, but they are the gifts that I give to friends and family to give them an edge in work and life.
Trends – This is where I share key trends each year. If you’ve ever read one of my trends posts, you know that they are deep, and they help give you a big advantage when it comes to seeing the road ahead. One of the most important personal effectiveness skills that you can build is anticipation. The way to improve your anticipation is to learn how to identify, understand, and apply trends to create your future. When you focus on trends, they also help you build your visionary leadership skills, and if there’s one thing this world needs more of, it’s visionary leadership. Here is my Trends for 2016 post. It is a really deep dive into what’s happening around the world, but it also provides you the balcony view at a glance. Use this to your advantage to maneuver at work, to shape your business, and to shape yourself, with clarity, courage, and competence. After all, 2016 is the year of the bold!
Personal Effectiveness Toolbox
I wonder if I saved the best for last? The Personal Effectiveness Toolbox is, to date, my greatest compilation of the greatest programs and tools to help you do more and achieve more in this lifetime.
On the Personal Effectiveness Toolbox page, I share all of the best tools that I have used over the years to exponentially improve my ability to get results and to amplify my impact.
These are some of the best programs that have helped me really understand influence and impact.
They have helped me create my own personal achievement systems.
They have helped me get over any limiting beliefs and master my mind.
They have helped me really understand emotional intelligence at a deeper level and learn real skills and techniques.
They have also been my greatest programs for personal development and improving personal effectiveness across mind, body, emotions, career, finance, relationships, and fun.
Change the World or Go Home
We have a little saying that we use in the halls at Microsoft:
Change the world or go home!
Every now and then, you can see a poster in the hall or on somebody’s wall of the Microsoft Blue Monster.
It was the work of Hugh MacLeod, as you can recognize from his art – simplicity and elegance in action (you can read the backstory at The Blue Monster).
You have everything at your fingertips to be YOUR best and to realize your potential, the agile way.
Go ahead and change the world, your way.
Always remember to give your best, where you have YOUR best to give.
I’ve often been involved in conversations that boil down to “framework vs library” use in software development. But after reading a blog post recently, I found myself wondering if this is even the right question to ask.
If “framework vs library” is the wrong question, then what is the right question?
Just like what happened on www.picobusiness.com, this WordPress site was compromised. I was able to restore this one in about 5 minutes, which is pretty cool. The old style wasn’t available anymore, which might be a blessing.
Here are the slides for our talk Agile @ Lego at Passion for Projects in Uppsala. Enjoyed discussing this stuff with project managers and the like from all sorts of industries. A common theme from the conference was the power of self-organization, and the role of leadership in creating the right context for self-organization to happen. Our talk provided a real-life large scale example of this.
The Journey Starts With a Single Step
Key Takeaways
- Your decision to become a Scrum Master triggers the start of a journey to become the catalyst your organization needs.
- Becoming a first-time Scrum Master requires more than just attending training or receiving a certification.
- Agile education comes in many shapes and sizes so your training selections should align with your own learning style.
In our previous post, we walked through the journey of deciding if you should become a Scrum Master. If you have answered the call with a resounding “yes,” you are probably excited to get started. But where to begin?
Having coached numerous Scrum Masters through this journey over the past 5 years (most of whom were previously project managers), here is a loose approach to take with those committing to their new role:
Embrace the change. A whole new world awaits those becoming a Scrum Master, especially those who have previously been a project manager. Accept how big a deal this is. This WILL be different. This period of awakening should foster an environment of questioning many paradigms, techniques, methods, and approaches previously used. What may have been recognized and rewarded in the past will not be true in the future world of agility.
Initial coaching periods will be spent with me sharing stories of past transformational experiences to get the new Scrum Master excited about the possibilities. I will listen to the language the Scrum Master is using to see if they realize just how big their change journey will be.
Look inward. With an understanding of just how different things will be, it’s time to make it personal. During this period, the focus is on building the self-awareness muscle of the new Scrum Master. I will rarely make any statements at this time…but plenty of questions. “How would you handle [insert a future situation]?” “What are you sensing about [insert a current situation]?” “Why do you think…?” This is also the time to introduce journaling to the new Scrum Master.
Sometimes I will ask the new Scrum Master to create a personal mission statement as the first entry in their journal. For example, “The reason I’m here is to foster an environment capable of building a product our customers love with a team of people who are excited about coming to work every day.”
Discover your brilliance. With internal guidance established, the focus shifts to mapping out tactical areas of development to allow your strengths and personality traits to emerge. We may identify a few skills to improve on as well. What are those things we should work on to bring your personal mission statement to life? From my perspective, any development plan is centered around you being allowed to be yourself.
As there are many materials available publicly through a quick Google search, I won’t list everything out but choose a development model you feel comfortable with. You can always check out my book (for free by signing up for the blog) and the accompanying development worksheet if you are looking for something to start with. Regardless of what you choose, make it your own.
Find a coach or mentor. Looking back, the biggest gains made in my own development have come while under the “umbrella” of a mentor. To identify good candidates for your mentor, here are 4 characteristics of the good ones. Share your mission with them. Share your development areas with them. Share what you are struggling with and your biggest concerns. Most importantly, just be yourself with them.
If you are currently working with an Agile coach, I would hold off on selecting an additional mentor as your coach should be filling the mentor role for you initially.
Educate yourself. The first thing many new Scrum Masters do is jump into formal training or obtain a certification before going through the steps in this post (or the previous post). Without going through this entire thought and awareness journey, the Scrum Master role will feel mechanical and soulless. As a Scrum Master, you are a “tip of the spear” change agent. You must believe this and taking a two-day class to become a Certified Scrum Master is not enough to equip you with what you need to be the change catalyst your organization requires.
I am often asked for recommendations about training or certification options for new Scrum Masters or Agile coaches. So far, the only recommendation I have publicly made is for the Agile Coaching Institute, although I’m sure there are other great ones out there. I am neutral when it comes to the Certified Scrum Master certification…nice if you have it but not required, in my opinion. (Full disclosure, I’ve had my CSM since 2005.) Outside of Agile, development exercises centered on facilitation, presenting, team-building, conversations, psychology, and improvisation would also be recommended.
The reason I would wait to select training until after finding a mentor is to allow your mentor the opportunity to share what learning experiences have worked for them, and for both of you to develop a plan together. We all learn new things differently. Select an education approach based on your own learning style. Some need to see, experience, and interact, while others learn best with formal, curriculum-based lectures or seminars.
Engage with a community. Find other Scrum Masters or Agile coaches in your organization or area. If there isn’t one in your organization or city, start one up! Meetup is a good place to search for local area Agilists. The importance of having a safe forum for new Scrum Masters to learn and share with other like-minded practitioners can’t be overstated.
And you’re off! By now, you have been assigned to your team. Team members are staring at you, waiting for you to get them started. No worries. The journey continues next week with the post “Starting Your First Sprint.”
Becoming a Catalyst - Scrum Master Edition
Yes, I know the “software is like construction” metaphor has been overplayed, but hear me out. One of my guilty pleasures is a home improvement show on HGTV, Property Brothers. The show follows home buyers who want the house of their dreams and renovate an older, more affordable home to get it. The gimmick is that one twin helps the home buyers select a house to purchase, and the other renovates it.
Scoping Phase
The show starts off with Drew (the Realtor) working with the home buyers for their “must-haves” in their ideal home, as well as their max all-in budget. Typically this list is something like:
- Open floor plan
- Hardwood floors/granite countertops
- Big chef-style kitchen
- Spacious master bedroom/walk-in closet/master bath
- Big yard
- Man cave (but never a woman cave because reasons)
- X many extra bedrooms and Y bathrooms
- Location close to work/family/city
Drew then takes them to a house that meets every single one of their criteria. The homeowners fall in love with this house, seeing a move-in ready home that has everything they want. Then Drew asks them, “How much do you think this house is on the market for?”
At this point I wonder whether the homeowners have really done zero research, or if it’s just the Magic Of Television. The homeowners will guess a price maybe 10-20% more than their max budget. Drew then reveals a price at a minimum of 50% over budget, sometimes 100-200% over. They’re looking for a dream house at $500K, but in reality it would cost $750K and up.
This serves not to shock the homeowners, but to reset expectations. If the homeowners want the perfect house in the perfect location, they’ll have to pay a lot for it.
We have to do this quite a bit in software development. Reset expectations of what someone thinks they can get for a certain amount of dollars/time/people to what is actually attainable. And like home renovations, throwing more people at the problem won’t necessarily speed things up; there is a physical constraint in the space you’re working in as well as dependencies between steps (can’t paint the walls until you actually build the walls). You can’t easily parallelize work, and when you do, there’s a lot of management/coordination overhead to doing so.
Design Phase
Once the homeowner picks a “fixer upper”, Jonathan, the other twin and licensed contractor, takes over and directs the renovation of the house. He goes over a proposed design (that’s not exact on the little things but on the bigger things like where bathrooms should be) and budget. He works in a contingency budget, typically 10-20% of the overall budget, that he saves for “nice to haves”.
The proposed design is interesting in that it parallels a lot of the architectural and design decisions we make up front in software. We work out the hard problems that are hard to change (the architecture) and provide a vision of the design, typically through a couple of wireframed interactions along with a style guide. Nothing is set in stone: Jonathan’s design shows colors, furniture, and the like, but none of the cosmetics are final. These are the items that wouldn’t really affect the overall budget but give an idea of what the final design would look like.
Implementation Phase
Once the homeowner picks a house, they put in an offer and, through the Magic Of Television, have the offer accepted. Sometimes, though, the homeowners decide they know how to negotiate better than a professional, tell Drew to put in an insultingly low offer, and it gets rejected without a counter, leaving the homeowners disappointed (to my chagrin). But a house is eventually purchased, and the project moves to demolition and renovation.
Inevitably curve balls are thrown in. A wall they wanted to open up turns out to be load bearing, and the owners have to decide to keep the wall, or put in a supporting beam at the cost of trading off some other element. It’s always tradeoffs, and it’s always in the homeowner’s court to decide what they would like to trade off. The contractor tries to hold them to their budget, because typically no one is happy if they spent more than they planned (even if in the moment it seems like a good idea).
Individual elements can be decided as they go, such as what kind of cabinets, flooring, furniture and the like, but the big decisions on how the rooms and plumbing should be laid out are decided up front. It’s just too costly to change these later. We see this in our projects, too. You could go the route of building small prototypes/services/whatever and expect to throw them away, but these still cost time and money to build. It’s worth instead taking some time up front to evaluate priorities and make some decisions on the big choices, but deferring the smaller decisions until you’re really forced to – the last responsible moment.
The metaphor isn’t perfect, but it does at least serve as a common reference/talking point when trying to explain how software design and development works, in a way that brings the client along with the process, keeps them involved but shepherds them along a well-worn path to success.
We’re glad to announce a new presentation deck, What’s New in SAFe 4.0, available for download at our Videos and Presentations page.
Intended for all audiences—new to SAFe, or long-time practitioner—this 39-slide deck provides a high-level view of the latest features in the Framework, including:
- The new Foundation layer
- Program and Team level changes
- Backwards compatibility with SAFe 3.0
- Support for large value streams with 4-Level SAFe
- Managing large portfolios
For those of you who are supporting a SAFe 3.0 to 4.0 upgrade, this deck will help you communicate how the 4.0 version helps enterprises of all sizes and levels of complexity. It takes approximately 45 minutes to present, and has fairly robust speaker notes, plus the Implementing SAFe 1-2-3 slide, some case study results, and next steps.
You are welcome to reproduce, distribute, and use any part of the PowerPoint presentation, and supplement with your own slides, free of charge but for informational and promotional purposes only. Just don’t ignore the fine print which reminds you that this remains the copyrighted property of Scaled Agile, Inc., and you may not: compete with any product or training provided by or for Scaled Agile; or modify the original slides; or remove any trademark or copyright.
Click here to get to our Videos and Presentations page. We’re glad to provide these resources to support your ongoing adventures with the Framework.
–Dean and Team
In 2008, the global financial markets collapsed. The reason: mortgages were given to people who couldn’t afford them. This debt was then repackaged and sold to banks and other institutions as good debt. (“The Big Short” by Michael Lewis is an excellent indictment of this time.) However, a bigger question remained: why didn’t the financial regulatory system catch the problem early, while it was still small?
The answer? Complexity.
In “The Dog and the Frisbee” (pdf), Andrew Haldane, Executive Director Financial Stability at the Bank of England, explains all the things a dog would have to know and understand to catch a Frisbee: wind speed and direction, rotational velocity of the Frisbee, atmospheric conditions, and gravitation. It might require a degree in physics to know how to express the control problem involved in catching the Frisbee.
Yet dogs, without physics degrees, do this every day. They obey a simple rule/heuristic: “run at a speed so that the angle of the gaze to the Frisbee remains constant.” Empiricism and simplicity. Agile works because it is an empirical process, using constant feedback to update both the work itself and the way we work.
Haldane goes on to show that the financial regulatory system evolved from something simple that many people at a bank could understand, to something only a few people could understand. Eventually it became so complex that no one person understood the system as a whole. The earlier regulatory frameworks worked well in part because many people understood, and therefore many people could spot problems early, before they got too complicated and large to resolve.
As we deal with ever-larger organizations, it’s tempting to say that this increase in complexity is okay because we’re larger. But if the financial crisis taught us anything, the answer should be no. The bigger the system, the more important it is to use simple control mechanisms, simple feedback loops, and simple measures that can be understood by all. Decreasing complexity – not increasing it – has to be at the heart of all of our decisions. And coupled with that has to be the ability to respond quickly and change appropriately.
Webhooks are a great way to add integration with external services and systems. I’ve used webhooks from Stripe, Dropbox and other systems to tell my app when something happens on the external service, making it easy for my system to respond to whatever the event was.
While webhooks are generally easy to handle, there are a few challenges I’ve run into when it comes to handling them in large volume or running a long process in response.
A Basic Webhook Handler
If you’ve ever built an Express app with an HTTP “GET” or “POST” route handler, you already know how to build a webhook handler. A webhook is really nothing more than an HTTP GET or POST that sends some information to you. It works the same as any other HTTP request.
The typical response from the webhook is going to be a “200 OK” status message, as well. Sometimes you’ll send data back, but not often.
So, your typical webhook handler can look as simple as this:
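A minimal sketch of such a handler, in Express’s (req, res) style. It’s shown here as a plain function (the express import and the `router.post` mounting line are assumed, noted in comments) so the snippet stands on its own:

```javascript
// Sketch of a bare-bones webhook handler in Express's (req, res) style.
// In a real app you would mount it on a router, e.g.:
//   const router = require("express").Router();
//   router.post("/webhook", handleWebhook);
function handleWebhook(req, res) {
  // Acknowledge receipt; most webhook publishers only want a "200 OK".
  res.status(200).send("OK");
}

// Minimal stand-in for Express's response object, for demonstration only.
function fakeRes() {
  return {
    statusCode: null,
    body: null,
    status(code) { this.statusCode = code; return this; },
    send(body) { this.body = body; return this; },
  };
}

const res = fakeRes();
handleWebhook({ body: { event: "ping" } }, res);
console.log(res.statusCode, res.body); // → 200 OK
```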
There’s nothing terribly special, here. It’s just an Express router that handles a POST and responds. It doesn’t do anything, but it does technically handle the webhook.
The real challenge, though, is handling the webhook quickly.
You Must Respond Quickly
Speed is the name of the game.
Many of the most popular services require your HTTP request handler for the webhook to respond with a “200 OK” within a few seconds.
Dropbox, for example, requires you to respond in 10 seconds:
Your app only has ten seconds to respond to webhook requests.
This sounds easy at first – and usually is. When you only have a few webhooks coming in, and you don’t have much code in place, responding in 10 seconds or less is not a challenge.
When you start growing, though, and you start adding more business processes to the back-end systems; or when you realize that your back-end system needs to access external resources and services that may be down; or when you have a long-running business process for the webhook data – this is when things get “fun” with that 10-second limit.
What’s the solution, then? How do you build an Express app that will always respond quickly, no matter the number of webhooks coming in, and no matter the length and complexity of the process being run?
Don’t Process The Webhook Immediately
Typically, an HTTP API needs to send a response. If you have an API to get users, or get a user by id, for example, the software making the request will expect a list of users or a single user to be returned through the HTTP stream.
With a webhook, however, the only response required is usually HTTP status “200 OK” – just to say, “hey, I got the message. Thanks!” Thankfully, this need for a simple response gives you plenty of options for handling a large volume of webhooks, and having long running processes.
Whatever tools you choose, the primary mechanism for handling volume and lengthy processes will be sending the webhook data to another back-end service somewhere.
Publish A Message, Respond Quickly
RabbitMQ is my current choice for message queueing systems. With Express and RabbitMQ, you can write very simple code for a webhook handler.
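A sketch of that “publish, then respond” handler. The `publish` function here is a hypothetical thin wrapper around a RabbitMQ channel (for example, amqplib’s `channel.sendToQueue`); it is injected so the snippet runs with an in-memory stand-in:

```javascript
// Sketch: serialize the webhook payload, publish it to a queue, and
// respond immediately. `publish(queueName, message)` is a hypothetical
// wrapper around a RabbitMQ channel (e.g. amqplib's sendToQueue),
// injected so the handler itself stays trivial.
function makeWebhookHandler(publish) {
  return function handleWebhook(req, res) {
    // Create a JSON document from the payload and hand it off.
    publish("webhook-events", JSON.stringify(req.body));
    // Respond right away; the real work happens in a back-end worker.
    res.status(200).send("OK");
  };
}

// Demo with an in-memory array standing in for RabbitMQ.
const queued = [];
const handler = makeWebhookHandler((queue, msg) => queued.push({ queue, msg }));

const demoRes = {
  statusCode: null,
  status(code) { this.statusCode = code; return this; },
  send() { return this; },
};
handler({ body: { event: "file.changed" } }, demoRes);
console.log(queued.length, demoRes.statusCode); // → 1 200
```

The design point is that the handler does no real work at all; everything slow or failure-prone moves behind the queue.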
This example (using a well-encapsulated call to publish a message to RabbitMQ) will create a JSON document and publish it. The web server then responds with “200 OK”, and the webhook publisher knows that the message was received.
Then, on the back-end of the system, another process can pick up the message from RabbitMQ and run whatever code it needs.
Handle The Message
Once the message is in a queue, your options for handling it in the background are nearly endless. You don’t even have to stick with Node, at this point – any language with a RabbitMQ library could pick up the message and run your back-end code.
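A hedged sketch of such a back-end worker, mirroring amqplib’s channel API (the queue name and the `processWebhook` business function are illustrative):

```javascript
// Sketch of a background worker that drains the queue at its own pace.
// `channel` mirrors amqplib's channel API; `processWebhook` is your
// long-running business process (both names are illustrative).
async function consumeWebhooks(channel, processWebhook) {
  await channel.consume('webhook.received', async (msg) => {
    const data = JSON.parse(msg.content.toString());
    try {
      // Take all the time you need here -- no 10 second window applies
      await processWebhook(data);
      channel.ack(msg);   // done: remove the message from the queue
    } catch (err) {
      channel.nack(msg);  // failed: let RabbitMQ redeliver it
    }
  });
}

module.exports = consumeWebhooks;
```

Because the worker acknowledges each message only after processing succeeds, a crash mid-process means RabbitMQ simply redelivers the message later.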
Whatever language you choose, and whatever code you need to run, you can handle it with all the time you need. There’s no need to performance optimize everything down to the nth degree, immediately, because you don’t have to worry about that 10 second response window.
Sure, you may have to deal with your own system performance needs. That, at least, will be in your control and not a mandate from the external service that provided the webhook data.
Improve Architecture and Performance
One of the best side-effects of handling webhooks in this manner is the architecture and performance improvement you can gain.
Architecture improves when we split out monolithic applications into a system of smaller parts. RabbitMQ makes this easy, allowing us to push messages between applications without having the various parts know about each other.
Improvements in architecture and splitting apart a monolith also provide performance enhancements. Users and external systems will receive responses faster. Additionally, you can scale up and out individual parts of the system, as needed.
These, and many other benefits, go hand in hand with messaging architectures.
Digital Transformation is much broader than a technical play. It’s a chance to reimagine your customer experience, how your employees work, and how you perform operations.
It’s also a chance to continuously create and capture value in new and innovative ways, and I don’t just mean with DevOps.
Your business isn’t static. Neither is the world. Neither is the market. Neither is Digital Transformation.
Instead, Digital Transformation is a way to continuously evolve how you create and capture value in a mobile-first, cloud-first world.
In Digital Transformation, Dr. Mark Baker shares how “digital” is more than just bits and bytes, and how there is always more Digital Transformation that can be done.
Digital Transformation is Bound by Business Decisions, Not Technical Ones
Your Digital Transformation should not be bounded by technical decisions. Your Digital Business Transformation approach should be driven by your business decisions and your business design. What matters is that the landscape is digital and that you have to design for new customer experiences, new ways of working, and new ways of performing operations in a mobile-first, cloud-first world.
“Today, when we talk of digital transformation we mean restructuring an organization to use any and all information and network-based technologies that increase its competitiveness, in a way that, over a period of time, excludes and out-competes un-transformed organizations. Of course, in a literal sense, when we talk literally about digital we mean something like expressing data as a series of the digits 0 and 1, or using or storing data or information in the form of digital signals: digital TV, a digital recording or a digital computer system.
However, if we think about it that way the whole scope of our understanding and what we are thinking of achieving is quite limited and fairly technical.
In the bigger sense of digital we mean a road map that includes the full process of making a business or service so that every part is freely accessible at every level with bounds set by explicit management models, not by physical constraints. Ultimately it means that all decisions become business or usage decisions, not technical ones.”
Example of First Generation Digital Transformation
There are always some basic things you can do to get in the game of Digital Transformation. But that is just the start. Baker shares an example using a library and how they performed their Digital Transformation.
“It might be useful to give a simple example whose general principles apply to all digital projects. The Bodleian Libraries are a collection of approximately 40 libraries that serve the University of Oxford in England. One of the largest and most important libraries in the world, they hold 11 million printed items, 153 miles (246 kilometers) of shelving, including 3,224 bays with 95,000 shelf levels, and 600 map cabinets to hold 1.2 million maps and other items.
During the first generation of transformation I talked to senior librarians at the Bodleian, and the digital library projects that I was told of turned texts into bitmaps. Information was still effectively siloed and not electronically searchable within books, but the advantage of digital transformation at that stage was that the physical master copies were protected and copies could be sent with manually controlled access over an electronic network to authorized users anywhere in the world.”
Example of Second Generation Digital Transformation
Once you go digital, more opportunities open up for further transformation. Baker continues the example of a library that undergoes Digital Transformation.
“Later more advanced approaches, like Project Gutenberg, digitized the text into ASCII format so that catalogs of books were both digital and searchable, as were the individual books. Beyond that, projects like the Google Books Library Project allowed the whole contents of all the books to become accessible to a single keyword search that could search all text across volumes.”
There is Always More Digital Transformation That Can Be Done
There is always more you can do and there are many stages to a full Digital Transformation.
“Of course, going digital goes beyond digitizing content and a more advanced model would determine accessibility, access and usage rights and payments, not just in the local user community but worldwide. In a project of that type any user would be able to do keyword searches across all the contents of a particular library and then usage and any payment would be determined for the specific books or documents they wanted access to, appropriate access would be granted and payment (if any) would be collected. If acquisition was performed on the same platform then requests for information, usage statistics, reader feedback and null-searches could be matched to the acquisition of new materials for the library, so as to better serve the users.
Ultimately even search goes further, so that improved semantic search tools would allow search by meaning as well as by key words or phrases, as well as predictive analysis of future usage creating a proactive model, rather than a reactive model where the available content is always out of date.
At each stage the instigators might have expressed the view that they had ‘gone digital’ and at each stage there would have been much, much more that could be done. This is, of course, just one specific instance of digital transformation related to libraries, but shows a simplified example of how there are many stages to a full transformation.”
Digital Transformation is not done when you are “transformed.”
It’s a journey of continuous evolution.
The "whether or not" situation is not a real decision-making situation, because we position ourselves in a one-dimensional, two-way set of options. Having only one option is not an option. Having one option and its opposite isn't either.
The "whether or not" trap was described beautifully by Dan and Chip Heath, who say that whenever you catch yourself asking "whether or not", step back: you don't have enough of the big picture. In this case, focusing too much creates blind spots that hide other options we might have at hand.
"I wonder whether A or B"
An "advanced" form of "I wonder whether or not" is "I wonder whether A or B". Now, this might seem different to you, but it is not really. You are still in a one-dimensional decision-making process, because not doing A implicitly means doing B. Choices are narrow and you're stuck in your options. So let's see some examples:
- I wonder whether I should buy a more expensive smartphone or stick to a basic one
- I wonder whether I should accept the offer from Harvard or Stanford
- I wonder whether I should pick the blue shirt or the white one
- I wonder if web users will like a green call to action button or a red one?
- I wonder if I should write a new blog post or prepare dinner
- I wonder if my customers want a call-back button or a chat space...
I hope you recognised at least one of the situations you have been in. And I hope that the majority of you were scanning for the answer to question number 2 :) The business experimentation movement, accelerated by Lean Startup, has come up with a recipe to answer questions like #4 and #6 in my examples: A/B testing! Yey, shiny! A/B testing says that we will implement not A or B, but A and B, and then we wait and see.
The Answer To A Question That Was Not Asked
So here is the moment of gathering the data after the A/B test. To all who have implemented A/B testing, I ask one question: what did you (really) learn? The feedback I get after each A/B test starts with "hmmm...". Then it goes like this: "It seems A has more hits than B. But B is used heavily from 8:00 to 9:00 am. We should understand why," or/and "It seems that A has more hits, but hey, isn't it because it's right in the middle of the page? B has very few hits, but it gets traction each time." So the global conclusion is: we have collected very interesting data, we just don't have a clue what to do with it. In the "whether A or B" situation we are still narrowly focused, thinking only of A or B as options. In the specific case of an A/B testing tool, the results are confusing because they feed us behavioural data that blows up in our faces, because it was not in our narrow scope of focus. That data simply answers questions that we didn't fully ask. It's like having a lot of clues but no idea how to solve the enigma. The gathered data is just like the messages intercepted by the British secret services during WWII: encrypted by the Enigma machine, they sound like gibberish. Once again, stepping back to get a bigger picture is necessary.
Enigma Encrypting Machine
The Vanishing Options Test
What if, instead of picking from that one-dimensional, two-way set of options (Yes/No, A/B), we force our brain to unfocus a little bit and get a bigger picture? One field where focusing doesn't help is exploring (or identifying) real options. My favorite tool to "unfocus" and unfold creativity (i.e. new options) is the Vanishing Options test, also defined as such by Dan and Chip Heath.
The test goes like this:
Imagine that all the options you have thought about are gone. E.g. you're stuck with a Yes choice, there can be no A nor B, or there is only A... Now ask yourself the following question: what would you do in this situation to reach your goal?
Let's take an example: imagine you're stuck with a "light green colour/white text" call-to-action button on your page. You can't change that! How would you improve your hits?
Leave The Data Basement
Collecting data is good, but remember, data is encrypted. Just as having an Enigma machine didn't by itself help the Allies understand the messages, having (big) data is simply not enough. We need a decryption key, don't we? The bad news is this: the only decryption keys available to us are our own cognitive biases. So we turn gibberish into very probably distorted messages.
Nevertheless, there's good news, and it's called hope. As in many situations (just like in the Enigma decryption story, by the way), better answers come from changing perspective. There is one simple way to change perspective for data interpretation:
Leave the deep basement behind your complex data-graph screens and go observe real users in the light. Talk to them. Ask them: why A? What does B mean to them?
Enjoy the sun!
Those of us in the Lean world are accustomed to discussing “flow” – where work is performed in an even manner to reduce mura. Activities are synchronized, layouts are optimized, resources are available exactly where and when they are needed, and the pace is set by true demand. The operation just hums along creating value for the customer. Well, “just” is a bit of a misnomer as we know how difficult achieving flow can be.
I remember being introduced to the work of Mihaly Csíkszentmihályi decades ago in a psychology class, and have recently become reacquainted with him while researching motivation and productivity. Csíkszentmihályi, a psychologist of Hungarian descent and a professor at Claremont, has also developed a theory of flow from an individual perspective – see his TED Talk.
Different than Lean flow? Or maybe not?
Csíkszentmihályi’s concept of flow is being completely absorbed by what you are doing, energized, and with the creative juices flowing. Many of us already think of it as “being in the zone.” It is truly a positive, invigorating experience, as opposed to “hyperfocus” which can be negative.
He began researching the concept out of fascination with artists and other professionals who became so engrossed in their work that they forgot about all else, sometimes including basic needs. As the model at the right shows, flow happens when both the skill level and the challenge are high. The ability to be creative and accomplished in such a situation is very fulfilling.
Csíkszentmihályi once described flow as “being completely involved in an activity for its own sake. The ego falls away. Time flies. Every action, movement, and thought follows inevitably from the previous one, like playing jazz. Your whole being is involved, and you’re using your skills to the utmost.”
Flow has parallels with concepts in Eastern religions and philosophy. Buddhism talks about “action with inaction” and Taoism has “doing without doing.” The Hindu Ashtavakra Gita and Bhagavad-Gita have similar descriptions.
Components of flow include a challenge-skill balance, the merging of action and awareness, clarity of goals, immediate and unambiguous feedback, concentration on the task, transformation of time, and the autotelic experience. He dove deeper into the autotelic personality, which is when people do work because it is intrinsically rewarding instead of to obtain external goals. Aspects of the autotelic personality include curiosity, persistence, and humility.
So is it really different than the concept of flow in Lean?
Completely involved, every action following inevitably from the previous, using skills to the utmost, clarity of goals, immediate feedback, curiosity, humility. Sounds like a finely-tuned work cell.
Well funny we should stumble on that comparison. Csíkszentmihályi did go on to research “group flow,” where both individuals as well as the group are able to achieve flow, with the characteristics being:
- Creative spatial arrangements: Chairs, pin walls, charts, but no tables; thus work primarily standing and moving
- Playground design: Charts for information inputs, flow graphs, project summary, craziness, safe place, result wall, open topics
- Parallel, organized working
- Target group focus
- Experimentation and prototyping
- Increase in efficiency through visualization
- Using differences among participants as an opportunity, rather than an obstacle
Add visual controls, open and transparent communication, and experimentation to the similarity with Lean.
Which perhaps reminds us of one of the reasons Lean work cells work: they create a fulfilling, productive, and improving operation by leveraging and rewarding the brains of humans.
For those of you who have been following (or contributing to) the SAFe for Lean Systems Engineering development, you know that our learnings have now been consolidated into SAFe 4.0. But you also know that there are many more steps in a journey to better understand and apply the Principles of Lean-Agile development to systems that include both hardware and software.
Our next step down this path is a new article by Alex and 321 Gang’s Harry Koehnemann (who was instrumental in the development of SAFe LSE), entitled Building Complex Systems with SAFe, which has just now been posted on Version One’s blog.
Alex and Harry are committed to evolving this work further, and we can expect some solid, enhanced content to appear in the SAFe Guidance section in the next few months. In the meantime, check out this article, and see if you think the u-curve optimization figure for thinking through the economics of complex system integration is as cool a thought as I do.
Of course, comments are welcome, right here.
—Dean, Alex, Richard, Inbar
In Node.js, it’s common to use “module.exports” to export an object instance, allowing other files to get the same object instance when requiring the file in question. A lot of people call this the singleton pattern of Node.js, but this isn’t really a singleton.
No, this is just a cached object instance – and one that is not guaranteed to be re-used across all files that require it.
The Problem With Require
When you call the “require” function in Node, it uses the path of the required file as a cache key. If you require the same file from multiple other files, you typically get the same cached copy of the module sent back to you.
This is great for conserving memory and even producing a poor facsimile of a singleton. However, it’s very easy to break the cached object feature of the require call.
There are two core scenarios in which this feature will not work as expected:
- Accidental upper / lower case letter changes
- When another module installs the same module from NPM
Windows and OSX (by default) are not case sensitive on the file system. You can look for a file called “foo.js” and a file called “FOO.js”, and both of these searches will find the same file in the same folder, no matter the casing on the actual file name.
Because of this, it’s easy to break the object cache of the require call on both Windows and OSX.
Create a “foo.js” file with a simple export:
Now require it twice, in another file:
Run the index.js file and see what happens:
In this example, the require call used the case sensitive string that you supplied as part of the key for the cache. But, when it came to the file system, the same file was returned both times.
Since the file was loaded twice, it was also evaluated twice, and it produced the exported object twice – once for each casing of the file name.
There are other problems associated with mis-casing the file name in a require statement, as well. If you deploy to a file system that is case sensitive, for example, the version that is not cased the same as the actual file will not find the file.
While it is generally a good idea to always get the casing correct, it’s not always something that happens. The result is a broken module cache on case insensitive file systems.
NPM Module Dependencies
The other situation where a module cache will not work is when you install the same module as a dependency of two or more modules, from NPM.
That is, if my project depends on “Foo” and “Bar” from NPM, and both Foo and Bar depend on “Quux”, NPM (version 2 or below) will install different copies of “Quux” for each module that depends on it.
I understand that NPM v3 attempts to solve this problem by flattening the dependency list. If Foo and Bar both depend on the same (or compatible) versions of Quux, then only one copy of Quux will be installed.
However, if Foo and Bar depend on different / incompatible versions of Quux, it will still install both versions. Foo and Bar will not share the module cache in this instance.
Make It Cache-Buster-Proof
If the require call in Node doesn’t produce a true singleton, but a cached module instead, how can a true singleton be created?
Unfortunately, the answer is something terrible… a practice that should generally be avoided… global variables.
Node does allow you to export a global variable, in spite of everything it does to try and prevent you from doing this. To do it, you have to explicitly use the “global” keyword:
Now, from anywhere in any other file that is loaded into a node app, with this module loaded, I can call on “foo”:
The result is an output like the original, but done without holding a direct reference to the required “global” module:
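A minimal sketch of both pieces together (the `foo` name is illustrative):

```javascript
// Sketch of the global-variable approach. In foo.js you would write:
global.foo = {
  sayHi() { return 'hello, from a global foo'; }
};

// ...and any file loaded afterward can then use `foo` directly,
// without requiring the module that created it:
console.log(foo.sayHi()); // hello, from a global foo
```

Note that `foo` resolves through the global object, which is exactly why JSHint complains – nothing in the calling file declares it.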
In spite of the seriously dangerous implications of using a global variable, including JSHint complaining about the lack of definition for “foo” in this case, it is possible to create a true singleton in Node with this technique … and to do it somewhat safely, using ES6 symbols.
The Core Of A True Singleton
But with the advent of ES6 (ES2015) Symbols, it is possible to use a global variable and not have it completely destroy the integrity of your application.
Using a Symbol, you can attach something to the global object with relative safety. But having this in place is only half the singleton solution, as you will see in a moment.
Before moving on to the final half of the solution, though, the singleton should provide a specific API to match the pattern definition: an “instance” property by which you can obtain the one single instance of the object.
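A sketch of that half of the solution, assuming illustrative names (`FOO_KEY`, the `sayHi` method):

```javascript
// foo.js sketch: the instance lives on `global`, keyed by a Symbol, and is
// only reachable through an `instance` property on the exported object.
const FOO_KEY = Symbol('foo'); // NOTE: a brand new symbol on every evaluation

global[FOO_KEY] = {
  sayHi() { return 'hello from the one foo'; }
};

const singleton = {};
Object.defineProperty(singleton, 'instance', {
  get() { return global[FOO_KEY]; }
});

module.exports = singleton;
```

The `instance` getter is the pattern's public API: callers never touch the global slot directly, they ask the module for the one instance.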
With this code in place, you can require the same file multiple times, and you will only get one object instance in return. But it does not yet account for the case insensitive file loading, or NPM module dependency problems that we saw earlier.
This is the same output that was shown with the previous inconsistent file name casing. Let’s fix that, and the NPM module loading, now.
Creating The True Singleton
The final piece of the puzzle, and the way to create the true singleton, is to ensure that any version of the file being loaded will not overwrite the global symbol. Unfortunately there’s a problem with the way the current symbol is written. Each time the file is loaded, a new instance of the symbol is created.
To fix that, you have to use the global symbol cache. Additionally, you need to check to see if the global object has a value on that symbol already. Finally, you need to give this symbol a unique name, since you are now potentially exposing the symbol to other developers.
This version of the code uses Symbol.for to get a globally shared symbol. It then loads all symbols from the global object, and checks for the presence of the global Symbol you created. If the global symbol already exists, don’t overwrite it.
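Putting those three changes together, a sketch of the final form (the namespaced key name `'MyApp.foo'` is illustrative):

```javascript
// foo.js sketch, final form: a globally *shared* symbol via the global
// symbol registry, plus a guard so that re-evaluating this file (mis-cased
// require, duplicate NPM install) never overwrites an existing instance.
const FOO_KEY = Symbol.for('MyApp.foo'); // unique, namespaced name

// Check whether some earlier evaluation already created the instance
const globalSymbols = Object.getOwnPropertySymbols(global);
const hasFoo = globalSymbols.indexOf(FOO_KEY) > -1;

if (!hasFoo) {
  global[FOO_KEY] = {
    sayHi() { return 'hello from the one true foo'; }
  };
}

const singleton = {};
Object.defineProperty(singleton, 'instance', {
  get() { return global[FOO_KEY]; }
});

module.exports = singleton;
```

Because `Symbol.for` returns the same symbol for the same key string in every evaluation, every copy of this file (however it was loaded) reads and guards the same global slot.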
With this in place, calling the original code with mixed file name casing will produce the same object instance – a true singleton!
A True Singleton… But At What Cost?
With ES6 Symbols, you now have a true singleton that is relatively safe from harm. You can require the same file from multiple places, accidentally mixing the file name casing, and even require it from multiple instances of NPM dependencies. The result will always be a single object instance.
But this solution, as well as it might work, does bring some potential danger of its own.
For example, go back to the NPM problem where Foo and Bar depend on Quux. If Foo and Bar depend on different, incompatible versions of Quux, you’re going to be in trouble. The first module to load Quux will be the winner in terms of creating the singleton. The other one will be out of luck and probably end up having strange problems that are very hard to debug.
This can be fixed by inserting a version number into the symbol, for your singleton… but now you’re back at the point where it is no longer a true singleton!
Additionally, this singleton instance is not truly safe. With the use of a global symbol, there is a possibility of someone else trying to use the same symbol for another purpose. This is highly unlikely, but still a possibility – and anytime a problem is “highly unlikely, but still a possibility”, you can rest assured that it will be an actual problem for someone, somewhere.
The Cost-Benefit Analysis Of Both Methods
While it’s true that a simple require call in Node.js will provide a cached export, provided the require statements and versioning are compatible, this object is a poor facsimile of a singleton.
The other side of the coin – creating a true singleton with ES6 Symbols and a global variable – is not without its own share of problems.
From experience in using require statements as a poor singleton implementation (… a lot of experience, mind you), I can say that it is generally good enough. Be sure your require statements are cased correctly, and you will likely cover 99% of your needs, or more.
If you do find yourself needing a true singleton to go across NPM dependencies and other module require calls, though, it can be done. Just be sure you actually need this before you head down this path. While it is possible, it is not without its own perils.
Do you manage your email inbox like your kitchen sink or like your bookshelf? The answer will not only enable or disable your ability to practice Inbox Zero — the habit of regularly processing your inbox to empty. It will also put you in either a cognitively exhausted or a cognitively alert mode.
You bought a new book and read it. Now you want to put it on your bookshelf, which unfortunately happens to be full. You skim the spines and almost randomly remove one book to make room for the new one. The bookshelf is left unsorted. Do you recognize this? Probably.
Your kitchen sink is full of a combination of leftovers and plastic packaging materials. You throw a glance and rather randomly decide to remove the cucumber parts and leave everything else in the same mess as you found it. Do you recognize this? Absolutely not.
Understanding, deciding, recalling, memorizing, and inhibiting are the five functions that make up the majority of our conscious thoughts. They are intensive glucose and oxygen consumers. Overuse makes us feel exhausted. Managing the inbox as a bookshelf relies on all five.
Kitchen sink cleaning is not completed until everything is removed. Likewise, every single email must be deleted, archived, or put in a to-do folder. Inbox Zero is not a continuous state: analogous to cleaning the kitchen sink, we ought to do it 2-3 times a day.
In the article, along with "taking the best from Waterfall and Agile" and mixing them together into a "perfect methodology", the author, as a self-proclaimed change agent, suggested you should force implement all CMMI Level 5 developer practices at once. Can someone point me to the list of CMMI Level 5 Developer Practices? I looked for it, but couldn't find it. Perhaps there is a walled garden somewhere with this information tucked away therein? I may be mistaken, but I believe the CMMI practices are about tracking and improving the overall process, NOT about hands-on-keys activities or anything of the sort.
But let's pretend there is a prescribed list of development practices that are classified at CMMI Level 5 so that we may continue with the discussion. The author's reasoning for forcing a list of "best practices" on the team all at once is that "They're going to hate you anyway for changing things." You might as well change everything at once and get some good results fast.
What the what? If your change management plan is to force "best practices" on people without any thought for where they are today, it's no wonder they hate you. It doesn't matter if you force one poorly chosen and misunderstood practice on them or 100. You're creating a hostile environment. You are forcing people to adopt practices they are likely not familiar with. And these practices likely don't yet fit into their overall approach to work. They probably don't even fit into the common mental model for these people.
You're actually going to make things worse for everyone. This isn't about them not being able to accept change. This is about you not taking responsibility for having no idea how to implement a change.
Meet them where they are and lead them to a better place.
Organizational change usually starts slowly and builds momentum with success. I don't care if Ward Cunningham himself wrote the list of developer "best practices", you don't shock and awe anyone with them if you want to be successful.
Listen and observe. Open your mind. Look at them not from the perspective of, "Here's what you should be doing, but aren't", look at them from the perspective of, "I'd really like to understand why you do things the way you do and the benefit you gain from it."
Find something they WANT to change. Something that is causing them pain. Help them come up with something different they could try. Maybe you think they should pair program 100% of the time (presumably because it is a "best practice" [gross]). It's possible a good first step is to start talking about shared code and our agreed standards. Maybe they won't ever get to pairing. But if they get better at the things pairing helps with, isn't that pretty great?
Following on from my last post, and based on the feedback in the comments, I want to say more about the dynamics of Strategy Deployment.
The first point is to do with the directionality. Strategy isn’t deployed by being pushed down from the top of the organisation, with the expectation that the right tactics simply need to be discovered. Rather, the strategy is proposed by a central group, so that decentralised groups can explore and create feedback on the proposal. Thus information flows out and back from the central group. Further, the decentralised groups are not formed down organisational structures, but are cross-organisational so that information also flows between groups and divisions. The following picture is trying to visualise this, where colours represent organisational divisions. Note also that some individuals are both members of a “deployed from” group, and a “deployed to” group – the deployment isn’t a hand-off either.
This means that a Strategy Deployment can begin anywhere in the organisation, in any one of those groups, and by widening the deployment to more and more groups, greater alignment is achieved around common results, strategies, indicators and tactics.
That leads to the second point about emergence. In the same way that the tactical initiatives are hypotheses with experiments on how to implement the strategies, so the strategies themselves are also hypotheses. Tactics can also be viewed as experiments to learn whether the strategies are the best ones. In fact Strategy Deployment can be thought of as nested experimentation, where every PDSA “Do” has its own PDSA cycle.
With regular and frequent feedback cycles from the experiments, looking at the current indicators and results, strategy can emerge as opportunities are identified and amplified, or drawbacks are discovered and dampened. In this way Strategy Deployment explores the evolutionary potential of the present rather than trying to close the gap towards a forecasted future.
These dynamics are often referred to as Catchball in the lean community, as ideas and learnings are tossed around the organisation between groups, with the cycle “Catch, Reflect, Improve, Pass”.
I also like the LAGER mnemonic I mentioned in Strategy Deployment as Organisational Improv, which is another way of thinking about these dynamics.