
Feed aggregator

Estimating: Bottom-up vs. Top-down

Agile Estimator - Tue, 01/03/2017 - 22:39
Bottom-up vs. Top-down Table

When most people think about estimating, they are thinking about bottom-up estimating. When your car needs to be repaired, you bring it to a mechanic. If you need new brakes, you will get an estimate for the cost of the brakes and the amount of time that is required to install them. If you also need an oil change, then the cost of that is added to your estimate. Software developers tend to think the same way. They attempt to identify the tasks that must be performed. They estimate the time for each task and add up these estimates. Agile developers do this. The steps of agile estimating are explained in Traditional Agile Estimating.

Some organizations have already specified Software Development Life Cycles (SDLCs). These SDLCs list all of the tasks that must be performed to develop software. However, many of the steps have to be broken down into finer detail. For example, there may be a task called Code Modules, but that task is difficult both to estimate and to control. It ends up being broken into Code Payment Screen, Code A/R Report and a host of others. Early in the life cycle, it is very difficult to specify all of these tasks and impossible to estimate them.

People involved in agile development usually think of estimating from the bottom-up. They will identify as many user stories as possible early in the life cycle. They will then use a technique like estimating poker to assign story points. In summary, estimating poker is a collaborative technique that involves the development team. User stories are considered one at a time. Each team member assigns a number of story points to the story. They discuss it until they reach consensus and then move on to the next user story.

Managers love the idea of bottom-up estimating. If all of the tasks necessary to develop an application are estimated, they can be placed in a work breakdown structure and a Gantt chart. This gives the illusion of control. Developers love the idea of bottom-up estimating. Stories and tasks must be identified as part of the development process, so the bottom-up estimate is not extra work done only for estimating. This is consistent with agile principles and practices. Statisticians love the idea of bottom-up estimating. Whether estimating by task or by user story, each component gets its own estimate. The estimates will usually be incorrect, but the errors will tend to cancel each other out. In theory, it is a winning approach. In practice, you simply cannot do bottom-up estimating early in the life cycle. Project sponsors, end users and business analysts are still developing early artifacts such as a feasibility study. Sponsors and users do not know what logical data models are. Business analysts know what they are, but probably have no idea how long it will take to develop one before the scope of the project is better established. For many applications, the development environment has not yet been decided on. Data warehouse applications may be developed using special software packages with entirely different development tasks than an organization typically specifies in its SDLC. In most cases, bottom-up estimating is impossible to do correctly early in the life cycle.
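
To make the error-cancellation argument concrete, here is a minimal simulation sketch; the task counts and error ranges are illustrative assumptions, not figures from any study:

import random

random.seed(1)
task_efforts = [10] * 50                    # 50 tasks, 10 days each (500 days in total)
totals = []
for _ in range(1000):
    # each task estimate is off by up to +/-50%, independently of the others
    totals.append(sum(t * random.uniform(0.5, 1.5) for t in task_efforts))

mean_total = sum(totals) / len(totals)
spread = (max(totals) - min(totals)) / mean_total
print(f"Mean total: {mean_total:.0f} days; spread across trials: {spread:.0%}")

The total varies far less, in percentage terms, than any single task estimate, which is exactly why statisticians like the approach when the component estimates are actually available.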

Top-down estimating begins with establishing the size of the application to be developed. Knowing this, algorithmic models are used to predict how much effort and how much calendar time will be required to develop the application. This approach was developed when the waterfall approach to software development was popular. Therefore, these models typically predicted how much time would be spent in the analysis, design and coding phases of the application development. Some approaches would also predict the amount of time for various activities, like project management. In the beginning, size was expressed in lines of code. There were two problems with this. First, you only know the number of lines of code after you have developed the application, and by then you do not need the estimate. However, many organizations developed heuristics to help them predict lines of code. These rules of thumb were tied to the experience of the organization. For example, at one time NASA would predict the number of lines of code in satellite support software based on the weight of the satellite itself. The second problem can be summarized by Capers Jones's statement that using lines of code should be considered professional malpractice. There are many problems with the measure. In one of his books, Capers shows that it often misrepresents the value of software. For example, are 2,000,000 lines of assembly language more valuable than 20,000 lines of COBOL? Should they take 100 times longer to write? Even more to the point, with so many development environments being built around screen painters and other tools that do not actually produce lines of code, the antiquated measure has become unusable. Function points, use case points and a host of lesser known measures have taken the place of lines of code. Barry Boehm (no relation) developed several estimating models that he called the Constructive Cost Model (COCOMO) in 1981. One of the models was Basic COCOMO. It transformed the number of lines of code into the person-months of effort and the calendar months of schedule that would be required for application development. Practitioners at the time found ways to drive COCOMO from function points as opposed to lines of code.
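
As a rough illustration of what Basic COCOMO does, here is a minimal sketch using the organic-mode coefficients published in Boehm's 1981 book; other project modes use different coefficients, and this is not meant as a complete implementation:

def basic_cocomo_organic(kloc):
    """Basic COCOMO, organic mode: size in thousands of lines of code (KLOC)."""
    effort_pm = 2.4 * kloc ** 1.05             # person-months of effort
    schedule_months = 2.5 * effort_pm ** 0.38  # calendar months of schedule
    return effort_pm, schedule_months

effort, schedule = basic_cocomo_organic(100)   # a 100 KLOC application
print(f"Effort: {effort:.0f} person-months, schedule: {schedule:.1f} months")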

Basic COCOMO was not as accurate as people wanted. Therefore, Boehm introduced Intermediate COCOMO at the same time. He actually introduced product-level and component-level versions of Intermediate COCOMO, but the difference is not important at this point. What is important is that Intermediate COCOMO utilized cost drivers. Cost drivers impact the estimates. They were necessary and made sense. Imagine two applications that are each 100,000 source lines of code. Will they take the same amount of time to develop? Probably not. There will be two types of differences between the two application projects. The first type is product differences. One application might be a computer game and the other an embedded system in a piece of medical equipment. The second application will have a higher required reliability. This will impact its development time. There are other product-related cost drivers. The complexity of the products may also differ and impact the development time. The other class of cost drivers is associated with the development process. How experienced is the team with this type of application? How experienced is the team with the development language/environment being used? These cost drivers also impact development effort and schedule. In fact, cost drivers can change development effort by an order of magnitude.
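
To show how cost drivers enter the calculation, here is a minimal sketch in the spirit of Intermediate COCOMO: each driver contributes an effort multiplier, and the product of the multipliers scales the nominal estimate. The driver names and values below are illustrative assumptions rather than the full published table:

def apply_cost_drivers(nominal_effort_pm, multipliers):
    """Scale a nominal effort estimate by the product of cost-driver multipliers."""
    factor = 1.0
    for value in multipliers.values():
        factor *= value
    return nominal_effort_pm * factor

# A high-reliability, complex product built by a team new to the domain.
drivers = {
    "required_reliability": 1.40,    # e.g. embedded medical device vs. a game
    "product_complexity": 1.30,
    "application_experience": 1.13,  # team is new to this type of application
    "language_experience": 1.07,
}
print(apply_cost_drivers(300, drivers))  # the nominal 300 person-months more than doubles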

COCOMO was not the only costing model around. At about this time, Larry Putnam introduced Software Lifecycle Management (SLIM). The Walston-Felix IBM-FSD model and the Price-S model were two other top-down models introduced at about the same time. Which one was best? Nobody knows! There were several bake-offs, but none actually answered that question. It turns out it was impossible to answer. Which car is best? In 1969, I saw a movie called Grand Prix. Pete Aron is a race car driver who is just about unemployable. He was reckless. A Japanese car company hires him. He wins the race. Why? If you are reading this today, and obviously you are, then you might think it was because the Japanese are capable of making a fine automobile. In 1969, this would never have occurred to you. The Japanese had introduced motorcycles to America and they were a failure. Japanese cars would be the same. Pete Aron won the race because it is the driver, not the car, that wins the race. He was driven to win and afraid to lose. That is all there was to it. Automotive enthusiasts might debate the point. However, when it comes to estimating there is no debate. It is the estimator, not the model, that produces a useful estimate!

Practitioners started to use function points to drive the top-down models. Capers Jones had produced some tables that showed how many lines of code were required to implement a function point. Thus, function points could drive models like COCOMO. Some practitioners used unadjusted function points. There were complications when the Value Adjustment Factor (VAF) was used. Which General System Characteristics (GSCs) resulted in more lines of code? They were not adequate to use in place of cost drivers. A minimum VAF would make the adjusted function point size 65% of its unadjusted size; the maximum would be 135%. The size difference is only a factor of 2. Cost drivers could usually impact the estimates to a much greater extent. Now, the International Function Point Users Group (IFPUG) has introduced the Software Non-functional Assessment Practices (SNAP). This is a counting approach that might replace the product cost drivers, but not the process ones.
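
For reference, the VAF adjustment mentioned above works roughly as follows; this is a minimal sketch of the IFPUG calculation, not a full function point counter:

def adjusted_function_points(unadjusted_fp, gsc_ratings):
    """Apply the IFPUG Value Adjustment Factor: each of the 14 General System
    Characteristics is rated 0-5, and VAF = 0.65 + 0.01 * (sum of ratings)."""
    if len(gsc_ratings) != 14:
        raise ValueError("IFPUG defines exactly 14 General System Characteristics")
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # ranges from 0.65 to 1.35
    return unadjusted_fp * vaf

print(adjusted_function_points(500, [3] * 14))  # mid-range ratings: 500 * 1.07 = 535.0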

These top-down techniques can often be performed by someone who is not familiar with all of the nuances of system development. The individuals must be familiar with the model being used, such as COCOMO. In addition, they must be trained in the sizing measure being used, such as function point analysis. This means that there is usually a small group of estimators in most organizations. In an organization using agile development, this might be a function that the product managers take on. This way, they can report back to the sponsors and other users what they expect in terms of schedule for an application development. Many organizations rely on consultants to perform these estimates. An independent estimator is often a good choice. This estimator is not overstating an estimate in order to negotiate for more resources, nor understating the estimate in order to pressure the development team to deliver faster.

Estimators look for techniques that are orthogonal to one another. This means that they are statistically independent. Top-down and bottom-up estimating approaches can be orthogonal. The bottom-up method is usually performed during the development just by virtue of identifying tasks and assigning people to them. If a top-down estimate has been developed, then it can be compared to what is being indicated by the bottom-up estimate at any time.

In the perfect world of agile systems development, all of the activity goes directly into developing application code. This is a drawback of top-down estimating. The effort that goes into it does not directly implement the application. If that effort is performed by a non-developer, then it becomes a business decision of whether the time and effort spent developing the estimate is helping the project sponsor to make better business decisions. Another area of concern is the distraction that this may be for the development and user communities. If the developers must answer questions in order to size the application, then this detracts from development effort. If a user must answer questions, then that user may be distressed if and when a developer asks the same questions again. The value of the estimate must exceed these costs or it should not be done.

The most modern of the cost models do not fit neatly into the bottom-up or top-down category. COCOMO II has replaced COCOMO as the model of choice among COCOMO fans. SPQR, Checkpoint and KnowledgePlan were released by Software Productivity Research (SPR), then under the direction of Capers Jones. Dan Galorath's SEER-SEM is one of the more recent, commercially successful estimating models. The pros and cons of these approaches are basically the same as those of the top-down models.

Categories: Blogs

Keeping Remote Teams Cohesive, Part 3: (Over-) Communication is Key

This is part of a three-part series on keeping remote teams cohesive. We recommend that you begin...

The post Keeping Remote Teams Cohesive, Part 3: (Over-) Communication is Key appeared first on Blog | LeanKit.

Categories: Companies

Major Upgrade of Virto Kanban Board for Office 365 and SharePoint

Scrum Expert - Tue, 01/03/2017 - 18:08
Using a Kanban board for task management is an excellent way of working on SharePoint-integrated projects with effective team collaboration. If you want to visualize your team's work and apply a Scrum / Agile methodology in your SharePoint environment, Virto Kanban Board is the solution you need. Virto Kanban uses any SharePoint task lists or custom lists in SharePoint 2016/2013/2010 or Office 365. The flexible settings allow you to meet almost any SharePoint project demand and display your project flow on a single board. Virto Kanban's main features are: Select colors for tasks and markers. SharePoint Kanban allows you to assign colors to distinct task types and apply markers for overdue tasks or any other custom conditions. Drag & drop tasks within columns and swimlanes. Move SharePoint tasks within columns and swimlanes that can represent project stages or issue priority, or distinguish sub-processes of a project. Any other statuses for swimlanes and columns can be used according to your project demands. Apply view and condition filters. Display tasks with custom view filters to track any project detail at a single glance. Collect statistics with graphic charts and a total count of hours. With SharePoint Kanban Board, you will always be informed how many hours were spent completing a project stage and how many tasks are assigned to each user. Statistics are displayed as color-coded charts and diagrams. Assign task management permissions and task watchers. You can delegate to certain users the rights to edit tasks on SharePoint Kanban Board and assign task watchers [...]
Categories: Communities

Agile on the Beach 2017 Call for Speakers Extended to January 11

Scrum Expert - Tue, 01/03/2017 - 17:09
Agile on the Beach is a two-day conference on Scrum and Agile approaches that will take place in Falmouth in Cornwall (UK) on 6th and 7th July 2017. The call for speakers has been extended by one week, to close on Wednesday 11th January. The Agile on the Beach 2017 conference will focus on Agile working, software creation and delivery, teams, practice and new business thinking. These themes will be organized as six tracks, some on different days: * Software Delivery: including programming, testing and operations * Team Working, e.g. culture, personnel management, self-organization, leadership * Agile Practices, e.g. agile basics, applying agile tools and methods, writing user stories * Product Design, e.g. user experience, front-end design * Product Management, e.g. requirements gathering, the product manager role * Business, e.g. applying agile beyond software. Get more information about the Agile on the Beach 2017 Call for Speakers on https://www.agileonthebeach.co.uk/page/1277163/call-for-speakers
Categories: Communities

Targetprocess v.3.10.7: Recent Items and Browsing History

TargetProcess - Edge of Chaos Blog - Tue, 01/03/2017 - 16:55
Notice: Tags Bundles Fall Into Oblivion

We've found a piece of legacy functionality from Targetprocess 2 that's been quite useless for several years — since the days when the multi-project concept was first released: Tags Bundles.

Tags Bundles are similar to a group of Tags, and were used in conjunction with features that are either outdated or have been removed. So, we wanted to let you know that we will be saying goodbye to Bundles in a few releases.

Recent Items and Browsing History

Every v.3 user has faced the challenge of quickly finding some entity which they recently viewed or modified. To fix this, we've added a special 'My Recent' tab to the views list so you can quickly find the entities that you've opened or edited lately. The tab shows the 15 most recently modified items which you own, and the 15 most recently modified items which you are assigned to. In the 'Browsed' tab, you can find a list of entities which you've opened recently.


Fixed Bugs
  • You can customize Project cards with a new unit ('Last State Change Date') and filter Projects by the date of their last state change
  • Fixed Quick Add exception in case a 'Targetprocess Entity Type' required custom field is in the form
  • Fixed timesheet page errors which appeared if a Project's abbreviation had special symbols
Categories: Companies

Continuous Planning Article Posted

Johanna Rothman - Tue, 01/03/2017 - 16:44

I have a new article up on projectmanagement.com, Continuous Agile Program Planning: Think Big, Plan Small. It’s about how to use rolling wave planning especially for an agile program.

If you are a Product Owner or you are responsible for planning what to do when, and want to learn how to do this, join my PPO Workshop, starting next week.

Categories: Blogs

Lessons for the New Year

Johanna Rothman - Tue, 01/03/2017 - 16:32

I don’t know if you retrospect on a regular basis. I do. (I know, you are so surprised!)

Andy Kaufman asked me to share my biggest learning for his podcast. Take a listen to The Most Important Lesson You Learned Last Year. I’m pleased and proud to be in such good company. Thanks, Andy!

Categories: Blogs

Where there is a will

Leading Agile - Mike Cottmeyer - Tue, 01/03/2017 - 16:00
Where there’s a will…

Grace Hopper and Margaret Hamilton were recently named among 21 recipients of the Presidential Medal of Freedom in honor of their contributions to the advancement of computer technology. Their names are well known among software professionals, even if not as familiar to the general public.

Hopper was fascinated by gadgets, and when she encountered the Mark I computer in the Navy she was hooked. She became only the third person to try and program the machine, and ultimately was awarded the Naval Ordnance Development Award for her work.

Later she developed the first compiler. Called A-0, or Arithmetic Language version zero, it comprised a set of pre-built subroutines that could take immediate arguments. It functioned as a loader or linker, and did not have text parsing functionality; but it was far more than had been done previously.

Her interest in making computers usable by humans led her to drive a series of improvements in compiler technology, from A-0, A-1, A-2 (ARITH-MATIC) to AT-3 (MATH-MATIC) to B-0 (FLOW-MATIC) to COBOL (Common Business Oriented Language). At its height in the late 20th century, COBOL is thought to have represented about 97% of all production business software worldwide. Not a bad legacy.

Hopper was a high-energy person well known for perseverance. One of her favorite catch-phrases was, “It’s better to ask for forgiveness than permission.” She was determined to figure out a way to get things done.

Years later, Margaret Hamilton would demonstrate the same personal traits and would drive further advancement in the field of software. Contrary to many of the popular accounts that name her as the “head” or “lead” or even the sole author of the software used on the Apollo flights, Hamilton joined the NASA team as the junior-most programmer.

She came to the job with a background in mathematics rather than engineering, and with no more idea what a “programmer” does for a living than anyone else on the team. In those days, there was no such job description. As Hamilton put it in an interview years later, “Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did.”

It was hammered into everyone that this had to be a zero-defect system. If an error occurred a quarter million miles from Earth, the astronauts would die. So, Hamilton inquired as to how the team was ensuring defect-free software. The answer: Augekugel.

She pressed the question and learned this meant “eye-balling.” There was one guy who was pretty adept at spotting potential integration errors by examining the source code visually. Hamilton thought this was a bit risky. She sat with him and learned his method. Eventually she saw that it was a pattern recognition process, and she perceived it could be done programmatically. She invented Higher Order Software; software that operated on other software rather than the domain problem. Today we call it static code analysis.

In the Lunar descent phase of the Apollo 11 mission, another of Hamilton’s innovations came into play. Although they were top-tier test pilots and had completed a massive amount of training, Neil Armstrong and Buzz Aldrin were doing something no one had done before, and it was hardly routine. There are detailed reports of what happened, but in a nutshell the astronauts basically turned everything on and loaded every program, just in case.

Turning everything on was a mistake, as there was a known issue with RF interference between the guidance computer and the rendezvous radar, which Aldrin switched on even though it was not needed in the descent phase. Loading the ascent program was a mistake, as the limited memory in the guidance computer didn’t allow for all the programs to be loaded at once, and that program was not needed in the descent phase.

Between sporadic hardware errors and the scary 1201 and 1202 messages on the little numeric console, the astronauts decided to land manually. Those error codes pertained to another innovation of Hamilton’s – the system could dump programs it didn’t need in order to free up scarce memory resources for the programs it did need. When the astronauts loaded too many programs, the system dutifully dumped the unnecessary ones, based on the mission phase they were in. Despite the scary-looking error codes, there was nothing wrong with the computer or the software.

By that time, Hamilton’s value to the team had been well proven time and again. But initially, she was the junior member of the team. They gave the junior team member the least important program to write. They called it the Forget It program. It would be executed in case of some catastrophic failure. They figured there would be no catastrophic failures. Fast forward to Apollo 13, and Hamilton is in the forefront of the recovery effort.

After Apollo, Hamilton and the person who actually was the team lead, Dan Lickly, formed a company they called Higher Order Software. They were ahead of their time, it seems. Companies weren’t interested in static code analysis…or else the mathematician/programmer and the engineer/professor just weren’t good at sales. Today, it’s almost unthinkable to run a professional software delivery organization without static code analysis. Not a bad legacy.

…there’s a way

When I read that Hopper and Hamilton had been honored by the President, it put me in mind of the attitude we encounter so often in large companies today. People are just one or two steps away from automating something, and they stop short. They continue to repeat the same tasks manually, dozens or hundreds of times per year.

A development team may have a well-vetted and clearly-documented procedure to run tests, integrate their code, and deploy. But they never convert their documentation into an executable script. It would save them so much effort and time!
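
As a minimal sketch of what that conversion might look like (every command name below is a hypothetical placeholder for the steps in the team's own document):

# A documented "run tests, integrate, deploy" checklist turned into a script.
# The step commands are hypothetical placeholders, not a real project's tooling.
import subprocess
import sys

STEPS = [
    ["./run_tests.sh"],           # step 1 of the documented procedure
    ["./integrate_branches.sh"],  # step 2
    ["./deploy_to_staging.sh"],   # step 3
]

for step in STEPS:
    if subprocess.run(step).returncode != 0:
        sys.exit("Step failed: " + " ".join(step))
print("Procedure complete.")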

People are pretty skilled at finding excuses not to do things. At one company, teams were hesitant to set up a continuous integration server because their architecture team was still mulling over the choice of product. I still don’t understand why the team couldn’t set up their own CI server in the meantime. It would have made their lives easier, and they would have worked out any kinks in the general CI process, making the transition to the “official” product that much smoother. Oh, well.

An infrastructure team may have a well-vetted and clearly-documented procedure to provision virtual servers of various kinds. But they never convert their documentation into executable scripts. It would save them so much effort and time!

At one company, the infrastructure team maintained a few hundred servers, mostly VMs, and mostly standard build-outs. One of the guys told me they didn’t script any of the builds because some of the systems were one-off configurations that supported legacy applications with built-in dependencies on various obsolete things. Okay, that’s understandable, but what about the 80-90% of configurations that were standard? Why not go ahead and script those? An individual engineer could at least script his/her own repeated configuration tasks, even without telling anyone about it. Wouldn’t that make for an easier work day? Oh, well.

Maybe I’m the one who’s wrong, here. Maybe I’m just too lazy to repeat the same manual tasks over and over again every day. Maybe I should be more diligent. I’ll work on that one of these days. Promise. Maybe.

If I’m not wrong, then I’m not sure what’s missing, unless it’s the preference to ask for forgiveness rather than permission, or the determination to find a way. As the recent Presidential awards suggest, there are role models available for that sort of thing.

The post Where there is a will appeared first on LeadingAgile.

Categories: Blogs

Agile Africa – Self Selection

Growing Agile - Tue, 01/03/2017 - 13:02
One of the keynotes at Agile Africa was by Sandy Mamoli, author of the book Creating Great Teams. She talked about her experience running self-selection events for large teams and gave some great advice on how you can do it yourself.
Categories: Companies

Resolutions for Agile Catalysts Everywhere (2017 Edition)

Illustrated Agile - Len Lagestee - Tue, 01/03/2017 - 02:02

A Strengthening Boldness

With the arrival of 2017 it’s hard to believe how fast 2016 went by. It’s even harder to believe it’s been 6 years since the first resolutions post back in December of 2011. You can take a look back at all of them with these links:

2012 2013 2014 2015 2016

As I sat down to write the resolutions for 2017, I decided to expand the scope of this annual activity beyond Scrum Masters to include anyone looking to be a catalyst for positive change in their organizations. The more I’ve been doing what I do, the more I realize the importance of each individual being an active ingredient in the creation of an environment of connected agility.

So regardless of your role – a team member, Scrum Master, product owner, senior leader, manager, coach, or anyone in between – these resolutions are crafted with you in mind. They are for the strong and the brave willing to break out of the status quo and be an active participant in shaping a workplace where bonds strengthen and creativity flows.

If there is a theme to the resolutions this year I guess it would be “A Strengthening Boldness.” Throughout the month of January, I will be blogging and podcasting about how change is introduced into the workplace. To embed change into how we work, it will require bold people doing bold things. Watch the video in a blog post from 2012 to see what I mean. Hopefully, these resolutions will provide a spark of boldness for you (and for me) as we venture into a new year.

But before we jump into the resolutions, I would like to express a quick but sincere thank you to the readers of The Illustrated Agile Blog and to the listeners of The Illustrated Agile Podcast throughout the year. It’s been great interacting with you in 2016 and I can’t wait to experience the adventures we can stir up in 2017.

So here they are…the 2017 Resolutions for Agile Catalysts Everywhere. If you have any of your own to add to the list please share them with all of us in the comments below.

Strengthen others. Resolve to make every interaction with others meaningful and memorable. When people leave a conversation with you they should feel bigger, encouraged, stronger, happier and more alive. This investment in the well-being of others will not only reinforce the team bonds, it will begin to release confidence throughout your workplace. While this is important for everyone, if you have people reporting to you put this one at the top of your list. If you find you can’t genuinely be a force of positivity with someone then you probably need to learn more about them. Hence…

Really learn about your teammates. Resolve to connect with someone and discover as much as you can about them. Maybe pick someone who you may not know very well or someone who, on the surface at least, might be quite different than you. Names of people are popping into your mind as you read this. Commit to reaching out to them as soon as possible.

If you’re not sure where to start, grab them for a quick coffee or a walk and have starter questions ready. A few could be:

  • How are things outside of work?
  • Are you feeling the strength of being a part of our community?
  • Do you feel your voice is being heard?
  • Are you growing in your role?
  • What is the biggest challenge you’re facing right now?

Find a mentor. If you don’t have someone who is inspiring you, challenging you, encouraging you and pushing you, resolve to find someone who can fill this role for you…today. I’ve written about the importance of finding a mentor in the past. This act of surrounding yourself with a person willing to unselfishly give of their time for the purpose of your own growth will provide dividends for the rest of your life. It has for mine.

Be a mentor. The only way this mentor thing works is if you subsequently become a mentor to somebody else. Resolve to take someone under your wing and become fully responsible for their growth and well-being this year. While this may seem daunting, just a few interactions and periods of sharing and conversing is all it takes to get started. If you are not sure who you can be a mentor to, continue the “learn about your teammates” resolution until you find someone. Your experiences could be just what someone else needs to inspire them to greater things.

Default to action. This is the biggest thing I’m working on for 2017. I’ve recently found myself in the habit of telling people, “Let me know if you need any help.” My resolution this year is to stop asking if people need help and to JUST START HELPING. Many people will never “let you know” if they need help even when they need it the most. While this will require a little more sacrifice of time, rolling up our sleeves and doing work together is how communities bond and connections strengthen.

Think simply. The book “Essentialism” mentioned the phrase “less but better.” I’m thinking this will be the phrase I will be frequently saying to myself this year. Resolve to take a hard look at how people in your organization, department, or teams really get things done and decide to do fewer things better. This will require hard decisions to be made about what is really important. For Scrum Masters and Coaches, start every conversation about how we should work (or how we should be Agile) with the question, “What is the simplest thing we can do?”

Think radically. With the pace of change in the world accelerating, by the time you’ve become “Agile” it will not be enough to keep up with future challenges. So in 2017, resolve to start thinking about fresh ideas. Better yet, start experimenting with seemingly “radical” ideas. Add creative things to otherwise mundane activities. Add mundane things to otherwise creative activities. Change things up. Ask daring questions about everyday problems and provide daring answers when a typical response would be expected. Be a shock to the system.

Looking forward to an amazing 2017 with all of you!

Becoming a Catalyst - Scrum Master Edition

The post Resolutions for Agile Catalysts Everywhere (2017 Edition) appeared first on Illustrated Agile.

Categories: Blogs

Matching strings in Scala

Xebia Blog - Mon, 01/02/2017 - 22:31
Over December I had a lot of fun doing the Advent of Code coding challenges with some colleagues. Many of those, such as day 21, require interpreting some kind of string input. While normally I'd probably marshall those strings into case classes before processing, in this case that seemed like overkill: a quick pattern-match should
Categories: Companies

Hack a Happy New Year!

J.D. Meier's Blog - Mon, 01/02/2017 - 21:04

“Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it.” — Johann Wolfgang Von Goethe

Hack a Better New Year

It’s time to dig down, dig in, and dig deep to create a great year for yourself and others.

I’m a fan of hacks for work and life.

I find that hacking away at challenges is a great way to make progress and to eventually overcome them.

Hacking is really an approach and a mindset where you try new things, experiment and explore while staying open-minded and learning as you go.

You never really know what’s going to work, until you’ve actually made it work.

Nothing beats personal experimentation when it comes to creating better results in your life.

Anyway, in the spirit of kicking off the new year right, I created a comprehensive collection of the ultimate hacks for a happy new year:

101 Hacks for a Happy New Year

This is no ordinary set of hacks.  It’s deep.  There are hacks for mind, body, emotions, career, finance, relationships, and fun.

There are hacks you can use everyday to change how you think, feel, and act.

There are hacks to help you change habits.

There are hacks to help you relight your fire and get back in the game, if you’ve been in a slump or waiting on the sidelines.

Jump back in the game, master your work and life, and have some fun in the process.

Here is a quick list of the hacks from 101 Hacks for a Happy New Year:

1. Get the power of a New Year’s Resolution on your side
2. Limit yourself to one big resolution at a time
3. Get specific with your goals
4. Dream bigger to realize your potential
5. If you want change, you must change
6. Guide your path with vision, values, and goals
7. Change a habit with Habit Stacking
8. Create mini-feedback loops
9. Bounce back from a setback
10. Avoid “All or Nothing” thinking
11. Choose progress over perfection
12. Reward yourself more often
13. Gamify it
14. Adopt a Tiny Habit
15. Just Start
16. Adopt a growth mindset
17. Create if-then plans to stick with your goals
18. Start with Great Expectations
19. Adopt 7 beliefs for personal excellence
20. Master the art of goal planning
21. Prime your mind for greatness
22. Use dreams, goals, and habits to pull you forward
23. Use the Exponential Results Formula to make a big change
24. Adopt the 7 Habits of Highly Motivated People
25. Use Trigger Moments to activate your higher self
26. Use Door Frame Triggers to inspire a better version of you
27. Find your purpose
28. Figure out what you really want
29. Use 3 Wins to Rule Your Year
30. Commit to your best year ever
31. Find your Signature Strengths
32. Practice a “lighter feeling”
33. Let go of regrets
34. 15-Minutes of Fulfillment
35. Create your ideal day the Tony Robbins Way
36. Master your emotions for power, passion, and strength
37. Start your year in February
38. Build your personal effectiveness toolbox
39. Write your story for the future
40. Get out of a slump
41. Give your best, where you have your best to give
42. Ask more empowering questions
43. Surround yourself with better people
44. Find better mentors
45. Do the opposite
46. Try a 30 Day Sprint
47. Grow 10 Years Younger
48. Don’t get sick this year
49. Know Thyself
50. Decide Who You Are
51. Decide Who You Want To Be
52. Cultivate an Attitude of Gratitude
53. Try 20-Minute Sprints
54. Create a vision board for your year
55. Adopt some meaningful mantras and affirmations
56. Practice your mindfulness
57. 15-Minutes of Happiness
58. Breathe better
59. Become your own gym
60. Master your wealth
61. Learn how to read faster
62. Let go of negative feelings
63. Live a meaningful life
64. Establish a routine for eating, sleeping, and exercising
65. Improve your likeability
66. Win friends and influence people
67. Improve your charisma through power, presence, and warmth
68. Fill your mind with a few good thoughts
69. Ask for help more effectively
70. Attract everything you’ve ever wanted
71. Catch the next train
72. Unleash You 2.0
73. Learn anything in 20 hours
74. Use stress to be your best
75. Take worry breaks
76. Use the Rule of Three to rule your day
77. Have better days
78. Read 5 powerful personal development books
79. Practice the 10 Skills of Personal Leadership
80.  Develop your Emotional Intelligence
81. Cap your day with four powerful questions
82. Build mental toughness like a Navy Seal
83. Feel In Control
84. Transform your job
85. Use work as your ultimate form of self-expression
86. Be the one who gives their all
87. Live without the fear of death in your heart
88. Find your personal high-performance pattern
89. Create unshakeable confidence
90. Lead a charged life
91. Use feedback to be your best
92. Make better decisions
93. Learn how to deal with difficult people
94. Defeat decision fatigue
95. Make the most of luck
96. Develop your spiritual intelligence
97. Conquer your fears
98. Deal with tough criticism
99. Embrace the effort
100. Finding truth from the B.S.
101. Visualize more effectively

For the details of each hack, check out 101 Hacks for a Happy New Year.

I will likely tune and prune the hacks over time, and improve the titles and the descriptions.

Meanwhile, I’m not letting perfectionism get in the way of progress.

Go forth and hack a happy new year and share 101 Hacks for a Happy New Year with a friend.

Categories: Blogs

Packer, Ansible and Docker Part 3: Multiple Roles

Previously we modified our setup to use a role from ansible galaxy to install and configure redis. One key thing lacking here is that one rarely needs to just use a role from ansible galaxy by itself so next up we’ll modify our playbook to define the server as a role that uses the redis role.

Creating Our Role

First up we’ll create a role directory containing our role and a few subdirectories we’ll use later.

mkdir -p roles/redis/{meta,tasks,defaults}

Next up, we’ll move the geerlingguy.redis role into a dependency for this new role in roles/redis/meta/main.yml.

---
dependencies:
  - role: geerlingguy.redis

We also change the playbook.yml to reference this new role and remove the Hello World task we added in part one.

---
- name: A demo to run ansible in a docker container
  hosts: all
  roles:
    - redis

That’s all for now. We’ll run packer build template.json one more time before we move on to make sure everything is working fine. If it’s not, check out the current directory structure at https://github.com/jamescarr/pad-tutorial/tree/part-3-a.

Making It Dynamic

This has been nice and all but now we’ve reached a point where we need to create more server roles. We could just copy and paste the existing template.json and change a few bits but that will become a headache, won’t it? What we should do is use a variable in packer to dynamically change which role we use.

The good news is that we already have a packer variable named ansible_host that we can repurpose for this need. We previously set it to default but we’re going to change it to specify the role to run. With this in mind, we modify our playbook.yml to use the ansible_host variable as the role that is included.

---
- name: A demo to run ansible in a docker container
  hosts: all
  roles:
    - "{{ ansible_host }}"

While we’re at it, let’s also change the docker image name and add a variable for the version number too. Add a version variable to the variables section of template.json and default it to latest.

"variables": {
"ansible_host": "",
"version": "latest",
"ansible_connection": "docker",
"ansible_roles_path": "galaxy"
},

We also update the post-processor section to use the ansible_host and version.

"post-processors": [
[
{
"type": "docker-tag",
"repository": "jamescarr/{{ user `ansible_host` }}",
"tag": "{{ user `version` }}"
}
]

Now we can run packer again but this time we specify the ansible_host. We’ll just build a docker image with the latest tag but if we wanted to override it we could specify the version too.

> $ packer build -var 'ansible_host=redis' template.json

Now we can create separate roles for any new docker image we want to build with this template, and we can add any cross-cutting, “all images get this” tasks to the playbook.yml, though it is good to keep them all encapsulated in a single role.

Add Another Role

With all these snazzy dynamic pieces in place, let’s add one more role to build a docker image for. For fun, we’ll select RabbitMQ. A quick google makes it appear that Mayeu.RabbitMQ is the role to use, so let’s run with that and add it to our requirements.yml.

---
- src: geerlingguy.redis
  version: 1.1.5
- src: https://github.com/jamescarr/ansible-playbook-rabbitmq/archive/e303777.tar.gz
  name: Mayeu.RabbitMQ

In this case, I actually discovered a defect in the latest release of the role that I was able to easily fix, so I instead reference the git commit hash of my fork. Thankfully ansible-galaxy is flexible like that.

Since we’ll probably find ourselves changing this often, let’s add a shell-local provisioner to packer under the provisioners section to run ansible-galaxy all the time as the first step.

{
  "type": "shell-local",
  "command": "ansible-galaxy install -p galaxy -r requirements.yml"
}

Next up we’ll create the role structure for our RabbitMQ role.

> $ mkdir -p roles/rabbitmq/{meta,tasks,defaults}

This time around we won’t just use the galaxy role; we’ll also configure it to our liking. For our needs we want to configure RabbitMQ with a host of plugins and define a default vhost with one user that can write to any exchange (but not configure or read from queues) and one user that is the administrator. With this in mind we add the following to roles/rabbitmq/meta/main.yml.

---
dependencies:
  - role: Mayeu.RabbitMQ
    rabbitmq_ssl: false
    rabbitmq_plugins:
      - rabbitmq_management
      - rabbitmq_management_agent
      - rabbitmq_shovel
      - rabbitmq_federation
      - rabbitmq_shovel_management
    rabbitmq_vhost_definitions:
      - name: "{{ main_vhost }}"
    rabbitmq_users_definitions:
      - vhost: "{{ main_vhost }}"
        user: user1
        password: password
        configure_priv: "^$"
        read_priv: "^$" # Disallow reading.
        write_priv: ".*" # allow writing.
      - vhost: "{{ main_vhost }}"
        user: admin
        password: password
        force: no
        tags:
          - administrator

We also use the same vhost name multiple times so rather than duplicate it all over we define a variable for it in roles/rabbitmq/defaults/main.yml.

---
main_vhost: /faxanadu

With all of these bits in place, let’s run packer to build out our new rabbitmq docker image!

> $ packer build -var 'ansible_host=rabbitmq' template.json

When this is done, we should have a docker image that can run rabbitmq with our expected configuration details!

jamescarr@Jamess-MacBook-Pro-2 ~/Projects/docker-ansible                                                                                                                        [11:47:53]
> $ docker run -it -p 15672:15672 -p 5672:5672 jamescarr/rabbitmq:latest rabbitmq-server ⬡ 6.2.2 [±master ●]
RabbitMQ 3.6.6. Copyright (C) 2007-2016 Pivotal Software, Inc.
## ## Licensed under the MPL. See http://www.rabbitmq.com/
## ##
########## Logs: /var/log/rabbitmq/rabbit@730b08abceab.log
###### ## /var/log/rabbitmq/rabbit@730b08abceab-sasl.log
##########
Starting broker...
completed with 9 plugins.

Looks good! You can see the latest state of the project at https://github.com/jamescarr/pad-tutorial/tree/efc9e035eaa0a27ba2d6066ed04a37f5efa4bf42.

Next

Unfortunately when we try to login to the rabbitmq management console we’ll notice that none of our configured users work. That’s because they were created under the hood by rabbitmqctl and not persisted. In the next post we’ll modify our little build system we have going here to run a subset of tasks on container start!

Packer, Ansible and Docker Part 3: Multiple Roles was originally published in James Carr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Blogs

Docker Recipes for Node.js: Pre-sale Dates & Details

Derick Bailey - new ThoughtStream - Mon, 01/02/2017 - 14:30

I have another update for the planned Docker Recipes for Node.js ebook – this time focused around the pricing and pre-sale dates.


The Pre-sale Dates and Prices

These are the details that you need to know, to take advantage of the significant discount that I’ll offer during the pre-sale, and be able to participate in the feedback cycles of the book.

  • Pre-Sale Dates: January 16th – 31st
  • Pre-Sale Price: $9

On January 16th, 2017, the pre-sale will start and run through the end of January. It will be open to the public and will be announced here, through my mailing list, on twitter, etc. 

During the pre-sale, you’ll be able to pick up the ebook for only $9. This will be a significant discount compared to the final price, while also giving you opportunity to be involved in the feedback cycles and direction of the book.

But there’s more to the pre-sale than just the dates and price.

A Pre-Sale Goal

I’m setting a goal with these pre-sales, to help me determine whether or not the book is really worth writing (though I suspect it is).

  • Pre-Sale Goal: 100 sales @ $9 each

If I don’t hit 100 sales in the pre-sale timeframe, the book will likely be delayed or may be cancelled entirely (refunds would be issued, if that happens). 

Having a group of people dedicated to providing feedback is important to this book’s content. But I understand that not everyone will be able to provide feedback in a timely manner, for all updates to the book. 

With 100 copies sold in the pre-sale, however, I should have enough people who can provide the needed feedback, even if a large percentage are out of pocket in the same time period. 

The revenue from these sales will also give me more options for editing, review, and possibly work on additional materials. I don’t yet have any specific plans for these aspects, but I want to leave open the possibility that more pre-sales revenue will create a better end result.

Book & Bundles For Pre-Sale Purchases

The pre-sale will likely only include 1 complete recipe at launch (as I mentioned in the last post), but the $9 price will get you much more than just the early release and final ebook.

As I’ve said already, you’ll have a chance to be involved in the feedback cycles and direction of the book. This will likely happen through a combination of email and access to the WatchMeCode community slack.

Pre-sale purchases will also be given updates to the book as they are released, and will get other related content when the book is complete!

The current plan is to have 3 tiers of content bundles for sale, with these as the final prices (subject to change).

  1. ebook only: $19
  2. ebook plus “more info” screencasts: $79
  3. ebook, “more info” screencasts, both WatchMeCode Docker guides: $199

Many of the recipes in this book will reference additional information found in WatchMeCode screencasts. Subjects such as using the Node.js built-in debugger are very relevant to the book, but not strictly something that should be covered in the book. The second tier package will include these “more info” and other related screencasts and materials.

Additionally, WatchMeCode already has Guides for Learning Docker and Node.js with Docker. The top tier package will include both of these guides, plus everything from the first two tiers. There may be additional materials included in this top-tier, but that is yet to be determined.

There’s an important note in these packages, for pre-sale purchases, as well.

All pre-sale purchases, at the $9 price, will be given the complete 2nd tier package!

That’s a $79 package for $9, as a giant thank-you for your feedback and assistance with the book.

You’ll also get a significant discount on the top-tier package (probably near 50% off).

The Pre-Sale Starts January 16th

As I said above, the pre-sale starts on January 16th and runs through the 31st.

Once this pre-sale is over, sales will stop so work can begin on the content and collaboration with the pre-release buyers.

If you’re interested, at all, in getting this book at the cheapest possible price and ensuring it is released as planned, join the mailing list (below) and be ready for the launch on January 16th.

The post Docker Recipes for Node.js: Pre-sale Dates & Details appeared first on DerickBailey.com.

Categories: Blogs

BA's on Agile Projects?

Leading Answers - Mike Griffiths - Sun, 01/01/2017 - 20:40
The role of the Business Analyst (BA) on agile projects in some ways parallels the role of the project manager (PM), in that some people believe these roles are not needed at all! The Scrum Guide, for instance, that outlines... Mike Griffiths
Categories: Blogs

Psychological Safety leads to High-Performing Agile Teams

There are two types of safety that factor into a healthy and productive enterprise environment and high-performing teams.  The first is physical safety. This is where employees have an environment where they are free from physical hazards and can focus on the work at hand. This type of safety should be part of the standard workplace promoted by company and government regulations.
The second is psychological safety that is core to enterprise effectiveness. According to Google research, high performing teams always display psychological safety.  This phenomenon has two aspects.  The first is where there is a shared belief that the team is safe to take interpersonal risks and be vulnerable in front of each other.  The second is how this type of safety along with increased accountability leads to increased employee productivity and ergo high-performing teams.  Psychological safety helps establish Agile in that it promotes a safe space for employees to share their ideas, discuss options, take methodical risks, and become productive.  An Agile mindset promotes self-organizing teams around the work, taking ownership and accountability, and creating an environment for learning what is customer value through the discovery mindset, divergent thinking, and feedback loops. Agile with psychological safety can be a powerful pairing toward high-performing teams.   

However, accountability without psychological safety leads to great anxiety.  This is why there is a need to move away from a negative mindset when results aren’t positive or new ideas are seen as different. If this occurs, employees are less willing to share ideas and take risks.  Instead, consider ways to build psychological safety paired with team ownership and accountability of the work. This can lead to high-performing teams. 

Everyone has a role to play in establishing a psychologically safe environment.  Agile Coaches and ScrumMasters can help you evolve to an enterprise where psychological safety and accountability are paired. Leadership has a strong role to play to provide awareness of the importance of a safe environment, provide education on this topic, and build positive patterns in the way they respond to results of risk taking by teams.  Team members must adopt an open, divergent, and positive mindset that is focused on accepting differences and coaching each other for better business outcomes.  Employees at all levels must be aware of the attitudes and mindset they bring.   
Categories: Blogs

Packer, Ansible and Docker Part 2: Using Ansible Galaxy

Previously we set up packer, docker and ansible to build a very simple docker image that simply placed a file under /root with some content, a very simple start. Today we’ll go further and explore using ansible roles and making some pieces a bit more dynamic.

A Real World Example

In this tutorial, we’ll build a docker image that has Redis installed. While we could go through the steps of adding an apt repository, installing and configuring Redis ourselves, why not take advantage of the fact that someone else has already done this? Enter ansible-galaxy.

Using Ansible Galaxy

Simply put, ansible galaxy is pretty much like any package manager for your favorite language, like pip for python or npm for node. You can install packages stand-alone by running something like ansible-galaxy install geerlingguy.redis, or you can define a requirements file, just like in python, to specify multiple dependencies.

Finding the Right Role

If we do a search we’ll soon discover that there are 150 or so redis roles out there… which do we use? Usually we’ll want to steer our choice toward the one with the most downloads. A very quick method I use is to just google ansible-galaxy <service> and run with the first result. Thankfully the search for ansible-galaxy redis returns geerlingguy.redis, which is by a guy who I know puts out some pretty high-quality roles. But if I didn’t know that, the number of downloads is a good indicator!

Adding our Requirements File

Our requirements.yml file is off to a simple start.

---
- src: geerlingguy.redis
version: 1.1.5

While the ansible-local provisioner lets you define a galaxy_file and installs roles for you, the ansible-remote provisioner (which we’re using) does not. So we’ll need to install the roles ourselves and specify the directory they’ll be installed in. While roles is typically a good directory name, I prefer to store my galaxy-related roles in a separate directory and leave roles for my own roles (which we’ll cover in a later blog post). For today, let’s use galaxy for roles installed from ansible-galaxy. You can specify the target path with the -p option below.

$ ansible-galaxy install -p galaxy -r requirements.yml

This is also a good time to update our playbook to use the role.

---
- name: A demo to run ansible in a docker container
  hosts: all
  tasks:
    - name: Add a file to root's home dir
      copy:
        dest: /root/foo
        content: Hello World!
        owner: root
  roles:
    - geerlingguy.redis

Finally, we need to tell ansible where our role directories are. We can do this by defining the ANSIBLE_ROLES_PATH environment variable. So we update our template.json to the below content with that added.

{
  "variables": {
    "ansible_host": "default",
    "ansible_connection": "docker",
    "ansible_roles_path": "galaxy"
  },
  "builders": [{
    "type": "docker",
    "image": "ubuntu:16.04",
    "commit": "true",
    "run_command": [ "-d", "-i", "-t", "--name", "{{user `ansible_host`}}", "{{.Image}}", "/bin/bash" ]
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get update",
        "apt-get install python -yq"
      ]
    },
    {
      "type": "ansible",
      "ansible_env_vars": [
        "ANSIBLE_ROLES_PATH={{user `ansible_roles_path` }}"
      ],
      "user": "root",
      "playbook_file": "./playbook.yml",
      "extra_arguments": [
        "--extra-vars",
        "ansible_host={{user `ansible_host`}} ansible_connection={{user `ansible_connection`}}"
      ]
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "jamescarr/demo",
        "tag": "0.1"
      }
    ]
  ]
}

With all of these changes made, let’s run packer again.

$ packer build template.json

If all goes well, we’ll see the tasks to install redis get executed.

And if we run our container we can see redis is indeed installed!

$ docker run -it jamescarr/demo:0.1 bash

For reference, you can see the project in its entirety at https://github.com/jamescarr/pad-tutorial/tree/part-two.

This is pretty nifty stuff… no more Dockerfiles masquerading as glorified bash scripts. How about we take this further and use the same docker template to build out multiple different types of docker images?

Next Up

Tomorrow I’ll write up part three and we’ll explore making this template more dynamic and utilizing our own roles to lay out specific tasks.

Packer, Ansible and Docker Part 2: Using Ansible Galaxy was originally published in James Carr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Blogs

New Role with RMC Learning Solutions

Leading Answers - Mike Griffiths - Sat, 12/31/2016 - 20:28
I have taken on an exciting new part-time role with RMC Learning Solutions as their Agile Practice Lead. I worked with RMC to create my PMI-ACP Exam Prep book and their ACP training offerings. So, I am really looking forward... Mike Griffiths
Categories: Blogs

Build Docker Images with Packer and Ansible

Recently someone asked me where a good place is to get started learning tools like docker, packer and ansible. I did some quick googling and didn’t find what I thought were really good, in-depth tutorials so I decided to write one myself!

Getting Started

This tutorial assumes you are working with OSX although you should be able to accomplish the same results on a linux workstation by seeking out the packages required for your platform.

To get started, let’s install the following tools. If you don’t already have homebrew installed I recommend installing it first.

  • Install packer. You can install it via brew install packer.
  • Install Docker.
  • Install ansible (brew install ansible).
Our First Packer Template

The goal of this tutorial is to get a packer template together that will build a docker image using ansible to provision it. With that in mind, we’re going to start by dipping our toes into packer and use the docker builder, the shell provisioner and finally a docker post-processor to export a docker image with a single file added to /root.

{
  "builders": [{
    "type": "docker",
    "image": "ubuntu:16.04",
    "commit": "true"
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo 'hello!' > /root/foo"]
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "jamescarr/demo",
        "tag": "0.1"
      }
    ]
  ]
}

Save this as template.json and run packer build template.json. Once the build finishes you should be able to run the docker image and inspect it.

jamescarr@Jamess-MacBook-Pro-2 ~/Projects/docker-ansible                                                                                                [13:11:49]
> $ docker run -it jamescarr/demo:0.1 bash ⬡ 6.2.2
root@bec87474b4a7:/# ls /root
foo
root@bec87474b4a7:/# cat /root/foo
hello!
Adding Ansible

Next, let’s swap out our shell provisioner with ansible. There are two options you can go with, ansible-local or ansible-remote. The difference here is that ansible-local will run on the target (in this case, in a docker container) while ansible-remote will run on your local machine against the target (typically over ssh). Basically your needs will determine which you want to use. If you want ansible on the target image (perhaps to render templates on container start), then ansible-local is the right path to go. Ansible-remote is great if we want to use ansible to provision the image but want to leave ansible off of the resulting image.

This one will be a rather big change… we need to use docker as the ansible_connection, as the default will try to connect over ssh and transfer files via scp… obviously this won’t work in docker unless our container runs an ssh server! So we need to instruct ansible to use the docker connection driver. This also requires us to create a consistent hostname, so we define a container name to use when building. We also use the shell provisioner again to install python, since ansible requires python on the target and the docker image doesn’t have it by default.

Here’s the updated template.json:

{
  "variables": {
    "ansible_host": "default",
    "ansible_connection": "docker"
  },
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:16.04",
      "commit": "true",
      "run_command": [
        "-d",
        "-i",
        "-t",
        "--name",
        "{{user `ansible_host`}}",
        "{{.Image}}",
        "/bin/bash"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get update",
        "apt-get install python -yq"
      ]
    },
    {
      "type": "ansible",
      "user": "root",
      "playbook_file": "./playbook.yml",
      "extra_arguments": [
        "--extra-vars",
        "ansible_host={{user `ansible_host`}} ansible_connection={{user `ansible_connection`}}"
      ]
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "jamescarr/demo",
        "tag": "0.1"
      }
    ]
  ]
}

And the new playbook.yml file which has a single task that uses copy to generate a file similar to what we created previously with the shell provisioner.

---
- name: A demo to run ansible in a docker container
  hosts: all
  tasks:
    - name: Add a file to root's home dir
      copy:
        dest: /root/foo
        content: Hello World!
        owner: root

Run packer build template.json and once it completes, let’s test it out.

jamescarr@Jamess-MacBook-Pro-2 ~/Projects/docker-ansible                                                                                                [13:47:00]
> $ docker run -it jamescarr/demo:0.1 bash ⬡ 6.2.2 [±master ●●]
root@2c46ddf2d657:/# cat /root/foo
Hello World!root@2c46ddf2d657:/# ^C
(failed reverse-i-search)`pa': ^C
root@2c46ddf2d657:/# exit
exit
Next Up

I hope this has been a good quick overview of getting started with packer, ansible and docker. This didn’t really produce anything very useful aside from introducing the pieces and providing a foundation to start with. In tomorrow’s post I’ll tackle some larger “real world” solutions and how to make this a bit more dynamic!

Build Docker Images with Packer and Ansible was originally published in James Carr on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Blogs
