Someone on Twitter asked me a question about promises a while back. It was a good question about the general use of reject and resolve to manage a yes/no dialog box.
@derickbailey I’m thinking on how I would turn showing a simple modal into a promise. Better to call resolve on hide event or btn press?
— moniuchos (@moniuchos)
@derickbailey For confirmation dialogs (yes/no) would you make “no” call .reject or rather .resolve it with a false parameter?
— moniuchos (@moniuchos)
The short answer is always resolve with a status indicator.
But I think @moniuchos is looking for something more along the lines of why you would use resolve or reject – not just a terse answer for this one specific situation.
To understand the answer, there’s some background to dig into: managing the result of a modal dialog, and understanding reject vs resolve.

Managing The Result Of A Modal Dialog
Generally speaking, my use of any view is handled with a mediator – a workflow object that manages the overall process. In the example from that post, the “getEmployeeDetail” method could easily be modified to use a modal dialog instead of just displaying the form in a normal DOM element (using Bootstrap in this example):
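The original listing isn’t reproduced here, so this is a minimal sketch of that shape, with a tiny stand-in event emitter instead of the real view object (all names are illustrative):

```javascript
// Minimal stand-in for the modal view (illustrative only - the real
// post used a richer view object and a Bootstrap modal):
function ModalView() {
  this.handlers = {};
}
ModalView.prototype.on = function (event, fn) {
  this.handlers[event] = fn;
};
ModalView.prototype.trigger = function (event, arg) {
  if (this.handlers[event]) { this.handlers[event](arg); }
};

// The mediator wires up one handler per outcome event - a "cancel"
// path and a "complete" path, kept completely separate:
function getEmployeeDetail(view, callbacks) {
  view.on("cancel", function () {
    callbacks.onCancel();
  });
  view.on("complete", function (employee) {
    callbacks.onComplete(employee);
  });
}
```

With this split, each path stays trivial to write, but the modal has to expose two separate events.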
Notice, in this example, that the result of the employee detail modal dialog is split across two possible events: a “cancel” event or a “complete” event. Having these two events split apart makes it easy to write code for each specific path. But a promise doesn’t have a “cancel” and “complete”. It only has a “reject” or “resolve” – which are not equivalent.
To move the modal code toward something that the promise can better work with, a single “closed” event could be used with the modal dialog, passing a result object with a status back to the mediator:
The change in this file is to have a single result object passed through a single “closed” event. You would then have to examine the result to see what should be done next.
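As a sketch of the mediator side of that change (the status values here are my own, not from the original post), examining the result reduces to a single branch:

```javascript
// One "closed" handler examines the result object's status field
// (status values are illustrative):
function handleClosed(result) {
  switch (result.status) {
    case "complete":
      // the user filled in the form and confirmed
      return "save " + result.employee.name;
    case "cancelled":
      // the user backed out - a normal outcome, not an error
      return "nothing to save";
    default:
      throw new Error("unknown status: " + result.status);
  }
}
```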
Note that neither of these examples is the “right” or “wrong” way to do it. Which you would use when is a matter of preference and functional needs at any given point in the application. With the second example, though, it will be easier to work with the “resolve” method of a promise. To understand why it will be easier this way, you need to understand the purpose of reject vs resolve in a promise.

When To Reject Or Resolve A Promise
There are a lot of “promise” objects and libraries and specifications out there. But with ES6 (ES2015) being “done” and implementation work underway in many browsers, I’m going to assume that the ES6 Promise object/specification is being used.
With that in mind, an understanding of reject vs resolve can be extrapolated from the MDN documentation on ES6 Promises:
A Promise is in one of these states:
- pending: initial state, not fulfilled or rejected.
- fulfilled: meaning that the operation completed successfully.
- rejected: meaning that the operation failed.
The first state, pending, isn’t really meaningful right now. Fulfilled vs rejected is what we care about. In order to fulfill a promise, the “resolve” method is called. In order to reject a promise, we call “reject”.
The question, though, is what does it mean for an operation to “fail”? Is a “cancel” button or the little red (x) on a modal dialog a “failed” operation? Or is that simply a “cancelled” state for an operation that succeeds? The answer is found by looking at how a promise behaves when you reject it.
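The listing behind that claim isn’t shown above, so here is a reconstruction of it: one rejected promise with two separate rejection handlers attached.

```javascript
var p = new Promise(function (resolve, reject) {
  reject("some reasons");
});

// rejection handler #1: the second parameter to "then"
p.then(
  function (value) { console.log("resolved:", value); },
  function (reason) { console.log(reason); }
);

// rejection handler #2: a separate "catch" on the same promise
p.catch(function (reason) { console.log(reason); });
```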
In this example, the output will be “some reasons” printed twice to the console. This happens because the rejection is being handled twice – once with the second parameter to the “then” call, and again with a “catch” call.
The catch method can be useful for error handling in your promise composition.
You can also verify this by throwing an exception from within your promise, watching both the “catch” and “onReject” callbacks firing:
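A sketch of that behavior (the error message is made up):

```javascript
var p2 = new Promise(function (resolve, reject) {
  // throwing inside the executor rejects the promise
  throw new Error("something unexpected");
});

p2.then(
  function (value) { console.log("resolved:", value); },
  function (err) { console.log("onReject:", err.message); }
);

p2.catch(function (err) { console.log("catch:", err.message); });
```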
All of this points to the conclusion that you call “reject” when the “failure” of the process is a truly exceptional state – something that was not anticipated. If an error is thrown, a state that cannot be handled is found, or some other condition that cannot be handled through normal means occurs, this is when you “reject” the promise.
But, a “cancel” button or the little red (x) being clicked? That is certainly not an unexpected or exceptional state. That is something your code should handle under normal circumstances, not as an error condition with the equivalent of a try / catch block.
In other words, a “cancel” or “no” or click of a red (x) to close a dialog is something to be handled via the “resolve” method of a promise, not the reject method.

Resolving The Close Of A Modal Dialog
With this newfound knowledge of when to resolve vs reject, let’s go back to the modal dialog shown in the earlier example.
The second of the two code listings shows a single “closed” event that is used to deliver the status of the dialog box back to the mediator object that controls the higher-level flow. Since a promise has a single “resolve” method, the single “closed” event from the modal dialog makes life a bit easier.
The code can be modified (with many details omitted in this example) with only a few changes to the workflow:
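Since the original listing isn’t reproduced here, this is a sketch of the promise-driven version (the names are illustrative):

```javascript
// The modal's single "closed" event resolves the promise. Note that
// a cancelled dialog still resolves - it is a normal outcome, not an
// error, so "reject" is never called here.
function showEmployeeDetail(view) {
  return new Promise(function (resolve) {
    view.on("closed", function (result) {
      resolve(result);
    });
    view.showModal();
  });
}
```

The mediator then examines `result.status` in its “then” callback, exactly as the event handler did before.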
The callback function for the promise’s “then” hasn’t changed at all from the callback for the “closed” event. The major difference is found in how that callback is executed. Rather than being event driven, it is now promise driven.

Handling Additional Results
It might seem trivial to handle a form close as a “reject” in a promise, even when looking at the above reasoning and examples. But if you did that, you would quickly run into some rather serious limitations and complexities.
For example, if a modal dialog needs to change from simply yes/no or closed/complete to a more complex set of results, you would be in trouble with “resolve” as yes and “reject” as no. Say you’re implementing a wizard-style UI with promises. What do you do when the wizard has a “next” and a “previous” event, as well as a “cancel” and “complete” event? You’ve only got two states to model this within if you’re using resolve and reject as positive and negative responses.
Even if you’re only dealing with yes/no results, modeling no as a reject is dangerous because of the way exceptions are handled. You saw in the promise example that throws an exception, how the exception is handled in the “catch” or “onReject” callbacks. If you tried to model a “no” as reject, you would end up with logic in your “onReject” callback to check if you’re dealing with an exception or simply a negative response. This kind of logic gets complicated, quickly. It makes the code brittle and difficult to work with.
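To make the contrast concrete, here is a sketch of the wizard case with every outcome flowing through resolve (the status values are my own):

```javascript
// With every outcome resolved through one status field, adding a
// new wizard result is just another case - no rejection-handler
// logic has to guess whether it is looking at an exception:
function handleWizardResult(result) {
  switch (result.status) {
    case "next":     return "show next step";
    case "previous": return "show previous step";
    case "cancel":   return "discard changes";
    case "complete": return "save the wizard data";
    default:
      throw new Error("unknown status: " + result.status);
  }
}
```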
All that being said, having everything modeled in the “resolve” of a promise isn’t always the best way to move forward, either.

It’s All About Tradeoffs
My original workflow blog post doesn’t use promises or single return values with status codes for a very specific reason: explicit handling of state changes tends to reduce complexity in code.
For every state / result that can be produced by a given object, a single “closed” or “resolved” handler would require you to add yet another branch of logic to your if-statement or switch-statement. With only two states to handle, this may not be a problem, but it will quickly get out of hand.
By modeling each return status as a separate event from the form, it will be easier to add / remove / change the number of possible results without adding complexity to the code. Rather than having a series of if-statements or a long switch statement, each result will be facilitated by an explicit callback for that result.
The downside to the explicit callback-per-result pattern is added complexity in managing all of the possible states. You need good documentation and probably a fair amount of testing to make sure you have handled all of the necessary result callbacks.

In The End …
If you’re already dealing with a promise-based API, you can use resolve to manage the result of a modal dialog or any other code.
If you don’t have an event-based API on which you can model a specific event per result, or if doing that would add complexity in managing the object life-cycles, a promise can be a good way to handle things.
I’ve been playing around with the Road Safety open data set and the download comes with several CSV files and an Excel spreadsheet containing the legend.
There are 45 sheets in total and each of them looks like this:
I wanted to create a CSV file for each sheet so that I can import the data set into Neo4j using the LOAD CSV command.
I came across the Python Excel website which pointed me at the xlrd library since I’m working with a pre-2010 Excel file.
I ended up with the following script which iterates through all but the first two sheets in the spreadsheet – the first two sheets contain instructions rather than data:
from xlrd import open_workbook
import csv

wb = open_workbook('Road-Accident-Safety-Data-Guide-1979-2004.xls')

for i in range(2, wb.nsheets):
    sheet = wb.sheet_by_index(i)
    print sheet.name
    with open("data/%s.csv" % (sheet.name.replace(" ", "")), "w") as file:
        writer = csv.writer(file, delimiter=",")
        print sheet, sheet.name, sheet.ncols, sheet.nrows
        header = [cell.value for cell in sheet.row(0)]
        writer.writerow(header)
        for row_idx in range(1, sheet.nrows):
            row = [int(cell.value) if isinstance(cell.value, float) else cell.value
                   for cell in sheet.row(row_idx)]
            writer.writerow(row)
I’ve replaced spaces in the sheet name so that the file name on a disk is a bit easier to work with. For some reason the numeric values were all floats whereas I wanted them as ints so I had to explicitly apply that transformation.
Here are a few examples of what the CSV files look like:
$ cat data/1stPointofImpact.csv
code,label
0,Did not impact
1,Front
2,Back
3,Offside
4,Nearside
-1,Data missing or out of range

$ cat data/RoadType.csv
code,label
1,Roundabout
2,One way street
3,Dual carriageway
6,Single carriageway
7,Slip road
9,Unknown
12,One way street/Slip road
-1,Data missing or out of range

$ cat data/Weather.csv
code,label
1,Fine no high winds
2,Raining no high winds
3,Snowing no high winds
4,Fine + high winds
5,Raining + high winds
6,Snowing + high winds
7,Fog or mist
8,Other
9,Unknown
-1,Data missing or out of range
And that’s it. Not too difficult!
I’ve previously written a couple of blog posts showing how to strip out the byte order mark (BOM) from CSV files to make loading them into Neo4j easier and today I came across another way to clean up the file using tail.
The BOM is 3 bytes long at the beginning of the file, so if we know that a file contains it then we can strip out those first 3 bytes using tail like this:
$ time tail -c +4 Casualty7904.csv > Casualty7904_stripped.csv

real    0m31.945s
user    0m31.370s
sys     0m0.518s
The -c flag is described thus:
-c number
        The location is number bytes.
So in this case we start reading at byte 4 (i.e. skipping the first 3 bytes) and then direct the output into a new file.
Although using tail is quite simple, it took 30 seconds to process a 300MB CSV file which might actually be slower than opening the file with a Hex editor and manually deleting the bytes!
Many of us have been on a journey from team-level agile adoption to the complexities of enterprise-wide Lean-Agile transformations. Changing the way we develop software is just one part. We also need new approaches to budgeting, forecasting, and capitalization. These approaches are well described in SAFe.
Our next step is contract agility. What contract models can balance fixed and variable scope and move us to a zone of collaborative systems building? This Agile2015 presentation discusses commercial and U.S. federal government contract models, provides two case studies, and ends with an introduction to the SAFe Managed Investment Contract model.
You can download it at http://www.scaledagile.com/agile2015/
Our Gold Partner, Accenture, is a $30 billion company that ranks #44 on Forbes’ “World’s Most Valuable Brands” list, so when they adopted Agile and DevOps across their Global Delivery Network, we were glad to see that they chose SAFe as an integral part of their effort to accelerate software delivery. It presented an opportunity to see SAFe in action on a truly large scale.
In the provided case study, Accenture shares its insights on addressing process, organization, and tool challenges, and the early benefits are compelling:
Early Quantitative Benefits
- 50% improvement in merge and retrofit (based on the actual effort tracked)
- 63% improvement in software configuration management (effort to support SCM activities)
- 59% improvement in quality costs (percentage of defects attributed to SCM and deployment)
- 90% improvement in build and deployment (process and effort to raise deployment requests)
“SAFe is critical to the alignment of delivery timelines.”
Early Qualitative Benefits
- Improved demand management and traceability from portfolio through to Agile delivery teams
- Granular configuration management and traceability
- Integration with Agile lifecycle tools to allow story-based, configuration management driven from meta data
- Real-time traceability of status for build and deployment
- Automated build and deployments, including “one-button deployment”
- Developer efficiencies as a consequence of improved tool interaction times and processes
Many thanks to Accenture’s Mirco Hering, APAC lead for DevOps and Agile, Andrew Ball, senior manager, and Ajay Nair, APAC Agile lead for Accenture Digital, for taking the time to share their insights and learnings. Their story is an inspiration to all of us in the SAFe community.
The team decides how much work it will do in a Sprint. No one should bring pressure on the team to over-commit. This simply builds resentment and distrust, and encourages low-quality work. That said, of course teams can be inspired by challenging overall project or product goals. But a stretch goal for a Sprint is just a way to guarantee failure. Even the team should not set its own stretch goals.
There are a few interesting principles that apply here. For example, the Agile Manifesto mentions sustainability:
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
The Agile Manifesto also points out the importance of trust:
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Stretch goals are incompatible with both of these principles from the Agile Manifesto.
There are two types of stretch goals. The first type are those assigned by outsiders to the team. The second type are those which a team sets for itself. Both types are bad.

Stretch Goals Assigned by Outsiders
The worst extreme of this type of stretch goal is also the most common! This is the fixed-scope-fixed-date project deadline. In this type of stretch goal, the project team, doing Scrum or not, is forced to work backwards from the deadline to figure out how to get the work done. If the team can’t figure this out, managers often say things like “re-estimate” or “just get it done.” (Note: another thing that managers do in this situation is even worse: adding people to the project! Check out “The Mythical Man-Month” by F. Brooks for a great analysis of this problem.)
My anecdotal experience with this sort of thing is simple: quality suffers or sustainability suffers. I once worked with three other people on a mission critical project to help two banks with their merger. There was a regulatory deadline for completing the integration of the two existing systems for things like trading, etc. Fixed-scope-fixed-date. Coffee and sleepless nights were our solution since we tried not to sacrifice quality. We actually ended up working in my home for the last few 24-hour stretches so that we would have access to a shower. Suffice it to say, there’s no way we could have sustained that pace. It’s anti-Agile.
A quick search for ideas and opinions about stretch goals makes it very clear that there is no commonly agreed “correct” answer. However, from an Agile perspective, stretch goals assigned by outsiders are clearly against the principles of the Agile Manifesto.

Stretch Goals Set by the Scrum Team
The Scrum Guide states:
The number of items selected from the Product Backlog for the Sprint is solely up to the Development Team. Only the Development Team can assess what it can accomplish over the upcoming Sprint.
For emphasis: what it can accomplish – not what it (the Development Team) wants to accomplish, or what it should accomplish, or what it could accomplish if everything goes perfectly. A Development Team should be accomplishing their Sprint plan successfully (all Product Backlog Items done) on a regular basis. Of course, exceptional circumstances may intervene from time to time, but the team should be building trust with stakeholders. Here’s another story:
I had a good friend. We would always go out for coffee together. We just hung out – chatted about life, projects, relationships. Of course, from time-to-time one or the other of us would cancel our plans. That’s just life too. But there came a time when my friend started cancelling more often than not. There was always a good excuse: I’m sick, unexpected visitors, work emergency, whatever. After a little while of this I started to think that cancelling would be the default. I even got to the point where I was making alternative plans even if my friend and I had plans. I got to the point where I no longer trusted my friend. It didn’t matter that the excuses were always good. Trust was broken.
It doesn’t matter why a team fails to meet a goal. It reduces trust. It doesn’t matter why a team succeeds in meeting a goal. It builds trust. Even among team members. A team setting stretch goals is setting itself up for regular failure. Even if the team doesn’t share those stretch goals with outsiders.
Stretch goals destroy trust within the team.
Think about that. When a team fails to meet its own stretch goal, team members will start to look for someone to blame. People look for explanations, for stories. The team will create its own narrative about why a stretch goal was missed. If it happens over and over, that narrative will start to become doubt about the team’s own capacity, either by pin-pointing an individual or in a gestalt team sense.

Trust and Agility
The importance of trust cannot be over-stated. In order for individuals to work effectively together, they must trust each other. How much trust? Well, the Agile Manifesto directly addresses trust:
Build projects around motivated individuals. Give them the environment and support they need and trust them to get the job done.
At a large big room planning (BRP) event run by one of our customers recently, between 400 and 500 people spent two days planning their next 12 weeks of work. The stakes were high, as the health care product they are racing their competitors to deliver must go live by January 1st. Rally talks about big room planning as the “secret sauce” of agile at scale, and one of the key reasons it's so powerful is because it exponentially cuts down on decision-making cycle time. In short, lots of decisions get made. Fast.
I worked with one of the customer's Agile Release Trains (each Train consists of a number of associated teams) over these two days to prioritize and plan their upcoming work. It’s a tiring two days: not because of the physical demands so much as the constant collaboration, communication, and thinking required between and across trains. We’re often not used to working at that pace in our regular office jobs. Traditional planning of this type — usually document-driven, over an annual cycle— is fundamentally flawed because the teams and stakeholders are not collaborating and making decisions in realtime. Why is this a big deal? Because decisions in progress are a form of WiP (work in progress) and, in short, too much WiP is harmful.
Like all WiP, decisions in progress (DiP) should be completed as soon as possible: ideally you should reduce your cycle time for making decisions, reduce the batch size of your decisions, and increase their frequency. If your DiP is high or out of control you’ll see the same problems as with high WiP: bottlenecks, long lead times, long feedback cycles, and increased defects, to name a few.
What does this have to do with big room planning? Well, it’s an event where 500 people make a lot of decisions over the course of two days. Let’s use some simple math: if 500 people each make 20 decisions per day (a conservative estimate), that’s 20,000 total decisions over the two-day event, with an average cycle time of around one working day. In my view, that’s a good return on investment. Divide the cost of running a big room planning event by the number of decisions made and it will equate to around the price of a good coffee per decision. (Having unlimited coffee at this kind of event is one more good decision, as it pays for itself.)
If the math doesn’t convince your stakeholders, then look at your company’s current decision-making ceremonies: these may be monthly meetings, stage gates, approval meetings, etc. How many decisions get made? Some, sure. But 20,000? Probably not. What’s your average cycle time for making decisions? Maybe some decisions are made in the meetings; but if your decision is in the queue until the next meeting — the following month — that’s a 30-day DiP cycle. Lean experts like Mary and Tom Poppendieck back up the high cost of DiP to an organisation through studying processes’ value-added versus non-value-added time. Often the latter exceeds 80% through delays and hand-offs, many of which relate to decisions in progress.
A large financial institution I recently visited described the ceremonies used by its Agile Release Train. They initially started off with a daily Scrum of Scrums meeting, with a view to reducing the frequency once the teams became more mature. However, they derived so much value from reducing decision-making cycle times that they agreed to keep it on their daily schedule. How many teams in your company vote to meet more often than required because the meeting proves its value?
So: treat your DiP like WiP. Aggressively reduce it, shorten your cycle times, improve the flow of DiP, and improve the quality of your decisions through realtime, face-to-face collaboration. You can start by identifying three upcoming decisions in your area of the organization (e.g. small, medium, large) and how these will be handled. Brainstorm with your team about what you can do to reduce the DiP cycle time for each decision.
In the time it took to write this blog, 1,000 decisions were made in the aforementioned big room planning event. In the same time period, how many decisions were made about your company’s next release?

Suzanne Nottage
The Kanban technique emerged in the late 1940s as Toyota’s reimagined approach to manufacturing and engineering. The system’s highly visual nature allowed teams to communicate more easily on what work needed to be done and when. It also standardized cues and refined processes, which helped to reduce waste and maximize value. The application of Kanban […]
I’ve been noodling around on how to distill the LeadingAgile perspective on agile transformation down to its most non-negotiable core. I thought it might be fun to write up a manifesto to guide thinking around transformation.
The LeadingAgile Transformation Manifesto
Agile transformation is about systematically creating an environment for agile to thrive in your organization.
At the delivery level, agile is about forming teams, building backlogs, and producing working tested increments of product.
At the enterprise level, agile is about team-based organizations, adaptive governance, and metrics that support the flow of value.
Anything that prevents you from doing these things, either at the delivery level or across the enterprise, is an impediment that must be removed.
Of course this is a tightly packed description.
It doesn’t say anything about the attributes of an environment where agile can thrive, or the nature of an agile team, a description of a backlog, or even what it means to produce a working tested increment of product.
It doesn’t describe anything about what it means to form a team-based organization, to apply adaptive governance, or the nature and form of metrics which support measuring the flow of value.
It doesn’t describe the kinds of things that will inevitably get in the way of forming teams, building backlogs, or producing working tested increments.
It doesn’t describe the kinds of things that will inevitably get in the way of forming team-based organizations, applying adaptive governance, or installing metrics which support measuring the flow of value.
It doesn’t say anything about installing Product Owners or ScrumMasters or what kind of training you need to do, or what to do about culture, or what to do about the people that don’t seem to want to get on board.
The point is… if you form teams, build backlogs, and produce working tested software… and you remove all the impediments… whatever they happen to be… I believe everything else will take care of itself with minimal effort.
At scale… if you form team based organizations, use lean/kanban based adaptive governance, and install healthy metrics which promote the flow of value into the marketplace… everything else will take care of itself with minimal effort.
In the absence of this minimal subset, nothing we do works.
In the presence of this minimal subset, nothing much else matters.
So… your job in leading an agile transformation is to create the environment where agile can thrive. Your job is to form teams, build backlogs, and produce working tested software. Your job is to create team-based organizations, apply adaptive governance, and install metrics that promote the flow of value.
Anything that gets in the way is an impediment to transformation.
Anything else is vanity that won’t make any difference at all.
At a recent SUGSA open space in Johannesburg someone posed the question “where do I find Scrum Masters?”. I hear this question asked repeatedly in different forms by people trying to transition to lean-agile ways of working. I believe such questions are born out of the historic machine model we have of organisations, in which people are fungible resources. At least in the world of knowledge workers this is both untrue and damaging.
Moreover, if everyone in a rapidly growing number of companies adopting Scrum chases after the same pool of experienced Scrum Masters, we are not addressing the need. We are just recycling the same group of experienced people and not growing capacity. Let’s examine the capacity requirement a little more. If you’re starting out with Scrum, for every 10 or so development team members (the people who actually do the work) you need one Scrum Master (aka team coach). I’ll save the sermon here on why you need a Scrum Master per one or two teams. You can read Michael James’ Scrum Master Checklist to learn more about that.
My simple response to the original question is “grow a pair!”

KANBAN TRAINING
Once again Scrum Sense are teaming up with LEANability to bring foundation and advanced Kanban training to South Africa. Join Dr. Klaus Leopold on a 2-day practical learning journey and deepen your knowledge of Kanban. After completion you will receive certification through the Lean Kanban University.
Kanban is a new technique for managing software development processes in a highly efficient way. It underpins Toyota’s “just-in-time” (JIT) production system. Kanban provides a way of prioritising workflow and is effective at uncovering workflow and process issues.
Book your place on one of the following courses:
SPECIAL OFFER: Book for 3 and only pay for 2!
As hopefully most of you are aware, Scrum Sense is in the process of merging with agile42, a leading global agile coaching company.
We encourage you to sign-up to their newsletter to receive monthly company updates as well as interesting blog posts by their agile coaches.
Certified Scrum Product Owner (JHB)
15-16 Sept 2015
Certified Scrum Master (CPT)
28-29 Sept 2015
Certified Scrum Master (JHB)
06-07 Oct 2015
Improving & Scaling Kanban – Advanced (JHB)
05-06 Nov 2015
Applying Kanban – Foundation (CPT)
09-10 Nov 2015
Several of us among LeanKit’s founders and early employees first learned about Lean in the context of logistics and manufacturing. We wrote and implemented software that helped big companies buy and move and track physical goods. So we learned about the Lean concept of reducing waste in terms of inventory, transportation, motion, etc. It made […]
Inspired by Mike Cohn’s blog post “Improving On Traditional Release Burndown Charts” [Coh08], I created a time-lapsed version of it. It also nicely demonstrates that forecasts of “What will be finished?” (at a certain time) get better as the project progresses.
The improved traditional release burndown chart clearly shows (a) what is finished (light green), (b) what will very likely be finished (dark green), (c) what may or may not be finished (orange), and (d) what is almost guaranteed not to be finished (red).
This supports product owners in ordering the backlog based on current knowledge.

Simulation
The result is obtained by doing a Monte Carlo simulation of a toy project, using a fixed product backlog of around 100 backlog items of various sizes. The amount of work realized also varies per project day based on a simple uniform probability distribution.
Forecasting is done using a 'worst' velocity and a 'best' velocity. Both are determined using only the last 3 velocities, i.e. only the last 3 sprints are considered.
The 2 grey lines represent the height of the orange part of the backlog, i.e. the backlog items that may or may not be finished. This also indicates the uncertainty over time of what actually will be delivered by the team at the given time.
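A simplified sketch of that simulation logic (my reconstruction; the original varied the work per project day and plotted the results with gnuplot, while this version draws one velocity per sprint):

```javascript
// Toy Monte Carlo run: velocity per sprint is drawn uniformly from
// [minVelocity, maxVelocity], and the forecast band at each sprint
// uses the worst and best of the last 3 observed velocities.
function simulate(totalWork, minVelocity, maxVelocity) {
  var velocities = [];
  var forecasts = [];
  var remaining = totalWork;

  while (remaining > 0) {
    var v = minVelocity +
      Math.floor(Math.random() * (maxVelocity - minVelocity + 1));
    velocities.push(v);
    remaining = Math.max(0, remaining - v);

    var last3 = velocities.slice(-3);
    forecasts.push({
      remaining: remaining,
      worst: Math.min.apply(null, last3), // feeds the lower grey line
      best: Math.max.apply(null, last3)   // feeds the upper grey line
    });
  }
  return forecasts;
}
```

As more sprints complete, the worst/best band computed from the trailing window is what narrows the gap between the two grey lines.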
The Making Of...
Over time the difference between the 2 grey lines gets smaller, a clear indication of improving predictability and reduction of risk. Also, the movie shows that the final set of backlog items done is well between the 2 grey lines from the start of the project.
This looks very similar to the ‘Cone of Uncertainty’. Besides the fact that the shape of the grey lines only remotely resembles a cone, another difference is that the above simulation merely takes statistical chances into account. The fact that the team gains more knowledge and insight over time is not considered in the simulation, whereas it is an important factor in the ‘Cone of Uncertainty’.

References
[Coh08] "Improving On Traditional Release Burndown Charts", Mike Cohn, June 2008, https://www.mountaingoatsoftware.com/blog/improving-on-traditional-release-burndown-charts
[GNU Plot] Gnu plot version 5.0, "A portable command-line driven graphing utility", http://www.gnuplot.info
[ffmpeg] "A complete, cross-platform solution to record, convert and stream audio and video", https://ffmpeg.org
DFW Scrum Meeting Aug. 18th 2015
It’s said that two heads are better than one, in reference to problem solving. We will use Tangram puzzles to simulate this experience, and via structured debriefs of these exercises, discover the powerful behaviors of awesome collaboration, and the negative warning signs of poor collaboration. We will jump right into simulation exercises, come prepared to have FUN and learn by doing. No lecture - if you want a lecture… go here: http://lmgtfy.com/?q=+collaboration+pair+programming+lectures
Here are some of the resources and exercises if you wish to reproduce this workshop or want to dig further into the science behind collaboration.
Presentation Cultivation Collaboration (PDF)
References on collaboration (PDF)
Jim Tamm’s TED TALK on defensiveness (PDF)
Kind of a neat milestone for our company. LeadingAgile debuted at number 332 on the Inc. 500 list of fastest-growing privately held companies in America. We were number 18 in Georgia. Number 28 in our industry. Our growth rate was 1410% over the past three years. Again… it’s been a hell of a ride.
Thanks for being there with us through the journey.