Feed aggregator

Develop Against Local Node Modules, Deploy From npm or Github

Derick Bailey - new ThoughtStream - Tue, 04/05/2016 - 00:44

Using a module that’s hosted in npm or on github is fairly easy. You can “npm install” or just update your package.json file with the github location and be done.

But what about developing locally? How do you get your code from a module that you are building, into a project that needs it, without going through the typical npm or github install?

Symlinks To The Rescue

npm has a feature built into it to help you work on modules locally and use the code in your project without publishing it to the npm repository: symlinks.

The gist of it is that you create a symlink inside of your actual project’s node_modules folder, and you point it to the location of your module on your local system. This lets you run the current, in-development code in your project, without publishing it to npm first.
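
Conceptually, the result is as if you had created the symlink by hand (npm actually links through a global folder first, and the path here is hypothetical):

    ln -s ~/dev/foo ./node_modules/foo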

The best part, though, is that npm automates this for you.

npm link

Say you have a module, “foo”, that you are building on your system. When you have a working version and need to test it in your project, you’ll need to run two steps:

  • tell npm this module can be linked
  • link it into your project

Start in the “foo” module’s development directory, and run this:
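
    npm link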

That’s it. Two words: “npm link”.

Behind the scenes, this will tell your local npm installation that the “foo” library can be linked into other projects.

Now head over to the project that needs to use “foo” and run this:
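
    npm link foo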

These three words will tell npm to create a link to the “foo” module within the current project.

If you look at the node_modules folder, you’ll see the symlink to the module. This means you can run your standard require(“foo”) call inside of your project, and it will load your local development version of “foo”.
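
For example, in your project code:

    var foo = require("foo");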

But, what about deployment to production?

Deploying Production Modules

The great thing about linked modules on your local box is that they are only linked on your local box. The symlink that npm created only exists on your local machine, and nowhere else.

With that in mind, once you have the “foo” module working the way you need it, you would publish it to npm like any other module. Your project’s “package.json” file would still contain a standard reference to the “foo” module, to install from npm, as well.
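
For example, the dependency entry might look something like this (the version range is hypothetical):

    {
      "dependencies": {
        "foo": "^1.0.0"
      }
    }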

When you deploy the project and run “npm install” for the production or test or whatever environment, npm will install the published version of your “foo” module.

You only need to make sure you publish the “foo” module before you deploy your code.

Unlinking

P.S. You can just as easily unlink a linked module from your local project with… you guessed it, “npm unlink foo”.

Categories: Blogs

March Highlights—Latest Articles & Videos for the SAFe of Mind

Agile Product Owner - Mon, 04/04/2016 - 22:58

Hi Folks,

We started this new “Highlights” series last month to capture the SAFe news and activities that haven’t been covered in other posts, but would still provide real value to the community.  Here’s the latest roundup—a mix of new perspectives, understanding, and practical steps for successful implementation of the Framework.

Recorded Webinar: Building Complex Systems with SAFe 4.0
Presented by Alex Yakyma, SAFe Fellow and Principal Consultant, Scaled Agile & Harry Koehnemann, SPCT, Director of Technology, 321 Gang

This is a must-see video for anyone working on large solutions involving hundreds or more practitioners per value stream. Alex and Harry take a deep dive into the challenges large systems builders face and key approaches for addressing them. They examine what makes complex systems development so complex, and explore the myths that surround Lean-Agile in a multidisciplinary world that includes hardware, firmware, and other engineering domains.

SAFe Provides a Recipe for Agile at Enterprise Scale
by Guy Harrison, Executive Director of R&D at Dell
Guy Harrison describes SAFe as attempting to define the ‘Agile Enterprise;’ a solution to software development that is still somewhat at odds with the broader business lifecycle in which it resides.

The Top 10 Pitfalls of Agile Capitalization
by Catherine Connor, SPC4, Portfolio Agility Solutions Specialist at CA Technologies
A good read for companies looking for ways to accurately and defensibly capitalize agile software development.

Ready to get started on your SAFe® 4.0 transformation? (SAFe 4.0 for ALM)
by Amy Silberbauer, SPC, Solution Architect, Enterprise Scaled Agile (SAFe), Strategic DevOps at IBM

“Is it SAFe yet?” Get the answer in this 5th installment describing the transformation of IBM’s internal ALM development organization towards a Continuous Delivery model.

Using Agile Based Strategic Planning Across Your Enterprise
by Steve Elliott, SPC, CEO of AgileCraft

What does it mean to have an “Agile” strategic planning process? Steve Elliott discusses how enterprise agility solves the three key problems of traditional strategic planning.

SAFe Agile Release Train Launch (ART) / Client: Totemic Tech
by McKenna Consultants/ Nick McKenna, SPC, CEO McKenna Consultants

This quick read describes the early stages of a SAFe adoption by Totemic Tech, a UK-based SaaS platform provider for banking and finance markets.

Shiny Objects and SAFe – article hosted by VersionOne
by Tom Weinberger, SPC, Agile Coach at Blue Agility

Too much WIP on your plate? Read Tom Weinberger’s discussion on how to handle multiple requests from multiple stakeholders within the SAFe discipline.

BestBrains SAFe PI Planning Simulation

The brains from BestBrains share a 5-minute video on their 2-day PI planning simulation. Check it out for some creative use of LEGOS and chalkboards as the ART teams build a ‘village.’ Said one participant, “It gave me a deep understanding of SAFe and how to use the theory in practice.”

We’ll keep rounding up these great resources for the community, so stay tuned to the blog.

In the meantime, stay SAFe!

–Dean

Categories: Blogs

Pivotal Tracker Analytics: Now Out of Beta!

Pivotal Tracker Blog - Mon, 04/04/2016 - 21:12

It’s official! Pivotal Tracker’s new project Analytics features are now out of beta!

As thousands of you have already discovered, Analytics bring a new level of visibility to your project and allow you to easily uncover bottlenecks and continuously improve your team’s performance. They’re a collection of simple but powerful reports that give insight into your project’s cadence, including fluctuations over time. You get a high-level view of progress, and the power to easily drill down into details.

Here’s what some of our customers have said about Analytics recently:

“We love the new reports, they have been very helpful for us to communicate expectations with managers, as well as optimize our cycle time.”
—Dennis Stevense, Lead Software Engineer, Streamline

“Loving Pivotal Tracker’s new analytics! Helpful and practical, with all the insight we were looking for.”
—Matthew O’Neill, COO, SameWave

Where to start
Analytics replace Tracker’s old progress and points breakdown reports, as well as the in-panel charts. The new Analytics are one click away from your project—just use the Analytics tab in the new navigation at the top of the page. This is also how you can get to your project settings and members pages (it replaces the old cogwheel menu in the project sidebar).


Analytics provide a number of ways for your team to explore your project data:

Project Overview

Spot high-level metrics and trends in one glance, and drill down easily to various detailed reports and charts, including Velocity, Burnup, and Cumulative flow.


Iteration

Get a snapshot of progress made in a given iteration, with iteration-level burnup and flow charts.


Epics

View feature-level progress, and drill down to detailed reports that help you understand how scope changed over time.


Releases

Visualize historical progress as well as what remains for important milestones.


Story Activity

See and share story-level progress for a given date range or iteration.


Cycle Time

Find out how the time spent on stories trends over time, and which stories have taken the longest.

Click “view report” anywhere you see it to drill down to a more detailed report (e.g., reports for individual releases that also include release burndown charts). These charts allow you to filter by label or epic, or see a list of stories for a given iteration by clicking on that iteration in the chart.

Note: Currently, Analytics allow you to see up to six months of historical data.

Coming soon!
We will be publishing a series of Analytics-related blog posts over the next few weeks, so be sure to follow us on Twitter to stay informed.

One of the next steps is improved cross-project analytics and visibility, and we’re looking for customers that might be willing to help us shape these. If you would benefit from being able to see progress, trends, or status across multiple projects, please get in touch!

Your feedback throughout the beta process has been instrumental in getting Analytics to where they are today, so thank you! But please don’t stop—this is just one milestone on a long journey. Continue to send us feedback via the widget at the top left of the Analytics page, or email tracker@pivotal.io, to share your comments and suggestions.

The post Pivotal Tracker Analytics: Now Out of Beta! appeared first on Pivotal Tracker.

Categories: Companies

SAFe Lean-Agile Principles Abridged

Agile Product Owner - Mon, 04/04/2016 - 19:28

 

Hi Folks,

Richard Knaster has been working on a new whitepaper: “An Introduction to SAFe 4.0.” It distills SAFe down to its primary elements and ideas, with just enough depth to provide a fairly comprehensive understanding of the Framework. He’ll post that for comments and downloads sometime soon in the Updates category of this blog, so stay tuned.

This new, “leaner” overview of the Framework has reminded us of the need to emphasize what’s really important in SAFe.  Those of you who have been practicing in the trenches know how critical the principles are to a successful implementation, so with that in mind I thought I’d provide the abridged version of those here now. Comments are welcome.

SAFe Lean-Agile Principles Abridged #1 – Take an economic view

Achieving the best value and quality for people and society in the sustainably shortest lead time requires a fundamental understanding of the economics of the system builder’s mission. Lean systems builders endeavor to make sure that everyday decisions are made in a proper economic context. The primary aspects include developing and communicating the strategy for incremental value delivery, and the creation of the Value Stream Economic Framework, which defines the tradeoffs between risk, cost of delay, and operational and development costs, and supports decentralized decision-making.

#2 – Apply systems thinking

Deming, one of the world’s foremost systems thinkers, constantly focused on the larger view of problems and challenges faced by people building and deploying systems of all types—manufacturing systems, social systems, management systems, even government systems. One central conclusion was the understanding that the problems faced in the workplace were a result of a series of complex interactions that occurred within the systems the workers used to do their work. In SAFe, systems thinking is applied to the organization that builds the system, as well as the system under development, and further, how that system operates in its end user environment.

#3 – Assume variability; preserve options

Traditional design and lifecycle practices drive picking a single requirements-and-design option early in the development process (early in the “cone of uncertainty”). However, if the starting point is wrong, then future adjustments take too long and can lead to a suboptimal long-term design. Alternatively, Lean systems developers maintain multiple requirements and design options for a longer period in the development cycle. Empirical data is then used to narrow focus, resulting in a design that creates better economic outcomes.

#4 – Build incrementally with fast, integrated learning cycles

Lean systems builders build solutions incrementally in a series of short iterations. Each iteration results in an integrated increment of a working system. Subsequent iterations build upon the previous ones. Increments provide the opportunity for fast customer feedback and risk mitigation, and also serve as minimum viable solutions or prototypes for market testing and validation. In addition, these early, fast feedback points allow the systems builder to “pivot” where necessary to an alternate course of action.

#5 – Base milestones on objective evaluation of working systems

Systems builders and customers have a shared responsibility to assure that investment in new solutions will deliver economic benefit. The sequential, phase-gate development model was designed to meet this challenge, but experience has shown that it does not mitigate risk as intended. In Lean-Agile development, each integration point provides an opportunity to evaluate the solution, frequently and throughout the development life cycle. This objective evaluation provides the financial, technical, and fitness-for-purpose governance needed to assure that a continuing investment will produce a commensurate return.

#6 – Visualize and limit WIP, reduce batch sizes, and manage queue lengths

Lean systems builders strive to achieve a state of continuous flow, whereby new system capabilities move quickly and visibly from concept to cash. Three primary keys to implementing flow are to: 1. Visualize and limit the amount of work-in-process so as to limit demand to actual capacity, 2. Reduce the batch sizes of work items to facilitate reliable flow through the system, and 3. Manage queue lengths so as to reduce the wait times for new capabilities.

#7 – Apply cadence, synchronize with cross-domain planning

Cadence transforms unpredictable events into predictable ones, and provides a rhythm for development. Synchronization causes multiple perspectives to be understood, resolved and integrated at the same time. Applying development cadence and synchronization, coupled with periodic cross-domain planning, provides Lean systems builders with the tools they need to operate effectively in the presence of product development uncertainty.

#8 – Unlock the intrinsic motivation of knowledge workers

Lean-Agile leaders understand that ideation, innovation, and engagement of knowledge workers can’t generally be motivated by incentive compensation; individual MBOs (Management by Objectives) cause internal competition and the destruction of the cooperation necessary to achieve the larger system aim. Providing autonomy, mission, and purpose, and minimizing constraints, leads to higher levels of employee engagement, and results in better outcomes for customers and the enterprise.

#9 – Decentralize decision-making

Achieving fast value delivery requires fast, decentralized decision-making, as any decision that is escalated introduces delay. In addition, escalation can lead to lower-fidelity decisions, due to the lack of local context, plus changes in fact patterns that occur during the wait time. Decentralized decision-making reduces delays, improves product development flow, and enables faster feedback and more innovative solutions. However, some decisions are strategic, global in nature, and have economies of scale sufficient to warrant centralized decision-making. Since both types of decisions occur, the creation of an established decision-making framework is a critical step in ensuring fast flow of value.

Categories: Blogs

Being Digital

NetObjectives - Mon, 04/04/2016 - 18:09
In a recent HBR article entitled Which Industries Are the Most Digital (and Why)?, the authors measure digital progress and adoption in 22 industry sectors. Specifically, they examine the various industry sectors in light of three categories of digital capabilities: digital assets, digital usage and digital labor, reaching the following conclusion: “What really sets the leaders apart, however, is...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

Adapting Agile to Distributed Teams

TV Agile - Mon, 04/04/2016 - 18:00
Agile is the most effective project management methodology for distributed teams, but there are adaptations to the process that can improve effectiveness for these teams. Chuck Lewin shares his recent experiences of leading software development teams spread across sites in North America, Europe, and Asia. He focuses on the art of writing better user stories, […]
Categories: Blogs

Being Agile in Business

Scrum Expert - Mon, 04/04/2016 - 14:49
If agile started as a software development movement, the customer has been at the center of its values since its initial statement. The use of agile and lean has now spread beyond the IT world. In her book “Being Agile in Business”, Belinda Waldock explains the agile and lean approaches from a business perspective.

The book is structured in four parts that explore the definition of agile, the reasons to adopt it, and the approaches and techniques needed to behave in an agile way. The final part provides simple steps to share an agile culture in your teams and organizations. The book covers approaches like Scrum and Kanban.

The book is well-structured and easy to read, providing examples from outside the software development world. I would recommend it as a good introduction to the agile approaches for every business person who has to deal with an agile software development team, as a product owner or a manager, or who wants to implement the agile perspective in other domains like marketing or sales.

Reference: Being Agile in Business, Belinda Waldock, Pearson, 978-1292083704

Quotes

Letting go of existing processes and systems to adopt agile can be difficult, especially for those who are comfortable with routines. Even if that routine is difficult, it can be hard to acknowledge. There is a need to let go of the existing rules, identify the problem within the process or the system, and go through the pain of change in order to achieve a better outcome. Trusting [...]
Categories: Communities

Links for 2016-04-03 [del.icio.us]

Zachariah Young - Mon, 04/04/2016 - 09:00
Categories: Blogs

Need help with Agile Retrospectives?

Ben Linders - Sun, 04/03/2016 - 20:39
Do you have a question about doing agile retrospectives? Need some help on how to do them? I'm there to answer your questions. All you have to do is ask :-) Continue reading →
Categories: Blogs

Complexity is Relative

NetObjectives - Sun, 04/03/2016 - 20:38
As software professionals, our fairly automatic response to complexity is to try to reduce it. Time and again, many of us have experienced the perils of overly complicated software. For example, in his 2013 Ph.D. dissertation[1], Dan Sturtevant analyzes eight consecutive releases of an application by a successful software firm, concluding that “files with high McCabe scores[2] are expected to have 2.1...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Companies

A Language for Architecture

J.D. Meier's Blog - Sun, 04/03/2016 - 18:52

This is an article that I originally wrote for the Architecture Journal to walk through how we created “a language for software architecture.”  Since the article is no longer available, I’m making it available here for old time’s sake.

The goal at the time was to create a simple way to work through solution design challenges and expose some of the key architectural concerns and choices.

The idea was to make it very easy to zoom out to the broader context, and then very quickly zoom into common architecture choices, such as deployment topologies and cross-cutting concerns.

I also wanted to be able to better leverage the existing patterns in the software industry by giving them a backdrop and a canvas so architects could compose them easier and apply them in a more holistic and effective way.

Grady Booch, one of IBM’s distinguished engineers, had this to say about the Architecture Guide where we first created this “language for architecture”:

“Combine these styles and archetypes, and you have an interesting language for describing a large class of applications. While I don’t necessarily agree that these styles and archetypes are orthogonal (nor are the lists complete) for the general domain of software architecture, for Microsoft’s purposes, these styles offer an excellent operating model into which one can apply their patterns and practices.”

While a lot has changed since the original creation of our Architecture Language, a lot of the meta-frame remains the same.  If I were to update the Architecture Language, I would simply walk through the big categories and update them. 

Summary

One of the most important outcomes of the patterns & practices Application Architecture Guide 2.0 project is a language for the space. A language for application architecture. Building software applications involves a lot of important decisions. By organizing these decisions as a language and a set of mental models, we can simplify organizing and sharing information. By mapping out the architecture space, we can organize and share knowledge more effectively. By using this map as a backdrop, we can also overlay principles, patterns, technologies, and key solutions assets in meaningful and relevant ways. Rather than a sea of information, we can quickly browse hot spots for relevant solutions.
Contents

  • Overview
  • A Map of the Terrain
  • Mapping Out the Architecture Space
  • Architecture Frame
  • Application Types
  • Application Feature Frame
  • Architecture Styles
  • Quality Attributes
  • Layered Architecture Reference Example
  • Layers
  • Tiers
  • Conclusion
  • Resources
A Map of the Terrain

One of the most effective ways to deal with information overload is to frame a space. Just like you frame a picture, you can frame a problem to show it a certain way. When I started the patterns & practices Application Architecture Guide 2.0 project, the first thing I wanted to do was to frame out the space. Rather than provide step-by-step architectural guidance, I thought it would be far more valuable to first create a map of what’s important. We could then use this map to prioritize and focus our efforts. We could also use this map as a durable, evolvable backdrop for creating, organizing and sharing our patterns & practices work. This is the main map, the Architecture Frame, we created to help us organize and share principles, patterns, and practices in the application architecture space:

Mapping Out the Architecture Space

Creating the map was an iterative and incremental process. The first step was to break up application architecture into meaningful buckets. It first started when I created a project proposal for our management team. As part of the proposal, I created a demo to show how we might chunk up the architecture space in a meaningful way. In the demo, I included a list of key trends, a set of application types, a set of architectural styles, a frame for quality attributes, an application feature frame, a set of example deployment patterns, and a map of patterns & practices solution assets. I used examples where possible simply to illustrate the idea. It was well received and it served as a strawman for the team.

Each week, our core Application Architecture Guide 2.0 project team met with our extended development team, which primarily included patterns & practices development team members. During this time, we worked through a set of application types, created a canonical application, analyzed layers and tiers, evaluated key trends, and created technology matrix trade-off charts. To create and share information rapidly, we created a lot of mind maps and slides. The mind maps worked well. Rather than get lost in documents, we used the mind maps as backdrops for conversation and elaboration.
Key Mapping Exercises

We mapped out several things in parallel:

  • Key trends. Although we didn’t focus on trends in the guide, we first mapped out key trends to help figure out what to pay attention to. We used a mind map and we organized key trends by application, infrastructure, and process. While there weren’t any major surprises, it was a healthy exercise getting everybody on the same page in terms of which trends mattered.
  • Canonical application. The first thing we did was figure out the delta from the original architecture guide. There were a few key changes. For example, we found that today’s applications serve a lot more clients and scenarios. They’ve matured and they’ve been extended. We also found today’s applications have a lot more services, both in terms of exposing and consuming them. We also noticed that some of today’s applications are flatter and have fewer layers. Beyond that, many things, such as the types of components and the types of layers, were fairly consistent with the original model.
  • Layers and tiers. This was one of the more painful exercises. Early in the project, we met each week with our development team, along with other reviewers. The goal was to map out the common layers, tiers, and components. While there was a lot of consistency with the original application architecture guide, we wanted to reflect any learnings and changes since the original model. Once we had a working map of the layers, tiers, and components, we vetted it with multiple customers to sanity-check the thinking.
  • Application types. We originally explored organizing applications around business purposes or dominant functionality, but customer feedback told us we were better off optimizing around technical types, such as Web application or mobile client. They were easy for customers to identify with. They also made it easy to overlay patterns, technologies, and key patterns & practices solution assets. The technical application types also made it easy to map out relevant technologies.
  • Architectural styles. This is where we had a lot of debate. While we ultimately agreed that it was helpful to have a simple language for abstracting the shapes of applications and the underlying principles from the technology, it was difficult to create a map that everybody was happy with. Things got easier once we changed some of the terminology and we organized the architectural styles by common hot spots. It then became obvious that the architectural styles are simply named sets of principles. We could then have a higher-level conversation around whether to go with object-based communication or message-based and SOA, for example. It was also easy to describe deployments in terms of 2-tier, 3-tier, and N-tier.
  • Hot spots for architecture. When you build applications, there’s a common set of challenges that show up again and again. For example, caching, data access, exception management, logging … etc. These are application infrastructure problems or cross-cutting concerns. You usually don’t want to make these decisions ad-hoc on any significant application. Instead, you want to have a set of patterns and guidelines or ideally reusable code that the team can leverage throughout the application. What makes these hot spots is that they are actionable, key engineering decisions. You want to avoid do-overs where you can. Some do-overs are more expensive than others. One of the beauties of the architecture hot spots is that they helped show the backdrop behind Enterprise Library. For example, there’s a data access block, a caching block, a validation block … etc.
  • Hot spots for application types. When you build certain classes of application, there are recurring hot spots. For example, when you build a rich client, one of the common hot spots to figure out is how to handle occasionally disconnected scenarios. The collection of hot spots for architecture served as a baseline for finding hot spots in the other application types. For example, from the common set of hot spots, we could then figure out which ones are relevant for Web applications, or which additional hot spots we would need to include.
  • Patterns. Mapping out patterns was a lengthy process. Ultimately, we probably ended up with more information in our workspace than made it into the guide. To map out the patterns, we created multiple mind maps of various pattern depots. We summarized patterns so that we could quickly map them from problems to solutions. We then used our architecture hot spots and our hot spots for application types as a filter to find the relevant patterns. We then vetted the patterns with customers to see if the mapping was useful. We cut any patterns that didn’t seem high enough priority. We also cut many of our pattern descriptions when they started to weigh the guide down. We figured we had plenty of material and insight to carve out future pattern guides and we didn’t want to overshadow the value of the main chapters in the guide. We decided the best move for now was to provide a Pattern Map at the end of each application chapter to show which patterns are relevant for key hot spots. Customers seemed to like this approach and it kept things lightweight.
  • patterns & practices solution assets. This was the ultimate exercise in organizing our catalog. We actually have a large body of documented patterns. We also have several application blocks and factories, as well as guides. By using our architecture frame, it was easier to organize the catalog. For example, the factories and reference implementations mapped to the application types. The Enterprise Library blocks mapped to the architecture hot spots. Several of the guides mapped to the quality attributes frame.
  • Microsoft platform. This was a challenge. It meant slicing and dicing the platform stack in a meaningful way as well as finding the right product team contacts. Once we had our application types in place, it got a lot easier. For example, depending on which type of application you were building (RIA, Web, mobile … etc.), this quickly narrowed down relevant technology options. We created technology matrixes for presentation technologies, integration technologies, workflow technologies, and data access technologies. Since the bulk of the guide is principle and pattern based, we kept these matrixes in the appendix for fast lookups.
Key Components of the Application Architecture Map

Over the weeks and months of the project, a very definite map of the landscape emerged. We found ourselves consistently looking for the same frames to organize information. While we tuned and pruned specific hot spots in areas, the overall model of common frames was helping us move through the space quickly.

  • Architecture frame. The architecture frame was the main organizing map. It brought together the context (scenarios, quality attributes, requirements/constraints), application types, architectural styles, and the application hot spots.
  • Application types. For application types, we optimized around a simple, technical set that resonated with customers. For example, Web application, RIA, mobile … etc.
  • Quality attributes. We organized quality attributes by key hot spots: system, runtime, design-time, and user qualities.
  • Architectural styles. We organized architectural styles by key hot spots: communication, deployment, domain, interaction, and structure.
  • Requirements and constraints. We organized requirements by key types: functional, non-functional, technological. We thought of constraints in terms of industry and organizational constraints, as well as by which concern (for example, constraints for security or privacy).
  • Application feature frame. The application feature frame became a solid backdrop for organizing many guidelines through the guide. The hot spots resonated: caching, communication, concurrency and transactions, configuration management, coupling and cohesion, data access, exception management, layering, logging and instrumentation, state management, structure, validation and workflow.
  • Application type frames. The application type frames are simply hot spots for key application types. We created frames for: Web applications, rich internet applications (RIA), mobile applications, rich client applications and services.
  • Layered architecture reference model (canonical application). The canonical application is actually a layered architecture reference model. It helps show the layers and components in context.
  • Layers and tiers. We used layers to represent logical partitions and tiers for physical partitions (this precedent was set in the original guide.) We identified key components within the key layers: presentation layer, business layer, data layer, and service layer.
  • Pattern Maps. Pattern maps are simply overlays of key patterns on top of relevant hot spots. We created pattern maps for the application types.
  • Product and technology maps. We created technology matrixes for relevant products and technologies. To put the technologies in context, we used application types where relevant. We also used scenarios. To help make trade-off decisions, we included benefits and considerations for each technology.
User, Business, and System Perspective

One thing that helped early on was creating a Venn diagram of the three perspectives, user, business, and system:


In application architecture, it’s easy to lose perspective. It helps to keep three perspectives in mind. By having a quick visual of the three perspectives, it was easy to remind ourselves that architecture is always a trade-off among these perspectives. It also helped remind us to be clear about which perspective we’re talking about at any point in time. This also helped resolve many debates. The problem in architecture debates is that everybody is usually right, but only from their perspective. Once we showed people where their perspective fit in the bigger picture, debates quickly turned from conflict to collaboration. It was easy to move through user goals, business goals, and system goals once people knew the map.
Architecture Frame

The Architecture Frame is a simple way to organize the space. It’s a durable, evolvable backdrop. You can extend it to suit your needs. The strength of the frame is that it combines multiple lenses:


Here are the key lenses:

  • Scenarios. This sets the context. You can’t evaluate architecture in a vacuum. You need a backdrop. Scenarios provide the backdrop for evaluation and relevancy.
  • Quality Attributes. This includes your system qualities, your runtime qualities, your design-time qualities and user qualities.
  • Requirements / Constraints. Requirements and constraints include functional requirements, non-functional requirements, technological requirements, industry constraints, and organizational constraints.
  • Application Types. This is an extensible set of common types of applications or clients. You can imagine extending for business types. You can imagine including just the types of applications your organization builds. Think of it as product-line engineering. When you know the types of applications you build, you can optimize it.
  • Architectural Styles. This is a flat list of common architectural styles. The list of architectural styles is flexible and most applications are a mash up of various styles. Architectural styles become more useful when they are organized by key decisions or concerns.
  • Application Feature Frame. The application feature frame is a concise set of hot spots that show up time and again across applications. They reflect cross-cutting concerns and common application infrastructure challenges.
Application Types

This is the simple set of technical application types we defined:

  • Web applications. Applications of this type typically support connected scenarios and can support different browsers running on a range of operating systems and platforms.
  • Rich Internet applications (RIA). Applications of this type can be developed to support multiple platforms and multiple browsers, displaying rich media or graphical content. Rich Internet applications run in a browser sandbox that restricts access to some devices on the client.
  • Mobile applications. Applications of this type can be developed as thin client or rich client applications. Rich client mobile applications can support disconnected or occasionally connected scenarios. Web or thin client applications support connected scenarios only. The device resources may prove to be a constraint when designing mobile applications.
  • Rich client applications. Applications of this type are usually developed as stand-alone applications with a graphical user interface that displays data using a range of controls. Rich client applications can be designed for disconnected and occasionally connected scenarios because the applications run on the client machine.
  • Services. Services expose complex functionality and allow clients to access them from a local or remote machine. Service operations are called using messages, based on XML schemas, passed over a transport channel. The goal in this type of application is to achieve loose coupling between the client and the server.

Application Feature Frame

This is the set of hot spots for applications we defined:

  • Authentication and Authorization. Authentication and authorization allow you to identify the users of your application with confidence, and to determine the resources and operations to which they should have access.
  • Caching and State. Caching improves performance, reduces server round trips, and can be used to maintain the state of your application.
  • Communication. Communication strategies determine how you will communicate between layers and tiers, including protocol, security, and communication-style decisions.
  • Composition. Composition strategies determine how you manage component dependencies and the interactions between components.
  • Concurrency and Transactions. Concurrency is concerned with the way that your application handles conflicts caused by multiple users creating, reading, updating, and deleting data at the same time. Transactions are used for important multi-step operations in order to treat them as though they were atomic, and to recover in the case of a failure or error.
  • Configuration Management. Configuration management defines how you configure your application after deployment, where you store configuration data, and how you protect the configuration data.
  • Coupling and Cohesion. Coupling and cohesion are strategies concerned with layering, separating application components and layers, and organizing your application trust and functionality boundaries.
  • Data Access. Data access strategies describe techniques for abstracting and accessing data in your data store. This includes data entity design, error management, and managing database connections.
  • Exception Management. Exception-management strategies describe techniques for handling errors, logging errors for auditing purposes, and notifying users of error conditions.
  • Logging and Instrumentation. Logging and instrumentation represents the strategies for logging key business events, security actions, and provision of an audit trail in the case of an attack or failure.
  • User Experience. User experience is the interaction between your users and your application. A good user experience can improve the efficiency and effectiveness of the application, while a poor user experience may deter users from using an otherwise well-designed application.
  • Validation. Validation is the means by which your application checks and verifies input from all sources before trusting and processing it. A good input and data-validation strategy takes into account not only the source of the data, but also how the data will be used, when determining how to validate it.
  • Workflow. Workflow is a system-assisted process that is divided into a series of execution steps, events, and conditions. The workflow may be an orchestration between a set of components and systems, or it may include human collaboration.

Architectural Styles

For architectural styles, we first framed the key concerns to organize the architectural styles, and then we defined some common architectural styles.
Organizing Architectural Styles

These are the hot spots we used to organize architectural styles:

  • Communication. Service-Oriented Architecture (SOA) and/or Message Bus and/or Pipes and Filters.
  • Deployment. Client/server or 3-Tier or N-Tier.
  • Domain. Domain Model or Gateway.
  • Interaction. Separated Presentation.
  • Structure. Component-Based and/or Object-Oriented and/or Layered Architecture.

Architectural Style Frame

These are some commonly recognized architectural styles:

  • Client-server. Segregates the system into two applications, where the client makes a service request to the server.
  • Component-Based Architecture. Decomposes application design into reusable functional or logical components that are location-transparent and expose well-defined communication interfaces.
  • Layered Architecture. Partitions the concerns of the application into stacked groups (layers) such as presentation layer, business layer, data layer, and services layer.
  • Message-Bus. A software system that can receive and send messages that are based on a set of known formats, so that systems can communicate with each other without needing to know the actual recipient.
  • N-Tier/3-Tier. Segregates functionality into separate segments in much the same way as the layered style, but with each segment being a tier located on a physically separate computer.
  • Object-Oriented. An architectural style based on division of tasks for an application or system into individual reusable and self-sufficient objects, each containing the data and the behavior relevant to the object.
  • Separated Presentation. Separates the logic for managing user interaction from the user interface (UI) view and from the data with which the user works.
  • Service-Oriented Architecture. Refers to applications that expose and consume functionality as a service using contracts and messages.

Quality Attributes

For quality attributes, we first framed the key categories to organize the quality attributes, and then we defined some common quality attributes.
Organizing Quality Attributes

This is a simple way to organize and group quality attributes:

  • System Qualities. Supportability, Testability.
  • Run-time Qualities. Availability, Interoperability, Manageability, Performance, Reliability, Scalability, Security.
  • Design Qualities. Conceptual Integrity, Flexibility, Maintainability, Reusability.
  • User Qualities. User Experience / Usability.

Quality Attribute Frame

These are some common quality attributes:

  • Availability. Availability is the proportion of time that the system is functional and working. It can be measured as a percentage of the total system downtime over a predefined period. Availability will be affected by system errors, infrastructure problems, malicious attacks, and system load.
  • Conceptual Integrity. Conceptual integrity is the consistency and coherence of the overall design. This includes the way that components or modules are designed, as well as factors such as coding style and variable naming.
  • Flexibility. The ability of a system to adapt to varying environments and situations, and to cope with changes in business policies and rules. A flexible system is one that is easy to reconfigure or adapt in response to different user and system requirements.
  • Interoperability. Interoperability is the ability of diverse components of a system or different systems to operate successfully by exchanging information, often by using services. An interoperable system makes it easier to exchange and reuse information internally as well as externally.
  • Maintainability. Maintainability is the ability of a system to undergo changes to its components, services, features, and interfaces as may be required when adding or changing functionality, fixing errors, and meeting new business requirements.
  • Manageability. Manageability is how easy it is to manage the application, usually through sufficient and useful instrumentation exposed for use in monitoring systems and for debugging and performance tuning.
  • Performance. Performance is an indication of the responsiveness of a system to execute any action within a given time interval. It can be measured in terms of latency or throughput. Latency is the time taken to respond to any event. Throughput is the number of events that take place within a given amount of time.
  • Reliability. Reliability is the ability of a system to remain operational over time. Reliability is measured as the probability that a system will not fail to perform its intended functions over a specified time interval.
  • Reusability. Reusability is the capability for components and subsystems to be suitable for use in other applications and in other scenarios. Reusability minimizes the duplication of components and also the implementation time.
  • Scalability. Scalability is the ability of a system to function well when there are changes to the load or demand. Typically, the system will be able to be extended over more powerful or more numerous servers as demand and load increase.
  • Security. Security is the way that a system is protected from disclosure or loss of information, and from the possibility of a successful malicious attack. A secure system aims to protect assets and prevent unauthorized modification of information.
  • Supportability. Supportability is how easy it is for operators, developers, and users to understand and use the application, and how easy it is to resolve errors when the system fails to work correctly.
  • Testability. Testability is a measure of how easy it is to create test criteria for the system and its components, and to execute these tests in order to determine if the criteria are met. Good testability makes it more likely that faults in a system can be isolated in a timely and effective manner.
  • Usability. Usability defines how well the application meets the requirements of the user and consumer by being intuitive, easy to localize and globalize, and able to provide good access for disabled users and a good overall user experience.

Layered Architecture Reference Model

This is our canonical application example. It’s a layered architecture showing the common components within each layer:


The canonical application model helped us show how the various layers and components work together. It was an easy diagram to pull up and talk through when we were discussing various design trade-offs at the different layers.
Layers

We identified the following layers:

  • Presentation layer
  • Business layer
  • Data layer
  • Service layer

They are logical layers. The important thing about layers is that they help factor and group your logic. They are also fractal. For example, a service can have multiple types of layers within it. The following is a quick explanation of the key components within each layer.
Presentation Layer Components

  • User interface (UI) components. UI components provide a way for users to interact with the application. They render and format data for users and acquire and validate data input by the user.
  • User process components. To help synchronize and orchestrate these user interactions, it can be useful to drive the process by using separate user process components. This means that the process-flow and state-management logic is not hard-coded in the UI elements themselves, and the same basic user interaction patterns can be reused by multiple UIs.
Business Layer Components
  • Application façade (optional). Use a façade to combine multiple business operations into a single message-based operation. You might access the application façade from the presentation layer by using different communication technologies.
  • Business components. Business components implement the business logic of the application. Regardless of whether a business process consists of a single step or an orchestrated workflow, your application will probably require components that implement business rules and perform business tasks.
  • Business entity components. Business entities are used to pass data between components. The data represents real-world business entities, such as products and orders. The business entities used internally in the application are usually data structures, such as DataSets, DataReaders, or Extensible Markup Language (XML) streams, but they can also be implemented by using custom object-oriented classes that represent the real-world entities your application has to work with, such as a product or an order.
  • Business workflows. Many business processes involve multiple steps that must be performed in the correct order and orchestrated. Business workflows define and coordinate long-running, multi-step business processes, and can be implemented using business process management tools.
Data Layer Components
  • Data access logic components. Data access components abstract the logic necessary to access your underlying data stores. Doing so centralizes data access functionality, and makes the process easier to configure and maintain.
  • Data helpers / utility components. Helper functions and utilities assist in data manipulation, data transformation, and data access within the layer. They consist of specialized libraries and/or custom routines especially designed to maximize data access performance and reduce the development requirements of the logic components and the service agent parts of the layer.
  • Service agents. Service agents isolate your application from the idiosyncrasies of calling diverse services from your application, and can provide additional services such as basic mapping between the format of the data exposed by the service and the format your application requires.
Service Layer Components
  • Service interfaces. Services expose a service interface to which all inbound messages are sent. The definition of the set of messages that must be exchanged with a service, in order for the service to perform a specific business task, constitutes a contract. You can think of a service interface as a façade that exposes the business logic implemented in the service to potential consumers.
  • Message types. When exchanging data across the service layer, data structures are wrapped by message structures that support different types of operations. For example, you might have a Command message, a Document message, or another type of message. These message types are the “message contracts” for communication between service consumers and providers.
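
To make the layering concrete, here is a minimal sketch of these components in TypeScript. It is not from the original guide; all names are hypothetical, and it simply illustrates how each layer depends only on the one beneath it:

    // Business entity component: carries data between layers.
    interface Order {
      id: string;
      total: number;
      status: "open" | "paid";
    }

    // Data layer: a data access logic component that abstracts the data store.
    interface OrderRepository {
      findById(id: string): Order | undefined;
      save(order: Order): void;
    }

    // Business layer: a business component implementing the business rules.
    class OrderService {
      constructor(private readonly orders: OrderRepository) {}

      payOrder(id: string): Order {
        const order = this.orders.findById(id);
        if (!order) {
          throw new Error(`Unknown order: ${id}`);
        }
        order.status = "paid";
        this.orders.save(order);
        return order;
      }
    }

    // Service layer: message types (the "message contracts") and a service
    // interface that exposes the business logic to consumers.
    interface PayOrderCommand { orderId: string; }
    interface PayOrderResult { orderId: string; status: string; }

    class OrderServiceInterface {
      constructor(private readonly service: OrderService) {}

      handle(message: PayOrderCommand): PayOrderResult {
        const order = this.service.payOrder(message.orderId);
        return { orderId: order.id, status: order.status };
      }
    }
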
Tiers

Tiers represent the physical separation of the presentation, business, services, and data functionality of your design across separate computers and systems. Some common tiered design patterns include two-tier, three-tier, and n-tier.
Two-Tier

The two-tier pattern represents a basic structure with two main components, a client and a server.

Three-Tier

In a three-tier design, the client interacts with application software deployed on a separate server, and the application server interacts with a database that is also located on a separate server. This is a very common pattern for most Web applications and Web services.

N-Tier

In this scenario, the Web server (which contains the presentation layer logic) is physically separated from the application server that implements the business logic.

Conclusion

It’s easier to find your way around when you have a map. By having a map, you know where the key hot spots are. The map helps you organize and share relevant information more effectively. More importantly, the map helps bring together archetypes, architectural styles, and hot spots in a meaningful way. When you put it all together, you have a simple language for describing large classes of applications, as well as a common language for application architecture.

Categories: Blogs

Why Agile Game Development?

Agile Game Development - Sun, 04/03/2016 - 18:11


I made my first computer game in 1976 and became a professional game developer in 1994. Within five years I was nearly burned out: I had been promoted to lead seven game projects and had turned into that whip-waving manager we all hated.

But I have been inspired along the way by witnessing how people like Shigeru Miyamoto made games and what Mark Cerny wrote about his ideal process. I have also been inspired by being on a few teams that made great games and loved making them together.

This all came together when I read the first book about Scrum in 2003. It wasn't hard to make a connection between Miyamoto's "find the fun" philosophy, Mark's preproduction experimentation approach, and the values of Scrum.

So we started experimenting with Scrum in game development.  It wasn't a perfect fit.  For example, we had to go beyond Scrum for content production and support.  Along the way, we attended courses by Ken Schwaber and Mike Cohn (who also coached us onsite).  They both inspired us about the human aspect of agile.

But after using it awhile, we began to see the benefit.  Teams became more accountable.  We leaders focused less on solving daily problems for them or baby-sitting a prescriptive process.  We learned to serve their need for vision, clarity and support. Engagement, passion and fun grew.

A few years later, we were acquired by Vivendi and I started visiting their other studios to talk about how Scrum works for game development.  I also started presenting the topic at GDC to large audiences.  I enjoyed doing this and was encouraged by Mike, now a friend and mentor, to do it full-time.

So I took the leap in 2008 and began life as a one-person training crew.  I had plenty of time and barely enough savings in the first few years to finish the book.  Following that, the business became sustainable and I have loved every minute (OK, some of the airline travel hasn't been great).  I do miss working on games directly with small teams, but walking inside over 100 studios over the past eight years and getting to know the people within is rewarding.

I'm not doing this to grow a big consulting firm.  I still consider myself a game developer first and a trainer/consultant second.  However, I am a Certified Scrum Trainer and have worked with some of the most skilled agile and lean trainers and thinkers.  Combined with my game development experience this has helped me translate the purpose and values of agile and lean to the realities and challenges game developers face.

My goal isn't to ensure teams are following some rules by-the-book, but to help them find ways to make great games through iterative and human-focused approaches that work for game teams...and have a blast doing it.



Categories: Blogs

Book Review: The Phoenix Project

thekua.com@work - Sun, 04/03/2016 - 09:53

It has been a while since I read The Phoenix Project and I am glad to have reviewed it again recently. Described as a business novel, or The Goal for the 21st century, the book tells a story that large organisations need to take to heart when they feel they need to transform IT.

Title cover of the Phoenix Project book

The book focuses on a company in crisis – a company that is trying to complete lots of software projects, has a terrible number of them in flight, and grapples with the problems many companies have – lack of visibility of the work, dependency on key individuals, marketing-led promises, and IT treated as a cost centre. Bill, an IT Manager, is one day promoted into a higher role where he is responsible for turning around and dealing with all the critical issues. He is given access to a mentor who introduces him to the “mysterious Three Ways” that are slowly uncovered throughout the book.

What I liked about the book

Business novels are refreshing to read as they feel less like reading a business book and sometimes make picking up the book less of a chore. The authors manage to talk about generating insights and explaining some of the tools from a number of angles (Bill’s thoughts as well as other characters’ perspectives), as well as relating them to existing material such as the Theory of Constraints.

Like all good books, you follow a plot that descends into what seems like an insurmountable situation, only for the protagonist to find ways of overcoming it. For those who have never been exposed to visual ways of working (like Kanban), or to Work in Progress, queueing theory and how IT capability matters to the business, there are many useful lessons to learn.

What would have made the book better

Although the book has several characters who behave in a negative way and pay some of the consequences, you don’t hear about attempts by the protagonist that end up failing (with their consequences), unlike in the real world. I also felt that the pace of change seemed unrealistic – but that’s probably a downside of a business novel versus what might actually happen in real life.

Conclusion
I would still highly recommend this book if you’re interested in understanding how modern IT works, how DevOps culture operates, and some tools and techniques for moving IT management into a more responsive, flexible, but still highly controlled manner.

Categories: Blogs

Five leadership lessons from the Samurai for Product Managers

Xebia Blog - Sat, 04/02/2016 - 12:54
We have covered several topics in the Product Samurai series that should make you a better product manager. But what if you are leading product management or run innovation within your enterprise? Here are five leadership lessons that make your team better. “New eras don't come about because of swords, they're created by the people
Categories: Companies

The Mech Warrior 2 Hero Story

Agile Game Development - Fri, 04/01/2016 - 17:14
Mech Warrior 2 (MW2), released in 1995, was a big hit game for Activision and probably my all-time favorite.  What also stands out about MW2 is that the team defied a cancellation and worked nonstop to save the game while their boss was away on travel.

What makes MW2 a unique memory for me is that I finished it hours before our first child was born, five weeks early.  When my wife had early contractions, the doctor told her that if she had ten repeats within the next hour, we should dash off to the hospital. She told me this while I was playing the last level of MW2.  So I set the goal of completing the game within an hour.  By the time she counted ten contractions, I had finished.  My son was born a bit premature, but healthy, a few hours later.  To her credit, my wife does not remind me of this obsessive and selfish behavior.  I blame the game.

Recently I asked a few of the participants and leaders on the original game to dig into their memories and share their experiences with me.  Tim Morten, a programmer on MW2 who is now a Lead Producer at Blizzard Entertainment, shared some of that history:

“MW2 went through two rebirths: one on the engineering side, and one on the design side.  The original team had implemented something with promise, but it barely ran (not enough memory to hold more than two mechs) and it lacked narrative (just mechs on a flat surface shooting lamely at each other).  

After a couple of years of effort, with a major deadline looming, management had no option but to retrench and down-scope the project.  The existing team leadership departed at that point (lead engineers, lead producer, etc).  

In an effort to salvage the massive effort invested, a couple of remaining engineers went rogue while VP Howard Marks was away at a tradeshow for a week - without permission, they attempted to convert the game to protected mode.  This would theoretically provide access to enough memory to render a full set of mechs, but it had been deemed impossible in anything less than nine months - way more time than was available.

As of 9pm the night before Howard returned, they were ready to concede defeat: protected mode conversion requires extensive Intel assembly language programming, something they had no experience with - and there was no internet to use as a reference, they just had a single Intel tech manual.  They thought they had done the right things, but there was no telling how many bugs remained before the game loop would run.  Howard's arrival would spell the end of their effort, since his priority was to ship something, even if massive compromise in scope was required.

Against all odds, that midnight the game successfully looped in protected mode for the first time, and they were rewarded with a full set of mechs rendering - albeit in wireframe and without sound.  They were elated to have cracked the hardest problem, opening up the possibility to build a better game.

Howard returned, recognized the potential that had been unlocked, and helped set the team up for success by bringing in proven problem solvers from Pitfall: The Mayan Adventure.  John Spinale and Sean Vesce stepped in, to build a new team on the skeleton that remained, and to establish a vision for a product that to that point was nothing more than a bare bones tech demo.

The design rebirth of MW2 is something that Sean can speak better to, but it's fair to say that the technology rebirth was just an enabler - the design team innovated on so many levels under tight time pressure to produce something that was revolutionary for the time.  Without that innovation, I have no doubt that MW2 would languish in obscurity today.  Likewise, without the successful leadership of John rebuilding the team, and protecting the team from outside interference, we would not have achieved the success that we ultimately did.”


I’ve heard similar stories from numerous hit games: teams investing passion, heroic leadership protecting the team, and visionary executives bucking convention and gambling on a vision.  These seem like valuable attributes to grow.  This is what “people over process” is about.
Categories: Blogs

Agile vs Waterfall

Agilitrix - Michael Sahota - Fri, 04/01/2016 - 15:58

Here is my one page summary of some key differences between Agile and Waterfall. (I created this when an exec asked me to explain the difference earlier this month; I didn’t have anything good in my toolkit, nor could I find anything on Google.) Key Differences between Agile and Waterfall In waterfall, […]

The post Agile vs Waterfall appeared first on agilitrix.com - Michael Sahota.

Categories: Blogs

12 years, 12 lessons working at ThoughtWorks

thekua.com@work - Fri, 04/01/2016 - 15:15

I’ve been at ThoughtWorks for 12 years. Who would have imagined? Instead of writing about my reflections on the past year, I thought I would do something different and post twelve key learnings and observations looking back over my career. I have chosen twelve, not because there are only twelve, but because it fits well with the theme of twelve years.

1. Tools don’t replace thinking

In my years of consulting and working with many organisations and managers, I have seen a common approach to fixing problems: a manager believes a tool will “solve” the given problem. This can be successful where the problem area is very well understood, unlikely to have many exceptions, and everyone acts in the same manner. Unfortunately, that doesn’t describe many real-world problems.

Too many times I have witnessed managers implement an organisation-wide tool that is locked down to a specific way of working. The tool fails to solve the problem, and actually blocks real work from getting done. Tools should be there to aid, to help prevent known errors and to help us remember repeated tasks, not to replace thinking.

2. Agile “transformations” rarely work unless the management group understand its values

Many managers make the mistake of thinking that only the people close to the work need to “adopt agile”, when other parts of the organisation need to change at the same time. Co-ordinating this in enterprises takes a lot of time and skill, with a focus on synchronising change at different levels of the organisation.

Organisations that adopt agile in only one part of their organisation face a real threat. As the old saying goes, “Change your organisation, or change your organisation.”

3. Safety is required for learning

Learning necessitates making mistakes. In the Dreyfus model, this means that people, particularly those at the Advanced Beginner stage, need to make mistakes in order to learn. People won’t risk making mistakes if they fear they will do a bad job, lose the respect of their colleagues or potentially hurt other people in the process.

As a person passionate about teaching and learning, I find ways to create a safe space for people to fail, and in doing so, make the essential mistakes they need to properly learn.

4. Everyone can be a leader

I have written about this topic before, but it is such an important observation. I see a common mental-model trap where people feel they need to be given the role of a leader in order to act like one. People can demonstrate acts of leadership regardless of their title, and can do so in many different ways, simply by taking action on something without the explicit expectation or request for it.

5. Architects make the best decisions when they code

In the Tech Lead courses I run, I advocate for Tech Leads to spend at least 30% of their time coding. Spending time with the code helps build trust, respect and a current understanding of the system. Architectural decisions made without regard for the constraints of the current system are often bad decisions.

6. Courage is required for change

I miss people talking about the XP values, one of which is Courage. Courage is required for acts of leadership, for taking on the risk of failure and the risk/reward of attempting something new. Where there is no risk, there is often little reward.

7. Congruence is essential for building trust

Beware the age-old maxim, “Do as I say, not as I do.” In reality, regardless of what you say, people will remember how you act, first and foremost. Acting congruently means making sure that your actions follow your words. Acting incongruently destroys trust. Saying “no” or “not now” is better than promising to do something by a certain time, only to not deliver it.

8. Successful pair programming correlates with good collaboration

Although not all pair programming environments are healthy, I do believe that when it works well, teams tend to have better collaborative cultures. Many developers prefer the anti-pattern of (long-lived) branch-based development because it defers feedback and sources of potential conflict.

I consider (navigable) conflict a healthy sign of collaborative teams. Deferring feedback, as is the case with code reviews on long-lived branches, tends to lead to more resentment because it is delivered so late.

9. Multi-model thinking leads to more powerful outcomes

One of my favourite subjects at university was Introduction to Philosophy, where we spent each week of the semester studying a different philosopher. Over the course of my career, I have come to appreciate the value of diversity and of seeing a problem through multiple lenses. Systems thinking also recognises that facts can be interpreted in different ways, leading to newer ideas or solutions which may be combined for greater effect.

10. Appreciate that everyone has different strengths

Everyone is unique, each with their own set of strengths and weaknesses. Although we tend to seek like-minded people, teams are better off with a broader set of strengths; a strength in one context may even be a weakness in another. Differences in strengths can lead to conflict, but healthy teams appreciate the differences that people bring, rather than resent people for them.

11. Learning is a lifelong skill

The world constantly changes around us and there are always opportunities to learn some new skill, technique or tool. We can even learn to get better at learning, and there are many books, like Apprenticeship Patterns and The First 20 Hours, which can give you techniques to get better at this.

12. Happiness occurs through positive impact

The well-known book Drive talks about how people develop happiness through working towards a certain purpose. In my experience, this is often about helping people find ways to have a positive impact on others, which is why our Pillar 2 (Champion software excellence and revolutionize the IT industry) and Pillar 3 (Advocate passionately for social and economic justice) values are really important for us.

Conclusion

The twelve points above are not the only lessons I have learned in my time at ThoughtWorks, but they are some of the key learnings that help me help our clients.

Categories: Blogs

Testing In Ansible

This is a topic I brushed up against yesterday and meant to blog about at the end of the day, but got a little busy. A lot of times when provisioning boxes locally in vagrant, I’ve thought it would be incredibly useful to be able to automatically test the system to ensure all the expected bits are provisioned as expected.

I’ll probably throw together a nice public demo, but the short and skinny is to include a final ansible provisioning step, after the normal one, that runs a test playbook of sorts against the system. For us, we just put our test tasks into our main roles and tag them as test. Then in vagrant we exclude test-tagged tasks, and in the test phase we run only those tagged tasks. Below is an example for one of our services that tests that two service processes are running and that the load balancer is serving up the same responses as the two processes.
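(The playbook below is a minimal sketch of that idea, not our actual test code; the host group, service name and ports are hypothetical stand-ins.)

# test tasks, all tagged "test" so they can be skipped or run on their own
- hosts: appservers
  tasks:
    - name: count the running worker processes
      shell: pgrep -f my_worker | wc -l
      register: workers
      changed_when: false
      tags: test

    - name: assert exactly two worker processes are up
      assert:
        that: workers.stdout == "2"
      tags: test

    - name: fetch a response from each backend process
      uri:
        url: "http://localhost:{{ item }}/health"
        return_content: yes
      register: backends
      with_items: [8081, 8082]
      tags: test

    - name: fetch the same path through the load balancer
      uri:
        url: "http://localhost/health"
        return_content: yes
      register: lb
      tags: test

    - name: assert the load balancer serves what the processes serve
      assert:
        that: lb.content == item.content
      with_items: "{{ backends.results }}"
      tags: test

The vagrant provisioner then runs ansible-playbook with --skip-tags test, and the verification pass is just a second run with --tags test against the same inventory.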

I’ve also heard of other tools in this space, like ServerSpec, which may fit the bill if you’re not running ansible or are running some mixed environment. So far I think ansible fits well here, but you’re definitely going to be a little limited due to the tests being written in yaml. Although you could hypothetically write some custom modules, or resort to shell wizardry, if you need something more advanced.

I’m really excited about this… the idea that we could have a full test suite with each of our ansible roles, verifying a whole swath of aspects like expected ulimits and the like, is GREAT.

Categories: Blogs

Friday Functions: AWS ZSH Helper

This morning I’m going to go with a new recurring weekly post: Friday Functions! While much of it will aim to share the large inventory of zsh functions I’ve acquired over the years, I’ll also be finding new additions if I run out of material. So it also serves to help me learn more!

This week’s function is probably only useful if you’re into AWS and use the awscli tool to interact with it from the command line. Using the awscli command directly can be quite verbose, so some nice shortcuts are useful. I actually learned of this handy function from Kris’s awesome collection of zsh configuration and made a few small adaptations to it.
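It boils down to something like this (a sketch reconstructed around the behavior described below, not Kris’s exact code; the flag handling and defaults are my assumptions):

# wrap `aws ec2 describe-instances` with tag and state filtering;
# defaults to matching on the Name tag and running instances only
aws-instance-describe() {
  local filter="tag:Name" state="running"
  while getopts "t:s:" opt; do
    case $opt in
      t) filter="tag:$OPTARG" ;;   # -t <key>: filter on an arbitrary tag
      s) state="$OPTARG" ;;        # -s <state>: override the running default
    esac
  done
  shift $((OPTIND - 1))
  local -a vals
  for v in "$@"; do vals+=("*${v}*"); done   # each arg becomes a wildcard value
  aws ec2 describe-instances \
    --filters "Name=$filter,Values=${(j:,:)vals}" \
              "Name=instance-state-name,Values=$state" \
    --output table
}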

This is pretty useful. If you want to find all instances with http in the name you just run aws-instance-describe http.


Or, if you want to look for instances by a specific tag, you can use the `-t` switch. For example, to find all instances with the worker_email role tag we can just run aws-instance-describe -t role worker_email. You can add -s to change the instance-state filter and, like the underlying call, you can include multiple values. So if you wanted to find all stopped instances with the taskhistory role you’d run aws-instance-describe -t role taskhistory -s stopped. The function defaults to running instances only, since that’s what I’m looking for 99% of the time… looking for stopped or terminated instances is definitely the exception.

Hope this was interesting enough. Ideas, thoughts, comments or criticism are all welcome in the comments below! Let me know what you think! 🙂

Categories: Blogs

What Are Developers Really Paid To Do?

Derick Bailey - new ThoughtStream - Fri, 04/01/2016 - 13:30

It’s a question that most developers have a fast answer for: “WRITE CODE!” … but, is that really what you’re paid to do?

In this episode of Thoughts On Code I’ll explain why I don’t think your job is to just write code, after all.

Categories: Blogs
