Using a module that’s hosted on npm or GitHub is fairly easy. You can “npm install” it, or just update your package.json file with the GitHub location and be done.
But what about developing locally? How do you get the code from a module you are building into a project that needs it, without going through the typical npm or GitHub install?

Symlinks To The Rescue
npm has a feature built in to help you work on modules locally and use their code in your project without publishing to the npm repository: symlinks.
The gist of it is that you create a symlink inside of your actual project’s node_modules folder, and you point it to the location of your module on your local system. This lets you run the current, in-development code in your project, without publishing it to npm first.
The best part, though, is that npm automates this for you.

npm link
Say you have a module, “foo”, that you are building on your system. When you have a working version and need to test it in your project, you’ll need to run two steps:
- tell npm this module can be linked
- link it into your project
Start in the “foo” module’s development directory, and run this:

npm link

That’s it. Two words.
Behind the scenes, this will tell your local npm installation that the “foo” library can be linked into other projects.
Now head over to the project that needs to use “foo” and run this:

npm link foo

These three words tell npm to create a link to the “foo” module within the current project.
If you look at the node_modules folder, you’ll see the symlink to the module. This means you can run your standard require("foo") call inside of your project, and it will load your local development version of “foo”.
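To make the mechanism concrete, here is a rough sketch of what the link amounts to, using nothing but plain symlinks in a throwaway directory (the paths and package contents are hypothetical, and real “npm link” also goes through a global symlink step not shown here):

```shell
# Simulate the end result of "npm link" + "npm link foo" in a sandbox.
workdir=$(mktemp -d)
mkdir -p "$workdir/dev/foo" "$workdir/my-app/node_modules"
echo '{"name":"foo","version":"1.0.0"}' > "$workdir/dev/foo/package.json"

# Net effect in the project: node_modules/foo points at the module's
# development directory, so require("foo") resolves to that code.
ln -s "$workdir/dev/foo" "$workdir/my-app/node_modules/foo"

readlink "$workdir/my-app/node_modules/foo"
```

Because node’s module resolution follows the symlink, edits you make in the development directory are picked up by the project immediately, with no reinstall step.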
But, what about deployment to production?

Deploying Production Modules
The great thing about linked modules is that they are only linked on your local box. The symlink that npm created exists only on your local machine, and nowhere else.
With that in mind, once you have the “foo” module working the way you need it, you would publish it to npm like any other module. Your project’s “package.json” file would still contain a standard reference to the “foo” module, to install from npm, as well.
When you deploy the project and run “npm install” for the production or test or whatever environment, npm will install the published version of your “foo” module.
You only need to make sure you publish the “foo” module before you deploy your code.

Unlinking
P.S. You can just as easily unlink a linked module from your local project with… you guessed it, “npm unlink foo”.
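In symlink terms, unlinking is just the reverse: the link disappears from node_modules. A minimal plain-shell sketch in a throwaway directory (paths are hypothetical, and real “npm unlink” also cleans up npm’s global link):

```shell
# State after "npm link foo": node_modules/foo is a symlink to the source.
workdir=$(mktemp -d)
mkdir -p "$workdir/dev/foo" "$workdir/my-app/node_modules"
ln -s "$workdir/dev/foo" "$workdir/my-app/node_modules/foo"

# "npm unlink foo" boils down to removing that symlink; a later
# "npm install" would then fetch the published copy from the registry.
rm "$workdir/my-app/node_modules/foo"
ls -A "$workdir/my-app/node_modules"
```

Note that removing the link never touches the module’s source directory; only the pointer inside the project goes away.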
We started this new “Highlights” series last month to capture the SAFe news and activities that haven’t been covered in other posts, but would still provide real value to the community. Here’s the latest roundup—a mix of new perspectives, understanding, and practical steps for successful implementation of the Framework.
Recorded Webinar: Building Complex Systems with SAFe 4.0
Presented by Alex Yakyma, SAFe Fellow and Principal Consultant, Scaled Agile & Harry Koehnemann, SPCT, Director of Technology, 321 Gang
This is a must-see video for anyone working with large solutions involving hundreds of practitioners or more per value stream. Alex and Harry take a deep dive into the challenges large systems builders face and key approaches for addressing them. They look at what makes complex systems development so complex, and explore the myths that surround Lean-Agile in a complex, multidisciplinary world that includes hardware, firmware, and other engineering domains.
SAFe Provides a Recipe for Agile at Enterprise Scale
by Guy Harrison, Executive Director of R&D at Dell
Guy Harrison describes SAFe as attempting to define the ‘Agile Enterprise’: a solution to software development that is still somewhat at odds with the broader business lifecycle in which it resides.
The Top 10 Pitfalls of Agile Capitalization
by Catherine Connor, SPC4, Portfolio Agility Solutions Specialist at CA Technologies
A good read for companies looking for ways to accurately and defensibly capitalize agile software development.
Ready to get started on your SAFe® 4.0 transformation? (SAFe 4.0 for ALM)
by Amy Silberbauer, SPC, Solution Architect, Enterprise Scaled Agile (SAFe), Strategic DevOps at IBM
“Is it SAFe yet?” Get the answer in this 5th installment describing the transformation of IBM’s internal ALM development organization towards a Continuous Delivery model.
Using Agile Based Strategic Planning Across Your Enterprise
by Steve Elliott, SPC, CEO of AgileCraft
What does it mean to have an “Agile” strategic planning process? Steve Elliott discusses how enterprise agility solves the three key problems of traditional strategic planning.
SAFe Agile Release Train (ART) Launch / Client: Totemic Tech
by McKenna Consultants/ Nick McKenna, SPC, CEO McKenna Consultants
This quick read describes the early stages of a SAFe adoption by Totemic Tech, a UK-based SaaS platform provider for banking and finance markets.
Shiny Objects and SAFe – article hosted by VersionOne
by Tom Weinberger, SPC, Agile Coach at Blue Agility
Too much WIP on your plate? Read Tom Weinberger’s discussion on how to handle multiple requests from multiple stakeholders within the SAFe discipline.
The brains from BestBrains share a 5-minute video on their 2-day PI planning simulation. Check it out for some creative use of LEGOS and chalkboards as the ART teams build a ‘village.’ Said one participant, “It gave me a deep understanding of SAFe and how to use the theory in practice.”
We’ll keep rounding up these great resources for the community, so stay tuned to the blog.
In the meantime, stay SAFe!
It’s official! Pivotal Tracker’s new project Analytics features are now out of beta!
As thousands of you have already discovered, Analytics bring a new level of visibility to your project and allow you to easily uncover bottlenecks and continuously improve your team’s performance. They’re a collection of simple but powerful reports that give insight into your project’s cadence, including fluctuations over time. You get a high-level view of progress, and the power to easily drill down into details.
Here’s what some of our customers have said about Analytics recently:
“We love the new reports, they have been very helpful for us to communicate expectations with managers, as well as optimize our cycle time.”
—Dennis Stevense, Lead Software Engineer, Streamline
“Loving Pivotal Tracker’s new analytics! Helpful and practical, with all the insight we were looking for.”
—Matthew O’Neill, COO, SameWave
Where to start
Analytics replace Tracker’s old progress and points breakdown reports, as well as the in-panel charts. The new Analytics are one click away from your project—just use the Analytics tab in the new navigation at the top of the page. This is also how you can get to your project settings and members pages (it replaces the old cogwheel menu in the project sidebar).
Analytics provide a number of ways for your team to explore your project data:

Project Overview
Find out how time was spent on stories, see trends over time, and learn which stories have taken the longest.
Click “view report” anywhere you see it to drill down to a more detailed report (e.g., reports for individual releases that also include release burndown charts). These charts allow you to filter by label or epic, or see a list of stories for a given iteration by clicking on that iteration in the chart.
Note: Currently, Analytics allow you to see up to six months of historical data.
We will be publishing a series of Analytics-related blog posts over the next few weeks, so be sure to follow us on Twitter to stay informed.
One of the next steps is improved cross-project analytics and visibility, and we’re looking for customers that might be willing to help us shape these. If you would benefit from being able to see progress, trends, or status across multiple projects, please get in touch!
Your feedback throughout the beta process has been instrumental in getting Analytics to where they are today, so thank you! But please don’t stop—this is just one milestone on a long journey. Continue to send us feedback via the widget at the top left of the Analytics page, or email firstname.lastname@example.org, to share your comments and suggestions.
Richard Knaster has been working on a new whitepaper: “An Introduction to SAFe 4.0.” It distills SAFe down to its primary elements and ideas, with just enough depth to provide a fairly comprehensive understanding of the Framework. He’ll post that for comments and downloads sometime soon in the Updates category of this blog, so stay tuned.
This new, “leaner” overview of the Framework has reminded us of the need to emphasize what’s really important in SAFe. Those of you who have been practicing in the trenches know how critical the principles are to a successful implementation, so with that in mind I thought I’d provide the abridged version of those here now. Comments are welcome.

SAFe Lean-Agile Principles Abridged

#1 – Take an economic view
Achieving the best value and quality for people and society in the sustainably shortest lead time requires a fundamental understanding of the economics of the system builder’s mission. Lean systems builders endeavor to make sure that everyday decisions are made in a proper economic context. The primary aspects include developing and communicating the strategy for incremental value delivery, and the creation of the Value Stream Economic Framework, which defines the tradeoffs between risk, cost of delay, and operational and development costs, and supports decentralized decision-making.

#2 – Apply systems thinking
Deming, one of the world’s foremost systems thinkers, constantly focused on the larger view of problems and challenges faced by people building and deploying systems of all types—manufacturing systems, social systems, management systems, even government systems. One central conclusion was the understanding that the problems faced in the workplace were a result of a series of complex interactions that occurred within the systems the workers used to do their work. In SAFe, systems thinking is applied to the organization that builds the system, as well as the system under development, and further, how that system operates in its end user environment.

#3 – Assume variability; preserve options
Traditional design and lifecycle practices drive picking a single requirements and design option early in the development process (early in the “cone of uncertainty”). However, if the starting point is wrong, then future adjustments take too long and can lead to a suboptimal long-term design. Alternatively, lean systems developers maintain multiple requirements and design options for a longer period in the development cycle. Empirical data is then used to narrow focus, resulting in a design that creates better economic outcomes.

#4 – Build incrementally with fast, integrated learning cycles
Lean systems builders build solutions incrementally in a series of short iterations. Each iteration results in an integrated increment of a working system. Subsequent iterations build upon the previous ones. Increments provide the opportunity for fast customer feedback and risk mitigation, and also serve as minimum viable solutions or prototypes for market testing and validation. In addition, these early, fast feedback points allow the systems builder to “pivot” where necessary to an alternate course of action.

#5 – Base milestones on objective evaluation of working systems
Systems builders and customers have a shared responsibility to assure that investment in new solutions will deliver economic benefit. The sequential, phase-gate development model was designed to meet this challenge, but experience has shown that it does not mitigate risk as intended. In Lean-Agile development, each integration point provides an opportunity to evaluate the solution, frequently and throughout the development life cycle. This objective evaluation provides the financial, technical, and fitness-for-purpose governance needed to assure that a continuing investment will produce a commensurate return.

#6 – Visualize and limit WIP, reduce batch sizes, and manage queue lengths
Lean systems builders strive to achieve a state of continuous flow, whereby new system capabilities move quickly and visibly from concept to cash. Three primary keys to implementing flow are to:

- Visualize and limit the amount of work-in-process so as to limit demand to actual capacity
- Reduce the batch sizes of work items to facilitate reliable flow through the system
- Manage queue lengths so as to reduce the wait times for new capabilities

#7 – Apply cadence, synchronize with cross-domain planning
Cadence transforms unpredictable events into predictable ones, and provides a rhythm for development. Synchronization causes multiple perspectives to be understood, resolved and integrated at the same time. Applying development cadence and synchronization, coupled with periodic cross-domain planning, provides Lean systems builders with the tools they need to operate effectively in the presence of product development uncertainty.

#8 – Unlock the intrinsic motivation of knowledge workers
Lean-Agile leaders understand that ideation, innovation, and engagement of knowledge workers can’t generally be motivated by incentive compensation; individual MBOs (Management by Objectives) cause internal competition and destroy the cooperation necessary to achieve the larger system aim. Providing autonomy, mission and purpose, and minimizing constraints, leads to higher levels of employee engagement, and results in better outcomes for customers and the enterprise.

#9 – Decentralize decision-making
Achieving fast value delivery requires fast, decentralized decision-making, as any decision escalated introduces delay. In addition, escalation can lead to lower-fidelity decisions, due to the lack of local context, plus changes in fact patterns that occur during the wait time. Decentralized decision-making reduces delays, improves product development flow, and enables faster feedback and more innovative solutions. However, some decisions are strategic, global in nature, and have economies of scale sufficient to warrant centralized decision-making. Since both types of decisions occur, the creation of an established decision-making framework is a critical step in ensuring fast flow of value.
This is an article that I originally wrote for the Architecture Journal to walk through how we created “a language for software architecture.” Since the article is no longer available, I’m making it available here for old time’s sake.
The goal at the time was to create a simple way to work through solution design challenges and expose some of the key architectural concerns and choices.
The idea was to make it very easy to zoom out to the broader context, and then very quickly zoom into common architecture choices, such as deployment topologies and cross-cutting concerns.
I also wanted to be able to better leverage the existing patterns in the software industry by giving them a backdrop and a canvas so architects could compose them easier and apply them in a more holistic and effective way.
Grady Booch, one of IBM’s distinguished engineers, had this to say about the Architecture Guide where we first created this “language for architecture”:
“Combine these styles and archetypes, and you have an interesting language for describing a large class of applications. While I don’t necessarily agree that these styles and archetypes are orthogonal (nor are the lists complete) for the general domain of software architecture, for Microsoft’s purposes, these styles offer an excellent operating model into which one can apply their patterns and practices.”
While a lot has changed since the original creation of our Architecture Language, a lot of the meta-frame remains the same. If I were to update the Architecture Language, I would simply walk through the big categories and update them.

Summary
One of the most important outcomes of the patterns & practices Application Architecture Guide 2.0 project is a language for the space. A language for application architecture. Building software applications involves a lot of important decisions. By organizing these decisions as a language and a set of mental models, we can simplify organizing and sharing information. By mapping out the architecture space, we can organize and share knowledge more effectively. By using this map as a backdrop, we can also overlay principles, patterns, technologies, and key solutions assets in meaningful and relevant ways. Rather than a sea of information, we can quickly browse hot spots for relevant solutions.
- A Map of the Terrain
- Mapping Out the Architecture Space
- Architecture Frame
- Application Types
- Application Feature Frame
- Architecture Styles
- Quality Attributes
- Layered Architecture Reference Example
One of the most effective ways to deal with information overload is to frame a space. Just like you frame a picture, you can frame a problem to show it a certain way. When I started the patterns & practices Application Architecture Guide 2.0 project, the first thing I wanted to do was to frame out the space. Rather than provide step-by-step architectural guidance, I thought it would be far more valuable to first create a map of what’s important. We could then use this map to prioritize and focus our efforts. We could also use this map as a durable, evolvable backdrop for creating, organizing and sharing our patterns & practices work. This is the main map, the Architecture Frame, we created to help us organize and share principles, patterns, and practices in the application architecture space:
Creating the map was an iterative and incremental process. The first step was to break up application architecture into meaningful buckets. It first started when I created a project proposal for our management team. As part of the proposal, I created a demo to show how we might chunk up the architecture space in a meaningful way. In the demo, I included a list of key trends, a set of application types, a set of architectural styles, a frame for quality attributes, an application feature frame, a set of example deployment patterns, and a map of patterns & practices solution assets. I used examples where possible simply to illustrate the idea. It was well received and it served as a strawman for the team.
Each week, our core Application Architecture Guide 2.0 project team met with our extended development team, which primarily included patterns & practices development team members. During this time, we worked through a set of application types, created a canonical application, analyzed layers and tiers, evaluated key trends, and created technology matrix trade-off charts. To create and share information rapidly, we created a lot of mind maps and slides. The mind maps worked well. Rather than get lost in documents, we used the mind maps as backdrops for conversation and elaboration.
Key Mapping Exercises
We mapped out several things in parallel:
- Key trends. Although we didn’t focus on trends in the guide, we first mapped out key trends to help figure out what to pay attention to. We used a mind map and we organized key trends by application, infrastructure, and process. While there weren’t any major surprises, it was a healthy exercise getting everybody on the same page in terms of which trends mattered.
- Canonical application. The first thing we did was figure out the delta from the original architecture guide. There were a few key changes. For example, we found that today’s applications have a lot more clients and scenarios they serve. They’ve matured and they’ve been extended. We also found today’s applications have a lot more services, both exposed and consumed. We also noticed that some of today’s applications are flatter and have fewer layers. Beyond that, many things, such as the types of components and the types of layers, were fairly consistent with the original model.
- Layers and tiers. This was one of the more painful exercises. Early in the project, we met each week with our development team, along with other reviewers. The goal was to map out the common layers, tiers, and components. While there was a lot of consistency with the original application architecture guide, we wanted to reflect any learnings and changes since the original model. Once we had a working map of the layers, tiers, and components, we vetted it with multiple customers to sanity-check the thinking.
- Application types. We originally explored organizing applications around business purposes or dominant functionality, but customer feedback told us we were better off optimizing around technical types, such as Web application or mobile client. They were easy for customers to identify with. They also made it easy to overlay patterns, technologies, and key patterns & practices solution assets. The technical application types also made it easy to map out relevant technologies.
- Architectural styles. This is where we had a lot of debate. While we ultimately agreed that it was helpful to have a simple language for abstracting the shapes of applications and the underlying principles from the technology, it was difficult to create a map that everybody was happy with. Things got easier once we changed some of the terminology and we organized the architectural styles by common hot spots. It then became obvious that the architectural styles are simply named sets of principles. We could then have a higher-level conversation around whether to go with object-based communication or message-based and SOA, for example. It was also easy to describe deployments in terms of 2-tier, 3-tier, and N-tier.
- Hot spots for architecture. When you build applications, there’s a common set of challenges that show up again and again. For example, caching, data access, exception management, logging … etc. These are application infrastructure problems or cross-cutting concerns. You usually don’t want to make these decisions ad hoc on any significant application. Instead, you want to have a set of patterns and guidelines, or ideally reusable code, that the team can leverage throughout the application. What makes these hot spots is that they are actionable, key engineering decisions. You want to avoid do-overs where you can. Some do-overs are more expensive than others. One of the beauties of the architecture hot spots is that they helped show the backdrop behind Enterprise Library. For example, there’s a data access block, a caching block, a validation block … etc.
- Hot spots for application types. When you build certain classes of application, there’s recurring hot spots. For example, when you build a rich client, one of the common hot spots to figure out is how to handle occasionally disconnected scenarios. The collection of hot spots for architecture served as a baseline for finding hot spots in the other application types. For example, from the common set of hot spots, we could then figure out which ones are relevant for Web applications, or which additional hot spots would we need to include.
- Patterns. Mapping out patterns was a lengthy process. Ultimately, we probably ended up with more information in our workspace than made it into the guide. To map out the patterns, we created multiple mind maps of various pattern depots. We summarized patterns so that we could quickly map them from problems to solutions. We then used our architecture hot spots and our hot spots for application types as a filter to find the relevant patterns. We then vetted the patterns with customers to see if the mapping was useful. We cut any patterns that didn’t seem high enough priority. We also cut many of our pattern descriptions when they started to weigh the guide down. We figured we had plenty of material and insight to carve out future pattern guides and we didn’t want to overshadow the value of the main chapters in the guide. We decided the best move for now was to provide a Pattern Map at the end of each application chapter to show which patterns are relevant for key hot spots. Customers seemed to like this approach and it kept things lightweight.
- patterns & practices solution assets. This was the ultimate exercise in organizing our catalog. We actually have a large body of documented patterns. We also have several application blocks and factories, as well as guides. By using our architecture frame, it was easier to organize the catalog. For example, the factories and reference implementations mapped to the application types. The Enterprise Library blocks mapped to the architecture hot spots. Several of the guides mapped to the quality attributes frame.
- Microsoft platform. This was a challenge. It meant slicing and dicing the platform stack in a meaningful way as well as finding the right product team contacts. Once we had our application types in place, it got a lot easier. For example, depending on which type of application you were building (RIA, Web, mobile … etc.), this quickly narrowed down relevant technology options. We created technology matrixes for presentation technologies, integration technologies, workflow technologies, and data access technologies. Since the bulk of the guide is principle and pattern based, we kept these matrixes in the appendix for fast lookups.
Over the weeks and months of the project, a very definite map of the landscape emerged. We found ourselves consistently looking for the same frames to organize information. While we tuned and pruned specific hot spots in areas, the overall model of common frames was helping us move through the space quickly.
- Architecture frame. The architecture frame was the main organizing map. It brought together the context (scenarios, quality attributes, requirements/constraints), application types, architectural styles, and the application hot spots.
- Application types. For application types, we optimized around a simple, technical set that resonated with customers. For example, Web application, RIA, mobile … etc.
- Quality attributes. We organized quality attributes by key hot spots: system, runtime, design-time, and user qualities.
- Architectural styles. We organized architectural styles by key hot spots: communication, deployment, domain, interaction, and structure.
- Requirements and constraints. We organized requirements by key types: functional, non-functional, technological. We thought of constraints in terms of industry and organizational constraints, as well as by which concern (for example, constraints for security or privacy).
- Application feature frame. The application feature frame became a solid backdrop for organizing many guidelines throughout the guide. The hot spots resonated: caching, communication, concurrency and transactions, configuration management, coupling and cohesion, data access, exception management, layering, logging and instrumentation, state management, structure, validation, and workflow.
- Application type frames. The application type frames are simply hot spots for key application types. We created frames for: Web applications, rich Internet applications (RIA), mobile applications, rich client applications, and services.
- Layered architecture reference model (canonical application). The canonical application is actually a layered architecture reference model. It helps show the layers and components in context.
- Layers and tiers. We used layers to represent logical partitions and tiers for physical partitions (this precedent was set in the original guide). We identified key components within the key layers: presentation layer, business layer, data layer, and service layer.
- Pattern Maps. Pattern maps are simply overlays of key patterns on top of relevant hot spots. We created pattern maps for the application types.
- Product and technology maps. We created technology matrixes for relevant products and technologies. To put the technologies in context, we used application types where relevant. We also used scenarios. To help make trade-off decisions, we included benefits and considerations for each technology.
One thing that helped early on was creating a Venn diagram of the three perspectives, user, business, and system:
In application architecture, it’s easy to lose perspective. It helps to keep three perspectives in mind. By having a quick visual of the three perspectives, it was easy to remind ourselves that architecture is always a trade-off among these perspectives. It also helped remind us to be clear about which perspective we’re talking about at any point in time. This also helped resolve many debates. The problem in architecture debates is that everybody is usually right, but only from their perspective. Once we showed people where their perspective fit in the bigger picture, debates quickly turned from conflict to collaboration. It was easy to move through user goals, business goals, and system goals once people knew the map.
The Architecture Frame is a simple way to organize the space. It’s a durable, evolvable backdrop. You can extend it to suit your needs. The strength of the frame is that it combines multiple lenses:
Here are the key lenses:
- Scenarios. This sets the context. You can’t evaluate architecture in a vacuum. You need a backdrop. Scenarios provide the backdrop for evaluation and relevancy.
- Quality Attributes. This includes your system qualities, your runtime qualities, your design-time qualities and user qualities.
- Requirements / Constraints. Requirements and constraints include functional requirements, non-functional requirements, technological requirements, industry constraints, and organizational constraints.
- Application Types. This is an extensible set of common types of applications or clients. You can imagine extending it for business types. You can imagine including just the types of applications your organization builds. Think of it as product-line engineering. When you know the types of applications you build, you can optimize for them.
- Architectural Styles. This is a flat list of common architectural styles. The list of architectural styles is flexible and most applications are a mash up of various styles. Architectural styles become more useful when they are organized by key decisions or concerns.
- Application Feature Frame. The application feature frame is a concise set of hot spots that show up time and again across applications. They reflect cross-cutting concerns and common application infrastructure challenges.
This is the simple set of technical application types we defined:
Web applications
Applications of this type typically support connected scenarios and can support different browsers running on a range of operating systems and platforms.
Rich Internet applications (RIA)
Applications of this type can be developed to support multiple platforms and multiple browsers, displaying rich media or graphical content. Rich Internet applications run in a browser sandbox that restricts access to some devices on the client.
Mobile applications
Applications of this type can be developed as thin client or rich client applications. Rich client mobile applications can support disconnected or occasionally connected scenarios. Web or thin client applications support connected scenarios only. The device resources may prove to be a constraint when designing mobile applications.
Rich client applications
Applications of this type are usually developed as stand-alone applications with a graphical user interface that displays data using a range of controls. Rich client applications can be designed for disconnected and occasionally connected scenarios because the applications run on the client machine.
Services
Services expose complex functionality and allow clients to access them from local or remote machines. Service operations are called using messages, based on XML schemas, passed over a transport channel. The goal in this type of application is to achieve loose coupling between the client and the server.
Application Feature Frame
This is the set of hot spots for applications we defined:
Authentication and Authorization
Authentication and authorization allow you to identify the users of your application with confidence, and to determine the resources and operations to which they should have access.
Caching and State
Caching improves performance, reduces server round trips, and can be used to maintain the state of your application.
Communication
Communication strategies determine how you will communicate between layers and tiers, including protocol, security, and communication-style decisions.
Composition
Composition strategies determine how you manage component dependencies and the interactions between components.
Concurrency and Transactions
Concurrency is concerned with the way that your application handles conflicts caused by multiple users creating, reading, updating, and deleting data at the same time. Transactions are used for important multi-step operations in order to treat them as though they were atomic, and to recover in the case of a failure or error.
Configuration Management
Configuration management defines how you configure your application after deployment, where you store configuration data, and how you protect the configuration data.
Coupling and Cohesion
Coupling and cohesion are strategies concerned with layering, separating application components and layers, and organizing your application trust and functionality boundaries.
Data Access
Data access strategies describe techniques for abstracting and accessing data in your data store. This includes data entity design, error management, and managing database connections.
Exception Management
Exception-management strategies describe techniques for handling errors, logging errors for auditing purposes, and notifying users of error conditions.
Logging and Instrumentation
Logging and instrumentation represents the strategies for logging key business events, security actions, and provision of an audit trail in the case of an attack or failure.
User Experience
User experience is the interaction between your users and your application. A good user experience can improve the efficiency and effectiveness of the application, while a poor user experience may deter users from using an otherwise well-designed application.
Validation
Validation is the means by which your application checks and verifies input from all sources before trusting and processing it. A good input and data-validation strategy takes into account not only the source of the data, but also how the data will be used, when determining how to validate it.
Workflow
Workflow is a system-assisted process that is divided into a series of execution steps, events, and conditions. The workflow may be an orchestration between a set of components and systems, or it may include human collaboration.
Architectural Styles
For architectural styles, we first framed the key concerns to organize the architectural styles, and then we defined some common architectural styles.
Organizing Architectural Styles
These are the hot spots we used to organize architectural styles:
- Communication. Service-Oriented Architecture (SOA) and/or Message Bus and/or Pipes and Filters.
- Deployment. Client/Server or 3-Tier or N-Tier.
- Domain. Domain Model or Gateway.
- Structure. Component-Based and/or Object-Oriented and/or Layered Architecture.
Architectural Style Frame
These are some commonly recognized architectural styles:
Client/Server
Segregates the system into two applications, where the client makes a service request to the server.
Component-Based Architecture
Decomposes application design into reusable functional or logical components that are location-transparent and expose well-defined communication interfaces.
Layered Architecture
Partitions the concerns of the application into stacked groups (layers) such as presentation layer, business layer, data layer, and services layer.
Message Bus
A software system that can receive and send messages that are based on a set of known formats, so that systems can communicate with each other without needing to know the actual recipient.
N-Tier / 3-Tier
Segregates functionality into separate segments in much the same way as the layered style, but with each segment being a tier located on a physically separate computer.
Object-Oriented
An architectural style based on division of tasks for an application or system into individual reusable and self-sufficient objects, each containing the data and the behavior relevant to the object.
Separated Presentation
Separates the logic for managing user interaction from the user interface (UI) view and from the data with which the user works.
Service-Oriented Architecture (SOA)
Refers to applications that expose and consume functionality as a service using contracts and messages.
Quality Attributes
For quality attributes, we first framed the key categories to organize the quality attributes, and then we defined some common quality attributes.
Organizing Quality Attributes
This is a simple way to organize and group quality attributes:
· Conceptual Integrity
· User Experience / Usability
Quality Attribute Frame
These are some common quality attributes:
Availability is the proportion of time that the system is functional and working. It can be measured as a percentage of the total system downtime over a predefined period. Availability will be affected by system errors, infrastructure problems, malicious attacks, and system load.
Conceptual integrity is the consistency and coherence of the overall design. This includes the way that components or modules are designed, as well as factors such as coding style and variable naming.
Flexibility is the ability of a system to adapt to varying environments and situations, and to cope with changes in business policies and rules. A flexible system is one that is easy to reconfigure or adapt in response to different user and system requirements.
Interoperability is the ability of diverse components of a system or different systems to operate successfully by exchanging information, often by using services. An interoperable system makes it easier to exchange and reuse information internally as well as externally.
Maintainability is the ability of a system to undergo changes to its components, services, features, and interfaces as may be required when adding or changing the functionality, fixing errors, and meeting new business requirements.
Manageability is how easy it is to manage the application, usually through sufficient and useful instrumentation exposed for use in monitoring systems and for debugging and performance tuning.
Performance is an indication of the responsiveness of a system to execute any action within a given time interval. It can be measured in terms of latency or throughput. Latency is the time taken to respond to any event. Throughput is the number of events that take place within a given amount of time.
Reliability is the ability of a system to remain operational over time. Reliability is measured as the probability that a system will not fail to perform its intended functions over a specified time interval.
Reusability is the capability for components and subsystems to be suitable for use in other applications and in other scenarios. Reusability minimizes the duplication of components and also the implementation time.
Scalability is the ability of a system to function well when there are changes to the load or demand. Typically, the system will be able to be extended over more powerful or more numerous servers as demand and load increase.
Security is the measure of how well a system is protected from disclosure or loss of information, and from successful malicious attack. A secure system aims to protect assets and prevent unauthorized modification of information.
Supportability is how easy it is for operators, developers, and users to understand and use the application, and how easy it is to resolve errors when the system fails to work correctly.
Testability is a measure of how easy it is to create test criteria for the system and its components, and to execute these tests in order to determine if the criteria are met. Good testability makes it more likely that faults in a system can be isolated in a timely and effective manner.
Usability defines how well the application meets the requirements of the user and consumer by being intuitive, easy to localize and globalize, and able to provide good access for disabled users and a good overall user experience.
Layered Architecture Reference Model
This is our canonical application example. It’s a layered architecture showing the common components within each layer:
The canonical application model helped us show how the various layers and components work together. It was an easy diagram to pull up and talk through when we were discussing various design trade-offs at the different layers.
We identified the following layers:
- Presentation layer
- Business layer
- Data layer
- Service layer
They are logical layers. The important thing about layers is that they help factor and group your logic. They are also fractal. For example, a service can have multiple types of layers within it. The following is a quick explanation of the key components within each layer.
Presentation Layer Components
- User interface (UI) components. UI components provide a way for users to interact with the application. They render and format data for users and acquire and validate data input by the user.
- User process components. To help synchronize and orchestrate these user interactions, it can be useful to drive the process by using separate user process components. This means that the process-flow and state-management logic is not hard-coded in the UI elements themselves, and the same basic user interaction patterns can be reused by multiple UIs.
- Application facade (optional). Use a facade to combine multiple business operations into a single message-based operation. You might access the application facade from the presentation layer by using different communication technologies.
Business Layer Components
- Business components. Business components implement the business logic of the application. Regardless of whether a business process consists of a single step or an orchestrated workflow, your application will probably require components that implement business rules and perform business tasks.
- Business entity components. Business entities are used to pass data between components. The data represents real-world business entities, such as products and orders. The business entities used internally in the application are usually data structures, such as DataSets, DataReaders, or Extensible Markup Language (XML) streams, but they can also be implemented by using custom object-oriented classes that represent the real-world entities your application has to work with, such as a product or an order.
- Business workflows. Many business processes involve multiple steps that must be performed in the correct order and orchestrated. Business workflows define and coordinate long-running, multi-step business processes, and can be implemented using business process management tools.
Data Layer Components
- Data access logic components. Data access components abstract the logic necessary to access your underlying data stores. Doing so centralizes data access functionality, and makes the process easier to configure and maintain.
- Data helpers / utility components. Helper functions and utilities assist in data manipulation, data transformation, and data access within the layer. They consist of specialized libraries and/or custom routines especially designed to maximize data access performance and reduce the development requirements of the logic components and the service agent parts of the layer.
- Service agents. Service agents isolate your application from the idiosyncrasies of calling diverse services from your application, and can provide additional services such as basic mapping between the format of the data exposed by the service and the format your application requires.
Service Layer Components
- Service interfaces. Services expose a service interface to which all inbound messages are sent. The definition of the set of messages that must be exchanged with a service, in order for the service to perform a specific business task, constitutes a contract. You can think of a service interface as a facade that exposes the business logic implemented in the service to potential consumers.
- Message types. When exchanging data across the service layer, data structures are wrapped by message structures that support different types of operations. For example, you might have a Command message, a Document message, or another type of message. These message types are the “message contracts” for communication between service consumers and providers.
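To make the layering concrete, here is a minimal, hypothetical sketch of how the layers described above call downward through the stack. All class, method, and field names are invented for illustration; they are not from any framework or from the original article:

```python
class ProductDataAccess:
    """Data layer: abstracts the underlying data store behind a simple API."""

    def __init__(self):
        # Stand-in for a real database; a single hard-coded product.
        self._store = {1: {"id": 1, "name": "Widget", "price": 9.99}}

    def get_product(self, product_id):
        return self._store.get(product_id)


class ProductBusinessComponent:
    """Business layer: implements business rules on top of data access."""

    def __init__(self, data_access):
        self._data = data_access

    def price_with_tax(self, product_id, tax_rate=0.2):
        product = self._data.get_product(product_id)
        if product is None:
            raise ValueError("unknown product")
        return round(product["price"] * (1 + tax_rate), 2)


class ProductServiceInterface:
    """Service layer: exposes the business logic behind a message-based facade."""

    def __init__(self, business):
        self._business = business

    def handle(self, message):
        # A crude "message contract": a dict carrying an operation name and args.
        if message.get("op") == "price_with_tax":
            return {"result": self._business.price_with_tax(message["product_id"])}
        return {"error": "unknown operation"}


# The presentation layer talks only to the service interface, never to the
# data layer directly -- that is the point of the layering.
service = ProductServiceInterface(ProductBusinessComponent(ProductDataAccess()))
print(service.handle({"op": "price_with_tax", "product_id": 1}))
# → {'result': 11.99}
```

Note how each layer depends only on the layer directly beneath it, which is what makes the layers swappable and, as the article puts it, fractal.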
Tiers
Tiers represent the physical separation of the presentation, business, services, and data functionality of your design across separate computers and systems. Some common tiered design patterns include two-tier, three-tier, and n-tier.
The two-tier pattern represents a basic structure with two main components, a client and a server.
In a three-tier design, the client interacts with application software deployed on a separate server, and the application server interacts with a database that is also located on a separate server. This is a very common pattern for most Web applications and Web services.
In this scenario, the Web server (which contains the presentation layer logic) is physically separated from the application server that implements the business logic.
It’s easier to find your way around when you have a map. By having a map, you know where the key hot spots are. The map helps you organize and share relevant information more effectively. More importantly, the map helps bring together archetypes, arch styles, and hot spots in a meaningful way. When you put it all together, you have a simple language for describing large classes of applications, as well as a common language for application architecture.
I made my first computer game in 1976 and became a professional game developer in 1994. Within five years I was nearly burned out: I had been promoted to lead seven game projects and had turned into that whip-waving manager we all hated.
But I have been inspired along the way by witnessing how people like Shigeru Miyamoto made games and by what Mark Cerny wrote about his ideal process. I have also been inspired by being on a few teams that made great games and loved making them together.
This all came together when I read the first book about Scrum in 2003. It wasn't hard to make a connection between Miyamoto's "find the fun" philosophy and Mark's preproduction experimentation approach and the values of Scrum.
So we started experimenting with Scrum in game development. It wasn't a perfect fit. For example, we had to go beyond Scrum for content production and support. Along the way, we attended courses by Ken Schwaber and Mike Cohn (who also coached us onsite). They both inspired us about the human aspect of agile.
But after using it awhile, we began to see the benefit. Teams became more accountable. We leaders focused less on solving daily problems for them or baby-sitting a prescriptive process. We learned to serve their need for vision, clarity and support. Engagement, passion and fun grew.
A few years later, we were acquired by Vivendi and I started visiting their other studios to talk about how Scrum works for game development. I also started presenting the topic at GDC to large audiences. I enjoyed doing this and was encouraged by Mike, now a friend and mentor, to do it full-time.
So I took the leap in 2008 and began life as a one-person training crew. I had plenty of time and barely enough savings in the first few years to finish the book. Following that, the business became sustainable and I have loved every minute (OK, some of the airline travel hasn't been great). I do miss working on games directly with small teams, but walking inside over 100 studios over the past eight years and getting to know the people within is rewarding.
I'm not doing this to grow a big consulting firm. I still consider myself a game developer first and a trainer/consultant second. However, I am a Certified Scrum Trainer and have worked with some of the most skilled agile and lean trainers and thinkers. Combined with my game development experience this has helped me translate the purpose and values of agile and lean to the realities and challenges game developers face.
My goal isn't to ensure teams are following some rules by-the-book, but to help them find ways to make great games through iterative and human-focused approaches that work for game teams...and have a blast doing it.
It has been a while since I read The Phoenix Project and I am glad to have reviewed it again recently. Described as a business novel, or The Goal for the 21st century, the book focuses on a story that large organisations need to realise when they feel they need to transform IT.
The book focuses on a company in crisis – a company that is trying to complete lots of software projects, has a terrible number of them in flight, and grapples with the problems many companies have – lack of visibility of the work, dependency on key individuals, marketing-led promises, and an IT-as-cost-centre attitude. Bill, an IT manager, is one day promoted into a higher role where he is responsible for turning around and dealing with all the critical issues. He is given access to a mentor who introduces him to the “mysterious Three Ways”, which are slowly uncovered throughout the book.
What I liked about the book
Business novels are refreshing to read as they feel less like reading a business book and sometimes make picking up the book less of a chore. The authors manage to talk about generating insights and explain some of the tools from a number of angles (Bill’s thoughts as well as other characters’ perspectives), as well as relating it to existing material such as the Theory of Constraints.
Like all good books, you follow the exciting story plot that descends into what seems like an insurmountable situation, only for the protagonist to find ways of overcoming it. For those who have never been exposed to visual ways of working (like Kanban), or understanding Work in Progress, Queueing theory and how IT capability matters to business, there are many useful lessons to learn.
What would have made the book better
Although the book has several characters who behave in a negative way, and who pay for some of those consequences, you don’t hear about the attempts by the protagonist which end up failing (with their consequences), unlike in the real world. I also felt that the pace at which things changed seemed unrealistically fast – but that’s probably the downside of a business novel versus what might actually happen in real life.
I would still highly recommend this read if you’re interested in understanding how modern IT works, how DevOps culture operates, and some tools and techniques for moving IT management into a more responsive, flexible, but still highly controlled manner.
What makes MW2 a unique memory for me is that I finished it hours before our first child was born, five weeks early. When my wife had early contractions, the doctor told her that if she had ten repeats within the next hour, we should dash off to the hospital. She told me this while I was playing the last level of MW2. So I set the goal of completing the game within an hour. By the time she counted ten contractions, I had finished. My son was born a bit premature, but healthy, a few hours later. To her credit, my wife does not remind me of this obsessed and selfish behavior. I blame the game.
Recently I asked a few of the participants and leaders of the original game to dig into their memories and share their experiences with me. Tim Morten, a programmer on MW2 who is now a Lead Producer at Blizzard Entertainment, shared some of that history:
“MW2 went through two rebirths: one on the engineering side, and one on the design side. The original team had implemented something with promise, but it barely ran (not enough memory to hold more than two mechs) and it lacked narrative (just mechs on a flat surface shooting lamely at each other).
After a couple of years of effort, with a major deadline looming, management had no option but to retrench and down-scope the project. The existing team leadership departed at that point (lead engineers, lead producer, etc).
In an effort to salvage the massive effort invested, a couple of remaining engineers went rogue while VP Howard Marks was away at a tradeshow for a week - without permission, they attempted to convert the game to protected mode. This would theoretically provide access to enough memory to render a full set of mechs, but it had been deemed impossible in anything less than nine months - way more time than was available.
As of 9pm the night before Howard returned, they were ready to concede defeat: protected mode conversion requires extensive Intel assembly language programming, something they had no experience with - and there was no internet to use as a reference, they just had a single Intel tech manual. They thought they had done the right things, but there was no telling how many bugs remained before the game loop would run. Howard's arrival would spell the end of their effort, since his priority was to ship something, even if massive compromise in scope was required.
Against all odds, that midnight the game successfully looped in protected mode for the first time, and they were rewarded with a full set of mechs rendering - albeit in wireframe and without sound. They were elated to have cracked the hardest problem, opening up the possibility to build a better game.
Howard returned, recognized the potential that had been unlocked, and helped set the team up for success by bringing in proven problem solvers from Pitfall: The Mayan Adventure. John Spinale and Sean Vesce stepped in, to build a new team on the skeleton that remained, and to establish a vision for a product that to that point was nothing more than a bare bones tech demo.
The design rebirth of MW2 is something that Sean can speak better to, but it's fair to say that the technology rebirth was just an enabler - the design team innovated on so many levels under tight time pressure to produce something that was revolutionary for the time. Without that innovation, I have no doubt that MW2 would languish in obscurity today. Likewise, without the successful leadership of John rebuilding the team, and protecting the team from outside interference, we would not have achieved the success that we ultimately did.”
I’ve heard similar stories from numerous hit games: teams investing a measure of passion, heroic leadership protecting the team, and visionary executives bucking convention and gambling on a vision. These seem like valuable attributes to grow. This is what “people over process” is about.
Here is my one-page summary of some key differences between Agile and Waterfall. (I created this when I was asked to explain the topic to an exec earlier this month; I didn’t have anything good in my toolkit, nor could I find something on Google.) Key Differences between Agile and Waterfall In waterfall, […]
I’ve been at ThoughtWorks for 12 years. Who would have imagined? Instead of writing about my reflections on the past year, I thought I would do something different and post twelve key learnings and observations looking back over my career. I have chosen twelve, not because there are only twelve, but because it fits well with the theme of twelve years.
1. Tools don’t replace thinking
In my years of consulting and working with many organisations and managers I have seen a common approach to fixing problems, where a manager believes a tool will “solve” the given problem. This can be successful where a problem area is very well understood, unlikely to have many exceptions and everyone acts in the same manner. Unfortunately this doesn’t reflect many real-world problems.
Too many times I have witnessed managers implement an organisation-wide tool that is locked down to a specific way of working. The tool fails to solve the problem, and actually blocks real work from getting done. Tools should be there to aid, to help prevent known errors, and to help us remember repeated tasks, not to replace thinking.
2. Agile “transformations” rarely work unless the management group understands its values
Many managers make the mistake that only the people close to the work need to “adopt agile” when other parts of the organisation need to change at the same time. Co-ordinating this in enterprises takes a lot of time and skill with a focus on synchronising change at different levels of the organisation.
Organisations who adopt agile in only one part of their organisation face a real threat. As the old saying goes, “Change your organisation, or change your organisation.”
3. Safety is required for learning
Learning necessitates making mistakes. In the Dreyfus model, this means that people in the Advanced Beginner stage, in particular, need to make mistakes in order to learn. People won’t risk making mistakes if they feel they will do a bad job, lose the respect of their colleagues, or potentially hurt other people in the process.
As a person passionate about teaching and learning, I find ways to create a safe space for people to fail, and in doing so, make the essential mistakes they need to properly learn.
4. Everyone can be a leader
I have written about this topic before, but it is such an important observation. I see a common mental model trap where people feel the need to be given the role of a leader in order to act like a leader. People can demonstrate acts of leadership regardless of their title, and can do so in many different ways, simply by taking action on something without the explicit expectation or request for it.
5. Architects make the best decisions when they code
In the Tech Lead courses I run, I advocate for Tech Leads to spend at least 30% of their time coding. Spending time with the code helps build trust, respect, and a current understanding of the system. Architectural decisions made without regard for the constraints of the current system are often bad decisions.
6. Courage is required for change
I miss people talking about the XP values, one of which is Courage. Courage is required for acts of leadership, taking on the risk of failure and the risk/reward of attempting something new. Where there is no risk, there is often little reward.
7. Congruence is essential for building trust
Beware of the age-old maxim, “Do as I say, not as I do.” In reality, regardless of what you say, people will remember how you act, first and foremost. Acting congruently means making sure that your actions follow your words. Acting incongruently destroys trust. Saying “no” or “not now” is better than promising to do something by a certain time, only to not deliver it.
8. Successful pair programming correlates with good collaboration
Although not all pair programming environments are healthy, I do believe that when it works well, teams tend to have better collaborative cultures. Many developers prefer the anti-pattern of (long lived) branch-based development because it defers feedback and sources of potential conflict.
I consider (navigable) conflict a healthy sign of collaborative teams. Deferring feedback, as is the case with code reviews on long-lived branches, tends to lead to more resentment because the feedback is delivered so late.
9. Multi-model thinking leads to more powerful outcomes
One of my favourite subjects at university was Introduction to Philosophy, where we spent each week of the semester studying a different philosopher. Over the course of my career, I have come to appreciate the value of diversity, and of seeing a problem through multiple lenses. Systems thinking also recognises that facts can be interpreted in different ways, leading to newer ideas or solutions which may be combined for greater effect.
10. Appreciate that everyone has different strengths
Everyone is unique, each with their own set of strengths and weaknesses. Although we tend to seek like-minded people, teams are stronger when they have a broader set of strengths, and a strength in one area may be a weakness in a certain context. Differences in strengths can lead to conflict, but healthy teams appreciate the differences that people bring, rather than resent people for them.
11. Learning is a lifelong skill
The world constantly changes around us and there are always opportunities to learn some new skill, technique or tool. We can even learn to get better at learning, and there are many books, like Apprenticeship Patterns and The First 20 Hours, which can give you techniques to get better at this.
12. Happiness occurs through positive impact
The well-known book Drive talks about how people develop happiness through working towards a certain purpose. In my experience, this is often about helping people find ways to have a positive impact on others, which is why our Pillar 2 (Champion software excellence and revolutionize the IT industry) and Pillar 3 (Advocate passionately for social and economic justice) values are really important for us.
Conclusion
The twelve points above are not the only lessons I have learned in my time at ThoughtWorks, but they are some of the key learnings that help me help our clients.
This is a topic I brushed up against yesterday and meant to blog about at the end of the day, but I got a little busy. A lot of times when provisioning boxes locally in Vagrant, I’ve thought it would be incredibly useful to be able to automatically test the system to ensure all the expected bits are provisioned as expected.
I’ll probably throw together a nice public demo, but the short and skinny is to include a final Ansible provisioning step after the normal step that runs a test playbook of sorts against the system. For us, we just dumped our test tasks into our main roles and tagged them as test. Then in Vagrant we exclude test-tagged tasks, and in the test phase we run only those tagged tasks. Below is an example for one of our services to test that two service processes are running and that the load balancer is also serving up responses that are the same as those running on the two processes.
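The example snippet itself didn’t survive the trip into this post, so here is a sketch of what such test-tagged tasks could look like. The role layout, process name, ports, and health endpoint are invented for illustration; only the tagging technique is the point:

```yaml
# roles/myservice/tasks/test.yml -- hypothetical verification tasks, tagged
# "test" so a normal provisioning run (--skip-tags test) ignores them and a
# verification run (--tags test) executes only them.

- name: check that both service processes are running
  command: pgrep -f myservice-worker
  register: worker_procs
  changed_when: false
  failed_when: worker_procs.stdout_lines | length < 2
  tags: test

- name: fetch a response through the load balancer
  uri:
    url: http://localhost:8080/health
    return_content: yes
  register: lb_response
  tags: test

- name: fetch a response from one backend process directly
  uri:
    url: http://localhost:8081/health
    return_content: yes
  register: backend_response
  tags: test

- name: assert the load balancer serves the same content as the backend
  assert:
    that:
      - lb_response.content == backend_response.content
  tags: test
```

In the Vagrantfile you’d then set something like `ansible.skip_tags = "test"` on the normal provisioner, and run the verification pass separately with `--tags test`.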
I’ve also heard of other tools in this space, like ServerSpec, which may fit the bill if you’re not running Ansible or are running some mixed environment. So far I think Ansible fits well here, but you’re definitely going to be a little limited due to the tests being in YAML. Although you could hypothetically write some custom modules, or resort to shell wizardry if you need something more advanced.
I’m really excited about this… the idea that we could have full test suites with each of our Ansible roles, able to verify a whole swath of aspects like expected ulimits, is GREAT.
This morning I’m going to go with a new recurring weekly post: Friday Functions! While much of it will aim to share the large inventory of zsh functions I’ve acquired over the years, I’ll also be finding new additions if I run out of material. So it also serves to help me learn more!
This week’s function is probably only useful if you’re into AWS and use the awscli tool to interact with it from the command line. Using the awscli command directly can be quite verbose, so some nice shortcuts are useful. I actually learned of this handy function from Kris’s awesome collection of zsh configuration and made a few small adaptations to it.
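The function itself appears to have been lost in formatting, so here is a hypothetical reconstruction pieced together from the usage described below. The flag handling and the `--query` expression are my assumptions, not Kris’s original code:

```shell
# Hypothetical sketch of aws-instances-describe: list EC2 instances matching
# a tag value, defaulting to the Name tag and the "running" state.
aws-instances-describe() {
  local tag="Name" state="running"
  local -a values filters
  while [ $# -gt 0 ]; do
    case "$1" in
      -t) tag="$2"; shift 2 ;;      # tag key to filter on (default: Name)
      -s) state="$2"; shift 2 ;;    # instance state (default: running)
      *)  values+=("$1"); shift ;;  # tag values to match, wildcarded
    esac
  done
  filters=("Name=instance-state-name,Values=${state}")
  local v
  for v in "${values[@]}"; do
    filters+=("Name=tag:${tag},Values=*${v}*")
  done
  aws ec2 describe-instances --filters "${filters[@]}" \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
    --output table
}
```

The wildcards around each value mean a substring match is enough, which is what makes `aws-instances-describe http` find anything with http in its Name tag.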
This is pretty useful. If you want to find all instances with http in the name you just run aws-instances-describe http.
Or if you want to look for instances by a specific tag, you can use the `-t` switch. For example, to find all instances with the worker_email role tag we can just run aws-instances-describe -t role worker_email. You can add -s to change the state included in the filter, and like the actual call you can match multiple instances. So if you wanted to find all stopped instances with the taskhistory role, you’d run aws-instances-describe -t role taskhistory -s stopped. The function defaults to running instances only, since that’s what I’m looking for 99% of the time… looking for stopped or terminated instances is definitely the exception.
Hope this was interesting enough. Ideas, thoughts, comments or criticism are all welcome in the comments below! Let me know what you think!
It’s a question that most developers have a fast answer for: “WRITE CODE!” … but, is that really what you’re paid to do?
In this episode of Thoughts On Code I’ll explain why I don’t think your job is to just write code, after all.