We are happy to announce the release of the Semantic Logging Application Block (SLAB) v1.1. It has exciting new features that will make developers more productive and help improve your operational intelligence. I want to highlight three in particular:
- Support for activity tracing
Support for activity IDs for EventSources is available in .NET 4.5.1. This version of SLAB adds support for capturing the new ActivityId and RelatedActivityId properties from events published by EventSource classes and storing them in all supported sinks.
- Added new EnableEvents and DisableEvents extension methods for EventListener to configure event listeners using an event source name rather than an event source instance.
This allows capturing events from sources that are not publicly accessible, such as those generated by the TPL event provider to indicate activity ID transfers when tasks are used.
- Added Elasticsearch sink
A configurable sink to publish log events to an Elasticsearch server (1.x). The sink can be used in-process or out-of-process, and buffers events when writing them to the Elasticsearch server.
For other features, deployment considerations and known issues, I refer you to the official Release Notes.
I’d like to emphasize that the new Elasticsearch sink in this release was largely a community-driven effort. This is the first official release of the block since we started accepting community pull requests. Big thanks to all contributors. With this example of a fruitful collaboration between Microsoft and the developer community, I’d like to encourage other developers to engage with us and help improve SLAB and other p&p deliverables. If interested, our contribution guidelines can be found here.
SLAB 1.1 with the corresponding sinks ships under the Apache 2.0 license through NuGet. These are the package names:
As always, try it, use it, let us know what you think.
Visual.ly Infographic

Understanding why an organization thinks it needs to change is enlightening. Many times the people at the bottom ranks of the organization have no idea of the driving forces that necessitate a change. As a group, brainstorm a list of possible reasons for the change.
Dialogue on the difference between change and transition. How long does change take? What is a transition? Bridges Transition Model: Ending - Neutral Zone - New Beginning.
Dialogue on the difference between satisfiers and dissatisfaction in the workplace (Herzberg's Two-Factor theory).
See Gallup's Q12 Employee Engagement survey.
First, just capture all the reasons that might answer the question: Why change?
Typical reasons (if you need to prime the pump - or when the answers slow down - throw one of these out to head in a different direction).
- Current process just doesn’t work
- Decrease time to market for new products
- Cost reductions, improved efficiencies
- Scrum is the methodology of the decade (soup du jour)
- We’ve tried all the others - maybe this one will catch on
- The boss said so
- Our competitor is doing it
- Exponential change rate of market space requires ability to “pivot” into new market spaces with our existing products
- We don’t really know what our customers desire until we deliver it
- Outsourcing is hard so maybe this will fix it
- Our quality is too low - so Scrum will fix it
- Dot Vote - 3 dots each - which is the organization's true motivation?
- 5 Whys? - Identify one item and ask WHY until you have a root cause motivation.
Intrinsic vs Extrinsic Motivation
Herzberg's Two Factor Theory
Interesting links on Motivation
First, Break All the Rules.
Video series on Scrum (short takes)
A Release planning and Scrum simulation - Pirates of the Carriballoonian
Project success sliders
Quickie video explains relative vs absolute measures
Dog Grooming - agile estimation technique
Elements of an effective scrum board - checklist
Pick the Metrics to use to evaluate your team
If you're Agile and you know it - clap your hands. Prove it - show me the practices that support a claim of agility
Definition of Done & Ready
Intro the concepts of TDD & Refactoring
Create a set of well known plays for the team
Experience the power of prototyping and evolutionary design - Marshmallow challenge & video
Pair programming exercise
What Motivates your Employees
Resuming the DataViz 101 series started last year, I want to revisit some basics of data visualization, showing when we get value from having numbers visualized, and when such a visualization is inappropriate. One of the main reasons that data visualization exists at all — be it smooth infographics or slick project reports — is the fact that it saves us the time needed to digest quantitative information, i.e. information that has numbers in it. Visuals present numbers in an appealing way, making them easier to read. Sometimes, however, numbers are visualized with no substantial ground. If no meaning is ingrained into the graphical cuteness, the visual makes no sense, and some other technique for rendering the information has to be used instead, such as text.
Take a look at one such case where numbers pretend to be visualized with some meaning, while actually failing to provide real value to people who look at them.
One can see this pattern of highlighted numbers quite often on websites for conferences or gatherings. Such a visual is presumably supposed to convince potential attendees that the conference holds some value for them. However, I don’t see how it helps anyone decide if a conference is worth attending or not. There’s no universal converter that would work for each and every individual, and translate those hours of keynotes, workshops, and trainings, and the count of speakers, into a meaningful answer to one question: “Will I learn something new and useful for me personally at this conference?” How are these flat numbers capable of attending to the unique knowledge landscape of any given individual? They can’t. People looking to decide for themselves if a conference is worth attending might as well skip this “hippish” part with its meaningless numbers, and proceed straight to the text about the speakers, keynotes, workshops and training. Bad news for whoever made this visual: they’ve wasted both their own time and the time of the site visitors.
Here’s the other example that shows how visualized numbers can help in project management:
This is a sparkline report, and while it includes numbers that seem to hold no meaning for an external observer, an insider who looks at the graph is likely to know the project context: how user stories and bugs are sized in general, how much effort it takes to complete them, and how these numbers can be rendered into a diagnosis of the project's health. Compare the sparkline graph with this text: “This report covers the last 16 weeks. Designers had their backlog full with 13 user stories in the first week, with fewer and fewer new stories added in the next weeks. They completed 3 stories, and had 2 more added to their backlog in the current week.” Of course, the sparkline renders this info in a more compact and time-saving way.
In summary, before we hurry to create a visual report or an infographic with numbers, we need to consider whether a user or reader will quickly get the info they want from this visual. Some information is rendered best as a piece of text, as in the first example from a conference website: words would have taken readers to the core of the matter faster. In the second example, it’s the other way round: it would take more time to convey the same information in words.
Sprintly was recently recognized by Software Advice as one of the top five favorite user interfaces in agile project management. Software Advice researched hundreds of project management interfaces, selected their five favorites, and in the review called Sprintly’s interface “remarkably attractive.”
I had a chance to catch up with author Noel Radley to ask her a few questions about herself, how she found Sprintly, and what her primary drivers were for selecting Sprintly:
Phuong: Will you give us some background on yourself?
Noel: I’ve been a researcher since 2001 (PhD in English), covering technology topics since 2007. I’ve also taught writing for over a decade at the University of Texas at Austin and Santa Clara University. At both campuses, I led research in teaching with technology, including work in Drupal, electronic portfolios, and media production. In February, I joined Software Advice, where my primary research is project management software.
Phuong: How did you find Sprintly?
Noel: It was a rigorous process to select our five favorites. We surveyed hundreds of project management interfaces, and we deeply researched about a dozen.
Sprintly showed up as a popular project management interface, and we repeatedly found users commenting on the visual design. As we evaluated the interface, we agreed that it had earned its reputation for compelling design. It really stood out among project management tools.
Phuong: What were the primary drivers for choosing Sprintly?
Noel: First, we were looking for interfaces that wowed us with beautiful design. Secondly, we were searching for interfaces that helped teams think in critical and productive ways about workflows. We liked Sprintly because it did both. The interface was stunning, while it also provided real insight to project teams.
It’s true there are many interfaces to help track development processes, but we wanted to search for ones that helped developers think in a new way. We feel Sprintly offers unique dashboard views that are conducive to prioritizing, decision-making, and adapting a development workflow.
For example, the Kanban-style board is designed in such a way that it doesn’t look like your typical card wall, which is refreshing. It also has functionality your typical card wall may not have, since the cards are more obviously interactive and social than other boards we’d seen.
Phuong: From a UX and UI perspective, do you have any opinions on what a good agile project management tool should provide a team?
Noel: Our article focuses on UI design. To demonstrate great UI, we feel that an agile project management tool would have powerful visual metaphors to help the team conceptualize their work. The tool would be accessible with clearly labelled and categorized features. Finally, in terms of color and composition, the design would make it easy for the team to identify what aspects of the development were the most important at any given moment.
For development teams, the interface should make it easy for teams to view and adapt workflows in real-time.
From surveying these tools, we got a sense that agile project management interfaces are becoming more user-oriented in general. This seemed to indicate an innovative moment for these kinds of software, and the interfaces we selected were leading the way. We can’t wait to see what’s next!
“Stay hungry. Stay foolish.” Many people remember those words as a quote from Steve Jobs made during Jobs’ Stanford commencement address in 2005, where, among other things, he spoke about the Whole Earth Catalog as the Google of that day. Referring to the Whole Earth Catalog, and that particular time in history, brings a tear to my eye and makes me smile at the same time.
People remember the four words as Jobs’, but forget that Jobs readily admitted that he didn’t make up those words – he was quoting the farewell message on the back cover of the last issue of Whole Earth Catalog in 1974.
To this day, whenever I look at that photo, I never look at it as an end of a long and winding road, but as a beginning of a journey down it. But that’s not why I’m writing this today. What I’m interested in writing about starts with this “Stay hungry. Stay foolish.” mantra, and leads us down the road of why this is so important for agility.
Staying hungry: we are reminded to focus on delighting our customer. Hunger is so low on Maslow's hierarchy that it’s a basis for just about everything else. While one thing to be hungry for is the products we produce, the capital they provide should not be considered the goal of existence. In a sense, rather than simply dining well on the money that revenue produces, we also have to use it to plant the seeds for future revenues and the capital they will provide.
“A rising tide lifts all boats.” It’s a phrase commonly attributed to John F. Kennedy, but he didn’t make up those words either. He got them from a chamber of commerce called the New England Council. For purposes here, remember that simply doing what’s worked in the past is merely a way to slowly drain the lake. Constantly improving is what we have to do to make the lake larger. Continuous improvement doesn’t come with a checklist of things to do. It requires creativity to figure out how to make what’s good now even better, and that requires that we never grow complacent and rest on our laurels. We have to constantly be searching and never forgetting that things could always be even better than they are. “Curiosity is stimulated when individuals feel deprived of information and wish to reduce or eliminate their ignorance.” (The Agile Mind, p279, Wilma Koutstaal, 2012, Oxford University Press.)
Staying foolish: we are reminded not just to accept, but to question and be curious. Getting to the “whys” is far more important than just jumping to a whole bunch of “whats” (and getting them wrong until we learn what’s really needed). And learning can be expensive at times. Failure shouldn’t be viewed as a bad thing, but as something that gives us an opportunity to learn. If nothing else, we learn what doesn’t work. As Thomas Edison said, “I have not failed. I’ve just found 10,000 ways that won’t work.” We have to be in an environment that supports that line of thought for the potential to improve through learning to work.
Sometimes the reason we fear being foolish is that the idea we have isn’t something we feel we should discuss. It may be something that might make us the object of ridicule. Maybe we’re afraid that should something go wrong, we’ll get blamed for the failure. Those sorts of situations remind me of the old Hans Christian Andersen tale of “The Emperor’s New Clothes.” When the vain emperor employed tailors, they decided to make clothes from a special fabric that was invisible to anyone unfit for their position or hopelessly stupid. The Emperor and his ministers all saw the clothes, because they feared that otherwise they would be seen as unfit or stupid. The child who blurted out that the emperor had no clothes was not only foolish, but hungry as well (since he wasn’t part of the ruling class, who had everything to lose). In our world, we have to find ways to be like that child if we expect to make Agile anything more than superficial self-congratulation that ultimately ends up draining the lake. We have to make sure that in our organizations we can stay hungry and foolish, and not fear that saying “our process has no clothes” will be met with anything other than the wonderment of what’s possible.
Want additional actionable tidbits that can help you improve your agile practices? Sign up for our weekly ‘Agile Eats’ email, with “bite-sized” tips and techniques from our coaches…they’re too good not to share.
The post Stay Hungry. Stay Foolish. And Don’t Be Afraid to Say “Our Process Has No Clothes!” appeared first on BigVisible Solutions.
Many people are familiar with process evaluations like The Nokia Test. There are also mash-ups of popular assessments, and I like The Borland Agile Assessment best, because it focuses on qualities (“We work in an environment of trust and respect”) rather than compliance (“Single Product Backlog”). Jeff Patton wrote an article, Performing a Simple Process Health […]
The post Evaluate Process Qualities, not Process Compliance appeared first on Constantly Changing.
Many times, in the middle of developing a user story, the programmer discovers a question about how it’s intended to work. Or the tester, looking at the functionality that’s been developed, questions whether it’s really supposed to work that way. I once worked with a team that too often found that, when the programmer picked up the card, there were questions that hadn’t been thought out. The team created a new column on the sprint board, “Needs Analysis,” to the left of “Ready for Development,” for these cards that had been planned without being well understood.
It was that problem that the Three Amigos meeting was invented to address. Rather than wait until a story was in the sprint to fully understand it, stories that were on-deck for the next sprint would be discussed by representatives of the business, the programmers, and the testers to make sure that all the questions were answered before considering the story ready for development. Sure, the occasional question still came up during development, but the epidemic had been stemmed.
Since that time, I’ve found better ways to determine if a story is ready for development. I look for the list of essential examples, or acceptance scenarios, that the completed story will satisfy. These provide a crispness to the understanding of the story that’s hard to achieve any other way.
There are fringe benefits to going to this level of detail. Planning discussions of the story don’t spend a lot of time understanding what the story means. These discussions don’t go round-and-round finding the boundaries of the story. If the scenario isn’t listed, it’s part of another story (or it’s been overlooked). In fact, dividing the scenarios into groups is a simple way to split a story into smaller ones.
Another benefit is that the scenarios can be automated as acceptance tests prior to the development of the functionality. Having a clear picture of the outcome before starting helps keep the development on track, minimizing premature speculation of future needs and maximizing attention to current edge cases that might otherwise be overlooked.
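As a sketch of what scenarios automated as acceptance tests can look like, here is a minimal example in plain JavaScript. The story, discount rule, and numbers are all hypothetical, and the style is bare assertions rather than any particular BDD tool:

```javascript
// Hypothetical story: "Orders over $100 get a 10% discount."
// The rule under test -- the behavior the scenarios pin down.
function orderTotal(subtotal) {
  return subtotal > 100 ? subtotal * 0.9 : subtotal;
}

// Each acceptance scenario from the refinement discussion becomes one
// executable check, including the boundary case that is easy to
// overlook when the story is only described in prose.
const scenarios = [
  { given: 50,  expect: 50 },   // ordinary order: no discount
  { given: 100, expect: 100 },  // boundary: exactly $100, no discount
  { given: 200, expect: 180 },  // over the threshold: discount applies
];

for (const s of scenarios) {
  console.assert(orderTotal(s.given) === s.expect,
    `subtotal ${s.given} should total ${s.expect}`);
}
```

Written down this way, dividing the scenario list into groups also splits the story naturally: each group becomes a smaller story with its ready-made tests.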
In a development process that uses sprints or timeboxes, you’ve got the whole sprint to get the next sprint’s worth of stories refined prior to planning. If you’re practicing a single-piece pull process, you’ve got the length of time a story spends in the development input queue to do so. Either way, refining the backlog is a necessary overhead activity that should be done a little at a time, all the time.
The goal is to have the work done just-in-time for planning and development. It should be complete enough to avoid stoppages to build more understanding, but not so far in advance that the details get stale. We want our scenarios to take advantage of the most knowledge we can bring to bear. If done too early, we may have to revisit the scenarios to see if we need to alter them according to what we’ve learned since we created them.
More often than creating too many acceptance scenarios too early, I find teams spending this effort too late. It seems like a lot of work to go into such detail when we know we’ve got a mountain of work to accomplish.
Developing software correctly is a detail-oriented business. We’re going to have to get to that detail sooner or later. Leaving it until the programmer has a question causes interruptions in development, delays, lost effort, and, sometimes, unwarranted assumptions that give us results we don’t want. Don’t look at the mountain of work. Look at the little bit of work we’ve decided to tackle next. Do a great job on that, and things will go more smoothly.
Derek Neighbors, Jade Meskill, Clayton Lengel-Zigich, and Jake Plains discuss:
- How to get more cross functional
- How to overcome the challenges of working with very different skill sets
- Asking not listening
Here’s a short animated video describing Spotify’s engineering culture (also posted on Spotify’s blog).
This is a journey in progress, not a journey completed, and there’s a lot of variation from squad to squad. So the stuff in the video isn’t all true for all squads all the time, but it appears to be mostly true for most squads most of the time :o)
Part 2 hasn’t been recorded yet. Stay tuned.
Last Friday we had the pleasure of having CS students Sofie Lindblom and Anton Arbring as guests at the monthly competence day at Omegapoint.
After this visit, Sofie has done me the honour of musing on the theme of our letters by writing an open letter, "Dear Senior, Letter to a Senior Programmer".
That post is so full of interesting topics that it would take a day just to briefly discuss them. But those topics are also way too important to leave uncommented. So, to do something, let us pick one important thing and discuss it. I pick the topic of "what to learn".
Too much information out there - as there always has been
"But there is too much information out there to know where to start. I am not stupid, I did very well in all programming courses and is a fast learner. But I feel exhausted by the amount of information available."

To start somewhere, let us start with the vast amount of information, technology, frameworks, etc. that are out there. Obviously there is no way to take in all of that. If we want to use metaphors, it does not suffice to say "drinking from the fire-hose"; it is rather trying to gulp the Nile.
The good part is that the situation is not new. Of course there is more information out there now compared to 15 years ago when I left university. But even then, the amount of information available was too much for any individual to comprehend. And the situation is even older. The proverb "so many books, so little time" is not fresh off the presses.
For those leaving university today, there will be truckloads of technology you will be using at work that you did not learn in class. But that situation is not new either. When I left university, I had not used a relational database in a single class or lab. Still, most (not all) of the systems I have worked with professionally have included SQL databases of some sort. Actually, one of my first jobs was to teach a class on the Java database API JDBC. How did I manage?
The obvious solution is to replace "know everything" with "able to comprehend". We cannot know everything beforehand, but we need to be able to understand any technology we come across with just a reasonable amount of work.
Killing a meme
There is a meme around in this information age that basically goes "you do not need to know, you need to be able to find information". I want to kill that meme in the context of system development.
The meme might very well be true: with Wikipedia and the rest of the web at our fingertips, we will be able to find data like "the first historically recorded solar eclipse" (5th of May 1375 BC in Ugarit). True as that is, it is worthless to us as software professionals. Because what we need is not data or information, but understanding.
Deep knowledge feeds deep knowledge
Now, this is only my own meandering experience, but I have found it invaluable to know a few things really well. Deep knowledge has interesting side effects. Suddenly you see some pattern apply to a new domain. It seems like no domain of knowledge is an island. Even if facts do not carry across borders, some structures of thinking and reasoning actually apply.
This is really vague, so let me throw some examples to clarify. When studying law I suddenly found that my studies of formal logic really helped me. I studied negotiation theory and found how it applies to finding a good architecture for a software system. I studied compiler technology and found it helpful when studying linguistics. Through my lifelong studies of math, I see wonderful aspects of beauty in the world every day. (OK, the last is a little bit off topic - but it makes my life richer, and that is worth something)
The strategy I try to apply myself is to study subjects in depth, to the level where I have to think hard about them. The specific knowledge might not be immediately applicable - I will probably not have any specific use for knowing, e.g., how to count the number of ways to paint a cube using several colours. However, thinking hard has probably etched new traces in my brain - and those ways of thinking will probably pop up as applicable in a new domain.
To fall back on metaphors again. As software developers we need to dig deep to understand a new technology. To get down to depth we are not very helped by having dug a meter deep over a large area. But if we have dug a few 20 m deep holes in other places, there is a good chance that we can dig a short tunnel at the 20 m level from the bottom of some other hole.
How did I survive that first job-gig teaching an API that I had never used before? Well, having studied functional programming in depth (using e.g. ML) had made me comfortable with the idea of abstract datatypes, so the idea of an API was not unfamiliar. Having studied linguistics, I was very familiar with formal grammars of languages, so SQL syntax was not strange. Having studied compiler technology, I could understand the semantics of SQL. Having studied algebra and set theory, I could easily pick up how SELECT and JOIN worked.
It took me a few days to read the JDBC API and specification, combined with some small hacks to validate that I had got it right. And after those few days I not only knew about JDBC, I actually understood it well enough to be able to teach it in a reasonable way. Not an expert, but reasonably competent.
Without the deep knowledge in some very obscure subjects (linguistics, set theory, compiler technology etc) I would have been utterly lost. No skill in "searching information on the web" would have helped me the least.
"The more I learn, the more I realize how little I know. It creates contradictory feelings towards the field I love. To twist it even further, the part I love the most is that you can never be fully learned and that there is never a 'right' answer."

I understand the frustration. But I am not sure I would like to have a field where there was a right answer, a proven best practice - many in our field dream of such.
However, to me a large portion of the beauty of the field is that we are constantly pacing uncharted terrain. The challenge is to constantly search your tool-box for something that seems applicable, to adapt, to improvise, to search, to try, to fail, to back up, to learn, to grow, to try again, to discuss, to exchange ideas, to finally nail it.
This is nothing but my own personal experience, but if I were to offer advice on handling the world of information we have around us, it would be the following:
Find things to learn that you find interesting and that challenge your intellect. Take the time, pain, and pleasure to learn a few of those things to depth. The deep thinking will etch your brain in ways that will help you enormously whenever you approach a new field. And enjoy the pleasure of deep understanding when it dawns on you.
PS Should you come across Sofie and Anton, take the time to have a discussion with them. And do not stop at a chat about everyday things - they have really interesting ideas to delve into.
I’ve been a huge fan of messaging systems and distributed application design through messages for a good number of years now. I’ve written several articles on this general area of development, and my MarionetteJS framework took a lot of influence from messaging based architectures. It’s been a part of how I think for a long time now. But until last year, I had never used RabbitMQ. What’s even worse is that until very recently (like, within the last 2 weeks), my use of RabbitMQ was mostly hopes and prayers, constantly wondering if my code was going to crash again.
But I’ve started to correct that mistake – the mistake of not properly learning RabbitMQ, and assuming that it worked in a manner similar to what I was used to. Turns out it doesn’t… big surprise… but it can be used in a manner that I’m more accustomed to, given the right libraries on top of it. So I wanted to offer a couple of quick lessons learned from my experience in trying to learn more about RabbitMQ and improve my use of it. I also have a few resources you may want to look at, and a recommendation for NodeJS libraries.

Lesson #1: One Connection Per Client Process. Many Channels Per Connection
This lesson alone was my biggest break-through in understanding RabbitMQ – and it all stems from how RabbitMQ manages connections to the server vs how you actually interact with the server.
In RabbitMQ, you have a connection and a channel. A connection is what it sounds like – it’s the connection between a RabbitMQ client and a RabbitMQ server. This connection travels over TCP/IP sockets or whatever wire protocol you’re using. The thing about connections that I didn’t understand was that they are very expensive to create and destroy. A single RabbitMQ connection is a single TCP/IP connection. You want to avoid having too many of these open. In fact, I would go so far as to say you want to limit your RabbitMQ connections to one per client process. That is, if you have ApplicationA, a single RabbitMQ connection should be opened by and maintained by that application instance.
If Connections are TCP/IP (and they are, really), then Channels are the next protocol layer on top of Connections. Think of it like this… when you get on the internet, you have an open connection to some server somewhere. You can then choose to use HTTP, FTP, WebSockets, XMPP and other instant messaging protocols, and more. These protocols on top of TCP/IP are the communication channels that your applications use while connected to the internet. A channel in RabbitMQ is similar. It’s the thing that your application uses to communicate with the RabbitMQ server.
Here’s the best part about channels, though: you can (and should!) have a lot of open channels on top of a single connection. You can create and destroy channels very quickly, and very cheaply. They allow you to have a single connection to the RabbitMQ server, but have sandboxed communication for various parts of your application. Channels are how your application communicates with the RabbitMQ server.
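The one-connection, many-channels rule can be sketched in NodeJS. The broker-wrapper shape below is my own, not an amqplib API; the `connect` function is injected so the sketch stays self-contained, but in a real application it would be amqplib's `connect`:

```javascript
// Sketch: one lazily-opened, shared connection per process, and a
// fresh, cheap channel per unit of work. `connect` is injected; with
// amqplib it would be require('amqplib').connect.
function createBroker(connect, url) {
  let connectionPromise = null;

  return {
    // Open ONE connection for the whole process and reuse it:
    // connections are expensive (each is a real TCP/IP connection).
    getConnection() {
      if (!connectionPromise) {
        connectionPromise = connect(url);
      }
      return connectionPromise;
    },

    // Channels are cheap to create and destroy, so take one per
    // logical task and close it when the work is done.
    async withChannel(work) {
      const conn = await this.getConnection();
      const channel = await conn.createChannel();
      try {
        return await work(channel);
      } finally {
        await channel.close();
      }
    },
  };
}
```

Every part of the application then calls `withChannel`; however many channels are in flight, there is still only one underlying TCP/IP connection.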
So, keep one connection per client process (instance) and many channels within that process (instance).

Lesson #2: Learn The Channel-Oriented Protocol / API Before Learning An Abstraction
One of the main reasons that I had a hard time learning RabbitMQ initially, and why my code was so terrible for so long, was my lack of understanding in how RabbitMQ actually works. When I started using it, I jumped right to a library that provided some abstractions on top of the channel-oriented nature of the protocol and I didn’t understand the abstractions. My lack of understanding the protocol itself was to blame. I couldn’t understand why the commands I was issuing were happening through the channel object all the time, instead of using Exchange and Queue objects like I expected.
It turns out the protocol itself is very channel-oriented. Understanding this opened my eyes as to why the library I was using was set up the way it is. I think there are possibilities for improving the API that we interact with on top of the channel-oriented API set… and I’ve found a library on top of it that I like.
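To make the channel-oriented shape concrete, here's a small sketch in the style of amqplib's channel API. The exchange and queue names are hypothetical, and the connection is passed in (rather than opened with amqplib's `connect`) so the sketch stays self-contained:

```javascript
// Notice that declaring the exchange, declaring the queue, binding,
// and publishing are ALL methods on the channel object -- there are
// no separate Exchange or Queue objects. That is the channel-oriented
// AMQP protocol showing through the API.
async function publishGreeting(conn, message) {
  const ch = await conn.createChannel();
  await ch.assertExchange('logs', 'fanout', { durable: false });
  const q = await ch.assertQueue('', { exclusive: true });
  await ch.bindQueue(q.queue, 'logs', '');
  const ok = ch.publish('logs', '', Buffer.from(message));
  await ch.close();
  return ok;
}
```

Once you see that the channel is the unit of interaction, an API like this stops looking strange and starts looking like a direct rendering of the protocol.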
The point is, before you jump off the deep end and put yourself in a bad situation, like I did, take the time to learn the AMQP protocol (which is what RabbitMQ runs) and the RabbitMQ extensions to it. Having this foundational knowledge will make it easier for you to see which library you will want to use, and to understand the options and API within that library. If you don’t learn the protocol, you’ll likely end up confused like I was.

Some Resources For Learning RabbitMQ
I found it incredibly easy to get started with RabbitMQ, but had a little more difficulty getting anything more than a “hello world” message going. It wasn’t until I started reading additional resources, other than what is listed at the RabbitMQ homepage, that I really started seeing how to build things and why. Here are some of the resources that I’ve been using:
- The RabbitMQ Docs - the official docs. Be sure to check out the tutorials, as well
- RabbitMQ In Action – the best book that I’ve found on the subject, and the book that finally taught me what I was doing wrong with the API / protocol. I HIGHLY recommend picking this up and reading the intro / first two chapters before you start coding
- Alex Robson’s Notes On RabbitMQ - I asked Alex a few questions a while back, and he posted this amazing gist of info. This is something I still go back to on a regular basis, to verify the direction that I’m heading against the things that Alex has said. It’s not something that can be consumed / understood in one sitting, by a n00b, though. Keep it around as reference material, like I do.
There are countless other tutorials and blog posts around, but few that have taught me as much. I find myself continuing to go back to these resources for the info I need.

My Choice Of NodeJS Libraries
When I first started trying to really learn RabbitMQ (after having used it for a while), I found myself with a dilemma: which of the many NodeJS libraries do I go with? There are three major players at this point:
Node.AMQP might seem great off-hand, but from what I’ve read it is odd in that it hides exchanges or channels or something like that. I’ve only read enough about it to know that I don’t want to use it. I haven’t actually tried it, but I doubt that I will.
BRAMQP is a very low level API on top of RabbitMQ. It positions itself as letting you do ANYTHING with AMQP, because it’s a very low level, driver-like API. But the problem is that you have to do everything yourself. This is a great choice if you’re building a solid abstraction on top of a mountain of RabbitMQ knowledge and experience. I’m not there yet, so I’m not using this one yet.
That leaves AMQPLib (AKA “amqp.node”) – my current choice of NodeJS library / driver. This is the channel-oriented API that I mentioned previously, and was confused about at first. Having learned the API and the way RabbitMQ works, though, I find it fairly easy to understand and work with.
But I wasn’t super happy with using a somewhat low level API library in my code directly. I wanted to build domain specific objects for my application to use, and I wanted them based on the Enterprise Integration Patterns that I cut my teeth on in the messaging world. So I started building my own wrappers to give me pub/sub, point-to-point and other semantics that I wanted. Well, it wasn’t long before I realized that the author of AMQPLib had already solved most of this for me, with his Rabbit.JS library.
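For a sense of what that buys you, here’s a minimal pub/sub sketch in the Rabbit.JS socket style, based on my reading of its docs. The topic name and broker URL are my own placeholders:

```javascript
// Sketch only: the 'events' topic and local broker URL are hypothetical.
function pubSubSketch() {
  const context = require('rabbit.js').createContext('amqp://localhost');

  context.on('ready', () => {
    // Sockets hide exchanges, queues, and bindings behind EIP-style
    // pub/sub semantics -- no channel plumbing in application code.
    const pub = context.socket('PUB');
    const sub = context.socket('SUB');

    sub.setEncoding('utf8');
    sub.connect('events', () => {
      sub.on('data', (note) => console.log(note));
    });

    pub.connect('events', () => {
      pub.write('something happened', 'utf8');
    });
  });
}
```

Compare this with the channel-level amqplib code: the exchange declaration, the exclusive queue, and the binding have all disappeared into the socket abstraction.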
Rabbit.JS provides some of the core Enterprise Integration Patterns in a library on top of AMQPLib. If you’re coming from an EIP background when trying to learn RabbitMQ, I still suggest starting with the core and fundamentals of RabbitMQ. But once you get the basics down and you understand what an exchange is and how it can be used properly, then you should look at Rabbit.JS. I’m finding it quite nice to work with and build my domain specific objects on top of.

Other Resources?
I’m quickly growing fond of RabbitMQ and my choice of libraries for working with it in NodeJS. I’ve found some good resources, as I’ve noted above, but I’m sure there are other resources around, and other opinions and advice, too. I’d love to hear what resources you would suggest for someone learning RabbitMQ – especially on NodeJS, but any language would be good, really. Drop a note in the comments below, and let me know.
The book: The 5 Elements of Effective Thinking by Burger & Starbird.
In the audio format I found it hard to visualize the 5 elements, perhaps because of the analogy to the classic elements of earth, fire, air, and water. So before any confusion sets in, here are the author's 5 elements:
- Grounding Your Thinking; Understand Deeply [Earth]
- Igniting Insights through Mistakes; Fail to Succeed [Fire]
- Creating Questions out of Thin Air; Be your own Socrates [Air]
- Seeing the Flow of Ideas; Look Back, Look forward [Water]
- Engaging Change; Transform Yourself [the Quintessential element]
Each chapter is illustrated with wonderful stories. An example: JFK's 1961 speech to Congress in which he challenged the US: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth." The response to this challenge was not to start putting people in rockets and sending them to the Moon. It was to take much simpler steps that built upon previous learnings. The authors' example is the Ranger program, in which NASA tried six times just to hit the Moon, failing each time before Ranger 7 finally crashed into the Moon on the seventh attempt. NASA learned from each attempt, solving bigger, more complex problems by iteration.
He has noticed that when projects scale-up, one of the first issues organizations must confront is how to manage a Product Backlog across multiple teams.
Some organizations work from one master backlog managed by a Chief Product Owner or a Product Owner Team. Multiple teams then pull stories from that backlog.
Other organizations have teams with individual product owners who create their own backlogs and release their own modules into a loosely coupled framework. Spotify has set up their entire organization to enable this. (They also carefully manage dependencies across teams.)
There is a whole spectrum of options between these two examples. The right answer for any company lies in their own context. If you're building something where all the modules are intimately integrated, a single, tightly managed, master backlog may work well. In a different environment, it might be faster for individual teams to continuously release improvements on their own module. There is coordination on the epic level, but Sprint-to-Sprint, their backlogs are independent from each other.
These models work for different Scrum implementations and we know there are even more ways of doing it. We would love to hear your story so we are extending an open invitation to the Agile community:
How do you manage your backlog across teams?
We want to learn how your context shapes your practice. Why do you do it that way? What kind of product are you building? How many teams do you have? And how is your method working for you?
Please post your answers in the comment section or on Jeff's Facebook page, or on Twitter if you are that concise (@jeffsutherland #ScalingScrum).
As the conversation winds down, we'll write a blog and compile the most interesting and effective techniques so we can learn from each other.
In the coming months, look forward to a Scrum Inc. online course in which Alex and Jeff present a framework for scaling Scrum. They will also share this framework at Agile 2014 in Orlando.
For many teams, relying on emails and verbal communications to manage work handoffs alone results in misunderstandings and oversights. If this sounds familiar to you, see how CoreCommerce has streamlined the way they work using LeanKit to visualize what needs to get done. We had the opportunity to chat with Matt DeLong, CEO of ecommerce […]
Wired magazine has a nice little summary of personal kanban. Check it out!
The Certified Scrum Master (CSM) class I’ve usually offered is an interactive cartoon e-learning series (Scrum Training Series completed before attendance) + two days of team lab activities. It gets great reviews, such as this one from my last class:
Attended a Scrum Master class with Michael James as the teacher, and it was amazing. He was extremely knowledgeable, professional, and fun to get along with. I’d highly recommend anyone to take one or more of his classes.
While I was writing this article, a participant from the Washington DC area posted this on my LinkedIn profile:
I left the class so well-prepared for the certification exam — and so much more. I felt ready for the real world, as Michael, after ensuring that we were ready for the exam, maximized our practical learning through hands-on team activities, plus explanations of key Scrum and Agile concepts illustrated from his own professional experiences. I cannot imagine a better learning experience.
But many participants wish we had additional time to dig deeper into the implications of Agile for organizations with more than seven people. Large organizations are where Scrum gets screwed up the most. The principles are exactly the same, but the layers of self-deception and muscle memory in big companies stump even expert consultants.
So we’ve decided to offer a 3-day CSM class. (It’s actually 3.1 days if you count the cartoons and quizzes everyone does before the class.) Most of the additional time will focus on examples and case studies. As always, we’ll use fun interactive techniques that aid retention. I use activities rather than long lectures because we’ve found people don’t remember lectures that go on longer than 5-10 minutes. Years ago when I switched to activity-based learning, a university professor who attended my class in Europe wrote:
a fluid uninterrupted learning experience…. interesting, high value training.
The 3-day CSM class will appeal to you if:
- You are the type of person who has a natural curiosity for learning
- You push yourself to be the best in whatever you do
- You enjoy problem solving and are comfortable with ambiguity as you explore the best options for a complex situation
- You appreciate theory but learn best by doing
- You are a job seeker wanting to be more knowledgeable about Agile during job interviews than an ordinary CSM
- You are a consultant who wants greater confidence in guiding organizations
- You are a business leader seeking to avoid common mistakes in implementing Scrum
The main topics, covered primarily by team activities and examples:
- How Agile development differs from traditional project management.
- The three Scrum roles: responsibilities and boundaries, in depth.
- How to write well-formed Product Backlog Items such as user stories.
- Techniques for splitting large requirements (e.g. epics) into small specific ones.
- Product Backlog prioritization.
- Effort estimation.
- Maintaining the Sprint Backlog.
- Five Scrum meetings (how to, how not to).
- Sprint execution for self-organizing teams.
- Definition of done and the potentially-shippable product increment.
- Environments that encourage or impede team self-organization.
- Small group dynamics (the psychology of innovative teams).
- Modern Agile engineering practices including test-driven development (TDD).
- Lean principles derived from the Toyota Production System.
- Product Owner planning and forecasting beyond one Sprint.
- Case studies of Scrum in large organizations.
- Case studies of Scrum for large scale development.
- Case studies of common organizational impediments.
- Case studies of successful and unsuccessful attempts to introduce Scrum/Agile to organizations.
The class contains individual and group knowledge tests that precede the ScrumAlliance’s online test.
Better-prepared groups are able to spend more time on the advanced topics. For this reason, I will need you and your colleagues to complete the Scrum Training Series before class, and to work with me beforehand on any areas of confusion. During the class you’ll be on a fast-moving team applying what you learned before the class. The Scrum Training Series is also highly regarded:
This series is FANTASTIC! It was entertaining, engaging and informative. I’m very new to SCRUM and this material was presented in a very logical, easy to understand manner. It’s such a logical framework and approach! In fact, I sent the link to all the PMs in my company (we’re implementing this approach to SDLC).
The post Why Take A 3-Day Certified Scrum Master (CSM) Class? appeared first on blogs.collab.net.