Derick Bailey - new ThoughtStream

Secrets of a developer/entrepreneur

How To Learn ES vNext Without Jeopardizing Existing Work

Mon, 04/24/2017 - 15:00

Around a year ago, I wrote a blog post lamenting the high cost of entry for using ES6 features like generators, modules, etc. Since then, the world of JavaScript runtime environments has progressed significantly.

Most (if not all) of the features that were once difficult to use without pre-compilers are now available to the general population with updated browsers, and to back-end developers with new versions of Node.js. The state of JavaScript has improved significantly!

But the problem of new features hasn’t gone away. It’s only moved.

Now instead of wondering how I can work with generators, I’m looking for opportunities to work with async functions and other ES2017 (and beyond) features.

There’s an underlying problem that we have, as JavaScript developers, when it comes to new language versions and features. Most languages require us to install the specific language version in whatever environment our application runs.

But JavaScript is different – at least on the front end of things.

Instead of developers and production engineers installing the latest JavaScript features, we’re waiting for web browsers to catch up to the features. Not only that, but we’re waiting for the general population of people that use our web sites to update their browsers so we can safely use those features.

Sure, we can install new versions of Node.js on our server and run new code in the back-end. But even that can be dangerous.

I mean, when was the last time you heard about a great new language feature or syntax change in JavaScript, and thought to yourself,

“That’s great! I’ll just install the latest, unstable Node.js release, update my Babel.js version and plugins, and download an experimental browser version that might support this syntax if I use command-line flags when starting it!”

If you’re like me and millions of other developers, this isn’t even a remote possibility. It’s just not going to happen.

Why?

Because you have existing projects that need tried-and-true, stable, well-tested and supported versions of all these things. And the risk of installing new, unstable and experimental versions of Node, Babel or any other runtime, and having it break your actual work is far too great.

It’s enough to make a developer want to forget about learning new JavaScript features… to just wait until they become “old” features. Which is unfortunate – because when I see the future of JavaScript, I see code that I desperately want to write, now.

Fortunately, there is a solution to this problem. It is possible to learn new JavaScript features – to take advantage of async functions and other improvements that can greatly reduce the amount of code you have to write. And it can be done safely.

On May 2nd, I’ll be presenting a live WatchMeCode session all about this problem and solution.

The ES7 and Beyond with Docker webinar will give you everything you need to learn the latest JavaScript features without once putting your current projects in danger from new runtime libraries, or other software updates.


And don’t worry if you’ve never used Docker – with the solution that I have, you won’t need any prior Docker experience to take advantage of the latest JavaScript features.

Learn more about this session and register today, at WatchMeCode!

The post How To Learn ES vNext Without Jeopardizing Existing Work appeared first on DerickBailey.com.


With ES7 And Beyond, Do You Need To Know ES6 Generators?

Wed, 04/19/2017 - 13:00

A few years ago, JavaScript introduced the idea of generators to the language. It’s a tool that was absolutely needed in the JavaScript world, and I was very happy to see it added, even if I didn’t like the name at the time.

But now, after a few years of seeing generators in the wild and using them in my code, it’s time to answer the big question.

Do you need to learn generators?

Before I get to the big secret … two secrets, really … it’s important to understand what generators are, which informs why they are so important.

What Is A Generator?

The .NET world has called them “iterators” for 12 years. But I think JavaScript was following Python’s lead with “generators”. You could also call them “coroutines”, which may be the fundamental concept on which generators and iterators are built.

Ok, enough computer science lessons… what is a generator, really?

A generator will halt execution of a function for an indefinite period of time when you call “yield” from inside a function.

The code that invoked the generator can then control exactly when the generator resumes… if at all.

And since you can return a value every time execution halts (you “yield” a value from the function), you can effectively have multiple return values from a single generator.

That’s the real magic of a generator – halting function execution and yielding values – and this is why generators are so incredibly important, too.

But a generator isn’t just one thing.

There’s 2 Parts To A Generator

There’s the generator function, and the generator iterator (I’ll just call that an iterator from here on out).

A generator function is defined with an * (asterisk) near the function keyword or name. This function is responsible for “yield”-ing control of its execution – with a yielded value if needed – to the iterator.

An iterator is created by invoking the generator function.

Once you have an iterator, you can … you guessed it, iterate over the values that the generator function yields.

The result of an iterator’s “.next()” call has multiple properties to tell you whether or not the iterator is completed, provide the value that was yielded, etc.

But the real power in this is that when you call “it.next()” again, the function will continue executing from where it left off, pausing at the next yield statement.

This means you can execute a method partway, pause it by yielding control to the code that made the function call, and then later decide if you want to continue executing the generator or not.
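Here’s a minimal sketch of both parts, with made-up values:

```js
// the generator function, marked with an *
function* findEmployees() {
  yield "Derick";
  yield "Elton";
}

// calling the generator function returns an iterator... nothing has run yet
const it = findEmployees();

console.log(it.next()); // { value: "Derick", done: false } -- runs to the first yield, then pauses
console.log(it.next()); // { value: "Elton", done: false }  -- resumes, pauses at the next yield
console.log(it.next()); // { value: undefined, done: true } -- the generator function has finished
```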

For more detail on this, I’ll list some great resources for what generators can do, below.

Right now, though, you should know about the real value and power of generators: how they make async functions a possibility.

Secret #1: Async Functions Use Generators

The truth is, async functions wouldn’t exist without generators. They are built on top of the same core functionality in the JavaScript runtime engine and internally, may even use generators directly (I’m not 100% sure of that, but I wouldn’t be surprised).

In fact, with generators and promises alone, you can create async/await functionality on your own.

It’s true! I’ve written the code, myself. I’ve seen others write it. And there’s a very popular library called co (as in coroutines) that will do it for you.
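Here’s a sketch of what that looks like with co, where getUser and getPermissions stand in for any promise-returning functions you might be calling:

```js
const co = require("co");

co(function* () {
  // yield a promise, and co hands back the resolved value
  const user = yield getUser("derick");
  const permissions = yield getPermissions(user);
  return permissions;
}).then((permissions) => {
  console.log("loaded permissions", permissions);
}).catch((err) => {
  console.error(err);
});
```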

Compare that to the same async function calls using the syntax you saw yesterday.

With only a few minor syntax changes, co provides the same level of abstraction for calling asynchronous methods in a way that looks synchronous. It even uses promises under the hood to make this work, like async functions.

Clearly, the ingenuity behind co influenced the final specification of async/await.

Secret #2: You Don’t Need To Learn Generators

Async functions are built on the same underlying technology as generators. They encapsulate what co provided on top of generators, as formal syntax instead of a 3rd party library.

But do you need to learn generators?

No.

Make no mistake. You absolutely need generators.

Without them, async functions wouldn’t work. But you do not need to learn how to use them, directly.

They’re complex, compared to the way you’ve been working. They aren’t just a new way to write iteration and asynchronous code; they represent a fundamental shift in how code is executed, and the API to manage that is not how a developer building line-of-business applications wants to think.

Sure, there are some use cases where generators can do really cool things. I’ll show you those in the resources below. But your code will not suffer one iota if you don’t learn how to use generators, directly.

Let the library and framework developers deal with generators to optimize the way things work. You can just sit back and focus on the awesomeness that is async functions, and forget about generators.

One Use Case For Generators

Ok, there is one case where I would say you do need generators.

If you want to support semi-modern browsers or Node.js v4 / v6 with async/await functionality…

And if you can’t guarantee the absolute latest versions of these will be used… Node v7.6+, MS Edge 15, Chrome 57, etc…

Then generators are your go-to option with the co library, to create the async function syntax you want.

Other than that, you’re not missing much if you decide to not learn generators.

So I say skip it.

Just wait for async/await to be readily available and spend your time learning promises instead (you absolutely need promises to effectively work with async functions).

Generator Resources

In spite of my advice that you don’t need to learn generators, there are some fun things you can do with them if you can completely wrap your head around them.

It took me a long time to do that.

But here are the resources I used to get my head into a place where generators made sense.

And some of my own blog posts on fun things I’ve done with generators.

The post With ES7 And Beyond, Do You Need To Know ES6 Generators? appeared first on DerickBailey.com.


You Need ES2017’s Async Functions. Here’s Why …

Tue, 04/18/2017 - 13:00

If you’ve ever written code like this, you know the pain that is asynchronous workflow in JavaScript.
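Something along these lines, where createUser, assignManager and sendWelcomeEmail stand in for whatever async calls you happen to be making:

```js
function createEmployee(employeeData, done) {
  createUser(employeeData, (err, user) => {
    if (err) { return done(err); }

    assignManager(user, (err, manager) => {
      if (err) { return done(err); }

      sendWelcomeEmail(user, manager, (err) => {
        if (err) { return done(err); }

        done(null, user);
      });
    });
  });
}
```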

Nested function after nested function. Multiple redundant (but probably necessary) checks for errors.

It’s enough to make you want to quit JavaScript… and this is a simple example!

Now imagine how great it would be if your code could look like this.
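Something like this sketch, with the same stand-in functions as above:

```js
function createEmployee(employeeData, done) {
  const user = createUser(employeeData);
  const manager = assignManager(user);
  sendWelcomeEmail(user, manager);

  done(null, user);
}
```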

Soooo much easier to read… as if the code were entirely synchronous! I’ll take that any day, over the first example.

Using Async Functions

With async functions, that second code sample is incredibly close to what you can do. It only takes a few additional keywords to mark these function calls as async, and you’re golden.
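Here’s that same sketch with the new keywords added, assuming the stand-in functions now return promises:

```js
async function createEmployee(employeeData, done) {
  const user = await createUser(employeeData);
  const manager = await assignManager(user);
  await sendWelcomeEmail(user, manager);

  done(null, user);
}
```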

Did you notice the difference, here?

With the addition of “async” to the outer function definition, you can now use the “await” keyword to call your other async functions.

By doing this, the JavaScript runtime will now invoke the async functions in a manner that allows you to wait for a response without using a callback. The code is still asynchronous where it needs to be, and synchronous where it can be.

This code does the same thing and has the same behavior from a functionality perspective. But visually, it is significantly easier to read and understand.

The question now, is how do you create the async functions that save so much extra code and cruft, allowing you to write such simple workflow?

Writing Async Functions

If you’ve ever used a JavaScript Promise, then you already know how to create an async function.

Look at how the “createEmployee” function might be written, for example.
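Here’s a sketch of that shape, where saveEmployeeRecord stands in for whatever does the real work:

```js
const createEmployee = async function(employeeData) {
  return new Promise((resolve, reject) => {
    // do the work to create the employee
    saveEmployeeRecord(employeeData, (err, employee) => {
      if (err) { return reject(err); }

      // check for some level of success
      if (!employee || !employee.id) {
        return reject(new Error("employee was not created"));
      }

      // resolve the promise with the employee object
      resolve(employee);
    });
  });
};
```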

This code immediately creates and returns a promise. Once the work to create the employee is done, it then checks for some level of success and resolves the promise with the employee object. If there was a problem, it rejects the promise.

The only difference between this function and any other function where you might have returned a promise is the use of the “async” keyword in the function definition.

But it’s this one keyword that solves the nested async problem that JavaScript has suffered with, forever.

Async With Flexibility

Beyond the simplicity of reading and understanding this code, there is one more giant benefit that needs to be stated.

With the use of promises in the async functions, you have options for how you handle them. You are not required to “await” the result. You can still use the promise that is returned.
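For example, a caller can consume the same createEmployee function with plain promise handling:

```js
createEmployee(employeeData)
  .then((employee) => {
    console.log("created employee", employee.id);
  })
  .catch((err) => {
    console.error("something went wrong", err);
  });
```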

This code is just as valid as the previous code.

Yes, this code still calls the same async createEmployee function. But we’re able to take advantage of the promises that are returned when we want to.

And if you look back at the 3rd code sample above, you might remember that I was calling async functions but ultimately using a callback to return the result. Yet again, we see more flexibility.

Reevaluating My Stance On Promises

In the past, I’ve made some pretty strong statements about how I never look to promises as my first level of async code. I’m seriously reconsidering my position.

If the use of promises allows me to write such easily readable code, then I’m in.

Of course, now the challenge is getting support for this in the majority of browsers, as I’m not about to drop a ton of terrible pre-compiler hacks and 3rd party libraries into a browser to make this work for the web.

Node.js on the other hand? Well, it’s only a matter of time before v8.0 is stable for release.

For now, though, I’ll play with v7.6+ in a Docker container and get myself prepared for the new gold standard in asynchronous JavaScript.
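A throwaway playground like that is a one-liner; the exact image tag below is just an example:

```sh
# Node 7.6+ supports async/await without any flags; the container disappears when you exit
docker run --rm -it node:7.6-alpine
```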

The post You Need ES2017’s Async Functions. Here’s Why … appeared first on DerickBailey.com.


What’s The Easiest Way To Get Docker Into Production?

Thu, 04/13/2017 - 15:00


… and once it’s in production, how do I manage, monitor and scale when I need to?

These are questions I’ve often been asked, and have asked myself several times. They are not questions for which I’ve had a good answer, though.

Sure, you can find all kinds of articles about container orchestration, management and auto-magically doing all kinds of awesome stuff at a massive scale. There are plenty of big talks from huge companies and well-known speakers on this. But that doesn’t help a developer on a small team, or someone working on their own. It doesn’t provide a simple answer for learning and growth, either.

And all these talks on the massive-scale auto-magic solutions never once helped me when I was looking at putting a single Docker image into a production environment.

So, how do you go from development to production?

And how do you grow your production environment as needed? When do you know you’re ready to use the next set of tools? And what are the next set of tools?

On March 14th, 2017, I interviewed Elton Stoneman from the Docker Developer Relations team, and we talked about exactly that – how to get Docker into production, after you’ve created your development image.

At a high level, this interview walks through what it takes to go from development to the most basic of production configurations. From there, it moves into a discussion of a more stable environment with Docker Swarm, deployment of multiple containers using a docker-compose.yml file, and then out to large-scale production and enterprise needs with Docker Datacenter.

Along the way, you’ll hear more about what “orchestration” is, get a clear understanding of how a Docker image can be monitored to ensure the application is responding correctly, and learn how to quickly and easily scale an application.

If you’ve ever wondered where to look next, after you’ve created your own Docker image and built an application into it, you’ll want to check out this interview.

Learn how to go from development to production with Elton Stoneman.

P.S. Be sure to read to the bottom of that page to find out how you can watch the interview for free!

The post What’s The Easiest Way To Get Docker Into Production? appeared first on DerickBailey.com.


Are You Struggling To Learn ES6/ES7 Features Without Breaking Your Current Projects?

Mon, 04/10/2017 - 23:13

Sometimes it seems like it’s impossible to learn the new stuff without breaking your existing work… installing new versions of Node.js, updating Babel.js plugins, enabling experimental features with command-line flags?

Nope.

It’s far too easy to break things in your current project.

But I’ve been working on something that might help make this easier. Before I announce it, though, I need your help.

I’d like to ask you a question about what you’re struggling with, in learning new and upcoming features of ES6, ES7 and beyond.

Just enter your email address in this box, below, and I’ll send you a quick, 1-question survey.

The post Are You Struggling To Learn ES6/ES7 Features Without Breaking Your Current Projects? appeared first on DerickBailey.com.


Docker for Developers – An Interview on JavaScript Jabber

Fri, 04/07/2017 - 13:30

On March 28th, 2017, I made an appearance on the JS Jabber podcast with a great panel of software developers, talking about Docker for software developers and JavaScript.


In addition to the basics of “what is Docker?”, we talk about why a developer would want to use it, address a lot of the misconceptions and misunderstandings around the tooling and technologies, and cover more, including:

  • What’s the ultimate benefit that Docker provides?
  • Isn’t it a DevOps tool?
  • Why bother learning it, as a JavaScript developer? 
  • How does it compare to virtual machines?
  • Are you coding directly in the container, or ?

From the show notes:

As a JavaScript developer, learning Docker is going to have the same pay-off with other kinds of developers. There are times when one works well for one machine, but not on another. You then ask yourself why things are going that way when you are sure enough that you have tested it already.

The reasons that you come up with boil down to a few basic categories. It’s either because of a different operating system, configuration bits for the software itself, or libraries and runtimes that need to be installed and configured. These cause machine issues, which are solved by Docker.

Check out episode 255 of JS Jabber and learn more about Docker for JavaScript developers!

The post Docker for Developers – An Interview on JavaScript Jabber appeared first on DerickBailey.com.


What I Learned By Deleting All Of My Docker Images And Containers

Wed, 04/05/2017 - 17:49

A few days ago I deleted all of my Docker containers, images and data volumes on my development laptop… wiped clean off my hard drive.

By accident.

And yes, I panicked!


But after a moment, the panic stopped, gone instantly once I realized that when it comes to Docker and containers, I’d been doing it wrong.

Wait, You Deleted Them … Accidentally?!

If you build a lot of images and containers, like I do, you’re likely going to end up with a very large list of them on your machine.

Go ahead and open a terminal / console window and run these two commands:
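Those are the standard listing commands:

```sh
docker ps -a    # list all containers, running or stopped
docker images   # list all images, tagged or not
```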

Chances are, you have at least half a dozen containers with random names, and more than a few dozen images, many of them with no tag info to tell you what they are. It’s a side effect of using Docker for development efforts, rebuilding images and rerunning new container instances on a regular basis.

No, it’s not a bug. It’s by design, and I understand the intent (another discussion for another time).

But, the average Docker developer knows that most of these old containers and images can be deleted safely. A good Docker developer will clean them out on a regular basis. And great Docker developers… well, they’re the ones that automate cleaning out all the old cruft to keep their machine running nice and smooth, without taking up the entire hard drive with Docker-related artifacts.

Then, there’s me.

DANGER, WILL ROBINSON

For whatever reason, I realized it had been a while since I had cleaned out my Docker artifacts. So I did what I always do: hit google and the magic answers of the internet for all my shell scripting needs.

My first priority was to remove all untagged images. A quick search and click later, I had a script that looked familiar pasted into my terminal window and I was hitting the enter button gleefully.

It wasn’t until a moment later – when I ran “docker images” again, and saw that I still had a dozen untagged images – that I figured out something was wrong.

Looking back at the page from which I copied the script, I saw the commands sitting under a heading that I had previously ignored. It read,

“Remove all stopped containers.”
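That command is typically some variation of this one-liner:

```sh
# removes all stopped containers (Docker refuses to remove running ones)
docker rm $(docker ps -a -q)
```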

Well, good news! All of my containers were already stopped, so guess what happened?

The panic hit hard as I quickly re-ran “docker ps -a” to find an empty list.


The Epiphany And The Evanescent Panic

As fast as my panic had set in, it left. Only a mild annoyance with myself for making such a simple mistake remained. And the only reason I had even that mild annoyance was knowing that I would have to recreate the container instances I needed.

That only takes a moment, though, so it’s not a big deal.

In the end, the panic was gone due to my realization of something that I’ve read and said dozens of times.

From the documentation on Dockerfile best practices:

Containers should be ephemeral

The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.

I’ve used the word ephemeral, when talking about Docker containers, at least a dozen times in the last month.

But it wasn’t until this accidental moment of panic that I realized just how true it should be, and how wrong I was in my use of containers.

The Not-So Nuclear Option

The problem I had was the way in which I was using and thinking about containers, and this stemmed from how I viewed the data and configuration stored in them.

Basically, I was using my containers as if they were full-fledged installations on my machine or in a virtual machine. I was stopping and starting the same container over and over to ensure I never lost my data or configuration.

Sure, some of these containers used host-mounted volumes to read and write data to specific folders on my machine. For the most part, however, I assumed I would never lose the data in my containers because I would never delete them.

Well, that clearly wasn’t the case anymore…

I see now that what I once told a friend was “the nuclear option” of deleting all stopped containers, is really more like a dry-erase marker.

I’m just cleaning the board so I can use it again.

A Defining Moment

My experience, moment of panic and realization generated this post on twitter:

idea: if deleting all of your #Docker containers would cause you serious headache and hours of work to rebuild, you’re doing Docker wrong

— Derick Bailey (@derickbailey)

March 24, 2017


And honestly, on reflection, this was a very defining experience.

Reading and talking about how a Docker container is something that I can tear down, stand up again and continue from where I left off is one thing.

But having gone through this, I can see it directly applied to my own efforts, now.

Now the only minor annoyance that I have is rebuilding the container instances when I need them. The data and configuration are all easily re-created with scripts that I already have for my applications. At this point, I’m not even worried anymore.

That’s how Docker should be done.

The post What I Learned By Deleting All Of My Docker Images And Containers appeared first on DerickBailey.com.


Tired Of Waiting For ‘npm install’ To Finish Every Time You Touch Docker?

Mon, 03/20/2017 - 13:30

In February, I launched the first of my WatchMeCode: Live! sessions on Docker. This is a series where I do a live webinar-style session of talking about code, providing commentary and getting live Q&A from the audience at the end.

For March 2017, I’m preparing another session on the ever-so-frustrating npm delay in Docker.

What is the npm delay?

This tweet from Sergio Rodrigo sums it up best:

Now throw Docker into the mix with “RUN npm install” in your Dockerfile and a host-mounted volume to edit code, and things get really ugly, fast.

What was once a 5 minute install is now more than 10 minutes. And worse, it seems every time you touch anything in Docker or your project, you incur yet another round of “npm install”.

Fortunately, there is a solution.

I’ve recently been using some tools – built into Docker – to cut this constant time sink from my Docker projects.

Instead of having to deal with the npm delay by playing video games, watching Netflix or generally slacking in my work, I’m only running “npm install” when I actually need a new dependency. I no longer have Docker running it for every single build of my Docker image, or when I touch anything in my project.

The best news, though, is that these are simple tools and techniques and they have a huge impact. And I want to show you how to use them in the WatchMeCode: Live! session on March 27th.

Join me for this event and I’ll show you how to eliminate the npm delay in your Docker project.

 I look forward to seeing you at this live session!

   – Derick

The post Tired Of Waiting For ‘npm install’ To Finish Every Time You Touch Docker? appeared first on DerickBailey.com.


Fixing npm’s Wall of Red Text in Docker

Thu, 03/16/2017 - 16:27

Docker + Node.js is a beautiful combination. But among all of the advantages of using these tools together, there are some mild frustrations… things that just don’t quite fit nicely.

Take the npm Wall of Red Text, for example.

It seems every time I run ‘npm install’ inside of my docker container, I nearly have a panic attack thinking my build is failing!


It’s nerve-racking to say the least… is my build failing? Did it fail? Or is this just the npm Wall of Red Text again?

9 out of 10 times, it ends up “npm info ok”, of course, but that doesn’t make me feel any better about the Wall…


Fortunately, we can work around this anxiety-inducing Wall of Red Text with a few npm configuration options. The options you set will depend on what you do and don’t want to see.

Too Much “info”

The first thing I want to get rid of is the wall of text. Honestly, I just don’t care about seeing all of the information supplied. 

The default npm log level for a Docker image is “info” instead of “warn” like it would be on your machine. To fix that, adjust one of two options in your Dockerfile:
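A sketch of both options in a development Dockerfile, where the base image and paths are just placeholders:

```dockerfile
FROM node:6.10-alpine

# option 1: an environment variable, applied to every npm command in the image
ENV NPM_CONFIG_LOGLEVEL warn

WORKDIR /app
COPY package.json .

# option 2: a command-line parameter, applied to this one install only
RUN npm install --loglevel warn
```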

In this Dockerfile example, I have set both the “NPM_CONFIG_LOGLEVEL” environment variable and the “--loglevel” command-line parameter to “warn”.

You don’t need both of these options in the Dockerfile, though. Pick whichever one works best for you in a given situation.

If you find yourself needing to run npm install multiple times, for example, and you want to ensure npm is always set to “warn”, then I would set the environment variable. This is a great option for a development Dockerfile when you know you’ll be shelled into a container to work with npm. Just be sure to set the ENV instruction before you run npm install, or it won’t have any effect on your Docker build process.

If you only want one specific instance of npm install to have a given log level, however, you can adjust that one call with the “--loglevel” parameter, as I’ve shown.

The end result is a much more manageable output, with only warning and error messages shown… basically, duplicating what you would see on your computer.


But not everyone wants to reduce the output of their build. Maybe you just want to get rid of the red text.

Too Much “color”

If you prefer to see the wall of text, but don’t want to see red all the time, you can adjust the “color” configuration setting.

Again, you have two options. Set the “NPM_CONFIG_COLOR” environment variable, or set the “--color” command-line parameter.
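As a sketch, here’s what each option looks like in a Dockerfile (again, you only need one of them):

```dockerfile
# option 1: an environment variable, applied to every npm command in the image
ENV NPM_CONFIG_COLOR false

# option 2: a command-line parameter, applied to this one install only
RUN npm install --color=false
```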

The use of these options is basically the same as the previous example. Decide when you want to change the color setting and use the option that is appropriate for your situation.

With the “color” option set to false, as shown, npm will not use any color coding for the log output.


Unfortunately, this means your warning and error messages won’t be in color, either. But if you don’t mind that limitation and you do want to see the wall of text, this is a good option.

Can I Get A Red Warning and White Wall?

It’s likely that there is some incompatibility between the Docker log output and npm, which is causing the npm Wall of Red Text to begin with. And whatever is causing this is preventing me from getting the output that I really want: white info text and red warning text.

Personally, I prefer to set the loglevel to “warn” so this isn’t a huge problem for me. There are times, however, when I want to see the “info” level output and I wish there were a way to get “info” in white and “warn” in red, when using npm with Docker.

For now, though, you’ll have to work with the combination of “loglevel” and “color” in your Docker setup with npm.

That’s Great, But How Do I Not Run “npm install” So Often?

Having color and log level control with npm is great. But that doesn’t solve the larger problem of how often “npm install” runs when working with Docker.

It seems that no matter what you do, every time you even think about touching your Docker project, you end up running “npm install” again. It can be a slow, life-draining experience when it happens multiple times per hour.


Fortunately, there are solutions for this as well. With a small adjustment to your Dockerfile and a clever use of Docker volumes, you can create a node modules cache that will easily cut 80% of the “npm install” from your Docker build.

And I want to show you how to do exactly that – to eliminate the npm install delay from your Docker project almost entirely.

Learn More About Caching Node.js Modules in Docker


The post Fixing npm’s Wall of Red Text in Docker appeared first on DerickBailey.com.


Are Your Development Tools Making You More Efficient?

Mon, 03/13/2017 - 19:42


Several years ago, I found myself sitting in a classroom on a Saturday morning. 

It was an exciting day for me, attending my first code camp. I was surrounded by other developers with a shared enthusiasm for what we do, and had already seen several outstanding presentations.

The subject for the classroom in which I now sat was Resharper – a plugin for Visual Studio that added an incredible amount of power and flexibility for editing and restructuring code. I knew the basics, already, having installed it on my computer at work. But it was not something I felt I knew how to use efficiently.

Sure, I could click the squiggly red lines and use the pop-up menu that came with Resharper. But I knew there was more to it than this. I knew there had to be a reason, beyond these simple, surface-level features, that made so many developers so excited about it.

That’s what made this Saturday morning session so important – knowing that I was about to see the real power of this tool.

With fluidity, the presenter demonstrated feature after feature. I was enthralled by the ease at which he edited and restructured code. More than anything else, though, I marveled at how he used Resharper’s features without once touching a mouse or trackpad. There were no arrow keys used to move around the menu system, either. It was all keyboard shortcuts and commands.

Furiously, I scribbled notes. I knew this was just the thing I needed, and I wanted to copy every technique shown. I could see in my mind, just how easy life as a developer was about to become!

Fast forward to Monday morning at the office. Opening my project in Visual Studio, I started editing code, excited about the opportunity to use Resharper with my new-found efficiency.

A moment later, I saw my opportunity. Checking my cheat sheet, I pressed a few buttons on my keyboard, and …

Wait. That wasn’t the feature I wanted.

Quickly, I checked my notes and tried another key combination. Again, it wasn’t the feature I wanted. I tried again. And again. And a few keyboard clicks later, I hit a brick wall.

Every last ounce of enthusiasm and excitement I had was draining – fast – to be replaced with a sinking feeling, like I was struggling in quicksand. Reluctantly, I reached for my mouse and clicked the drop down menu to find the option I was looking for.

It wasn’t enough to have the tool or to know the features existed. 

There was a level of efficiency that eluded me, still. One that I wanted. One that I had seen the previous Saturday. But one that I could not seem to achieve.

And so it is with many of today’s tools for modern software development. Simply having the tools available – and even the ability to use them – doesn’t create efficiency.

Take Docker as a more modern example.

Once you learn the basics, it can offer a tremendous amount of value in development, testing, deployment and beyond. But the value it offers doesn’t imply anything about efficiency of use. 

It only takes a few instructions in a Dockerfile to create a working Node.js application, for instance. 

But with this Dockerfile – pulled directly from a recent project I built – efficiency is not a word that comes to mind.
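The exact file isn’t important; what matters is its shape, which looked roughly like this simplified sketch:

```dockerfile
FROM node:6.10

WORKDIR /app

# copying the whole project before installing means any code change
# invalidates Docker's build cache from this point on...
COPY . .

# ...so every rebuild pays for another full "npm install"
RUN npm install

CMD ["npm", "start"]
```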

The problem is that efficiency can only be measured in light of a goal. So what’s the goal here? To write a Dockerfile in as few instructions as possible? I would think not. 

Rather, the goal should be to write a Dockerfile that runs the application as expected, and is able to be built and re-built as quickly as possible. 

If you were to build a Dockerfile like the one shown, your project would run quite well. However, every time you need to rebuild the Docker image, you will incur another full round of “npm install”.

This doesn’t sound too bad until you realize that every line of code change and every tweak to the environment configuration requires a rebuild and another round of “npm install”.

You can start to work around this with a host-mounted volume for editing code, of course. In fact, this is encouraged. It means you don’t have to rebuild your image every time you change a line of code.

But now you’re faced with a new problem: host-mounted volumes are notoriously slow. What was once a 2 to 3 minute install for your dependencies is now more like 5 to 10 minutes. And every time you decide you need to rebuild your base image or start a new container in which you want to install development dependencies, you’re stuck waiting for “npm install”. Again.

How, then, do you create efficiency in building and maintaining your Docker image, to match the value that Docker brings at runtime and deployment?

The short answer is to use the tools you have, more effectively.

When used correctly, Docker can cache your Node.js modules for you, eliminating the npm delay from your Docker projects almost entirely.

This isn’t the same kind of caching as the web, though. You’re not using a better CDN, a browser cache or proxying the HTTP requests to another server. You’re not switching package managers, either.

What kind of cache is it, then, if not the kind of cache that we build with web apps or a different package manager?

Read on to find out, and to see how you can cut the npm delay from your Docker projects, almost entirely.

The post Are Your Development Tools Making You More Efficient? appeared first on DerickBailey.com.


Selecting A Node.js Image for Docker

Thu, 03/09/2017 - 22:57

Before you begin to run your Node.js application in a Docker container, or even build the app into a container, you have to answer an important question and make a key decision:

Which base Node.js image for Docker do I choose for my app?


The easy answer – and probably the most common one – is to specify the Node.js version you wish to use and take the full image for that version.

This is the mostly-safe answer, as it will give you a container in which you can do practically anything without having to install or configure any other tools; only your code is needed.

It may not be the right choice, as you’ll see below. But in general it’s a safe choice, as it includes a stable release of the Debian Linux system (as most Node.js images do) with many pre-configured tools and dependencies.

Select Your Node.js Version, First

Before you select which version of Linux you want to build from, it’s important to pick your Node.js version.

There’s really only one thing you need to know when doing this. And that is:

Do Not Specify FROM node As Your Base Image

If you specify “FROM node” without a version number tag, you will always be given the latest version of Node.js.

While this may sound like a great thing, at first, you will quickly run into headaches when you rebuild your image and find your application doesn’t run because a new release of Node.js no longer supports an API you’re calling or makes other changes.

You should always specify at least a major version number tag.

More realistically, you should specify a minor version as well.

You can optionally specify a patch number but this is not as necessary. There are typically very few breaking issues between patch releases.
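As a quick sketch, the difference is a single line in the Dockerfile (the version numbers here are only examples):

```dockerfile
# risky: no tag means "latest", which changes out from under you
# FROM node

# better: pin at least the major version, and realistically the minor version too
FROM node:6.10

# optional: pin the patch release as well
# FROM node:6.10.2
```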

Now that you have your Node.js version selected, it’s time to deal with the specific linux distribution and image options.

Selecting Your Linux Distribution

In addition to the major Node.js version numbers, there are other options from which you can choose for your image, including these tag name extensions:

  • the default image (aka ‘jessie’)
  • -slim
  • -onbuild
  • -wheezy
  • -boron
  • -alpine
  • -argon
  • (and others in various combinations)

But before you make your decision on which image to use, you should know that the decision is not permanent.

In fact, it’s fairly simple to change the image from which you’re building. In truth it’s often far more challenging to change the Node.js version than to change the base image from which you build.

Don’t take the choice too seriously, then. Do consider a few of the following points on what each of the many images offers, though.

Wheezy, Jessie, what?

As mentioned before, most of the Node.js images are built from Debian Linux. This is a very stable, very popular distribution of Linux.

This is also where many of these odd names come from.

Like many long-living projects, Debian uses code names for the major versions of its operating system. You can find a list of which version is what name on the Debian release page.

Not all of the Node.js images are built from Debian, however. There are a few other images, including distributions of Alpine Linux and Scientific Linux, to name a few.

However, you only need to care about 2 of these Linux distribution options, in my opinion: Debian and Alpine.

Selection: Full, Slim or Alpine?

Unless you’re looking for a very specific version of Linux and Node.js, for a very specific reason, I would stick with either the latest version of Debian (currently “jessie”), or Alpine Linux.

This drops your choice for the base image down to 3.

  • The full image (with version number tag!)
  • The ‘-slim’ image
  • The ‘-alpine’ image
With the choices narrowed down, it’s a bit easier to find the right option for your scenario.

Full vs `-slim`

Both the full Node.js image and the ‘-slim’ image are built from Debian Linux (v8 “jessie” at the time of writing this). The major difference being the tools and libraries that are installed by default.

If you travel the hierarchy of Dockerfile builds and tags, you’ll find that the full version of the image builds from the same core ‘jessie:curl’ tag as the slim version.

The full version, however, adds a significant number of tools, including:

  • source controls software, such as git, subversion, bazaar, and mercurial
  • runtime libraries and build tools like make, gcc, g++, and others 
  • libraries and API sets such as imagemagick, a dozen other lib* libraries, and others

The end result of the tools and libraries that are installed into the full Node.js image is an image that starts at 650MB.

But, in spite of its size, the full Node.js image can save a significant amount of drive space if you are using it as the base for multiple images. This is one of the beautiful parts of how Docker manages images.

If you need these libraries and tools, then, this is a great option. If you’re building many images that need these tools, this is an especially important option.

If, however, you don’t need these tools and libraries – or you don’t have a lot of hard drive space and want to be more in control of the image contents – then the ‘-slim’ version of the image is probably what you want. This version runs around 250MB to start.

The third option to choose from, however, is the ultimate in small images.

Alpine Linux: ‘-alpine’

Alpine Linux is a distribution that was almost purpose-built for Docker images and other small, container-like uses. It clocks in at a whopping 5MB of drive space for the base operating system.

By the time you add in the Node.js runtime requirements, this image does move up to around 50MB in space. But even at 10x the original Alpine size, it’s still 1/5 the size of the `-slim` option.

My Recommendation: Alpine Linux

For my work, and for the work that I’m putting into my WatchMeCode guide to building Node.js apps in Docker, I prefer Alpine Linux.

Alpine provides a great balance of incredibly small size with options for adding the build tools and libraries that you need. Even with a full build tool set installed, it only hits around 200MB. That’s still less than the ‘-slim’ option, while providing native compilation for things like bcrypt and other libraries.
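As a rough sketch, adding a native build toolchain to the Alpine image is a single apk instruction; the exact package list depends on what your modules need to compile:

```dockerfile
FROM node:6.10-alpine

# native build toolchain for modules like bcrypt that compile during "npm install"
RUN apk add --no-cache make gcc g++ python

WORKDIR /app
COPY package.json .
RUN npm install
```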

There are a few caveats to note with the Alpine distribution, however.

The largest of which is that you may not be able to run all of your needed libraries. Alpine uses ‘musl libc’ instead of ‘glibc’. While this may not mean much to you right now, it does mean that you can’t install and use the Oracle Node.js driver, among other things.

For the most part, however, Alpine Linux is a great distribution to use as a first choice.

And remember, you’re not stuck with the decision you make now. It’s not that difficult to change from Alpine to -slim or the full version, later.

The post Selecting A Node.js Image for Docker appeared first on DerickBailey.com.
