
Derick Bailey - new ThoughtStream
Secrets of a developer/entrepreneur

Docker for JavaScript Developers: On-Site Training

Wed, 06/07/2017 - 17:21

JavaScript moves fast. Both the language, and the frameworks and tooling around it are receiving updates at an enormous rate these days.

This pace of change often means a project you worked on a few months ago is now using “outdated” tools and technologies compared to your current project. And the older a project gets, the more difficult it is to maintain a development environment that supports its older libraries and runtime requirements.

In the past, developers have tried to solve these problems with configuration management tools, library dependency and versioning tools, full-on virtual machines to duplicate entire development environments, and more. But configuration drift is a problem that version management can’t always solve, and duplicating your entire development environment is the easiest way to introduce configuration drift (among other things).

Enter Docker

Docker is virtualization at the application level, encapsulating a single application process with all of its configuration, runtime environment and dependencies. It will help you solve the “works on my machine” problem by nearly eliminating the need to configure the machine on which it runs. You deploy the application as an immutable binary object, and all of its configuration and runtime environment come with it.

That means you no longer have to worry about what version of Node.js your old project is using. You don’t need to re-install Babel.js v5 for an old project, and then v6 again for a new project. You can test the latest and greatest webpack, browserify and other tooling with zero conflict in your current projects.

I Can Help You Get Up To Speed

If you’re getting started with Docker in your development environment and you need help to get your team up and running, let me know. I’ve got multiple services and training offerings that can help.

The post Docker for JavaScript Developers: On-Site Training appeared first on DerickBailey.com.


3 Features of ES7 (and Beyond) That You Should Be Using, Now

Tue, 06/06/2017 - 16:54

JavaScript is anything but a “static” language (pun intended).

It seems everything is changing – even the naming of releases has changed, with years marking the language version.

And starting with the release of ES6 (officially, “ES2015”), the language has continued to evolve at a rapid pace, introducing a staging system to mark the progress of features and changes.

But which features should you be using now? Or soon?


It’s not always obvious, but there is a short list of features from ES2016+ (ES7 and beyond) that I believe every JavaScript developer should be using very soon, if not immediately.

The ES2016+ Short List

My short list of features could stretch all the way back to ES6 – but then it wouldn’t be a very short list.

If you build upon ES6 as the baseline, however, there are only a few features which I believe are real game changers for the everyday JavaScript developer.

  • Object Rest / Spread Properties
  • Observables
  • Async functions

While there are many other great changes that can benefit your work on a day-to-day basis, this short list stands to make an incredible difference in your work.

Object Rest / Spread Properties

How many times have you added underscore.js or lodash to your project, just to get the “.extend” method? I lost count years ago…

This is one of the many things that Object Spread Properties will give you natively in JavaScript. But to understand “spread”, first let’s look at “rest”.

At its core, Object Rest Properties is an update to destructuring assignment, which allows you to take many values out of object properties and assign them individually to variables, with one line of code:
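A sketch, using a hypothetical employee object:

```javascript
const employee = { id: 1, name: "Derick", role: "dev", active: true };

// destructuring assignment: pull individual values out of the object
const { name, role } = employee;

// rest properties: gather the rest of the properties into a new object
const { id, ...rest } = employee;

console.log(id);   // 1
console.log(rest); // { name: "Derick", role: "dev", active: true }
```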

With the changes in Object Rest Properties, you can now gather the “rest of the properties” from an object into a new object, when doing destructuring assignment.

Destructuring assignment is an important feature in reducing syntax noise and clarifying the intent of code.

But what if you want to “re”-structure multiple objects into a new object? With Object Spread Properties, you can do that easily:
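Again with hypothetical objects:

```javascript
const defaults = { role: "dev", active: false };
const profile = { name: "Derick", active: true };

// "re"-structure multiple objects into a new object
const employee = { ...defaults, ...profile };
console.log(employee); // { role: "dev", active: true, name: "Derick" }

// a shallow copy, with no library needed
const copy = { ...employee };
```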

With this, you no longer need underscore.js or lodash to get the “extend” method. You can easily create a shallow copy of an object using Object Spread Properties, instead.

Just be aware that like underscore and lodash, Object Spread is “last one in wins” – meaning if an object at the end of the list (on the right) has the same property as a previous object in the list, the previous value will be overwritten.

Observables

Have you ever tried to mix native DOM events, jQuery events, events from a framework like Backbone or Ember, and other events from other code?

And when creating event handlers in these frameworks and libraries, have you ever noticed that sometimes your handler fires twice (or more) for the events?

This mismatch of API design and the potential for memory leaks are two of the largest problems that JavaScript developers face when dealing with event based development patterns.

In the past, developers had to be keenly aware of the pitfalls of memory leaks, manually removing the event handlers at the right time. As time moved on, framework developers got smart and started wiring up the magic of unregistering event handlers for you.

But the problem of mismatched API design remains… and can throw some serious wrenches in the code that is supposed to handle the registering and unregistering of event handlers.

Enter observables.

While there are new methods and features being added to JavaScript to handle observables natively, the core feature set of an observable – the ability to register and unregister an event handler, among other things – is more of an API design, implemented by you (or another JS dev providing support for them).
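Here is a minimal sketch of that kind of shared API – the `emitter` object stands in for whatever custom event system you happen to be using:

```javascript
// an observable that wraps the DOM's resize event, directly
const resizeObservable = {
  subscribe(observer) {
    window.addEventListener("resize", e => observer.next(e));
  }
};

// an observable that wraps an object's custom event system
// (`emitter` is a stand-in for Backbone, EventEmitter, etc.)
const dataObservable = {
  subscribe(observer) {
    emitter.on("data", value => observer.next(value));
  }
};

// both are consumed through the exact same API
resizeObservable.subscribe({ next: () => console.log("window resized") });
dataObservable.subscribe({ next: value => console.log("data received", value) });
```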

In this example, there are 2 observable objects.

The first listens to the browser window being resized, through the DOM API directly.

The second listens to an object’s custom event system.

While the underlying code for these two events is very different, both of these observable objects share an API that can be used anywhere that supports observables.

This is important for two reasons.

First, the common API for an observable means you can adapt nearly any data source – a custom API, a stream of data, a different event source, and more – into an observable. When you want to use the data source, you no longer have to worry about the API design. It’s the same for all observables.

Second, the API standard for an observable includes a method to remove or unregister the events / stream from the underlying code. This means you now have a common way to clean up your event handlers, no matter where the events are coming from.

No more memory leaks due to mismatched API design or forgetting to unregister your handlers (unless you don’t implement the method call in your observable, of course).
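Here is a sketch of the resize observable again, this time returning a subscription object whose “unsubscribe” method removes the underlying event handler:

```javascript
const resizeObservable = {
  subscribe(observer) {
    const handler = e => observer.next(e);
    window.addEventListener("resize", handler);

    // the subscription knows how to clean up its own event handler
    return {
      unsubscribe() {
        window.removeEventListener("resize", handler);
      }
    };
  }
};

const subscription = resizeObservable.subscribe({
  next: () => console.log("window resized")
});

// later, one common way to clean up - no matter the event source
subscription.unsubscribe();
```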

(Note the difference in the Observable implementation in this version, to account for removing the event handlers when “unsubscribe” is called)

There’s a lot of power behind observables that can’t be shown here, however. For more information on how they work, how they’re built, etc, check out this interview and demonstration with Chet Harrison, on functional reactive programming.

Async Functions

Of all the features in ES2016+, async functions are by far the biggest game changer. If you only learn one thing from this article, it should be that you need to start using async functions as soon as possible.

Why, you ask?

If you’ve ever written code like this, you know the pain that is asynchronous workflow in JavaScript:
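Code something like this, with hypothetical functions for creating an employee record:

```javascript
createEmployee(employeeData, function(err, employee){
  if (err) { return handleError(err); }

  createLogin(employee, function(err, login){
    if (err) { return handleError(err); }

    sendWelcomeEmail(login, function(err){
      if (err) { return handleError(err); }

      console.log("employee created!");
    });
  });
});
```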

Nested function after nested function. Multiple redundant, but necessary, checks for errors. It’s enough to make you want to quit… and this is a simple example!

Imagine if your code could look like this, instead:
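Perhaps something like this, with the same hypothetical functions:

```javascript
const employee = createEmployee(employeeData);
const login = createLogin(employee);
sendWelcomeEmail(login);

console.log("employee created!");
```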

Now that looks nice! So much easier to read – as if the code were entirely synchronous.

The good news is that it only takes a few additional keywords to make this work with async functions:
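The same hypothetical workflow, with the new keywords added:

```javascript
async function onboardEmployee(employeeData){
  const employee = await createEmployee(employeeData);
  const login = await createLogin(employee);
  await sendWelcomeEmail(login);

  console.log("employee created!");
}
```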

By adding “async” to the outer function definition, you can now use the “await” keyword when calling your other async functions. The functions you call with “await” need to return promises – marking them as “async” guarantees that – and promises are the one more key to making all of this work.

It’s common to use callbacks to enable asynchronous code. And I’ve often said that this is preferable to using promises. But with async functions, promises are now the way to go.

By returning a promise from an async function, it can now be consumed with the “await” keyword, as shown above. The end result is code that is far easier to read and understand, easier to maintain, and easier to deal with as a developer.

How To Use These Features, Today

For the features mentioned here, you can start using them today. Even if the feature definition is not 100% complete, there is enough value and enough support to make it both easy and safe to use.

Object Rest / Spread Properties:

The syntax change behind destructuring and restructuring may help us reduce the amount of code we have to write (or reduce the number of libraries we have to bring along with our code) when assigning multiple values to variables, and when making shallow copies of objects. However, there isn’t a lot of strange new implementation under the hood. These syntax features are easily handled by Babel.js and other transpilers.

Observables:

Observables are a tool that has been around for a while in functional programming languages, and there are multiple implementations available for JavaScript already. You can find them in RxJS, Babel.js and TypeScript, along with other non-standard implementations elsewhere.

Async Functions:

The behavior behind this syntax is relatively new, built on top of generators from ES6. Without generators, async functions are very difficult to handle and require third party libraries and extensions for your JavaScript runtime.

However, all modern browsers support generators, making it easy for Babel.js to add async functions, or for you to use the “co” library to create the same feature set without a transpiler.

If you’re running Node.js, v4 and beyond support generators, and v7.6+ supports async functions directly!

3 Rules To Know When It’s Safe To Use New JavaScript Features

While the three features above are available and safe to use, the question of when you can use new features, as they are developed, isn’t always cut and dried.

Prior to ES7 (officially known as “ES2016”), JavaScript moved at a rather slow pace. It would take years for new language features to be standardized, implemented by browsers and other JavaScript runtime environments, and put into general use by developers.

It was sort of easy to know when a feature was ready to use, in the old days, because of this.

Now, though, there’s a stage-based introduction of JavaScript features, used by TC39 – the JavaScript working group. Browser vendors and Node.js tend to move quickly with new features, and it can be hard to keep up with what is and is not usable.

With the new release schedule of the ECMAScript standard, the better browsers auto-updating themselves, and Node.js racing to keep up with the V8 JavaScript engine, how do you know when you can use a new feature?

It’s not as difficult as it might seem, honestly.

You can learn what the feature stages are and which ones are safe to use, learn to check the compatibility tables for your JavaScript environment, and learn to know whether or not something is just simple syntax or major behavioral change.

And you can learn all of this through the FREE guide,

3 Rules to know when it’s safe to use new JavaScript features.

The post 3 Features of ES7 (and Beyond) That You Should Be Using, Now appeared first on DerickBailey.com.


How a 650MB Node.js Image for Docker Uses Less Space Than a 50MB Image

Wed, 05/31/2017 - 14:52

A while back I wrote a post about selecting a base Docker image for Node.js. In that post, I talked about the size difference of the default build for Node.js and the smaller, “slim” and “alpine” builds.

The difference can be significant: 650MB for the full image, vs 50MB for the Alpine Linux version.

However, there’s a note at the bottom of the description for the full, “node:<version>” image in the readme for the Node.js images on Dockerhub, that had me a bit confused (emphasis mine):

This tag is based off of buildpack-deps. buildpack-deps is designed for the average user of docker who has many images on their system. It, by design, has a large number of extremely common Debian packages. This reduces the number of packages that images that derive from it need to install, thus reducing the overall size of all images on your system.

 … wait, what?


You’re saying installing a large number of packages – with massive file size – will reduce the overall size of images on my system?!

Surprisingly (to me, at least) it’s true.

Here’s why…

Docker Image Layers

When you build a Docker image from a Dockerfile, every instruction in the file creates a new image layer. 

While these layers collectively create a single image, they are stored individually with individual IDs. This is done so that the layer can be cached and re-used whenever possible.

By tagging (naming) a Docker image, you can easily refer to a complete image – one that is built out of many layers. You can do a lot of things with a Docker image that has a tag, including create new images `FROM` it, in a new Dockerfile.

When you combine Docker’s cache with tagged images, you get a very efficient re-use of large binary objects.

Using the same tagged image in multiple Dockerfile `FROM` instructions will not re-create the base image every time. It will use the one existing image that your system already has (or download it if it doesn’t have it, yet).

Dockerhub: Public Image Cache

While Docker does a great job of caching images and image layers on your local system, it also provides a globally public repository of image caches, called Dockerhub.

This is where you’ll find the Node.js Docker images, among thousands of others, for public use.

And when you think of Dockerhub as little more than a public cache of images – which can be easily downloaded to your system, to be cached and used and re-used locally – then the way in which a 650MB Node.js image can save space begins to reveal itself.

Saving Space With A Larger Base

Let’s say you have 4 Node.js applications that all build “FROM node:6.9.5-alpine”. Each of these applications uses a module from npm that requires native build tools. To install that module, you have to add the build tools to your Docker image.
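For example, a Dockerfile for one of those apps might look something like this – the exact package list depends on the module being installed:

```dockerfile
FROM node:6.9.5-alpine

# native build tools, needed by node-gyp based modules
RUN apk add --no-cache make gcc g++ python

WORKDIR /app
COPY . /app
RUN npm install
```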

Generally, this will balloon your Docker image from 50MB to around 200MB before you even install your project into the image. 

But worse yet, none of the images built from these four Dockerfiles will re-use the installed tool set. Each of them will add another 200MB of used hard drive space to your system, because each of them will individually install all of the build tools.

With 4 applications and images, you now have 800MB+ of hard drive space used up.

If you were to switch to the full version of the “node:6.9.5” image, however, you would save approximately 550MB of drive space by not duplicating the build tool installation.

Yes, you need to have one copy of the full image and all of its layers, taking up 250MB of space when you build from it.

However, you only need one copy of the 250MB image.

When you specify FROM node:6.9.5 in multiple images on the same machine, it is re-used.

This is how a 650MB image can save space, compared to a 50MB image. Re-use.

Download vs Build

There is very little difference, when it comes to caching, between downloading an image and building an image locally.

If you build your own image and call it “my-image”, for example, you can then re-use the “my-image” as a base. Where that base image comes from is almost irrelevant, as long as Docker knows how to access it.

You can build “my-image” directly on your system and then re-use it on that system.

You can upload “my-image” to a private Docker repository, and re-use it from there.

You can upload “my-image” to the public Dockerhub, as well. 

Wherever “my-image” lives, specifying that as the base image of another Dockerfile will ensure it is re-used and not re-created.

Alpine? Or Full Node.js Image?

The question remains: should you use the Alpine image or the full image, or the “slim” image?

My guide to choosing a Docker image for Node.js (which can be downloaded as a .pdf, using the form below) recommends the “-alpine” variation to start with, and I’ll stick with that recommendation.

However, once you start adding build tools and other common libraries, you have another choice to make.

Is it worth the extra space of building multiple apps with duplicated layers? Or should you use the full Node.js image?

Or, as a third option, should you build your own version of the Node.js -alpine image, with the build tools you need, and re-use that as the base image for your apps?

These are questions no one can answer but you and your team, for your specific project.

The post How a 650MB Node.js Image for Docker Uses Less Space Than a 50MB Image appeared first on DerickBailey.com.


3 Rules For When A New JavaScript Feature Is Ready To Be Used

Mon, 05/15/2017 - 13:30

JavaScript is rapidly evolving.


With the TC39 Working Group setting the course for the language, and the larger community being involved in the process, it’s moving faster than any other language with 20+ years of history behind it.

And it can be overwhelming at times – trying to keep up, trying to use new features and wondering if they are available.

Browsers, Node.js and other JavaScript runtime environments do their best to implement syntax changes early, but that doesn’t mean all features are readily available to all users.

So, how do you know when a new JavaScript feature is ready for production?

Is it safe to use an early stage feature? Or should you wait until that feature is readily available in all browsers, across nearly all of your user base?

Unfortunately, the answer is not always simple.

I do, however, have a few guidelines that I follow when evaluating a change to JavaScript…

3 rules that let you know when you can start using a new JavaScript feature.

To get the guide and learn how to evaluate a new JavaScript feature for yourself, enter your email address in the box, below. I’ll email the guide to you!

The post 3 Rules For When A New JavaScript Feature Is Ready To Be Used appeared first on DerickBailey.com.


Never Use The :latest Image From Docker Hub

Wed, 05/10/2017 - 17:43

It’s tempting to use the “:latest” tag of an image when you first get started with Docker and pulling images from DockerHub. After all, who wouldn’t want the latest and greatest version of MongoDB, Node.js, Redis, etc, when they start a project?

But this is a guaranteed way to ruin your life, destroy your productivity and rip your fancy new hairstyle to shreds, as you sit at your desk a few weeks later, pulling your hair out while stressing over why your project doesn’t run in your Docker container.


Ok, it may not ruin your life, but it can certainly cause you to waste hours of it trying to figure out problems.

For example, a while back I was talking with a WatchMeCode member. After watching my Docker episode on running MongoDB in a Docker container, he made the jump.

He followed the instructions I provided in that screencast, set up the appropriate host mounted volume, and ran the container with all the right settings.

And none of his existing data showed up in the container.

After what felt like hours of running around in circles, he reached out and asked me if I had ever seen any issues like this. It took us a long time to figure out what the problem was…

His locally installed version of MongoDB was fairly old. It used an older file system driver, and created files that were not immediately supported by the newer version of MongoDB he was using in his Docker container.

It turned out he was using the latest MongoDB from Dockerhub, without thinking about upgrade issues like this.

In the end the solution was simple – specify the same, old version of MongoDB that he had installed directly on his laptop previously. Once he did this, the Docker container picked up the data file and everything worked fine.

This is only one example of upgrade blues, though.

I’ve had other experiences with upgrading Node.js versions and modules and libraries being incompatible with the newer version of Node.

If I were to specify the latest version of the Node image from Dockerhub, then I would be opening myself to the risk of running into this problem again.

At some point, the code I’m writing today will have an issue with a newer version of Node. It may not happen tomorrow, but it will happen.

It’s just too risky to use the :latest tag for a Docker image, unless you are in control of that image.

So save yourself the headache, and specify the right version of the right image for your Docker projects.
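In practice, that just means using an explicit tag everywhere you reference an image – the MongoDB version here is only an example:

```bash
# risky: ":latest" points at a different version over time
docker run -d -v /my/data:/data/db mongo:latest

# better: pin the version that matches your existing data files
docker run -d -v /my/data:/data/db mongo:3.0
```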

The post Never Use The :latest Image From Docker Hub appeared first on DerickBailey.com.


How To Learn ES vNext Without Jeopardizing Existing Work

Mon, 04/24/2017 - 15:00

Around a year ago, I wrote a blog post lamenting the high cost of entry for using ES6 features like generators, modules, etc. Since then, the world of JavaScript runtime environments has progressed significantly.

Most (if not all) of the features that were once difficult to use without pre-compilers are now available to the general population with updated browsers, and to back-end developers with new versions of Node.js. The state of JavaScript has improved significantly!

But the problem of new features hasn’t gone away. It’s only moved.

Now instead of wondering how I can work with generators, I’m looking for opportunities to work with async functions and other ES2017 (and beyond) features.

There’s an underlying problem that we have, as JavaScript developers, when it comes to new language versions and features. Most languages require us to install the specific version in whatever environment we are running our application.

But JavaScript is different – at least on the front end of things.

Instead of developers and production engineers installing the latest JavaScript features, we’re waiting for web browsers to catch up to the features. Not only that, but we’re waiting for the general population of people that use our web sites to update their browsers so we can safely use those features.

Sure, we can install new versions of Node.js on our server and run new code in the back-end. But even that can be dangerous.

I mean, when was the last time you heard about a great new language feature or syntax change in JavaScript, and thought to yourself,

“That’s great! I’ll just install the latest, unstable Node.js release, update my Babel.js version and plugins, and download an experimental browser version that might support this syntax if I use command-line flags when starting it!”

If you’re like me and millions of other developers, this isn’t even a remote possibility. It’s just not going to happen.

Why?

Because you have existing projects that need tried-and-true, stable, well-tested and supported versions of all these things. And the risk of installing new, unstable and experimental versions of Node, Babel or any other runtime, and having it break your actual work is far too great.

It’s enough to make a developer want to forget about learning new JavaScript features… to just wait until they become “old” features. Which is unfortunate – because when I see the future of JavaScript, I see code that I desperately want to write, now.

Fortunately, there is a solution to this problem. It is possible to learn new JavaScript features – to take advantage of async functions and other improvements that can greatly reduce the amount of code you have to write. And it can be done safely.

On May 2nd, I’ll be presenting a live WatchMeCode session all about this problem and solution.

The ES7 and Beyond with Docker webinar will give you everything you need to learn the latest JavaScript features without once putting your current projects in danger from new runtime libraries, or other software updates.


And don’t worry if you’ve never used Docker – with the solution that I have, you won’t need any prior Docker experience to take advantage of the latest JavaScript features.

Learn more about this session and register today, at WatchMeCode!

The post How To Learn ES vNext Without Jeopardizing Existing Work appeared first on DerickBailey.com.


With ES7 And Beyond, Do You Need To Know ES6 Generators?

Wed, 04/19/2017 - 13:00

A few years ago, JavaScript introduced the idea of generators to the language. It’s a tool that was absolutely needed in the JavaScript world, and I was very happy to see it added, even if I didn’t like the name at the time.

But now, after a few years of seeing generators in the wild and using them in my code, it’s time to answer the big question.

Do you need to learn generators?

Before I get to the big secret … two secrets, really … it’s important to understand what generators are, which informs why they are so important.

What Is A Generator?

The .NET world has called them “iterators” for 12 years. But I think JavaScript was following Python’s lead with “generators”. You could also call them “coroutines”, which may be the fundamental concept on which generators and iterators are built.

Ok, enough computer science lessons… what is a generator, really?

A generator will halt execution of a function for an indefinite period of time when you call “yield” from inside a function.

The code that invoked the generator can then control exactly when the generator resumes… if at all.

And since you can return a value every time execution halts (you “yield” a value from the function), you can effectively have multiple return values from a single generator.

That’s the real magic of a generator – halting function execution and yielding values – and this is why generators are so incredibly important, too.

But a generator isn’t just one thing.

There Are 2 Parts To A Generator

There’s the generator function, and the generator iterator (I’ll just call that an iterator from here on out).

A generator function is defined with an * (asterisk) near the function keyword or name. This function is responsible for “yield”-ing control of its execution – with a yielded value if needed – to the iterator.

An iterator is created by invoking the generator function.

Once you have an iterator, you can … you guessed it, iterate over the values that the generator function yields.
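For example:

```javascript
function* lineItems(){
  yield "first";
  yield "second";
  yield "third";
}

// invoking the generator function does not run its body -
// it returns an iterator
const it = lineItems();

console.log(it.next()); // { value: "first", done: false }
console.log(it.next()); // { value: "second", done: false }
console.log(it.next()); // { value: "third", done: false }
console.log(it.next()); // { value: undefined, done: true }
```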

The result of an iterator’s “.next()” call has multiple properties to tell you whether or not the iterator is completed, provide the value that was yielded, etc.

But the real power in this is that when you call “it.next();”, the function continues executing from where it left off, pausing at the next yield statement.

This means you can execute a method partway, pause it by yielding control to the code that made the function call, and then later decide if you want to continue executing the generator or not.

For more detail on this, I’ll list some great resources for what generators can do, below.

Right now, though, you should know about the real value and power of generators: how they make async functions a possibility.

Secret #1: Async Functions Use Generators

The truth is, async functions wouldn’t exist without generators. They are built on top of the same core functionality in the JavaScript runtime engine and internally, may even use generators directly (I’m not 100% sure of that, but I wouldn’t be surprised).

In fact, with generators and promises alone, you can create async/await functionality on your own.

It’s true! I’ve written the code, myself. I’ve seen others write it. And there’s a very popular library called co (as in coroutines) that will do it for you.
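A sketch of how that looks with co, assuming createEmployee and createLogin are hypothetical functions that return promises:

```javascript
const co = require("co");

const onboardEmployee = co.wrap(function* (employeeData){
  // each yielded promise pauses the generator until it resolves
  const employee = yield createEmployee(employeeData);
  const login = yield createLogin(employee);
  return login;
});
```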

Compare that to the same async function calls using the syntax you saw yesterday.
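Here’s that syntax again, for comparison:

```javascript
async function onboardEmployee(employeeData){
  const employee = await createEmployee(employeeData);
  const login = await createLogin(employee);
  return login;
}
```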

With only a few minor syntax changes, co has written the same level of abstraction in calling asynchronous methods that look synchronous. It even uses promises under the hood to make this work, like async functions.

Clearly, the ingenuity behind co influenced the final specification of async/await.

Secret #2: You Don’t Need To Learn Generators

Async functions are built on the same underlying technology as generators. They encapsulate what co provided for you, as a formal syntax instead of a 3rd party library.

But do you need to learn generators?

No.

Make no mistake. You absolutely need generators.

Without them, async functions wouldn’t work. But you do not need to learn how to use them, directly.

They’re complex, compared to the way you’ve been working. They aren’t just a new way to write iteration and asynchronous code; they represent a fundamental shift in how code is executed, and the API to manage that is not how a developer building line-of-business applications wants to think.

Sure, there are some use cases where generators can do really cool things. I’ll show you those in the resources below. But your code will not suffer one iota if you don’t learn how to use generators, directly.

Let the library and framework developers deal with generators to optimize the way things work. You can just sit back, focus on the awesomeness that is async functions, and forget about generators.

One Use Case For Generators

Ok, there is one case where I would say you do need generators.

If you want to support semi-modern browsers or Node.js v4 / v6 with async/await functionality…

And if you can’t guarantee the absolute latest versions of these will be used… Node v7.6+, MS Edge 15, Chrome 57, etc…

Then generators are your go-to option with the co library, to create the async function syntax you want.

Other than that, you’re not missing much if you decide to not learn generators.

So I say skip it.

Just wait for async/await to be readily available and spend your time learning promises instead (you absolutely need promises to effectively work with async functions).

Generator Resources

In spite of my advice that you don’t need to learn generators, there are some fun things you can do with them if you can completely wrap your head around them.

It took me a long time to do that.

But here are the resources I used to get my head into a place where generators made sense.

And some of my own blog posts on fun things I’ve done with generators.

The post With ES7 And Beyond, Do You Need To Know ES6 Generators? appeared first on DerickBailey.com.


You Need ES2017’s Async Functions. Here’s Why …

Tue, 04/18/2017 - 13:00

If you’ve ever written code like this, you know the pain that is asynchronous workflow in JavaScript.
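Code like this, for example – the functions here are hypothetical:

```javascript
createEmployee(employeeData, function(err, employee){
  if (err) { return handleError(err); }

  createLogin(employee, function(err, login){
    if (err) { return handleError(err); }

    notifyManager(login, function(err){
      if (err) { return handleError(err); }

      console.log("all done!");
    });
  });
});
```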

Nested function after nested function. Multiple redundant (but probably necessary) checks for errors.

It’s enough to make you want to quit JavaScript… and this is a simple example!

Now imagine how great it would be if your code could look like this.
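Perhaps something like this:

```javascript
const employee = createEmployee(employeeData);
const login = createLogin(employee);
notifyManager(login);

console.log("all done!");
```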

Soooo much easier to read… as if the code were entirely synchronous! I’ll take that any day, over the first example.

Using Async Functions

With async functions, that second code sample is incredibly close to what you can do. It only takes a few additional keywords to mark these function calls as async, and you’re golden.
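The second sample again, with the async keywords added (same hypothetical functions):

```javascript
async function onboardEmployee(employeeData){
  const employee = await createEmployee(employeeData);
  const login = await createLogin(employee);
  await notifyManager(login);

  console.log("all done!");
}
```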

Did you notice the difference, here?

With the addition of “async” to the outer function definition, you can now use the “await” keyword to call your other async functions.

By doing this, the JavaScript runtime will now invoke the async functions in a manner that allows you to wait for a response without using a callback. The code is still asynchronous where it needs to be, and synchronous where it can be.

This code does the same thing, has the same behavior from a functionality perspective. But visually, this code is significantly easier to read and understand.

The question now is: how do you create the async functions that save so much extra code and cruft, allowing you to write such a simple workflow?

Writing Async Functions

If you’ve ever used a JavaScript Promise, then you already know how to create an async function.

Look at how the “createEmployee” function might be written, for example.
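A sketch of that – the database call is a hypothetical stand-in for whatever does the real work:

```javascript
async function createEmployee(employeeData){
  return new Promise((resolve, reject) => {
    // `db` is a hypothetical data access object
    db.employees.insert(employeeData, (err, employee) => {
      if (err) { return reject(err); }
      resolve(employee);
    });
  });
}
```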

This code immediately creates and returns a promise. Once the work to create the employee is done, it then checks for some level of success and resolves the promise with the employee object. If there was a problem, it rejects the promise.

The only difference between this function and any other function where you might have returned a promise, is the use of the “async” keyword in the function definition.

But it’s this one keyword that solves the nested async problem that JavaScript has suffered with, forever.

Async With Flexibility

Beyond the simplicity of reading and understanding this code, there is one more giant benefit that needs to be stated.

With the use of promises in the async functions, you have options for how you handle them. You are not required to “await” the result. You can still use the promise that is returned.
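For example:

```javascript
createEmployee(employeeData)
  .then(employee => createLogin(employee))
  .then(login => console.log("login created", login))
  .catch(err => handleError(err));
```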

This code is just as valid as the previous code.

Yes, this code still calls the same async createEmployee function. But we’re able to take advantage of the promises that are returned when we want to.

And if you look back at the 3rd code sample above, you might remember that I was calling async functions but ultimately using a callback to return the result. Yet again, we see more flexibility.

Reevaluating My Stance On Promises

In the past, I’ve made some pretty strong statements about how I never look to promises as my first level of async code. I’m seriously reconsidering my position.

If the use of promises allows me to write such easily readable code, then I’m in.

Of course, now the challenge is getting support for this in the majority of browsers, as I’m not about to drop a ton of terrible pre-compiler hacks and 3rd party libraries into a browser to make this work for the web.

Node.js on the other hand? Well, it’s only a matter of time before v8.0 is stable for release.

For now, though, I’ll play with v7.6+ in a Docker container and get myself prepared for the new gold standard in asynchronous JavaScript.

The post You Need ES2017’s Async Functions. Here’s Why … appeared first on DerickBailey.com.


What’s The Easiest Way To Get Docker Into Production?

Thu, 04/13/2017 - 15:00


… and once it’s in production, how do I manage, monitor and scale when I need to?

These are questions I’ve often been asked, and have asked myself several times. They are not questions for which I’ve had a good answer, though.

Sure, you can find all kinds of articles about container orchestration, management and auto-magically doing all kinds of awesome stuff at a massive scale. There are plenty of big talks from huge companies and well-known speakers on this. But that doesn’t help a developer on a small team, or someone working on their own. It doesn’t provide a simple answer for learning and growth, either.

And all these talks on the massive-scale auto-magic solutions never once helped me when I was looking at putting a single Docker image into a production environment.

So, how do you go from development to production?

And how do you grow your production environment as needed? When do you know you’re ready to use the next set of tools? And what are the next set of tools?

On March 14th, 2017, I interviewed Elton Stoneman from the Docker Developer Relations team, and we talked about exactly that – how to get Docker into production, after you’ve created your development image.

At a high level, this interview walks through what it takes to go from development to the most basic of production configurations. From there, it moves into a discussion of a more stable environment with Docker Swarm, deployment of multiple containers using a docker-compose.yml file, and then out to large-scale production and enterprise needs with Docker Datacenter.

Along the way, you’ll hear more about what “orchestration” is, get a clear understanding of how a Docker container can be monitored to ensure the application is responding correctly, and learn how to quickly and easily scale an application.

If you’ve ever wondered where to look next, after you’ve created your own Docker image and built an application into it, you’ll want to check out this interview.

Learn how to go from development to production with Elton Stoneman.

P.S. Be sure to read to the bottom of that page to find out how you can watch the interview for free!

The post What’s The Easiest Way To Get Docker Into Production? appeared first on DerickBailey.com.
