Gorgi Kosev

code, music, math


Machine learning ethics

Tue Dec 19 2017

Today I found and watched one of the most important videos on machine learning published this year:

We're building a dystopia just to make people click on ads https://www.youtube.com/watch?v=iFTWM7HV2UI&app=desktop

Go watch it first before reading ahead! I could not possibly summarise it without doing it a disservice.

What struck me most was the following quote:

Having interviewed people who worked at Facebook, I'm convinced that nobody there really understands how it [the machine learning system] works.

The important question is: how come nobody understands how a machine learning system works? You would think it's because the system is very complex and it's hard for any one person to understand it fully. That's not the problem.

The problem is fundamental to machine learning systems.

A machine learning system is a program that is given a target goal, a list of possible actions, and a history of previous actions together with how well they achieved the goal in past contexts. The system should learn from that historical data and be able to predict which action it can select to best achieve the goal.

Let's see what these parts would represent on, say, YouTube, for an ML system that has to pick which videos to show in the sidebar right next to the video you're watching.

The target goal could be, e.g., to maximise the time the user stays on YouTube watching videos. More generally, the ML system's creator supplies a value function that measures the desirability of a certain outcome or behaviour (it could include multiple things like number of products bought, number of ads clicked or viewed, etc).

The action the system can take is the choice of videos in the sidebar. Every different set of videos would be a different alternative action, and could cause the user to either stay on YouTube longer or perhaps leave the site.

Finally, the history of actions includes all previous video lists shown in the sidebar to users, together with the value function outcome from them: the time the user spent on the website after being presented that list. Additional context from that time is also included: which user was it, what was their personal information, their past watching history, the channels they're subscribed to, videos they liked, videos they disliked and so on.

Based on this data, the system learns how to tailor its actions (the videos it shows) so that it achieves the goal by picking the right action for a given context.

At the beginning it will try random things. After several iterations, it will find which things seem to maximize value in which context.

Once trained with sufficient data, it will be able to do some calculations and conclude: "well, when I encountered a situation like this other times, I tried these five options, and option two on average caused users like this one to stay the longest, so I'll do that".
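To make that concrete, here is a minimal sketch of that decision rule. Everything in it (the helper isSimilarContext, the shape of the history records) is hypothetical, and real systems use statistical models rather than literal lookups, but the principle is the same:

// hypothetical sketch: score each candidate action by the average value it
// produced in similar past contexts, then pick the highest-scoring one
function pickAction(candidateActions, history, context) {
    let best = null, bestScore = -Infinity;
    for (const action of candidateActions) {
        const similar = history.filter(h =>
            h.action === action && isSimilarContext(h.context, context));
        const score = similar.reduce((sum, h) => sum + h.value, 0) /
                      Math.max(similar.length, 1);
        if (score > bestScore) { bestScore = score; best = action; }
    }
    return best;
}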

Sure, there are ways to ask some ML systems why they made a decision after the fact, and they can elaborate the variables that had the most effect. But before the algorithm gets the training data, you don't know what it will decide - nobody does! It learns from the history of its own actions and how the users reacted to them, so in essence, the users are programming its behaviour (through the lens of its value function).

Let's say the system learnt that people who have cat videos in their watch history will stay a lot longer if they are given cat videos in their suggestion box. Nothing groundbreaking there.

Now let's say it figures out that the same action is appropriate when they are watching something unrelated, like academic lecture material, because past data suggests that people of that profile leave slightly earlier when given more lecture videos, while they stay for hours when given cat videos, giving up on the lectures.

This raises a very important question - is the system behaving in an ethical manner? Is it ethical to show cat videos to a person trying to study and nudge them towards wasting their time? Even that is a fairly benign example. There are far worse examples mentioned in the TED talk above.

The root of the problem is the value function. Our systems are often blissfully unaware of any side effects their decisions may cause and blatantly disregard basic rules of behaviour that we take for granted. They have no values other than the value function they're maximizing. For them, the end justifies the means. Whether the value function is maximized by manipulating people, preying on their insecurities, making them scared, angry or sad - all of that is unimportant. Here is a scary proposition: if a person is epileptic, the system might learn that the best way to keep them "on the website" is to show them something that will render them unconscious. It wouldn't even know that it didn't really achieve the goal: as far as it knows, autoplay is on and they haven't stopped it in the past two hours, so it all must be "good".

So how do we make these systems ethical?

The first challenge is technical, and it's the easiest one. How do we come up with a value function that encodes the basic values of human ethics? It's easy as pie! You take a bunch of ethicists, give them various situations and ask them to rate actions as ethical/unethical. Then, once you have enough data, you train a new value function so that the system can learn some basic humanity. You end up with an ethics function, and you combine it with the old value function into a new one. As a result the system starts picking more ethical actions. All done. (If only things were that easy!)
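As a sketch (the names are made up, and the weighting itself is of course yet another value judgment), the combination might look like this:

// hypothetical combination of the original value function with a learned ethics function
function combinedValue(action, context) {
    const engagement = valueFunction(action, context); // e.g. expected watch time
    const ethics = ethicsFunction(action, context);    // learned from ethicists' ratings, 0..1
    return engagement * ethics; // unethical actions score near zero, no matter how "engaging"
}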

The second challenge is a business one. How far are you willing to reduce your value maximisation in order to be ethical? What do you do if your competitor doesn't? What are the ethics of putting a number on how much ethics you're willing to sacrifice for profits? (Spoiler alert: they're not great.)

One way to solve that is to have regulations for the ethical behaviour of machine learning systems. Such systems could be held responsible for unethical actions. If those actions are reported by people, investigated by experts and found true in court, the company owning the ML system is held liable. Unethical behaviour of machine learning systems shouldn't be too difficult to spot, although getting evidence might prove difficult. Public pressure and exposure of companies seems to help too. Perhaps we could make a machine learning system that detects unethical behaviour and call it the ML police. Citizens could agree to install the ML police add-on to help monitor and aggregate the behaviour of online ML systems. (If these suggestions look silly, it's because they are.)

Another way to deal with this is to mandate that all ML systems have a feedback feature. The user (or a responsible guardian of the user) should be able to log on to the system, see its past actions within a given context and rate them as ethical or unethical. The system must be designed to use this data and give it precedence when making decisions, such that actions that are computed to be more ethical are always picked over actions that are less ethical. In this scenario the users are the ethicists.

The third challenge is philosophical. Until now, philosophers were content with "there is no right answer, but there have been many thoughts on what exactly is ethical". They better get their act together, because we'll need them to come up with a definite, quantifiable answer real soon.

On the more optimistic side, I hope that any generally agreed upon "standard" ethical system will be a better starting point than having none at all.

JavaScript isn't cancer

Thu Oct 06 2016

The last few days, I've been thinking about what leads so many people to hate JavaScript.

JS is so quirky and unclean! That's supposed to be the primary reason, but after working with a few other dynamic languages, I don't buy it. JS actually has fairly few quirks compared to other dynamic languages.

Just think about PHP's named functions, which are always in the global scope. Except when they are in namespaces (oh hi, another concept), and then it's kinda weird because namespaces can be relative. There are no first class named functions, but function expressions can be assigned to variables. Which must be prefixed with $. There are no real modules, or proper nestable scope - at least not for functions, which are always global. But nested functions only exist once the outer function is called!

In Ruby, blocks are like lambdas except when they are not, and you can pass a block explicitly or yield to the first block implicitly. But there are also lambdas, which are different. Modules are uselessly global, cannot be parameterised over other modules (without resorting to meta programming), and there are several ways to nest them: if you don't nest them lexically, the lookup rules become different. And there are classes, with private variables, which are prefixed with @. I really don't get that sigil fetish.

The above examples are only scratching the surface.

And what are the most often cited problems of JavaScript? Implicit conversions (the wat talk), no large integers, hard-to-understand prototypal inheritance and the this keyword. That doesn't look any worse than the lists above! Plus, the language (pre-ES6) is very minimalistic. It has freeform records with prototypes, and closures with lexical scope. That's it!
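For reference, the implicit-conversion quirks in question boil down to things like:

// the usual "wat" suspects
[] + []     // ""
[] + {}     // "[object Object]"
1 + "1"     // "11"
0.1 + 0.2   // 0.30000000000000004 (every number is a double; no integer types)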

So this supposed "quirkiness" of JavaScript doesn't seem like a satisfactory explanation. There must be something else going on here, and I think I finally realized what that is.

JavaScript is seen as a "low status" language. A 10 day accident, a silly toy language for the browser that ought to be simple and easy to learn. To an extent this is true, largely thanks to the fact that there are very few distinct concepts to be learned.

However, those few concepts combine together into a package with a really good power-to-weight ratio. Additionally, the simplicity ensures that the language is malleable towards even more power (e.g. you can extend it with a type system and then you can idiomatically approximate some capabilities of algebraic sum types, like making illegal states unrepresentable).
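(The untyped half of that pattern - records discriminated by a tag field, which a checker like Flow or TypeScript can then verify exhaustively - looks something like this; the shapes are made up for illustration.)

// plain records with a discriminating tag field
function area(shape) {
    switch (shape.type) {
        case 'circle': return Math.PI * shape.radius * shape.radius;
        case 'rect':   return shape.width * shape.height;
        default: throw new Error('unknown shape: ' + shape.type);
    }
}

area({type: 'circle', radius: 2});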

The emphasis above is on idiomatically for a reason. This sort of extension is somehow perfectly normal in JavaScript. If you took Ruby and used its dictionary type to add a comparable feature, it would have a significantly lower likelihood of being accepted by developers. Why? Because Ruby has standard ways of doing things. You should be using objects and classes, not hashes, to model most of your data. (*)

That was not the case with the simple pre-ES6 JavaScript. There was no module system to organize code. No class system to hierarchically organize blueprints of things that hold state. A lack of basic standard library items, such as maps, sets, iterables, streams, promises. A lack of functions to manipulate existing data structures (dictionaries and arrays).

Combine sufficient power, simplicity/malleability, and the lack of the basic facilities. Add to this the fact that it's the basic option in the browser, the most popular platform. What do you get? You get a TON of people working in it to extend it in various different ways. And they invent a TON of stuff!

We ended up with several popular module systems (object based namespaces, CommonJS, AMD, ES6, the Angular module system, etc) as well as many package managers to manage these modules (npm, bower, jspm, ...). We also got many object/inheritance systems: plain objects, pure prototype extension, simulated classes, "composable object factories", and so on and so forth. Heck, a while ago every other library used to implement its own class system! (That is, until CoffeeScript came and gave the definitive answer on how to implement classes on top of prototypes. This is interesting, and I'll come back to it later.)

This creates dissonance with the language's simplicity. JavaScript is this simple browser language that was supposed to be easy, so why is it so hard? Why are there so many things built on top of it, and how the heck do I choose which one to use? I hate it. Why do I hate it? Probably it's all these silly quirks that it has! Just look at its implicit conversions and its lack of number types other than doubles!

It doesn't matter that many languages are much worse. A great example of the reverse phenomenon is C++. It's a complete abomination, far worse than JavaScript - a Frankenstein in the language domain. But it's seen as "high status", so it has many apologists that will come to defend its broken design: "Yeah, C++ is a serious language, you need grown-up pants to use it". Unfortunately, JS has no such luck: its status as hack-together glue for web pages seems to have been forever cemented in people's heads.

So how do we fix this? You might not realize it, but this is already being fixed as we speak! Remember how CoffeeScript slowed down the proliferation of custom object systems? Browsers and environments are quickly implementing ES6, which standardizes a huge percentage of what used to be the JS wild west. We now have the standard way to do modules, the standard way to do classes, the standard way to do basic procedural async (Promises; async/await). The standard way to do bundling will probably be no-bundling: HTTP2 push + ES6 modules will "just work"!

Finally, I believe the people who think that JavaScript will always be transpiled are wrong. As ES6+ features get implemented in major browsers, more and more people will find the overhead of ES.Next to ES transpilers isn't worth it. This process will stop entirely at some point as the basics get fully covered.

At this point, I'm hoping several things will happen. We'll finally get those big integers and number types that Brendan Eich has been promising. We'll have some more stuff on top of SharedArrayBuffer to enable easier shared memory parallelism, perhaps even immutable data structures that are transferable objects. The wat talk will be obsolete: obviously, you'd be using a static analysis tool such as Flow or TypeScript to deal with that; the fact that the browser ignores those type annotations and does its best to interpret what you meant will be irrelevant. async/await will be implemented in all browsers as the de facto way to do async control flow; perhaps even async iterators too. We'll also have widely accepted standard libraries for data and event streams.

Will JavaScript finally gain the status it deserves then? Probably. But at what cost? JavaScript is big enough now that there is less space for new inventions. And it's fun to invent new things and read about other people's inventions!

On the other hand, maybe then we'll be able to focus on the stuff we're actually building instead.

(*) Or metaprogramming, but then everyone has to agree on the same metaprogramming. In JS, everyone uses records, and they probably use a tag field to discriminate them already: it's a small step to add types for that.

ES7 async functions - a step in the wrong direction

Sun Aug 23 2015

Async functions are a new feature scheduled to become a part of ES7. They build on top of previous capabilities made available by ES6 (promises), letting you write async code as though it were synchronous. At the moment, they're a stage 1 proposal for ES7 and supported by babel / regenerator.

When generator functions were first made available in node, I was very excited. Finally, a way to write asynchronous JavaScript that doesn't descend into callback hell! At the time, I was unfamiliar with promises and the language power you get back by simply having async computations be first class values, so it seemed to me that generators are the best solution available.

Turns out, they aren't. And the same limitations apply for async functions.

Predicates in catch statements

With generators, thrown errors bubble up the function chain until a catch statement is encountered, much like in other languages that support exceptions. On one hand, this is convenient, but on the other, you never know what you're catching once you write a catch statement.

JavaScript catch doesn't support any mechanism to filter errors. This limitation isn't too hard to get around: we can write a function guard

function guard(e, predicate) {
  if (!predicate(e)) throw e;
}

and then use it to e.g. only filter "not found" errors when downloading an image

try {
    await downloadImage(url);
} catch (e) {
    guard(e, e => e.code == 404);
    // handle the "not found" case here
}

But that only gets us so far. What if we want to have a second error handler? We must resort to using if-then-else, making sure that we don't forget to rethrow the error at the end

try {
    await downloadImage(url);
} catch (e) {
    if (e.code == 404) {
        // handle "not found"
    } else if (e.code == 401) {
        // handle "unauthorized"
    } else {
        throw e;
    }
}
Since promises are a userland library, restrictions like the above do not apply. We can write our own promise implementation that demands the use of a predicate filter:

downloadImage(url)
    .catch(e => e.code == 404, e => { /* handle "not found" */ })
    .catch(e => e.code == 401, e => { /* handle "unauthorized" */ });

Now if we want all errors to be caught, we have to say it explicitly:

downloadImage(url)
    .catch(e => true, e => { /* handle any error */ });

Since these constructs are not built-in language features but a DSL built on top of higher order functions, we can impose any restrictions we like instead of waiting on TC39 to fix the language.

Cannot use higher order functions

Because generators and async-await are shallow, you cannot use yield or await within lambdas passed to higher order functions.

This is better explained here. The example given there is:

async function renderChapters(urls) {
  urls.map(getJSON).forEach(j => addToPage((await j).html));
}

and will not work, because you're not allowed to use await from within a nested function. The following will work, but will execute in parallel:

async function renderChapters(urls) {
  urls.map(getJSON).forEach(async j => addToPage((await j).html));
}

To understand why, you need to read this article. In short: it's much harder to implement deep coroutines, so browser vendors probably won't do it.

Besides being very unintuitive, this is also limiting. Higher order functions are succinct and powerful, yet we cannot really use them inside async functions. To get sequential execution we have to resort to clumsy built-in for loops, which often force us into writing ceremonial, stateful code.

Arrow functions give us more power than ever before

Functional DSLs were very powerful even before JS had short lambda syntax. But with arrow functions, things get even cleaner. The amount of code one needs to write can be reduced greatly thanks to short lambda syntax and higher order functions. Let's take the motivating example from the async-await proposal:

function chainAnimationsPromise(elem, animations) {
    var ret = null;
    var p = currentPromise;
    for (var anim of animations) {
        p = p.then(function(val) {
            ret = val;
            return anim(elem);
        });
    }
    return p.catch(function(e) {
        /* ignore and keep going */
    }).then(function() {
        return ret;
    });
}

With bluebird's Promise.reduce, this becomes

function chainAnimationsPromise(elem, animations) {
  return Promise.reduce(animations,
      (lastVal, anim) => anim(elem).catch(_ => Promise.reject(lastVal)), null)
    .catch(lastVal => lastVal);
}

In short: functional DSLs are now more powerful than built in constructs, even though (admittedly) they may take some getting used to.

But this is not why async functions are a step in the wrong direction. The problems above are not unique to async functions. The same problems apply to generators: async functions merely inherit them as they're very similar.

Async functions also go another step backwards.

Loss of generality and power

Despite their shortcomings, generator based coroutines have one redeeming quality: they allow you to redefine the coroutine execution engine. This is extremely powerful, and I will demonstrate by giving the following example:

Let's say we were given the task of writing the save function for an issue tracker. The issue author can specify the issue's title and text, as well as any other issues that are blocking the solution of the newly entered issue.

Our initial implementation is simple:

async function saveIssue(data, blockers) {
    let issue = await Issues.insert(data);
    for (let blockerId of blockers) {
      await BlockerIssues.insert({blocker: blockerId, blocks: issue.id});
    }
}

Issues.insert = async function(data) {
    return db.query("INSERT ... VALUES", data).execWithin(db.pool);
}

BlockerIssues.insert = async function(data) {
    return db.query("INSERT .... VALUES", data).execWithin(db.pool);
}

Issues and BlockerIssues are references to the corresponding tables in an SQL database. Their insert methods return a promise that indicates when the query has completed. The query is executed by a connection pool.

But then, we run into a problem. We don't want to partially save the issue if some of the data was not inserted successfully. We want the entire save operation to be atomic. Fortunately, SQL databases support this via transactions, and our database library has a transaction abstraction. So we change our code:

async function saveIssue(data, blockers) {
    let tx = db.beginTransaction();
    let issue = await Issues.insert(tx, data);
    for (let blockerId of blockers) {
      await BlockerIssues.insert(tx, {blocker: blockerId, blocks: issue.id});
    }
}

Issues.insert = async function(tx, data) {
    return db.query("INSERT ... VALUES", data).execWithin(tx);
}

BlockerIssues.insert = async function(tx, data) {
    return db.query("INSERT .... VALUES", data).execWithin(tx);
}

Here, we changed the code in two ways. Firstly, we created a transaction within the saveIssue function. Secondly, we changed both insert methods to take this transaction as an argument.

Immediately we can see that this solution doesn't scale very well. What if we need to use saveIssue as a part of a larger transaction? Then it has to take a transaction as an argument. Who will create the transactions? The top level service. What if the top level service becomes a part of a larger service? Then we need to change the code again.

We can reduce the extent of this problem by writing a base class that automatically initializes a transaction if one is not passed via the constructor, and then have Issues, BlockerIssues, etc. inherit from this class.

class Transactionable {
    constructor(tx) {
        this.transaction = tx || db.beginTransaction();
    }
}
class IssueService extends Transactionable {
    async saveIssue(data, blockers) {
        let issues = new Issues(this.transaction);
        let blockerIssues = new BlockerIssues(this.transaction);
        // ...
    }
}
class Issues extends Transactionable { ... }
class BlockerIssues extends Transactionable { ... }
// etc

Like many OO solutions, this only spreads the problem across the plate to make it look smaller but doesn't solve it.

Generators are better

Generators let us define the execution engine. The iteration is driven by the function that consumes the generator, which decides what to do with the yielded values. What if instead of only allowing promises, our engine let us also:

  1. Specify additional options which are accessible from within
  2. Yield queries. These will be run in the transaction specified in the options above
  3. Yield other generator iterables: These will be run with the same engine and options
  4. Yield promises: These will be handled normally

Let's take the original code and simplify it:

function* saveIssue(data, blockers) {
    let issue = yield Issues.insert(data);
    for (var blockerId of blockers) {
      yield BlockerIssues.insert({blocker: blockerId, blocks: issue.id});
    }
}

Issues.insert = function* (data) {
    return db.query("INSERT ... VALUES", data);
}

BlockerIssues.insert = function* (data) {
    return db.query("INSERT .... VALUES", data);
}

From our http handler, we can now write

var myengine = require('./my-engine');

app.post('/issues/save', function(req, res) {
  myengine.run(saveIssue(data, blockers), {tx: db.beginTransaction()});
});

Let's implement this engine:

function run(iterator, options) {
    function id(x) { return x; }
    function iterate(value) {
        var next = iterator.next(value);
        var request = next.value;
        var nextAction = next.done ? id : iterate;

        if (isIterator(request)) {
            return run(request, options).then(nextAction);
        } else if (isQuery(request)) {
            return request.execWithin(options.tx).then(nextAction);
        } else if (isPromise(request)) {
            return request.then(nextAction);
        }
    }
    return iterate();
}

The best part of this change is that we did not have to change the original code at all. We didn't have to add the transaction parameter to every function, to take care to properly propagate it everywhere and to properly create the transaction. All we needed to do is just change our execution engine.

And we can add much more! We can yield a request to get the current user, if any, so we don't have to thread that through our code. In fact, we can implement continuation local storage with only a few lines of code.
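A hedged sketch of how that might look with the engine above (the getContext request type and the extra engine branch are made up for this example):

// a made-up request type that the engine recognizes
function getContext() { return {isContextRequest: true}; }

// one extra branch inside the engine's iterate():
//   else if (request && request.isContextRequest)
//       return Promise.resolve(options).then(nextAction);

function* saveIssueAsCurrentUser(data, blockers) {
    let user = (yield getContext()).currentUser;
    return yield saveIssue(Object.assign({author: user.id}, data), blockers);
}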

Async generators are often given as a reason why we need async functions. If yield is already being used as await, how can we get both working at the same time without adding a new keyword? Is that even possible?

Yes. Here is a simple proof-of-concept. github.com/spion/async-generators. All we needed to do is change the execution engine to support a mechanism to distinguish between awaited and yielded values.

Another example worth exploring is a query optimizer that supports aggregate execution of queries. If we replace Promise.all with our own implementation called parallel, then we can add support for non-promise arguments.

Let's say we have the following code to notify owners of blocked issues in parallel when an issue is resolved:

let blocked = yield BlockerIssues.where({blocker: blockerId})
let owners  = yield engine.parallel(blocked.map(issue => issue.getOwner()))

for (let owner of owners) yield owner.notifyResolved(issue)

Instead of returning an SQL based query, we can have getOwner() return data about the query:

{table: 'users', id: issue.user_id}

and have the engine optimize the execution of parallel queries by sending a single query per table rather than per item:

if (isParallelQuery(query)) {
    var results = _(query.items).groupBy('table')
      .map((items, t) => db.query(`select * from ${t} where id in ?`,
                                  items.map(it => it.id))
        .then(results => results.sort(byOrderOf(query.items))));
    // ...
}

And voila, we've just implemented a query optimizer. It will fetch all issue owners with a single query. If we add an SQL parser into the mix, it should be possible to rewrite real SQL queries.

We can do something similar on the client too with GraphQL queries by aggregating multiple individual queries.

And if we add support for iterators, the optimization becomes deep: we would be able to aggregate queries that are several layers deep within other generator functions. In the above example, getOwner() could be another generator which produces a query for the user as its first result. Our implementation of parallel would run all those getOwner() iterators and consolidate their first queries into a single query. All this is done without those functions knowing anything about it (thus, without breaking modularity).

Async functions can't let us do any of this. All we get is a single execution engine that only knows how to await promises. To make matters worse, thanks to the unfortunately short-sighted recursive thenable assimilation design decision, we can't simply create our own thenable that will support the above extra features. If we try to do that, we will be unable to safely use it with promises. We're stuck with what we get by default in async functions, and that's it.

Generators are JavaScript's programmable semicolons. Let's not take away that power by taking away the programmability. Let's drop async/await and write our own interpreters.

Why I am switching to promises

Mon Oct 07 2013

I'm switching my node code from callbacks to promises. The reasons aren't merely aesthetic, they're rather practical:

Throw-catch vs throw-crash

We're all human. We make mistakes, and then JavaScript throws an error. How do callbacks punish that mistake? They crash your process!

But spion, why don't you use domains?

Yes, I could do that. I could crash my process gracefully instead of letting it just crash. But it's still a crash, no matter what lipstick you put on it. It still results in an inoperative worker. With thousands of requests, 0.5% hitting a throwing path means over 50 process shutdowns and most likely a denial of service.

And guess what a user that hits an error does? Starts repeatedly refreshing the page, that's what. The horror!

Promises are throw-safe. If an error is thrown in one of the .then callbacks, only that single promise chain will die. I can also attach error or "finally" handlers to do any clean up if necessary - transparently! The process will happily continue to serve the rest of my users.
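A small illustration (somePromise and respondWithError are placeholders):

somePromise.then(function(res) {
    return JSON.parse(res.body); // if this throws...
}).catch(function(err) {
    // ...the error arrives here as a rejection instead of crashing the process
    respondWithError(err);
});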

For more info see #5114 and #5149. To find out how promises can solve this, see bluebird #51

if (err) return callback(err)

That line is haunting me in my dreams now. What happened to the DRY principle?

I understand that it's important to explicitly handle all errors. But I don't believe it's important to explicitly bubble them up the callback chain. If I don't deal with the error here, that's because I can't deal with the error there - I simply don't have enough context.

But spion, why don't you wrap your callbacks?

I guess I could do that and lose the callback stack when generating a new Error(). Or since I'm already wrapping things, why not wrap the entire thing with promises, rely on longStackSupport, and handle errors at my discretion?

Also, what happened to the DRY principle?

Promises are now part of ES6

Yes, they will become a part of the language. New DOM APIs will be using them too. jQuery already switched to promise...ish things. Angular utilizes promises everywhere (even in the templates). Ember uses promises. The list goes on.

Browser libraries already switched. I'm switching too.

Containing Zalgo

Your promise library prevents you from releasing Zalgo. You can't release Zalgo with promises. It's impossible for a promise to result in the release of the Zalgo-beast. Promises are Zalgo-safe (see section 3.1).

Callbacks getting called multiple times

Promises solve that too. Once the operation is complete and the promise is resolved (either with a result or with an error), it cannot be resolved again.
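For example, with the standard promise constructor:

var p = new Promise(function(resolve, reject) {
    resolve(1);
    resolve(2);                // ignored - the promise has already settled
    reject(new Error("nope")); // also ignored
});
p.then(function(v) { console.log(v); }); // logs 1, exactly once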

Promises can do your laundry

Oops, unfortunately, promises won't do that. You still need to do it manually.

But you said promises are slow!

Yes, I know I wrote that. But I was wrong. A month after I wrote the giant comparison of async patterns, Petka Antonov wrote Bluebird. It's a wicked fast promise library, and here are the charts to prove it:

[Charts: time to complete (ms) and memory usage (MB) vs. number of parallel requests]

And now, a table containing many patterns, 10 000 parallel requests, 1 ms per I/O op. Measure ALL the things!

file time(ms) memory(MB)
callbacks-original.js 316 34.97
callbacks-flattened.js 335 35.10
callbacks-catcher.js 355 30.20
promises-bluebird-generator.js 364 41.89
dst-streamline.js 441 46.91
callbacks-deferred-queue.js 455 38.10
callbacks-generator-suspend.js 466 45.20
promises-bluebird.js 512 57.45
thunks-generator-gens.js 517 40.29
thunks-generator-co.js 707 47.95
promises-compose-bluebird.js 710 73.11
callbacks-generator-genny.js 801 67.67
callbacks-async-waterfall.js 989 89.97
promises-bluebird-spawn.js 1227 66.98
promises-kew.js 1578 105.14
dst-stratifiedjs-compiled.js 2341 148.24
rx.js 2369 266.59
promises-when.js 7950 240.11
promises-q-generator.js 21828 702.93
promises-q.js 28262 712.93
promises-compose-q.js 59413 778.05

Promises are not slow. At least, not anymore. In fact, bluebird generators are almost as fast as regular callback code (they're also the fastest generators as of now). And bluebird promises are definitely at least two times faster than async.waterfall.

Considering that bluebird wraps the underlying callback-based libraries and makes your own callbacks exception-safe, this is really amazing. async.waterfall doesn't do this: exceptions still crash your process.

What about stack traces?

Bluebird has them behind a flag that slows it down about 5 times. They're even longer than Q's longStackSupport: bluebird can give you the entire event chain. Simply enable the flag in development mode, and you're suddenly in debugging nirvana. It may even be viable to turn them on in production!
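Enabling it looks something like this (using bluebird's Promise.longStackTraces):

var Promise = require('bluebird');
// long stack traces are expensive - enable them outside production only
if (process.env.NODE_ENV !== 'production') Promise.longStackTraces();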

What about the community?

This is a valid point. Mikeal said it: If you write a library based on promises, nobody is going to use it.

However, both bluebird and Q give you promise.nodeify. With it, you can write a library with a dual API that can both take callbacks and return promises:

module.exports = function fetch(itemId, callback) {
    return locate(itemId).then(function(location) {
        return getFrom(location, itemId);
    }).nodeify(callback);
}

And now my library is not imposing promises on you. In fact, my library is even friendlier to the community: if I make a dumb mistake that causes an exception to be thrown in the library, the exception will be passed as an error to your callback instead of crashing your process. Now I don't have to fear the wrath of angry library users expecting zero downtime on their production servers. That's always a plus, right?

What about generators?

To use generators with callbacks you have two options

  1. use a resumer style library like suspend or genny
  2. wrap callback-taking functions to become thunk returning functions.

Since #1 is proving to be unpopular, and #2 already involves wrapping, why not just s/thunk/promise/g in #2 and use generators with promises?
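For example, bluebird's Promise.coroutine drives a generator that yields promises (readFile is assumed to be promisified here):

var Promise = require('bluebird');

var readAndUpload = Promise.coroutine(function* (file, url) {
    var content = yield fs.readFile(file); // assuming a promise-returning readFile
    return yield uploadData(url, content);
});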

But promises are unnecessarily complicated!

Yes, the terminology used to explain promises can often be confusing. But promises themselves are pretty simple - they're basically like lightweight streams for single values.

Here is a straight-forward guide that uses known principles and analogies from node (remember, the focus is on simplicity, not correctness):

Edit (2014-01-07): I decided to re-do this tutorial into a series of short articles called promise nuggets. The content is CC0 so feel free to fork, modify, improve or send pull requests. The old tutorial will remain available within this article.

Promises are objects that have a then method. Unlike node functions, which take a single callback, the then method of a promise can take two callbacks: a success callback and an error callback. When one of these two callbacks returns a value or throws an exception, then must behave in a way that enables stream-like chaining and simplified error handling. Let's explain that behavior of then through examples:

Imagine that node's fs was wrapped to work in this manner. This is pretty easy to do - bluebird already lets you do something like that with promisify(). Then this code:

fs.readFile(file, function(err, res) {
    if (err) handleError(err);
    // otherwise, use res here
});

will look like this:

fs.readFile(file).then(function(res) {
    // use res here
}, function(err) {
    handleError(err);
});

What's going on here? fs.readFile(file) starts a file reading operation. That operation is not yet complete at the point when readFile returns. This means we can't return the file content. But we can still return something: we can return the reading operation itself. And that operation is represented with a promise.
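(In real code, wrapping fs with bluebird looks like the snippet below - promisifyAll adds Async-suffixed methods rather than replacing readFile itself; the shorter name is kept in these examples for readability.)

var Promise = require('bluebird');
var fs = Promise.promisifyAll(require('fs'));

fs.readFileAsync(file).then(function(res) {
    // use res here
});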

This is sort of like a single-value stream:

net.connect(port).on('data', function(res) {
    // use res here
}).on('error', function(err) {
    handleError(err);
});

So far, this doesn't look that different from regular node callbacks - except that you use a second callback for the error (which isn't necessarily better). So when does it get better?

It's better because you can attach the callback later if you want. Remember, fs.readFile(file) returns a promise now, so you can put that in a var, or return it from a function:

var filePromise = fs.readFile(file);
// do more stuff... even nest inside another promise, then
filePromise.then(function(res) { ... });

Yup, the second callback is optional. We're going to see why later.

Okay, that's still not much of an improvement. How about this then? You can attach more than one callback to a promise if you like:

filePromise.then(function(res) { uploadData(url, res); });
filePromise.then(function(res) { saveLocal(url, res); });

Hey, this is beginning to look more and more like streams - they too can be piped to multiple destinations. But unlike streams, you can attach more callbacks and get the value even after the file reading operation completes.

Still not good enough?

What if I told you... that if you return something from inside a .then() callback, then you'll get a promise for that thing on the outside?

Say you want to get a line from a file. Well, you can get a promise for that line instead:

var filePromise = fs.readFile(file);

var linePromise = filePromise.then(function(data) {
    return data.toString().split('\n')[line];
});

var beginsWithHelloPromise = linePromise.then(function(line) {
    return /^hello/.test(line);
});

That's pretty cool, although not terribly useful - we could just put both sync operations in the first .then() callback and be done with it.

But guess what happens when you return a promise from within a .then callback. You get a promise for a promise outside of .then()? Nope, you just get the same promise!

function readProcessAndSave(inPath, outPath) {
    // read the file
    var filePromise = fs.readFile(inPath);
    // then send it to the transform service
    var transformedPromise = filePromise.then(function(content) {
        return service.transform(content);
    });
    // then save the transformed content
    var writeFilePromise = transformedPromise.then(function(transformed) {
        return fs.writeFile(outPath, transformed);
    });
    // return a promise that "succeeds" when the file is saved.
    return writeFilePromise;
}

readProcessAndSave(inPath, outPath).then(function() {
    // the file was read, transformed and saved
}, function(err) {
    // This function will catch *ALL* errors from the above
    // operations including any exceptions thrown inside .then
    console.log("Oops, it failed.", err);
});

Now it's easier to understand chaining: at the end of every function passed to a .then() call, simply return a promise.

Let's make our code even shorter:

function readProcessAndSave(inPath, outPath) {
    return fs.readFile(inPath)
        .then(service.transform)
        .then(fs.writeFile.bind(fs, outPath));
}

Mind = blown! Notice how I don't have to manually propagate errors. They will automatically get passed with the returned promise.

What if we want to read, process, then upload, then also save locally?

function readUploadAndSave(file, url, otherPath) {
    var content;
    // read the file
    return fs.readFile(file).then(function(vContent) {
        content = vContent;
        // then upload it
        return uploadData(url, content);
    }).then(function() { // after it's uploaded
        // save it
        return fs.writeFile(otherPath, content);
    });
}

Or just nest it if you prefer the closure.

function readUploadAndSave(file, url, otherPath) {
    // read the file
    return fs.readFile(file).then(function(content) {
        // then upload it
        return uploadData(url, content).then(function() {
            // after it's uploaded, save it
            return fs.writeFile(otherPath, content);
        });
    });
}

But hey, you can also upload and save in parallel!

function readUploadAndSave(file, url, otherPath) {
    // read the file
    return fs.readFile(file)
        .then(function(content) {
            // create a promise that is done when both the upload
            // and file write are done:
            return Promise.join(
                uploadData(url, content),
                fs.writeFile(otherPath, content));
        });
}

No, these are not "conveniently chosen" functions. Promise code really is that short in practice!

Similarly to how in a stream.pipe chain the last stream is returned, in promise pipes the promise returned from the last .then callback is returned.

That's all you need, really. The rest is just converting callback-taking functions to promise-returning functions and using the stuff above to do your control flow.

You can also return values in case of an error. So for example, to write a readFileOrDefault (which returns a default value if for example the file doesn't exist) you would simply return the default value from the error callback:

function readFileOrDefault(file, defaultContent) {
    return fs.readFile(file).then(function(fileContent) {
        return fileContent;
    }, function(err) {
        return defaultContent;
    });
}

You can also throw exceptions within both callbacks passed to .then. The user of the returned promise can catch those errors by adding a second .then handler.

Now how about configFromFileOrDefault that reads and parses a JSON config file, falls back to a default config if the file doesn't exist, but reports JSON parsing errors? Here it is:

function configFromFileOrDefault(file, defaultConfig) {
    // if fs.readFile fails, a default config is returned.
    // if JSON.parse throws, this promise propagates that error.
    return fs.readFile(file).then(JSON.parse,
           function ifReadFails() {
               return defaultConfig;
           });
    // if we want to catch JSON.parse errors, we need to chain another
    // .then here - this one only captures errors from fs.readFile(file)
}

Finally, you can make sure your resources are released in all cases, even when an error or exception happens:

var result = doSomethingAsync();

return result.then(function(value) {
    // clean up first, then return the value.
    return cleanUp().then(function() { return value; });
}, function(err) {
    // clean up, then re-throw that error
    return cleanUp().then(function() { throw err; });
});

Or you can do the same using .finally (from both Bluebird and Q):

var result = doSomethingAsync();
return result.finally(cleanUp);

The same promise is still returned, but only after cleanUp completes.

But what about async?

Since promises are actual values, most of the tools in async.js become unnecessary and you can just use whatever you're using for regular values, like your regular array.map / array.reduce functions, or just plain for loops. That, and a couple of promise array tools like .all, .spread and .some.

You already have async.waterfall and async.auto with .then and .spread chaining:

// (starting from some promise-returning call, e.g. a hypothetical getItems())
getItems()
    .then(function(items) {
        // fetch versions in parallel
        var v1 = versions.get(items.last),
            v2 = versions.get(items.previous);
        return [v1, v2];
    })
    .spread(function(v1, v2) {
        // both of these are now complete.
        return diffService.compare(v1.blob, v2.blob);
    })
    .then(function(diff) {
        // voila, diff is ready. Do something with it.
    });

async.parallel / async.map are straightforward:

// download all items, then get their names
var pNames = ids.map(function(id) {
    return getItem(id).then(function(result) {
        return result.name;
    });
});
// wait for things to complete:
Promise.all(pNames).then(function(names) {
    // we now have all the names.
});

What if you want to wait for the current item to download first (like async.mapSeries and async.series)? That's also pretty straightforward: just wait for the current download to complete, then start the next download, then extract the item name, and that's exactly what you say in the code:

// start with current being an "empty" already-fulfilled promise
var current = Promise.fulfilled();
var namePromises = ids.map(function(id) {
    // wait for the current download to complete, then get the next
    // item, then extract its name.
    current = current
        .then(function() { return getItem(id); })
        .then(function(item) { return item.name; });
    return current;
});
Promise.all(namePromises).then(function(names) {
    // use all names here.
});

The only thing that remains is mapLimit - which is a bit harder to write - but still not that hard:

var queued = [], parallel = 3;
var namePromises = ids.map(function(id) {
    // How many items must download before fetching the next?
    // The queued, minus those running in parallel, plus one of
    // the parallel slots.
    var mustComplete = Math.max(0, queued.length - parallel + 1);
    // when enough items are complete, queue another request for an item
    var download = Promise.some(queued, mustComplete)
        .then(function() { return getItem(id); });
    queued.push(download);
    return download.then(function(item) {
        // after that new download completes, get the item's name.
        return item.name;
    });
});
Promise.all(namePromises).then(function(names) {
    // use all names here.
});

That covers most of async.

What about early returns?

Early returns are a pattern used throughout both sync and async code. Take this hypothetical sync example:

function getItem(key) {
    var item;
    // early-return if the item is in the cache.
    if (item = cache.get(key)) return item;
    // continue to get the item from the database. cache.put returns the item.
    item = cache.put(key, database.get(key));

    return item;
}

If we attempt to write this using promises, at first it looks impossible:

function getItem(key) {
    return cache.get(key).then(function(item) {
        // early-return if the item is in the cache.
        if (item) return item;
        return database.get(key);
    }).then(function(putOrItem) {
        // what do we do here to avoid the unnecessary cache.put ?
    });
}

How can we solve this?

We solve it by remembering that the callback variant looks like this:

function getItem(key, callback) {
    cache.get(key, function(err, res) {
        // early-return if the item is in the cache.
        if (res) return callback(null, res);
        // continue to get the item from the database
        database.get(key, function(err, res) {
            if (err) return callback(err);
            // cache.put calls back with the item
            cache.put(key, res, callback);
        });
    });
}

The promise version can do pretty much the same - just nest the rest of the chain inside the first callback.

function getItem(key) {
    return cache.get(key).then(function(res) {
        // early return if the item is in the cache
        if (res) return res;
        // continue the chain within the callback.
        return database.get(key).then(function(res) {
            return cache.put(key, res);
        });
    });
}

Or alternatively, if a cache miss results with an error:

function getItem(key) {
    return cache.get(key).catch(function(err) {
        return database.get(key).then(cache.put);
    });
}

That means that early returns are just as easy as with callbacks, and sometimes even easier (in the case of errors).

What about streams?

Promises can work very well with streams. Imagine a limit stream that allows at most 3 promises resolving in parallel, backpressuring otherwise, processing items from leveldb:

originalSublevel.createReadStream().pipe(limit(3, function(data) {
    return convertor(data.value).then(function(converted) {
        return {key: data.key, value: converted};
    });
}));

Or how about stream pipelines that are safe from errors without attaching error handlers to all of them?

pipeline(original, limiter, converted).then(function(done) {
    // all of the streams completed successfully
}, function(streamError) {
    // an error from any of the streams in the pipeline
});

Looks awesome. I definitely want to explore that.

The future?

In ES7, promises will become monadic (by getting flatMap and unit). Also, we're going to get generic syntax sugar for monads. Then, it truly won't matter what style you use - stream, promise or thunk - as long as it also implements the monad functions. That is, except for callback-passing style - it won't be able to join the party because it doesn't produce values.

I'm just kidding, of course. I don't know if that's going to happen. Either way, promises are useful and practical and will remain useful and practical in the future.

Closures are unavoidable in node

Fri Aug 23 2013

A couple of weeks ago I wrote a giant comparison of node.js async code patterns that mostly focuses on the new generators feature in EcmaScript 6 (Harmony)

Among other implementations there were two callback versions: original.js, which contains nested callbacks, and flattened.js, which flattens the nesting a little bit. Both make extensive use of JavaScript closures: every time the benchmarked function is invoked, a lot of closures are created.

Then Trevor Norris wrote that we should be avoiding closures when writing performance-sensitive code, hinting that my benchmark may be an example of "doing it wrong"

I decided to try and write two more flattened variants. The idea is to minimize performance loss and memory usage by avoiding the creation of closures.

You can see the code here: flattened-class.js and flattened-noclosure.js

Of course, this made complexity skyrocket. Let's see what it did for performance.

These are the results for 50 000 parallel invocations of the upload function, with simulated I/O operations that always take 1ms. Note: suspend is currently the fastest generator based library

file time(ms) memory(MB)
flattened-class.js 1398 106.58
flattened.js 1453 110.19
flattened-noclosure.js 1574 102.28
original.js 1749 124.96
suspend.js 2701 144.66

No performance gains. Why?

Because this kind of code requires that results from previous callbacks are passed to the next callback. And unfortunately, in node this means creating closures.

There really is no other option. Node core functions only take callback functions. This means we have to create a closure: it's the only mechanism in JS that allows you to include context together with a function.

And yeah, bind also creates a closure:

function bind(fn, ctx) {
    return function bound() {
        return fn.apply(ctx, arguments);
    };
}

Notice how bound is a closure over ctx and fn.

Now, if node core functions were also able to take a context argument, things could have been different. For example, instead of writing:

fs.readFile(f, bind(this.afterFileRead, this));

if we were able to write:

fs.readFile(f, this.afterFileRead, this);

then we would be able to write code that avoids closures and flattened-class.js could have been much faster.

But we can't do that.

What if we could though? Let's fork timers.js from node core and find out:

I added context passing support to the Timeout class. The result was timers-ctx.js which in turn resulted with flattened-class-ctx.js

And here is how it performs:

file time(ms) memory(MB)
flattened-class-ctx.js 929 59.57
flattened-class.js 1403 106.57
flattened.js 1452 110.19
original.js 1743 125.02
suspend.js 2834 145.34

Yeah. That shaved off another couple of hundred milliseconds.

Is it worth it?

name tokens complexity
suspend.js 331 1.10
original.js 425 1.41
flattened.js 477 1.58
flattened-class-ctx.js 674 2.23

Maybe, maybe not. You decide.

Older posts

Analysis of generators and other async patterns in node

Google Docs on the iPad

Introducing npmsearch

Fixing Hacker News with a reputation system

Why native sucks and HTML5 rocks: porting

Let it snow

Intuitive JavaScript array filtering function pt2

Intuitive JavaScript array filtering function pt1

Amateur - Lasse Gjertsen