Message Systems in Programming: Part 5 of 7 – Promise and Deferred

Promise and Deferred (or Futures and Completers)

We’ve glossed over asynchronous coding up to this point. For many coming from languages with reasonable event APIs (ActionScript) or extremely nice ones (C#), it may not at first look like a problem, or even appear to be an edge case. Coming from ActionScript, it took me years to get comfortable with Promises and understand why they were helpful. Also, many in those languages either create, or have facilities that help create, orchestration code to mitigate common asynchronous coding issues.

But first, why asynchronous code? Synchronous code in GUIs is challenging for 2 reasons.

The first is that most UI runtimes have code execution and drawing in the same thread. The Elastic Racetrack metaphor is often used to explain how this works. This means our code doing calculations and handling user events (like clicks) runs in the same thread as the code that draws and redraws things to the screen. At first this seems reasonable given how fast computers are. However, code complexity grows in line with sophisticated UI such as dynamic graphs, hardware accelerated animation, and complicated redrawing of multiple DOM elements for text and graphics in the browser… It’s a lot to do in a few milliseconds so the user doesn’t notice. That, and this all assumes the code has no orchestration to optimize the redraw part of the track and the data processing part of the track, such as invalidation strategies, also known as deferred rendering.
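To make “invalidation” concrete, here’s a minimal sketch of deferred rendering (the component and its properties are hypothetical): setters only mark the component dirty, and one redraw happens on the next frame via requestAnimationFrame.

var component = {
	dirty: false,
	width: 0,
	height: 0,
	setSize: function(width, height)
	{
		this.width = width;
		this.height = height;
		this.invalidate();
	},
	invalidate: function()
	{
		// many property changes, one redraw on the next frame
		if(this.dirty === false)
		{
			this.dirty = true;
			requestAnimationFrame(this.render.bind(this));
		}
	},
	render: function()
	{
		this.dirty = false;
		console.log("one redraw for any number of setSize calls:", this.width, this.height);
	}
};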

Second is network or I/O (input/output) calls. For example, I want to show some text on this page; I know you’re German speaking based on the browser string, so I make a network request to my CMS (content management system) to get the German content. This may take a second or many seconds. In the synchronous or “blocking” world, the code would stop executing at that point and wait for that request to the server to spit back the text we need. The user’s mouse cursor would move, but no clicks, drags, touches, double-clicks, rollovers, touch indicators, or keyboard usage would function, nor would the user see the UI visually update; the app looks like it’s broken. Even if it had a loading icon, it’d be paused. The UI is “locked” or “blocked”.
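You can actually feel this in the browser with the synchronous flag on XMLHttpRequest (long deprecated on the main thread for exactly this reason); a sketch with a hypothetical endpoint:

// the 3rd parameter, false, makes the request synchronous; the entire UI
// freezes until the server responds
var xhr = new XMLHttpRequest();
xhr.open("GET", "/getGermanContent", false);
xhr.send();
// nothing below here runs, and nothing repaints, until the response arrives
console.log(xhr.responseText);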

Cool, so for things that require us to go outside of JavaScript for data, like the server or the local disk/database, we’ll just make those asynchronous, which the Browser developers have done for us. The same holds true for server development platforms like Node and gaming ones like Corona and Unity. Other blocking languages like Google’s Go don’t have this issue/feature.

This, however, creates 2 new problems: specifically, how to write code that “waits” for things, and how to chain these scenarios without making the code unreadable or hard to follow.

Callbacks are the first way. We’ve seen that for simple code, they are the fastest, easiest, and most flexible way to do that. “Whenever you’re done, homey, call me”. Below is how jQuery does asynchronous loading of data from t3h interwebz:

$.ajax({
  url: "https://jessewarden.com",
  success: function(data)
  {
  	console.log("loaded data:", data);
  }
});

“Load data from here, and call this function when it worked, passing me the data as the 1st parameter, kthxbai.”

Events were another attempt at this, for multiple parties who wish to know about some async event. You can utilize the same API, your code looks the same, and the key here is that it works the exact same whether synchronous or asynchronous. The pseudo code below shows synchronous click code with asynchronous image loading code. Notice how the API looks and works the same, yet the first callback fires immediately when you click, in the same call stack, whereas the 2nd fires after some random amount of time, never in the same execution stack.

function onClick(event)
{
   console.log("clicked");
}
button.addEventListener("click", onClick);

function onImageLoaded(event)
{
   console.log("image loaded");
}
// note the event name is "load"; "onload" is the property form
image.addEventListener("load", onImageLoaded);

That’s all well and good until you start utilizing class composition; meaning you utilize many objects that have events or callbacks in a parent class. One convention is to register all the event listeners up top, and then define the event handlers below, in the order they were added. If the event handlers themselves utilize functions, you either put those functions nearby, breaking the ability to find the handlers in order, or put them below the callbacks and force those reading/debugging to jump around your class.

For the sake of the reader, I’ll keep the below brief, but tripling the lines of code below is something I commonly see:

define(["jquery",
		"Underscore",
		"com.company.project.views.InventoryView",
		"com.company.project.views.ScheduleView",
		"com.company.project.views.ProfilePopUp",
		"com.company.project.events.EventBus",
		"com.company.project.utils.InventoryUtils"], 
		function($,
					_,
					InventoryView, 
					ScheduleView, 
					ProfilePopUp,
					EventBus,
					InventoryUtils)
{
	var SomeClass = {
		init: function()
		{
			// bind handlers so "this" is the class, not the DOM element
			_.bindAll(this, "onClick", "onInventory", "onScheduleTask",
				"onToggleProfile", "onToggleEditProfile");

			$("#SubmitButton").click(this.onClick);
			$("#InventoryButton").click(this.onInventory);
			$("#ScheduleTask").click(this.onScheduleTask);
			$("#ToggleProfile").click(this.onToggleProfile);
			$("#ModalProfileEditForm").click(this.onToggleEditProfile);
		},

		onClick: function()
		{

		},

		onInventory: function()
		{

		},

		onScheduleTask: function()
		{

		},

		onToggleProfile: function()
		{


		},

		onToggleEditProfile: function()
		{

		},

		someHelperMethodReferencedRandomlyAbove: function()
		{

		}
	};
	return SomeClass;

});

It looks nice at first, but then becomes a long class you have to constantly scroll up and down through. It gets worse as those event handlers and methods grow in size or number, or have comments attached to them. You’ll notice the above is using UI events; this can be deceiving if some of those callbacks are actually listening to global pub sub messages. It gets confusing which functions are responding to global events and which are responding to local UI ones. Your unit tests start to get pretty long as well. Languages which handle scope for you, like CoffeeScript and TypeScript, can help a little, but you still end up with the same problem: lots of code that’s hard to pin down to localized functionality; i.e. “all the stuff this class does about this one thing is right here” vs. spread out all over the place.

Now, remember, there are those of us from other languages/platforms where this was the norm and considered ok (even if you didn’t have a deadline). Worse, anonymous functions in some languages needed to be scoped to SOMETHING or else the garbage collector would eat them, leading to less terse code without a language change (i.e. lambdas in t3h newer Java). JavaScript’s and Lua’s friendliness to lambda/anonymous functions, in terms of the Garbage Collector not eating them in Mark and Sweep phases, means they are a common way to reduce code length in larger code bases, at a tiny cost to function creation time (anonymous functions vs. class member / Object.prototype ones).

Where async code really starts to break down is with data orchestration and animation: 2 common asynchronous operations central to UI development. Additionally, we have no idea if any of those events already happened before the class executed. For UI events that are user gestures (the user clicked on something), we don’t care, but for model or other UI asset changes, we do. This is why you’ll often see those event handlers called at the bottom of the init to “set the class up in case she’s late to the data party”.

Jake Archibald has a great, and often used, example that wonderfully illustrates how the above provides a false sense of security, specifically around the loading of an image. I’ll copy that example here verbatim:

var img1 = document.querySelector('.img-1');

function loaded() {
  // woo yey image loaded
}

if (img1.complete) {
  loaded();
}
else {
  img1.addEventListener('load', loaded);
}

img1.addEventListener('error', function() {
  // argh everything's broken
});

2 important points in this example.

First, we burden the consumer, the developer using the class that emits events, to check if it already fired, and if so, manually fire her event handler, assuming the handler accepts a function call without an event parameter; JavaScript does, while in CoffeeScript(?), TypeScript, and Dart you have to specifically mark those functions to take the event as an optional parameter.

Second, my SomeClass example above doesn’t have any error handling at all. Most apps, at best, log errors with little to no user experience thought given to those scenarios. Very rarely do any of them react to errors that happened before they’re instantiated for the user (i.e. a graph is shown, but the data that was supposed to be loaded before it was shown failed to load). No one wants to spend time creating user experiences around a broken server, but it’s important. Pitching clients to spend a lot of their application development budget on designing user experiences around things breaking seems counterintuitive to quality software craftsmanship. Thus, this is often left to developers who have little time for such concerns, hence why error handling is often such a horrible experience in many applications. It’s not their responsibility, it’s the Designer’s. Catch 22, sucka!

When you take into account both events already having happened before you’ve arrived on the scene, as well as errors, you can see how the code above gets much more verbose and complicated to follow.

One common workaround, at least in MVC applications, is to allow View classes to be extremely friendly to null values in the Models. That way, if the data isn’t loaded yet, they can show a loading indicator or something to the user while it loads or fails; otherwise they can immediately draw the model when they’re shown. The thinking goes “someone else will handle orchestrating everything in the correct order later” and even “somehow allow this to be easily refactored”. This can work if your Models aren’t that complicated, but it still makes the code quite verbose.
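A sketch of what that null friendliness looks like (the view, element id, and Model are hypothetical):

var profileView = {
	userModel: null,
	render: function()
	{
		// be friendly to a Model that hasn't loaded yet
		if(this.userModel == null)
		{
			$("#profile").html("Loading...");
			return;
		}
		$("#profile").html(this.userModel.name);
	}
};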

The second place the SomeClass example starts to break down is when you have multiple asynchronous events that depend on each other.

var userView = {
	userModel: null,
	imageTag: null,
	showAvatar: function()
	{
		var imageURL = this.userModel.avatarURL;
		// capture the tag; "this" inside the callback won't be the view
		var imageTag = this.imageTag;
		imageLoader.get(imageURL, function(bitmapData)
		{
			imageTag.showBitmapData(bitmapData);
		});
	}
};

You’ll notice in the above simple JavaScript View, if the model data for the user is not yet loaded from the server AND the HTML and CSS needed to display the image haven’t been loaded either, the View won’t work. 2 options here: either check for null and do nothing, assuming “you didn’t load the data for me, idiot; I’m not showing anything, but I’ll be nice to you and not throw errors”, or put the burden on the developer using the View to get both of those things ready ahead of time.

Many MVC and web application tutorials out there assume you have good REST services that give you only the data you need, and that you only need to make 1 call for the entire app, or per section the user sees, with a good set of data caching code behind it. Backbone is notoriously seductive in this respect: its Models are built around this assumption, with no guidance given to those who aren’t actually hitting REST services.

If you’re a JavaScript client developer, and you have a bad ass Node developer who has your back and y’all can communicate often as requirements change… you have it quite well, my friend.

This is simply not how enterprise applications work in the real world, although I’m seeing this change once .NET/Java business tier developers realize their jobs aren’t in jeopardy because of Node. You’re often building atop .NET or Java backends that are several years old, many of which were in support of page based web sites or desktop applications long before you got involved. Getting the back-end team to refactor this for the best user experience takes an act of God, especially if you’re one of hundreds of applications that consume their services.

Unless you got Node into the mix, you’ll be making many calls, often multiple times, before a particular View/Screen can be shown to the user… in addition to your other data orchestration duties. Sometimes you’re lucky if you can abstract these multiple calls away into a single class that dispatches 1 event. If you’re not, you usually have the two common problems: multiple classes that only have pieces of the data you need and continually need refreshing, or lots of data on the client you have to parse, as is the case in dashboards and graphing applications. This isn’t just ajax calls we’re talking about either; this could be JavaScript code running on the client that needs to run long running parsing code that may or may not include an ajax call, image and other media assets, and various multiple authentication calls from a variety of servers.

For example, many financial applications often pull data from a variety of back-end data sources to show on 1 screen. Whether these are different sub-domains, or a single REST/SOAP/RPC server, they could still all work slightly differently and take varying amounts of time to respond. Sometimes you have to call them in a certain order, since each successive call gives you a token/id/session cookie needed for the following call for the actual data, some of which you can’t cache because the nonce tokens (proof you’re a user who’s logged in) can expire quickly for financial data.

Let’s show an example where you first need to get some user info to snag the user’s current token (which changes often), so you can then get info on a particular financial transaction (something the user bought), which is then used to fetch charting data related to its id + type (which the server has conveniently stashed as a bunch of huge JSON files on a Heroku friendly CDN somewhere, so you can parse on the client to show charts).

However, that last call has a TON of data, so you’re using a polyfill (a library that allows you to use future HTML5 functionality, today) that supports WebWorkers (threads in JavaScript) so you can parse the data safely and attempt to save to LocalStorage, to avoid the slow latter part of the process if the user views a particular transaction again later. I’ve also had cases where the type dictates which kind of data we have, thus requiring another parsing call and a basic fork in the path of how you parse the code. I’ve also left out the fun parts of debugging unknowingly neutered ArrayBuffers… but let’s keep it simple (lulz).

Quite a mouthful yet common in the screwed up world of Enterprise Software.

var currentClientToken = null;
var clientID = null;
var transactionID = null;
var graphWorker = null;
var done = false;

function getCompleted()
{
	return done;
}

function getClientToken(theClientID, theTransactionID)
{
	done = false;
	clientID = theClientID;
	transactionID = theTransactionID;
	// jQuery's $.get doesn't take an error callback; use the jqXHR's done/fail
	$.get("/getClientInfo", {clientID: clientID})
		.done(onGetClientTokenSuccess)
		.fail(onGetClientTokenError);
}

function triggerGenericError(error)
{
	EventBus.trigger("GetChartingInfo:error", error);
}

function onGetClientTokenError(error)
{
	done = true;
	triggerGenericError(error);
}

function onGetClientTokenSuccess(data)
{
	try
	{
		var clientVO = ClientFactory.get(data);
		currentClientToken = clientVO.token;
		$.get("/getTransactionInfo",
			{token: currentClientToken, transactionID: transactionID})
			.done(onGetTransactionInfoSuccess)
			.fail(onGetTransactionInfoError);
	}
	catch(err)
	{
		triggerGenericError(err);
	}
}

function onGetTransactionInfoError(error)
{
	triggerGenericError(error);
}

function onGetTransactionInfoSuccess(info)
{
	try
	{
		var transactionVO = TransactionFactory.get(info);
		var currentTransactionURL = "https://sub.company.com/project/graphingcdn/" + transactionVO.id + "/?clientID=" + currentClientToken;
		$.get(currentTransactionURL)
			.done(onGraphDataSuccess)
			.fail(onGraphDataError);
	}
	catch(err)
	{
		triggerGenericError(err);
	}
}

function onGraphDataError(error)
{
	triggerGenericError(error);
}

function getGraphWorker()
{
	// lazily create and cache the WebWorker
	if(graphWorker == null)
	{
		graphWorker = new Worker("GraphFactory.js");
		graphWorker.onmessage = function(event)
		{
			onGraphDataParsed(event.data);
		};
	}
	return graphWorker;
}

function onGraphDataSuccess(graphData)
{
	getGraphWorker().postMessage(graphData);
}

function onGraphDataParsed(graphVO)
{
	if(graphVO != null)
	{
		done = true;
		EventBus.trigger("GetChartingInfo:success", graphVO);
	}
	else
	{
		// parsing failed
		EventBus.trigger("GetChartingInfo:error", new Error("graph data parsing failed"));
	}
}

… to recap our problems, we now have:

  1. lots of code
  2. you have to scroll to find where an event is handled
  3. finding event handlers for events registered within event handlers makes it even harder to follow
  4. using if statements to handle potential race conditions
  5. burden to ensure sequence can only start once with ability to stop it at any point
  6. forced to remove event listeners to help garbage collection
  7. code not fully shielded from uncaught exceptions
  8. burden of error handling can make API unwieldy

WAT DO!?

Science has shown that less code equals better software. Through 1 study at least.

Enter the Promise (known as Futures in Dart). They are a way to make asynchronous code look and feel like synchronous code.

Our “use” of the above is turned into 8 lines of code (6 if you cuddle, but Machete don’t cuddle):

getGraphData().then(function(graphVO)
{
	EventBus.trigger("GetChartingInfo:success", graphVO);
},
function(error)
{
	EventBus.trigger("GetChartingInfo:error", error);
});

Now, with Underscore and some refactoring using your module system of choice, you could get the above to be similar without Promises:

EventBus.on("GetChartingInfo:success", function(graphVO)
{
	drawIt(graphVO);
});
EventBus.on("GetChartingInfo:error", function(error)
{
	console.error("GetChartingInfo:error:", error);
});
getGraphData();

9 lines (7 cuddled) vs. 8 (6 cuddled); not bad. However, it’s not apples to apples either. They do completely different things, and the Event way doesn’t guarantee the same order in the same stack like Promises do. Subtle, but painful for n00bs to debug when they don’t know what’s going on. Heck, it’s painful for me!

The promise way only has 1 callback. The Events have multiple potential ones.

The event listeners must be registered before the call to getGraphData happens; if they aren’t, you do not get the data. While getGraphData may internally cache the data, you must call it again after your listeners are registered to get that data. For smaller applications, this is easy; for larger ones, it’s more challenging. It also puts a burden on those types of classes to make a caching decision sooner in order to offer an easier API to consume.

The Promise does not care if you call it before the value(s) internally have been fetched from the server yet or not; you can call it anytime and get the same result. There is never a race condition problem using Promises as opposed to Events.
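A sketch of why there’s no race condition: a then registered long after resolution still fires, with the same cached value (shown here with Q, since that’s the library discussed below):

var dataPromise = Q.fcall(function()
{
	return "graph data";
});

// registered before resolution; fires on the next tick
dataPromise.then(function(data)
{
	console.log("early listener:", data);
});

// registered a second later, long after resolution; still fires, same value
setTimeout(function()
{
	dataPromise.then(function(data)
	{
		console.log("late listener:", data);
	});
}, 1000);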

Also, most Promises cache the last resolved value in RAM by default, if you follow the Promise/A spec.

API wise, it should be noted that an Event by default allows multiple responders to hear about a success or failure, whereas the Promise is only executed with 1 callback. This is neither good nor bad, just something to be aware of when comparing the callback nature of Promises to Events.

A minor point, but the events now have a message name chosen. You now need a way to track that message. If you just use a magic string, fine, but for large applications you tend to use some form of enumeration or constants class to help with management, strong-typing & code hinting for IDEs that support it, and to ease the developer’s need to view all possible messages in a central place without a grep. Now a decision must be made; where do these enumerations go? In the dispatching class as statics/constants? In 1 class that supplies all global pub sub events for the entire application? Each has their pros and cons, but both include more code, cognitive overhead, and crap for you to deal with.
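A sketch of the constants option, using the message names from the charting example above:

// one central place to grep for every global pub sub message
var ChartingEvents = {
	SUCCESS: "GetChartingInfo:success",
	ERROR: "GetChartingInfo:error"
};

EventBus.on(ChartingEvents.SUCCESS, drawIt);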

Lastly, most Promises & promise libraries will execute on the next tick, whereas most event, pub sub, and streaming libraries do not. Yes, you can add a setTimeout(func, 0), but that’s a hack on top of an API most assume, and code against, as synchronous. There are a variety of use cases here that aren’t described very well, so I’ll cover expecting order of execution to work, whether in sync or async code, below. This is also why many pub sub libraries have extra error handling around removing listeners: if it happens in the middle of a dispatch, you can accidentally prevent others from getting the message.

Now, Jake’s article covers all you need to know about how Promises work, and Q’s documentation is pretty legit as well to help you create Promises as well as work with libraries that aren’t Promises out of the box.

Done Yet?

Promises, whether ES6 or Q, store the state of whether the Promise is done or not internally. You don’t have to write code to do that; it’s part of them implementing the state machine part of the Promises/A spec. The Event option, however, has to allow the “done” boolean to be managed amongst the many callbacks.

This internal state management has a positive effect on subsequent calls with the same data, specifically getClientToken, getTransactionInfo, loading the transaction JSON data, and receiving the parsed data from the WebWorker. Promises will simply cache the data and re-deliver it, usually immediately on the next tick (the next time the stack is unwound, then wound).

In the Event abstraction, you’ll have to do that yourself. In both cases, you’ll have to utilize whatever local storage option you wish (cookies, local storage, app cache, etc.), but this functionality of locked-in data is built into Promises; you don’t have to code it, it’s built-in, expected behavior. This includes operations that utilize multiple Promises internally. In our case, that’s about 4 Promises, each of which has the built-in ability to store its data if it hasn’t changed.
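A sketch of what that looks like from the consumer side, wrapping jQuery’s jqXHR in Q (the endpoint is hypothetical): the first call starts the fetch, and every later call re-uses the same Promise and its cached value.

var graphDataPromise = null;

function getGraphData()
{
	// first call kicks off the work; every later call re-uses the same
	// Promise, which re-delivers the cached value on the next tick
	if(graphDataPromise == null)
	{
		graphDataPromise = Q($.get("/getGraphData"));
	}
	return graphDataPromise;
}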

You can also refactor asynchronous code that is not a Promise to be a Promise. That is one way developers have attempted to solve the above problems.
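For example, here’s Jake’s image scenario refactored into a Promise using Q’s deferred; a sketch, not the One True Way:

function loadImage(img)
{
	var deferred = Q.defer();
	// already loaded before we got here? resolve immediately
	if(img.complete)
	{
		deferred.resolve(img);
	}
	else
	{
		img.addEventListener("load", function()
		{
			deferred.resolve(img);
		});
		img.addEventListener("error", function()
		{
			deferred.reject(new Error("image failed to load"));
		});
	}
	return deferred.promise;
}

loadImage(document.querySelector(".img-1")).then(function(img)
{
	// woo yey image loaded
}, function(error)
{
	// argh everything's broken
});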

Solving Lots of Code

Promises are made part of async APIs. As such, instead of first registering for an event before calling something that will trigger it, you just call the method and the Promise is the returned value. That’s 2 lines of code down to 1. Like callbacks.

This has larger ramifications down the line when your async call is part of a series of async calls, allowing them to be chained as well as put into groups where they all must complete before another operation can succeed. This further cuts down on needing to define event handlers.
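A sketch of both, using Q, and assuming each function has been refactored to return a Promise (loadHTMLTemplate, loadCSS, getUserModel, and showAvatar are hypothetical):

// chaining: each then waits on the Promise returned by the previous step
getClientToken(clientID)
.then(getTransactionInfo)
.then(getGraphData)
.then(function(graphVO)
{
	EventBus.trigger("GetChartingInfo:success", graphVO);
})
.fail(triggerGenericError);

// grouping: all 3 must succeed before the View draws
Q.all([loadHTMLTemplate(), loadCSS(), getUserModel()])
.spread(function(template, css, userModel)
{
	showAvatar(userModel);
});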

Scroll to Find Code

No more scrolling, it’s defined inline. The call and the responses are all next to each other.

Nested Event Handler Registration

This problem isn’t solved immediately with Promises. You can get into situations where people do not return Promises from async code, and they end up doing work in the success handlers. Through refactoring you can flatten these deeply nested chains. Thomas Burleson has a great example where he shows both.
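A before/after sketch of that flattening, with hypothetical functions that each return a Promise:

// nested: each success handler starts the next call inline, and the
// pyramid of doom returns
getClientToken(id).then(function(token)
{
	return getTransactionInfo(token).then(function(info)
	{
		return getGraphData(info).then(function(graphData)
		{
			console.log("graph data:", graphData);
		});
	});
});

// flattened: return the next Promise and let the chain do the waiting
getClientToken(id)
.then(getTransactionInfo)
.then(getGraphData)
.then(function(graphData)
{
	console.log("graph data:", graphData);
});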

Handling Race Conditions

Part of the spec for Promises is to solve this issue. Promises will automatically call their success or error method if the event has already occurred. This, again, reduces the amount of code you need to write when consuming async services. Additionally, it adds flexibility to call them in any order, at any time you need to; no need for orchestration/setup code to ensure everything is “ready for the data events that will fire and YOU NEED TO BE THERE FOR”. You instead react. This is part of where the “reactive” in Functional Reactive Programming comes from.

Finally, Promises have an internal state machine convention: they only ever complete once, so you don’t have to worry about your error or success methods firing more than once. They are not like events in this respect, which are known to fire many times.

This is one of the main problems that Events and Pub Sub run into when the application starts to grow in size and developers.
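A sketch of that complete-once guarantee, using Q’s deferred:

var deferred = Q.defer();

deferred.promise.then(function(value)
{
	console.log("fires exactly once with:", value);
});

deferred.resolve("first");
deferred.resolve("second");             // ignored; already fulfilled
deferred.reject(new Error("too late")); // also ignored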

Aborting Sequence

Promises do not solve the abort sequence; i.e. you cannot easily stop a long async Promise or chain of Promises once they’ve started. De-reffing the parent in hopes the child will get garbage collected isn’t surefire; adding APIs to a Promise based class is a more prudent option, although more work in the case of many Promises through composition.
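A sketch of that “add your own API” option (CancellableOperation is hypothetical): the chain still runs to completion internally, but a flag keeps the result from flowing downstream.

function CancellableOperation()
{
	this.cancelled = false;
}

CancellableOperation.prototype.start = function()
{
	var self = this;
	return getGraphData().then(function(graphVO)
	{
		// the async work still ran to completion internally; we just
		// refuse to hand the result downstream
		if(self.cancelled)
		{
			throw new Error("operation cancelled");
		}
		return graphVO;
	});
};

CancellableOperation.prototype.cancel = function()
{
	this.cancelled = true;
};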

More advanced APIs such as Dart’s StreamController and RX.js’ backpressure support have what you need to wrap Promises with the ability to stop async sequences, or even pause the events being emitted from the emitter. However, both incur a completely different paradigm of programming, so it’s not as simple as “oh, I’ll just use this awesome library/x-compiler…”. We’ll get into that in the Streams section.

No More Cleanup Code

Promises, if their values are non-global and the chains are local to the function block, will get eaten by garbage collection once you de-reference them.

From a memory standpoint, this can be a blessing or a curse. Promises are ensured to only execute one time, and they cache the value for each additional call, assuming the inputs don’t change, all while retaining the same code execution order. However, they basically act like a getter; they cache the data they reference, as a form of Observer, albeit with 1 listener to start. Note, the Observer pattern is different from what stream APIs refer to as Observables. People will use the word “observable” to refer either to something that emits change events, like the Observer design pattern, or to a watchable property/stream in something like RX.js/Bacon.js. For large data sets, images, etc. you may not wish to leave this stuff laying around in RAM.

Check out this post (plus the comments full of lulz) from Drew Crawford talking about how brazenly just caching all data in Promises isn’t always a good idea.

Shielded From Exceptions

In the native implementations, Promises are built around the concept of success and failure. This includes built-in support for exceptions: if one is thrown in the Promise’s constructor, the error callback is called. Additionally, you can interpret a successful response from the server as a mistake and safely cause an error condition.

That said, the burden of good error handling is still on the developer: in case an error does bubble up through a deep Promise chain, you want a good idea of where it came from. The same goes for good synchronous error handling through tried and true try/catch blocks within more complex Promise operations. Looking through the stack trace to work your way back down is still required in some cases, which, again, can be mitigated with an effective logging solution. Theoretically, yield used within a Generator function could help mitigate some of the error hunting as well.

Also, take note that some Promise libraries take the positive initiative to give you stack traces that are… you know… actually useful in debugging, such as Q (check out longStackSupport = true) and Bluebird’s longStackTraces. These types of debugging features are NOT in Angular’s version of $q and its Promise API. Do the hard work of being proactive yourself with good try/catch error messages and logging.
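Enabling it in Q is one line (per Q’s docs it costs speed, so development only):

Q.longStackSupport = true;

Q.fcall(function()
{
	throw new Error("boom");
})
.fail(function(error)
{
	// the stack now spans the async boundaries that built the chain
	console.log(error.stack);
});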

Also note, many Promise libraries will allow you to apply multiple catches to differentiate between the various types of async errors, which is a wonderful practice you should endeavor to adopt.
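A sketch of differentiated catches, assuming getGraphData returns a Bluebird promise (Bluebird’s filtered catch takes an error constructor first); NetworkError is a hypothetical custom error type:

// a custom error type; instanceof must work for Bluebird's filtered catch
function NetworkError(message)
{
	this.message = message;
	this.name = "NetworkError";
}
NetworkError.prototype = Object.create(Error.prototype);

getGraphData()
.then(drawIt)
.catch(NetworkError, function(error)
{
	// connectivity/server problem: maybe show retry UI
})
.catch(function(error)
{
	// everything else: parsing bugs, programmer errors, etc.
	console.error(error);
});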

Next Tick

Some Promise libraries make a big deal about “nextTick”. The reason is that most people, when writing synchronous code, expect the same order of execution no matter how many times the code is run. While Promises aren’t technically synchronous code, you write them as if they were.

For example, we all expect the function below to print out 1, 2, 3 no matter how many times it’s called:

function dreamMachine()
{
	console.log("1");
	console.log("2");
	console.log("3");
}

We also expect the function below to print out 1, 3, 2 no matter how many times it’s called:

function dreamMachine()
{
	console.log("1");
	Q.fcall(someFunction).then(function()
	{
		console.log("2");
	});
	console.log("3");
}

This is one feature Promises have above simple Pub Sub: your callbacks are fired in the same order regardless of how long the response takes, and whether the data is cached or not.

Those who get mad about it are the ones who want faster code, typically server-side Node developers. The nextTick is often, at minimum, a setTimeout(runTheThenFunction, 0). This’ll significantly warp benchmarks and make the code look slow, whereas in reality the code isn’t actually using CPU.

However, it IS a problem in that the code still takes more time, and it can significantly increase a unit test suite’s runner time. There are various Promise/A+ spec compliant libraries that will give you a flag to turn this off. This is for developers who know exactly why you’ll get 1, 2, 3 in some situations above. If you don’t know why that can happen, leave the setting off, heh!

Be aware, Node.js has a more efficient nextTick with configuration options.

Another use case is asynchronous initialization. Some objects/classes have significant amounts of asynchronous setup to do. You want an opportunity to be aware of some of these operations, say the loading of images in gaming Sprites, or a server-side logging library that must load and parse JSON file based configuration values before being used. Incidentally, this makes writing invalidation routines on top of existing GUI code a bit easier.

Finally, in the case of networking operations, you don’t always want to impose knowledge of order of operations on the user. Instead, let them set and call things in any order, and let the class figure out the true order on the nextTick/frame.

For example, imagine if jQuery’s ajax didn’t have object initialization as its API. Instead of:

$.ajax({
  url: "https://jessewarden.com",
  success: function(data)
  {
  	console.log("loaded data:", data);
  }
});

It’d be:

var operation = $.ajax();
operation.url = "https://jessewarden.com";
operation.dataType = "html";
operation.type = "GET";
operation.success = function(data)
{
	console.log("loaded data:", data);
};
operation.load();

You not only have to learn an API, but also the order in which the methods are called, just to make a simple GET call? What if even more code later continues to modify, or even overwrite, the operation’s parameters?

Using nextTick, calling order doesn’t matter. Internally, the actual XHR call won’t be made until the next frame anyway, so the ajax operation can handle setting everything up in the order it needs. This follows OOP encapsulation principles, and makes life easy on the developer.
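A sketch of that internal deferral (AjaxOperation is hypothetical): the constructor schedules the real work with setTimeout(…, 0), so property assignment order doesn’t matter.

function AjaxOperation()
{
	this.url = null;
	this.type = "GET";
	var self = this;
	// defer the real work to the next tick; the caller can set properties
	// in any order before the current stack unwinds
	setTimeout(function()
	{
		$.ajax({url: self.url, type: self.type});
	}, 0);
}

var operation = new AjaxOperation();
operation.type = "GET";                     // order no longer matters...
operation.url = "https://jessewarden.com";  // ...all set before the next tick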

Promise Pros

Promises significantly reduce the amount of code you have to write for asynchronous operations where you’d traditionally use callbacks or events.

They also localize it vs. spreading it out like events and callbacks are wont to do. Just be sure you practice flattening Promise chains so you don’t end up with the opposite problem: Promise chain hell vs. callback hell.

They also make chaining multiple asynchronous operations together much easier to read and debug.

They help reduce race conditions because a Promise that already has a cached value merely calls the callback immediately. This also has the side benefit of caching data in memory for expedited calls later on, especially for those classes/Views initialized later in the application lifecycle. They also make writing asynchronous libraries easier on the consumers, because consumers can call their setup calls in any order, knowing the actual resolving of them will occur on the next frame.

Finally, you can write your code and not have to care whether something is synchronous or asynchronous; it looks and is written in a synchronous way.

Promise Cons

For those who are new to Promises, as I mentioned, they end up creating callback hell using Promises. Rather than creating their own operations that return Promises, they merely use Promises and continue to nest them. Again, check out Thomas Burleson’s great example where he shows how to flatten deeply nested Promise chains. Once you practice, you can quickly flatten these trees as you create them. Eventually you’ll start writing methods that return Promises by default, making your life easier. Like once you memorize a for loop’s syntax, you never think about it again.

For low memory situations on mobile, you could get in trouble caching data in Promises. While you’re supposed to follow the Promise/A spec and ensure a Promise’s fulfilled state stays true and never changes the data, care must be taken to ensure you aren’t keeping these around unnecessarily, especially on mobile devices. This isn’t necessarily indicative of Promises, rather caching in general, which is a commonly hard programming problem. It’s just brought up because most Promises handling async operations are often around data you want to cache.

Caching isn’t straightforward either. Some libraries help in this regard, but the basic programming concerns of by-value vs. by-reference apply. When handing out values, you’re basically assuming the developer doesn’t change them if they’re by reference. Cloning has performance impacts as well for deeply nested objects. Following the Promise/A spec with regards to not changing the value is often based on convention and isn’t always easily enforced. Don’t get me started on the insanity of sharing ArrayBuffers across WebWorkers; shiz is magic death.

<< Part 4 – Publish Subscribe | Part 6 – Streams >>