The argument in favor of Promises for Promises is structured as follows:
- Examples
  - Lazy Promises
  - Remote Promises
  - Partial Results
  - Error Handling
  - Testing
- Abstract Theory
  - The Identity Law
  - Parametricity
It has occasionally proved useful to have promises which are lazy. That is to say, they don't actually do the work until someone asks for the value via `.then`. These are useful in two scenarios:
- Providing a fluent API:
e.g. superagent (superagent's actual API is not promise based):
```js
request.post('/api/pet')
  .send({ name: 'Manny', species: 'cat' })
  .set('Accept', 'application/json')
  .then(function (res) {
    // Use result
  }, function (err) {
    // Handle error
  });
```
In this instance, you could return a partially configured request from an asynchronous method, and the caller could finish configuring it and then resolve the request. They can't do that if it's always already resolved.
- When you don't know if a value will actually ever be used and it's expensive to compute.
If promises are always flattened, you lose the ability to maintain this laziness across multiple chained `.then` calls. You essentially lose some control over when the promises are evaluated.
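The idea can be sketched with a small wrapper that defers the underlying work until the first `.then` call. The `LazyPromise` name and shape here are illustrative, not any particular library's API:

```js
// Illustrative sketch: a promise-like wrapper that only starts its
// work the first time .then is called, then caches the result.
function LazyPromise(executor) {
  var promise = null;
  this.then = function (onFulfilled, onRejected) {
    if (!promise) promise = new Promise(executor);
    return promise.then(onFulfilled, onRejected);
  };
}

var started = false;
var lazy = new LazyPromise(function (resolve) {
  started = true; // expensive work would go here
  resolve(42);
});

// Nothing has run yet:
console.log(started); // false

lazy.then(function (value) {
  console.log(value); // 42
});
```

Because the wrapper itself has a `.then` method, a flattening promise library that received it would immediately call `.then` to assimilate it, defeating the laziness.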
Remote promises are a similar problem to lazy promises. The idea here is that you have a "promise" for a remote object, such as a database. You can call methods on it (that return promises) but you can't get the object itself. These remote promises may resolve with themselves when you call `.then` on them. They could equally well just not have a `.then` method, though.
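A minimal sketch of such a handle, assuming a hypothetical `sendCommand` transport standing in for the network round-trip:

```js
// Hypothetical remote handle: each operation returns an ordinary
// promise, but you can never obtain the underlying object itself.
function makeRemote(sendCommand) {
  return {
    query: function (sql) {
      return Promise.resolve(sendCommand('query', sql));
    },
    close: function () {
      return Promise.resolve(sendCommand('close'));
    }
    // Deliberately no .then method: the handle is not itself a promise,
    // so a flattening promise library never tries to "unwrap" it.
  };
}

// Fake transport for illustration only.
var remoteDb = makeRemote(function (op, arg) {
  return { op: op, arg: arg };
});

remoteDb.query('SELECT 1').then(function (result) {
  console.log(result.op); // 'query'
});
```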
A nice, neat example is where a function returns something for which partial results may be useful, and may be available much faster than the whole results. Let's assume for the example that `promise` can create a promise for a promise, and that `promise.mixin(obj, fn)` mixes the keys of `obj` into the promise.
This code is a `get(url)` function that makes an HTTP request and returns a promise for a promise:
```js
var request = require('hyperquest');
var concat = require('concat-stream');

// Note: `promise` and `promise.mixin` are the hypothetical helpers
// described above, not a real library.
function get(url) {
  return promise(function (fulfill, reject) {
    var responseStream = request(url, function (err, response) {
      if (err) return reject(err);
      // Fulfill early with the response (headers etc.), mixed into a
      // second promise that will later fulfill with the full body.
      fulfill(promise.mixin(response, function (fulfill, reject) {
        responseStream
          .pipe(concat(function (err, body) {
            if (err) return reject(err);
            fulfill(body);
          }));
      }));
    });
  });
}
```
Example usage:

```js
get('http://example.com/foo.json')
  .then(function (response) {
    if (response.statusCode !== 200) {
      throw new Error('The server responded with status code ' + response.statusCode);
    }
    return response;
  })
  .then(function (body) {
    return JSON.parse(body);
  });
```
If you have a promise for a promise, and you know which part of a multi-stage operation each promise represents, you may be able to do better error handling because you can know more precisely where the error was thrown. It could be argued that this is better served by branding the errors (e.g. node.js `ex.code === 'ENOENT'`).
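Branding an error, Node.js style, might look like the following sketch (the `fetchConfig` function and the behavior on `ENOENT` are made up for illustration):

```js
// Sketch: tag an error with a machine-readable code so the handler can
// tell which kind of failure occurred, without nesting promises.
function fetchConfig() {
  var err = new Error('config file not found');
  err.code = 'ENOENT'; // brand the error, Node.js style
  return Promise.reject(err);
}

fetchConfig().catch(function (err) {
  if (err.code === 'ENOENT') {
    console.log('missing file, using defaults');
  } else {
    throw err; // unknown error: re-throw
  }
});
```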
I'm including testing here because it's often repeated as an example of why you would swap one monad for another. This doesn't necessitate nested promises, but I want to explain why it's actually not useful at all.
The idea is that in your unit tests you mock out functions that normally return a promise by returning an identity monad instead. This identity monad is then deterministic and synchronous. There are two reasons why this is misguided:
- The added determinism is a myth: most promise libraries are actually very close to deterministic with already-resolved promises, and it's more useful to have tests that occasionally exhibit the pathological error cases (even if they're hard to reproduce) than to have tests that never generate those error cases at all.
- Promises are always asynchronous, so it's safe to code with that assumption; it's dangerous to code with the assumption that they're always synchronous. If your tests are synchronous but your runtime is asynchronous, you can assume neither, and may accidentally assume that they are synchronous.
To conclude: it's never a good idea to swap a promise for an identity monad in unit tests; just use a fulfilled promise.
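A stub built on a fulfilled promise keeps the real asynchronous timing. The `getUser` function here is a hypothetical dependency being stubbed:

```js
// Stub an async dependency with an already-fulfilled promise rather
// than a synchronous identity monad.
function makeGetUserStub(user) {
  return function getUser(id) {
    return Promise.resolve(user); // still asynchronous, like the real thing
  };
}

var getUser = makeGetUserStub({ id: 1, name: 'Ada' });

var ran = false;
getUser(1).then(function (user) {
  ran = true;
  console.log(user.name); // 'Ada'
});

// Proof the stub is still asynchronous: the callback has not run yet.
console.log(ran); // false
```

If the stub were synchronous, that last line would print `true`, and code written against the stub could come to depend on an ordering the real implementation never provides.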
Consider a method `Promise.of(a)` that returns a promise for `a`. It might be implemented as:

```js
Promise.of = function (a) {
  return new Promise(function (resolver) {
    // fulfill (not resolve): in resolver-style APIs, fulfill does not
    // assimilate thenables, so a promise for a promise is preserved.
    resolver.fulfill(a);
  });
};
```
It is intuitively desirable for the left identity law to hold. It states that the following two expressions are equivalent:

```js
Promise.of(a).then(f)
f(a)
```

provided `f` returns a promise and is deterministic.
This will always be true if `Promise.of` is allowed to return a promise for a promise, but not if it must recursively flatten its argument.
Parametricity is an idea from typed worlds. The concept is that you build a type out of other types. A good example is a list in C# or Java. Lists in those languages are generic: you can build a list of any type of object, and they just take that type as a parameter. You aren't allowed to call any methods on the inner objects, because you don't know anything about their type.
Applying that same concept to an untyped world: if you inspect the contents of a promise in any way, and change your behavior based on those contents, you're no longer fully parameterised. This would mean that promises wouldn't be as generic as they could be, and when you deal with them, it's not as easy to reason about their behavior.
@medikoo I'm actually building this under the umbrella name "promised builtins". For me it makes sense. It's an alluring prospect. In some cases, it would remove the need for me to call "then" - ever. I could just do:
What's so interesting to me is that the array could be a true Array or a PromisedArray. As long as I don't use special language features like `[index]` on it, it simply doesn't matter in the code that consumes the array. See https://github.com/meryn/promised-builtins for a rough sketch.