reduce (aka foldL in FP) is the most general iterative higher order function in JavaScript. You can implement, for instance, map or filter with it. What reduce cannot do, however, is stop the iteration early: it always traverses the whole array, which is a problem for short-circuiting functions like every and some.
Lazy evaluation solves this trivially. While we don't have that in JavaScript, we can emulate it by passing a function instead of a value:
const foldR = f => acc => xs => xs.length
? f(xs[0])(() => foldR(f)(acc)(xs.slice(1)))
: acc // ^^^^^ "lazy"
const map = f => foldR(x => acc => [f(x)].concat(acc()))([])
const every = f => foldR(x => acc => f(x) && acc())(true)
// ^^^^^^^^ short-circuited - f(x) ? acc() : false
let xs = [1, 2, 3, 4];
console.log(map(x => x+1)(xs)); // [2, 3, 4, 5]
console.log(every(x => x%2==0)(xs)); // false
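As a further sketch (this some is my addition, not part of the original snippet), other short-circuiting functions can be derived the same way, because the thunk acc() is only forced when it is actually needed:
const some = f => foldR(x => acc => f(x) || acc())(false)
//                                       ^^^^^^^^ only forced if f(x) is false
console.log(some(x => x > 2)(xs)); // true - stops at 3, element 4 is never visited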
An alternative would be to use continuation passing style (CPS), which lets you easily jump to the end of the function:
const foldL = f => acc => xs => cont => xs.length
? f(acc)(xs[0])(res => foldL(f)(res)(xs.slice(1))(cont))
: cont(acc);
const map = f => foldL(acc => x => cont => f(x)(res => cont(acc.concat([res]))))([]);
//const every = f => // xs => cont =>
// foldL(acc => x => c => f(x)(res => c(acc && res)))(true) // (xs)(cont)
// ^^^^ eager call
const every = f => xs => cont =>
foldL(acc => x => c => acc ? f(x)(c) : cont(false))(true)(xs)(cont)
// call only when acc=true ^^^^ ^^^^^^^^^^^ jump out early otherwise
let xs = [1, 2, 3, 4];
let inc = x => cont => cont(x+1);
map(inc)(xs)(console.log.bind(console)); // [2, 3, 4, 5]
let even = x => cont => cont(x%2==0)
every(even)(xs)(console.log.bind(console)); // false
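For completeness, a filter in the same CPS style could look like this (a sketch of mine, not part of the original code), reusing the CPS predicate even from above:
const filter = pred => foldL(acc => x => cont => pred(x)(res => cont(res ? acc.concat([x]) : acc)))([]);
filter(even)(xs)(console.log.bind(console)); // [2, 4]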
It turns out that the generalization of reduce can be achieved quite easily once you have become accustomed to CPS:
const foldL = f => acc => xs => xs.length
? f(acc)(xs[0])(acc_ => foldL(f)(acc_)(xs.slice(1))) // the continuation receives the new accumulator
: acc;
const map = f => foldL(acc => x => cont => cont(acc.concat([f(x)])))([]); // always continues
const filter = pred => foldL(acc => x => cont => cont(pred(x) ? acc.concat([x]) : acc))([]); // always continues
const every = pred => foldL(acc => x => cont => pred(x) ? cont(true) : false)(true); // escapes with false
const some = pred => foldL(acc => x => cont => pred(x) ? true : cont(false))(false); // escapes with true
const takeN = n => foldL(acc => x => cont => acc.length < n ? cont(acc.concat([x])) : acc)([]); // escapes with acc
const comp = f => g => x => f(g(x));
const not = f => x => !f(x);
const inc = x => ++x;
const odd = x => x & 1;
const even = not(odd);
const lt3 = x => x < 3;
const eq3 = x => x === 3;
const sqr = x => x * x;
const xs = [1, 2, 3, 4, 5];
map(inc)(xs); // [2, 3, 4, 5, 6]
filter(even)(xs); // [2, 4]
every(lt3)(xs); // false
some(lt3)(xs); // true
takeN(3)(xs); // [1, 2, 3]
// we can compose transforming functions as usual
map(comp(inc)(sqr))(xs); // [2, 5, 10, 17, 26]
// and the reducing functions as well
comp(map(inc))(filter(even))(xs); // [3, 5]
comp(takeN(2))(filter(odd))(xs); // [1, 3]
As you can see, this isn't really pure CPS but CPS mixed with Direct Style. This has the great benefit that foldL and the usual transforming functions don't have to carry around an additional continuation argument, but keep their normal signatures. I use CPS only in those parts of the code where it is irreplaceable for achieving the desired behavior. CPS is an extremely powerful construct, but you should always use the least expressive construct that gets the job done.
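To convince yourself that the short-circuiting survives the mixed style, you can wrap a predicate with a small logging helper (probe is just a hypothetical name for this sketch, not part of the code above):
const probe = pred => x => (console.log("visited", x), pred(x));
some(probe(lt3))(xs); // logs "visited 1" only - the remaining elements are never touched
every(probe(lt3))(xs); // logs "visited 1", "visited 2", "visited 3", then escapes with false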
comp(takeN(2))(filter(odd))(xs) illustrates one of the weaknesses of this implementation (there are probably others): the composition of the reducing functions does not take place at the level of the individual array elements. Hence an intermediate array ([1, 3, 5]) has to be built before the final result ([1, 3]) can be computed. But that's a matter for transducers...
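Just to sketch the direction (my own minimal illustration of the transducer idea, not code from above): a transducer transforms a reducing function, so the composition happens per element and no intermediate array is built. A take transducer would additionally need an early-exit signal, which is where it gets more involved:
const mapping = f => rf => acc => x => rf(acc)(f(x));
const filtering = pred => rf => acc => x => pred(x) ? rf(acc)(x) : acc;
const concatR = acc => x => acc.concat([x]);
const fold = rf => acc => xs => xs.reduce((a, x) => rf(a)(x), acc);
// filtering(odd) and mapping(inc) are fused - each element passes through both at once
// (note: with transducers the leftmost one is applied to the element first)
fold(comp(filtering(odd))(mapping(inc))(concatR))([])(xs); // [2, 4, 6] - no [1, 3, 5] in between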