# Terminating point of reduce/fold

This is a question about reduce and fold, more related to functional programming than to Scala specifically. I want to implement my own version of reduce with an extra optimization feature, but I don’t know what to call the feature.

Consider the multiplication function: if it is used as an argument to fold and the accumulated result ever becomes zero, I want to terminate the iteration and simply return 0 without visiting the remaining elements. Similarly for the AND and OR functions: when folding with AND, I want to terminate and return false as soon as the result becomes false; when folding with OR, I want to terminate as soon as the result becomes true.

What should this terminating value be called (argument-wise), and what is the correct way to document its behavior?

If you want the stopping condition to be a separate parameter, I’d suggest taking an `A => Boolean` instead of an `A`, which makes the API more generic. Possible names for the parameter would then be `breakCondition` or `stopIf` (`stopAt` may be better suited to a single value). I don’t know of a standard name for such a parameter, but the behaviour itself is usually called “early stopping” or “breaking out of iteration / recursion”.
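A minimal sketch of that idea, assuming a hypothetical `foldLeftUntil` (the name and signature are illustrative, not a standard API):

```scala
import scala.annotation.tailrec

// Hypothetical left fold that stops as soon as the accumulator
// satisfies `stopIf`, skipping the remaining elements.
def foldLeftUntil[A, B](list: List[A])(z: B)(stopIf: B => Boolean)(f: (B, A) => B): B = {
  @tailrec
  def loop(acc: B, rest: List[A]): B =
    if (stopIf(acc)) acc            // early stop: accumulator hit the break condition
    else rest match {
      case Nil    => acc
      case h :: t => loop(f(acc, h), t)
    }
  loop(z, list)
}

// Multiplication that short-circuits once the accumulator is 0;
// the elements 5 and 7 are never visited.
val product = foldLeftUntil(List(3, 0, 5, 7))(1)(_ == 0)(_ * _)
// product == 0
```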

Another approach for this kind of optimization is shown in the Laziness chapter of “Functional Programming in Scala” (The “Red Book”). The signature for `foldRight` is changed to take its zero value and the function’s second parameter by name:

```scala
def foldRight[A, B](list: List[A])(z: => B)(f: (A, => B) => B): B =
  list match {
    case h :: t => f(h, foldRight(t)(z)(f))
    case Nil    => z
  }
```

The recursion then stops early if `f` does not evaluate its second parameter. So the general usage is exactly the same as with normal folds, but `f` can decide to add early stopping. For example, with multiplication, early stopping at 0 would be `(a, b) => if (a == 0) a else a * b`.
This is similar to how you handle infinite lists with `foldr` in Haskell, where everything is evaluated lazily.
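To make the mechanism concrete, here is the by-name `foldRight` again (repeated so the snippet is self-contained) with two early-stopping uses; the value names are illustrative:

```scala
def foldRight[A, B](list: List[A])(z: => B)(f: (A, => B) => B): B =
  list match {
    case h :: t => f(h, foldRight(t)(z)(f))
    case Nil    => z
  }

// AND with early stopping: `&&` never forces `b` once `a > 0` is false,
// so the recursion stops at the first non-positive element.
val allPositive = foldRight(List(1, -2, 3, 4))(true)((a, b) => a > 0 && b)
// allPositive == false

// Multiplication stopping at 0: the tail beyond the 0 is never folded.
val product = foldRight(List(3, 0, 5))(1)((a, b) => if (a == 0) 0 else a * b)
// product == 0
```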

Given a type `A` and a function `f: (A, A) => A`, there may exist a value `a1` such that for every possible value of `a2`, `f(a1, a2) == a1`. If such an element exists, it is called the absorbing element, or the annihilating element, of `f`.

In case of your example with integer multiplication, that value is 0.
