How to call-next-method in Scala

In CLOS a method can delegate to the next most specific method by calling call-next-method. This is different (in Lisp) from calling the method in the direct superclass, because the superclass of a class might depend on the order it is mixed in. Consequently there is also a function next-method-p which evaluates to true if there is a next method to call.

Is there some equivalent of this in Scala? Here is an example.

case class BddNode(label:Int, positive:Bdd, negative:Bdd) extends Bdd {

  override def toString = {
    (positive, negative) match {
      case (BddTrue, BddFalse) => label.toString
      case (BddFalse, BddTrue) => "!" + label
      case _ => call-next-method // i.e., do what you would do if the toString method on BddNode did not exist
    }
  }
}

Do you mean super.toString?

What is super? Is it the direct superclass or is it the class that comes next in the linearized class list which might be different depending on which classes/traits are mixed in?

In this case the superclass is Bdd, but in general, if something extends BddNode (never mind for the moment that it is a case class) and also inherits from other classes as well, it is no longer guaranteed that the direct superclass is still Bdd, right?

You would need to use super.toString, and the behavior will depend on Scala’s well-defined (but sometimes confusing) linearization rules.

If you need to customize this behavior, you have some options, but none of them are as “straightforward” (for some definitions of straightforward) as stackable modification. If you are really into OO-patterns, you could use something along the lines of a strategy pattern (possibly w/ factory, and possibly reflection) to determine what to do in this instance.
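To make the strategy suggestion concrete, here is a minimal sketch of what that could look like for the rendering case; all the names (RenderStrategy, Node, etc.) are illustrative, not from any library:

```scala
// Strategy pattern sketch: instead of relying on linearization to pick
// the "next" behavior, the rendering behavior is passed in explicitly.
trait RenderStrategy {
  def render(label: Int): String
}

object PlainRender extends RenderStrategy {
  def render(label: Int): String = label.toString
}

object NegatedRender extends RenderStrategy {
  def render(label: Int): String = "!" + label
}

class Node(label: Int, strategy: RenderStrategy) {
  override def toString: String = strategy.render(label)
}
```

With this shape, `new Node(3, PlainRender).toString` yields `"3"` and `new Node(3, NegatedRender).toString` yields `"!3"`; the choice of behavior is a value you pass around, not a consequence of mix-in order.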

I would argue that, at least from an OO perspective, if you are using trait linearization to the point that things have gotten confusing, and you’re writing an application rather than a library, you might have failed to favor composition over inheritance.

When working functionally, in a language where functions are first class objects, you can obviously use higher-order functions (which can include your own) to work around this behavior. There’s an older controversial post that I think still does a decent job of outlining why relying on traits for mixing in behavior can be confusing.
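As a tiny sketch of the higher-order-function workaround: the "next method" can simply be a function parameter, so the caller decides what the fallback is (the names here are made up for illustration):

```scala
// Emulating call-next-method functionally: the fallback behavior is
// passed in as a function instead of being resolved by linearization.
def show(label: Int, positive: Boolean, next: () => String): String =
  if (positive) label.toString else next()
```

Then `show(3, positive = true, () => "fallback")` returns `"3"`, while passing `positive = false` invokes the supplied fallback instead.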

Edit: The reason for the strict linearization rules is to avoid the complexity that comes along with diamond inheritance.
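For a concrete picture of how linearization flattens a diamond, here is a small self-contained sketch (the trait names are illustrative):

```scala
// Diamond: both Left and Right extend Base, and Both mixes in both.
trait Base { def describe: String = "Base" }
trait Left extends Base {
  override def describe: String = "Left -> " + super.describe
}
trait Right extends Base {
  override def describe: String = "Right -> " + super.describe
}

// Linearization of Both is: Both, Right, Left, Base (rightmost mixin
// comes first), so each super call moves one step right in that list.
object Both extends Left with Right
```

Here `Both.describe` evaluates to `"Right -> Left -> Base"`: each `super` call is resolved against the linearized list rather than the statically declared parent, which is exactly how the diamond's ambiguity is avoided.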


Something like what you’re describing is available as stackable traits but as the name says you can only do it with traits, as they’re the only things that can be mixed in in arbitrary order. The compiler will tell you at mix-in time if no implementation in the superclass is available.
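A minimal sketch of the stackable-trait pattern, using `abstract override` to delegate to the "next" implementation in the linearization (all names here are illustrative):

```scala
// Stackable traits: each trait's `abstract override` calls the next
// applicable method via super, much like call-next-method.
trait Greeter { def greet(name: String): String }

class BasicGreeter extends Greeter {
  def greet(name: String): String = "Hello, " + name
}

trait Shouting extends Greeter {
  abstract override def greet(name: String): String =
    super.greet(name).toUpperCase
}

trait Excited extends Greeter {
  abstract override def greet(name: String): String =
    super.greet(name) + "!"
}

// Mixing order controls stacking: Excited runs first, delegates to
// Shouting, which delegates to BasicGreeter.
val g = new BasicGreeter with Shouting with Excited
```

With this mix-in order, `g.greet("world")` produces `"HELLO, WORLD!"`. And as noted above, if you try to mix a stackable trait onto something with no concrete implementation beneath it, the compiler rejects it at mix-in time.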

For the call-next-method principle to work, it doesn’t matter which linearization order is used (as long as it is topologically consistent). What matters is that the “next method” (super) is not necessarily the one in the class mentioned in the method definition, but rather the next applicable method in the linearization order.

In the example I showed above, the obvious and desired semantics are that I want to modify the behavior of this.toString, but only in the cases where one of the first two pattern cases matches. Otherwise I want toString to behave as it would have otherwise. I.e., if toString is defined in a less specific superclass, then the most specific such one is called.

Here is what I hope about the linearization: that it is topologically consistent, i.e., whenever class A extends B, then A comes before B in the linearized list. And if the linearized list is A1, A2, A3, …, An, and the method being executed is in Ai, then super.toString calls toString on the minimum Aj (with j > i) for which toString is defined.
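That hoped-for behavior is in fact what Scala does: super skips over classes in the linearization that don't define the method. A small sketch with made-up names:

```scala
// B sits between C and A in the linearization but defines no toString,
// so C's super.toString resolves past B to A's definition.
trait A { override def toString = "A" }
trait B extends A
trait C extends B {
  override def toString = "C<" + super.toString + ">"
}

class D extends C
// Linearization of D: D, C, B, A, ...
```

Here `(new D).toString` yields `"C<A>"`: the super call from C lands on A, the nearest later entry in the linearized list that actually defines toString.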

For this simple case, it doesn’t matter because I know exactly what my small 300 line program does. But the question is for my general knowledge.

I second that. Composition is at least straightforward.

You can use the Chain-of-responsibility pattern (see Wikipedia) if it fits your problem.
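A hedged sketch of how chain-of-responsibility might apply here: each handler either produces a result or passes to the next one in an explicit list, so the "next method" is just the next list element. The names (Handler, render, etc.) are illustrative, not from any library:

```scala
// Chain of responsibility: handlers are tried in order; the first one
// that returns Some wins, playing the role of "most specific method".
sealed trait Handler {
  def handle(label: Int): Option[String]
}

object SpecialCases extends Handler {
  def handle(label: Int): Option[String] =
    if (label == 0) Some("zero") else None
}

object Fallback extends Handler {
  def handle(label: Int): Option[String] = Some(label.toString)
}

def render(label: Int, chain: List[Handler]): String =
  chain.iterator.flatMap(_.handle(label)).next()
```

So `render(0, List(SpecialCases, Fallback))` gives `"zero"` while `render(7, List(SpecialCases, Fallback))` gives `"7"`; unlike linearization, the delegation order is plain data you can inspect and rearrange.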


To me (IMHO) this pattern business seems like a really difficult attempted solution to a very simple problem. In my opinion a principle of inheritance is augmentation: I want to extend a feature (functionality…) such that it still obeys its previous contract, but also does something in addition. I should not have to know how the class I’m extending is constructed in order to extend its behavior in a compatible way.

For example, if calling a method foo on an instance of class C fulfils a public contract, but also does some private things which I don’t know about and don’t care about and certainly don’t want to interfere with, I should be able to extend class C to myC, and define foo which extends foo of C without accidentally circumventing any of its private behavior. For example I want my foo to do everything foo used to do, plus more. I should not have to care whether foo was defined directly on C or on some superclass thereof or on some trait which was mixed-in to C.

Is my thinking misguided?

I have yet another case of this same phenomenon. I’m using a library called scalafx. The model seems to be that I create an object which extends JFXApp. In that object I may define new methods and I can call them liberally. If I group these calls into a main method, there is a complaint that I need to override main. To me this seems broken. I don’t want to override anything, I only want to augment something. I don’t want to prevent what was happening previously from happening; I simply want to extend its behavior to also do something else. I tried to solve this problem in three different ways, all of which fail:

  • If I simply override main, then I get a Java NullPointerException at run-time.

  • If I override main and call super.main(args) at the beginning, I get an empty window popup, and my GUI code does not run.

  • If I move the super.main(args) call to the end of my main, then I get yet another NullPointerException.

Note that in the case of scalafx.application.JFXApp.main(Array[String]), the scaladoc explicitly says:

You are strongly advised not to override this function.

I don’t know why it’s not simply declared final, but it’s apparently not really supposed to be an extension point in that API.

Unfortunately it is not that simple when you transitively inherit a trait more than once. There are cases where linearization can lead to a reversal of the inheritance relationship, like the one we recently encountered in https://github.com/scala/scala/pull/8029#discussion_r281217129.

@szeiger, thanks for pointing out that issue to me. But isn’t the problem reported in that issue that two traits (AbstractView and MapView) extend the same class (View), yet View fails to follow both of them in the linearized list? Isn’t that just a bug in the compiler? It seems to me that View should be to the right of the rightmost of AbstractView and MapView, and that would solve the problem. Right?

And if there’s a contradiction in the constraints, it should refuse to compile because the classes cannot be topologically sorted.

As I explained in the comment, it has to be done the way it is currently implemented and specified because of classes (note that AbstractView is a class, not a trait). It might be possible (but inconsistent) to do it differently for traits, but I haven’t given it any further thought.

That’s an interesting idea. I don’t know if it was ever considered in the early days of Scala.

Hmmm. Well it sounds like a serious flaw to me, allowing contradictions in the class constraint graph.

As you suggest, it would indeed be interesting to hear the motivation from the early days to allow such contradictions. The technique I mention has been well known since the late 1980s; it seems they rejected it for some specific reason.

I don’t really agree that it’s “not simple”. In my mind it is pretty simple: just enforce the topological constraints. As you said, “an interesting idea”.