Emulating CLOS-like :around methods in Scala

This is essentially the template method pattern?

4 Likes

@Jasper-M, thanks for the reference to template method pattern. It is nice to know the names of these idioms which I’ve used many times without knowing they had standard names and descriptions. I know that many lisp programmers (including myself) don’t know these standard patterns because the language itself obviates the need for many of them.

For example I’ve never understood the visitor pattern, except to know that multiple dispatch and the ability to define methods outside of classes obviates it. I believe also (maybe I’m wrong) that being able to write generic anonymous functions also obviates it. It is an entry in a very long list of things I want someday to learn.
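To illustrate that point, here is a small sketch (Shape, Circle, Rect, and area are hypothetical names, not from this thread) of how pattern matching plus free-standing functions covers the visitor pattern’s main use case:

```scala
// A sealed hierarchy of data cases.
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

// The operation lives outside the classes; adding another such function
// requires no change to Shape or its cases -- no Visitor machinery needed.
def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}
```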

The answer, as so often, is that you don’t do this with inheritance in Scala, you do it with typeclasses. You actually underline that point by accident:

That’s basically The Expression Problem, and it is the standard example for using typeclasses in Scala – a large fraction of intros to typeclasses start with it. There are scads of blog articles on this (try Googling “expression problem scala typeclass”), but here’s a fairly concise StackOverflow answer on it.
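For a flavor of what those intros typically show, here is a minimal hypothetical sketch (Show, Num, and render are illustrative names): behavior is attached to an existing type from the outside, which is the typeclass half of the Expression Problem trade-off:

```scala
// The typeclass: an interface parameterized over the type it describes.
trait Show[A] { def show(a: A): String }

// Imagine Num lives in a library whose source we can't modify.
case class Num(n: Int)

// The instance is defined externally, next to our own code.
implicit val showNum: Show[Num] = num => s"Num(${num.n})"

// Generic code requires only that an instance exists.
def render[A](a: A)(implicit s: Show[A]): String = s.show(a)
```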

2 Likes

Looks like the Expression Problem does not really seem to be about implementing expressions which have identities and annihilators, but rather about how to allow users to extend classes without modifying the source code and without breaking existing code.

On reading a bit about the Expression Problem, I see that multiple dispatch is one way to solve it, and that’s probably the way I have come to think about it; i.e., using an OO system which supports multiple dispatch. So it makes sense that trying to implement a multiple-dispatch-based solution in a single-dispatch OO system is ugly.

Yes and no – fundamentally, both of those are about how to avoid combinatoric explosion in the complexity of implementing this sort of problem. This particular link doesn’t deal with identities and annihilators, but I’ve come across a number of more fully-fleshed-out discussions of the Expression Problem that do so, so I don’t think there’s any limitation there. (Assuming I’m understanding your terminology correctly.)

It’s news to me that type classes lead to fewer cases than inheritance. Could you elaborate why that would be so?

A bit late to the party, just trying to tie a few threads together (as far as I understand the problem)…

The issue of placing the “shared” implementation parts seems to be specific to the design based on OO inheritance. As discussed in another thread, there’s a tradeoff between this and an ADT/pattern matching/type class based design, and this seems to be one aspect in favor of the latter approach. A naive translation of @curoli’s example to this style might look like this:

sealed trait Galois5Element

object Galois5Element {
  case object Zero extends Galois5Element
  case object One extends Galois5Element
  case object Two extends Galois5Element
  case object Three extends Galois5Element
  case object Four extends Galois5Element
}

trait Algebra[T] {
  def mult(a: T, b: T): T
}

implicit val galois5ElementAlgebra: Algebra[Galois5Element] = {
  import Galois5Element._
  (a: Galois5Element, b: Galois5Element) =>
    (a, b) match {
      case (Zero, _) => Zero
      case (_, Zero) => Zero
      case (One, y) => y
      case (x, One) => x
      case (Two, Two) => Four
      case (Two, Three) => One
      case (Two, Four) => Three
      case (Three, Two) => One
      case (Three, Three) => Four
      case (Three, Four) => Two
      case (Four, Two) => Three
      case (Four, Three) => Two
      case (Four, Four) => One
    }
}

Placing the shared algebra code becomes a non-issue, and subjectively this code looks somewhat cleaner and easier to grasp to me.

But there’s the issue of future extensions - the OO approach in general makes it easy to add new data flavors in isolation, but requires shotgun surgery to existing code when adding new functionality, with the FP/ADT approach it’s exactly the other way round. If you want both, i.e. add new data cases without changing existing code in the ADT design, you’re facing the expression problem. This can be solved, but it requires a more convoluted initial design.

I’m not sure whether this is an actual drawback in this case. It probably doesn’t make much sense to add a new data case to a Galois 5 field. :slight_smile: As for your actual code, if I understand correctly from the other thread, you want to use pattern matching with exhaustiveness checking internally in your methods, so you’ll have to fix up existing code upon adding new data cases in this “hybrid” approach, anyway. Sounds like a net plus for the ADT/pattern matching style to me - YMMV, of course.

It seems like very simple examples work well for the ADT model. In each case the multiplication table fits in fewer than 10 lines. In my case the code for the zeros and ones of the operations indeed fits in a few lines. However, the other cases take tens or hundreds of lines each, especially after the code is well commented. And in some cases, after the hundreds of lines, the routine still needs to defer to the superclass via super.foo(...).

In my mind the issue of well-factored code seems to have the same effect as allowing applications to extend the class hierarchy.

In my case I’ve given up on letting the Scala language check the ADT. My underlying system is essentially an ADT, but Scala doesn’t know that it is. It is up to me to verify that pattern matching is exhaustive. I accept that as a limitation of Scala not allowing sealed class hierarchies to extend beyond file boundaries.

The original question was about :around methods. The way I see to implement this in Scala is to have a method, foo, with the public name, which contains the before and after code. Somewhere in the middle of this small public method is a call to a cryptically named fooInternal method, which is implemented for each of the classes in the ADT. My application and my customers’ applications need to implement the cryptically named fooInternal method for the classes they add, and customers need to be careful not to call fooInternal but to call foo instead.

The unfortunate part is that this boilerplate pattern makes the code less apparent in the usual case where foo does nothing but directly call fooInternal. Yet it must be there so that if refactoring becomes necessary in the future, we won’t have to break the interface and change all the documentation.
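A minimal sketch of that wrapper shape (Element, Leaf, the Int signature, and the before/after bodies are all hypothetical):

```scala
abstract class Element {
  // Public entry point: holds the "before" and "after" code, playing the
  // role of an :around method. `final` keeps subclasses from bypassing it.
  final def foo(x: Int): Int = {
    val prepared = x + 1            // hypothetical "before" part
    val result = fooInternal(prepared)
    result * 2                      // hypothetical "after" part
  }

  // Per-class primary method; subclasses implement this instead of foo.
  protected def fooInternal(x: Int): Int
}

class Leaf extends Element {
  protected def fooInternal(x: Int): Int = x * 10
}
```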

In CLOS foo and fooInternal have the same name. Whether the method is the public one or the internal one is handled by a qualifier, a sort of meta-data on the declaration.

Note that Scala has the protected modifier which is meant for internal methods that can be overridden by subclasses but not called from the outside.

1 Like

I’ve never really understood protected, although it sounds like the correct annotation here. Does it mean that any subclass is allowed to call the method by name? What about super.foo(...)? Is that allowed for subclasses?

To be completely honest the rules for protected and especially protected[xyz] are a bit murky to me too. But I think it’s like this:

class Foo { protected def foo: Int = 1 }

1. Works:
class Bar extends Foo { override protected def foo = 3 }
2. Works:
class Bar extends Foo { def bar = this.foo }
3. Works:
class Bar extends Foo { def bar = super.foo }
4. Works:
class Bar extends Foo { def bar(b: Bar) = b.foo }
5. Doesn’t work:
class Bar extends Foo { def bar(f: Foo) = f.foo }

If you change foo to protected[this] then everything stays the same except case 4 no longer works.

2 Likes

If I mark a method as protected in the superclass, do I need to also mark the method as protected when I implement the method in each subclass?

You don’t have to but then that method will be publicly accessible. To use my above example, (new Bar: Foo).foo still won’t compile because the Bar instance is upcast to Foo, but (new Bar).foo will.
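A small sketch of that widening behavior, using the same Foo/Bar shape as above:

```scala
class Foo { protected def foo: Int = 1 }

// Omitting `protected` on the override widens foo to public on Bar.
class Bar extends Foo { override def foo: Int = 2 }

// (new Bar).foo        // compiles: foo is public through Bar
// (new Bar: Foo).foo   // does not compile: still protected through Foo
```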

1 Like

Functional decomposition? The Haskell crowd would be surprised to learn that their language only works for very simple examples. :slight_smile:

As said before, your approach is fine if you’re happy with it. It just feels somewhat odd that you choose to categorically rule out a design approach that’s nicely supported by Scala and then keep discovering flaws and limitations in Scala with your chosen style that would be non-issues with the alternative approach. (Which has its downsides, too, of course - it’s always a tradeoff.)

This is the classical OO inheritance based take, as one would do it in Java, as well, yes. Having a cryptic name for the method is not actually a prerequisite, though. :slight_smile: And as @Jasper-M already noted, it needn’t be public.

Just for fun, this could alternatively be achieved by (ab)using stackable traits, which is a Scala-specific take on the decorator pattern. However, stacking goes the wrong way round for this use case, so it requires additional “shadow classes” for the actual implementations. This makes it more verbose than the classical approach, and it’s certainly non-idiomatic.

sealed trait Galois5Element {
  def *(o: Galois5Element): Galois5Element
}

object Galois5Element {

  trait Galois5TrivialMult extends Galois5Element {
    abstract override def *(o: Galois5Element): Galois5Element = {
      o match {
        case Zero => Zero
        case One => this
        case _ => super.*(o)
      }
    }
  }

  protected sealed class ZeroImpl extends Galois5Element {
    override def *(o: Galois5Element): Galois5Element = Zero
  }

  protected sealed class OneImpl extends Galois5Element {
    override def *(o: Galois5Element): Galois5Element = o
  }

  protected sealed class TwoImpl extends Galois5Element {
    override def *(o: Galois5Element): Galois5Element =
      o match {
        case Two => Four
        case Three => One
        case Four => Three
      }
  }

  protected sealed class ThreeImpl extends Galois5Element {
    override def *(o: Galois5Element): Galois5Element =
      o match {
        case Two => One
        case Three => Four
        case Four => Two
      }
  }

  protected sealed class FourImpl extends Galois5Element {
    override def *(o: Galois5Element): Galois5Element =
      o match {
        case Two => Three
        case Three => Two
        case Four => One
      }
  }

  case object Zero extends ZeroImpl with Galois5TrivialMult
  case object One extends OneImpl with Galois5TrivialMult
  case object Two extends TwoImpl with Galois5TrivialMult
  case object Three extends ThreeImpl with Galois5TrivialMult
  case object Four extends FourImpl with Galois5TrivialMult
}
1 Like

I rejected the ADT-via-sealed-class approach because I don’t want all my code in a single file. I had that at the beginning, and I abandoned it because it was not manageable as the code grew. Having one file per concept is much more reasonable for me than having one file that mixes everything just to make the compiler happy.

BTW, I don’t know Haskell. Do you know if the Haskell solution also requires the user to put all the code in one single file?

With regard to the type class approach (which fundamentally is independent of the ADT approach): I cannot judge whether this approach would fit my needs or not. I rejected it because I don’t understand it well enough. Several people have suggested small snippets, which I appreciate and find interesting every time. But I don’t see how they compose to solve my larger problem. It may well be a good approach. However, until now every time I tried to implement something with type classes, I failed to understand how it worked or how to refactor it to do more than originally planned for — even though my unit tests all passed. Also I had lots of trouble converting the type class code from Scala 2.12 to 2.13. And I have nightmares about eventually converting it to 3.x.

To me the concept of implicits obfuscates type class related code. Type class code which I’ve written did not (for me) express the intent, but expressed language constraints which to me looked like leaky abstractions. As I understand, Scala 3 attempts to alleviate some of this obfuscation.

Additionally, in my mind the problem I’m trying to solve really is object oriented, despite the Scala object system’s limitations and Java legacy. There is a hierarchy of equivalence classes of mathematical objects, and a set of unary and binary functions which each object or pair of objects must be capable of computing. Each of the equivalence classes knows how to solve part of the problem. In OO, when less specific classes know how to do a computation, that avoids having to duplicate code in subclasses. When more specific classes have more specific algorithmic information, that can be coded in subclasses which do their work and then defer to the superclasses to do the rest.

You can separate the “bodies” of your leaf classes into separate files: Class, trait, impl in different *.scala files? - #4 by Jasper-M
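A tiny sketch of that split (file names and members are hypothetical): the sealed skeleton stays in one file, while each case’s body lives in a mixin trait that can sit in its own file:

```scala
// --- conceptually in file ZeroBody.scala ---
trait ZeroBody {
  def show: String = "zero"   // the potentially long implementation
}

// --- conceptually in file Elements.scala: only the thin sealed skeleton,
// so the "one file" constraint applies only to these few lines ---
sealed trait Element
case object Zero extends Element with ZeroBody
```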

Again, the idea is that you have just data in one file and computation in others.
That is one of the main differences between OOP and FP, OOP wants to combine data and computation in a single entity, FP wants to split those.

But as I have said a couple of times, if you like your approach then ok.
It just seems that trying to stick with it always produces further problems.

No, because again they would just put data in one file and logic in others.

This is valid, I also reject the subclassing approach because I really cannot understand how code that jumps from one file to another, from one subclass to another, and from the son to the parent to the grand-parent to the son to the son’s son is understandable :stuck_out_tongue:

Not sure how typeclasses would have had troubles with this? There is nothing in that migration that affects traits and implicits AFAIK. Maybe something related to custom collections?

Code using typeclasses in 2.13 will compile just as it is in 3.0, and if you want to use the Scala 3 features, that is just a matter of replacing a couple of keywords.
Extension methods would require a bigger change, but nothing to be worried about.

Yeah, the new keywords try to better express intent but the pattern is the same.
IMHO, if you didn’t understand it in Scala 2, then Scala 3 won’t help much really.
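To make the “couple of keywords” point concrete, here is a sketch in Scala 2 syntax with the Scala 3 spelling in comments (Semigroup and combineAll are illustrative names):

```scala
trait Semigroup[A] { def combine(a: A, b: A): A }

// Scala 2 instance definition:
implicit val intSemigroup: Semigroup[Int] = (a, b) => a + b
// Scala 3 equivalent -- same pattern, different keyword:
//   given Semigroup[Int] = (a, b) => a + b

// Scala 2 context parameter:
def combineAll[A](as: List[A])(implicit s: Semigroup[A]): A =
  as.reduce(s.combine)
// Scala 3: def combineAll[A](as: List[A])(using s: Semigroup[A]): A = ...
```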


That didn’t work out well because OP not only needs to separate the implementation.
He also needs to pass constructor parameters, call super implementations, and many other things that in the end make the code very complex.

I tried this, on a branch. I eventually abandoned it because it was even more complex than before. Admittedly I did find a couple of bugs in my code during that pursuit, bugs related to non-exhaustive matching. So it has merits. But eventually, I decided I was creating extra classes just to satisfy the compiler.

No, just as Scala doesn’t. :slight_smile: In Scala, functionality can either go into methods of the ADT cases or in functions that dispatch over ADT cases via pattern matching - the latter can go in arbitrary external files. Haskell doesn’t have methods at all.

Type classes loosely correspond to OO interfaces - they declare an abstract API that can be implemented by multiple types. This is mostly independent, indeed, and whether you want to use them will depend on whether your “algebra” should be polymorphic across multiple ADTs/type hierarchies.
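For instance (a sketch with hypothetical instances), the same Algebra-style typeclass can span unrelated types, which is where it pays off over a single class hierarchy:

```scala
trait Algebra[T] { def mult(a: T, b: T): T }

object Algebra {
  // Instances in the companion object are found via implicit scope.
  implicit val intAlgebra: Algebra[Int] = (a, b) => a * b
  implicit val stringAlgebra: Algebra[String] = (a, b) => a + b
}

// One generic function works for every T that has an instance.
def square[T](x: T)(implicit alg: Algebra[T]): T = alg.mult(x, x)
```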

And that’s what I personally fail to see - evaluations over an expression tree are often used as introductory examples for ADTs and FP. There are problems that look inherently suited to OO to me, but these immediately conjure ideas about message passing and encapsulated, mutable state, and I can’t spot this here. (Which doesn’t mean that an OO approach is inappropriate, it just doesn’t feel more natural than FP at all.)

1 Like

I think the problem was that some of the functions I was using and some of the class names I was referencing were deprecated. However, to figure out how to update the code, I had to again look at the typeclass code, which I didn’t understand.

Your point that the code would still work is probably correct. I.e., it is not absolutely necessary to refactor deprecated code. But I did it in order to satisfy all the compilation warnings, and to be a good Scala citizen.