How to create varargs apply function

I have defined some top level objects in a simple hierarchy because I want their apply methods to always behave the same way in several base cases. Because of the apply methods defined in BinaryOperation I can call any of the objects And, Or, Xor etc. as if they were functions, passing exactly two arguments, each of which is either a positive integer or a Bdd. Thus I can evaluate an expression such as Or(And(1,2),And(3,Xor(2,4))). The integers are treated as Boolean variables such as x1, x2, x3, x4. The result of this evaluation is an object which represents the Boolean function (x1 and x2) or (x3 and (x2 xor x4)).

This works great, but I’m limited to binary functions. I’d like to have varargs semantics: I’d like to say And(Or(1,2),Or(2,3),Or(1,3),Or(3,4)), but I’d also like to use And(1,2,3,Or(4,5,And(1,2)),Xor(2,5)).

I can’t figure out how to have a varargs function which accepts arguments which are each either Int or Bdd. Can this be done somehow simply with a varargs apply in the BinaryOperation class whose implementation is a call to fold? My guess is the answer is sadly “No” because Scala does not support union types: I cannot declare a parameter as Int | Bdd.

If that supposition is correct, then can I do it with an implicit conversion? I.e., can I have the apply method implicitly convert any Int n to a Bdd by calling the constructor Bdd(n), thus also eliminating the need for the 4 apply methods in the BinaryOperation class? And also eliminating the need to implement the Boolean operators as objects, allowing them to be implemented as regular functions?

sealed abstract class BinaryOperation() {
  def apply(b1: Bdd, b2: Bdd): Bdd

  def apply(l1: Int, l2: Int): Bdd =
    this(Bdd(l1), Bdd(l2))

  def apply(l1: Int, b2: Bdd): Bdd =
    this(Bdd(l1), b2)

  def apply(b1: Bdd, l2: Int): Bdd =
    this(b1, Bdd(l2))
}

object And extends BinaryOperation {
  def apply(b1: Bdd, b2: Bdd): Bdd = {
    (b1, b2) match {
      case (b1, b2) if (b1 eq b2) => b1
      case (BddTrue, _) => b2
      case (BddFalse, _) => BddFalse
      case (_, BddTrue) => b1
      case (_, BddFalse) => BddFalse
      case (b1: BddNode, b2: BddNode) => Bdd.bddOp(this, b1, b2)
    }
  }
}
object Or extends BinaryOperation {
  def apply(b1: Bdd, b2: Bdd): Bdd = {
    ...
  }
}
object Xor extends BinaryOperation {
  def apply(b1: Bdd, b2: Bdd): Bdd = {
    ...
  }
}
object Xnor extends BinaryOperation {
  def apply(b1: Bdd, b2: Bdd): Bdd = {
    ...
  }
}
object AndNot extends BinaryOperation {
  def apply(b1: Bdd, b2: Bdd): Bdd = {
    ...
  }
}

You can build the argument list one element at a time instead of relying on an explosion of apply overloads. Here’s a possible solution:

package builder

case class Bdd(raw: Any)

class Args private (reversed: List[Bdd]) {
  def apply(arg: Int): Args =
    apply(Bdd(arg))

  def apply(arg: Bdd): Args =
    new Args(arg :: reversed)

  def build: List[Bdd] =
    reversed.reverse
}

object Args extends Args(Nil)

object ArgsExample {
  def main(args: Array[String]): Unit = {
    println(Args(5)(8)(Bdd("hey")).build)
    //    println(Args(5)(8.0)(Bdd("hey")).build) // doesn't compile because raw Doubles aren't allowed
  }
}

The scheme above allows an empty argument list, i.e. you can do Args.build, resulting in an empty List. To require a minimum of one argument you would need to remove extends Args(Nil) from object Args and instead put the two apply methods copied from class Args there. To require a minimum of two arguments you would need 2 × 2 = 4 apply methods in object Args.
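A minimal sketch of the minimum-one-argument variant described above (reusing the Args name from the earlier example; Bdd is the same stub):

```scala
case class Bdd(raw: Any)

// Builder requiring at least one argument: the companion no longer
// extends Args(Nil), so there is no zero-argument Args.build.
class Args private (reversed: List[Bdd]) {
  def apply(arg: Int): Args = apply(Bdd(arg))
  def apply(arg: Bdd): Args = new Args(arg :: reversed)
  def build: List[Bdd] = reversed.reverse
}

object Args {
  // These two apply methods start a chain, so a chain always has
  // at least one element; a bare Args.build no longer compiles.
  def apply(arg: Int): Args = apply(Bdd(arg))
  def apply(arg: Bdd): Args = new Args(arg :: Nil)
}
```

Args(5)(8).build still works as before, while Args.build is now a compile error.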

Alternatively you can define implicit conversions, which are generally not recommended:

import scala.language.implicitConversions

case class Bdd(raw: Any)

object Args {
  implicit def int2bdd(raw: Int): Bdd =
    Bdd(raw)

  def apply(args: Bdd*): List[Bdd] =
    args.toList

  def main(args: Array[String]): Unit = {
    println(Args(5, 8, Bdd("hey")))
//    println(Args(5, 8.0, Bdd("hey"))) // doesn't compile
  }
}

Thanks for the suggestion, but I don’t exactly see how to go from there to Or(1,And(2,3,Or(1,4))).

BTW, it makes sense to allow And() and Or() as these should return BddTrue and BddFalse respectively. However, it makes no sense to allow AndNot(), and I’m not sure about Xor() and Xnor(). Those, I’d need to think carefully about.

Can you elaborate on why implicit conversions are not recommended. Is it because they are going away in Scala 3? or is there some other reason?

With the builder that would be impossible, but implicit conversions would work. Simply remove the apply methods that take Ints and add an implicit conversion from Int to Bdd in scope.
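A rough sketch of that combination, with a stubbed Bdd (the real reduction such as bddOp is elided) and hypothetical operator names: one implicit Int => Bdd conversion plus a single varargs apply per operator is enough to make nested mixed calls type-check.

```scala
import scala.language.implicitConversions

// Stub standing in for the real Bdd type; the conversion lives in the
// companion object so it is found without an import (implicit scope).
case class Bdd(raw: Any)

object Bdd {
  implicit def int2bdd(raw: Int): Bdd = Bdd(raw)
}

sealed abstract class BooleanOp(name: String) {
  // Binary case, stubbed as a structural combination for the sketch.
  def apply(b1: Bdd, b2: Bdd): Bdd = Bdd((name, b1, b2))

  // Varargs case implemented as a fold over the binary case; the implicit
  // conversion lifts bare Ints to Bdd at each argument position.
  def apply(b1: Bdd, b2: Bdd, rest: Bdd*): Bdd =
    rest.foldLeft(apply(b1, b2))((acc, b) => apply(acc, b))
}

object And extends BooleanOp("and")
object Or extends BooleanOp("or")
```

With this in scope, And(1, 2, Or(3, 4)) compiles: each Int is converted to Bdd before the fold runs.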

I forgot that placing an implicit conversion in the companion object of a type mentioned in the conversion’s signature removes the need to import it, i.e. it works everywhere. New version of the code:

package builder

import scala.language.implicitConversions

case class Bdd(raw: Any)

object Bdd {
  implicit def int2bdd(raw: Int): Bdd =
    Bdd(raw)
}

object Args {
  def apply(args: Bdd*): List[Bdd] =
    args.toList

  def main(args: Array[String]): Unit = {
    println(Args(5, 8, Bdd("hey")))
    //    println(Args(5, 8.0, Bdd("hey"))) // doesn't compile
  }
}

So in the end just add

  implicit def int2bdd(raw: Int): Bdd =
    Bdd(raw)

into the companion object of Bdd.

Implicit conversions are highly unpredictable. With implicit conversions you only put the implicit marker on the conversion method and nowhere else, so it’s easy to trigger a conversion at the wrong use site. Implicit conversions are also the source of many puzzlers:
https://engineering.rallyhealth.com/scala/implicits/functional/programming/2018/02/21/implicit-implications-conversions-part-2.html
http://scalapuzzlers.com/#pzzlr-045

Implicit parameters, OTOH, require the implicit marker at both the definition site and the parameter site.
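For contrast, a tiny sketch (hypothetical Config and log names) showing how implicit parameters keep the implicit marker visible at both ends:

```scala
// Hypothetical example: the marker appears on the definition of the
// implicit value AND on the parameter list that receives it.
case class Config(verbose: Boolean)

object Demo {
  implicit val defaultConfig: Config = Config(verbose = true)

  def log(msg: String)(implicit cfg: Config): String =
    if (cfg.verbose) s"[verbose] $msg" else msg

  // Within this scope, defaultConfig is supplied without being named.
  def demo: String = log("hello")
}
```

The implicit can still be overridden explicitly, e.g. Demo.log("hello")(Config(verbose = false)).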


You mentioned in passing another point which I find confusing. Perhaps you can clarify. I was under the impression that a case class builds its own companion object. But in this case you have also declared the companion object. Are those two magically merged together by the compiler, or does your companion object declaration erase the one created by the case class?

Yes, they are merged, but there’s a special rule: if a method is explicitly defined, the compiler doesn’t add its own generated definition for it. So e.g. case class X(a: Int, b: String) expands to:

class X(val a: Int, val b: String) {
  def equals ...
  def hashCode ...
  def toString ...
  ...
}
object X {
  def apply(a: Int, b: String): X = new X(a, b)
  def unapply ...
}

but if you e.g. provide your own toString, then your implementation won’t be replaced by the autogenerated one:

case class X(a: Int, b: String) {
  override def toString(): String =
    "hey, this method is explicitly defined so compiler won't regenerate it!"
}

This rule applies to both the class and its companion object.

We’ve already used that rule here: What does Equivalence of instances mean? to provide own equals and hashCode definitions instead of the auto-generated ones.

If you look at the generated .class files, you’ll notice that object Xxx compiles to two files: Xxx.class and Xxx$.class. OTOH, class Xxx compiles to just Xxx.class. So whenever you have a class (case or not) with a companion object, Xxx.class contains definitions related to both the class and the object. In other words, merging happens quite often.


@tarsa, I’m trying to implement your suggestion here. Recall that the goal is to allow my algebraic functions to be called in a varargs fashion such as And(1,2,Or(1,3,Xor(1,2,3)))

As you mentioned, implicit conversions are not very predictable in Scala. But I hope you can explain what the incantation in your post means.

object Bdd {
  implicit def int2bdd(raw: Int): Bdd =
    Bdd(raw)
}

QUESTION: What is the significance of the name int2bdd? Can I use the name anywhere to limit the scope of this conversion?

QUESTION: What is the significance of the object in which int2bdd is defined? In your example you defined it in object Bdd. Is that just because it needs to exist, or are there semantics involved with where the function/method is defined?

I’ve created a ScalaFiddle which has my implementation using your suggestion. In particular line 178 makes the implicit definition. And the varargs functions are defined on line 251 and 252, their usage is shown in BddTest.main on lines 346 and 347.

However, it seems far too permissive in my opinion. An integer is allowed as an argument to the And, Or, Xor, Xnor, AndNot, and Not functions, in which case it is implicitly converted to a Bdd instance, as per my request; however, integers are also converted to Bdd instances even where I don’t want them converted.

For example, if I call Bdd (my Bdd factory function) with three arguments, it should be (Int,Bdd,Bdd); however, it now unfortunately also accepts bare Ints in the Bdd positions, the results of which might be disastrous.

QUESTION: Is there a way to specify that the implicit conversion only apply to the function definitions on line 251?

Do I have to accept these sloppy definitions if I want to allow the type Int | Bdd on one function? I.e., is it true that if I allow Int | Bdd on any varargs function, then Int | Bdd is implicitly allowed on any function which takes a Bdd as an argument?

There’s no particular significance to the name – it’s actually unusual for the name of an implicit conversion to be relevant at all.

IIRC, you can shadow the name with something else in order to hide the conversion. That’s a fairly fragile pattern, though, and I wouldn’t recommend it. (I also think that this scoping is expected to change in Scala 3, but implicit conversions in general are changing in Scala 3.)

It’s mainly for convenience – when you import Bdd, it will also pull in this conversion because it’s in the companion object.

As for permissiveness – yeah, that’s a common problem of this sort of implicit conversion, that it tends to be leaky. It might be possible to lock things down as you want, but it would take some fiddling.

(Possibly with yet another type? My gut says that you might create a BddOrInt type for the functions that actually want to accept either, with implicit conversions from both Bdd and Int instead of the Int => Bdd conversion, and that would have a method to turn it into a Bdd inside the function. Messy, but it seems to plausibly do what you want.)

The good news is that what I think you really want is possible; the bad news is that it’s in Scala 3, not Scala 2. One of the more exciting additions in Scala 3 is proper union types, so that you can literally say Int | Bdd in the function signatures where that is desired, and it should work as you expect. But that doesn’t help you much right now…
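For illustration, a Scala 3 sketch of what such a signature could look like (stubbed Bdd, hypothetical toBdd helper, and a structural "and" reduction standing in for the real operation):

```scala
// Scala 3 only: Int | Bdd is a union type.
case class Bdd(raw: Any)

// Hypothetical helper collapsing the union to Bdd via a type match.
def toBdd(arg: Int | Bdd): Bdd = arg match {
  case i: Int => Bdd(i)
  case b: Bdd => b
}

// Varargs signature accepting the union directly, reduced by a fold.
def And(args: (Int | Bdd)*): Bdd =
  args.map(toBdd).reduceLeft((a, b) => Bdd(("and", a, b)))
```

And(1, 2, Bdd("x")) then type-checks with no implicit conversions involved.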

Yes, I’ve been waiting and salivating for a long time now for the arrival of proper union, intersection, and complement types in Scala 3.

On the other hand, your suggestion about a BddOrInt type is a good idea. But how do I make it work with an algebraic data type? I have an abstract type Bdd with two subclasses BddNode and BddTerm, and BddTerm (also abstract) has two subtypes BddTrue and BddFalse. Won’t I need to duplicate the topology of that class hierarchy?

I have to admit I haven’t followed the whole thread, but wouldn’t it be an option to either use Either[Int, Bdd] and #fold it when forcing to Bdd, or to provide an explicit conversion from Int to Bdd to be invoked where appropriate?

object Bdd {
  implicit class BddConvertibleInt(val a: Int) extends AnyVal {
    def toBdd: Bdd = ???
  }
}

import Bdd._
  
println(42.toBdd)

Probably not, although it’s a bit messy. My concept is something like this (very rough sketch, mind):

case class BddOrInt(bdd: Bdd)
object BddOrInt {
  implicit def bdd2bddOrInt(bdd: Bdd): BddOrInt = BddOrInt(bdd)
  implicit def int2bddOrInt(int: Int): BddOrInt = BddOrInt(Bdd(int))
}

Your functions that want to accept both take BddOrInt as their parameter, and then pull the resulting Bdd out of that, while functions that don’t want to accept an Int just take plain old Bdd.

Mind, this is all wild theory, and it might prove not to work out. But it’s my best guess of how to tackle what you’re describing…
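Putting that idea together with a varargs signature, a rough sketch (Bdd and the "and" reduction are stubs; the And object name follows the thread’s convention):

```scala
import scala.language.implicitConversions

case class Bdd(raw: Any)

// The "magnet" wrapper: both Bdd and Int convert into it implicitly,
// via conversions placed in its companion object.
case class BddOrInt(bdd: Bdd)

object BddOrInt {
  implicit def bdd2bddOrInt(bdd: Bdd): BddOrInt = BddOrInt(bdd)
  implicit def int2bddOrInt(int: Int): BddOrInt = BddOrInt(Bdd(int))
}

object And {
  // Stub binary combiner standing in for the real Bdd operation.
  private def combine(b1: Bdd, b2: Bdd): Bdd = Bdd(("and", b1, b2))

  // Only signatures using BddOrInt accept bare Ints; functions that keep
  // taking plain Bdd (like the Bdd factory) remain unaffected.
  def apply(first: BddOrInt, second: BddOrInt, rest: BddOrInt*): Bdd =
    rest.map(_.bdd).foldLeft(combine(first.bdd, second.bdd))((a, b) => combine(a, b))
}
```

And(1, Bdd("x"), 3) compiles, while a function declared with plain Bdd parameters still rejects bare Ints.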

@sangamon, a synopsis of the problem is that I have a binary function (actually several objects inheriting from the same class), and there is a binary apply method (a method with two arguments). I’ve defined the three varieties (Int,Int), (Int,Bdd), (Bdd,Int) in the superclass, and (Bdd,Bdd) in each object. Now I want to create a varargs version of the function which uses a fold strategy.

The objects have the names of Boolean operators, so Or(1,2) returns a Bdd, and Bdd(1) returns a Bdd. I want to evaluate something like the following: Or(1,2,And(Or(2,3,AndNot(3,4),Xor(1,4))))

My current solution is that the caller has to wrap all the lone integers in a Bdd(...) call, so that every non-binary call to Or, And etc. has either all arguments as integers or all arguments as Bdds. That’s a very unintuitive interface. I’m just trying to figure out if we can do better than Or(Bdd(1),Bdd(2),And(Or(Bdd(2),Bdd(3),And(3,4),Xor(1,4))))

I think this is what they call the magnet pattern.

In what way is this problem similar to the magnet pattern?

I meant the BddOrInt approach.