Polymorphic Interface Design and Implementation

The first problem I was trying to solve was how to define an API that could be either synchronous or asynchronous. This is what I defined:

trait Leaderboard {
  type Response[A]
  . . .
  def getCount: Response[Int]
  . . .
}

trait LeaderboardSync extends Leaderboard {
  type Response[A] = A
}

trait LeaderboardAsync extends Leaderboard {
  type Response[A] = Future[A]
}

class SynchronizedLeaderboard(. . .) extends LeaderboardSync {
  override def getCount: Int = consecutiveLeaderboard.getCount
}

class LeaderboardActor(. . .) extends LeaderboardAsync {
  override def getCount: Future[Int] =
    selfActorReference ? (actorRef ⇒ GetCount(actorRef))
}
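
At the call site, the abstract Response type resolves to the concrete member type. As a rough usage sketch (constructor arguments elided, as above), the two implementations look like this:

val sync: LeaderboardSync = new SynchronizedLeaderboard(/* . . . */)
val count: Int = sync.getCount                     // Response[Int] = Int

val async: LeaderboardAsync = new LeaderboardActor(/* . . . */)
val futureCount: Future[Int] = async.getCount      // Response[Int] = Future[Int]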

So I now have multiple implementations, some synchronous, some asynchronous, and this seems to work well in practice. I like this design pattern, which someone else suggested to me.

What I am currently exploring is the best way to process results from these APIs when multiple implementations are in play.

One pattern I follow is:

import scala.concurrent.{ExecutionContext, Future}

def handle[I, O](input: Any, output: I ⇒ O)(implicit ec: ExecutionContext): Future[O] = {
  input match {
    case future: Future[I] @unchecked ⇒ // This needs to come first: after erasure, any Future matches here
      future.map(output)
    case value: I @unchecked ⇒ // Otherwise assume an already-computed synchronous value
      Future.successful(output(value))
  }
}

For example:

handle[scorekeeping.Score,MemberStatusResponse](
  leaderboard.update(updateMode, memberIdentifier, bigIntScore),
  score ⇒ MemberStatusResponse(leaderboardUrlId, memberUrlId, Some(Score(score))))

So I always return a Future, some already completed, some not. Alternatively, I was considering returning an Either[A, Future[A]] instead of a Future, but I don’t have a sense of a best practice yet.
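
For illustration only, here is a minimal sketch of what that Either-based variant might look like (handleEither is a made-up name, not something in my real code), where a Left carries an already-computed value and a Right carries a pending Future:

import scala.concurrent.{ExecutionContext, Future}

def handleEither[I, O](input: Either[I, Future[I]], output: I ⇒ O)
                      (implicit ec: ExecutionContext): Either[O, Future[O]] =
  input match {
    case Left(value)   ⇒ Left(output(value))        // synchronous: no Future allocated at all
    case Right(future) ⇒ Right(future.map(output))  // asynchronous: still a Future
  }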

In the more abstract sense, what is the best way of handling interfaces with polymorphic results, where synchronous versus asynchronous is just one narrow example?

I’m looking for some discussion on what might be the best design and implementation practices.

Personally, I tend to do the same: in my product’s main pipeline, I always return Futures, which are already complete 99% of the time. There’s a modest efficiency cost, but it works well.

(Alternatively but similarly, you could consider going the FP route, returning IO and having that be complete some of the time – that’s what I’m doing in my newer code.)

Also, take a look at http://eed3si9n.com/herding-cats/abstract-future.html
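
Roughly, the idea in that article is to abstract the whole API over the effect type rather than pattern-matching on the result. A minimal sketch of the shape, assuming cats (the names here are just illustrative):

import cats.Monad
import cats.syntax.functor._

trait LeaderboardF[F[_]] {
  def getCount: F[Int]
}

// Code written against Monad works the same whether F is Id, Future, or IO.
def describeCount[F[_]: Monad](leaderboard: LeaderboardF[F]): F[String] =
  leaderboard.getCount.map(count ⇒ s"$count members")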

I figured there was some overhead, but I don’t have a sense of how much it is. In the past, when I have used Future.successful() it has only been 10% of the time or less, so this is a new pattern for me. That is why I considered using Either[T, Future[T]] instead of just Future[T], but I cannot tell whether it is less overhead, or by how much. I suppose I could design a performance test to measure it…

(Alternatively but similarly, you could consider going the FP route, returning IO and having that be complete some of the time – that’s what I’m doing in my newer code.)

I don’t understand. Could you please elaborate? Is this less overhead than returning a lot of completed Futures?

I can’t claim any particular expertise here, but when I trawled through the relevant code (which is all open source, remember) a few years back, there were a couple of object allocations and some reasonable-looking code paths, nothing scary. And in practice I’m doing zillions of these, and it doesn’t appear to be causing problems. So I haven’t quantified the overhead, but the pattern appears to be sane.

Quite possibly, but I honestly don’t know – folks in the Typelevel community might have a more informed opinion. Basically, this is the FP version of the same idea, where you are essentially describing the computation in advance, then pushing the “go” button.
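
Very roughly, and assuming cats-effect 3 purely for illustration, “describe the computation, then push go” looks something like this:

import cats.effect.IO
import cats.effect.unsafe.implicits.global

// Nothing executes while the computation is being described...
val description: IO[Int] =
  IO(println("computing")).map(_ ⇒ 42)

// ...until the "go" button is pushed at the edge of the program.
val result: Int = description.unsafeRunSync()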

The advantage here is that, since you’re describing the whole chain of events, the IO engine has opportunities to optimize it that don’t exist in the Future version. (Where each Future can only see itself, not the larger structure it’s contained in.) I’m given to understand that Monix, in particular, leverages this for seriously good performance, but you’d need to talk to the relevant people over there for the details, especially for this situation.

(I’m slowly starting to move towards this architecture, but haven’t had a chance yet to try it against my Future-based infrastructure.)