Scala Heaven and Hell — Michael Slinn

For starters, yes – I would expect some micro benchmark that shows how opaques are better than AnyVal. There might be another effect that we are not anticipating besides the boxing; for instance, using extension methods. When evaluating performance I never rely on logic / theory alone.

But that’s not actually the point. The point is whether opaques – at the end of the day, for most developers in most scenarios – make a difference in the performance of an application. If they do not, then they really aren’t all that great and shouldn’t be recommended for most developers.

It’s somewhat similar to the performance characteristics of collections. I’ve seen cases where developers opted to use one collection over another, spending time thinking and arguing about which implementation would yield better performance, when at the end of the day the difference was completely insignificant relative to the performance of the entire application.

That’s obviously an entirely different motivation, and I do not disagree with it. In fact, I think that the current proposal syntax is actually worse. You might find the thread on the contributors forum interesting; it’s quite long though, so if you want to focus on the more recent discussion about the syntax you might want to start with this comment.

That is exactly the point the speaker is talking about. We should not simply rely on others’ experience, opinions and theory; we should be given evidence as well. This principle holds not just for programming, but for science at large.

I think you’re mixing up cause and effect here.

Performance per se isn’t the point. The point is being able to replace weak types (eg, String, Int) with strong types (eg, Color, Category) without risking sacrificing performance. Strong types are very nearly the be-all and end-all of Scala, but performance matters to folks. Sometimes that performance turns out to be very important, sometimes not, but most engineers reflexively care about performance. So in practice, folks frequently avoid using stronger types because they don’t want to pay the price of boxing, and don’t want to go to the effort of figuring out whether that boxing is actually critical to their application. (Which you often don’t know until the code is all written.)

Hence, opaque types. (Which, mind, aren’t something that Dotty suddenly invented: many of us have been cheering for this proposal, originally proposed by Erik for Scala 2, for years – I’ve never seen a SIP to which so many people immediately said, “I want this now”.) They provide a way to get strong types with stronger “walls” than AnyVal, with absolute guarantees that you will not suffer the unpredictable and sometimes disastrous performance problems of AnyVal. Basically, they’re the same idea, but done correctly: they achieve the desired goal, which AnyVal turned out not to.
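For readers who haven’t used the AnyVal approach being compared here, a minimal sketch may help (the `Meters` name and demo are invented for illustration, not from this thread): a value class sometimes avoids allocation, but the optimization is easily defeated, e.g. by storing values in an array.

```scala
// Hypothetical value class (the name Meters is invented for illustration).
// The compiler tries to pass a bare Double at runtime, but the optimization
// is easily defeated by generic contexts.
final case class Meters(value: Double) extends AnyVal {
  def +(that: Meters): Meters = Meters(this.value + that.value)
}

object MetersDemo {
  def main(args: Array[String]): Unit = {
    // As a plain local value, no wrapper needs to be allocated here...
    val total = Meters(1.5) + Meters(2.5)
    println(total.value) // 4.0
    // ...but an Array[Meters] stores boxed Meters instances, which is
    // exactly the unpredictable cost described above.
    val xs: Array[Meters] = Array(Meters(1.0), Meters(2.0))
    println(xs.map(_.value).sum) // 3.0
  }
}
```

The second case is the kind of silent boxing that the benchmarks later in this thread measure.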

This is a feature that much of the community desperately wants, which has been discussed extensively for years (longer than the vast majority of Dotty) – I’m not clear on why you seem so against it…


It’s important that you bring up the full context behind the feature, pointing out that it is a means of negating the performance cost of certain design patterns.

However, it is relevant all the same to demonstrate how much this feature negates the cost in these scenarios. We may come to realize that the new feature introduces unexpected performance costs elsewhere, in ways we didn’t anticipate, thus failing its goal of negating the performance cost in those scenarios.

In theory, anything can impact the performance of an application; in practice, developers (are expected to) learn over time what may or may not impact performance in their application, and only care to pre-optimize what they have learned is worth the costs – development time, readability, complexity, etc.

So there are two choices here: either defensively write code that always tries to negate the potential cost of boxing, or only cross that bridge when we get to it.

In my opinion, boxing does not introduce a significant performance issue in most scenarios where wrappers are desired; hence, using opaques as a preventive measure – especially when said preventive effect is not measured – is a strong case of premature optimization.

I think you are continuing to make this issue more complicated than it is.

If you declare an opaque type alias to a basic type such as Int or String, there cannot possibly be a performance cost beyond just using one of those types. The performance gain compared to a wrapper can be zero or positive, but it CANNOT BE NEGATIVE. And that is guaranteed, so YOU DON’T HAVE TO BOTHER CHECKING!

You keep implying that the motivation for opaque types is performance, but that is misleading. The motivation is enhanced basic type checking with minimal boilerplate and without TAKING A CHANCE of sacrificing performance.


But of course there might be a performance cost, and even the SIP admits this:

The extension methods synthesized for implicit value classes are not static. We have not measured if the current encoding has an impact in performance, but if so, we can consider changing the encoding of extension methods either in this proposal or in a future one.

The motivation – according to the SIP – is to be able to differentiate between different type aliases of the same type – say, Id and Password (both String) – so that the compiler would not allow one to mistakenly mix up the values.
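In the proposed Scala 3 syntax, that motivating scenario might look roughly like the sketch below (the `Credentials` object and helper names are invented for illustration, not taken from the SIP text):

```scala
// Illustrative sketch only; names are invented, not from the SIP.
object Credentials:
  opaque type Id = String
  opaque type Password = String

  // Inside this object the aliases are transparent, so we can construct them.
  def id(s: String): Id = s
  def password(s: String): Password = s

  extension (i: Id) def idString: String = i
  extension (p: Password) def passwordString: String = p

import Credentials.*

def login(i: Id, p: Password): String =
  s"logging in ${i.idString}"

@main def demo(): Unit =
  val i = id("alice")
  val p = password("hunter2")
  println(login(i, p)) // logging in alice
  // login(p, i)       // rejected: Password is not an Id outside Credentials
```

Outside `Credentials`, Id and Password are distinct types to the compiler, yet both erase to plain String at runtime.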

The SIP admits that one solution for such a scenario is via wrappers, but that there is a performance penalty with them, that “in many cases [this] is fine but in some [it] is not”; hence, they may not be a sufficient solution.

The SIP then suggests a new solution for this scenario, and it needs to provide a proof – via both theory and evidence – that this solution does not suffer the same performance penalty; otherwise, it’s no different than the wrappers solution, and therefore redundant.

@eyalroth - Argh. Some things are obvious to people with familiarity with an area and don’t require additional evidence.

Do you really think that everyone who is saying yes, it is a big deal, and who has personally struggled with it, is making a mistake?

Here’s an example of a 13% slowdown (and ~4x memory usage IIRC) due to boxing into an array (preboxed, before the benchmark, which is a near ideal case):

case class A(i: Int) extends AnyVal { def add(j: Int) = i + j }
val a = Array.range(0, 1000)
val aa = => A(ai))

val th = new ichi.bench.Thyme
th.pbenchOff(){
  var i, s = 0
  while (i < a.length) { s = A(a(i)).add(s); i += 1 }
  s
}{
  var i, s = 0
  while (i < aa.length) { s = aa(i).add(s); i += 1 }
  s
}


Benchmark comparison (in 690.1 ms)
Significantly different (p ~= 0)
  Time ratio:    1.12966   95% CI 1.11298 - 1.14635   (n=20)
    First     584.2 ns   95% CI 580.0 ns - 588.5 ns
    Second    660.0 ns   95% CI 651.5 ns - 668.5 ns
res2: Int = 499500

Let’s fold something. Starting with:

case class A(i: Int) extends AnyVal { def +(that: A) = A(i + that.i) }

def fold(xs: Array[Int], op: (Int, Int) => Int) = {
  var i, acc = 0
  while (i < xs.length) {
    acc = op(acc, xs(i))
    i += 1
  }
  acc
}

def fold2(xs: Array[A], op: (A, A) => A) = {
  var i = 0
  var acc = A(0)
  while (i < xs.length) {
    acc = op(acc, xs(i))
    i += 1
  }
  acc
}
We run:

val a = Array.range(0, 1000)
val aa = => A(ai))

val th = new ichi.bench.Thyme
th.pbenchOff(){ fold(a,  _ + _) }{ fold2(aa, _ + _).i }

Now instead of 13% longer it takes 110% longer:

Significantly different (p ~= 0)
  Time ratio:    2.10579   95% CI 2.08184 - 2.12975   (n=20)
    First     352.6 us   95% CI 349.7 us - 355.5 us
    Second    742.4 us   95% CI 736.6 us - 748.3 us

And we can just keep going along these lines, adding abstraction and accumulating increasingly bad penalties without any warning.

So, bottom line is that the reason we “don’t need evidence” is that we already have a ton of it, and creating more is time-wasting pedantry.

Hopefully you’ve learned something from these microbenchmarks so it’s not a complete waste of time.

Edit: in case it isn’t obvious, the opaque type versions either wouldn’t compile, or would produce code identical to the primitive versions (by design). Either way is fine, really–it’s thinking you’re okay but it not being okay which really gets you.


I don’t think eyalroth is disputing that the old way (wrappers) can degrade performance. I think he is saying that the new way (opaque types) can also possibly degrade performance. I didn’t think that was possible, but I don’t know enough about it to be 100% certain. Can a Scala compiler expert confirm that opaque types can’t possibly degrade performance compared to just using basic types?


I’ve never implied that there is a mistake. What I said is that the performance gain may very well be quite insignificant for most scenarios (and that cannot be demonstrated with micro benchmarks), so merely claiming that “opaques are better” is misleading without giving any real world context, examples and benchmarks.

FYI you can somewhat emulate opaques in Scala 2 (credit):

object Logarithm {
  type Logarithm
  def apply(d: Double): Logarithm = math.log(d).asInstanceOf[Logarithm]
  private def unwrap(log: Logarithm): Double = log.asInstanceOf[Double]

  implicit class Ops(x: Logarithm) {
    def toDouble: Double = math.exp(unwrap(x))
    def + (y: Logarithm): Logarithm = Logarithm(x.toDouble + y.toDouble)
    // Multiplication of the represented values is addition of the raw logs.
    def * (y: Logarithm): Logarithm = (unwrap(x) + unwrap(y)).asInstanceOf[Logarithm]
  }
}
You mean, despite people-whose-job-it-is-to-know basically all agreeing that this is the way to do it, you want a wide variety of people who have an extensive code base to make wide-ranging changes that they expect will trash their performance, and carefully document the changes and the impact; and they should do all this in order to save you the trouble of having to learn how to tune Scala code for performance?

No thank you. You’re asking for way too much. Sometimes it’s appropriate to defer to experts. It’s never appropriate for the path to expertise to be obscured, but sometimes things are too burdensome to make obvious to non-experts.

Also, you missed the extends AnyVal on the implicit class Ops. This can be critical for Scala 2 performance (the JVM doesn’t always notice it can elide the wrapper class). And yes, I know the trick already.

This is absolutely not what you can conclude from the situation you described. That something isn’t heavily documented doesn’t mean that it’s misleading. Unproven is different from wrong. Non-obvious is also different from wrong.


I’m not a compiler expert, but I’m assuming it works like this: if B is an opaque alias of A, the compiler will first type check assuming A and B are different types, and then replace B by A.

For example:

opaque type MyFloat = Double

def calculate(x: MyFloat, y: MyFloat): MyFloat = ???

will check that arguments and return value are MyFloat, and after checking that will rewrite it to:

def calculate(x: Double, y: Double): Double = ???

By design, the performance is exactly as if Double had been used directly.


Context worth noting, in support of @Ichoran’s point: this proposal originally came from Erik Osheim, one of the leads of the Spire project, who is one of the experts on Scala performance. This isn’t coming from some random nobody – it was designed by folks who know the system extremely well, specifically to support performance-critical, strongly-typed libraries…


I have the notion that you’ve completely missed my point.

I was pointing out how the speaker of the original talk in this thread explains that it’s important for us, the users, to demand benchmarks for Scala features (6:44); yet he also later describes how opaques perform better and explains the reasoning behind this (1:00:37), without pointing to any benchmarks that prove it, despite having explained earlier how important those are.

This discussion is not about questioning the performance of opaques; it is about calling for more transparency from Scala’s lead developers. Sure, you may deem this a waste of time, but that is exactly what the speaker is advocating for.

Partly, yeah, I missed your point. So did you, though.

Without watching it (I don’t have time) I can’t tell whether this is an inconsistency in the speaker’s point of view. If he means, “It’s important to know how fast things are–please just benchmark them once and tell us, instead of making each of us do it over again”, yeah, that’s a reasonable desire. If he means, “it’s critical to publish benchmarks that demonstrate incontrovertibly what everyone already knows”, no, that’s rubbish, whether or not he contradicts himself later on by lauding something that isn’t benchmarked this way.

If you’re merely critiquing the speaker, I don’t have an opinion to offer as to whether there is hypocrisy here. I’d have to watch the video. Except I don’t really care about the video. I do care about opaque types.

This is ridiculous.

First, you are mischaracterizing yourself because you are actually questioning the performance of opaques. For example:

You wrote all that stuff. This isn’t what someone says when they’re “not questioning the performance of opaques”. Of course you are. Repeatedly.

Secondly, there’s absolutely no issue with the transparency here. You admit that you aren’t familiar with writing high-performance Scala code. e.g.:

The only lack of transparency here is not always being willing to hold the hand of someone who isn’t willing to do their homework.

Don’t get me wrong–I think hand-holding is fantastic when you can spare the time for it. Education is very important. But, look, you’re right; not everyone writes code that has bottlenecks for which reducing boxing is important. So the educational mission is not that important. The feature is, especially for people who write the fast code that everyone else gets to call and benefit from.

The decisions about opaque types were made openly, for reasons and in discussions that are public. The supposed benefits can be checked by tools that are freely available with plenty of documentation for how to get the stuff working. Is it easy and simple to do so? No. Unfortunately, performance testing on the JVM is difficult, especially for low-level features like this which tend to sneak into cracks which profilers can’t even see.

But it’s absolutely transparent.

So, in summary: opaques by design improve performance by making it structurally impossible to fall into a particular trap that AnyVal was supposed to avoid but often didn’t. People who are experts in writing high-performance code benefit from exactly the kind of guarantee that opaques deliver.


I indeed digressed, but that digression actually led to events that demonstrate the criticism that the speaker was talking about.

For what it’s worth, I don’t think the speaker is a hypocrite, but rather that he acknowledges that he is part of the problem. You could watch just the specific parts I’ve linked; there’s no need to listen to the entire talk.

He encourages the community to call out experts (like himself) when they claim that a certain feature performs better without providing enough reasoning and measurements, which leads the community to blindly accept those claims as general practice without understanding the full context and subtleties.

I used opaques as an anecdote to emphasize the speaker’s point. I am neither questioning their performance, nor the reasoning behind them, nor the transparency regarding them; after all, they are still WIP.

These are not examples of me questioning their performance. These are examples of me (a) asking to see results (transparency, not necessarily doubt), and (b) trying to emphasize that micro-performance gains may very well be irrelevant to most developers, as they are often overshadowed by other performance issues; i.e., you can choose to opt for a faster car, but that won’t do you any good if traffic is constantly jammed.

I only admitted to not having been familiar with AnyVal, nor to having used it. There are so many other scenarios where high-performing code is required – DB querying, heavy regex usage, XML processing, tag search indexing, networking (be it HTTP, SSH or what not), heavy disk usage, etc. – that do not require a boxing-mitigation solution in any way.

Encouraging developers to blindly use every micro-optimization technique out there whenever they write “high performing code”, without considering whether that optimization makes any difference, is the root of all evil. Those micro-optimizations have other costs, such as complexity and readability, and should not be used blindly.

I am expecting that when opaques are published, they will be documented accordingly; that is, some micro-benchmarks should be published with them – similarly to collections’ performance characteristics – along with a clear explanation of their intended use: that they are an advanced micro-optimization feature which should be used sparingly, only to solve specific performance issues after other means have been ruled out; that merely using them does not guarantee any significant overall performance gain; and that users are encouraged to benchmark their applications.

I don’t think this is too much to ask – after all, this is pretty minimal wording and simple benchmarks that I’m asking for. These may seem to you too trivial to even bother including, but you are an expert, and the audience of this documentation (Scala’s documentation) are everyday non-expert developers.

That’s not their intended use. Their intended use is as an opaque type alias, to reduce complexity, and increase readability. No feature should be used blindly, but whether they should be used sparingly or not is a stylistic choice. That they have better performance than AnyVal wrappers means that they can be used in performance-critical code, not that they shouldn’t be used outside of performance-critical code.


Given you are already using AnyVals, then yes, they are definitely less complex and more readable (I still have some opinions on their currently proposed syntax, but that does not change this motivation).

The question is then why even bother using AnyVal or an opaque in the first place. It may end up resulting in a style-centric choice between opaques and case class wrappers, but overall it would incur some added complexity to the language (and any code base where opaques are being used), as this is an additional feature that a developer needs to be familiar with.

To reduce complexity and increase readability.


I’ve never seen a case where AnyVal is used because it is a more readable and less complex solution, and I believe that the motivation behind the feature was always about performance:

Properly-defined user value classes provide a way to improve performance on user-defined types by…

Value classes are a new mechanism in Scala to avoid allocating runtime objects. This is accomplished through the definition of new AnyVal subclasses.

Sorry, I meant for opaque types, not for AnyVal.


Sure, that’s an educational mission. That’s valuable. It’s not always practical, because these things take work, but it’s great when we can have it.

I’ve invested time to do this personally, too: at Scala Days 2013 (I think?) I gave an entire talk on reasoning about and measurements of the performance of various ways of handling error conditions. But even for that talk, getting real-world usage differences was overly burdensome and I didn’t really do it.

Fair enough, but the people writing the regex engines, XML processors, search strategies, and network stack all have to worry about boxing. (DB and file access have so much overhead that boxes are irrelevant even in the internals.)

You rely all the time upon people caring about this. For instance, earlier versions of Scalaz used ropes of boxed characters for string processing, which ended up being atrociously slow (like 100x slower) in common use cases.

I…ah…what?! I have no idea what you’re trying to claim any more. It looks like you’re contradicting yourself from post to post and even within a post. Maybe there’s some subtlety in what you’re claiming that isn’t coming through. Maybe you have an odd definition of “questioning” (e.g. “flatly rejecting any possibility of” as opposed to “I am not sure”).

Um…which is it? “Simple benchmarks” or “multiple scenarios each of which are more complex than microbenchmarks”?

I did post (micro)benchmarks, and you didn’t think much of them.

Nobody is doing this, though. The question is about providing a feature that allows a zero-cost abstraction to increase readability and correctness.

Right now, if you want improved correctness (which may or may not improve readability also) you need to have a wrapper class. But there’s a usually-unavoidable penalty when you use a wrapper class, and it’s kind of clunky. Opaque types are simultaneously penalty-free and less clunky! Those are both upsides compared to the current state of affairs. The downside is that there’s another thing to know about.

(Aside–that quote is badly dated. As far as performance-related woes go, failing to design and/or optimize when needed is the source of at least equal evil.)

I…don’t even…

Are you noticing what I’m writing?

Opaque types allow people to ask the compiler’s type system to distinguish, at compile time, sets of instances of the same underlying type. One can do this in a relatively low-boilerplate way, and because it has very low to zero cost, one basically never needs to worry about avoiding this due to performance concerns, unlike wrapping with classes. So you can use the feature whenever it makes sense for correctness and clarity.
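As a concrete illustration of using the feature purely for correctness and clarity, here is a hedged sketch (Scala 3 syntax; the `Probability` name and its invariant are invented for this example): the invariant is enforced once at construction, after which every value is just a raw Double at runtime.

```scala
object Prob:
  opaque type Probability = Double

  object Probability:
    // Validate once at the boundary; afterwards a Probability is just a
    // Double at runtime, with no wrapper allocation anywhere.
    def apply(d: Double): Probability =
      require(d >= 0.0 && d <= 1.0, s"not a probability: $d")
      d

  extension (p: Probability)
    def asDouble: Double = p
    def complement: Probability = 1.0 - p

@main def probDemo(): Unit =
  import Prob.*
  println(Probability(0.25).complement.asDouble) // 0.75
  // Probability(1.5)  // throws IllegalArgumentException at the boundary
```

Nothing about this sketch is performance-critical; it is simply the compiler distinguishing a subset of Doubles, which is the correctness-and-clarity use described above.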

The existing feature, AnyVal, was supposed to but does not do this in practice, and should be retired.
