Scala Heaven and Hell — Michael Slinn

Dotty (Scala 3 Preview) by Michael Slinn was not quite what I thought it would be. The prologue of the presentation is an excellent characterization of Scala Heaven and Scala Hell, and I highly recommend watching it as a Scala user, even if you don’t really care about Dotty.

Java 11 and beyond is a direct result of Scala Heaven, while Kotlin is a direct result of Scala Hell.

Scala 3, or Dotty, is an honest attempt to address some of the worst aspects of Scala Hell, for example the insanity created by ‘implicit’. It will be interesting to see how the Kotlin community responds to Scala 3, as there is some genuinely interesting work going on there. Hopefully, the Java designers are paying attention too.

If people are interested in discussing the topic of Scala Heaven and Hell, let me know, and I will create the topic somewhere other than ‘Announce’.

If people are interested more in Scala 3, there is probably already a discussion going on about that.

kind of related: https://twitter.com/philip_schwarz/status/1211935394671005696

That was an interesting talk by Michael Slinn on both the technical aspects of Scala 3 and administrative issues within the Scala community. Let’s hope these community issues get worked out so Scala usage can greatly increase in industry and academia.

I would like to reply to the poster’s offhand comment about “the insanity created by ‘implicit’”. One might as well refer to “the insanity created by ‘var’”. I think a more accurate characterization would be “the insanity created by the MISUSE of ‘implicit’”. Properly used, “implicit” is a useful tool that would be sorely missed were it not available. Ditto for ‘var’.

I would support “the insanity created by the MISUSE of ‘implicit’” in that it was misused in several ways. Scala 3 keeps some of the functionality of ‘implicit’ but makes it a little harder to misuse. Since I have not tried Scala 3 yet, I don’t have a final impression.
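For anyone who has not followed the redesign, here is a rough sketch of the change (my own minimal example, not taken from the talk): Scala 2 overloads a single ‘implicit’ keyword for both definitions and parameters, while Scala 3 splits those roles into ‘given’ and ‘using’.

// Scala 2: one keyword for everything
//   implicit val ec: ExecutionContext = ExecutionContext.global
//   def run(task: () => Unit)(implicit ec: ExecutionContext): Unit = ...

// Scala 3: `given` declares the instance, `using` requests it
import scala.concurrent.ExecutionContext

given ExecutionContext = ExecutionContext.global

def run(task: () => Unit)(using ec: ExecutionContext): Unit =
  ec.execute(() => task())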

Kotlin avoided implicit for good reason. If Scala 3 avoids enough problems with implicit-like functionality, maybe Kotlin will revisit the idea.

Personally I avoid ‘var’ as much as possible and only resort to using it in well-understood conditions.

My experience with Scala (since 2005) is that it takes a lot of discipline to not misuse Scala in general. I look forward to kicking the tires on Scala 3 soon.

Have you seen this talk by Martin Odersky on “Implicits Revisited”?

@kolotyluk (cc @Russ) the following is partly based on that talk:

https://www.slideshare.net/pjschwarz/scala-3-by-example-better-semigroup-and-monoid (download for original quality; the slides are a bit grainy when viewed on the site).

“Implicits Revisited” is a good presentation. I did not learn much that was new after seeing Michael Slinn’s talk, but I am withholding judgment on ‘givens’ until I play with them a little. I have been burned by ‘implicit’ too many times, so I remain skeptical. It is good that Odersky finally admitted to the problems with ‘implicit’, which he had never done before.

I found “Scala 3 by example” last week, and it led me to Michael Slinn’s presentation.

I installed Visual Studio Code with Scala 3, so now I am looking at Slinn’s examples…

It’s a bit absurd that earlier he says “we” (the users) should not just rely on some paper claiming that option A is better than option B without any benchmarks, and later on he says opaque types are a great addition.

The main motivation behind opaques is to improve performance; however, no clear benchmarks are provided, not even in the SIP. All that is mentioned is this paper, which examines theoretical performance differences based on bytecode differences, but there are no real-world benchmarks. Moreover, the SIP mentions that opaques have not been benchmarked at all.

Unless I am mistaken, opaque types are just a special kind of type alias, hence they have no runtime cost.

They are “special” in that they cannot be used in place of the original type. Their purpose is to provide distinct type names for data that is represented by a basic type. For example, an airport code such as “LAX” is just a string, but for type safety it is a specific kind of data that should not be interchangeable with arbitrary strings, so it would be declared as an opaque type.
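A minimal Scala 3 sketch of that idea (the names here are only illustrative):

object Airports:
  opaque type AirportCode = String

  object AirportCode:
    def apply(code: String): AirportCode = code

  extension (code: AirportCode)
    def value: String = code

import Airports.*

val lax: AirportCode = AirportCode("LAX")
// val oops: AirportCode = "LAX"   // does not compile outside the defining object
// At runtime an AirportCode is just a String; the distinction exists only at compile time.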

Semantically they are not much different from a wrapper; they are actually less useful, as you cannot perform instance checks (isInstanceOf) with them. The motivation behind them, as mentioned in the SIP, is to improve on the performance of wrappers.

What would be the point of benchmarking an opaque type if there is no runtime performance penalty? You might as well just benchmark the original, underlying type.

If you are interested in the performance gain compared to a wrapper, then you have to benchmark the wrapper and the original type.

Yes, that was my intention - a benchmark comparison between a simple wrapper, an AnyVal, and opaques.

How do you structure that benchmark in a useful way? The problem is and always has been that AnyVal's performance is unreliable. It’s totally efficient until you do something wrong, and then boom – you have unexpected boxing all over the place. There are rules, but they’re persnickety enough to be hard to follow reliably. In practice, a number of people consider AnyVal a somewhat failed experiment because, empirically, they expected it to be efficient and it wasn’t. Benchmarks have nothing to do with it – that inherent difficulty of using the mechanism performantly is the issue.

And the cost of wrappers depends on the use case. Many people have found, in practice, that the cost is important to their situations. What does a benchmark add to that? This is a problem in real-world, data-heavy Scala, that people actually complain about…

I’m not very familiar with AnyVal and its performance issues; I’ve never found any need for value classes, as my bottlenecks were in other areas. If I understand you correctly, the problem with them is not their (potential) performance characteristics, but their misuse due to… their poor semantics? Poor documentation?

This is not what I understood from the SIP for opaques. What I understood is that AnyVals are simply not good enough to solve specific performance issues. In that case, I would expect a comparison between them and the newly proposed feature, demonstrating that indeed the new one is good enough. But then again, maybe I didn’t understand the problem that the SIP is trying to solve.

Agreed. The problem is that value classes still trigger boxing under the hood in many circumstances, including when they are used as collection elements or as anonymous function parameters/return values, and I don’t see any rules that would let me avoid these usages and still end up with clean, idiomatic code.
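To make those two pitfalls concrete, here is a small sketch (the Meters wrapper is my own illustrative example, not from the SIP):

// A typical AnyVal wrapper.
final case class Meters(value: Double) extends AnyVal

// Boxed as a collection element: List[Meters] erases to a list of Objects,
// so every element is allocated as a boxed Meters instance.
val path: List[Meters] = List(Meters(1.0), Meters(2.5))

// Boxed as a function parameter/return value: Function1 is generic,
// so the argument and the result are boxed on every call.
val double: Meters => Meters = m => Meters(m.value * 2)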

Here I agree with @jducoeur. If opaques work as intended, they simply won’t be boxed, as opposed to value classes. What kind of (micro) benchmark scenario would you write? If it’s code where value classes never get boxed, there won’t be a difference; if it focuses on exactly the cases where value classes get boxed, it will exaggerate the effect relative to real-world code. Basically, all you can say is that opaques should have at least the same performance characteristics as value classes and will usually do better - how much better depends on your application’s usage patterns, and you’ll have to benchmark this for yourself to get a feeling for what the impact may be. I’m not sure that’s a worthwhile investment of time, unless you’re profiling a hot spot that uses this kind of type anyway.

In addition, benchmarks only make sense against the implementation proper, so a SIP by its very nature cannot really provide one.

But there might be other problems that we didn’t anticipate that will hinder their goal (better performance). This is often the case with coding problems (or any problem, for that matter), and this is why we test our assumptions – either with tests or benchmarks, depending on the goal.

The same scenarios where AnyVal was deemed to cause a performance problem, which are the scenarios that opaques are supposed to solve. I would expect to see a total gain in performance; otherwise, it might be that opaques indeed solve a certain (micro) problem but introduce a new problem that renders the micro improvement irrelevant. In that case, I would deem opaques to be somewhat of a “premature optimization”.

I’m not sure about the exact methodology regarding SIPs, but I wouldn’t expect the feature to be publicly available and recommended before we can verify that it actually achieves its goals. Perhaps releasing it as an experimental feature (with clear documentation indicating as much) is the way to go.

What should these be, given that opaques work as intended, i.e. they only exist at compile-time and are fully replaced with the underlying type at runtime? I would assume that comprehensive testing is done to verify that they work as intended, but I still don’t see the benefit that a generic benchmark would offer.

Just to make my point more tangible… I don’t have a dotty env here, so I just ran a JMH benchmark with AnyVal wrappers vs. shapeless tagged types (as a drop-in for opaques, encumbered with some runtime overhead at creation time), exercising this:

def withStrings[T](f: String => T): List[T] =
  (0 until 1000000).foldLeft(List.empty[T])((a, c) => f(c.toString) :: a)
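(For concreteness, a wrapper and tagged-type setup along these lines is what gets plugged into withStrings; the names below are only illustrative, not the exact definitions I used.)

// Illustrative definitions for the two variants being compared.
import shapeless.tag
import shapeless.tag.@@

final case class Wrapped(value: String) extends AnyVal   // AnyVal wrapper

trait Marker
type Tagged = String @@ Marker                            // shapeless tagged type

// Plugging each variant into the benchmark body:
val wrappedRun: List[Wrapped] = withStrings(Wrapped(_))
val taggedRun: List[Tagged]   = withStrings(s => tag[Marker](s))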

This reports 25.7 ops/s for tagged types vs. 23.5 ops/s for AnyVal wrappers. What have I learned now?

dotty as a whole is still experimental.

What do you mean by “work as intended”? Again, as far as I can tell, they are intended to improve performance in certain scenarios, and that needs to be verified with experiments (benchmarks), not just theory.

I’m not suggesting generic micro benchmarks, but rather benchmarks of entire scenarios where AnyVals were identified as the cause of reduced performance. Again, I haven’t faced these scenarios myself, so I’m not sure which they are, but I would imagine them to be scenarios from a full real-world application.

I’m not criticizing dotty. I brought up this whole anecdote in reference to the original talk, where on one hand the author said not to rely on recommended techniques without benchmarks, but on the other hand he seems enthusiastic about opaques even though benchmarks have not shown that they achieve their goal.

" i.e. they only exist at compile-time and are fully replaced with the underlying type at runtime"

But this (elements of a generic container) is exactly such a scenario. Of course you can come up with broader, more “real world” scenarios - in these cases I’d expect a smaller payoff than in more focused cases, because boxing/unboxing will make up a smaller part of the overall runtime. And in “degenerate” scenarios there may be no payoff at all, since no boxing for AnyVal is ever triggered. How do I judge what a “meaningful” benchmarking scenario from this range should be?

The bottom line is that I’d always expect the payoff to be >= 0 - otherwise, opaques would have failed their purpose, indeed. However, if they “work as intended”, this should follow logically, because they just remove boxing/unboxing, but don’t add any overhead compared to vanilla wrappers. Basically you seem to be asking for a benchmark that proves that boxing/unboxing comes with a cost (which I think is obvious) and quantifies this cost (which I think is not possible in a generic, meaningful way).

I fully agree that if you are trying to tackle some bottleneck in your code by replacing wrappers with opaques, you should by all means benchmark your solution, I just don’t see the benefit of a generic benchmark.

(That all being said, I wouldn’t exclusively focus on performance characteristics, anyway - to me it seems that opaques would also make for nicer code compared to AnyVal case class wrappers, assuming the right use cases, of course.)

I didn’t think you were. But you were suggesting that this feature be marked as experimental - that’s what dotty is.

Okay, but those of us who write high-performance code routinely are painfully aware of it, and of how hard it is to avoid unexpected boxing, which carries a local penalty of 3x (or 20x!).

Opaque types, by design, cannot box, so those problems go away. If opaques are done the way this SIP indicates, there is no mechanism by which they could deliver unexpected boxing.

So we could, for fun, come up with a benchmark that shows that it’s a disaster. (Changing something like x = x + 2 to x = (x + 2).tap(cuml += _) would probably do it, with the standard library tap, where x is some basic numeric type.)
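Spelled out, the kind of snippet meant there looks something like this (the variable names and starting values are just for illustration):

import scala.util.chaining._   // brings in the standard-library `tap` extension

var cuml = 0
var x    = 40

x = x + 2                      // plain Int arithmetic, no boxing

// `tap` goes through the generic ChainingOps extension, so the Int is boxed
// to java.lang.Integer on every call before the side effect runs:
x = (x + 2).tap(cuml += _)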

But it’s pointless. We know what the problem is with AnyVal, and that it’s often really bad in performance critical code. So why waste the effort making benchmarks?

If you want to convince yourself that it’s a problem, you’re of course welcome to create your own benchmarks.
