Why is Scala sometimes much slower than other FP languages?

Looking at TechEmpower, most Scala implementations rank below Haskell, OCaml, and F#; some implementations are even slower than Elixir/Erlang, and I’m not even talking about Rust. So why do Scala servers consume much more resources and perform worse than implementations in those languages? Is this a JVM/Scala impedance mismatch or something else?

When you post something like this, it would be very helpful if you included a link to the source. It is very hard to comment on performance tests without having the exact methodology for the tests that you are referring to.

For example, if the tests are all really short-running and the timing includes the start-up of the JVM, then you have a clear answer. That would be a methodology problem, not a language problem. Anyone using Scala for that type of workload, like Lambdas, would typically compile their Scala to a native image instead of running on the JVM in that situation.
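For the curious, here is a minimal sketch of what "compile Scala to a native image" can look like, using the community sbt-native-image plugin; the plugin version and the `example.Main` class name are illustrative, so check the plugin's README for current values:

```scala
// project/plugins.sbt — pull in the sbt-native-image plugin
// (version number is illustrative; verify against the plugin's releases)
addSbtPlugin("org.scalameta" % "sbt-native-image" % "0.3.2")

// build.sbt — enable the plugin and point it at your entry point
// (example.Main is a hypothetical class name)
enablePlugins(NativeImagePlugin)
Compile / mainClass := Some("example.Main")
```

Running `sbt nativeImage` then invokes GraalVM's `native-image` tool to produce a standalone binary that starts in milliseconds, so a short-running benchmark no longer pays the JVM start-up and warm-up cost.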


Sorry, my bad, here is the link

Thanks. Looking through the data, I’m not convinced that your assertion about Scala being slow is really true. These tests are little tests of full web frameworks. There is a lot more going on than just the language. For example, I want to switch this around and ask, “What is the fastest language?” Unsurprisingly, Rust frameworks are at the top of every test. However, Rust frameworks also hold the slowest spot on EVERY TEST. So what is it? Is Rust fast or slow? The answer is that it can be both. Different frameworks get optimized for different workloads and use cases. All of these tests are fairly small. I can easily imagine a framework including some extra stuff that is useful for most real use cases that would kill it in tests like this.

To highlight this, I would point out what happened with Scala/Akka for gRPC calls recently: Akka gRPC update delivers 1200% performance improvement (so what happened?) | Lightbend. Did the fact that Akka had been slow for gRPC calls imply that Scala was slow? No. It implied that they hadn’t optimized for gRPC yet, and specifically for the use cases that appeared in those benchmarks.

I haven’t gone through to look at what they are doing with Akka or Play in these tests, or to see if their code could be optimized by turning certain things off. But my guess is that the generally poor performance of those frameworks in these tests is because they come out of the box with functionality that benefits many real-world uses but hurts these micro-benchmarks, and that functionality isn’t being turned off here.


Looking at this microbenchmark and a few others, does that mean that fs2, http4s, Circe, and other Typelevel libraries, except for Cats Effect, are not optimized for anything at all?
Asking this because a lot of people recommend going with the Typelevel/ZIO stacks rather than Akka.

I don’t know about http4s or circe, but fs2 is heavily performance-tuned.

In general, based on your posts so far, you seem eager to jump to conclusions that the data in front of you don’t actually support. I’d advise moving a bit more cautiously.

The usual pitch for frameworks such as Play and http4s centers on reliability, correctness, maintainability, and ease of development, not on performance in microbenchmarks, which (as Mark notes) are unlikely to resemble real-world usage profiles. It’s a fairly rare web app or web service where the performance bottleneck is in the framework or the language itself.

If you know exactly what you intend to build and are confident that performance of your system will be bounded by what these numbers are measuring, then great. But if you don’t know that and aren’t confident of that, then these Techempower numbers aren’t meaningful to you.


Another factor that occurred to me is “security by default”. Play has a number of security filters that are there by default. They inevitably slow down microbenchmarks, but if I’m building a real website, I’d prefer to have the security.
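As a concrete illustration, here is a hedged sketch of how one might switch those default filters off in `application.conf` for a benchmark-only build; the filter class names are the Play 2.6+ defaults, so verify them against the docs for your Play version:

```hocon
# application.conf — benchmark-only sketch: disable Play's default
# security filters (CSRF protection, security headers, allowed hosts).
# Do NOT ship this to a real site; these filters exist for a reason.
play.filters.disabled += "play.filters.csrf.CSRFFilter"
play.filters.disabled += "play.filters.headers.SecurityHeadersFilter"
play.filters.disabled += "play.filters.hosts.AllowedHostsFilter"
```

A fair comparison would either disable these for the microbenchmark or note that the other frameworks aren’t doing the equivalent work per request.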
