What are some of the advantages of using Scala over Rust, C++, and other native languages?

Come on! You’re stretching a point here. You don’t need to set up a type massacre, a.k.a. erasure, to translate types across type systems. The massacre discards both the bad and the good – hence my earlier hint in this thread about the loss of useful information.

Thank you for working out an example of what I call loss of useful information.

IMO all the evidence indicates that you need to do that in practice, because almost all type systems don’t have reified generics. If you have an idea of how to universally teach existing Java or JavaScript code to reify types, then I’m curious.

@tarsa:

My point was supposed to be about purity. Sorry if that wasn’t clear. You can be functional in the loose sense without having complete immutability, because of how ownership linearizes access to mutable values. You can freely hand out references to things you would have been able to mutate, as long as you promise not to mutate them any more, and you can use mutable things locally without fear of accidentally sharing them with another thread. You gain all the practical safety benefits of purity without having to be pure, since the compiler yells at you when you do something problematic.

So, multithreaded functional programming in Rust–sure thing, and it’ll be fast, too. Yes, you will constantly have to think about ownership, but you always do in Rust (even for immutable things, since immutable things need to have their resources recovered too). Part of that is not throwing everything into Arc<> just because it’s easy. Even Arc<> isn’t as easy as GC, but that doesn’t mean Rust “prohibits efficient multithreaded functional programming”, it’s just a bit more of a hassle.

I would still choose Scala for multithreaded FP in most cases, though, precisely because it’s easier! I don’t just want to not be prohibited.

Well, I can’t speak for every case, but I have used free lists, memory pools, and other custom allocators in C++ in almost every case where malloc overhead was a bottleneck, and gotten performance improvements of 2-10x. Usually your memory usage isn’t dominated by large numbers of different randomly-sized objects whose lifetimes are unrelated; if you’re building a tree, for instance, you have a lot of nodes which are all the same size and will be deleted in batches. As the benchmark shows (with over 3x better performance in C with apr_pool, and Rust beating Java by using typed_arena), custom allocation can have a huge impact.
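
To make that concrete, here’s a minimal free-list sketch in Scala (Node and NodePool are names invented for the example; the idea carries over directly from the arenas and pools above):

// Recycle same-sized tree nodes instead of allocating a fresh one each time,
// trading allocator/GC pressure for a little bookkeeping.
final class Node(var key: Int, var left: Node, var right: Node)

final class NodePool {
  private var free: List[Node] = Nil

  // Hand out a recycled node if one is available, else allocate.
  def acquire(key: Int): Node = free match {
    case n :: rest =>
      free = rest
      n.key = key; n.left = null; n.right = null
      n
    case Nil => new Node(key, null, null)
  }

  // Return a whole subtree to the pool in one batch, like a pool reset.
  def releaseTree(n: Node): Unit = if (n != null) {
    releaseTree(n.left)
    releaseTree(n.right)
    free = n :: free
  }
}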

This isn’t to say that GC never wins, or that it isn’t a better default option when you’re creating a bunch of small long-lived objects (i.e. stack allocation isn’t an option), but with effort you can almost always beat it.

But why go to the effort? I still think it’s a win for Scala–it’s just easier not to have to care.

@mghildiy:
Scala’s type system contains higher-kinded types, which are necessary to talk about types that express relationships between other generic types. A lot of work has been done, especially in Haskell, to develop machinery that allows your programs to be pure (referentially transparent, etc.), and that machinery translates fairly directly to Scala (see the cats or scalaz libraries). Rust’s type system can’t express the more advanced parts of that machinery (see Gentle Intro to Type-level Recursion in Rust: From Zero to HList Sculpting. - BeachApe. for about as much as you can do), and the deeper in you go the more you have to rely on macros.
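
To make the higher-kinded part tangible, here is a minimal Functor sketch in plain Scala; cats and scalaz ship much more complete versions of the same machinery, so treat this as an illustration only:

// F[_] is a higher-kinded type parameter: we abstract over the
// type constructor (List, Option, ...) rather than a concrete type.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object Functor {
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }
}

object FunctorDemo {
  // Written once, this works for every F that has a Functor instance.
  def addOne[F[_]](fa: F[Int])(implicit F: Functor[F]): F[Int] =
    F.map(fa)(_ + 1)

  val res: List[Int] = addOne(List(1, 2, 3))  // List(2, 3, 4)
}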

@Hossein:
I think you’re just much more familiar with C++ than Scala? I give points to Scala over C++ for things when it’s easier to do in Scala, not only when it’s impossible in C++.

Yes, and some kinds of instructions that are easy to give in Scala are hard to give in C++, which can promote unsafe code. For instance, covariance and contravariance are difficult to express, and templates are “dynamically” resolved at compile time by default: code is generated, and if it compiles it compiles, even when it doesn’t make any sense, or when it triggers type conversions you didn’t intend. That’s why I gave the edge to Scala in this regard.
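
As a sketch of the contrast (the toy class names are made up): Scala lets you declare variance on the type parameter itself, and the compiler then checks every use site.

class Animal; class Cat extends Animal

// +A is covariant: a Producer[Cat] may stand in for a Producer[Animal].
trait Producer[+A] { def produce(): A }

// -A is contravariant: a Consumer[Animal] may stand in for a Consumer[Cat].
trait Consumer[-A] { def consume(a: A): Unit }

object VarianceDemo {
  val cats: Producer[Cat] = () => new Cat
  val animals: Producer[Animal] = cats     // OK by covariance

  val anyAnimal: Consumer[Animal] = _ => ()
  val catSink: Consumer[Cat] = anyAnimal   // OK by contravariance
}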

I’m not sure what you mean. What do you think the point of RTTI and dynamic_cast is?

I’m not sure what you mean. When do you think erasure happens?
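
Just so we’re talking about the same thing, here is erasure as observable on the JVM (a REPL-style sketch; the results are shown as comments):

object ErasureDemo {
  // After erasure, both lists have the same runtime class:
  val same = List(1, 2, 3).getClass == List("a", "b").getClass  // true

  // So a runtime check cannot see the element type:
  val xs: Any = List(1, 2, 3)
  val oops = xs.isInstanceOf[List[String]]  // true, with an "unchecked" warning
}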

I’m not sure why you say this. First, late-binding can be guaranteed to be safe as long as you are forced to obey appropriate type signatures. Second, I don’t see what this has to do with Scala vs. C++; C++ classes can have vtables, and even C has dynamic linking.

That information is lost, but why lose it if you need it? Generics give you the option of erasing type information, but you can always set things up so that at compile time you haven’t lost anything.
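
A tiny sketch of that option (Box is an invented name): as long as the type parameter is propagated through the signatures, the compiler never loses it.

// The element type A is carried through every signature,
// so it is statically known at every call site; nothing to recover.
final case class Box[+A](value: A) {
  def map[B](f: A => B): Box[B] = Box(f(value))
}

object BoxDemo {
  val b: Box[Int] = Box(21).map(_ * 2)  // A = Int, known at compile time
  // Only an explicit widening such as `val a: Box[Any] = b` discards it.
}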

And anyway, I thought you were arguing for better knowledge of types at compile-time and suggesting that C++ was better at it?

In Scala, there are mechanisms to not lose track (propagate type parameters), and mechanisms to lose track but then recover it again (e.g. bind type witnesses). If you mean “normally I want to be able to recover this kind of type information at runtime, and Scala doesn’t help me as much as C++ does”, okay, fair enough, I agree.
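
Concretely, a sketch of both mechanisms, using scala.reflect.ClassTag as the bound witness:

import scala.reflect.ClassTag

object WitnessDemo {
  // Not losing track: the type parameter is simply propagated.
  def firstOf[A](xs: List[A]): Option[A] = xs.headOption

  // Losing track, then recovering: the ClassTag witness survives erasure,
  // which makes the `case a: A` pattern checkable at runtime.
  def onlyOfType[A: ClassTag](xs: List[Any]): List[A] =
    xs.collect { case a: A => a }

  val ints: List[Int] = onlyOfType[Int](List(1, "two", 3))  // List(1, 3)
}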

That would be a very loose sense of being functional. Functional programming is based on immutable data and avoiding changes of state. Higher-order functions are a much-needed addition to that, but they don’t define what functional programming is. Java has had closures in the form of anonymous classes since Java 1.1; that didn’t make Java functional. C++ has the const modifier, which creates a distinct type, so you can also control mutability, somewhat as in Rust. That didn’t make C++ functional either.

You can mix pure code with impure code, and in Scala that’s often practiced. But functional data structures have to be fully immutable (at least externally, to their users). If they aren’t, then you can’t apply the same reasoning to them as to fully immutable ones.
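
For example, a minimal persistent stack (sketched from scratch here): every “update” returns a new stack, holders of old references never observe a change, and that external immutability is exactly what licenses the usual equational reasoning.

// A persistent stack: push returns a new stack sharing structure
// with the old one, so no holder of a reference ever sees mutation.
sealed trait Stack[+A] {
  def push[B >: A](b: B): Stack[B] = Cons(b, this)
  def pop: Option[(A, Stack[A])] = this match {
    case Cons(head, tail) => Some((head, tail))
    case Empty            => None
  }
}
case object Empty extends Stack[Nothing]
final case class Cons[+A](head: A, tail: Stack[A]) extends Stack[A]

object StackDemo {
  val s1 = Empty.push(1)
  val s2 = s1.push(2)  // s1 is untouched; both share the Cons(1, Empty) cell
}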

Mutability control in Rust doesn’t make it functional, but it helps to avoid data races. These things are different. Rust also has an escape hatch with regard to immutability: RefCell, which allows for runtime checking of mutability (as opposed to the compile-time enforcement of the basic reference types).

I think the borrowing rules in Rust (you can have either a single mutable reference to an object or any number of immutable references) facilitate simple and reliable data caching. In C/C++ you can have pointer aliasing, so a C/C++ compiler isn’t always free to cache the data referenced by a pointer in, e.g., a register. Rust can always do it, because in Rust you can’t have two (or more) references to a single piece of data (e.g. the same byte) where at least one of them can be used to modify that data.

I tend to think that functional programming is based more on using functions than anything else–that the output of the functions contains what you want–and that immutable data is a way to avoid mucking things up when working that way. I agree that with the type systems available in most languages, “FP” has had to mean “immutable data or you’re in trouble”, and so now FP is pretty much synonymous with the latter.

struct X { value: u32 }

// Takes a mutable borrow, mutates, and hands the borrow back out.
fn inc(x: &mut X) -> &mut X { x.value += 1; x }

fn test(x: &mut X) {
    let y = inc(x);                     // y mutably reborrows x
    println!("x is now {}", x.value);   // Compile error: x is still borrowed by y
    println!("y is just {}", y.value);  // All cool
}

This is totally not immutable, is totally safe, and is about as functional as you can get in that the output is completely determined by the inputs, and you can’t muck up and use them again.

Anyway, the immutable-style FP is far more common, and Scala’s type system makes the standard constructs more workable than Rust’s does, to not stray too far off topic.


That doesn’t imply erasure.

I appreciate your curiosity. I, however, don’t have such an idea, nor did I ever claim to. I essentially don’t think shooting for a universal reification recipe is currently the right thing. The bidirectional type translation needs to be customised for each and every pair of languages. And, as such, erasure adds no value.

I see it the other way. Type erasure simplifies the compiler (the failure of Scala.NET is the most profound proof) and unifies the semantics of the various compiler backends. Reifying complex Scala types is a lot of effort, so you would need strong arguments in favor of reification to persuade people to undertake it yet again.

Indeed.

Me too. Like I did for functional programming.

You have a point here. And I do agree, as another example, that family polymorphism is natural in Scala but next to impossible in C++, whereas it’s the other way around for lightweight family polymorphism. The fact, though, is that the former is often attempted in Scala where the latter alone would suffice. OTOH, to be honest with you, I was never shown an appealing application of covariance and contravariance, so I tend to ignore them.

That’s unrelated to my point. Granted though; those are backdoors.

In Scala, there is no guarantee that a method will not be late-bound. On the contrary, a call to a C++ member function is guaranteed to be statically bound unless the function is virtual and the call is made via a pointer or a reference. The separation between the compile-time life of an object and its runtime one is thus well-defined.
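
To spell out the asymmetry with a sketch (invented names): in Scala, dispatch through the dynamic type is the default and needs no opt-in, while in C++ the same call is statically bound unless the member function is declared virtual.

class Base { def describe: String = "base" }
class Derived extends Base { override def describe: String = "derived" }

object LateBindingDemo {
  val b: Base = new Derived
  val s = b.describe  // "derived": resolved at runtime against the dynamic type
  // Declaring `final def describe` in Base is the only way to rule this out.
}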

Crafting such a type signature gets impractical very quickly and does not scale at all. Sometimes it’s not even possible. Again, I need to refer you to my posts on embedding λ-calculi in Scala and concept-based overloading.

Recall that the point here was about instructing the compiler to do type calculation. That is practised at compile time in C++, so virtual tables are simply off-topic. With dodging late binding being impossible, the story is totally different in Scala.

How do you get type-polymorphism that way?

How does that help with instructing type calculation exclusively at compile time?

That can lead to runtime type resolution. It relies on implicits after all.

Most likely so. But how many times would you ever need to perform my so-called bidirectional type translations? We’ve had one failure thus far (Scala.NET) and one erasure-unrelated success (Scala.js). This is whilst Scala is getting to the end of its lifetime (due to Dotty). So, all that type massacre for those two cases? One could certainly have scored similarly without any erasure and with comparable effort.

I’m not so certain. We need input from compiler devs (for each backend, i.e. Scala.JVM, Scala.NET, Scala.js and Scala Native) to assess to what extent type reification is feasible, what effort it would require, and what benefits it would provide (given that it wouldn’t be full reification, because of interoperability).

Please be careful with your phrasing. Dotty is expected to be Scala 3 – it’s not the “end” of Scala. There’s enough FUD around Dotty; we don’t need to feed into that.

How official is that? The backward incompatibility suggests a seismic shift.

Other people can come in with more definitive responses, but it is my understanding that this has been the plan from the beginning. As @jducoeur said, the idea that it is a completely separate language is just FUD. Odds are that, because of the number of breaking changes, Dotty becomes Scala 3.0, but it is a playground for new ideas, and they didn’t want to commit to any particular number when they started the process of designing it.

100% official, see e.g. Announcing Dotty 0.1.2-RC1, a major step towards Scala 3 | The Scala Programming Language, and also see almost any talk Martin has given in the past year or so.

The Dotty team are concerned to keep Scala 2 and 3 in close enough alignment that it will be possible for real projects to cross-compile across the two versions (perhaps with some extra care taken, a few compiler flags passed, that kind of thing). And in fact, that’s been happening for a while already now in the Dotty community build. Seems like definitely the same language to me.

That would be such a treasure to have. :slight_smile: Nevertheless, these all employ type erasure, don’t they? If so, one cannot really assess how much more (or less) difficult it would have been to score similarly without erasure.

I beg your pardon?

Thank you Seth for the confirmation and the link. Good to know.

Such cross-compilation is nice. However, I’m afraid that’s only a second choice when it comes to backward compatibility. One would ideally want to compile the same old code on the exact same platform on which all of one’s other Scala code is compiled. Setting different flags for different parts of the code is indeed doable, but still a hassle to me, even in the presence of sbt. YMMV.

The issue of backward compatibility in programming languages is an interesting one. A lot of the major languages of the past have chosen the route of never breaking anything. The problem is that over time they become bloated. C++ is the poster child for this, but Java is now getting closer and JavaScript seems to be aiming to become a dynamically typed C++ in terms of language complexity. However, we are also seeing other languages that are taking a different route and making breaking changes every so often. That allows them to improve the language by not only adding new features, but also getting rid of old features that need to go away. To see the importance of this, just look at variable initialization in C++. Stroustrup’s book on C++11 argued for using {} almost all the time, while Meyers supports always using auto, but {} and auto don’t get along. It’s a mess. I believe that one can initialize a variable in five ways in C++, and some of those ways have weaknesses that make it so they really shouldn’t be used. I think it would really benefit C++ and other languages to do what Scala is doing and get rid of old syntax every so often.

Don’t forget the transition from Python 2 to Python 3. Python 3 is already 10 years old, but Python 2 is still going strong and posing a major challenge. It’s an example of how transitions to incompatible versions of a language can be very problematic.


I don’t forget it, and I’m certain that the Dotty team doesn’t either. I feel that there were a number of flaws with the Python 2 -> 3 transition, but the biggest is simply the fact that Python is dynamically typed. As a result, it is a lot harder to find out about things that are broken.

I also understand that one of the major goals of scalafix is the ability to largely automate the transition.

It is fascinating to read the thoughts of fellow Scala programmers about backward compatibility. What I was trying to say when I brought that issue up was that the backward incompatibility caused by the transition from Scala to Dotty is considerable. And I pointed that out to explain how I got the impression that Dotty is a separate language. I now read that that’s not what the designers had in mind.

Now, back to the pluses of Scala over C++. As we were comparing the two type systems, I expressed my concern about the Scala one building on erasure. A number of chaps tried to defend that design choice – implying a richer type system for Scala. From what I can see, that claim remains unsubstantiated. The alleged benefit was that erasure facilitates making Scala multiplatform. For that to really be assessed, however, one requires a comparison with unerased languages targeting the same platforms as Scala, which at the moment we don’t have.