Metaprogramming and Compile-Time

Dear all,

How much of Scala metaprogramming is guaranteed (by the language spec) to take place at compile time? I insist on the language spec because I want to know how much of this the programmer can rely on. When responding, please point me to the exact part of the Scala spec. Also, how much of that spec is implemented by the compilers?

One more thing: I’m primarily interested in Scala 3, but Scala 2 comments are also welcome, provided they are clearly marked as such.

TIA,
–Hossein

If, by “metaprogramming”, you mean Scala macros, then in both Scala 2 and 3, all of it happens at compile time. None of it happens at runtime.

If you are including “reflection” in your definition of “metaprogramming”, then that by definition is occurring at runtime (the act of reflecting is the runtime actively examining itself).

Said at a very high level (perhaps to the point of over-simplification), Scala macros are “executed” at compile-time only. And Scala reflection is “evaluated” at run-time only.

I am less certain about the area where a Scala macro could generate compiled code that then depends upon runtime reflection. It seems it would be possible. I would need more experience to understand the limits of that overlap.

So, is it possible to perform type computations at compile time? What about type transformations? What’s the macro, for example, that maps the type Int to the type String?

In Scala 3 you don’t even need a macro for that: Match Types | Scala 3 Language Reference | Scala Documentation
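For concreteness, here is a small sketch (essentially the example from that documentation page, assuming Scala 3): a match type inspects the type on the left of `match` and reduces to the type in the matching case.

```scala
// a minimal match type, assuming Scala 3
type Elem[X] = X match {
  case String => Char
  case Array[t] => t
  case Iterable[t] => t
}

// these lines typecheck only because the compiler reduces the match type
val c: Elem[String] = 'a'     // Elem[String] reduces to Char
val i: Elem[Array[Int]] = 42  // Elem[Array[Int]] reduces to Int
```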

Nice. How do you return a String when, say, a given argument is less than 10, and an Int otherwise?

For some, it might not be very clear how my question above is related to the topic of this thread. Here is how: let’s say we have a function f that does what I requested. For an x whose value is known at compile time, will f(x) be evaluated at compile time?

There are singleton types that allow you to use such information at compile time.
That is, you could “prove” at compile time that you have a specific input number (checked with compile-time matching, for example) which is less than the singleton type for 10.
Depending on that match, you will have a different return type.
Then at any call site where the proof of the input singleton type is available [see note], the compiler guarantees a specific result type.

note: I mean that any value that is, e.g., computed or read from the program arguments would not be available at compile time as a singleton type.
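A quick sketch of what such singleton (literal) types look like, assuming Scala 3:

```scala
// the type of `ten` is the literal singleton type 10, a subtype of Int
val ten: 10 = 10
val asInt: Int = ten

// a value only known at runtime cannot be given a literal singleton type:
// val n: 10 = scala.io.StdIn.readInt()  // does not compile
```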

You can define the following type

import scala.compiletime.ops.int.<

type Foo[A <: Int & Singleton] = (A < 10) match {
  case true => String
  case false => Int
}

Whether you define the method that has return type Foo[A] with code that runs at compile time or at runtime is up to you.
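As a sketch of the compile-time variant (assuming Scala 3; the helper foo and the string "small" are made up for illustration), one can use an inline method so that both the branch selection and the result type are fixed during inlining. The casts are needed because Foo[A] only reduces once A is a known literal:

```scala
import scala.compiletime.ops.int.<

type Foo[A <: Int & Singleton] = (A < 10) match {
  case true => String
  case false => Int
}

// hypothetical helper: the condition is evaluated while inlining,
// so each call site gets the statically known branch and type
inline def foo[A <: Int & Singleton](inline a: A): Foo[A] =
  inline if a < 10 then "small".asInstanceOf[Foo[A]]
  else a.asInstanceOf[Foo[A]]

val s: String = foo(5)   // Foo[5] reduces to String
val n: Int    = foo(55)  // Foo[55] reduces to Int
```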

That’s beautiful. Thanks! :slightly_smiling_face: Now, is Foo[50] then a valid Scala type? If so, is it just an alias for Int? Or does it (type-)reduce to Int? In the latter case, when does that reduction take place? I take it that it’s compile time. But does that happen lazily or greedily?

Did these questions of mine get forgotten over time?

I didn’t answer them because I don’t feel confident enough in my knowledge of this feature. But I think that, for all intents and purposes, you can think of Foo[50] as a type alias for Int and of Foo[5] as a type alias for String. Though I’m not entirely sure what, in the context of the Scala compiler, the difference is between being a type alias of a type and type-reducing to it. I think the compiler tries to be as lazy as possible in dealiasing types, but that behaviour is not strictly specified.

I’m sorry to push you out of your confidence zone then.

I see.

Well, a type alias is usually an entry in a dictionary (type table), where the alias’s entry contains a pointer to the entry of the aliased type. For a type that reduces to another type, however, the type expression is stored instead; that expression will then reduce (partially or fully) at a later point in time:

  • whenever the type is referenced, at compile-time;
  • upon the first time it’s referenced, at compile-time, and memoised thereafter; or
  • one of the above but at runtime.

I need to be absolutely sure about that. In particular, laziness to the degree of leaving dictionary resolutions to the runtime is also an option a language might choose. Haskell does all the reductions at compile-time but still carries the dictionary around at runtime (for some reason I can’t recall now). Where can I get the definitive answer about Scala?

Match types are always reduced at compile-time, never at runtime. They are wholly a compile-time concept.
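One way to observe this from a small Scala 3 file (reusing the Foo definition from earlier in the thread) is to ask the compiler for `=:=` evidence; the summon calls are resolved during type checking, so the program compiles only if the reduction happens then:

```scala
import scala.compiletime.ops.int.<

type Foo[A <: Int & Singleton] = (A < 10) match {
  case true => String
  case false => Int
}

// compiles only if Foo is reduced at compile time
val ev1 = summon[Foo[5] =:= String]
val ev2 = summon[Foo[55] =:= Int]
```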

Thanks, Seth. Pardon me for my taxing questions here. But would you mind pointing me to the part of the spec where this is specified? My reason for wanting that is that I need to know how much of it is guaranteed to the Scala programmer…

… and do you think you could answer my above questions in the same vein, please?

It doesn’t seem to have occurred to the author of Match Types | Scala 3 Language Reference | Scala Documentation to explicitly state this. I assume it didn’t occur to them because the assumption that types are a compile-time concept is fundamental to Scala.

re: your other questions, I don’t know more about the reduction mechanism than is given on the doc page I’ve linked to. What concrete difference would it make to you if a match type reduction is made “greedily” or “lazily”? Can you give an example of what you mean and have you tried that example in the REPL?

I see.

Is that true? Scala expressions have compile-time types and runtime ones.

def f[T](t: T) = t.toString()

The type T above is only the compile-time type of t. I seem to understand t.type is the runtime one.

Who does? Aren’t there any “spec lawyers” on this list whose help we can call on?

This is all about managing compile-time duration. I am co-authoring a paper about C++ (compile-time) metaprogramming, and this will show up in our literature review.

Sure.

type P = (Foo[5], Foo[55])
...
def f1(p: P) = ???
def f2(p: P, i: Int) = ???

P above is equivalent to (String, Int). Is that equivalence established greedily, i.e., right at the first line above? Twice, for its two occurrences on the last two lines? Or once, at the penultimate line, and memoised afterwards, i.e., lazily?

How would I test this in the REPL? :-/ I seem to understand that testing it requires access to the compiler source, which I don’t have. And even if I had it, it would not have helped me with my questions, because the compiler’s implementation details are no guarantee for the Scala programmer; the spec is.

No, that’s incorrect: t.type is the singleton type of t, and singleton types are purely a compile-time thing.

I can only think of two senses in which Scala can be said to have something you might call “runtime types”. One is that the underlying platform may know at runtime what class a value is an instance of. Runtime reflection like this, such as the JVM offers, is a far more restricted concept than types in general. The other is that you can use TypeTag to carry a compile-time type over to runtime. This is supported on the JVM but not on Scala.js or Native. It is not fundamental to the language and is not very commonly used.
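The first sense can be illustrated with getClass, which queries the runtime class of a value (a sketch, assuming the JVM):

```scala
// the platform knows the runtime class of a value...
val x: Any = "hello"
println(x.getClass.getName)  // java.lang.String

// ...but not erased type parameters: the element type Int is gone here
val xs: Any = List(1, 2, 3)
println(xs.getClass.getName)
```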

re: lazy vs greedy reduction, I’m puzzled by your remarks about not being able to test it in the REPL. If the distinction you’re making doesn’t result in user-visible differences in behavior, then you can’t expect it to be covered by the language spec, which doesn’t specify internal implementation details of the compiler.

OK, I confess I might have mixed up terminology here. Let me choose another example:

class B
class D extends B

def f(b: B) = b.toString

f(new D)

Upon the call at the last line, b is an instance of D, even though the method toString that will be called in the body of f is that of B. My rusty Scala tells me that’s because the static type (?) of b is B, but its dynamic type (?) is D. Is that correct at all? Are those the right pieces of Scala terminology?

Now, consider this one:

trait A {type T}
class B extends A {override type T = Int}
class D extends B {override type T = String}

def g(a: A) = new a.T()
...
var x = scala.io.StdIn.readInt()
val y = if (x < 14) g(new B) else g(new D)

The compiler will not know y's exact type until runtime. True, it can always dependently type it at compile-time. But, there still remains type resolution to be done at runtime. Or, am I missing anything?

I never said it’s not visible to the user. I just said the REPL doesn’t make it visible. How does one time compilation in the REPL? And even if one can perform that task, the experiment is only pertinent to the given Scala implementation. That’s not a Scala guarantee; only the spec can give that guarantee.

I’m not sure what the “right” way of talking about this is (e.g. in an academic paper), but sure, if you said that, I would definitely know what you meant.

Personally, I would avoid the word “type” here, on the grounds that it promotes confusion between compile time and runtime. Instead, I would say something like “at runtime, b holds an instance of D”. Regardless, I acknowledge that using the word “type” to refer to runtime values in this way is rampant in everyday programmer-speak.
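For what it’s worth, a variation of the earlier example with toString actually overridden makes the role of the runtime instance observable (a sketch; the class bodies are made up for illustration):

```scala
class B { override def toString = "a B" }
class D extends B { override def toString = "a D" }

def f(b: B) = b.toString  // statically, b has type B

val r = f(new D)  // "a D": dispatch picks the override of the runtime class
```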

Your second example is illegal Scala in at least two different ways:

  • the definition of D is illegal, because concrete type members cannot be overridden
  • the definition of g is illegal, because new doesn’t work on abstract types (SLS 5.1.1)

But perhaps I partially grasp what you’re getting at regardless. Suppose we instead write the following legal code:

trait A {type T}
class B extends A {override type T = Int}
class D extends A {override type T = String}

def g(a: A): a.T = ???
var x = scala.io.StdIn.readInt()
def y = if (x < 14) g(new B) else g(new D)

Here, the type of y can only be Any, since that’s the least upper bound of (new B).T and (new D).T, because the least upper bound of B and D is A and the upper bound of T in A is Any.

I don’t know what you mean by “there still remains type resolution to be done at runtime”. Nothing I would ever call “type resolution” (even informally!) ever occurs at runtime, only at compile time. If you simply mean that because y's type is Any, it could hold anything at runtime, then sure. But I don’t see what new point has been made by bringing the abstract type member into the picture.

re: “lazy” vs “eager” reduction of match types, I feel our communication on that point has broken down; perhaps someone else sees what you’re getting at and can step in here.

Thank you very much for your explanations. Thanks also for correcting my terminology as well as my code. :slight_smile:

Oh, I had completely forgotten about this upper bound business of Scala. That changes the game.

Well, my impression was that the static type here remains a.T. In that case, at runtime, the runtime environment would need to resolve a.T to either Int or String. With Any kicking in, I admit there remains no need for runtime type resolution.

With the upper bounding to Any, nothing. But, I couldn’t recall that before.

Do you have a specific person in mind? Can we not ping them here?