final class A
final class B
val x: A & B = ???
val _: Nothing = x

This does not compile even though A & B is uninhabited. I’m aware that, even according to the new Scala 3 spec, A & B is not a subtype of Nothing, but why not? Is there something specific that would go wrong if A & B were a subtype of Nothing?

Being able to conclude that A & B <: Nothing would enable more powerful reasoning about combinations of intersection and union types, since that reasoning already has special cases for combinations involving Nothing.
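For instance, here is a minimal sketch of those existing special cases (the names are mine, not from the thread): Nothing is already neutral in unions and absorbing in intersections, so admitting A & B <: Nothing for disjoint final classes would let these same rules fire.

```scala
// Existing special cases: Nothing is neutral for | and absorbing for &.
// Both `summon` calls compile only because the compiler already
// simplifies these combinations involving Nothing.
val unionEv = summon[(Int | Nothing) =:= Int]     // Nothing is absorbed in a union
val interEv = summon[(Int & Nothing) =:= Nothing] // and absorbs in an intersection
```

Since `=:=` is also a function from the left type to the right type, the evidence can even be applied to a value.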

Related to this, given you have two disjoint types, why is the intersection not Nothing:

def x: Object & Int = ???
val _: Nothing = x

but this gives

[error] Tree: x
[error] I tried to show that
[error] Object & Int
[error] conforms to
[error] Nothing
[error] but the comparison trace ended with `false`:
[error]
[error] ==> Object & Int <: Nothing
[error] ==> glb(<notype>, <notype>)
[error] <== glb(<notype>, <notype>) = <notype>
[error] ==> Object <: Nothing
[error] <== Object <: Nothing = false
[error] ==> Int <: Nothing
[error] <== Int <: Nothing = false
[error] <== Object & Int <: Nothing = false
[error]
[error] The tests were made under the empty constraint
[error] one error found

Can somebody explain this reasoning? Why conclude Object & Int <: Nothing = false from Object <: Nothing = false and Int <: Nothing = false? The latter two judgments are correct on their own, but how are they combined into the former?

For a type C to be accepted as a supertype-of-or-equal-to the greatest lower bound A & B of two types, the compiler checks that it is a supertype-of-or-equal-to at least one of the two types, either A or B or both. (In lattice theory, being above either of two elements is a sufficient condition for being above their greatest lower bound; the subtype checker uses exactly this sufficient condition as its test.)

In this example, the type C = Nothing is not a supertype-of-or-equal-to either one of the two types. If at least one of those two checks succeeded, the overall check would succeed. So “being bigger than the greatest lower bound of two things” is tested like a logical “or”, if that makes sense. In this case false or false ends up false.
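That “or” check can be seen directly in a small sketch (trait names are made up for illustration):

```scala
trait A; trait B; trait C

// A & B <: A succeeds because A <: A (the left branch of the "or"):
def asA(x: A & B): A = x
// A & B <: B succeeds because B <: B (the right branch):
def asB(x: A & B): B = x
// A & B <: C would fail: neither A <: C nor B <: C holds.
// def asC(x: A & B): C = x   // does not compile

// A value inhabiting A & B, to exercise the widenings:
class AB extends A, B
val ab: A & B = new AB
```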

Yeah, I remember seeing some bugs on GitHub related to this, but can’t remember an exact instance right now… hopefully the compiler people can explain why it causes a problem.

A & B is defined to be the greatest lower bound of the types A and B. This means that it is a supertype of any type C that is a subtype of both A and B.
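Concretely (a sketch with made-up names):

```scala
trait A
trait B
class C extends A, B   // C <: A and C <: B

// Because C is below both A and B, it is below their greatest lower bound:
val c: A & B = new C
```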

The way I think about it is like this: the intersection type A & B is a new type that does not exist in the current type hierarchy in which A and B are. It gets created and inserted into this hierarchy. (This is not true but it helps to think of it this way.)

So, of course it cannot be Nothing. It’s a new type that’s “in between” A and B and “everything below A and B”:

Before

... common supertypes up here...
A B
... common subtypes here...
Nothing is all the way down here at the bottom.
(it's one of the common subtypes)

After

... common supertypes up here...
  A     B
   \   /
    A&B    (above all the common subtypes, but different from them)
... common subtypes here...
Nothing is all the way down here at the bottom.

The conceptually tricky part here is that there is no concrete type among the common subtypes that could possibly be “equal to” A & B. Or maybe there could be, but that would require a lot of additional facts to be true.

Anyway that’s how I think about it, others can correct me if I’m wrong. I’m less familiar with types but more familiar with Lattices from math.

The question is rather, what would we gain if we admitted A & B <:< Nothing? It’s non-trivial in general to compute whether the intersection is inhabited. We do something like this for match type reduction, but that’s a special case. Rolling this into the general type comparison risks making type comparison slower and also risks creating new cyclic reference errors. So without a really good motivation it’s better to not do it.
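The match-type special case mentioned above can be seen in a small sketch (my own example; the disjointness check is what lets the default case fire):

```scala
final class A
final class B

// To reduce IsA[B] past the `case A` branch, the compiler must prove
// that B is provably disjoint from A -- which it is, both being final.
type IsA[T] = T match
  case A => true
  case _ => false

val yes: IsA[A] = true   // IsA[A] reduces to the literal type true
val no: IsA[B] = false   // IsA[B] reduces to false via disjointness
```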

Thank you @spamegg1 and @Odersky for these clarifications. If we see A & B as a type definition instead of a type computation, it makes sense.

However, and this is a bit OT, I think it adds to the reasons why Scala is sometimes seen as a difficult language. If you know the internals of the compiler this may feel completely natural, but for others, who just read the statement with a general assumption about the nature of the & operator, it may be less so.

These are not compiler internals, by any means, and I think your confusion comes from mixing the type and the value level. So let me re-iterate the ground rules:

Types describe sets of values.

An intersection type A & B describes the set of values that belong to both A and B.

If A and B are different final classes there is no value that belongs to both classes.

There is also no value that belongs to the type Nothing. So it would be OK from a set perspective to declare Nothing and A & B to be the same type (but we chose not to, for the reasons I have given).
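To make the set perspective concrete (my own sketch, re-declaring the final classes from the opening example):

```scala
final class A
final class B

// Both signatures typecheck, but no call can ever return normally,
// because neither type has any values:
def noSuchValue: Nothing = throw new IllegalStateException("uninhabited")
def noSuchAB: A & B = throw new IllegalStateException("uninhabited")
```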

You have very similar rules in Java when you define when a cast is valid. It’s just that most Java programmers don’t need or want to know the details. In retrospect, I have the impression this discussion would have been more appropriate for the Scala contributors forum.

Thanks for this lecture, and I thought I got that, but obviously not the third point, since my first answer was incorrect.

However, suppose you did decide to declare A & B to be Nothing (as I intuitively thought was natural somehow, which is the source of the confusion): wouldn’t the original example then compile?

My motivation, though not related to match types, is related to pattern matching and particularly to better type inference and exhaustiveness checking. I’ll work out the motivation more precisely and think about ways we could achieve the same goals without complicating typing in general. I’ll look at the special case for match type reduction for inspiration.