Is there an explanation of the reasoning behind various -Xlint options?

To me, some are arbitrary, some are plain wrong (like unused parameters), and some are baffling. I would like to hear why they are considered bad code - at least in some cases I don’t understand the option at all, and quite possibly I can’t see a particular scenario for a seemingly harmless construct.

Well, all of them make sense to me, and I use sbt-tpolecat to enable all of them and make them errors. In general, all of them help to catch coding errors that may represent the source of a bug or a flaw in the design.

But I doubt the previous statement helped you in any way.

Why don’t you make a list of the lint options you don’t agree with and the reasons why?
From that we may have an interesting discussion.


Ok.
First, those I don’t understand and don’t remember triggering:

  • -Xlint:adapted-args Warn if an argument list is modified to match the receiver.
  • -Xlint:inaccessible Warn about inaccessible types in method signatures.
  • -Xlint:poly-implicit-overload Parameterized overloaded implicit methods are not visible as view bounds.
  • -Xlint:option-implicit Option.apply used implicit view.
  • -Xlint:delayedinit-select Selecting member of DelayedInit. (How is it different from selecting a member of a singleton object?)
  • -Xlint:eta-zero Warn on eta-expansion (rather than auto-application) of zero-ary method.

Disagree:

  • -Xlint:package-object-classes Class or object defined in package object.
    First, implicit classes. Second, any class sharing a symbol with something that couldn’t in Scala 2 be declared anywhere in the top scope. Doesn’t Predef go against it with val Seq = .... & type Seq[E] = ....?

  • -Wunused:implicits warn on unused implicit parameters.
    I very strongly disagree. I use implicit parameters (often just X =:= Y) to guide the type inferencer. This is invaluable, especially with implicit methods, as a failure of type inference means a useless implicit.
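To illustrate the pattern (a minimal sketch; the class and method names are made up, not from my codebase):

```scala
// Hypothetical example: `half` should only be callable when the wrapped
// value is an Int. The evidence `ev` is never referenced at runtime (the
// cast is safe precisely because the evidence exists), so -Wunused:implicits
// flags it even though it is essential for type checking.
class Cell[A](val value: A) {
  def half(implicit ev: A =:= Int): Int = value.asInstanceOf[Int] / 2
}

object CellDemo {
  def main(args: Array[String]): Unit = {
    println(new Cell(10).half) // compiles: the compiler supplies Int =:= Int
    // new Cell("x").half      // would not compile: no String =:= Int exists
  }
}
```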

  • -Wunused:explicits warn on unused explicit parameters.
    I got a warning for an unused parameter of a public method in a non-final class, when in the same file there was a subclass overriding that method. This is what triggered the post: any method which can in theory be overridden anywhere should allow as many unused parameters as I like. Also, I sometimes take a value parameter of a type I expect the caller to have lying around, in order to spare them providing an explicit type parameter list. Also, while a hack, sometimes I take a default parameter on an implicit parameter in order to resolve erasure clashes.

  • -Wunused:imports warn on not referenced imports.
    I have a habit of manually importing all implicits in a separate section under the other imports, with a comment “//here be implicits”. To me, this solves most of the issues people have about their spookiness. I have a modest collection of implicit ‘language extensions’ such as ‘Option.mapOrElse’, or the x.providing(x != 0) constructor for options, etc. I like them ‘always on’, and this is the most convenient way short of having them in the root package object of the application (which doesn’t work in Scala 3, and in Scala 2 provides no hint as to where to look for their definitions). Automatically importing them in every file on creation is the compromise I came up with.
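For reference, a minimal sketch of the kind of extension described above; the `providing` implementation here is a guess at its semantics, not the actual code:

```scala
// Guessed implementation of the `x.providing(condition)` Option constructor
// mentioned above: Some(x) when the condition holds, None otherwise.
object extensions {
  implicit final class ProvidingOps[A](private val self: A) extends AnyVal {
    def providing(condition: Boolean): Option[A] =
      if (condition) Some(self) else None
  }
}

object ProvidingDemo {
  import extensions._ // the kind of always-on import -Wunused:imports may flag

  def main(args: Array[String]): Unit = {
    val x = 0
    println(x.providing(x != 0)) // None
    println(5.providing(5 > 0))  // Some(5)
  }
}
```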

Arbitrary:

  • -Xlint:nullary-unit Warn when nullary methods return Unit.
    I understand it, and can even accept it, although if anything I’d rather have a convention and quick checks distinguishing functions/methods which modify state from those which don’t, because as I understand it, this is about unexpected mutability. def x :Unit is not that special if you can call def x :Int without assigning the value.

  • -Xlint:missing-interpolator A string literal appears to be missing an interpolator id.
    This is certainly very useful - at least if you don’t use an IDE (who doesn’t?). My problem is that in cases where the literal is intended, currently we have only @nowarn to switch it off, which is too strong in the way casting is too strong if you don’t specify the type you cast from, or @nowarn("cat=missing-interpolator"), which I would be annoyed to be forced to write.

  • -Xlint:type-parameter-shadow A local type parameter shadows a type already in scope.
    In fact, I do follow it. The problems: 1) you also want overridden declarations to use the same type parameter names, and the pair easily clashes. 2) if we allow value parameter symbols to shadow value fields, then how are types any different? I have embraced PDTs to the point that I use them as one would values/methods.

  • -Xlint:implicit-not-found Check @implicitNotFound and @implicitAmbiguous messages.
    For existence? I tend to provide them, but I always also include the literal “Couldn’t find implicit MyClass[${Param}]”, because it is much more important and helpful than a message without information about the actual type of the missing implicit.
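A small sketch of that message style (the trait and instance names are illustrative):

```scala
import scala.annotation.implicitNotFound

// The ${A} interpolation makes the error name the concrete missing type,
// e.g. "Couldn't find implicit MyClass[String]" at the failing call site.
@implicitNotFound("Couldn't find implicit MyClass[${A}]")
trait MyClass[A]

object NotFoundDemo {
  implicit val intInstance: MyClass[Int] = new MyClass[Int] {}

  def need[A](implicit ev: MyClass[A]): MyClass[A] = ev

  def main(args: Array[String]): Unit = {
    need[Int]       // compiles: the Int instance is in scope
    // need[String] // error: Couldn't find implicit MyClass[String]
  }
}
```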

  • -Xlint:eta-sam Warn on eta-expansion to meet a Java-defined functional interface that is not explicitly annotated with @FunctionalInterface.
    Why Java and not Scala? Why is eta-expansion bad, but an anonymous class implementing the single method good? My point is that, in my eyes, it doesn’t differ from extending a SAM class in general.
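For context, a sketch of what the flag is about: eta-expanding a Scala method into a Java SAM type. `Runnable` carries `@FunctionalInterface`, so this particular conversion would not warn; an unannotated Java interface would.

```scala
object EtaSamDemo {
  var ticks = 0
  def tick(): Unit = ticks += 1

  def main(args: Array[String]): Unit = {
    // Eta-expansion of `tick` to the Java-defined SAM type Runnable.
    // -Xlint:eta-sam stays quiet here because Runnable is annotated with
    // @FunctionalInterface; for an unannotated interface it would warn.
    val r: Runnable = tick _
    r.run()
    println(ticks) // 1
  }
}
```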

On the other hand: why is there no lint for different parameter names in overridden methods? If someone uses named arguments, this can surely get very confusing.

Quick disclaimer: there is one thing I have learned in my time on the internet, and it is that 95% of the time people won’t change their minds. Thus, I will try to explain why I use those flags, as a way of giving you a rationale for why such flags exist, but I am happy to agree to disagree with you. Anyway, a good thing about those flags is that they are all turned off by default, so you are free to turn on only the ones you like. I don’t think their existence is a bad thing even for the ones I wouldn’t use (like a few that I just learned about thanks to you!)

Don’t understand

  • -Xlint:adapted-args: This one catches the common case of calling a method that accepts a tuple with a single set of parentheses, making it look like there were multiple parameters; e.g.
def foo(t: (Int, String)): Unit = ???

foo(10, "bar") // warn
foo((10, "bar")) // compiles

This is very useful when the method accepting the tuple is generic, like addOne on a Builder[(K, V), Map[K, V]], where the adapted call would give the impression that there is no tuple allocation.
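The builder case can be made concrete (a minimal sketch; the warning-triggering call is left commented out so the snippet compiles cleanly everywhere):

```scala
object AdaptedArgsDemo {
  def main(args: Array[String]): Unit = {
    val b = Map.newBuilder[Int, String]
    b.addOne((1, "one"))    // element type is the tuple (Int, String)
    b.addOne(2 -> "two")    // also an explicit tuple
    // b.addOne(3, "three") // auto-tupled: -Xlint:adapted-args warns that
    //                      // the two arguments are adapted into one tuple
    println(b.result())
  }
}
```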

Another case it warns about is omitting the empty parameter list when the method was defined with one; the only exception is methods defined in Java; e.g.

object Foo {
  def bar(): Unit = ???
  def baz: Int = ???
}

Foo.bar() // valid
Foo.bar // warn, because bar is defined with an empty parameter list
Foo.baz // valid, because baz is defined without a parameter list
Foo.toString // valid, because toString is defined in a Java source

Note that the flag is intended to become standard behavior someday (I believe Scala 3 already does this). Thus, in the future, this will always be an error and there won’t be a way to make the compiler accept it, so the flag is there to help you prepare your code for that moment.
The rationale for the change is to make the language more uniform and to avoid surprises when looking into APIs.

  • -Xlint:inaccessible: This one warns when you use inaccessible types in non-final methods; e.g.
object Foo {
  private[Foo] final case class Bar(x: Int)

  trait Baz {
    def quax: Bar
  }
}

In this case, it is impossible to implement Baz, because nobody outside the Foo object has access to Bar.
The flag makes the compiler warn in this case; without it, you would only discover the error when trying to implement the trait, which is not good if this is part of a library.

  • -Xlint:poly-implicit-overload: Actually, I am not totally sure about this one; it has something to do with overloaded view bounds where one of the overloads is parameterized. View bounds have been deprecated for a while, and I really can’t think of a good use case for overloading one, so it seems that any code that would trigger this warning was already in a real rabbit hole. Since sbt-tpolecat turns it on for me, that is great, but if I managed my scalac flags myself I would probably omit this one, since view bounds are not used anymore.
  • -Xlint:option-implicit: This warns if an implicit conversion is applied before a call to Option.apply. This is dangerous because the conversion may not be prepared to handle null values, and if we are passing some value to Option.apply it probably means that the value may be null, which could result in an NPE being thrown - exactly what we wanted to avoid by using Option.
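A sketch of the hazard (the conversion and types are made up for illustration):

```scala
import scala.language.implicitConversions

object OptionImplicitDemo {
  final case class Wrapped(value: String)
  implicit def wrap(s: String): Wrapped = Wrapped(s)

  def viewBeforeOption: Option[Wrapped] = {
    val s: String = null
    // Because the expected type is Option[Wrapped], the implicit view is
    // applied to `s` *before* Option.apply runs its null check, so the
    // null is smuggled inside Some(Wrapped(null)) instead of becoming None.
    // This is the situation -Xlint:option-implicit warns about.
    Option(s)
  }

  def main(args: Array[String]): Unit =
    println(viewBeforeOption) // Some(Wrapped(null)), not None
}
```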

  • -Xlint:delayedinit-select: This is another one that I would omit if I were managing my scalac options manually, because DelayedInit is deprecated now. Anyway, this warns because if you access a member of an object that inherits DelayedInit, it’s possible that the object hasn’t been initialized yet, and the access won’t initialize it, so you could get an NPE or something similar.
    I actually never understood how the f*** DelayedInit worked, but I would guess this is a common error and one reason for deprecating it.

  • -Xlint:eta-zero: This will warn if you pass a method with an empty parameter list to another method that expects a () => A function, so that it is eta-expanded instead of auto-applied; e.g.

def foo(f: () => Unit): Unit = ???
def baz(): Unit = ???

foo(baz) // warn.
foo(baz _) // compiles.

I am personally against this flag (I actually didn’t know it existed, and it is not part of sbt-tpolecat), because I would expect the first behavior, since I am against auto-application.
Nevertheless, people used to auto-application would probably be surprised, so I guess that is the reason for the flag.

Disagree

  • -Xlint:package-object-classes: package objects are, in general, discouraged because they can have surprising behavior and are not extensible; i.e. if two different JARs define the same package object they will collide.
    However, they may come in handy to share some code inside an internal package, but in that case there is no need to put classes in there: you can just move them to their own files in the same package and they will behave the same, whereas inside the package object they can confuse the compiler (although this is old history; maybe it is fixed by now, but better safe than sorry).

First, implicit classes.

Yeah, some people say they are an exception to this rule. I personally prefer to define my implicit classes inside a normal object and do the import manually in each file that uses them.

Second, any class sharing a symbol with something that couldn’t in Scala 2 be declared anywhere in the top scope.

Sorry, not sure what you mean.

Doesn’t Predef go against it with val Seq = .... & type Seq[E] = .... ?

Hmm, no, that does not define any class; it is just a type alias and an alias to the companion object in a normal val.
Also, Predef is not a package object anyway, is it?
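To make the distinction concrete, a minimal sketch of that Predef-style aliasing (the object name is illustrative):

```scala
// No class is defined inside this object: `Seq` here is only a type alias
// plus a val pointing at the existing companion object, which is the
// pattern Predef uses and which -Xlint:package-object-classes does not
// target.
object aliases {
  type Seq[E] = scala.collection.immutable.Seq[E]
  val Seq = scala.collection.immutable.Seq
}

object AliasDemo {
  def main(args: Array[String]): Unit = {
    val s: aliases.Seq[Int] = aliases.Seq(1, 2, 3)
    println(s.length) // 3
  }
}
```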

  • -Wunused:implicits: Actually, yes, your example is an excellent counterpoint.
    However, in general, an unused implicit is the same as an unused parameter: it probably means a coding error.
    Type proofs are the exception, because they are indeed used… just at compile time; at runtime they are useless.
    Thus, I prefer to deal with these cases manually rather than disabling the flag, but if your codebase is full of proofs you may disagree.
    Final note, related: Proposal: Add void to Predef · Issue #43 · scala/scala-library-next · GitHub

  • -Wunused:explicits:

I got a warning for an unused parameter of a public method in a non-final class.
(…)
This is what triggered the post: any method which can be in theory overriden anywhere, should allow as many unused parameters as I like

Personal and very strong opinion: everything is either abstract or final. Thus, for me, I would never be in your situation. But I understand why in your case you find that annoying.

Also, I sometimes take a value parameter of a type I expect the caller to have lying around, in order to spare them providing an explicit type parameter list

I hope this doesn’t sound offensive, but this is the first time I have seen someone do that. Not to say that your use case is bad or anything, but it is at least unusual (IMHO). So again, it is not a problem for me, but I understand why you find it annoying.

Also, while a hack, sometimes I take a default parameter on an implicit parameter in order to resolve erasure clashes.

Not sure how that is related to this flag; maybe the previous one? Anyway, have you looked at DummyImplicit? It may work in this case. But yeah, I have also used a similar trick sometimes, something like def foo()(implicit ev: Bar = None.orNull) = Option(ev) match ..., but there the value is used, so I guess your trick is different.

Anyway, you may always just disable this flag, configure it not to warn in the places you don’t want, or just use the @unused annotation to silence the compiler for each parameter.
Other than that, I personally believe that any unused parameter is just a coding error, and not removing it may lead to more garbage in other parts of the codebase.
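That per-parameter escape hatch looks like this (the class and names are illustrative):

```scala
import scala.annotation.unused

// @unused (available since Scala 2.13) silences -Wunused for a single
// parameter without turning the flag off globally.
class Handler {
  def onEvent(name: String, @unused payload: Array[Byte]): String =
    s"handled $name"
}

object HandlerDemo {
  def main(args: Array[String]): Unit =
    println(new Handler().onEvent("ping", Array.emptyByteArray)) // handled ping
}
```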

  • -Wunused:imports: well, it is curious that you want to be very explicit about the implicits you import, but then you import all of them instead of à la carte.
    For syntax, most people agree it is easier to just do an import syntax.all._ or something; extension methods are not as spooky as implicit conversions, since the method call is always explicit. Also, if you want all your syntax always on, it is because you think your team knows this syntax well and is capable of always using and understanding it, so I see no reason not to just put it all behind a single import.

Other than your particular case, I think everyone agrees that importing something that is not used just pollutes the namespace, makes the code longer, and can give the impression that a file does something that it doesn’t.

Arbitrary

  • -Xlint:nullary-unit: Sorry, I really did not understand your point, care to elaborate?

Anyway, the idea is that if you have def foo: Unit, it either just returns () or performs some side effect; and, in general, the community has come to the agreement that methods which perform side effects and take no parameters should be defined with an empty parameter list rather than no parameter list at all. In conjunction with no-adapted-args, this means that callers need to provide the extra () at the call site, all as a little ceremony to remember that such a call will probably do something side-effecting. Thus, since a method that returns Unit is either useless or side-effecting, it makes sense to force it to be defined with the empty parameter list ().
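The convention can be shown in a few lines (a minimal sketch with made-up names):

```scala
import scala.collection.mutable.ListBuffer

object NullaryUnitDemo {
  private val log = ListBuffer.empty[String]

  def reset(): Unit = log.clear()        // side-effecting: empty param list
  def entries: List[String] = log.toList // pure accessor: no param list

  // def resetBad: Unit = log.clear()    // -Xlint:nullary-unit would warn

  def main(args: Array[String]): Unit = {
    log += "stale"
    reset()                              // the () signals a side effect
    println(entries) // List()
  }
}
```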

  • -Xlint:missing-interpolator: Yeah, I agree that it is annoying to silence. But, oh well, I don’t think it is a big deal either ¯\_(ツ)_/¯

  • -Xlint:type-parameter-shadow:

you also want overridden declarations to use the same type parameter names, and the pair easily clashes.

Sorry, again I don’t see what you mean, care to provide an example?

if we allow value parameter symbols to shadow value fields, then how are types any different?

Another very personal and too-strong opinion: if you shadow a name, you probably need to think of a better name.

I embraced PDTs so far I think and use them as one would values/methods.

Sorry, I am not sure what PDTs stands for.

  • -Xlint:implicit-not-found: Actually, I have no idea what this flag does, and it is also not part of sbt-tpolecat; maybe it is new?
    (invoking @SethTisue from within the depths of the issue tracker)

  • -Xlint:eta-sam: Yeah, I also find that one weird (for the record, I just learned about it too, and it is also not part of sbt-tpolecat). But I don’t know too much about the Java world; maybe there is a good rationale for it?

:100:


On top of all that, I need to correct my previous message a bit:
“Well all the ones enabled by sbt-tpolecat make all the sense for me”.

Also, I totally agree that the documentation on the flags could be improved a lot. However, it is not that simple, since the whole page is auto-generated and, AFAIK, as of today the generation is broken.

The view bounds syntax is deprecated, but view bounds in the sense of implicit view: A => B parameters are still fine. Actually, I thought there were some ideas floating around of preferring those kinds of “explicit implicits” over regular implicit conversions in Scala 3. Though there you need Conversion[A, B] instead of A => B.

So anyway if you are defining some weird implicit conversions it is nice of the compiler to warn you that you might have stepped into that particular edge case. Though fixing the edge case might have been even nicer.

If I’m not mistaken this is meant to catch unknown type parameters in messages of @implicitNotFound or @implicitAmbiguous annotations. E.g. this should warn about T not referencing anything:

@implicitNotFound("No implicit Foo[$T] found")
trait Foo[A]

Thank you for such a write-up. As I hope it shows, I came here to educate myself rather than educate others - in some easy-to-understand cases, like unused method parameters, I am indeed unlikely to change my mind, but most of the post was me trying to understand things I currently don’t. I think it comes down largely to personality and one’s general position on the order-freedom axis, where I am reasonably far toward the freedom end: I don’t like rules which have valid (in my mind) examples against them. What I think would be a great compromise, though, is being able to add a @nowarn equivalent at the declaration site. For example, I recently refactored a case class into a trait + private implementation (and frankly, ‘no public classes’ seems almost like a rule to me when writing a library). This made pattern matching on the now manually coded unapply trigger warnings in code like:

pairs.map { case Pair(a, b) => a + b } //case class=good, interface + impl = bad.

@Jasper-M I still do not understand poly-implicit-overload; perhaps you could provide an example?

  • -Xlint:inaccessible: actually, I have what I think is a very good use case against it (depending on whether it is triggered always, or only when no implementation of the trait is found). It is emulating a sealed[package] keyword for classes: such a construct (if Bar is used as a parameter, not a return value) guarantees that any concrete class extending Baz must extend one of the classes I provide which implement that method. Now, I declare it protected[Foo] then, but as I have only recently discovered, that alone doesn’t prevent its overriding by classes outside Foo.
    Here it is explained in detail: slang/Sealing.scala at master · noresttherein/slang · GitHub
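If I may sketch the trick with a nested object standing in for the package (the names here are mine, not from the linked file):

```scala
object library {
  // Seal is invisible outside `library`, so only code in here can write the
  // signature of `seal`, effectively sealing Baz to this scope. This is
  // exactly the shape -Xlint:inaccessible flags: a hidden type appearing in
  // the signature of a member of a public, non-final type.
  private[library] final class Seal

  trait Baz {
    protected[library] def seal: Seal
    def describe: String = "a Baz implemented inside library"
  }

  class Allowed extends Baz {
    protected[library] def seal: Seal = new Seal
  }
}

object SealingDemo {
  def main(args: Array[String]): Unit =
    println(new library.Allowed().describe)
}
```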

  • -Xlint:package-object-classes: Two different JARs providing the same package object is for me a huge design error (or I fail to see some sensible use case). I would never put anything in a package which isn’t completely owned by a packaging unit, and as far as I can tell, this is the general practice.
    The counterexamples are becoming obsolete with Scala 3, but I quite often use a pair:

       type MyType = MyGenericType[SomeConcreteType]
       object MyType { /* apply and unapply */ }
    

    or, essentially, some variation of the implicit class use case:

        class :*:[L, R](val left :L, val right :R)
        implicit def :*:[L](left :L) = new extension_:*:(left)
        class extension_:*:[L](private val left :L) {
          def :*:[R](right :R) = new :*:(left, right)
        }
    

    In this pattern, importing :*: imports both the type and the extension method, and I at least find that elegant. A similar trick I occasionally use is an object being an implicit conversion, i.e.

       class BitStream(bytes :Array[Byte])
    
       implicit object BitStream extends (Array[Byte] => BitStreamFactory) {
         def apply(bytes :Array[Byte]) = new BitStreamFactory(bytes)
         //typical companion business
       }
       class BitStreamFactory(private val self :Array[Byte]) {
          def toBitStream = new BitStream(self)
       }
    

    Same idea.

  • -Wunused:explicits: out of curiosity, how do you deal with subclasses requiring more parameters than their superclass to provide a correct implementation?

    1. I go this far on it because I occasionally take a parameter which I know neither the declaring class nor any existing subclass uses, but which quite conceivably might be used in the future, either by a new implementation or after a refactor; at least when I expect the caller to have access to that value.
    2. A closely related case is a family of class hierarchies which all provide conceptually the same operation: I like to standardise the method signature in that case, even if I know I won’t use it in some cases. You can think of it as an equivalent of everything extending Formattable with a single method (or something similar), except that I don’t care about polymorphic usage between separate branches of such a ‘conceptual type hierarchy’. This proved quite useful, because it often turned out that, further down the road, I needed to delegate a call from a class which previously didn’t require parameters to one which did:
           trait Expression { def format(env :Environment) :String }
           trait Declaration { def format(env :Environment) :String }
           trait Keyword { def format() :String }
           //at a later time:
           class PackageProtected(scope :String) extends Keyword { 
                /* how to implement format() if I need to check if scope is a valid symbol? */
           }
      
      In larger projects, even if I am reasonably sure that the parameter won’t ever be needed, a unified signature helps, because I don’t have to check the parameter list before calling; I know that everything accepts the same parameter lists (or at least shares one parameter list).
    3. I am not offended in the slightest, even if I am alone in doing that, but when creating an API that will be used a lot, I think a lot about type inference and how to reduce the signature at the call site:
         class Expression[E <: Tuple] { 
       def adapt[E1 <: Tuple](implicit inclusion :ContainsAllElements[E, E1]) :Expression[E1]
       def adaptLike[E1 <: Tuple](likeThisOne :Expression[E1])(implicit inclusion :ContainsAllElements[E, E1]) :Expression[E1]
          }
      
      If E and E1 are huge types encoding at the type level all values required to evaluate an expression, I don’t want to provide that type as a type argument. More than likely, I already have another expression of that type, and providing its value is much more succinct.
    4. As for the erasure example:
         trait Sum {
           def apply(ints :Seq[Int], iWantInts :Int = 0) :Int
           def apply(doubles :Seq[Double]) :Double
         }
       
      
      Not something I would expose externally, but internally it can be useful if the types in question are domain-specific.
  • -Wunused:imports: the most common syntax I do indeed often import with a wildcard, as you have shown. Sometimes, however, there is more than one ‘syntax’ scope (for example, from separate libraries). It’s not a point I am going to defend very hard, like the unused parameter thing; I am more playing devil’s advocate, because Scala is such a flexible language that it’s hard to come up with strict ‘common sense’ rules which wouldn’t, in some case, contradict another ‘common sense’ rule (that applies more to code formatting, but still).

  • -Xlint:nullary-unit Warn when nullary methods return Unit. I actually follow this one; my remark was that I’d rather have a stronger convention: instead of requiring that side-effecting, parameterless, Unit-returning methods accept an empty parameter list, I’d rather have a convention (or a language feature, like const methods in C++) that applies to any method with side effects, even those with parameters, like Ruby’s “!” suffix. Not going to happen; I just think it is a bit inconsistent and may provide a false sense of security (for example, a custom Builder class may declare result :T without a parameter list, producing no warnings, while the caller might assume that internal data is not cleared).

  • -Xlint:type-parameter-shadow:
    If you completely avoid local method values and parameters shadowing any symbol in any of the enclosing scopes, then that is certainly commendable, but somewhat impractical. Even in the simplest case, one often uses simple nouns, often decapitalized local class names, for class properties. This means that any local values in local scopes inside methods have to have longer, qualified names, or the opposite, abbreviated ones.

        class OneThingOrAnother(val name :String) {
           def rename(newName :String) = new OneThingOrAnother(newName)
           class Part(val name :String) //shadowing!
        }
    

    This is a reductio ad absurdum of my point, but it already illustrates the complexity and compromises that have to be made to attain it. Both rename and Part would have to change their names to something else. In the case of rename this is OK, although quite redundant; the caller would guess it’s a new name from the context. But if the method had more parameters, or the symbol were longer than ‘name’, then this policy extends method signatures: increasing readability in the micro scope can reduce it in a wider scope.

       trait GenericFactory[CC[_]] { def newBuilder[E] :Builder[E, CC[E]] }
       trait SpecificFactory[E, C] { def apply(elems :E*) :C }
       trait Factory[E, CC[_]] extends SpecificFactory[E, CC[E]] with GenericFactory[CC] {
          override def newBuilder[WhatDoWeCallThis_?] = ???
       }
    

    Not impossible to avoid, but if you happen to have quite generic types/methods, and I type everything much more strongly than most people do, then it is a daily nuisance. In order both to keep the parameter names of newBuilder and SpecificFactory unchanged and avoid the shadowing, we have to go back to GenericFactory, which often was created earlier and already has other implementations… As I said, I avoid it, if only to quiet the IDE flagging it.

I think (hope) that we agree that the rules catch potential errors, and that they also reject some valid usage. So the judgement comes down to the ratio of those errors to exceptions, and that can vary quite considerably depending on code style. For example, preferring deep, narrow naming scopes and inner classes to a flat, wide file/naming structure in the project severely affects the problem of shadowing, and I, believing in putting everything in as narrow a scope as possible, end up with deeper trees and more definitions in companion objects than people coming freshly out of Java with a class-per-file policy. Then of course there is the attitude to both @nowarn (I outlined mine talking about string interpolation) and the general preference for the strictness of enforced rules, which is very personal: I don’t blame the compiler when it doesn’t catch my error, but I get annoyed when it flags something I consciously consider not only correct, but valid. Even reading the recent => T and () => T discussion, it is clear that there are camps with opposite stances on the desirability of explicitness, and the disputants are all from the same, very narrow community.

Which warning did it trigger?

  • -Xlint:inaccessible: Sorry, I did not understand your use case.

  • -Xlint:package-object-classes:

Two different JARs providing the same package object is for me a huge design error

Sure, I agree but that doesn’t mean people won’t do it.

Anyway, you can just move the classes/objects to their respective files and keep the aliases in the package object.
You may even do this to keep all the logic together.

// file foo/something.scala
package foo

final class :*:[L, R](val left :L, val right :R)

final class extension_:*:[L](private val left :L) extends AnyVal {
  def :*:[R](right :R) = new :*:(left, right)
}

trait syntax_:*: {
  implicit final def :*:[L](left :L): extension_:*:[L] =
    new extension_:*:(left)
}

// file: foo/package.scala
package object foo extends syntax_:*: {
}

Although I believe there are also some problems with package object inheritance, so I would just create another object syntax inside the package object.

I actually don’t know very well why package objects are discouraged; I know a lot of very experienced developers who prefer to avoid them because of bad experiences in the past, so I learned to stay away from them from the very beginning.

  • -Wunused:explicits:

out of curiosity, how do you deal with subclasses requiring more parameters than their superclass to provide a correct implementation?

I haven’t really been in this situation, because, again, I never override, I only implement. So if I find a subclass that needs more parameters, I either see if that subclass could receive them in the constructor (this is what I do 90% of the time), or otherwise I edit the interface to include that parameter. In the latter case, I won’t get a warning, because the interface is abstract and the subclasses are overriding a method; in those cases you never get an unused warning.

I am so far on this, because I occasionally take a parameter I know neither the declaring class nor any existing subclass uses, but it quite conceivably might in the future - either a new implementation, or a refactor; at least when I expect the caller to have access to that value.

I would say that you are doing a premature optimization.

A closely related case is a family of class hierarchies which all provide conceptually the same operation: I like to standardise the method signature in that case, even if I know I won’t use it

Again, I think you are just doing a premature optimization.
If changing from one class to another should be straightforward, then I would just make all of them extend an interface; that way the ones that don’t need the parameter won’t receive a warning. If they are just similar but should not be part of the same hierarchy, then I don’t think it is important to make the change from one to another seamless.

If E and E1 are huge types encoding on type level all values required to evaluate an expression, I don’t want to provide that type as a type argument.

I think this is a reasonable use for the @unused annotation.

As for the erasure example:

You should do this instead:

trait Sum {
  def apply(ints :Seq[Int]) :Int
  def apply(doubles :Seq[Double])(implicit ev: DummyImplicit) :Double
} 
  • -Wunused:imports: Sorry, I am not sure what your point was.

  • -Xlint:nullary-unit: Sorry, again I am not sure what your point was. Isn’t the presence of () that convention? Or do you mean that all side-effecting methods should have another convention? If so, the closest thing is using an effect system like cats-effect and wrapping everything in IO.

  • -Xlint:type-parameter-shadow:

This is a reductio ad absurdum of my point, but it already illustrates the complexity and compromises that have to be made to attain it.

I understand, and being honest, I am not that strict about this and will probably let it pass a time or two; but in general, I would use a different name for both to make sure I use the correct one.

override def newBuilder[WhatDoWeCallThis_?] = ???

I would call it EE or something, since it is a different E than the one of the trait.
I don’t think using a different name than at the declaration site is that bad; those names are usually meaningless anyway.

I don’t blame the compiler it didn’t catch my error, but I get annoyed when it flags something I consciously consider not only correct, but valid.

I think this is our main disagreement. For me, the idea of using a strong, statically typed language is that it catches errors. I prefer to be explicit when the compiler is confused rather than have it be relaxed and then hit a runtime error.
Of course, everyone would prefer a compiler that is perfect: strict, yet never confused. But we have to accept that the complexity of the Scala language is too big for that to always be the case.

Now, I have a bias: I rarely get the compiler confused, so I only see the good things; whereas it seems your codebase has given you the opposite experience.

I got this from a test case for the compiler.

implicit def imp1[T](x: List[T]): Map[T, T] = Map()
implicit def imp1[T](x: Set[T]): Map[T, T] = Map()

def f[T](x: T)(implicit ev: T => Map[Int, Int]): Double = 1.0d

f(List(1)) // error: implicit view not found

So in this case you get that warning to inform you that your code may not work as you intended.
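For contrast, a sketch of the usual fix (the names `listToMap` and `setToMap` are my own): once the two implicit conversions are no longer overloaded, the compiler can again use them as implicit views.

```scala
import scala.language.implicitConversions

// Distinct names instead of overloading the implicit def: each conversion
// is now individually eligible as an implicit view.
implicit def listToMap[T](x: List[T]): Map[T, T] = Map()
implicit def setToMap[T](x: Set[T]): Map[T, T] = Map()

def f[T](x: T)(implicit ev: T => Map[Int, Int]): Double = 1.0d

f(List(1))  // now compiles: listToMap is found as the view
```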

1 Like

This is why Xlint warnings are not enabled by default: you are free to choose to receive warnings for things you believe you should be warned against, and not for things you believe are not a problem. You may even enable an Xlint option for something you believe is usually, but not always, a problem, and suppress the individual cases where the warning is spurious with @nowarn.
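As an illustration of that suppression mechanism (the method and its parameters are made up), @nowarn from Scala 2.13.2+ accepts a category filter, so one deliberate case can be silenced while the lint stays enabled project-wide:

```scala
import scala.annotation.nowarn

// Keep -Wunused:params on globally, but silence this one intentional case:
// the context parameter is required by the call site's shape, not used here.
@nowarn("cat=unused-params")
def callback(event: String, unusedContext: String): String = event

callback("click", "ctx")  // "click"
```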

In an ideal world, every compiler warning and Xlint warning should document (somewhere, be it in the static documentation or in the dynamic output)

  • exactly what the things are that trigger the warning
  • why they are warnings rather than errors
  • why they are not enabled by default (for xlint warnings). This necessarily means that there must be some valid code where the warning triggers.
  • what you could do instead

The extended warning messages under -Explain in Scala 3 take a big step towards this, but I suspect further contributions in Scala 2 will be gleefully accepted. Feel welcome to expand on them!

1 Like

Well, this post came from me trying to find a suitable set of Xlint warnings I agree with, after all. I am not waging a war against them :slight_smile: The -Explain flag is of course nice, but it will only come into use if you enable a specific Xlint option and then trigger it, while I hoped to look through everything, learn something in the process, and decide on a subset of options in one go, rather than switching everything on and then disabling options one by one as I found them annoying.

@Jasper-M At the risk of making a fool of myself: why, in the given parameterized case, is the implicit method for List[T] not picked up? Does it have to do with Scala’s somewhat unfortunate overloading rules? I actually never overload implicit methods, just to be sure, because unlike anything else this is a costless limitation.

  • -Xlint:inaccessible We have a sealed keyword for traits and classes, but it is limited to one file. My files for roots of class hierarchies tend to get huge anyway due to scaladocs, so I wanted to achieve the same effect, but restricted to either my compilation unit (i.e. nothing outside of my jar) or simply a given package, so I can put those implementations in separate files.

  • -Wunused:explicits - this is not optimisation. This is a future proof design, and this strategy has proved beneficial many times for me, especially in the unified signature case. I don’t think I ever once in my life found myself in a situation where an unused parameter was a mistake, oversight or bug. Yes, splitting everything into completely abstract interfaces and concrete implementations addresses this issue, but there is a reason why traits and even Java interfaces can now have default implementations. It is just convenient. And method parameters cannot be replaced with constructor parameters, because the function of methods is to be called with different arguments.

  • Erasure example. Well, that will trigger a -Wunused:implicits warning, so it’s not really different.

Type system and compiler warnings are two different things to me. Yes, many warning rules are very useful, but the nature of warnings, as martinijhoekstra said, is that there are valid use cases. In my very personal viewpoint, warnings should be reserved for:

  1. use cases which will be disallowed in a future version;
  2. cases which are an error 99% of the time, like not modifying the loop variable inside the loop, a method doing nothing but calling itself, etc. I actually think IntelliJ does a very good job here; I have switched off maybe three rules out of dozens, and that includes one which simply didn’t work as intended.
    Because if the compiler thinks that something as simple as the rules above is very likely an error, then the language is flawed in that area and, ideally, should be changed to avoid the trap, perhaps by adding a safer method/syntax in place of the old one. Because ultimately, yes, code producing warnings is bad - at least once merged into a public branch in a VCS.
  • -Xlint:inaccessible: The code shouldn’t produce the warning there; if it does, you have probably found a bug and will want to report it. - BTW, you may be interested in this.

  • -Wunused:explicits:

this is not optimisation. This is a future proof design

Again, I am not saying you are doing something wrong, I just think you are thinking too much about the future. A good design is not the one that works for all future use cases; it is the one that is easy to change in the future.

But if this strategy works for you, then go ahead, just disable this warning. You even mentioned that you never found an unused parameter to be an error or a bug; I disagree, but those are just personal experiences / feelings.

Yes, splitting everything into completely abstract interfaces and concrete implementations addresses this issue, but there is a reason why traits and even Java interfaces can now have default implementations.

Again, I personally never override an implementation; they are either abstract or final. But I understand that is something very few people would agree with. In that case, though, you only have the unused warnings in one place, and you can easily silence them using the @unused annotation.

And method parameters cannot be replaced with constructor parameters, because the function of methods is to be called for different arguments.

Sorry for not being clear, my point is that most of the time I needed an extra parameter in a subclass, I found it easier / better to pass that in the constructor of such subclass. It may be just my bias to design.

  • Erasure example. Well, that will trigger a -Wunused:implicits warning, so it’s not really different.

I believe (but could be wrong) that the compiler doesn’t emit unused warnings for the DummyImplicit; otherwise, again, a single @unused annotation doesn’t feel too bad.
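For completeness, a sketch of what that annotation would look like if the warning did fire (assuming, as above, that @unused is accepted on implicit parameters):

```scala
import scala.annotation.unused

// The erasure-disambiguation trait again, with the DummyImplicit parameter
// explicitly marked @unused at its single point of declaration.
trait Sum {
  def apply(ints: Seq[Int]): Int
  def apply(doubles: Seq[Double])(implicit @unused ev: DummyImplicit): Double
}

object Adder extends Sum {
  def apply(ints: Seq[Int]): Int = ints.sum
  def apply(doubles: Seq[Double])(implicit ev: DummyImplicit): Double = doubles.sum
}

Adder(Seq(1.5, 2.5))  // Double overload: 4.0
```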

But, I don’t want to keep the cycle, happy to agree to disagree in this one.

I hope you can find the set of rules that work the best for you and your coding style. I am personally very happy with sbt-tpolecat, since for me all the things it flags are 99% of the time potential bugs :slight_smile:

I can’t intuitively tell what the problem here is, and to be honest the combination of overloading rules and implicit resolution strikes much fear into my heart so I would prefer not to go and find out.
And indeed overloading an implicit def seems close to completely pointless so this shouldn’t be a huge issue.

1 Like

… and -Xlint:missing-interpolator just triggered a warning on an @implicitNotFound message. Another option bites the dust.
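A sketch of that situation (Show and its instance are made-up names): the ${A} in the message is a placeholder the compiler itself expands when reporting a missing implicit, but -Xlint:missing-interpolator only sees a plain string literal containing a $ and warns anyway.

```scala
import scala.annotation.implicitNotFound

// ${A} below is substituted by the compiler at error-reporting time;
// the string is intentionally *not* an s-interpolator, which is exactly
// what trips -Xlint:missing-interpolator.
@implicitNotFound("No Show[${A}] instance in scope")
trait Show[A] { def show(a: A): String }

implicit val showInt: Show[Int] =
  new Show[Int] { def show(a: Int): String = a.toString }

implicitly[Show[Int]].show(42)  // "42"
// implicitly[Show[Boolean]]    // would fail: "No Show[Boolean] instance in scope"
```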