How to fail if line is reached in scalatest

Thanks for the comment about fail.

As far as matchers are concerned, in the past I've found that they make code extremely difficult to read. In my opinion testing should be easy, to encourage people to write tests. I think many people will disagree with me, but having to learn a new DSL to make testing easy means there's something lacking in the language itself. I should not have to learn a new language to test my code. And I would not be able to convince my students of this requirement either.

Additionally, a library of hundreds of different kinds of matchers, with subtle semantics that differ from the subtle semantics of the Scala language itself, just increases the chances that I'm not really testing what I think I am. Right?

What’s wrong with just asserting what you expect to be true and what you expect to be false?

1 Like

In my opinion, the biggest advantage of ScalaTest's DSL is readability. If the right matchers are used, tests can be read as if they were some kind of specification.

3 Likes

…but you've been asking for a "fail here" extension to the "assert" language yourself…?! :wink: If you're actually happy with #assert() and were only looking to get rid of the IDE warning, just disable it in the inspections configuration (if you don't appreciate this inspection in general) or suppress it in this instance (// noinspection NameBooleanParameters, which can be generated via the "light bulb" popup).

But still, if you are using ScalaTest at all, you are already using a new language, involving tests, suites and, notably, assertions. Matchers are just another part of this language that expands on the expressiveness of assertions (and this part is as optional as any other - you could use suites without matchers or vice versa). Why would suites and assertions be OK, but matchers hint at language insufficiencies?

To me the matchers DSL is just like any other API. It encapsulates functionality so I don't have to reimplement it everywhere I need it. It gives me concise, expressive names and limits the options for unintended wrong usage through the type system, thus actually reducing the risk of doing something different from what I think I am doing. In addition, it gives me more helpful failure messages than plain assertions out of the box.

I can understand if the Scalatest matchers DSL feels somewhat overwhelming and confusing - I’ll always have to look up differences between should be, shouldBe, should equal, etc., even after years of using it. As with any API/DSL, one may even find so many subjective faults that one decides against using it. That’s a very specific and concrete tradeoff, though. (For me personally, the advantages by far outweigh the flaws.) The mere existence of a DSL on top of raw assertions certainly doesn’t seem to indicate a weakness of the language - that’s just programming. :slight_smile:
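To make the failure-message point concrete, here is a minimal sketch (the suite name and example data are mine, not from the thread) contrasting a plain assertion with a matcher:

import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.matchers.should.Matchers

class MessageSpec extends AnyFunSuite with Matchers {
  val xs = List(1, 2, 3)

  test("plain assertion") {
    assert(xs.contains(4)) // fails; the assert macro reports the failing expression and its values
  }

  test("matcher") {
    xs should contain (4)  // fails with a message naming the missing element,
                           // e.g. "List(1, 2, 3) did not contain element 4"
  }
}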

2 Likes

The problem with assert(false, ...) vs assert(condition = false, ...) is clearly an error in the IDE. I can accept that the IDE has bugs. And I accept that to learn/use Scala you have to use an IDE.
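For reference, a sketch of the pattern under discussion (the message string is mine):

// "fail if this line is reached": the boolean literal is what triggers
// IntelliJ's NameBooleanParameters inspection mentioned above
// noinspection NameBooleanParameters
assert(false, "this line should never be reached")
// (ScalaTest's fail("...") avoids the literal altogether, as suggested earlier in the thread)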

Some disagree, of course, and are happy using Scala with vi; I can't imagine.
Of course I'd rather use Emacs, being a 30+ year Emacs user. I tried that route and abandoned it; it is just too difficult. Too many things go wrong, and there's nobody to help negotiate the minefield.

In my opinion a unit test platform should let me assert things about my code, and allow me to run the tests. There are other things I'd like it to do, but that few such systems allow. For example, I'd like to mark certain failures as expected failures. I don't think ScalaTest allows this.

Why do I want expected failures? Because if I am happy with the way my program is running and I discover a bug, without changing the program, then that discovery should not make the program unusable. The bug has been there a long time; I'm just finally discovering it. The bug should eventually be fixed, but the program is not worse off than it was previously, despite my knowing about the bug.

Of course I'd rather use Emacs, being a 30+ year Emacs user. I tried that route and abandoned it; it is just too difficult. Too many things go wrong, and there's nobody to help negotiate the minefield.

Have you looked at Metals? I understand many people are happy with IntelliJ, even with its false errors, because of all its features. But I'm not: the only thing I want from an IDE is syntax highlighting, some minor auto-completion, and accurate error messages. Metals gives me all that, and in multiple text editors.

certain failures as expected failures. I don't think scalatest allows this.

Yes, it does.

I don’t see anything about expected failures in the documentation.
I see that you can assert an exception, but that's not the same thing.
I don’t want to assert that something fails. Rather, I want to mark a test as “Yes, I know this test is failing for now.”
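For contrast, the "assert an exception" feature mentioned above looks roughly like this (the test body is made up); it asserts that a failure must happen, which is different from tolerating a test that is known to fail:

import org.scalatest.funsuite.AnyFunSuite

class ThrowsSpec extends AnyFunSuite {
  test("rejects garbage input") {
    // passes only if the block throws the given exception type
    assertThrows[NumberFormatException] {
      "not a number".toInt
    }
  }
}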

I just looked at the webpage for metals. Do all these completion popups bother you, constantly obscuring your text?

I use VSCode; they only appear when I tell them to appear.

Ah ok, sorry for misunderstanding.
What I do in those cases is mark the test as ignored; it's not the same thing, but it is a way of saying I need to fix this later.
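A minimal sketch of that workaround (FunSuite style; the test body and concat are made up):

import org.scalatest.funsuite.AnyFunSuite

class IgnoredSpec extends AnyFunSuite {
  def concat(a: String, b: String): String = a // known-broken placeholder

  // replacing `test` with `ignore` skips the body and reports the test as ignored
  ignore("concat joins two strings (known broken for now)") {
    assert(concat("a", "b") == "ab")
  }
}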

Do you know pendingUntilFixed? That’s what I have used for code that is currently failing (in my case pending an upstream fix). As soon as the bug is fixed, the test will fail (letting you know that it should now be turned into a live assertion):

I hope the doc link works (in this case the method is on AnyFlatSpec, but that should be irrelevant): ScalaTest Doc 3.2.2 - org.scalatest.flatspec.AnyFlatSpec

(edit: link does not work as expected, you will have to search for the method)

3 Likes

The workaround I use is to just create an incident in the bug database containing a test case. When I finally get around to fixing the bug, I create a failing test at that time and hack the code until the test is passing.

@cestLaVie, can you explain more about the use model? This seems somewhat similar to what I want, but not exactly. Maybe I don’t fully understand.

My use model at the moment is that I have a repository of code which works by some definition of works. I’m going to give this code to my students with a list of enhancement requests. I have written tests for each enhancement. I want the students to make the enhancements to the code until the tests pass.

I also want to use the tests to check my solutions. Their grade for the assignment will partially depend on how many of the enhancement-tests still fail.

The fact that an enhancement has not yet been made does not mean the code is broken.

Can I distinguish these cases using scalatest?

That’s exactly the use case.

def concat(a: String, b: String): String = a

test("concat") {
  pendingUntilFixed {
    concat("a", "b") should be("ab")
  }
}
$ sbt test
[info] PendingTest:
[info] - concat (pending)

Fix implementation:

def concat(a: String, b: String): String = a + b
$ sbt test
[info] PendingTest:
[info] - concat *** FAILED ***
[info]   A block of code that was marked pendingUntilFixed did not throw an exception. Remove "pendingUntilFixed" and the curly braces to eliminate this failure. (PendingTest.scala:11)

Enable test:

test("concat") {
  concat("a", "b") should be("ab")
}
$ sbt test
[info] PendingTest:
[info] - concat

It doesn’t capture NotImplementedError (resulting from ???), though.
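For illustration, a sketch of that caveat, reusing the concat example above:

def concat(a: String, b: String): String = ??? // throws NotImplementedError

test("concat") {
  pendingUntilFixed {
    // NotImplementedError is an Error, not an Exception, so pendingUntilFixed
    // does not swallow it; the test is reported as failed rather than pending
    concat("a", "b") should be("ab")
  }
}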

2 Likes

That means I would give my students a test case file containing tests using pendingUntilFixed, but when I run the tests to check their submitted work, I’d need to use a different test case file, i.e., one with the pendingUntilFixed calls removed?

I.e., I can’t use the same test case file for both?

Whoever fixes the implementation is supposed to remove #pendingUntilFixed(), so yes, if the students are not supposed to tamper with the tests, this is not a fit.

Why not just let those tests fail? This is like TDD: you first write a test suite that should fail, you write the implementation until it is green, and then you refactor the code.

I really do not understand why you want the test to fail but at the same time be considered correct, especially since you want to grade your students according to how many of those tests are still failing after they submit their work.

1 Like

Perhaps my wish is not 100% consistent, I admit. I'm just looking for a solution. I'd like to keep developing the project as I get ready for the class. I want to make sure the code is ready for the students to start looking at. I'll use the working part of the code to illustrate many concepts during the course. Then, as a final project, I'm going to give them a set of enhancement descriptions which they need to implement.

Of course I want to make sure my professor-solutions indeed work. I.e., am I asking the students to do something impossible, or something that will break existing tests? For that I need a set of non-enhancement tests that I maintain as I develop the code, and get ready for the course.

For the professor-solutions, I have a branch in the git repo. To test, I rebase this branch onto master and run the tests. However, without rebasing, the enhancement tests all fail. I would prefer if the enhancement tests didn't fail but just registered as expected failures.

Because of this, every time I push master back to the git repo, I get a failure message from the CI/CD pipeline saying my tests failed. So I don't know whether I really broke something, or whether it was ONLY the expected failures.

ScalaTest supports something called tagging. I haven't used it, so maybe I'm speaking nonsense, but I believe you can mark those tests with a special tag (e.g. for-students) and have a command to run all tests except those; that is the command you would use in your CI and so on. Then, students would just run the normal sbt test, which should run all tests, and see those fail.

1 Like

Maybe someone can chime in? If I tag certain tests with a particular tag, how do I ask my CI/CD pipeline to ignore tests with that tag?

My tests are marked with Tag("enhancement")
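For reference, a minimal sketch of how such a tag can be declared and attached (FunSuite style; the suite name and test bodies are made up, the tag string is the one above):

import org.scalatest.Tag
import org.scalatest.funsuite.AnyFunSuite

object Enhancement extends Tag("enhancement")

class GlobeSpec extends AnyFunSuite {
  test("existing behaviour still works") {
    assert(1 + 1 == 2)
  }

  // extra Tag arguments after the test name attach the tag to this test only
  test("enhancement: concat joins strings", Enhancement) {
    assert(concat("a", "b") == "ab")
  }

  def concat(a: String, b: String): String = a // not yet enhanced
}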

Here is my script which runs the tests

#!/bin/csh -f
# copy stdin to stderr
alias STDERR 'bash -c "cat - 1>&2"'

echo ============== running scala tests ==============

echo pwd= `pwd`
cd ./globe
set tmpfile = sbt.$$.out
# run the suite sequentially and capture the sbt output
sbt -Dsbt.log.noformat=true "set parallelExecution in Test := false" test |& tee $tmpfile
# fail the pipeline if sbt printed any [error] lines
grep -e '^[[]error[]]' $tmpfile
if ($status == 0) then
  echo error running tests | STDERR
  exit 1
endif

I see the post by Alvin Alexander relating to this. He suggests sbt -test-only -- -l TAG-STRING.
I’m not sure what the difference between -test and -test-only is.

BTW the sbt documentation uses testOnly and the Alvin posts suggest test-only. Do both work the same way? Is one deprecated, or a typo?

sbt "testOnly -- -l enhancement"

No idea where the dash notation for sbt tasks comes from (ScalaTest docs use it, too), but I'd suggest always using the latest relevant documentation for each tool involved - in this case sbt for the test[Only] task and ScalaTest for the test runner parameters.
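If the CI pipeline should keep running plain sbt test, another option (a sketch, assuming the standard sbt/ScalaTest integration and the tag string above) is to add the exclusion to the CI build definition:

// build.sbt (CI variant): exclude tests tagged "enhancement" from the plain `test` task
Test / testOptions += Tests.Argument(TestFrameworks.ScalaTest, "-l", "enhancement")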

1 Like