How to test your ZIO code with ScalaMock?

Consider the following simplified example of password-based user authentication:

import zio.*

enum UserStatus:
  case Normal, Blocked

enum FailedAuthResult:
  case UserNotFound, UserNotAllowed, WrongPassword

case class User(id: Long, status: UserStatus)

trait UserService:
  def findUser(userId: Long): UIO[Option[User]]

trait PasswordService:
  def checkPassword(id: Long, password: String): UIO[Boolean]

class UserAuthService(
  userService: UserService,
  passwordService: PasswordService
):
  def authorize(id: Long, password: String): IO[FailedAuthResult, Unit] =
    userService.findUser(id).flatMap:
      case None =>
        ZIO.fail(FailedAuthResult.UserNotFound)

      case Some(user) if user.status == UserStatus.Blocked =>
        ZIO.fail(FailedAuthResult.UserNotAllowed)

      case Some(user) =>
        passwordService.checkPassword(id, password)
          .filterOrFail(identity)(FailedAuthResult.WrongPassword)
          .unit

Let’s start with a happy-path example:

import zio.test.*
import org.scalamock.stubs.*

object ZIOUserAuthServiceSpec extends ZIOSpecDefault, ZIOStubs:

  val unknownUserId = 0
  val user = User(1, UserStatus.Normal)
  val blockedUser = User(2, UserStatus.Blocked)
  val validPassword = "valid"
  val invalidPassword = "invalid"

  val spec =
    suite("UserAuthService")(
      test("successful auth") {
        val userService = stub[UserService]
        val passwordService = stub[PasswordService]
        val userAuthService = UserAuthService(userService, passwordService)

        for
          _ <- userService.findUser.returnsZIO(_ => ZIO.some(user))
          _ <- passwordService.checkPassword.returnsZIO(_ => ZIO.succeed(true))
          result <- userAuthService.authorize(user.id, validPassword).exit
        yield assertTrue(
          result == Exit.unit,
          passwordService.checkPassword.times == 1
        )
      }
    )

Quite simple, but we need four test cases:

  1. success
  2. user not found
  3. user blocked
  4. password wrong

Let’s write a more abstract test case:

case class Verify(
  passwordCheckedTimes: Option[Int]
)

def testCase(
    description: String,
    id: Long,
    password: String,
    expectedResult: Exit[FailedAuthResult, Unit],
    verify: Verify
  ) = test(description) {
    val userService = stub[UserService]
    val passwordService = stub[PasswordService]
    val userAuthService = UserAuthService(userService, passwordService)
    for
      _ <- userService.findUser.returnsZIO:
        case user.id => ZIO.some(user)
        case blockedUser.id => ZIO.some(blockedUser)
        case _ => ZIO.none

      _ <- passwordService.checkPassword.returnsZIO:
        // note: `password` here is a fresh pattern variable shadowing the
        // testCase parameter; it binds whatever password checkPassword receives
        case (_, password) => ZIO.succeed(password == validPassword)

      result <- userAuthService.authorize(id, password).exit

    yield assertTrue(
      result == expectedResult,
      verify.passwordCheckedTimes.contains(passwordService.checkPassword.times)
    )

Now we can use it like this:

val spec =
  suite("UserAuthService")(
    testCase(
      description = "error if user not found",
      id = unknownUserId,
      password = validPassword,
      expectedResult = Exit.fail(FailedAuthResult.UserNotFound),
      verify = Verify(passwordCheckedTimes = Some(0))
    ),
    testCase(
      description = "error if user is blocked",
      id = blockedUser.id,
      password = validPassword,
      expectedResult = Exit.fail(FailedAuthResult.UserNotAllowed),
      verify = Verify(passwordCheckedTimes = Some(0))
    ),
    testCase(
      description = "error if password is invalid",
      id = user.id,
      password = invalidPassword,
      expectedResult = Exit.fail(FailedAuthResult.WrongPassword),
      verify = Verify(passwordCheckedTimes = Some(1))
    ),
    testCase(
      description = "password valid",
      id = user.id,
      password = validPassword,
      expectedResult = Exit.unit,
      verify = Verify(passwordCheckedTimes = Some(1))
    )
  )

Code for this example can be found here.

If you prefer cats-effect IO, you can find an example of how to test your cats-effect IO code with ScalaMock here. It is almost the same.

Do you like this approach or not?

This doesn’t feel specific to either ZIO/cats-effect or mocking - it rather seems to be about the generic question whether we should apply the same principles (DRY,…) and refactoring strategies to test cases as to production code. Is this interpretation correct?

I often want to use test cases as a driver for the API (TDD), and I want tests to serve as documentation for API usage, so I may want to have at least one test case that goes through the raw API motions, not obscured by any clever redundancy avoidance tactics that only apply to the test setup. OTOH, I don’t want my test suites to decay into copy/paste excesses.

One compromise strategy might be to have one clear, straightforward test case for the simplest happy path scenario (and perhaps one for each high level failure mode), and parametrized test case “factories” for input and failure case variants.


+1 for that recommendation.

As an example, here’s a frightfully complex (but extremely effective) parameterised test for a merge algorithm. It takes a long time to run, but has found all kinds of bugs in said algorithm, both during initial development and subsequent refactoring / capability extension.

The thing is, the shrunk failing test cases are instructive in their own right, so I wrote lots of small tests by hand to replicate those test cases once I’d realised what was going on when debugging them. Yes, I did the TDD thing and thought about various scenarios upfront prior to working on the merge algorithm, but it turned out there were a lot of unexpected possibilities in the test case space; there’s no way I would have anticipated them all, even though I thought very hard about it.

On the other hand, just relying on the parameterised testing, while fantastic for finding bugs, leads to a very opaque TDD experience. Sometimes you can write a pithy ‘laws’ style test, but not always - the linked example demonstrates this all too well.

So mixing the two approaches works nicely.


Since you mention laws and shrinking… Property-based testing (e.g. with ScalaCheck) gives rise to similar considerations. Even if I could fully spec the intended behavior via laws (I never do), I’d still want to have at least one “API showcase” test case. And when the property-based tests surface a bug, I might want to turn this specific scenario into a dedicated, hardcoded test case, as well, for documentation and deterministic regression safety, while keeping the “laws” tests as is.
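That workflow - a “law” checked over generated inputs, plus a counterexample pinned as a deterministic regression test - can be sketched even without a testing library. Everything below (`normalizeSpaces`, the idempotence law, the hand-rolled generation loop) is a made-up stand-in; a real project would use ScalaCheck’s `Gen`/`Prop` and get shrinking for free:

```scala
import scala.util.Random

// Hypothetical unit under test.
def normalizeSpaces(s: String): String =
  s.trim.split("\\s+").filter(_.nonEmpty).mkString(" ")

// "Law": normalizing twice is the same as normalizing once (idempotence).
def idempotenceHolds(s: String): Boolean =
  normalizeSpaces(normalizeSpaces(s)) == normalizeSpaces(s)

@main def lawsDemo(): Unit =
  // Property-style check over generated inputs (poor man's ScalaCheck).
  val rnd = new Random(42)
  val samples = Seq.fill(1000)(rnd.alphanumeric.take(rnd.nextInt(20)).mkString(" "))
  assert(samples.forall(idempotenceHolds))

  // A scenario once surfaced by generation gets pinned as a hardcoded,
  // deterministic regression test with a readable expectation:
  assert(normalizeSpaces("  a \t b  ") == "a b") // collapses mixed whitespace
```

The pinned case keeps failing loudly even if the generator’s seed or distribution changes later.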


Yes - to be clear, my lawyer has advised me to clarify that for all intents and purposes, parameterised testing == property based testing :woman_judge:; we’re talking about the same thing here.

You can (and I sometimes do) write a parameterised test that is a smoke test, but in general they end up testing properties / axioms / contracts / laws whatever you call them.

Americium focuses on the test case generation part and leaves the core test style down to the user, whereas ScalaCheck out of the box also has something to say about how tests are written, although there is the ScalaTest integration route…

You could plug ScalaMock into a parameterised test, for instance (and I have done that sort of thing using Mockito with both ScalaCheck + ScalaTest and Americium + JUnit).
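That combination - collaborator stubs exercised by generated inputs - might be sketched with a hand-rolled recording stub standing in for a ScalaMock one; the `Mailer` trait and every name below are made up for illustration:

```scala
import scala.collection.mutable.ListBuffer
import scala.util.Random

// Hypothetical collaborator we would normally stub[Mailer] with ScalaMock.
trait Mailer:
  def send(to: String): Boolean

@main def stubDemo(): Unit =
  // Hand-rolled stub that records every call, like a ScalaMock stub's
  // call log would.
  val sent = ListBuffer.empty[String]
  val mailer = new Mailer:
    def send(to: String): Boolean = { sent += to; true }

  // Generated inputs instead of a hardcoded table.
  val rnd = new Random(1)
  val addresses = Seq.fill(50)(s"user${rnd.nextInt(1000)}@example.com")
  addresses.foreach(mailer.send)

  // Verify interactions over the whole generated batch.
  assert(sent.size == 50)
  assert(sent.toSeq == addresses)
```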

That also reminds me, I haven’t plugged Americium yet this year (apart from the link to the test that uses it above), so let me correct that glaring omission with this post. Sorry for hijacking the discussion. :grin:

Ah, ok. To me these are different (although related/overlapping) concepts. A parameterized test is a test that, well, takes parameters. :slightly_smiling_face: The argument values usually are hardcoded, the test case may not be applicable to all possible (combinations of) argument values for the given parameter types, and the parameters may include variable expectations in addition to input values. It’s an abstraction that may just naturally arise from using general refactoring principles on a test suite as it grows. #testCase() in @goshacodes’s example is representative of this flavor.

A property-based test is a test taking parameters that must succeed for all possible input combinations, thus representing a “law” for the unit under test, which usually is intended to be fed with generated values. To me the concept gives rise to a somewhat different angle and mindset that complements “example-based” tests - and that’s the important distinction to me.
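Under that terminology, the distinction might be sketched like this, with a made-up `clamp` function as the unit under test: the first loop is a parameterized test over a hardcoded table of inputs and per-row expectations, the second a property that must hold for all generated inputs:

```scala
import scala.util.Random

// Hypothetical unit under test.
def clamp(x: Int, lo: Int, hi: Int): Int = math.max(lo, math.min(hi, x))

@main def flavorsDemo(): Unit =
  // Parameterized test: fixed rows of (input, ..., expected), each row a
  // curated example - like #testCase() in the opening post.
  val table = Seq((5, 0, 10, 5), (-1, 0, 10, 0), (99, 0, 10, 10))
  for (x, lo, hi, expected) <- table do
    assert(clamp(x, lo, hi) == expected)

  // Property-based test: a law checked against generated values, with no
  // per-input expectation - only an invariant that must always hold.
  val rnd = new Random(0)
  for _ <- 1 to 1000 do
    val (x, lo, hi) = (rnd.nextInt(), rnd.nextInt(100), rnd.nextInt(100) + 100)
    val r = clamp(x, lo, hi)
    assert(lo <= r && r <= hi) // result always within bounds
```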

But, yeah, as usual, there’s no authoritative definition of these terms, and e.g. ScalaTest files table-driven tests under “property-based testing”. :person_shrugging: So these terms can be blurry, and it’s certainly good to have a lawyer that stays alert in their presence. :slightly_smiling_face:
