I really like Li Haoyi’s book (https://www.handsonscala.com) for its pragmatism. It emphasizes his amazing suite of tools, but that’s a good thing.
Years ago, I wrote an open-source Jupyter notebook, JustEnoughScalaForSpark (https://github.com/deanwampler/JustEnoughScalaForSpark), a crash course on the most important Scala features and idioms you need to use Spark's Scala APIs, aimed at Spark developers with no Scala experience. You might find it complements what you've already learned.
My book, Programming Scala, 3rd Edition, is one to consider as you go deeper into Scala features and start applying the language to projects. I wrote it with working developers in mind. It's intended to be comprehensive, but it only touches on data topics, for example. It's coming out in a month or two. One of the great things about Scala 3 is the new "optional indentation" syntax, which makes Scala code look a lot more like Python (i.e., almost no curly braces). It's controversial, but I've grown to really like it, and I use it exclusively in the book.
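To give a feel for the style, here's a small toy sketch of my own (not an example from the book) written in the brace-free Scala 3 syntax:

```scala
// Scala 3 optional-indentation ("optional braces") syntax:
// significant indentation delimits blocks, much like Python.
def describe(n: Int): String =
  if n % 2 == 0 then "even"   // `then` replaces the parenthesized condition + braces
  else "odd"

@main def demo(): Unit =
  for i <- 1 to 3 do          // `do` replaces braces around the loop body
    println(s"$i is ${describe(i)}")
```

The same code compiles with the classic curly-brace layout, too; the compiler accepts both, so teams can adopt the new style incrementally.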
That leads me to a final point. As rich as the Scala ecosystem is, it doesn't have the breadth of data-centric libraries and tools that Python has, so be aware of that. Spark is a great flagship tool, and there are some powerful and interesting Scala libraries that are relevant, like Typelevel Spire (https://github.com/typelevel/spire) for numerics, as well as various other Typelevel projects (https://typelevel.org). Finally, many of the popular ML frameworks, like TensorFlow, have Java APIs that are easy to use from Scala.
Good luck!