Deprecated range syntax

method to in trait FractionalProxy is deprecated (since 2.12.6): use BigDecimal range instead
[warn] for (x <- 1.0 to 10.0 by 0.01) {

What is the best alternative here? I can’t seem to find what I need in BigDecimal. I also tried Vector.range, but it doesn’t seem to work the same (it omits the last number). Thanks.

https://www.scala-lang.org/api/2.12.5/scala/collection/immutable/Range$$BigDecimal$.html may help.

That said, it does sound like that deprecation warning could have its phrasing brushed up.

Welcome to Scala 2.12.6 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_172).
Type in expressions for evaluation. Or try :help.

scala> BigDecimal("1.0") to 10 by 0.01
res0: scala.collection.immutable.NumericRange.Inclusive[scala.math.BigDecimal] = NumericRange 1.0 to 10 by 0.01

scala> res0.take(5).toList
res1: List[scala.math.BigDecimal] = List(1.0, 1.01, 1.02, 1.03, 1.04)

scala> res0.last
res2: scala.math.BigDecimal = 10.00

In other words, there is no best alternative.

Any time I want to generate some doubles for some benign purpose, not to calibrate nuclear weaponry, I’ll have to remember the syntax. It also puts me off whatever is concocted for BD literals, because I’ll have to remember that too.

The argument that folks might attempt to do computation with doubles before asking for a range doesn’t hold water in the age of literal types and macros and -Xlint.

Just because my car didn’t come with “Honda Sense” doesn’t mean I should have to walk.

I think it’s too bad such a handy mechanism was deprecated hastily – with prelim work on strawman and then very fast merges on both master branches, the sort of speed reserved for terrible regressions.

All we have now are a bunch of excuses for why I can’t say .1 to 1 by .1. Instead of Range.Double(0.1, 1, .1).toList, they make me write Range.BigDecimal(0.1, 1, .1).map(_.toDouble).toList.

The strawman thread was pretty good, with some math and some puns. But the wrong conclusion was drawn.

This was a chance to demonstrate that all the bells and whistles could satisfy a pretty simple use case.

Maybe after they intrinsify tap, they’ll give non-integer step range the same fancy treatment.

I appreciate that Seth resisted in the name of the common coder until, after about a day and a half, he surrendered to the onslaught. I bet if Propensive had been around, they could have had themselves an Alamo.


I share some of the sentiment. It is baffling to a lesser mortal like me why we now need to use BigDecimal when Float and Double would suffice. Now the code I use to generate simple distributions needs some complexifying. 8-(

Any way we can deprecate the deprecate? :wink:

Ironically, the specific example 1.0 to 10.0 by 0.01 was a motivating example for the deprecation: for instance, because (0.0 to 10.0 by 0.01).toList.last != (0.0 to 10.0 by 0.01).last

I hope it’s obvious that it should work like the Range.Double example.

I’m returning to the angry Java programmer thread now.

What would that simple benign use case be?

If you repeatedly add 0.01 to 1.0, Double will most likely not arrive at 10.0, but at something like 9.99999999999995 or 10.000000000007.
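You can watch the drift happen yourself (a small sketch; the exact digits you end up with depend on how the rounding accumulates):

var x = 1.0
for (_ <- 1 to 900) x += 0.01 // 900 steps of 0.01 from 1.0 "should" reach 10.0
println(x)                    // close to 10.0, but not exactly
println(x == 10.0)            // almost certainly false: every += rounds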

That’s why I’m wondering if there was a legitimate technical reason for this deprecation, namely the failure of Doubles to represent some decimal numbers precisely. If I were using this construct in critical code, I would add a small epsilon to the end point to make sure it is not missed due to numerical roundoff. For example:

for (x <- 0.0 to 10.01 by 0.2)

Does BigDecimal make that little trick unnecessary? (Easy enough to test, but I don’t feel like doing it right now.)
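For anyone who does feel like testing it, here is a quick sketch (my expectation: decimal stepping lands on the end point exactly, so the epsilon becomes unnecessary):

val r = BigDecimal(0.0) to BigDecimal(10.0) by BigDecimal(0.2)
println(r.last)                      // expected: 10.0, no epsilon required
println(r.last == BigDecimal(10.0)) // expected: true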

If you want a series of equidistant Doubles, use Int indices and calculate, e.g.

scala> val (min, max, step) = (1.0d, 10.0d, 0.01d)
min: Double = 1.0
max: Double = 10.0
step: Double = 0.01

scala> val n = ((max - min) / step).toInt
n: Int = 900

scala> val doubles = (0 to n).map(i => ((n - i) * min + i * max) / n)
doubles: scala.collection.immutable.IndexedSeq[Double] = Vector(1.0, 1.01, 1.02, 1.03, 1.04, 1.05, 1.06, 1.07, 1.08, 1.09, 1.1, 1.11, 1.12, 1.13, 1.14, 1.15, 1.16, 1.17, 1.18, 1.19, 1.2, …

scala> doubles.last
res0: Double = 10.0

Best, Oliver

The technical reason that floating-point ranges are deprecated is that precise addition of Double and Float is not guaranteed. In particular, it fails on addition of numbers with a decimal fraction, even if that fraction is “not tiny”, like 0.1. The library should not contain methods that mysteriously give you unreliable behavior, hence the deprecation.

The BigDecimal approach solves the issue because the math is performed without imprecision.
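A minimal illustration (0.1 and 0.2 are exact in decimal but not in binary):

scala> 0.1 + 0.2 == 0.3
res0: Boolean = false

scala> BigDecimal("0.1") + BigDecimal("0.2") == BigDecimal("0.3")
res1: Boolean = true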

There’s no advantage to trying epsilon-shifting schemes because then you may as well just use integers to represent your decimal fraction, for example in the manner that Oliver demonstrated.

One could envision a macro that did the right thing for literals.

Some literal-minded person with a vision ought to envision such a macro for literals.

One can dispute whether Range.Double is useful (stepping with precision but delivering doubles), or whether there is some other useful sense of double-stepping that could be controlled by a context. But please acknowledge that I am discussing the former, which is limited but by itself unproblematic. Sometimes all you need or want is an unproblematic solution to an unproblematic problem.

I came back from the angry Java thread even angrier. You wouldn’t like me when I’m angry.

It is fairly easy to write a routine that works right by using multiplication and division rather than addition. In fact, I did exactly that a long time ago for my scalar class (which represents physical scalars with units). Here it is:

def scalarSteps(start: Scalar, end: Scalar, step: Scalar): Vector[Scalar] = {
  val inc = abs(step)                       // magnitude of the step
  val sgn = Real(signum(step))              // convert to "Double"
  val start1 = Real(start / inc)            // rescale so the step becomes 1
  val end1 = Real(end / inc) + 1e-10 * sgn  // epsilon so the end point isn't missed
  (BigDecimal(start1) to end1 by sgn)
    .map(_.toDouble).map(_ * inc).toVector  // step exactly, then scale back
}

A simpler version of this (replace Scalar with Double) could be provided by default for so-called “Doubles” in Scala so that the deprecated syntax could be maintained and would work correctly. That would relieve users of spending time to figure out how to use BigDecimal. It would also result in a tiny performance penalty, but I would gladly take the slight hit in return for the convenience.
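For concreteness, here is a sketch of what that simpler Double version might look like (hypothetical code, not a library proposal; the name doubleSteps and the 1e-10 epsilon are just illustrative, following the scalarSteps pattern above):

def doubleSteps(start: Double, end: Double, step: Double): Vector[Double] = {
  val inc = math.abs(step)
  val sgn = math.signum(step)
  // Rescale so BigDecimal can step in whole multiples of the step size,
  // then scale back to Doubles at the end.
  val end1 = end / inc + 1e-10 * sgn // epsilon so the end point isn't missed
  (BigDecimal(start / inc) to end1 by sgn).map(_.toDouble * inc).toVector
}

doubleSteps(1.0, 10.0, 0.01).last // should come out as exactly 10.0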

For what it’s worth, it just occurred to me that if the human race had chosen base 8 (octal) instead of base 10 as the standard numeral system, we wouldn’t have this problem. People say we use base 10 because we have ten fingers, but actually we have 8 fingers and two thumbs! Too late to fix that one, I guess!

That doesn’t work right on 0.1 to 0.299999999999 by 0.2.

(How) is this different from 1 to 7 by 2?

Binary integers can represent whole decimal numbers exactly (within their range). Binary fractions, which is what Float and Double are, cannot represent most decimal fractions (such as 0.1) exactly. So 1 to 7 by 2 is calculated without error.
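You can see what the Double literal 0.1 actually stores by asking Java’s BigDecimal for the exact binary value:

scala> new java.math.BigDecimal(0.1)
res0: java.math.BigDecimal = 0.1000000000000000055511151231257827021181583404541015625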

I’m sorry, but I truly do not see why you think Range.Double is “unproblematic”. What do you think about this behavior:

Welcome to Scala 2.12.4 (OpenJDK 64-Bit Server VM, Java 1.8.0_171).
Type in expressions for evaluation. Or try :help.

scala> Range.Double(0.0, 7.0, 1.0).last
res0: Double = 6.0

scala> Range.Double(0.0, 0.7, 0.1).last
res1: Double = 0.7000000000000001

Best, Oliver

Even if you step with precision, you run into surprises. What should the behavior of 0.1 until 3*0.1 by 0.1 be? More sneakily, suppose you have def steps(x: Double) = x until 3*x by x. Should this sometimes give you three elements and sometimes two?

The imprecision can easily arrive in the input, which is why a solution with precise stepping isn’t really a solution.
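To make that concrete: the end point has already drifted before the range method ever sees it.

scala> 3 * 0.1
res0: Double = 0.30000000000000004

scala> 3 * 0.1 == 0.3
res1: Boolean = false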