The problem is as follows. Suppose you have `Seq(1.0, NaN, -1.0)`. You want to take the `min`, the `max`, and return the sorted version, all using the same ordering.

Well, the `min` should clearly be `NaN`, because you always get `NaN` if you try operations on data with NaN. And the `max` should also clearly be `NaN`, for the same reason.
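That "NaN eats everything" intuition is just standard IEEE 754 behaviour, which a quick sketch can confirm (the object name is arbitrary; everything else is plain Scala/Java stdlib):

```scala
object NaNPropagation extends App {
  val nan = Double.NaN

  // Any arithmetic involving NaN yields NaN.
  assert((1.0 + nan).isNaN)
  assert((nan * -1.0).isNaN)

  // java.lang.Math.min/max also propagate NaN.
  assert(math.min(1.0, nan).isNaN)
  assert(math.max(1.0, nan).isNaN)

  // And every ordered comparison with NaN is false, even against itself.
  assert(!(nan < 1.0) && !(nan > 1.0) && !(nan == nan))
}
```

By that logic, any `min` or `max` over data containing a `NaN` "should" come out as `NaN` too.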

However, you'd expect `xs.sorted.head` to be the same as `min`, so sorting clearly has to put `NaN` first. Also, `xs.sorted.last` should be the same as `max`, so sorting has to put `NaN` last. So sorting is… not a sort any more.

This is all kinds of weird. Very reasonable assumptions break if you accept both that sorting doubles is possible and that the normal IEEE NaN-eats-everything rules apply.

If you want sort order to agree with min and max, then you choose `TotalOrdering`, and accept that `max` will get `NaN` but `min` will avoid it (since it ranks `NaN` as larger than `PositiveInfinity`).
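Concretely, that trade-off looks like this against the Scala 2.13 API (a small sketch; the object name is mine, `xs` is the sequence from the top of the post):

```scala
object TotalOrderingDemo extends App {
  val ord = Ordering.Double.TotalOrdering
  val xs  = Seq(1.0, Double.NaN, -1.0)

  // Sorting is consistent: NaN ranks above everything, so it goes last.
  val sorted = xs.sorted(ord)
  assert(sorted.init == Seq(-1.0, 1.0) && sorted.last.isNaN)

  // min and max agree with the sort order: max "gets" the NaN...
  assert(xs.max(ord).isNaN)
  // ...while min avoids it, instead of propagating it IEEE-style.
  assert(xs.min(ord) == -1.0)
}
```

Because the order is total, these results don't depend on traversal order: every element is comparable with every other, so `min`, `max`, and `sorted` all tell the same story.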

If you don't care that sort order disagrees with min and max, and you just want your NaNs handled like IEEE says, then you choose `IeeeOrdering`.
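With `IeeeOrdering`, the comparison methods follow the primitive IEEE semantics, so `NaN` fails every comparison, including equality with itself. A short sketch of what that buys you (object name is mine, the rest is the 2.13 API):

```scala
object IeeeOrderingDemo extends App {
  val ord = Ordering.Double.IeeeOrdering

  // NaN fails every ordered comparison under IEEE semantics...
  assert(!ord.lt(Double.NaN, 1.0))
  assert(!ord.gt(Double.NaN, 1.0))
  // ...and is not even equivalent to itself.
  assert(!ord.equiv(Double.NaN, Double.NaN))
}
```

That incomparability is exactly why there is no single position in a sorted sequence where `NaN` consistently belongs, and hence why sort order and min/max cannot all agree once a `NaN` is present.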

https://www.scala-lang.org/api/2.13.0/scala/math/Ordering$$Double$$IeeeOrdering.html and https://www.scala-lang.org/api/2.13.0/scala/math/Ordering$$Double$$TotalOrdering.html explain what each ordering does, but cover the inherent conflict in only minimal detail.