I am implementing an experimental validation workflow in Scala based on Mercury orbital data (2024–2025).
The dataset contains:

- Position `x` (meters)
- Velocity `v` (m/s)
- Mass `m` (kg)
- Momentum `p = m × v`
- Invariant quantity `NKTg1 = x × p`

Typical magnitudes:

- x ≈ 10^10
- m ≈ 10^23
- NKTg1 ≈ 8.90 × 10^38
Velocity is reconstructed algebraically:

    v = NKTg1 / (x * m)
Observed average relative deviation vs 2025 measured values: ~1–2%.
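A self-contained round-trip check of that algebra (the values are made-up round numbers near the stated magnitudes, not real ephemeris data): if `NKTg1` is computed from a row as `x * m * v`, dividing it back by `x * m` must recover `v` exactly.

```scala
// Made-up round values near the magnitudes in the question (not real data).
val x = BigDecimal("5.8E+10")  // position, m
val v = BigDecimal("4.7E+4")   // velocity, m/s
val m = BigDecimal("3.3E+23")  // mass, kg

// Forward: invariant computed from the row itself.
val nktg1 = x * m * v          // ~9.0e38, same order as the stated 8.90e38

// Backward: algebraic reconstruction of velocity.
val vBack = nktg1 / (x * m)

println(vBack == v)            // true: the round trip is exact here
```

The ~1–2% deviation therefore comes from using a single fixed `NKTg1` across rows, not from the division itself.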
Current Scala Implementation

```scala
case class MercuryData(
  date: String,
  position: BigDecimal,  // x, meters
  velocity: BigDecimal,  // v, m/s
  mass: BigDecimal       // m, kg
) {
  // Reference invariant, held constant for every row.
  val NKTg1: BigDecimal = BigDecimal("8.90E+38")

  // p = m * v
  def momentum: BigDecimal = mass * velocity

  // Invariant computed from this row's data: x * p
  def invariantValue: BigDecimal = position * momentum

  // Velocity reconstructed from the reference invariant: NKTg1 / (x * m)
  def simulatedVelocity: BigDecimal = NKTg1 / (position * mass)

  def relativeErrorPercent: BigDecimal =
    (simulatedVelocity - velocity) / velocity * 100
}
```
I chose `BigDecimal` because:

- `Double` carries only ~15–16 significant digits, which introduces drift at the ~10^38 scale
- Reproducibility is important
- Deterministic results across runs matter
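A quick illustration of the first point: near 10^38 the spacing between adjacent `Double` values is on the order of 10^23, so smaller terms are silently absorbed, while `BigDecimal` (at its default 34-digit `DECIMAL128` context) keeps them.

```scala
val big = 8.90e38
println(math.ulp(big))            // spacing between adjacent Doubles: ~1.5e23
println(big + 1e20 == big)        // true: the 1e20 term is silently absorbed

val bigBD = BigDecimal("8.90E+38")
println(bigBD + BigDecimal("1E+20") == bigBD)  // false: BigDecimal keeps it
```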
Questions

- For magnitudes near 10^38, is `BigDecimal` the correct approach in Scala, or is there a performant alternative that still guarantees deterministic precision?
- Are there recommended numeric contexts (`MathContext`) for scientific-scale workloads?
- If scaling to millions of rows:
  - Should this remain pure Scala collections?
  - Or move to a streaming / Spark-style pipeline?
- Are there recommended numeric libraries for high-precision, invariant-style models?
- Are there known performance pitfalls when repeatedly computing `constant / (x * m)` with `BigDecimal` in tight loops?
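For concreteness, here is the shape of the tight loop I am asking about, with the context pinned explicitly. Scala's `BigDecimal` defaults to `MathContext.DECIMAL128` (34 significant digits) already, but pinning it makes the precision a documented part of the code. The row count and values below are illustrative, not from the real workflow.

```scala
import java.math.MathContext

// Pin the context explicitly; DECIMAL128 = 34 significant digits,
// which is also Scala's default for BigDecimal.
val mc = MathContext.DECIMAL128

// Hoist constants out of the loop: parsing "8.90E+38" per iteration
// would be pure overhead.
val nktg1 = BigDecimal("8.90E+38", mc)

// Illustrative (position, mass) rows, made-up values.
val rows: Vector[(BigDecimal, BigDecimal)] =
  Vector.tabulate(1000) { i =>
    (BigDecimal(5.8e10 + i * 1.0e4, mc), BigDecimal("3.3E+23", mc))
  }

// One multiplication and one division per row; division dominates the cost.
val velocities: Vector[BigDecimal] =
  rows.map { case (x, m) => nktg1 / (x * m) }
```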
Goal

This is not about astrophysical theory. It is about:

- Deterministic numeric modeling
- Precision at extreme magnitudes
- Functional modeling of invariant-based computations
- Performance trade-offs in Scala
I would appreciate guidance from the Scala community regarding best practices for high-magnitude, high-precision arithmetic workflows.