In particular, the default `from_f64` used `n as i64`, which has
undefined behavior on overflow -- defeating the very purpose of a
fallible conversion. Now we use a checked `to_i64()` instead, and also
try `to_u64()` as a fallback for values above `i64::MAX`.
(All of the primitive implementations already do better, at least.)
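As a rough demonstration, consider a hypothetical wrapper type that only implements the required 64-bit methods and inherits the default `from_f64` (the type and its impls are illustrative, not from the crate):
```rust
use num_traits::FromPrimitive;

// Hypothetical type: only the required 64-bit methods are provided,
// so `from_f64` comes from the trait's checked default.
#[derive(Debug, PartialEq)]
struct Wrapper(i64);

impl FromPrimitive for Wrapper {
    fn from_i64(n: i64) -> Option<Self> {
        Some(Wrapper(n))
    }
    fn from_u64(n: u64) -> Option<Self> {
        i64::try_from(n).ok().map(Wrapper)
    }
}

fn main() {
    // In range: the default routes through a checked to_i64().
    assert_eq!(Wrapper::from_f64(1e10), Some(Wrapper(10_000_000_000)));
    // Out of range: we get None instead of the old `n as i64` UB.
    assert_eq!(Wrapper::from_f64(1e300), None);
}
```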
This includes new conditional methods `ToPrimitive::{to_i128,to_u128}`
and `FromPrimitive::{from_i128,from_u128}`. Since features can only be
additive, these methods must not cause a breaking change to anyone when
enabled -- thus they have a default implementation that converts through
64-bit values. Types that can do better with a full 128-bit integer,
like bigint or floating-point, will probably want to override these.
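The 64-bit default presumably has roughly this shape (a sketch, not the crate's exact code):
```rust
use num_traits::ToPrimitive;

/// Sketch of the additive default: `to_i128` just widens the result
/// of the existing `to_i64`, so enabling the feature cannot change
/// behavior for implementations written before it existed.
fn to_i128_via_64<T: ToPrimitive>(x: &T) -> Option<i128> {
    x.to_i64().map(i128::from)
}

fn main() {
    // Lossless within the 64-bit range...
    assert_eq!(to_i128_via_64(&42u32), Some(42));
    // ...but a bigint holding more than 64 bits would come back as
    // None here, which is why such types should override to_i128.
}
```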
61: Use constant for 180/π in f32::to_degrees r=cuviper a=vks
The current `f32::to_degrees` implementation uses a division to
calculate 180/π, which causes a loss of precision. Using a constant is
still not perfect (implementing a maximally-precise algorithm would come
with a high performance cost), but improves precision with a minimal
change.
This is a backport from [`std`].
[`std`]: e34c31bf02
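The approach is roughly the following sketch. The literal carries more digits of 180/π than an `f32` can hold, so the compiler rounds it once, correctly, instead of compounding the rounding of `consts::PI` with the rounding of a runtime division:
```rust
// Sketch of the fix: multiply by a precomputed 180/π constant
// instead of dividing by π at runtime.
fn to_degrees(x: f32) -> f32 {
    const PIS_IN_180: f32 = 57.2957795130823208767981548141051703_f32;
    x * PIS_IN_180
}

fn main() {
    // π radians is 180 degrees (up to f32 rounding).
    println!("{}", to_degrees(std::f32::consts::PI));
}
```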
Co-authored-by: Vinzent Steinberg <vinzent.steinberg@gmail.com>
Co-authored-by: Josh Stone <cuviper@gmail.com>
59: Added `MulAdd` and `MulAddAssign` traits r=cuviper a=regexident
Both `f32` and `f64` provide a fused multiply-add, which computes `(self * a) + b` with only a single rounding error. This gives a more accurate result, and often better performance, than a separate multiplication followed by an add:
```rust
fn mul_add(self, a: f32, b: f32) -> f32
```
It is, however, not possible to use this in a generic context without a trait abstracting over the operation.
My concrete use case is machine learning, [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) to be specific,
where the core update step can use `mul_add` for both its `weights: Vector` and its `bias: f32`:
```rust
struct Perceptron {
    weights: Vector,
    bias: f32,
}

impl MulAdd<f32, Self> for Vector {
    // ...
}

impl Perceptron {
    fn learn(&mut self, example: Vector, expected: f32, learning_rate: f32) {
        let alpha = self.error(example, expected, learning_rate);
        self.weights = example.mul_add(alpha, self.weights);
        self.bias = self.bias.mul_add(alpha, self.bias);
    }
}
```
(The actual impl of `Vector` would be generic over its value type: `Vector<T>`, thus requiring the trait.)
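The added trait presumably has this general shape (see the crate docs for the authoritative definition). Both type parameters default to `Self`, so plain homogeneous FMAs need no annotations, while mixed-type cases like `Vector × f32 + Vector` remain expressible:
```rust
// Sketch of the two traits: generic over the multiplier and addend.
pub trait MulAdd<A = Self, B = Self> {
    type Output;

    /// Computes `(self * a) + b` with a single rounding step.
    fn mul_add(self, a: A, b: B) -> Self::Output;
}

pub trait MulAddAssign<A = Self, B = Self> {
    /// Computes `(self * a) + b`, storing the result in `self`.
    fn mul_add_assign(&mut self, a: A, b: B);
}
```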
Co-authored-by: Vincent Esche <regexident@gmail.com>
Co-authored-by: Josh Stone <cuviper@gmail.com>
63: Add CheckedRem and CheckedNeg r=cuviper a=LEXUGE
Continues from #58.
I've already removed all the formatting-only changes.
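A quick demonstration of the two new traits (fully qualified calls are used here to avoid the identically named inherent methods on `i32`):
```rust
use num_traits::{CheckedNeg, CheckedRem};

fn main() {
    assert_eq!(CheckedRem::checked_rem(&10i32, &3), Some(1));
    // Remainder by zero returns None instead of panicking.
    assert_eq!(CheckedRem::checked_rem(&10i32, &0), None);
    // i32::MIN has no positive counterpart, so negation overflows.
    assert_eq!(CheckedNeg::checked_neg(&i32::MIN), None);
    assert_eq!(CheckedNeg::checked_neg(&5i32), Some(-5));
}
```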
Co-authored-by: LEXUGE <lexugeyky@outlook.com>
Co-authored-by: Josh Stone <cuviper@gmail.com>