59: Added `MulAdd` and `MulAddAssign` traits r=cuviper a=regexident
Both `f32` and `f64` implement fused multiply-add, which computes `(self * a) + b` with only one rounding error. This produces a more accurate result, with better performance, than a separate multiplication followed by an addition:
```rust
fn mul_add(self, a: f32, b: f32) -> f32
```
However, without a trait abstracting over this operation, it cannot be used in a generic context.
My concrete use-case is machine learning, [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) to be specific,
where the core operation of updating the gradient could make use of `mul_add` for both its `weights: Vector` and its `bias: f32`:
```rust
struct Perceptron {
    weights: Vector,
    bias: f32,
}

impl MulAdd<f32, Self> for Vector {
    // ...
}

impl Perceptron {
    fn learn(&mut self, example: Vector, expected: f32, learning_rate: f32) {
        let alpha = self.error(example, expected, learning_rate);
        self.weights = example.mul_add(alpha, self.weights);
        self.bias = self.bias.mul_add(alpha, self.bias);
    }
}
```
(The actual impl of `Vector` would be generic over its value type: `Vector<T>`, thus requiring the trait.)
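For illustration, here is a minimal sketch of how the trait enables fused multiply-add in generic code; the helper name `axpy` is hypothetical and not part of the crate:
```rust
use num_traits::MulAdd;

// Sketch: a generic fused multiply-add helper. Any `T` implementing
// `MulAdd` with `Output = T` works, floats and integers alike.
fn axpy<T: MulAdd<Output = T>>(a: T, x: T, y: T) -> T {
    // Computes x * a + y, with a single rounding step where the type supports it.
    x.mul_add(a, y)
}

fn main() {
    assert_eq!(axpy(2.0_f32, 3.0, 1.0), 7.0);
    assert_eq!(axpy(2.0_f64, 3.0, 1.0), 7.0);
}
```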
Co-authored-by: Vincent Esche <regexident@gmail.com>
Co-authored-by: Josh Stone <cuviper@gmail.com>
63: Add CheckedRem and CheckedNeg r=cuviper a=LEXUGE
Continues from #58.
I've already removed all the formatting-only changes.
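A minimal sketch of the new traits in use (the `safe_rem`/`safe_neg` wrapper names are hypothetical); both operations return `None` instead of panicking on the problematic cases:
```rust
use num_traits::{CheckedNeg, CheckedRem};

// Sketch: generic wrappers over the new traits; the names are illustrative.
fn safe_rem<T: CheckedRem>(a: T, b: T) -> Option<T> {
    a.checked_rem(&b)
}

fn safe_neg<T: CheckedNeg>(a: T) -> Option<T> {
    a.checked_neg()
}

fn main() {
    assert_eq!(safe_rem(7_i32, 3), Some(1));
    assert_eq!(safe_rem(5_i32, 0), None);     // remainder by zero
    assert_eq!(safe_rem(i32::MIN, -1), None); // MIN % -1 overflows
    assert_eq!(safe_neg(i32::MIN), None);     // -MIN overflows
    assert_eq!(safe_neg(5_i32), Some(-5));
}
```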
Co-authored-by: LEXUGE <lexugeyky@outlook.com>
Co-authored-by: Josh Stone <cuviper@gmail.com>
The current `f32::to_degrees` implementation uses a division to
calculate 180/π, which causes a loss of precision. Using a constant is
still not perfect (implementing a maximally-precise algorithm would come
with a high performance cost), but improves precision with a minimal
change.
This is a backport from [`std`].
[`std`]: e34c31bf02
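A minimal sketch of the idea, written as a standalone function; the constant name and its exact digits here are illustrative and may differ from the upstream change:
```rust
// Sketch: write 180/π out as a decimal constant instead of computing it
// from an already-rounded approximation of π at runtime.
fn to_degrees_sketch(x: f32) -> f32 {
    // 180/π ≈ 57.2957795130823...; rounding this decimal directly to f32
    // avoids the extra rounding error of the division 180.0 / PI.
    const PIS_IN_180: f32 = 57.295_779_513_082_32_f32;
    x * PIS_IN_180
}

fn main() {
    let deg = to_degrees_sketch(std::f32::consts::PI);
    assert!((deg - 180.0).abs() < 1e-3);
}
```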
52: Refactor ToPrimitive range checks r=cuviper a=cuviper
This is a rebase and continuation of PR #28. The primary benefit is that
floats finally check for overflow before casting to integers, avoiding
undefined behavior. Fixes #12.
The inter-integer conversions and all of the macros for these have also been
tweaked, hopefully improving readability. Exhaustive tests have been added for
good and bad conversions around the target MIN and MAX values.
We don't actually need to compute the `trunc()` value, as long as we can
figure out the right values for the exclusive range `(MIN-1, MAX+1)` to
measure the same truncation effect.
This change adds some new macro rules used when converting from floats
to integers. There are two macro rule variants, one for signed ints, one
for unsigned ints.
Among other things, this change specifically addresses the overflow case
documented in https://github.com/rust-num/num-traits/issues/12
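A minimal sketch of the exclusive-range check described above, for one concrete conversion, `f64` to `i32` (the helper name is hypothetical); because both bounds `MIN - 1` and `MAX + 1` are exactly representable as `f64` here, no `trunc()` call is needed:
```rust
// Sketch: any float strictly between MIN-1 and MAX+1 truncates into [MIN, MAX].
fn f64_to_i32(f: f64) -> Option<i32> {
    const LOWER: f64 = i32::MIN as f64 - 1.0; // -2147483649.0
    const UPPER: f64 = i32::MAX as f64 + 1.0; //  2147483648.0
    if f > LOWER && f < UPPER {
        Some(f as i32)
    } else {
        None // out of range; NaN also fails the comparisons and lands here
    }
}

fn main() {
    assert_eq!(f64_to_i32(2147483647.9), Some(2147483647));
    assert_eq!(f64_to_i32(2147483648.0), None);
    assert_eq!(f64_to_i32(-2147483648.9), Some(-2147483648));
    assert_eq!(f64_to_i32(f64::NAN), None);
}
```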
41: Various improvements to FloatCore r=vks a=cuviper
- New macros simplify forwarding method implementations.
- `Float` and `Real` use this to compact their implementations.
- `FloatCore` now forwards `std` implementations when possible.
- `FloatCore` now requires `NumCast`, like `Float` does.
- New additions to `FloatCore`:
- Constants like `min_value()` -> `f64::MIN`
- Rounding methods `floor`, `ceil`, `round`, `trunc`, `fract`
- `integer_decode` matching `Float`'s
- Fix NAN sign handling in `FloatCore` (rust-num/num#312, rust-lang/rust#42425)
- Fix overflow in `FloatCore::powi` exponent negation.
- Add doctests to all `FloatCore` methods.
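A brief sketch of the additions in generic code (the `split` helper is hypothetical); `FloatCore` alone now provides the rounding methods, without requiring `Float` or `std`:
```rust
use num_traits::float::FloatCore;

// Sketch: `trunc` and `fract` are available through `FloatCore` alone.
fn split<T: FloatCore>(x: T) -> (T, T) {
    // For finite x, the two parts recombine to the original value.
    (x.trunc(), x.fract())
}

fn main() {
    let (int_part, frac_part) = split(-2.5_f64);
    assert_eq!(int_part, -2.0);
    assert_eq!(frac_part, -0.5);

    // The NAN sign fix: a negative NAN reports a negative sign (rust-num/num#312).
    assert!(FloatCore::is_sign_negative(-f64::NAN));
}
```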