Add support for Serde 0.7.
Serde 0.7 dropped its dependency on num, so this patch moves
the implementations here. For the sake of a simple implementation,
this just serializes `BigUint` as a `Vec<u32>`, `BigInt` as a
`(u8, Vec<u32>)`, `Complex<T>` as a `(T, T)`, and `Ratio<T>`
as a `(T, T)`.
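To illustrate the `(T, T)` encoding, here is a minimal sketch for the
`Complex<T>` case; it uses a local stand-in type and today's serde trait
signatures rather than the 0.7-era traits, so it is not the crate's
actual impl:

```rust
use serde::{Serialize, Serializer};

// A local stand-in type keeps this sketch self-contained; the actual
// patch implements the serde 0.7 traits for num's own types, and the
// 0.7 method names differ from the current serde API shown here.
struct Complex<T> {
    re: T,
    im: T,
}

impl<T: Serialize> Serialize for Complex<T> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Encode the value as a 2-tuple (re, im), i.e. the `(T, T)` layout.
        (&self.re, &self.im).serialize(serializer)
    }
}
```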
- Integer only needs to require Ord explicitly, and then PartialOrd, Eq,
and PartialEq come transitively.
- Generics on Integer can implicitly use all of those comparison traits.
This should not be a breaking change, as it doesn't actually change any
effective trait requirements -- only what's explicit for simplicity.
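A simplified sketch of why the single bound is enough (this is not the
real `Integer` definition): `Ord` has `Eq` and `PartialOrd` as
supertraits, and both of those require `PartialEq`, so all four come
along from one explicit bound.

```rust
// Simplified stand-in, not the real num trait: a single `Ord` bound is
// enough, because Ord requires Eq + PartialOrd, and those require PartialEq.
pub trait Integer: Sized + Ord {
    fn div_floor(&self, other: &Self) -> Self;
}

// Generic code bounded only on `Integer` can still use every comparison
// operator without naming the other traits explicitly.
fn smaller<T: Integer>(a: T, b: T) -> T {
    if a <= b { a } else { b } // `<=` comes from PartialOrd, via Ord
}
```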
Add checked_pow function
Implements a `checked_pow` function that does the same as `pow`, but with overflow checks.
As in #152 and #153, the function uses references instead of cloning.
Adds a small macro to avoid code repetition; it's scoped to the function, so nothing outside gets polluted.
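A rough sketch of the idea, using the `CheckedMul` and `One` trait names
from num-traits and omitting the helper macro; this is not the crate's
exact code:

```rust
// Rough sketch only: exponentiation by squaring where every
// multiplication goes through `checked_mul`, so overflow yields
// `None` instead of wrapping or panicking.
use num_traits::{CheckedMul, One};

fn checked_pow<T: Clone + One + CheckedMul>(mut base: T, mut exp: usize) -> Option<T> {
    if exp == 0 {
        return Some(T::one());
    }
    // Square up to the exponent's first set bit, so the accumulator
    // never has to start at one.
    while exp & 1 == 0 {
        base = base.checked_mul(&base)?;
        exp >>= 1;
    }
    let mut acc = base.clone();
    while exp > 1 {
        exp >>= 1;
        base = base.checked_mul(&base)?;
        if exp & 1 == 1 {
            acc = acc.checked_mul(&base)?;
        }
    }
    Some(acc)
}
```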
Previously, the `rand` and `rustc-serialize` dependencies were optional,
except that they were required by the `bigint` feature.
Make the dependency on the `rand` crate optional in all cases,
including when the `bigint` feature is selected. Some of the tests for
the bigint feature are randomized, so while `rand` is now an optional
dependency, it is a non-optional dev-dependency.
Similarly, make the dependency on the `rustc-serialize` crate optional
in all cases, including when the `bigint` feature is selected.
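As a sketch of how the manifest might be laid out under this change
(version numbers and layout are illustrative, not the crate's actual
Cargo.toml):

```toml
# Illustrative layout only; version numbers are placeholders.
[dependencies]
rand = { version = "0.3", optional = true }
rustc-serialize = { version = "0.3", optional = true }

[dev-dependencies]
# The randomized bigint tests still need rand unconditionally.
rand = "0.3"

[features]
# Selecting bigint no longer pulls in rand or rustc-serialize.
bigint = []
```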
We can save a multiplication by starting the accumulation at the
first set bit of the exponent, rather than starting at one and waiting
to multiply. The result only needs to be one itself when the exponent
is zero, which is easy to pre-check.
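A minimal sketch of the idea, assuming the usual
exponentiation-by-squaring loop (not the crate's actual `pow`):

```rust
// Sketch of the change: square the base up to the exponent's first set
// bit and start the accumulator there, instead of starting at one and
// paying an extra multiplication. Only exp == 0 still needs a literal one.
use num_traits::One;

fn pow<T: Clone + One>(mut base: T, mut exp: usize) -> T {
    if exp == 0 {
        return T::one();
    }
    // Seek to the first set bit of the exponent, squaring as we go.
    while exp & 1 == 0 {
        base = base.clone() * base;
        exp >>= 1;
    }
    // The accumulator starts at a power of the base, not at one.
    let mut acc = base.clone();
    while exp > 1 {
        exp >>= 1;
        base = base.clone() * base;
        if exp & 1 == 1 {
            acc = acc * base.clone();
        }
    }
    acc
}
```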
Before:
test pow_bench ... bench: 8,267,370 ns/iter (+/- 93,319)
After:
test pow_bench ... bench: 7,506,463 ns/iter (+/- 116,311)
If a benchmark takes a very long time to run, it's harder to iterate on
changes and see their effect. Even reduced to 100, this pow_bench takes
around 8 seconds on my machine and still shows meaningful optimization
effects.