A BigUint can be divided by a BigDigit - this is one of several
operations being implemented to allow scalar operations on BigInt and
BigUint across the board.
A BigDigit can be subtracted from a BigUint - this is one of several
operations being implemented to allow scalar operations on BigInt and
BigUint across the board.
A BigDigit can be added to a BigUint - this is one of several
operations being implemented to allow scalar operations on BigInt and
BigUint across the board.
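Taken together, a minimal usage sketch of these scalar operations (assuming the `Add`, `Sub`, and `Div` impls between `BigUint` and a primitive digit type that these commits describe):

```rust
use num_bigint::BigUint;

fn main() {
    let n = BigUint::from(1_000u32);
    let sum = n.clone() + 7u32;  // scalar addition
    let diff = n.clone() - 7u32; // scalar subtraction
    let quot = n.clone() / 7u32; // scalar division
    assert_eq!(sum, BigUint::from(1_007u32));
    assert_eq!(diff, BigUint::from(993u32));
    assert_eq!(quot, BigUint::from(142u32));
}
```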
- rename `_twos_complement_` methods to just `_signed_` (usage sketched below)
- make `from_` variants take `&[u8]`
- refactor the two's-complement helper functions (they take a byte slice but delegate to a generic function underneath)
- fix issues in the `to_signed_` functions (apply two's complement only to negative numbers; perform byte extension where needed)
- add tests for the `to_signed_` methods
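A small usage sketch of the renamed conversions, assuming the big-endian `from_signed_bytes_be` / `to_signed_bytes_be` variants:

```rust
use num_bigint::BigInt;

fn main() {
    // Round-trip through two's-complement, big-endian bytes.
    let n = BigInt::from(-2);
    let bytes = n.to_signed_bytes_be();
    assert_eq!(bytes, vec![0xfe]);
    assert_eq!(BigInt::from_signed_bytes_be(&bytes), n);

    // Sign-extended input decodes to the same value.
    assert_eq!(BigInt::from_signed_bytes_be(&[0xff, 0xfe]), n);
}
```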
`DefaultHasher` wasn't stable until 1.13, at which point all the other
hashers were deprecated, so it's not easy for us to name a hasher type
to use for testing. However, `RandomState` has been stable since 1.7,
and it implements `BuildHasher`, which has a `Hasher` associated type.
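A sketch of how a hasher can be obtained for such a test (the `hash` helper is illustrative, not the crate's actual test code):

```rust
use num_bigint::BigUint;
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

// Hash a value with a hasher produced by RandomState's BuildHasher impl.
fn hash<T: Hash>(state: &RandomState, value: &T) -> u64 {
    let mut hasher = state.build_hasher();
    value.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let state = RandomState::new();
    // Equal values must hash equally under the same state.
    assert_eq!(
        hash(&state, &BigUint::from(42u32)),
        hash(&state, &BigUint::from(42u32))
    );
}
```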
If a `+` is encountered in the middle of parsing a BigUint, this should
generate a `ParseIntError::InvalidDigit`. Since we can't create that
directly, we get it by trying to parse a `u64` from this point, but of
course `+` is a perfectly valid prefix to a `u64`.
Now we include the previous character in the string passed to `u64`, so
it has proper parsing context to understand what's in error.
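A small sketch of the expected behavior after the fix (illustrative assertions, not the crate's tests):

```rust
use num_bigint::BigUint;
use std::str::FromStr;

fn main() {
    // A `+` in the middle of the digits is rejected as an invalid digit...
    assert!(BigUint::from_str("12+34").is_err());
    // ...while an ordinary numeral still parses.
    assert!(BigUint::from_str("1234").is_ok());
}
```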
Fixes #268.
Also update to use https instead of http. This avoids mixed-content
degradation on docs.rs.
The doc root URLs are correct as they are; the URL does not include the
crate name itself.
A new Fibonacci benchmark demonstrates the improvement by using both
addition and subtraction in each iteration of the loop, like #200.
Before:
test fib2_100 ... bench: 4,558 ns/iter (+/- 3,357)
test fib2_1000 ... bench: 62,575 ns/iter (+/- 5,200)
test fib2_10000 ... bench: 2,898,425 ns/iter (+/- 207,973)
After:
test fib2_100 ... bench: 1,973 ns/iter (+/- 102)
test fib2_1000 ... bench: 41,203 ns/iter (+/- 947)
test fib2_10000 ... bench: 2,544,272 ns/iter (+/- 45,183)
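The loop being measured is roughly of this shape (a simplified sketch in the spirit of the benchmark, not its exact code):

```rust
use num_bigint::BigUint;
use num_traits::{One, Zero};

// Each iteration performs one big-integer addition and one subtraction.
fn fib2(n: usize) -> BigUint {
    let mut f0: BigUint = Zero::zero();
    let mut f1: BigUint = One::one();
    for _ in 0..n {
        let f2 = &f0 + &f1; // next Fibonacci number
        f0 = &f2 - &f0;     // equals the old f1, recovered via subtraction
        f1 = f2;
    }
    f0
}

fn main() {
    assert_eq!(fib2(10), BigUint::from(55u32));
}
```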
This splits the main arithmetic loops from the final carry/borrow
propagation, and also normalizes the slice lengths before iteration.
This lets the optimizer generate better code for these functions.
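A simplified sketch of the idea with 32-bit digits (not the crate's actual code): the main loop runs over equal-length slices, and carry propagation happens in a separate loop afterwards.

```rust
// Add the digits of `b` into `a` in place, least-significant digit first.
// `a` must be at least as long as `b`, and the final carry must fit in `a`.
fn add2(a: &mut [u32], b: &[u32]) {
    debug_assert!(a.len() >= b.len());
    let (a_lo, a_hi) = a.split_at_mut(b.len());

    // Main loop: both slices have the same length, so the optimizer can
    // elide bounds checks and vectorize more easily.
    let mut carry = 0u64;
    for (ai, &bi) in a_lo.iter_mut().zip(b) {
        carry += *ai as u64 + bi as u64;
        *ai = carry as u32;
        carry >>= 32;
    }

    // Final carry propagation, split out of the main loop.
    for ai in a_hi {
        if carry == 0 {
            break;
        }
        carry += *ai as u64;
        *ai = carry as u32;
        carry >>= 32;
    }
    debug_assert_eq!(carry, 0);
}

fn main() {
    // 0xffff_ffff + 1 carries into the next digit.
    let mut a = vec![0xffff_ffffu32, 0];
    add2(&mut a, &[1]);
    assert_eq!(a, vec![0, 1]);
}
```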
When x.len() and y.len() are both equal and odd, x1.len() + y1.len() + 1
(the size required to multiply x1 and y1) is greater than y.len() + 1.
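For instance (assuming the split point is the floor of half the length), lengths of 5 split at 2 leave high halves x1 and y1 of length 3 each, so x1.len() + y1.len() + 1 = 7, which exceeds y.len() + 1 = 6.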
This fixes #187.