Use Pollard's rho instead of more trial division #326
Merged
In our current implementation, we can't handle any number that has a factor bigger than, roughly, "the thousandth odd number after the hundredth prime": "thousandth" because that's roughly where compiler iteration limits kick in, and "hundredth prime" because we try the first hundred primes in our initial trial division step. This works out to 2,541 (although again, this is only an approximate limit).
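For concreteness, the arithmetic behind that figure: the hundredth prime is 541, and stepping through a thousand odd candidates past it adds 2 × 1,000, giving 541 + 2,000 = 2,541.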
Pollard's rho algorithm requires a number of iterations that is, on average, proportional to the square root of *p*, the smallest prime factor. With the same compiler iteration budget of roughly a thousand steps, we therefore expect good success rates for numbers whose smallest factor is up to roughly one million (since √1,000,000 = 1,000), which is a lot higher than 2,541.
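For reference, here is a minimal sketch of the classic algorithm in Python. This is not the implementation in this PR (which may differ in its choice of polynomial, cycle detection, and retry strategy); it just illustrates the technique:

```python
import math
import random

def pollard_rho(n: int) -> int:
    """Return a nontrivial factor of n, which must be composite and > 1.

    Uses the x -> x^2 + c (mod n) iteration with Floyd cycle detection.
    By the birthday bound, the expected number of steps before the
    sequence collides mod p is O(sqrt(p)) for the smallest prime p.
    """
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)       # random increment for this attempt
        x = y = random.randrange(0, n)   # random starting point
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means the walk degenerated;
            return d                     # loop around and retry with a new c
```

With this sketch, for example, `pollard_rho(2543 * 2551)` should quickly return one of the two prime factors, even though both exceed the 2,541 trial-division limit described above.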
In practice, I've found some numbers that we can't factor with our current implementation, but can if we use Pollard's rho. I've included one of them in a test case. However, there are other numbers (see #217) that even this version of Pollard's rho can't factor. If we can't find an approach that works for these, we may just have to live with it.
Helps #217.