The following diagrams show how the process differs between the normal-order and applicative-order substitution models; the red parts mark where the remainder operation is actually performed. The remainder operation is performed 18 times under normal order but only 4 times under applicative order.
Exercise 1.21
The helper functions for computing the smallest divisor of a given number n:
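The code itself is not preserved in this copy; a minimal sketch of the trial-division helpers, assuming the book's structure (the names are the conventional ones, written in this document's camelCase style):

```javascript
function square(x) {
    return x * x;
}

// Try divisors 2, 3, 4, ... until one divides n or exceeds sqrt(n).
function findDivisor(n, testDivisor) {
    return square(testDivisor) > n
        ? n                                   // no divisor <= sqrt(n): n is prime
        : n % testDivisor === 0
        ? testDivisor
        : findDivisor(n, testDivisor + 1);
}

function smallestDivisor(n) {
    return findDivisor(n, 2);
}
```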
Get the smallest divisor of each of the given numbers:
outputs:
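The printed results are missing from this copy; a self-contained sketch of the three calls (repeating the helper so the snippet runs on its own) together with the values they produce:

```javascript
function findDivisor(n, testDivisor) {
    return testDivisor * testDivisor > n
        ? n
        : n % testDivisor === 0
        ? testDivisor
        : findDivisor(n, testDivisor + 1);
}

function smallestDivisor(n) {
    return findDivisor(n, 2);
}

console.log(smallestDivisor(199));    // → 199  (199 is prime)
console.log(smallestDivisor(1999));   // → 1999 (1999 is prime)
console.log(smallestDivisor(19999));  // → 7    (19999 = 7 × 2857)
```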
Exercise 1.22
The helper functions for timing the prime number evaluation:
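The timing helpers are likewise missing here; a sketch of what they might look like, assuming `Date.now()` for timestamps and the trial-division test from Exercise 1.21 (`timedPrimeTest` is an illustrative name):

```javascript
function findDivisor(n, testDivisor) {
    return testDivisor * testDivisor > n
        ? n
        : n % testDivisor === 0
        ? testDivisor
        : findDivisor(n, testDivisor + 1);
}

function isPrime(n) {
    return n > 1 && n === findDivisor(n, 2);
}

// Run the primality check and report how long it took, in milliseconds.
function timedPrimeTest(n) {
    const start = Date.now();
    const result = isPrime(n);
    const elapsed = Date.now() - start;
    if (result) {
        console.log(`${n} is prime (took ${elapsed} ms)`);
    }
    return result;
}

timedPrimeTest(100003);
```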
Search for the three smallest prime numbers larger than startNumber.
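A self-contained sketch of the search under the same assumptions:

```javascript
function findDivisor(n, testDivisor) {
    return testDivisor * testDivisor > n
        ? n
        : n % testDivisor === 0
        ? testDivisor
        : findDivisor(n, testDivisor + 1);
}

function isPrime(n) {
    return n > 1 && n === findDivisor(n, 2);
}

// Collect the three smallest primes strictly larger than startNumber.
function searchForPrimes(startNumber) {
    const primes = [];
    let candidate = startNumber + 1;
    while (primes.length < 3) {
        if (isPrime(candidate)) {
            primes.push(candidate);
        }
        candidate = candidate + 1;
    }
    return primes;
}

console.log(searchForPrimes(1000));  // → [ 1009, 1013, 1019 ]
```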
outputs:
The order of growth of the trial-division primality test is Θ(√n); therefore, by assumption, the time needed to test the primality of 1000003 should be about √10 ≈ 3.16 times the time required for 100003. Running the timed test 10 times, I found the average ratio of the evaluation times of 1000003 and 100003 to be around 1.5, slightly less than half of √10. This discrepancy could be due to many reasons, but it at least shows that a larger number requires more time to test.
Exercise 1.23
One way of optimizing the isPrime(n) algorithm is to skip the even test divisors other than 2: every even number's smallest divisor is 2, so once 2 has been ruled out, trying the remaining even candidates is definitely a waste of computing resources. The optimized function is as follows:
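The optimized code is not preserved in this copy; a sketch, assuming the findDivisor structure from Exercise 1.21 plus a `next` function that steps from 2 to 3 and then advances by 2, as in the book's Exercise 1.23:

```javascript
// After 2 has been tried, only odd candidates are worth testing.
function next(testDivisor) {
    return testDivisor === 2 ? 3 : testDivisor + 2;
}

function findDivisor(n, testDivisor) {
    return testDivisor * testDivisor > n
        ? n
        : n % testDivisor === 0
        ? testDivisor
        : findDivisor(n, next(testDivisor));
}

function smallestDivisor(n) {
    return findDivisor(n, 2);
}

console.log(smallestDivisor(19999));  // → 7
```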
outputs:
I ran both algorithms 10 times and found that in most cases the optimized function runs approximately twice as fast as the original one. Since the optimization halves the number of test divisors, the result matches the expectation fairly well.
Exercise 1.24
Fermat’s Little Theorem - a probabilistic method
In this test, I chose three prime numbers near 1,000 and three near 1,000,000, and tested each one 100 times, to see how the size of the number affects the evaluation time:
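The test harness is not preserved here; a sketch of the Fermat test being timed, assuming the book's expmod with successive squaring (the timing scaffold and names are illustrative):

```javascript
// expmod keeps every intermediate value below m * m by reducing at each step.
function expmod(base, exp, m) {
    if (exp === 0) {
        return 1;
    }
    if (exp % 2 === 0) {
        const half = expmod(base, exp / 2, m);
        return (half * half) % m;
    }
    return (base * expmod(base, exp - 1, m)) % m;
}

// One Fermat trial: pick a random a in [1, n-1] and check a^n ≡ a (mod n).
function fermatTest(n) {
    const a = 1 + Math.floor(Math.random() * (n - 1));
    return expmod(a, n, n) === a;
}

// Time 100 trials for one number, in milliseconds.
function timeFermat(n) {
    const start = Date.now();
    for (let i = 0; i < 100; i = i + 1) {
        fermatTest(n);
    }
    return Date.now() - start;
}

console.log(timeFermat(1009), timeFermat(1000003));
```

For moduli around 1,000,000 the intermediate products stay below 10^12, which is within JavaScript's safe integer range.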
output:
Theoretically, the order of growth of the Fermat test is Θ(log n), so testing the primes near 1,000,000 should take about log(1,000,000)/log(1,000) = 2 times as long as testing the primes near 1,000. The output indeed shows that evaluating the larger numbers takes around twice the time of the smaller ones, which agrees with the logarithmic growth.
Exercise 1.25
Alyssa P. Hacker proposed a “concise” way of implementing the expmod function:
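Her code is missing from this copy; per the book, the idea is to compute the full power with fast exponentiation and reduce modulo m only once at the end. A sketch:

```javascript
// Standard fast exponentiation by successive squaring.
function fastExpt(base, exp) {
    return exp === 0
        ? 1
        : exp % 2 === 0
        ? fastExpt(base * base, exp / 2)
        : base * fastExpt(base, exp - 1);
}

// Alyssa's expmod: build base^exp in full, then take the remainder.
function expmod(base, exp, m) {
    return fastExpt(base, exp) % m;
}

console.log(expmod(2, 10, 1000));  // → 24
```

Note that for the operand sizes the Fermat test uses, base^exp far exceeds JavaScript's safe integer range.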
This implementation is a bit problematic. Let’s first take a look at the original implementation:
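For reference, the book's version, sketched in the same JavaScript style:

```javascript
// The book's expmod: the remainder is taken at every squaring and
// multiplication, so intermediate values never exceed (m - 1)^2.
function expmod(base, exp, m) {
    if (exp === 0) {
        return 1;
    }
    if (exp % 2 === 0) {
        const half = expmod(base, exp / 2, m);
        return (half * half) % m;
    }
    return (base * expmod(base, exp - 1, m)) % m;
}

console.log(expmod(2, 10, 1000));  // → 24
```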
The difference between the two implementations is that Alyssa's method computes the full power first and reduces only once at the end, returning base^exp mod m, while the original method applies the remainder at every step, returning (base^(exp/2) mod m)^2 mod m in the even case and ((base^(exp−1) mod m) × base) mod m in the odd case. Mathematically the two return the same value; the problem is that Alyssa's version must build up enormous intermediate numbers before reducing them, and arithmetic on such huge numbers is very slow (or, with fixed-precision number types, simply overflows), which makes her implementation unusable for the fast prime test.
Exercise 1.26
Louis Reasoner implemented the expmod function in the following way:
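His code is missing from this copy; per the book, Louis replaced the squaring of a single recursive result with an explicit multiplication of two identical recursive calls. A sketch:

```javascript
function expmod(base, exp, m) {
    if (exp === 0) {
        return 1;
    }
    if (exp % 2 === 0) {
        // Two separate recursive calls instead of squaring one shared result.
        return (expmod(base, exp / 2, m)
                * expmod(base, exp / 2, m)) % m;
    }
    return (base * expmod(base, exp - 1, m)) % m;
}

console.log(expmod(2, 10, 1000));  // → 24 (the answers are still correct)
```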
The growth of this function is actually Θ(n) rather than Θ(log n). The problem lies in the even case, where multiplying two separate expmod calls instead of squaring one shared result turns the process into a tree recursion: each halving of the exponent doubles the number of calls, so the total number of calls is about 2^(log₂ n) = n, and the logarithmic speedup is lost.
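To see the doubling concretely, a hypothetical call counter (`calls` and `slowExpmod` are illustrative names, not from the original) shows the number of recursive calls growing linearly with the exponent:

```javascript
let calls = 0;

// Louis's version, instrumented to count every invocation.
function slowExpmod(base, exp, m) {
    calls = calls + 1;
    if (exp === 0) {
        return 1;
    }
    if (exp % 2 === 0) {
        return (slowExpmod(base, exp / 2, m)
                * slowExpmod(base, exp / 2, m)) % m;
    }
    return (base * slowExpmod(base, exp - 1, m)) % m;
}

slowExpmod(2, 1024, 1000);
console.log(calls);  // → 3071, roughly 3 × exp
```

The book's version, by contrast, makes only one recursive call per halving (about log₂(exp) + 2 calls for a power of two), which is the Θ(log n) behaviour that was lost.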