@conectado
Created July 19, 2021 13:39
Complexity of a^b
To efficiently calculate a ^ b I found this algorithm: https://en.wikipedia.org/wiki/Exponentiation_by_squaring
Luckily (for me), I don't think the reasoning behind the algorithm is very well explained there, so I had the chance to think it over, and I think I came up with a good way to explain it.
The most important concept behind the algorithm is decomposing "b" into a sequence of multiplications by 2 and additions of 1:
So, given the binary representation of a number, we recognize 2 operations: left-shift and "put a 1 in the lsb". For any binary representation, the first can be done by multiplying by 2 and the second by adding 1.
With these operations we proceed like on a calculator: we start with 1 (the msb), then shift left and "enter 1" if the second msb is 1 (or just shift if it's 0), then do the same for the third digit, and so on until we have entered all the digits.
For example, for 13, whose binary representation is 1101, the steps are:
[lsb is 0 and msb is 3]
0001 -> 1 (Digit 3)
0010 -> (2 * 1)
0011 -> ((2 * 1) + 1) (Digit 2)
0110 -> 2 * ((2 * 1) + 1)
1100 -> 2 * (2 * ((2 * 1) + 1)) (Digit 1 is 0 so no addition was needed)
1101 -> (2 * (2 * ((2 * 1) + 1))) + 1
And (2 * (2 * ((2 * 1) + 1))) + 1 = (2 * (2 * 3)) + 1 = (2 * 6) + 1 = 12 + 1 = 13
So the construction recovers 13, as expected.
(Note: I think this is much easier to understand than a mathematical proof, but if I had to prove it I'd use induction on the number of bits; I think that would prove it easily.)
So you can write any number from its binary representation by starting with 1 (its msb); then for the next bit, multiply by 2 and add 1 if the bit is 1 (or 0 if it's 0), and continue like that for every remaining bit.
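The doubling-and-adding construction above can be sketched in a few lines of Python (the function name is mine, not from the algorithm):

```python
def from_bits(bits):
    """Rebuild a number from its bits, msb first, by doubling and adding."""
    n = 0
    for bit in bits:
        n = n * 2 + bit  # left-shift, then "enter 1" if the bit is set
    return n

print(from_bits([1, 1, 0, 1]))  # 13, matching the 1101 example above
```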
Now, using this, we can compute a ^ b as a series of squarings:
Given b's representation as ((((b_n-1 * 2 + b_n-2) * 2 + b_n-3) * 2 ...) * 2) + b_0, we now have:
a ^ b = a ^ (((((b_n-1 * 2 + b_n-2) * 2 + b_n-3) * 2 ...) * 2) + b_0) = ((((a ^ b_n-1) ^ 2 * a ^ b_n-2) ^ 2 * a ^ b_n-3) ^ 2 ...) * a ^ b_0
where b_i is 0 if bit i of b is 0, and 1 if bit i of b is 1 (so each factor a ^ b_i is either 1 or a).
And this gives us an algorithm to compute a ^ b:
Start at b's msb (which is always 1) with a; then for each following bit, square the result and multiply it by a if the bit is 1. After processing the last bit you are done.
Going back to the example of b = 13 = 1101b:
(Digit 3) is 1 -> a
(Digit 2) is 1 -> (a ^ 2) * a = a ^ 3
(Digit 1) is 0 -> (a ^ 3) ^ 2 = a ^ 6
(Digit 0) is 1 -> ((a ^ 6) ^ 2) * a = a ^ 13
===
In pseudo code:
// here bit 0 is the msb and b has n bits
given a and b
res <- a            // b's msb is always 1, so res starts as a ^ 1
for bit in b_bits[1..n-1] {
    res <- res ^ 2
    if bit == 1 {
        res <- res * a
    }
}
===
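Here is how that pseudocode could look in Python (a sketch; the function name is mine):

```python
def pow_by_squaring(a, b):
    """Compute a ** b (b >= 1) by squaring, scanning b's bits from msb to lsb."""
    bits = [int(c) for c in bin(b)[2:]]  # msb first; bits[0] is always 1
    res = a                              # handles the msb
    for bit in bits[1:]:
        res = res * res                  # squaring doubles the exponent so far
        if bit:
            res = res * a                # adds 1 to the exponent when the bit is set
    return res

print(pow_by_squaring(3, 13))  # 1594323, the same as 3 ** 13
```

Note the `if bit` guard: multiplying by a only when the bit is 1 is what makes the zero bits cost a single squaring.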
Here is the complexity analysis. Clearly this algorithm does at most 2 multiplications per bit of b, which is at most 2 * floor(log(b)) multiplications, presumably better than the naive algorithm, which does b - 1 multiplications (multiplying by a, b - 1 times).
We know that squaring a number doubles its number of digits, so at the ith step there are up to 2 ^ i * n digits involved in the multiplication, where n is the number of digits of a (meaning log(a)), and the multiplication of two n-digit numbers can be done in O(n ^ k) for some fixed k (that last one I took from Wikipedia). Then the complexity is:
The ith multiplication takes O((2 ^ i * log(a)) ^ k), and since there are log(b) steps, the complexity of calculating a ^ b is (taking i to be log(b) for every step) O(log(b) * (2 ^ log(b) * log(a)) ^ k) = O(log(b) * (b * log(a)) ^ k)
Being the answer: O(log(b) * b ^ k * log(a) ^ k)
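The multiplication-count claim is easy to check by counting operations: one squaring per bit after the msb, plus one extra product per set bit. A small sketch (function names are mine) comparing against the naive method:

```python
def mults_by_squaring(b):
    """Multiplications used by the squaring method for exponent b."""
    bits = bin(b)[2:]  # msb first
    # one squaring per bit after the msb, plus one product per set bit after it
    return (len(bits) - 1) + bits[1:].count("1")

def mults_naive(b):
    """Multiplications used by repeatedly multiplying an accumulator by a."""
    return b - 1

print(mults_by_squaring(13), mults_naive(13))  # 5 vs 12 for b = 13
```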