#### In place rotation

It’s easy to rotate an array in O(n) time if we can use O(n) additional space, or if we don’t need to overwrite the original array:

```python
def rotate_right_copy(a, k):
    if not a:
        return a
    k %= len(a)  # handle k >= len(a)
    return a[-k:] + a[:-k]
```

```c
void rotate_right_copy(int output[], int input[], size_t len, size_t k)
{
    size_t i;
    for (i = 0; i < k; i++)
        output[i] = input[len - k + i];
    for (i = k; i < len; i++)
        output[i] = input[i - k];
}
```

But how fast can the array be rotated “in place”, using only O(1) (constant) additional space?

#### Sorted arrays intersection

Given two sorted arrays, *a* and *b*, how fast can we find the intersection between them?


An easy solution for the first problem is to do k rotations by 1 to the right.
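The original snippet wasn’t preserved, but the naive approach described here can be sketched like this (my reconstruction):

```python
def rotate_right_naive(a, k):
    """Rotate list a to the right by k positions, one step at a time.

    Performs k rotations of 1, each costing O(n), so O(n * k) overall.
    """
    n = len(a)
    if n == 0:
        return
    for _ in range(k % n):
        last = a[-1]
        # Shift every element one position to the right.
        for i in range(n - 1, 0, -1):
            a[i] = a[i - 1]
        a[0] = last
```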

Of course, this solution takes O(n*k) time. There are surely better solutions, but this was the first one I could think of.

Is there a solution faster than O(n + m) (where n and m are the lengths of arrays a and b) for the second problem?

Yes, there are at least two different O(n) solutions.

There is a solution that is much faster than O(n + m) if n << m. Thanks for your comment!

Yay! I came up with an O(n) solution!

(the code might not be very good… my Python skills are quite limited =P)

Explaining how it works without a piece of paper is quite difficult, but the basic idea is to move each element to its final position, one at a time. Of course, when you move one element to its final position, you have to take care of the element that was in that position, so you just continue with that element. The algorithm ends when every element has been moved once.

Explaining the reason for the GCD and why this thing is O(n) is beyond the scope of this comment =P
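Since the commenter’s code didn’t survive, here is a sketch of the cycle-following idea described above: the array splits into gcd(n, k) disjoint cycles, each cycle moves n / gcd(n, k) elements, and every element is moved exactly once.

```python
from math import gcd

def rotate_right_inplace(a, k):
    """Rotate list a right by k in O(n) time and O(1) extra space."""
    n = len(a)
    if n == 0:
        return
    k %= n
    if k == 0:
        return
    # There are gcd(n, k) disjoint cycles; follow each one,
    # moving every element directly to its final position.
    for start in range(gcd(n, k)):
        current = start
        value = a[start]
        while True:
            nxt = (current + k) % n
            a[nxt], value = value, a[nxt]
            current = nxt
            if current == start:
                break
```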

Excellent code: it’s much nicer than my O(n) solution!

Explaining why it’s O(n) is easy: all the simple statements are O(1) and the inner loop body executes gcd(n, k) * (n / gcd(n, k)) = n times.

I agree that explaining the reason for the GCD is harder; it’s the number of disjoint cycles… but this is not very clear 😀 . I will try to come up with a good explanation in my next post.

Hello!

First of all, that is some very nice Python code you both posted!

Regarding the second problem, Demian, you’re right: if you go through every item in both arrays, you end up with an O(n + m) algorithm, where n and m are the arrays’ lengths.
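For reference, a sketch of that linear merge-style intersection (assuming both arrays are sorted; the function name is mine):

```python
def intersect_linear(a, b):
    """Intersection of two sorted lists in O(n + m) time."""
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            # Equal: record the value once, skipping duplicates.
            if not result or result[-1] != a[i]:
                result.append(a[i])
            i += 1
            j += 1
    return result
```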

As Mariano said, there’s another solution, which is more efficient *if* m << n.

It is based on finding each item of the small array in the big array, using binary search.

In that case, the algorithm is O(m*log(n))
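A sketch of this binary-search variant using the standard-library bisect module (the original code wasn’t preserved, so the names here are mine):

```python
from bisect import bisect_left

def intersect_bsearch(small, big):
    """Intersection of two sorted lists in O(m * log n) time,
    where m = len(small) and n = len(big)."""
    result = []
    for x in small:
        # Binary-search for x in the big array.
        pos = bisect_left(big, x)
        if pos < len(big) and big[pos] == x:
            if not result or result[-1] != x:
                result.append(x)
    return result
```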

For example:

if m = 10 and n = 1000:

linear: 10 + 1000 = 1010

binary: 10 * log(1000) = 30

if m = 1000 and n = 1000:

linear: 2000

binary: 1000 * 3 = 3000

I am not very experienced in Python, but here’s some code to test both solutions.

Very nice, Gille. I didn’t think that a binary search would help that much! (When I saw that the complexity became a product, I immediately thought that it was worse than the sum =P).

It would be cool if intersect() chose the right algorithm depending on the lengths of the arrays. It would also be cool if, whichever of the two arrays has fewer elements, that one were the one iterated over to search for its elements in the other.

Luckily, this is quite easy to accomplish (and it’s even easier to understand in code than in English =P):
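The snippet that followed wasn’t preserved; here is a self-contained sketch of such a dispatching wrapper (names and the exact cost comparison are my assumptions, and duplicate handling is omitted for brevity):

```python
from bisect import bisect_left
from math import log2

def _linear(a, b):
    # Merge-style intersection: O(m + n).
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            result.append(a[i])
            i += 1
            j += 1
    return result

def _bsearch(small, big):
    # Look up each small-array item in the big array: O(m * log n).
    result = []
    for x in small:
        pos = bisect_left(big, x)
        if pos < len(big) and big[pos] == x:
            result.append(x)
    return result

def intersect(a, b):
    """Pick the cheaper strategy based on the input sizes,
    always iterating over the smaller array."""
    small, big = (a, b) if len(a) <= len(b) else (b, a)
    m, n = len(small), len(big)
    # Roughly m * log2(n) comparisons for binary search vs. m + n
    # for the merge.
    if n > 0 and m * log2(n) < m + n:
        return _bsearch(small, big)
    return _linear(small, big)
```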

Oops… I mean “Guille” xD
