# Two coding problems

#### In-place rotation

It’s easy to rotate an array in O(n) time if we allow O(n) additional space (or, equivalently, if we don’t want the original array overwritten):

```
def rotate_right_copy(a, k):
    k %= len(a)  # normalize k, so rotations by more than len(a) also work
    return a[-k:] + a[:-k]
```
```
void rotate_right_copy(int output[], int input[], size_t len, size_t k)
{
    size_t i;
    for (i = 0; i < k; i++)
        output[i] = input[len - k + i];
    for (i = k; i < len; i++)
        output[i] = input[i - k];
}
```
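As a quick sanity check of the copying version (a sketch; the one-liner is repeated here so the snippet runs on its own):

```python
def rotate_right_copy(a, k):
    # rotating right by k means the last k elements move to the front
    return a[-k:] + a[:-k]

print(rotate_right_copy([1, 2, 3, 4, 5], 2))  # [4, 5, 1, 2, 3]
```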

But how fast can the array be rotated “in place”, using only O(1) (constant) additional space?

#### Sorted arrays intersection

Given two sorted arrays, a and b, how fast can we find the intersection between them?

## 10 thoughts on “Two coding problems”

1. Demian says:

An easy solution for the first problem: do k single-step rotations to the right:

```
def rotate_right_1(a):
    last = a[-1]
    for i in xrange(1, len(a)):
        a[-i] = a[-i - 1]
    a[0] = last

def rotate_right_in_place(a, k):
    for i in xrange(k):
        rotate_right_1(a)
```

Of course, this solution takes O(n*k) time. There are surely better solutions, but this was the first one I could think of.

Is there a sublinear (better than O(n + m), with n and m the lengths of arrays a and b) solution for the second problem?

• mchouza says:

Of course, this solution takes O(n*k) time. There are surely better solutions.

Yes, there are at least two different O(n) solutions.
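One classic O(n)-time, O(1)-space approach (not necessarily one of the two alluded to here) is the three-reversal trick: reverse the whole array, then separately reverse the first k elements and the last n - k elements. A sketch:

```python
def rotate_right_reversal(a, k):
    # Rotate a right by k positions in place using three reversals.
    n = len(a)
    if n == 0:
        return
    k %= n

    def reverse(lo, hi):
        # reverse a[lo:hi] in place
        while lo < hi - 1:
            a[lo], a[hi - 1] = a[hi - 1], a[lo]
            lo += 1
            hi -= 1

    reverse(0, n)  # whole array
    reverse(0, k)  # first k elements
    reverse(k, n)  # remaining n - k elements

a = [1, 2, 3, 4, 5]
rotate_right_reversal(a, 2)
print(a)  # [4, 5, 1, 2, 3]
```

Each element is touched a constant number of times, so the total work is O(n) with only a few scalar temporaries.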

Is there a sublinear (better than O(n + m), with n and m the lengths of arrays a and b) solution for the second problem?

There is a solution that is much faster than O(n + m) if n << m.

2. Demian says:

Yay! I came up with an O(n) solution!

```
import fractions

def replace(a, i, x):
    old = a[i]
    a[i] = x
    return old

def rotate_right_in_place(a, k):
    n = len(a)
    gcd = fractions.gcd(n, k)
    for i in range(gcd):
        curr = i
        x = a[curr]
        for j in range(n // gcd):
            next = (curr + k) % n
            x = replace(a, next, x)
            curr = next
```

(the code might not be very good… my Python skills are quite limited =P)

Explaining how it works without a piece of paper is quite difficult, but the basic idea is to move each element to its final position, one at a time. Of course, when you move an element to its final position, you have to take care of the element that was already in that position, so you just continue with that element. The algorithm ends once every element has been moved exactly once.

Explaining the reason for the GCD and why this thing is O(n) is beyond the scope of this comment =P

• mchouza says:

Excellent code: it’s much nicer than my O(n) solution!

Explaining why it’s O(n) is easy: all the simple statements are O(1) and the inner loop body executes gcd(n, k) * (n / gcd(n, k)) = n times.

I agree that explaining the reason for the GCD is harder; it’s the number of disjoint cycles… but this is not very clear 😀 . I will try to come up with a good explanation in my next post.
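The cycle structure can be made concrete: the rotation is the map i → (i + k) mod n, and following that map partitions the indices into disjoint cycles. A small sketch to enumerate them (the helper name `cycles` is just for illustration):

```python
def cycles(n, k):
    # partition indices 0..n-1 into the cycles of the map i -> (i + k) % n
    seen = set()
    result = []
    for start in range(n):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = (i + k) % n
        result.append(cycle)
    return result

print(cycles(6, 4))  # [[0, 4, 2], [1, 5, 3]] -- gcd(6, 4) = 2 cycles
```

Each cycle is exactly what the inner loop of `rotate_right_in_place` walks, which is why the outer loop needs gcd(n, k) starting points to visit every index once.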

3. Guille says:

Hello!

First of all, that is some very nice Python code you both posted!

Regarding the second problem, Demian, you’re right: if you go through every item in both arrays, you’ll end up with an O(n+m) algorithm, where n and m are the arrays’ lengths.
As Mariano said, there’s another solution, which is more efficient *if* m << n.
It is based on finding each item of the small array in the big array using binary search.
In that case, the algorithm is O(m*log(n)).

For example:

if m = 10 and n = 1000:
linear: 10 + 1000 = 1010
bin: 10 * log2(1000) ≈ 100

if m = 1000 and n = 1000:
linear: 2000
bin: 1000 * log2(1000) ≈ 10000
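The comparison above can be checked with a couple of lines (a sketch; base-2 logarithms are the natural choice, since each binary search takes about log2(n) comparisons):

```python
from math import log

def costs(m, n):
    # rough operation counts: linear merge walk vs. m binary searches in n items
    return m + n, m * log(n, 2)

for m, n in [(10, 1000), (1000, 1000)]:
    linear, binary = costs(m, n)
    print("m=%d n=%d: linear=%d, binary=%d" % (m, n, linear, binary))
```

The crossover is where m + n = m * log2(n): with a small m, the binary-search variant wins; with comparable sizes, the linear merge does.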

4. Guille says:

I am not very experienced in Python, but here’s some code to test both solutions:

```
import random

def random_list(ubound, size):
    l = random.sample(range(1, ubound), size)
    l.sort()
    return l

def bin_search(a, val):
    low = 0
    high = len(a) - 1

    while high >= low:
        i = low + (high - low) / 2
        if a[i] == val:
            return i
        elif a[i] < val:
            low = i + 1
        else:
            high = i - 1

    return None

def linear_intersec(a, b):
    i = j = 0
    l = []

    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            l.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1

    return l

def intersec(a, b):
    l = []

    for val in b:
        j = bin_search(a, val)
        if j is not None:
            l.append(val)

    return l

def main():
    ubound = 120

    a = random_list(ubound, 100)
    b = random_list(ubound, 5)

    print "a= %s, len(a)= %d" % (a, len(a))
    print "b= %s, len(b)= %d" % (b, len(b))

    r1 = linear_intersec(a, b)
    r2 = intersec(a, b)

    print "r1 == r2: %s, r1= %s, len(r1)= %d" % (r1 == r2, r1, len(r1))

if __name__ == '__main__':
    main()
```
5. Demian says:

Very nice Gille. I didn’t think a binary search would help that much! (When I saw that the complexity became a product, I immediately thought it was worse than the sum =P).

It would be cool if intersect() chose the right algorithm depending on the lengths of the arrays. It would also be cool if the array with fewer elements were always the one iterated, searching for its elements in the other one.

Luckily, this is quite easy to accomplish (and it’s even easier to understand in code than in English =P):

```
from math import log

def intersect(a, b):
    # make a the longer array: we always binary-search in a
    # and iterate over the shorter array b
    if len(a) < len(b):
        a, b = b, a

    linear_cost = len(a) + len(b)
    log_cost = len(b) * log(len(a), 2)

    # dispatch to Guille's functions above
    if linear_cost < log_cost:
        return linear_intersec(a, b)
    else:
        return intersec(a, b)
```
6. Demian says:

Oops… I meant “Guille” xD

7. […] found a very interesting linear time solution to the in place rotation problem. This is a cleaned up version that avoids carrying temporary […]

8. […] 4, 2011 at 12:37 am (cs, python) This post finishes explaining the solutions that were given in the comments of the “two coding problems” post. As the […]