Two coding problems

In place rotation

It’s easy to rotate an array in O(n) time if we allow ourselves O(n) additional space, i.e. if we write the rotated result into a new array instead of overwriting the original:

def rotate_right_copy(a, k):
    # Return a new list with a rotated right by k positions (a must be non-empty).
    k %= len(a)  # normalize k so that values >= len(a) also work
    return a[-k:] + a[:-k]

And in C:

/* Write into output[] the contents of input[] rotated right by k positions,
   with 0 <= k <= len. */
void rotate_right_copy(int output[], int input[], size_t len, size_t k)
{
  size_t i;
  for (i = 0; i < k; i++)
    output[i] = input[len - k + i];
  for (i = k; i < len; i++)
    output[i] = input[i - k];
}

But how fast can the array be rotated “in place”, using only O(1) (constant) additional space?
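The post leaves this open as a problem, but one classic answer (a sketch, not necessarily the solution the author had in mind) rotates in O(n) time with O(1) extra space using three reversals:

def rotate_right_in_place(a, k):
    # Reverse a[lo..hi] in place using O(1) extra space.
    def reverse(lo, hi):
        while lo < hi:
            a[lo], a[hi] = a[hi], a[lo]
            lo += 1
            hi -= 1

    n = len(a)
    if n == 0:
        return
    k %= n
    reverse(0, n - 1)  # reverse the whole array
    reverse(0, k - 1)  # un-reverse the first k elements
    reverse(k, n - 1)  # un-reverse the remaining n - k elements

Reversing the whole array puts both blocks in their final positions but internally reversed; the two partial reversals then fix each block up.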

Sorted arrays intersection

Given two sorted arrays, a and b, how fast can we find the intersection between them?
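Again the post poses this as a problem; a common baseline (a sketch, assuming both inputs are sorted ascending and duplicate-free) is a linear two-pointer merge, running in O(len(a) + len(b)) time:

def sorted_intersection(a, b):
    # Walk both sorted arrays in lockstep, advancing the index that
    # points to the smaller element.
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:
            result.append(a[i])
            i += 1
            j += 1
    return result

When one array is much shorter than the other, binary-searching each of its elements in the longer array can be asymptotically faster.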

Factorial asymptotics

This post complements the previous TSP-related post by analyzing some asymptotic properties of the factorial.

n! vs n·2^n

Let’s start by defining both functions as products:

\displaystyle n! = \prod_{i=2}^n i

\displaystyle n\cdot 2^n = n \prod_{i=1}^n 2

Extracting some factors:

\displaystyle n! = n (n - 1) \prod_{i=2}^{n-2} i

\displaystyle n\cdot 2^n = n\cdot 2^3 \prod_{i=2}^{n-2} 2

We can see that

\displaystyle \prod_{i=2}^{n-2} i \ge \prod_{i=2}^{n-2} 2,

since the first product has the same number of factors as the second and each of its factors is greater than or equal to the corresponding one. On the other hand, for n greater than or equal to 9,

\displaystyle n (n - 1) \ge n\cdot 2^3.

Combining these results, for n greater than or equal to 9:

\displaystyle n (n - 1) \prod_{i=2}^{n-2} i \ge n\cdot 2^3 \prod_{i=2}^{n-2} 2

\displaystyle n! \ge n\cdot 2^n.

This is a conservative result, as the factorial is greater starting from n = 6.
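A quick numeric check of that crossover (not part of the proof, just a sanity check):

from math import factorial

# n! first reaches n * 2**n at n = 6 and stays above it afterwards.
for n in range(1, 10):
    print(n, factorial(n), n * 2 ** n, factorial(n) >= n * 2 ** n)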

A bound for the factorial values

One useful way of bounding sums of monotonically growing sequences whose terms follow a relatively simple function is to use the integral of that function. This relies on the following bounds:

(assuming \displaystyle a_i = f(i) and monotonically growing f)

\displaystyle \sum_{i=1}^n a_i = \sum_{i=1}^n \int_i^{i+1} dx\,f(i) = \sum_{i=1}^n \int_i^{i+1} dx\,f(\lfloor x \rfloor) \le \sum_{i=1}^n \int_i^{i+1} dx\,f(x)

\displaystyle \sum_{i=1}^n a_i \le \int_1^{n+1} dx\,f(x)

\displaystyle \sum_{i=1}^n a_i = \sum_{i=1}^n \int_{i-1}^i dx\,f(i) = \sum_{i=1}^n \int_{i-1}^i dx\,f(\lceil x \rceil) \ge \sum_{i=1}^n \int_{i-1}^i dx\,f(x)

\displaystyle \sum_{i=1}^n a_i \ge \int_0^n dx\,f(x).

The factorial is the product of a simple sequence, not a sum, but we can use logarithms to bridge this gap:

\displaystyle n! = \exp\left( \ln \prod_{i=1}^n i\right) = \exp\left( \sum_{i=1}^n \ln i\right).

So, applying the previously found bounds for sums, we get:

\displaystyle \int_0^n dx\,\ln x \le \sum_{i=1}^n \ln i \le \int_1^{n+1} dx\,\ln x

\displaystyle \int_0^n dx\,\ln x \le \ln n! \le \int_1^{n+1} dx\,\ln x.

Integrating by parts the logarithm function:

\displaystyle \int dx\,\ln x = \int dx\,1\cdot\ln x

\displaystyle \int dx\,\ln x = x \ln x - \int dx\,x\cdot x^{-1}

\displaystyle \int dx\,\ln x = x \ln x - \int dx

\displaystyle \int dx\,\ln x = x \ln x - x.

Putting all together:

\displaystyle n \ln n - n \le \ln n! \le (n + 1) \ln(n + 1) - (n + 1) + 1

\displaystyle n \ln n - n \le \ln n! \le (n + 1) \ln(n + 1) - n.

As the exponential function grows monotonically, we can apply it to all the members of the inequality:

\displaystyle \exp(n \ln n - n) \le n! \le \exp\left((n + 1) \ln(n + 1) - n\right).

Rearranging:

\displaystyle e^{-n}n^n \le n! \le e^{-n}(n + 1)^{n+1}.
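These bounds are easy to check numerically (a small sanity check, not a proof):

from math import e, factorial

# Check e**-n * n**n <= n! <= e**-n * (n + 1)**(n + 1) for small n.
for n in range(1, 15):
    assert e ** -n * n ** n <= factorial(n) <= e ** -n * (n + 1) ** (n + 1)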

There are tighter bounds for the factorial, such as Stirling’s series. But they are harder to prove 😀

Solving the Travelling Salesman Problem

In this post we will analyse two exact algorithms to solve the Travelling Salesman Problem: one based on an exhaustive iteration through all the possible tours and another one using dynamic programming to reduce the asymptotic run time. Let’s start with the exhaustive one, as it’s easier.

Exhaustive O(n!) algorithm

We can number the cities from 0 to n and assume a distance matrix D_{i,j} is given (the matrix doesn’t need to be symmetric, as it can reflect restrictions such as one-way roads). Then the code reduces to enumerating the tours (circular permutations of 0, 1, …, n) and selecting the one that has minimum length.

from itertools import permutations

def tsp_naive_solve(d):
    # Fix city 0 as the starting point and enumerate every tour as a
    # permutation of the remaining cities, keeping the shortest one.
    btl, best_tour = min((tour_len(t, d), t)
                         for t in permutations(range(1, len(d))))
    return best_tour
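The helper tour_len isn’t shown in this excerpt; a minimal implementation consistent with how it’s called might be:

def tour_len(t, d):
    # Hypothetical reconstruction: length of the closed tour
    # 0 -> t[0] -> ... -> t[-1] -> 0 under the distance matrix d.
    return (d[0][t[0]]
            + sum(d[a][b] for a, b in zip(t, t[1:]))
            + d[t[-1]][0])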

itertools.permutations() has been part of Python’s standard library since version 2.6, and its internal implementation is explained in this excellent “Word Aligned” blog post.

Dynamic programming O(n·2^n) algorithm

The first step (well, the first step that I do 😀 ) to make a dynamic programming algorithm for a problem is to get a recursive algorithm for the problem. In many cases, getting a recursive algorithm is just a matter of finding a formulation of the problem that matches useful subproblems. One example of a recursion-friendly presentation of the TSP is:

Get the path of minimum length that starts at city 0, passes through the set of cities 1 to n in any order and ends at city 0.

It’s fairly common that a more general version of a problem is easier to solve, and this is one of these cases. If we generalize the problem to:

Get the path of minimum length that starts at city 0, passes through a set of cities ts (not including c or 0) in any order and ends at city c.

we can make a straightforward recursive algorithm:

def rec_tsp_solve(c, ts):
    # d is the distance matrix, available in the enclosing scope
    # (see the complete version below).
    assert c not in ts
    if ts:
        return min((d[lc][c] + rec_tsp_solve(lc, ts - set([lc]))[0], lc)
                   for lc in ts)
    else:
        return (d[0][c], 0)

where c represents the target city, ts is the set of intermediate cities, and the next-to-last city is returned to allow reconstructing the path. Adding the reconstruction code:

def tsp_rec_solve(d):
    def rec_tsp_solve(c, ts):
        assert c not in ts
        if ts:
            return min((d[lc][c] + rec_tsp_solve(lc, ts - set([lc]))[0], lc)
                       for lc in ts)
        else:
            return (d[0][c], 0)

    best_tour = []
    c = 0
    cs = set(range(1, len(d)))
    while True:
        l, lc = rec_tsp_solve(c, cs)
        if lc == 0:
            break
        best_tour.append(lc)
        c = lc
        cs = cs - set([lc])

    best_tour = tuple(reversed(best_tour))

    return best_tour

Now that we have a recursive solution, we can add a memoization decorator to avoid recomputing shared subproblems, and we get a dynamic programming algorithm. The set was replaced by a Python frozenset, because the function arguments must be “hashable” for the memoization decorator to work.
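A minimal sketch of what this can look like (the memoize decorator below is my own assumption; the post’s actual decorator isn’t shown here):

def memoize(f):
    # Cache results by argument tuple; all arguments must be hashable.
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize
def rec_tsp_solve(c, ts):
    # As before, d is the distance matrix from the enclosing scope;
    # ts is now a frozenset so it can be used as a cache key.
    assert c not in ts
    if ts:
        return min((d[lc][c] + rec_tsp_solve(lc, ts - frozenset([lc]))[0], lc)
                   for lc in ts)
    else:
        return (d[0][c], 0)

The reconstruction loop stays the same, except that cs starts out as frozenset(range(1, len(d))).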

The dynamic programming algorithm’s performance can be greatly improved by writing it in C, using bit operations to handle the set of cities and building the table of partial results iteratively from the bottom up, as sketched below. But all these techniques can only give us a constant-factor improvement; the asymptotic complexity remains the same.
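For illustration, here is a sketch of the bottom-up, bitmask-based table construction (in Python rather than C, and returning only the optimal tour length to keep it short):

def tsp_dp_bitmask(d):
    # best[s][c]: length of the shortest path that starts at city 0,
    # visits exactly the cities in the bit set s (bit i stands for
    # city i + 1) and ends at city c.
    n = len(d)
    full = (1 << (n - 1)) - 1
    INF = float('inf')
    best = [[INF] * n for _ in range(full + 1)]
    for c in range(1, n):
        best[1 << (c - 1)][c] = d[0][c]
    for s in range(1, full + 1):
        for c in range(1, n):
            if not (s >> (c - 1)) & 1 or best[s][c] == INF:
                continue
            for nc in range(1, n):  # try extending the path to city nc
                if (s >> (nc - 1)) & 1:
                    continue
                ns = s | (1 << (nc - 1))
                cand = best[s][c] + d[c][nc]
                if cand < best[ns][nc]:
                    best[ns][nc] = cand
    # Close the tour by returning to city 0.
    return min(best[full][c] + d[c][0] for c in range(1, n))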

Performance comparisons

The performance tests consist of running the three algorithms on problem instances containing from 2 to 10 cities. We get the following times in seconds (the exact numbers are clearly system-dependent):

n	naive time	rec time  	dp time
2	7.2583e-06	1.6943e-05	2.0364e-05
3	1.8945e-05	4.7088e-05	5.0050e-05
4	7.5889e-05	1.7024e-04	1.3948e-04
5	4.0331e-04	7.9357e-04	3.9836e-04
6	2.5928e-03	4.6139e-03	1.1181e-03
7	1.9732e-02	3.1740e-02	3.0223e-03
8	1.6328e-01	2.4179e-01	8.0337e-03
9	1.5632e+00	2.1190e+00	2.1028e-02
10	1.6668e+01	2.0946e+01	5.2824e-02

To see more easily how these execution times grow, we can plot the data using matplotlib:

Running times as a function of the number of cities for each of the three algorithms.
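The original plotting code isn’t shown here; a minimal sketch that reproduces a plot like this from the table above could be:

import matplotlib.pyplot as plt

ns = range(2, 11)
naive = [7.2583e-06, 1.8945e-05, 7.5889e-05, 4.0331e-04, 2.5928e-03,
         1.9732e-02, 1.6328e-01, 1.5632e+00, 1.6668e+01]
rec = [1.6943e-05, 4.7088e-05, 1.7024e-04, 7.9357e-04, 4.6139e-03,
       3.1740e-02, 2.4179e-01, 2.1190e+00, 2.0946e+01]
dp = [2.0364e-05, 5.0050e-05, 1.3948e-04, 3.9836e-04, 1.1181e-03,
      3.0223e-03, 8.0337e-03, 2.1028e-02, 5.2824e-02]

# Log scale on the y axis: exponential growth shows up as a straight line.
plt.semilogy(ns, naive, 'o-', label='naive')
plt.semilogy(ns, rec, 's-', label='recursive')
plt.semilogy(ns, dp, '^-', label='dynamic programming')
plt.xlabel('number of cities')
plt.ylabel('run time (s)')
plt.legend(loc='upper left')
plt.show()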

It can be seen in the plot that the initially slowest algorithm, the one applying dynamic programming, ends up being more than one hundred times faster than the other two. As we are working with a semilog plot, the straight line of the DP algorithm corresponds to approximately exponential growth, while the other algorithms show super-exponential growth.

Visualizing the Mandelbrot set with JavaScript

Fractals and the Mandelbrot set

Even though mathematical structures that are now called fractals were first analysed in the 19th century, the study of self-similar structures got a significant boost when computers able to visualize them started to appear in the 1970s. In fact, the term fractal was coined in 1975 by the late Benoit Mandelbrot to describe structures whose parts are similar to the whole.

The Mandelbrot set is a particular fractal set defined as:

\displaystyle M = \{ c \in \mathbb{C}: \lim_{n\to\infty} P_c^n(0) \neq \infty \},

where P_c^n(z) denotes the polynomial resulting from iterating the parameterized quadratic polynomial P_c(z) n times:

\displaystyle P_c(z) = z^2 + c.

More about the Mandelbrot set

The limit of a complex sequence z_n is said to be infinite if for any positive number M there is an element z_N of the sequence such that its modulus and the moduli of all succeeding elements are greater than M. In formal notation:

\displaystyle \lim_{n\to\infty} z_n = \infty \iff \forall M > 0\,\exists N > 0\,\forall n \geq N: |z_n| > M.

All the points of the complex plane that have a modulus greater than 2 are outside the set. To show that, let’s take a point c_0 whose modulus is strictly greater than 2:

\displaystyle |c_0| > 2.

To avoid cluttering the inequalities, let’s make two definitions:

\displaystyle a_n = |P_{c_0}^n(0)| and

\displaystyle k = |c_0|.

Then the statement that all points with modulus greater than two are outside the Mandelbrot set is equivalent to

\displaystyle \lim_{n \to \infty} a_n = \infty.

It’s clear from the definition that a_n is non-negative. Let’s carefully analyse the relationship between each term and the previous one:

\displaystyle a_{n+2} = |P_{c_0}^{n+2}(0)| = \left|\left(P_{c_0}^{n+1}(0)\right)^2+c_0\right|

\displaystyle a_{n+2} \geq \left|\left|\left(P_{c_0}^{n+1}(0)\right)^2\right|-|c_0|\right|

\displaystyle a_{n+2} \geq \left|\left|P_{c_0}^{n+1}(0)\right|^2-k\right|

\displaystyle a_{n+2} \geq \left|a_{n+1}^2-k\right|.

We can now inductively explore the hypothesis a_{n+2} \geq (1 + \epsilon) a_{n+1}, with \epsilon = k - 2.

Checking the base case:

\displaystyle a_2 \geq |a_1^2 - k| = |k^2 - k| = |(k-1)k| = (k-1)k = (1+\epsilon)k = (1+\epsilon)a_1

Applying the strong inductive hypothesis to a_{n+1} we get:

\displaystyle a_{n+1} \geq (1 + \epsilon)a_n \geq \ldots \geq (1 + \epsilon)^n a_1 = (1 + \epsilon)^n k

Using this result in the previous inequality:

\displaystyle a_{n+2} \geq a_{n+1}^2-k \quad (a_{n+1}^2 > a_{n+1} > k)

\displaystyle a_{n+2} > a_{n+1}^2-a_{n+1} \quad (a_{n+1} > k)

\displaystyle a_{n+2} > a_{n+1}(a_{n+1} - 1)

\displaystyle a_{n+2} > a_{n+1}(k - 1) \quad (a_{n+1} > k)

\displaystyle a_{n+2} > (1 + \epsilon) a_{n+1} \quad (1 + \epsilon = k - 1)

This implies that a_n will diverge at least geometrically, with ratio 1+ \epsilon. Then any point c whose modulus is greater than 2 will be outside the set.

It can be seen by a slight extension of this argument that, if there is some n such that |P_c^n(0)| is greater than 2, c will be outside the Mandelbrot set too.

Plotting the Mandelbrot set

The simplest way to plot the Mandelbrot set is to iterate the quadratic polynomial P_c(z) over the points c that are associated with the image pixels in the usual way (via some choice of origin and scale factor). If the iterated point reaches a modulus greater than 2, we know that its modulus will diverge and that it is outside the set. There isn’t any simple test to know if a point is inside the set, but we can choose a fixed number of iterations, or stop the process when no new point outside the set is detected.

Any point known to be outside the set (reached a modulus greater than 2 when iterated) can be colored white, using black to color the points of the set:

A Mandelbrot set plotted using black for the points inside the set and white for those outside the set. (Click to see a live version.)
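A minimal escape-time sketch of this test (an illustrative Python port; the post’s actual code is JavaScript, and the grid parameters here are my own choices):

def mandelbrot_counts(width, height, max_iter=100,
                      x_min=-2.5, x_max=1.0, y_min=-1.25, y_max=1.25):
    # For each pixel, return the number of iterations of z -> z**2 + c
    # needed to exceed modulus 2, or max_iter if the point never escaped
    # (and is therefore presumed to be in the set).
    counts = []
    for j in range(height):
        row = []
        for i in range(width):
            c = complex(x_min + (x_max - x_min) * i / (width - 1.0),
                        y_min + (y_max - y_min) * j / (height - 1.0))
            z = 0j
            n = 0
            while n < max_iter and abs(z) <= 2:
                z = z * z + c
                n += 1
            row.append(n)
        counts.append(row)
    return counts

Pixels whose count reaches max_iter are painted black, the rest white.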

We can make prettier visualizations if we use some measure of the “difficulty of divergence” for each point. One natural measure for this quantity is the number of iterations required to reach a given modulus. Combining this number of iterations with a function translating the potentially unbounded number of iterations to a color in the range black – white, we get:

Mandelbrot set in gray scale. The gray level of the points outside the set is related to the number of iterations required to reach a given modulus. (Click to see a live version.)

This visualization is nicer, and shows more clearly the existence of filament-like structures (in fact, the set is connected, but I don’t know how to prove that… 😀 ). But we can do better by interpolating the iteration number in a continuous way, giving us a less discrete measure of the “difficulty of divergence”. If we use the algorithm described by Linas Vepstas,

\mu = n + 1 - \frac{1}{\log 2}\log \log \left|P_c^n(0)\right|,

where \mu is the “continuous” iteration count, we get this picture:

A representation of the Mandelbrot set using the "continuous iteration count" to colorize the exterior. (Click to see a live version.)
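Translated directly into code (a sketch; the function and variable names are mine):

from math import log

def continuous_iteration_count(z, n):
    # Smooth iteration count for a point that escaped at iteration n with
    # z = P_c^n(0); requires abs(z) > 2 so the nested log is defined.
    return n + 1 - log(log(abs(z))) / log(2)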

The source code of the visualization software can be seen in the SVN repository of this blog, and it can be used to check the self-similarity of the fractal (left click zooms in, right click zooms out):

A "copy" of the Mandelbrot set in a filament. Zoom out in the live version to see its location.