Carmack’s unequal lights problem

The problem

John Carmack tweeted:

Thinking about the shape of the surface of equal contribution between two inverse square lights of different intensities.

Without loss of generality (by scaling, translating and rotating) we can use lights with these parameters:

Light 1 position: \mathbf{r}_1 = \mathbf{0}

Light 2 position: \mathbf{r}_2 = \mathbf{\hat{x}}

Light 1 strength: s_1 = 1

Light 2 strength: s_2 = \lambda < 1

1D variant

We get the following equation using the inverse square law:

\displaystyle \frac{1}{x^2} = \frac{\lambda}{(x - 1)^2}

Cross-multiplying and rearranging:

\displaystyle (x - 1)^2 = \lambda x^2

\displaystyle x^2 - 2x + 1 - \lambda x^2 = 0

\displaystyle (1-\lambda) x^2 - 2x + 1 = 0

Solving this quadratic equation:

\displaystyle x = \frac{2\pm\sqrt{4-4(1-\lambda)}}{2(1-\lambda)}

\displaystyle x = \frac{2\pm\sqrt{4-4+4\lambda}}{2(1-\lambda)}

\displaystyle x = \frac{1\pm\sqrt{\lambda}}{1-\lambda}
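
As a quick sanity check (a throwaway sketch, not part of the original post), we can plug both roots back into the inverse-square equality for a sample value of λ:

lam = 0.25
for sign in (+1, -1):
    x = (1 + sign * lam ** 0.5) / (1 - lam)
    # both lights must contribute the same intensity at x
    assert abs(1 / x ** 2 - lam / (x - 1) ** 2) < 1e-12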

Full 3D variant

Using the inverse square law we get an equation for the surface of equal intensity:

\displaystyle \frac{1}{\mathbf{r}^2} = \frac{\lambda}{(\mathbf{r} - \mathbf{\hat{x}})^2}

Expanding:

\displaystyle (\mathbf{r} - \mathbf{\hat{x}})^2 = \lambda\mathbf{r}^2

\displaystyle \mathbf{r}^2 - 2\mathbf{r}\cdot\mathbf{\hat{x}} + \mathbf{\hat{x}}^2 - \lambda\mathbf{r}^2 = 0

\displaystyle (1-\lambda)\mathbf{r}^2 - 2\mathbf{r}\cdot\mathbf{\hat{x}} + \mathbf{\hat{x}}^2 = 0

Writing \mathbf{r} = (x, y, z), so that \mathbf{r}\cdot\mathbf{\hat{x}} = x and \mathbf{\hat{x}}^2 = 1:

\displaystyle (1-\lambda)(x^2 + y^2 + z^2) - 2x + 1 = 0.

Using \mu = \sqrt{1 - \lambda} and completing the square:

\displaystyle \mu^2(x^2 + y^2 + z^2) - 2x + 1 = 0

\displaystyle \mu^2(y^2 + z^2) + (\mu x)^2 - 2x + 1 = 0

\displaystyle \mu^2(y^2 + z^2) + \left[(\mu x)^2 - 2(\mu x)\mu^{-1} + (\mu^{-1})^2\right] - (\mu^{-1})^2 + 1 = 0

\displaystyle (\mu y)^2 + (\mu z)^2 + (\mu x - \mu^{-1})^2 = (\mu^{-1})^2 - 1

Multiplying all terms by \mu^{-2} and reordering:

\displaystyle (x - \mu^{-2})^2 + y^2 + z^2 = (\mu^{-2})^2 - \mu^{-2}

\displaystyle (x - \mu^{-2})^2 + y^2 + z^2 = \mu^{-2}\left[\mu^{-2} - 1\right]

Substituting back \mu^2 = 1 - \lambda:

\displaystyle \left(x - \frac{1}{1-\lambda}\right)^2 + y^2 + z^2 = \frac{\frac{1}{1-\lambda} - 1}{1-\lambda}

\displaystyle \left(x - \frac{1}{1-\lambda}\right)^2 + y^2 + z^2 = \frac{\frac{1-(1-\lambda)}{1-\lambda}}{1-\lambda}

\displaystyle \left(x - \frac{1}{1-\lambda}\right)^2 + y^2 + z^2 = \frac{\frac{\lambda}{1-\lambda}}{1-\lambda}

\displaystyle \left(x - \frac{1}{1-\lambda}\right)^2 + y^2 + z^2 = \frac{\lambda}{(1-\lambda)^2}

This is a sphere with

\displaystyle \text{center} = \left[\frac{1}{1-\lambda},0,0\right]

\displaystyle \text{radius} = \frac{\sqrt{\lambda}}{1-\lambda},

matching the results found for the 1D case: the sphere intersects the x-axis at \frac{1}{1-\lambda} \pm \frac{\sqrt{\lambda}}{1-\lambda} = \frac{1\pm\sqrt{\lambda}}{1-\lambda}.
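
As with the 1D case, we can check the result numerically. The following sketch (not part of the original derivation) samples random points on the derived sphere and verifies that both lights contribute equally there:

import math, random
lam = 0.25                           # any value in (0, 1) works
cx = 1 / (1 - lam)                   # sphere center, on the x axis
r = math.sqrt(lam) / (1 - lam)       # sphere radius
for _ in range(1000):
    # random direction: a normalized Gaussian vector
    vx, vy, vz = (random.gauss(0, 1) for _ in range(3))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    x, y, z = cx + r * vx / norm, r * vy / norm, r * vz / norm
    d1_sq = x * x + y * y + z * z            # squared distance to light 1
    d2_sq = (x - 1) ** 2 + y * y + z * z     # squared distance to light 2
    assert abs(1 / d1_sq - lam / d2_sq) < 1e-9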


Two coding problems – Solutions Part II

This post finishes explaining the solutions that were given in the comments of the “two coding problems” post. As the first problem was already discussed, this post will only be about the second one.

Basic solution

The basic solution is similar to merging sorted arrays, though discarding the elements that aren’t present in both arrays:

def basic_intersect(a, b):
    m = len(a)
    n = len(b)
    i = j = 0
    ret = []
    while i < m and j < n:
        if a[i] == b[j]:
            ret.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
    return ret

As we walk over all the elements in both arrays, the running time is obviously O(m+n), where m and n are the lengths of the two arrays. In certain cases we cannot do better: for example, if the two arrays are "almost equal", we need Θ(m+n) time just to write the output. But can we do better in other conditions?
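
For example, with two small sorted arrays:

>>> basic_intersect([1, 3, 5, 7, 9], [3, 4, 5, 9])
[3, 5, 9]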

Guille’s solution

This solution consists of running through all the elements of the smaller array, doing a binary search for each element in the bigger array:

def bin_search_intersect(a, b):
    from bisect import bisect_left
    def bin_search(e, a):
        # adapted from the Python docs: leftmost index of e in a, or None
        i = bisect_left(a, e)
        if i != len(a) and a[i] == e:
            return i
        return None
    if len(a) > len(b):
        a, b = b, a
    return [e for e in a if bin_search(e, b) is not None]

It’s not better than the basic solution in all cases, but its running time is O(m log n), asymptotically better than O(m+n) when n >> m.

A better solution?

Demian proposed switching methods based on the relative size of the arrays, but doing this can be quite tricky, as we don't know the hidden constants in the running times of the algorithms. My idea to solve this problem (which turned out not to be an original one 😀 ) was to avoid doing a binary search over the full range of the bigger array by first searching for an upper bound with exponentially growing steps (a technique also known as exponential or galloping search):

def look_fwd_intersect(a, b):
    from bisect import bisect_left
    def ubound_search(a, m, i, e):
        # Gallop forward from i in exponentially growing steps until an
        # element greater than e is found. Returns an exclusive upper
        # bound for e in a[i:m], or None if e exceeds every remaining
        # element.
        k = 1
        while i < m and a[i] <= e:
            i += k
            k *= 2
        if i >= m:
            return m if a[m-1] >= e else None
        else:
            return i
    if len(a) > len(b):
        a, b = b, a
    m = len(a)
    n = len(b)
    i = j1 = 0
    ret = []
    while i < m and j1 < n:
        # bracket a[i] in b[j1:] before binary searching
        j2 = ubound_search(b, n, j1, a[i])
        if j2 is None:
            break  # a[i] (and everything after it) exceeds max(b)
        k = bisect_left(b, a[i], j1, j2)
        if k < n and b[k] == a[i]:
            ret.append(a[i])
            j1 = k + 1
        else:
            j1 = k
        i += 1
    return ret

The running time of this solution is more difficult to calculate than those of the previous two. Let's start by calling d_i the distance we "jump" in the second array for the i-th element of the first array. Then we have

\displaystyle \sum_{i=0}^{m-1}d_i \le n,

where m is the length of the first array and n the length of the second one.

The exponential upper-bound search and the binary search both have an O(log d_i) cost, so the total cost will be given by

\displaystyle \sum_{i=0}^{m-1}\mathcal{O}(\log d_i).

To get a tight bound for this sum, we can use the following inequality (using the AM–GM inequality in the second step):

\displaystyle \sum_{i=0}^{m-1}\log d_i = m \log\left(\prod_{i=0}^{m-1}d_i\right)^\frac{1}{m} \le m\log\left(\frac{1}{m}\sum_{i=0}^{m-1}d_i\right) \le m\log\frac{n}{m}.
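
If the algebra looks suspicious, a quick empirical check (a sketch, not from the original argument) over random positive jumps that sum to n agrees with the bound:

import math, random
m, n = 50, 10000
for _ in range(100):
    d = [random.uniform(0.1, 10.0) for _ in range(m)]
    scale = n / sum(d)                  # rescale so that sum(d) == n
    d = [x * scale for x in d]
    assert sum(math.log(x) for x in d) <= m * math.log(n / m) + 1e-9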

Applying this inequality, and taking into account the fixed cost paid for each element of the first array, we get

\displaystyle \sum_{i=0}^{m-1}\mathcal{O}\left(1 + \log d_i\right) = \mathcal{O}\left(m + \sum_{i=0}^{m-1}\log d_i\right) \le \mathcal{O}\left(m + m\log \frac{n}{m}\right),

giving us a final running time of \mathcal{O}\left(m + m\log \frac{n}{m}\right).

When m = Θ(n) (taking m = n for simplicity), we get the same bound as the basic solution:

\displaystyle \mathcal{O}\left(m + m\log 1\right) = \mathcal{O}\left(m\right) = \mathcal{O}\left(m + n\right)

And at the other extreme, when m = O(1), we get:

\displaystyle \mathcal{O}\left(1 + 1\log \frac{n}{1}\right) = \mathcal{O}\left(\log n\right).
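
As a closing sanity check (a quick sketch, not part of the original analysis), the three implementations can be cross-checked against each other on random sorted inputs:

import random
for _ in range(1000):
    a = sorted(random.sample(range(1000), random.randint(0, 50)))
    b = sorted(random.sample(range(1000), random.randint(0, 500)))
    expected = basic_intersect(a, b)
    assert bin_search_intersect(a, b) == expected
    assert look_fwd_intersect(a, b) == expected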