# Abusing macros for typechecking

One commonly used macro in C programming is ASIZE(), generally defined as something like this:

```c
#define ASIZE(a) (sizeof(a)/sizeof(a[0]))
```

and used to calculate the number of elements in an array.

The main problem with this macro, as written, is that it doesn’t distinguish between arrays and pointers. If passed a pointer, it will silently produce wrong results:

Code

```c
#include <stdio.h>

#define ASIZE(a) (sizeof (a) / sizeof((a)[0]))

int main(void)
{
    short a[3];
    short *b;
    int c[2];
    int *d;
    long long e[5][4];
    char *f[4];
    char (*g)[4];
    (void)a; (void)b; (void)c; (void)d; (void)e; (void)f; (void)g;
    printf("ASIZE() accepts pointers, producing invalid results.\n");
    printf("%zu\n", ASIZE( a ));
    printf("%zu\n", ASIZE( b ));
    printf("%zu\n", ASIZE( c ));
    printf("%zu\n", ASIZE( d ));
    printf("%zu\n", ASIZE( e ));
    printf("%zu\n", ASIZE( f ));
    printf("%zu\n", ASIZE( g ));
    return 0;
}
```


Output

```
ASIZE() accepts pointers, producing invalid results.
3
2
2
1
5
4
1
```

(The results for the pointer arguments depend on the platform's pointer size; these correspond to a 32-bit build.)


By adding a new macro that checks whether the parameter is an array, we can define a safer ASIZE():

```c
#define CHECK_ARRAY(a) ((void)(0&&((int (*)(__typeof__(a[0])(*)[ASIZE(a)]))NULL)(&(a))))
#define ASIZE_SAFE(a) (CHECK_ARRAY(a), ASIZE(a))
```


Checking this new version, we see that it produces correct results when passed arrays, but compilation now fails when it is applied to pointers:

Code

```c
#include <stdio.h>

#define ASIZE(a) (sizeof (a) / sizeof((a)[0]))

#define CHECK_ARRAY(a) ((void)(0&&((int (*)(__typeof__(a[0])(*)[ASIZE(a)]))NULL)(&(a))))

#define ASIZE_SAFE(a) (CHECK_ARRAY(a), ASIZE(a))

int main(void)
{
    short a[3];
    short *b;
    int c[2];
    int *d;
    long long e[5][4];
    char *f[4];
    char (*g)[4];
    (void)a; (void)b; (void)c; (void)d; (void)e; (void)f; (void)g;
    printf("ASIZE() accepts pointers, producing invalid results.\n");
    printf("%zu\n", ASIZE( a ));
    printf("%zu\n", ASIZE( b ));
    printf("%zu\n", ASIZE( c ));
    printf("%zu\n", ASIZE( d ));
    printf("%zu\n", ASIZE( e ));
    printf("%zu\n", ASIZE( f ));
    printf("%zu\n", ASIZE( g ));
    printf("ASIZE_SAFE() only accepts arrays (try uncommenting).\n");
    printf("%zu\n", ASIZE_SAFE( a ));
    //printf("%zu\n", ASIZE_SAFE( b ));
    printf("%zu\n", ASIZE_SAFE( c ));
    //printf("%zu\n", ASIZE_SAFE( d ));
    printf("%zu\n", ASIZE_SAFE( e ));
    //printf("%zu\n", ASIZE_SAFE( f ));
    //printf("%zu\n", ASIZE_SAFE( g ));
    return 0;
}
```


Output

```
ASIZE() accepts pointers, producing invalid results.
3
2
2
1
5
4
1
ASIZE_SAFE() only accepts arrays (try uncommenting).
3
2
5
```


It works in a relatively straightforward way, though I have put the details in a gist to avoid spoiling them.

# Universality with only two NOT gates

In a previous post we asked how many NOT gates are needed to compute an arbitrary Boolean function. In this post we will see that two NOT gates are enough.

### Building 3 NOT gates starting from 2

If we call the inputs $X$, $Y$ and $Z$, we can make a function detecting when no more than one input is active using a single NOT gate:

$\displaystyle f(X, Y, Z) = \overline{XY + YZ + XZ}$.

Detects when no more than one input is active.

By selecting only the cases where at least one input is present, adding a term to detect when all the inputs are active and using an additional NOT gate, we can detect when exactly zero or two inputs are active:

$\displaystyle g(X, Y, Z) = \overline{f(X, Y, Z)(X + Y + Z) + XYZ}$

$\displaystyle = \overline{\overline{XY + YZ + XZ}(X + Y + Z) + XYZ}$.

Detects when zero or two inputs are active.

Now we know that if $X$ is not present, we either have:

• 0 inputs present: we can check that by simultaneously ensuring that we don’t have more than one input present and that we have either zero or two inputs present, $f(X, Y, Z)\cdot g(X, Y, Z)$.
• 1 input present: we should have no more than one input present and $Y$ or $Z$ should be present, $f(X, Y, Z)\cdot(Y + Z)$.
• 2 inputs present: we can check that by simultaneously ensuring that either zero or two inputs are present and that $Y$ and $Z$ are present, $g(X, Y, Z)\cdot YZ$.

Putting it all together and adding the symmetrical cases:

$\displaystyle \overline{X} = f(X, Y, Z) \cdot (Y + Z) + (f(X, Y, Z) + YZ)\cdot g(X, Y, Z)$

$\displaystyle \overline{Y} = f(X, Y, Z) \cdot (X + Z) + (f(X, Y, Z) + XZ)\cdot g(X, Y, Z)$

$\displaystyle \overline{Z} = f(X, Y, Z) \cdot (X + Y) + (f(X, Y, Z) + XY)\cdot g(X, Y, Z)$.

Computing NOT X (the other cases are symmetrical).

### Checking the solution

To get independent confirmation, let’s check the truth tables with a simple Python script:

```python
from itertools import product

for x, y, z in product((False, True), repeat=3):
    f_xyz = not (x and y or x and z or y and z)
    g_xyz = not (f_xyz and (x or y or z) or x and y and z)
    not_x = f_xyz and (y or z) or (f_xyz or y and z) and g_xyz
    not_y = f_xyz and (x or z) or (f_xyz or x and z) and g_xyz
    not_z = f_xyz and (x or y) or (f_xyz or x and y) and g_xyz
    assert ((not x) == not_x) and ((not y) == not_y) and ((not z) == not_z)
```


### Conclusion

As this technique expands two NOT gates into three and can be applied repeatedly, we can compute an arbitrary Boolean function with a circuit containing only two NOT gates.
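To see the construction in action, the following script (a sketch added here, not part of the original derivation) uses the two real negations hidden inside $f$ and $g$ to recover all three negated literals, and then computes NOR, a non-monotone function, with no further NOT gates:

```python
from itertools import product

def two_not_negations(x, y, z):
    """Return (not x, not y, not z) using only the two real negations
    hidden inside f and g; everything else is AND/OR."""
    f = not (x and y or x and z or y and z)           # NOT gate #1
    g = not (f and (x or y or z) or x and y and z)    # NOT gate #2
    nx = f and (y or z) or (f or y and z) and g
    ny = f and (x or z) or (f or x and z) and g
    nz = f and (x or y) or (f or x and y) and g
    return nx, ny, nz

# With all six literals available, any function can be written in DNF
# using only ANDs and ORs.  For example NOR, which no monotone circuit
# can compute:
for x, y, z in product((False, True), repeat=3):
    nx, ny, nz = two_not_negations(x, y, z)
    assert (nx and ny and nz) == (not (x or y or z))
```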

In a follow-up post we will see how I arrived at the solution by using brute force.

# How many NOTs are needed?

It’s relatively easy to see that we cannot compute an arbitrary Boolean function using only AND and OR gates. For example, even the NOT function cannot be computed using only those gates (why?).

Can we build a circuit to compute an arbitrary Boolean function using a constant number of NOT gates?
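As a hint for the "(why?)" above: AND and OR are monotone (flipping an input from 0 to 1 can never flip the output from 1 to 0), and compositions of monotone functions stay monotone, while NOT does not. A small script (added here as an illustration, not part of the original post) checks the base cases:

```python
from itertools import product

def is_monotone(f, n):
    """Check that flipping any input from 0 to 1 never drops the output."""
    for xs in product((0, 1), repeat=n):
        for i in range(n):
            if xs[i] == 0:
                ys = xs[:i] + (1,) + xs[i + 1:]
                if f(*xs) > f(*ys):
                    return False
    return True

# AND and OR are monotone, so any circuit built from them is monotone;
# NOT is not, so no AND/OR circuit can compute it.
assert is_monotone(lambda a, b: a and b, 2)
assert is_monotone(lambda a, b: a or b, 2)
assert not is_monotone(lambda a: 1 - a, 1)
```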

# Solution to the “42 code golf” problem

This was my best result:

```c
n=1e6,m,c,d;main(){while(c+=d==42,d=0,m=--n)while(d+=m%10,m/=10);printf("%d\n",c);}
```


It would have been nice to find a solution under 80 bytes but, after one hour of trying, that was the best I could manage…
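For reference, the count the program should print can be cross-checked in Python (this check is mine, not part of the original golf) against a stars-and-bars argument:

```python
from math import comb

# Brute-force count of integers below one million whose decimal digits
# sum to 42 (the quantity the golfed C program prints).
count = sum(1 for n in range(1000000)
            if sum(int(c) for c in str(n)) == 42)

# Stars and bars cross-check: solutions of d1+...+d6 = 42 with
# 0 <= di <= 9, i.e. complements e = 9 - d summing to 12.
assert count == comb(17, 5) - 6 * comb(7, 5)
print(count)  # → 6062
```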

# Hilbert matrices are positive definite

This can be seen by using the integral representation of the matrix elements:

$\displaystyle H_{ij} = \int_0^1 du\,u^{i+j-2}$

Then we can express the positive definite condition as

$\displaystyle \sum_{i,j} x_i H_{ij} x_j > 0$

for $\mathbf{x} \ne 0$.

Substituting and rearranging:

$\displaystyle \sum_{i,j} x_i \int_0^1 du\,u^{i+j-2} x_j = \int_0^1 du \sum_{i,j} x_i u^{i+j-2} x_j$

$\displaystyle = \int_0^1 du \sum_{i,j} x_i u^{i-1} x_j u^{j-1}$

$\displaystyle = \int_0^1 du \sum_i x_i u^{i-1} \sum_j x_j u^{j-1}$

$\displaystyle = \int_0^1 du \left( \sum_i x_i u^{i-1} \right) \left( \sum_j x_j u^{j-1} \right)$

$\displaystyle = \int_0^1 du \left( \sum_i x_i u^{i-1} \right)^2$

The term inside the parentheses is just a (polynomial) function of $u$:

$\displaystyle = \int_0^1 du\,p(u)^2$

As the integrand is nonnegative, the integral can only be zero if $p(u)$ is identically zero over the interval, and a polynomial that vanishes identically has all-zero coefficients:

$\displaystyle \sum_i x_i u^{i-1} = 0 \implies \forall i: x_i = 0$,

contradicting $\mathbf{x} \ne 0$.

QED
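As an independent numeric check (not part of the argument above), Sylvester's criterion with exact rational arithmetic confirms that the leading principal minors of a Hilbert matrix are positive:

```python
from fractions import Fraction

def hilbert(n):
    """Hilbert matrix H_ij = 1/(i + j - 1) with 1-based indices."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(m):
    """Exact determinant via fraction-based Gaussian elimination
    (no pivoting needed here, since the leading minors are nonzero)."""
    m = [row[:] for row in m]
    d = Fraction(1)
    for k in range(len(m)):
        d *= m[k][k]
        for i in range(k + 1, len(m)):
            r = m[i][k] / m[k][k]
            for j in range(k, len(m)):
                m[i][j] -= r * m[k][j]
    return d

# Sylvester's criterion: a symmetric matrix is positive definite iff
# all of its leading principal minors are positive.
n = 8
h = hilbert(n)
minors = [det([row[:k] for row in h[:k]]) for k in range(1, n + 1)]
print(all(m > 0 for m in minors))  # → True
```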

# Solving the viral Singaporean math problem

The following problem has become very popular in social media. In this blog we have solved similar problems before, but this one can easily be solved by hand; we only need to be careful not to confuse our own knowledge state with those of Albert and Bernard:

Albert and Bernard just became friends with Cheryl, and they want to know when her birthday is. Cheryl gives them a list of 10 possible dates.

May 15 May 16 May 19

June 17 June 18

July 14 July 16

August 14 August 15 August 17

Cheryl then tells Albert and Bernard separately the month and the day of her birthday, respectively.

We will describe a list of the possible knowledge states of Albert and Bernard after being given that information:

##### Albert
1. May 15 or May 16 or May 19
2. June 17 or June 18
3. July 14 or July 16
4. August 14 or August 15 or August 17
##### Bernard
1. July 14 or August 14
2. May 15 or August 15
3. May 16 or July 16
4. June 17 or August 17
5. June 18
6. May 19

Albert: I don’t know when Cheryl’s birthday is, but I know that Bernard does not know, too.

We already knew that Albert couldn’t know the day just from being told the month, but he gives us additional information by telling us that he knows Bernard doesn’t know either. Bernard would know the date if the day were 18 or 19, so Albert knows those days cannot be the right ones. That excludes options 1 and 2 from our model of Albert’s knowledge:

##### Albert
1. July 14 or July 16
2. August 14 or August 15 or August 17

Bernard can make the same deductions we have made and eliminate the options that are incompatible with his state of knowledge (all the options with months other than July and August).

##### Bernard
1. July 14 or August 14
2. August 15
3. July 16
4. August 17

Bernard: At first, I didn’t know when Cheryl’s birthday is, but I know now.

Updating our knowledge of Bernard’s knowledge:

##### Bernard
1. August 15
2. July 16
3. August 17

As Albert also knows what we know about Bernard’s knowledge…

##### Albert
1. July 16
2. August 15 or August 17

Albert: Then I also know when Cheryl’s birthday is.

Now we know the right date:

1. July 16
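The deductions above can also be automated; here is a small Python sketch (mine, not from the original post) that applies the three statements as successive filters over the candidate dates:

```python
from collections import Counter

dates = [(5, 15), (5, 16), (5, 19), (6, 17), (6, 18),
         (7, 14), (7, 16), (8, 14), (8, 15), (8, 17)]

# Statement 1: Albert knows Bernard doesn't know, so Albert's month
# cannot contain a day that is unique in the full list.
day_counts = Counter(d for _, d in dates)
bad_months = {m for m, d in dates if day_counts[d] == 1}
dates = [(m, d) for m, d in dates if m not in bad_months]

# Statement 2: Bernard now knows, so his day must be unique among the
# remaining dates.
day_counts = Counter(d for _, d in dates)
dates = [(m, d) for m, d in dates if day_counts[d] == 1]

# Statement 3: Albert now knows, so his month must be unique among the
# remaining dates.
month_counts = Counter(m for m, _ in dates)
dates = [(m, d) for m, d in dates if month_counts[m] == 1]

print(dates)  # → [(7, 16)]
```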

# 42 code golf

A nice and easy interview problem (link not posted, to avoid giving away good answers) is the following:

Print the number of integers below one million whose decimal digits sum to 42.

It can be solved with some simple Python code like the following:

```python
print(sum(1 if sum(int(c) for c in '%d' % n) == 42 else 0
          for n in range(1000000)))
```


A more interesting problem is to try to write the smallest C program that solves the problem, where C program is defined as something that can be compiled & executed by Ideone in “C” mode. I know it can be done in 83 bytes, but can it be done using less?

# Inverse kinematics and the Jacobian transpose

### Generalities

This is a relatively technical post. Its purpose is mainly to teach myself why the Jacobian transpose is so useful when doing inverse kinematics.

We are going to solve the following problem:

We have a mechanism with effectors applying forces whose components are given by $f_i$, with $i$ going from 1 to $N$. The required power is provided by a series of torques, whose components are called $\tau_j$, where $j$ goes from 1 to $M$. Get the required values of $\tau_j$ as a function of $f_i$.

### Jacobian

Let’s call $\theta_j$ the angular coordinate associated with the torque components $\tau_j$ and $x_i$ the normal (“linear”) coordinate of the effector associated with the force component $f_i$. In most useful cases, the positions of the effectors can be expressed as a function of the angular coordinates, $x_i(\theta_j)$. The Jacobian will be the linearization of this relationship around some point $\theta_j^0$,

$\displaystyle J_{ij}(\theta_j^0) = \left. \frac{\partial x_i}{\partial \theta_j} \right|_{\theta_j=\theta_j^0}$.

### Virtual work

If we can ignore inertia forces (either because we are dealing with a purely static problem or because inertia is negligible), we can get

$\displaystyle \sum_{i=1}^N f_i \delta x_i = \sum_{j=1}^M \tau_j \delta \theta_j$,

where $\delta x_i$ is the infinitesimal linear displacement associated with the force component $f_i$ and $\delta \theta_j$ is the infinitesimal angular displacement associated with the torque component $\tau_j$.

### Putting things together

As the previous expression uses infinitesimal movements, we can use the Jacobian to relate the linear displacements to the angular ones:

$\displaystyle \delta x_i = \sum_{j=1}^M J_{ij} \delta \theta_j$.

If we replace that result in the virtual work equation,

$\displaystyle \sum_{i=1}^N f_i \left(\sum_{j=1}^M J_{ij} \delta \theta_j\right) = \sum_{j=1}^M \tau_j \delta \theta_j$,

and we do some rearrangements, we get an expression with infinitesimal angular displacements on both sides:

$\displaystyle \sum_{j=1}^M \left(\sum_{i=1}^N f_i J_{ij}\right) \delta \theta_j = \sum_{j=1}^M \tau_j \delta \theta_j$.

As the infinitesimal angular displacements $\delta \theta_j$ are arbitrary, their factors should match:

$\displaystyle \sum_{i=1}^N f_i J_{ij} = \tau_j$.

By representing this equation in matrix form,

$\displaystyle \boldsymbol{\tau} = \mathbf{J}^T \mathbf{f}$,

we see how the Jacobian transpose arises naturally.
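To make the result concrete, here is a small Python sketch for a hypothetical two-link planar arm (the link lengths, angles and force are made-up values): the torques are obtained as $\boldsymbol{\tau} = \mathbf{J}^T \mathbf{f}$, with the Jacobian approximated by finite differences:

```python
import math

# Hypothetical two-link planar arm; link lengths are made-up values.
L1, L2 = 1.0, 0.7

def position(t1, t2):
    """End-effector coordinates x_i as a function of the joint angles."""
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

def jacobian(t1, t2, h=1e-6):
    """Finite-difference approximation of J_ij = dx_i / dtheta_j."""
    base = position(t1, t2)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j, (d1, d2) in enumerate([(h, 0.0), (0.0, h)]):
        moved = position(t1 + d1, t2 + d2)
        for i in range(2):
            J[i][j] = (moved[i] - base[i]) / h
    return J

def torques(t1, t2, f):
    """tau_j = sum_i J_ij f_i, i.e. tau = J^T f."""
    J = jacobian(t1, t2)
    return [sum(J[i][j] * f[i] for i in range(2)) for j in range(2)]

# Joint torques holding a downward force of 9.8 N at the effector:
print(torques(0.3, 0.5, [0.0, -9.8]))
```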