# Efficiently finding two vectors both orthogonal/normal to a given vector

This is a code snippet I wrote earlier for some work with light field mesh parameterizations. I had a normal vector and wanted to find two orthogonal vectors spanning the plane that the vector is normal to, in order to project another vector into that plane. The problem is underspecified: given one vector, there are many pairs of vectors that are orthogonal to it and to each other. This extra degree of freedom can be used to construct a numerically stable procedure that also requires fewer operations than computing a cross product.

In the C# snippet below, `n` is the input vector and `b1` and `b2` are the output vectors.

```csharp
if (Math.Abs(n.x) >= Math.Abs(n.y) &&
    Math.Abs(n.x) >= Math.Abs(n.z)) {
    b1.x = n.y / n.x; b1.y = -1.0; b1.z = 0.0;
    double d = n.x * n.x + n.y * n.y;
    b2.x = n.x * n.z / d; b2.y = n.y * n.z / d; b2.z = -1.0;
}
else if (Math.Abs(n.y) >= Math.Abs(n.x) &&
         Math.Abs(n.y) >= Math.Abs(n.z)) {
    b1.x = -1.0; b1.y = n.x / n.y; b1.z = 0.0;
    double d = n.x * n.x + n.y * n.y;
    b2.x = n.x * n.z / d; b2.y = n.y * n.z / d; b2.z = -1.0;
}
else { // Math.Abs(n.z) >= Math.Abs(n.x) &&
       // Math.Abs(n.z) >= Math.Abs(n.y)
    b1.x = -1.0; b1.y = 0.0; b1.z = n.x / n.z;
    double d = n.x * n.x + n.z * n.z;
    b2.x = n.x * n.y / d; b2.y = -1.0; b2.z = n.y * n.z / d;
}
// Optionally normalize b1 and b2 to unit length here
```
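For convenience, here is the same snippet wrapped into a self-contained program. The `Vec3` struct and the class/method names are my own scaffolding, not part of the original snippet:

```csharp
using System;

struct Vec3
{
    public double x, y, z;
    public Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

static class OrthoBasis
{
    public static double Dot(Vec3 a, Vec3 b) => a.x * b.x + a.y * b.y + a.z * b.z;

    // Fills b1 and b2 with two vectors spanning the plane normal to n,
    // branching on the largest-magnitude component of n (n must be nonzero).
    public static void Orthogonals(Vec3 n, out Vec3 b1, out Vec3 b2)
    {
        b1 = default; b2 = default;
        if (Math.Abs(n.x) >= Math.Abs(n.y) && Math.Abs(n.x) >= Math.Abs(n.z))
        {
            b1.x = n.y / n.x; b1.y = -1.0; b1.z = 0.0;
            double d = n.x * n.x + n.y * n.y;
            b2.x = n.x * n.z / d; b2.y = n.y * n.z / d; b2.z = -1.0;
        }
        else if (Math.Abs(n.y) >= Math.Abs(n.x) && Math.Abs(n.y) >= Math.Abs(n.z))
        {
            b1.x = -1.0; b1.y = n.x / n.y; b1.z = 0.0;
            double d = n.x * n.x + n.y * n.y;
            b2.x = n.x * n.z / d; b2.y = n.y * n.z / d; b2.z = -1.0;
        }
        else
        {
            b1.x = -1.0; b1.y = 0.0; b1.z = n.x / n.z;
            double d = n.x * n.x + n.z * n.z;
            b2.x = n.x * n.y / d; b2.y = -1.0; b2.z = n.y * n.z / d;
        }
    }

    static void Main()
    {
        var n = new Vec3(0.3, -2.0, 0.5);
        Orthogonals(n, out Vec3 b1, out Vec3 b2);
        // All three pairwise dot products should be zero (up to rounding).
        Console.WriteLine($"{Dot(n, b1)} {Dot(n, b2)} {Dot(b1, b2)}");
    }
}
```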

## Derivation

Step 1: Assume for the moment that we are in the first branch, where |n.x| is the largest component; since we assume n is not the zero vector, n.x is therefore nonzero. For the first vector, b1, we start with this equation (two vectors are orthogonal exactly when their dot product is zero):

n.x * b1.x + n.y * b1.y + n.z * b1.z = 0

We substitute values for b1.y and b1.z that simplify the solution for b1.x as much as possible (we choose to fix the y and z components because solving for b1.x then divides by n.x, which is known to be nonzero):

n.x * b1.x + n.y * (-1) + n.z * (0) = 0
n.x * b1.x - n.y = 0
b1.x = n.y / n.x
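As a quick sanity check of Step 1 (the example numbers here are my own, chosen so that |n.x| is the largest component):

```csharp
using System;

class Step1Check
{
    static void Main()
    {
        // Hypothetical example vector with |n.x| largest: n = (2, 1, 0.5).
        double nx = 2.0, ny = 1.0, nz = 0.5;

        // Step 1 formula: b1 = (n.y / n.x, -1, 0).
        double b1x = ny / nx, b1y = -1.0, b1z = 0.0;

        // The dot product n . b1 collapses to n.y - n.y = 0.
        Console.WriteLine(nx * b1x + ny * b1y + nz * b1z); // prints 0
    }
}
```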

Step 2: We find a third vector orthogonal to both vectors by setting up a system of equations from the two dot products that must be zero. With three unknowns we need three equations, so we also fix b2.z = -1, which produces a simple valid solution:

n.x * b2.x + n.y * b2.y + n.z * b2.z = 0,   (n.y/n.x) * b2.x + (-1) * b2.y = 0
n.x * b2.x + n.y * b2.y - n.z = 0,          (n.y/n.x) * b2.x - b2.y = 0
n.x * b2.x + n.y * b2.y - n.z = 0,          n.y * b2.x - n.x * b2.y = 0

I solved this system in Mathematica with this command:

FullSimplify[Solve[{nx*x + ny*y + nz*z == 0, (ny/nx)*x + (-1)*y == 0, z == -1}, {x, y, z}]]

with the solution:

b2.x = (n.x * n.z)/(n.x * n.x + n.y * n.y)
b2.y = (n.y * n.z)/(n.x * n.x + n.y * n.y)
b2.z = -1
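As a numerical cross-check of this solution (a quick randomized test of my own, not part of the original derivation), substituting the closed-form b2 back into both dot products gives zero to machine precision whenever |n.x| is the largest component:

```csharp
using System;

class Step2Check
{
    static void Main()
    {
        var rng = new Random(1);
        for (int i = 0; i < 1000; i++)
        {
            // Random vector constructed so that |n.x| is the largest component.
            double ny = rng.NextDouble() * 2 - 1;        // in (-1, 1)
            double nz = rng.NextDouble() * 2 - 1;        // in (-1, 1)
            double nx = 1.0 + rng.NextDouble();          // in [1, 2)

            double b1x = ny / nx, b1y = -1.0, b1z = 0.0; // Step 1
            double d = nx * nx + ny * ny;                // Step 2 solution
            double b2x = nx * nz / d, b2y = ny * nz / d, b2z = -1.0;

            double dotN  = nx * b2x + ny * b2y + nz * b2z;    // n  . b2
            double dotB1 = b1x * b2x + b1y * b2y + b1z * b2z; // b1 . b2
            if (Math.Abs(dotN) > 1e-12 || Math.Abs(dotB1) > 1e-12)
                throw new Exception("solution check failed");
        }
        Console.WriteLine("ok"); // prints "ok"
    }
}
```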

Since n.x is nonzero, the common denominator above is also nonzero, and it only needs to be computed once. Note that in Step 1 we did not set both b1.y and b1.z to zero; if we had, here in Step 2 the result would have involved division by n.y, which may be zero.

The same steps apply to find the solution for the cases where n.y and n.z are nonzero, respectively. In each branch only 8 floating-point operations (flops) are required. Even computing a cross-product in 3 dimensions typically requires 9 flops, and this approach finds both orthogonal vectors at once.

Because the formulas involve no subtraction and only divide by the largest component of the vector, the approach is numerically stable for a large range of inputs. However, I haven’t formally analyzed numerical precision, and the addition in the denominator above may result in some loss of precision if the two components have very different sizes.
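To get a rough empirical feel for this (an ad-hoc probe, not a formal analysis; the test vectors are my own), one can mix components of very different magnitudes and measure the orthogonality error relative to the vector lengths:

```csharp
using System;

class StabilityProbe
{
    static void Main()
    {
        // Vectors mixing very different magnitudes (arbitrary test cases).
        double[][] tests = {
            new[] { 1e8, 1e-8, 3.0 },
            new[] { 1e-8, 1e8, -2.0 },
            new[] { -5.0, 1e-8, 1e8 },
        };
        foreach (var t in tests)
        {
            double nx = t[0], ny = t[1], nz = t[2];
            double b1x, b1y, b1z, b2x, b2y, b2z;
            if (Math.Abs(nx) >= Math.Abs(ny) && Math.Abs(nx) >= Math.Abs(nz)) {
                b1x = ny / nx; b1y = -1.0; b1z = 0.0;
                double d = nx * nx + ny * ny;
                b2x = nx * nz / d; b2y = ny * nz / d; b2z = -1.0;
            } else if (Math.Abs(ny) >= Math.Abs(nx) && Math.Abs(ny) >= Math.Abs(nz)) {
                b1x = -1.0; b1y = nx / ny; b1z = 0.0;
                double d = nx * nx + ny * ny;
                b2x = nx * nz / d; b2y = ny * nz / d; b2z = -1.0;
            } else {
                b1x = -1.0; b1y = 0.0; b1z = nx / nz;
                double d = nx * nx + nz * nz;
                b2x = nx * ny / d; b2y = -1.0; b2z = ny * nz / d;
            }
            // Orthogonality error scaled by the vector norms.
            double nLen  = Math.Sqrt(nx * nx + ny * ny + nz * nz);
            double b1Len = Math.Sqrt(b1x * b1x + b1y * b1y + b1z * b1z);
            double b2Len = Math.Sqrt(b2x * b2x + b2y * b2y + b2z * b2z);
            double e1 = Math.Abs(nx * b1x + ny * b1y + nz * b1z) / (nLen * b1Len);
            double e2 = Math.Abs(nx * b2x + ny * b2y + nz * b2z) / (nLen * b2Len);
            Console.WriteLine($"{e1:e2} {e2:e2}");
        }
    }
}
```

In these cases the relative errors come out far below double-precision epsilon, consistent with the stability claim; an adversarial input could still behave worse.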

The procedure is well-suited to CPU-based implementations but may not be the best for a GPU setting such as a shader where branching is expensive. In cases where at least one component is known to be nonzero (and not too small), the corresponding branch can be used exclusively, which eliminates branching.
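For example, if the normals are known to come from a heightfield gradient of the form (gx, gy, 1) — a hypothetical setup of my own where n.z is always 1 — only the third branch is ever needed and the conditionals disappear entirely:

```csharp
using System;

static class HeightfieldBasis
{
    // Assumes n.z is bounded away from zero (e.g. heightfield normals of the
    // form (gx, gy, 1)), so only the z-dominant branch of the procedure remains.
    public static ((double x, double y, double z) b1,
                   (double x, double y, double z) b2)
        Orthogonals(double nx, double ny, double nz)
    {
        var b1 = (x: -1.0, y: 0.0, z: nx / nz);
        double d = nx * nx + nz * nz;
        var b2 = (x: nx * ny / d, y: -1.0, z: ny * nz / d);
        return (b1, b2);
    }

    static void Main()
    {
        var (b1, b2) = Orthogonals(0.2, 0.3, 1.0);
        // b1 is orthogonal to n = (0.2, 0.3, 1.0).
        Console.WriteLine(0.2 * b1.x + 0.3 * b1.y + 1.0 * b1.z); // prints 0
    }
}
```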

Let me know if you use the procedure and what you use it for, or if you have any thoughts on ways of improving it!