Danny's Blog

Why Is Matrix Math Different in Every Graphics Program?

Written by: Danny Spencer

Updated September 15, 2023

What gives?

So this is you: You're picking up some 3D software, hear the word "matrix" thrown around, and do some research.

You initialize the same 4x4 translation matrix in Unreal Engine and Godot, and aside from coordinate-system differences, they behave identically. You decide to multiply it with a rotation matrix... and they both behave differently! Unreal does "translate then rotate", but Godot does "rotate then translate"!

"That's weird", you say to yourself. "I guess some programs just do it backwards. Good to know."

But then you initialize that same 4x4 translation matrix in Blender's Python REPL... and it doesn't even do anything!

"Is this a bug in Blender?" you say, mildly annoyed.

After experimenting, you find that transposing the matrix works in Blender.

But it's not a bug in Blender, or in any of them.

Here's why

There are different conventions for representing transformation matrices. The authors of various 3D packages decide to use one convention over another. It really just boils down to that.

I present two core properties that define matrix transformations in different programs:

  1. Element order: Row-major vs Column-major order
  2. Transform style: Post-multiply column-vector (y=Ax) vs Pre-multiply row-vector (y=xA)

Element order: Row-major vs Column-major order

From the Wikipedia article: https://en.wikipedia.org/wiki/Row-_and_column-major_order

Row-major and column-major order answer the question: "How do we describe a matrix as a flat list of numbers?"

Given the 4x4 matrix:

A = \begin{bmatrix} 1&0&0&5\\ 0&1&0&10\\ 0&0&1&15\\ 0&0&0&1 \end{bmatrix}

We can write it as row-major:

float A[16] = {1,0,0,5, 0,1,0,10, 0,0,1,15, 0,0,0,1};

Or as column-major:

float A[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 5,10,15,1};

When we talk about row-major and column-major order, it can refer to the memory layout of the matrix data structure and how it's indexed. Alternatively, it can refer to the order in which elements are supplied when the matrix is initialized from code. I find the latter more useful in most contexts.

Note that element ordering virtually never describes a visual representation of a matrix. If you're pretty-printing a matrix or you see one in a user interface and it's arranged in rows and columns, then they're rows and columns!

With row-major or column-major order, the math doesn't change. It only affects the way the matrix is represented as a flat list.
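To make that concrete, here is a minimal, self-contained C++ sketch (the at_row_major/at_col_major helpers are made-up names for illustration, not from any library). Both flat arrays describe the matrix A above; only the index arithmetic changes:

#include <cstdio>

// The same logical 4x4 matrix from above, flattened two different ways.
float row_major[16] = {1,0,0,5,  0,1,0,10, 0,0,1,15, 0,0,0,1};
float col_major[16] = {1,0,0,0,  0,1,0,0,  0,0,1,0,  5,10,15,1};

// Element (row r, column c) under each convention.
float at_row_major(const float* m, int r, int c) { return m[r * 4 + c]; }
float at_col_major(const float* m, int r, int c) { return m[c * 4 + r]; }

int main() {
    // Both lines print 15: element (2, 3) is the same number either way.
    std::printf("%g\n", at_row_major(row_major, 2, 3));
    std::printf("%g\n", at_col_major(col_major, 2, 3));
}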

Transform style: y=Ax vs y=xA

We use the notation y=Ax and y=xA to refer to two distinct transform styles.

y describes an output vector, x describes an input vector, and A describes a transformation matrix. Please do not confuse x and y with the x and y axes.

When we say y=Ax, we post-multiply with a column vector. The operation looks like this:

\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix} = \begin{bmatrix} 0 & -1 & 0 & 10\\ 1 & 0 & 0 & 20\\ 0 & 0 & 1 & 30\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix}x\\y\\z\\1\end{bmatrix}

When we say y=xA, we pre-multiply with a row vector. The operation looks like this:

\begin{bmatrix}x'&y'&z'&1\end{bmatrix} = \begin{bmatrix}x&y&z&1\end{bmatrix} \begin{bmatrix}0&1&0&0\\ -1&0&0&0\\ 0&0&1&0\\ 10&20&30&1\end{bmatrix}

Both examples represent a rotation on the Z axis by 90 degrees (positive), followed by a translation of (10,20,30). The results of both are equivalent vectors, just transposed.

Notice that the matrices are transposed, and that the order is reversed. Composing longer chains of transforms works this way too. This owes to the fact that in linear algebra, transposing a product reverses the order of the factors and transposes each factor, i.e. (ABC \cdots Z)^T = Z^T \cdots C^T B^T A^T (the "shoes and socks" rule).
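To see the two styles land on the same point, here is a small self-contained C++ sketch (plain arrays, no graphics library; the helper names are mine) that applies the transform above in both styles:

#include <cstdio>

// Multiply a 4x4 matrix (row-major 2D array) by a column vector: y = A x.
void mat_times_col(const float A[4][4], const float x[4], float y[4]) {
    for (int r = 0; r < 4; ++r) {
        y[r] = 0;
        for (int c = 0; c < 4; ++c) y[r] += A[r][c] * x[c];
    }
}

// Multiply a row vector by a 4x4 matrix: y = x A.
void row_times_mat(const float x[4], const float A[4][4], float y[4]) {
    for (int c = 0; c < 4; ++c) {
        y[c] = 0;
        for (int r = 0; r < 4; ++r) y[c] += x[r] * A[r][c];
    }
}

int main() {
    // "Rotate 90 degrees about Z, then translate by (10,20,30)",
    // written once for each style. Note the two matrices are transposes.
    const float A_col[4][4] = {{0,-1,0,10}, {1,0,0,20}, {0,0,1,30}, {0,0,0,1}};
    const float A_row[4][4] = {{0,1,0,0}, {-1,0,0,0}, {0,0,1,0}, {10,20,30,1}};

    const float p[4] = {1, 2, 3, 1};
    float a[4], b[4];
    mat_times_col(A_col, p, a);   // y = A x
    row_times_mat(p, A_row, b);   // y = x A

    // Both lines print the same point: 8 21 33 1
    std::printf("%g %g %g %g\n", a[0], a[1], a[2], a[3]);
    std::printf("%g %g %g %g\n", b[0], b[1], b[2], b[3]);
}

Transposing A_col gives exactly A_row, which is why the two styles never disagree about where a point ends up, only about how the matrices and vectors are written down.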

Pros of y=Ax:

Pros of y=xA:

How do you test how a program does matrix transformations?

We need to figure out both the transform style and the element order.

The test I like to use for transform style is to multiply a translate with a scale:

\mathrm{translate}(10,20,30) \cdot \mathrm{scale}(2)

This works because the product will evaluate to one of two results:

\begin{aligned} \text{Case 1, } y=Ax: && \begin{bmatrix} 1 & 0 & 0 & 10\\ 0 & 1 & 0 & 20\\ 0 & 0 & 1 & 30\\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} &= \begin{bmatrix} 2 & 0 & 0 & 10\\ 0 & 2 & 0 & 20\\ 0 & 0 & 2 & 30\\ 0 & 0 & 0 & 1 \end{bmatrix} \\ \text{Case 2, } y=xA: && \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 10 & 20 & 30 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} &= \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 20 & 40 & 60 & 1 \end{bmatrix} \end{aligned}

The translation for Case 1 is ⟨10, 20, 30⟩, while the translation for Case 2 is ⟨20, 40, 60⟩.

Other tests certainly work, as long as the two transforms don't commute. For example, a rotation and a translation work, because they don't commute: rotate · translate ≠ translate · rotate. But a rotation and a uniform scale don't work, because they do commute: rotate · scale = scale · rotate.
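To run the test in practice, here is a sketch using glm as the example target (glm appears in the appendix table below; any library with translate/scale builders works the same way):

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Build translate(10,20,30) * scale(2) with the library's own helpers.
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(10, 20, 30));
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
    glm::mat4 M = T * S;

    // In glm, M[3] is the fourth column, which holds the translation.
    // Printing 10 20 30 is the y=Ax signature (Case 1); a y=xA library
    // would leave you with a translation of 20 40 60 instead.
    std::printf("%g %g %g\n", M[3][0], M[3][1], M[3][2]);
}

Swap in your program's own matrix type and transform helpers; whichever translation you read back tells you the style.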

The transform style is y=Ax, if and only if (any of):

The transform style is y=xA, if and only if (any of):

Testing the element order is easiest if you know the transform style and how a matrix is initialized:

Initialization | y=Ax | y=xA
mat4(1,0,0,10, 0,1,0,20, 0,0,1,30, 0,0,0,1) | Row-major | Column-major
mat4(1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1) | Column-major | Row-major

Note that the left cells above are pseudocode for a matrix that translates by (10,20,30).

Some APIs will group into vectors, while others will make you provide a flat array of numbers.
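For example, here is the probe for glm, which the appendix table lists as column-major and y=Ax. The sketch assumes only glm's 16-scalar mat4 constructor:

#include <cstdio>
#include <glm/glm.hpp>

int main() {
    // Pass 16 scalars with 10,20,30 as the last group of four.
    glm::mat4 M(1,0,0,0,  0,1,0,0,  0,0,1,0,  10,20,30,1);

    // glm transforms with y=Ax. The origin lands at (10,20,30), so the
    // constructor consumed the scalars in column-major order: the last
    // four numbers became the translation column.
    glm::vec4 p = M * glm::vec4(0, 0, 0, 1);
    std::printf("%g %g %g\n", p.x, p.y, p.z);
}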

y=Ax and y=xA don't affect basic matrix operations

Those two styles only prescribe how transform matrices are created. Given known matrices, they don't actually affect the basic matrix operations or the order of their arguments.

For matrix multiplication, for example, the order only swaps because the matrices themselves are not the same in the different programs.

No matter which style a program uses, the following operations should remain identical across them.

Appendix: a table of matrix transform conventions for popular programs

The results in this table are all from my own independent research. I've tried and tested the code in the table. Feel free to fact-check!

Application | Matrix initialization order* | Transform style | Language | Example (translate by x=10, y=20, z=30)

Blender | Row-major | y=Ax | Python
cube.matrix_local = Matrix( ((1,0,0,10),(0,1,0,20),(0,0,1,30),(0,0,0,1)) )

Maya | Row-major | y=xA | MEL, Python
cmds.setAttr('pCube1.offsetParentMatrix', [1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1], type='matrix')

Houdini | Row-major | y=xA | VEX
@P = @P * {1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1};

Cinema 4D | Column-major* | y=Ax | Python
out = c4d.Matrix(off=c4d.Vector(10,20,30), v1=c4d.Vector(1,0,0), v2=c4d.Vector(0,1,0), v3=c4d.Vector(0,0,1)) * pos

USD | Row-major | y=xA | .usda format
matrix4d xformOp:transform = ( (1,0,0,0), (0,1,0,0), (0,0,1,0), (10,20,30,1) )

Unreal Engine | Row-major | y=xA | C++
SetActorTransform(FTransform(FMatrix({ 1,0,0,0 }, { 0,1,0,0 }, { 0,0,1,0 }, { 10,20,30,1 })));

Unity | Column-major | y=Ax | C#

Godot | Column-major** | y=Ax | GDScript
transform = Transform3D(Vector3(1,0,0), Vector3(0,1,0), Vector3(0,0,1), Vector3(10,20,30)) * transform

CSS | Column-major | y=Ax | CSS
transform:matrix3d(1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1)

GLSL | Column-major | y=Ax (by convention) | GLSL
gl_Position = worldViewProj * mat4(1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1) * pos;

HLSL | Row-major | y=xA (by convention) | HLSL
output.position = mul(mul(float4(input.position,1), float4x4(1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1)), worldViewProj);

glm | Column-major | y=Ax | C++
glm::vec4 out = glm::mat4(1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1) * in;

DirectXMath | Row-major | y=xA | C++
XMVECTOR out = XMVector3Transform(in, XMMATRIX(1,0,0,0, 0,1,0,0, 0,0,1,0, 10,20,30,1));

Eigen | Row-major | y=Ax | C++
Eigen::Vector4f out = Eigen::Matrix4f({ {1,0,0,10},{0,1,0,20},{0,0,1,30},{0,0,0,1} }) * in;

* Cinema 4D: The translation vector is the first column, and we effectively multiply with the column vector [1, x, y, z]^T. See: https://developers.maxon.net/docs/Cinema4DPythonSDK/html/manuals/data_algorithms/classic_api/matrix.html#matrix-fundamental

** Godot: The Transform3D class is a composite of Basis vectors and an Origin vector, which is effectively column-major. The "tscn" scene file stores a flat list of the elements of a 3x4 matrix in column-major order.

* Note: Initialization order describes how a matrix is created from scratch. This describes the element order for numbers passed to a class constructor, function, array, or inline syntax. It's usually the same as storage order, but not always (Eigen is an example of this).
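A quick way to see the Eigen case, as a sketch that relies only on Eigen's comma initializer and its default column-major storage:

#include <cstdio>
#include <Eigen/Dense>

int main() {
    // The comma initializer consumes values row by row
    // (row-major initialization order)...
    Eigen::Matrix4f m;
    m << 1, 0, 0, 10,
         0, 1, 0, 20,
         0, 0, 1, 30,
         0, 0, 0, 1;

    // ...but the default storage order is column-major, so the raw buffer
    // ends with the translation: prints "m(0,3) = 10, data[12] = 10".
    std::printf("m(0,3) = %g, data[12] = %g\n", m(0, 3), m.data()[12]);
}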

The work is not shown, but "Transform style" is based on other indications such as built-in transform functions and the order that transforms are applied (e.g. "translate", "rotate", "scale"). GLSL and HLSL are "by convention", because they don't include transform functions and they allow for both y=Ax and y=xA. GLSL and HLSL conventions are based on the old (deprecated!) transform stacks built into OpenGL and DirectX respectively.