In 1998, a technique emerged that would transform real-time graphics: normal mapping. The idea was deceptively simple. Instead of modeling every bump, scratch, and crevice with actual geometry, what if we could fake the way light interacts with a surface using nothing more than a texture?
Normal mapping allows a flat polygon to appear as though it has intricate surface detail. A six-polygon cube can look like a carved stone block. A smooth sphere can look like a golf ball. The geometry stays simple; the illusion does the heavy lifting. This article breaks down how normal mapping works: the math, the shaders, and the tricks that make it essential to game development.
The Geometry Problem
Every triangle in a game costs something: memory, vertex processing time, and rasterization bandwidth. A brick wall modeled with actual geometric detail, where each brick protrudes and each mortar line is recessed, might require tens of thousands of triangles. Multiply that by every surface in a level, and you have blown your polygon budget before placing a single character on screen.
Most of what we perceive as "surface detail" comes from how light interacts with a surface, not from the silhouette. The subtle shadows in the mortar lines and the highlights on brick edges are all products of the surface normal at each point. If we can change the normals without changing the geometry, we can change the lighting, and the surface appears detailed.
What Is a Normal?
A surface normal is a unit vector perpendicular to a surface at a given point. It determines how that point interacts with light. When a light ray hits a surface, the angle between the incoming light direction and the surface normal determines the brightness of that point. This relationship is the core of diffuse lighting, described by Lambert's cosine law:
$$I_{\text{diffuse}} = \max(0,\; \hat{N} \cdot \hat{L})$$
where $\hat{N}$ is the surface normal and $\hat{L}$ is the unit direction vector toward the light source. The dot product yields a value between $-1$ and $1$, and we clamp it to zero to avoid negative light contributions.
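As a quick sketch (with a hypothetical helper name), Lambert's law reduces to a clamped dot product:

```javascript
// Clamped Lambertian diffuse term: max(0, N · L).
// Both vectors are assumed to be unit-length [x, y, z] arrays.
function lambertDiffuse(normal, lightDir) {
  const dot = normal[0] * lightDir[0]
            + normal[1] * lightDir[1]
            + normal[2] * lightDir[2];
  return Math.max(0, dot);
}

// Light shining straight down onto an upward-facing surface: full brightness.
console.log(lambertDiffuse([0, 0, 1], [0, 0, 1])); // 1
// Light arriving from behind the surface: clamped to zero.
console.log(lambertDiffuse([0, 0, 1], [0, 0, -1])); // 0
```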
On a flat polygon, every fragment shares the same interpolated normal, so the entire surface lights uniformly. But in reality, surfaces are rarely perfectly flat. A brick wall has bumps, grooves, and irregularities, each with its own normal direction. If we could specify a different normal at every pixel, we could simulate all that detail without a single extra triangle.
That is exactly what a normal map does.
Encoding Normals in a Texture
A normal map is an RGB texture where each pixel's color encodes a normal vector:
- Red channel encodes the X component of the normal
- Green channel encodes the Y component
- Blue channel encodes the Z component
Since normal components range from $-1$ to $1$, but color channels range from $0$ to $1$ (or $0$ to $255$), we apply a simple remapping:
$$\text{color} = \frac{\hat{N} + 1}{2}$$
And to decode in a shader, we reverse it:
$$\hat{N} = \text{color} \times 2 - 1$$
This explains why normal maps appear predominantly blue. A flat surface pointing straight outward has a normal of $(0, 0, 1)$, which maps to the color $(0.5, 0.5, 1.0)$, a distinctive lavender-blue. Any deviation from flat shows up as a shift toward red, green, or darker blue.
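The encode/decode pair above can be sketched in a few lines (function names are illustrative, not from any particular engine):

```javascript
// Encode a unit normal with components in [-1, 1] into color space [0, 1].
function encodeNormal(n) {
  return n.map(c => (c + 1) / 2);
}

// Decode a color in [0, 1] back into a normal in [-1, 1].
function decodeNormal(color) {
  return color.map(c => c * 2 - 1);
}

// A flat, outward-facing normal becomes the characteristic lavender-blue.
console.log(encodeNormal([0, 0, 1])); // [0.5, 0.5, 1]
// Decoding recovers the original vector.
console.log(decodeNormal([0.5, 0.5, 1])); // [0, 0, 1]
```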
Tangent Space: The Key Concept
Normal maps are typically authored in tangent space, a coordinate system local to the surface of the mesh. This is what makes them reusable: the same brick normal map can be applied to a wall, a floor, or a curved surface, and it will work correctly because the normals are defined relative to the surface, not the world.
Tangent space is defined by three orthogonal vectors at each vertex:
- $\hat{T}$ — the tangent, aligned with the U texture coordinate direction
- $\hat{B}$ — the bitangent (sometimes called the binormal), aligned with the V direction
- $\hat{N}$ — the normal, perpendicular to the surface
Together, these form the TBN matrix:
$$\mathbf{TBN} = \begin{bmatrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{bmatrix}$$
To use a sampled normal from the map in world-space (or view-space) lighting calculations, we multiply the tangent-space normal by this matrix:
$$\hat{N}_{\text{world}} = \mathbf{TBN} \cdot \hat{N}_{\text{tangent}}$$
This transformation rotates the per-pixel normal from the texture's local coordinate system into the coordinate system used for lighting. Without it, the normal map would only produce correct results on surfaces perfectly aligned with the world axes.
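Under the column convention above, the transform is an ordinary matrix-vector product. A minimal CPU-side sketch (hypothetical names; real renderers do this on the GPU):

```javascript
// Transform a tangent-space normal into world space using the TBN basis.
// tbn is [T, B, N]: three column vectors, each an [x, y, z] array.
function tbnTransform(tbn, n) {
  const [T, B, N] = tbn;
  return [
    T[0] * n[0] + B[0] * n[1] + N[0] * n[2],
    T[1] * n[0] + B[1] * n[1] + N[1] * n[2],
    T[2] * n[0] + B[2] * n[1] + N[2] * n[2],
  ];
}

// A surface facing +X in world space: tangent +Y, bitangent +Z, normal +X.
// The "flat" tangent-space normal (0, 0, 1) maps to the world-space +X normal.
console.log(tbnTransform([[0, 1, 0], [0, 0, 1], [1, 0, 0]], [0, 0, 1])); // [1, 0, 0]
```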
The Shader: Normal Mapping in Practice
Here is a simplified vertex and fragment shader pair implementing tangent-space normal mapping. The vertex shader passes the TBN basis vectors to the fragment shader, which samples the normal map and transforms the result:
```glsl
// Vertex Shader
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
attribute vec3 tangent;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat3 normalMatrix;

varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec3 vPosition;

void main() {
    vUv = uv;
    // Transform the basis vectors into view space.
    vNormal = normalize(normalMatrix * normal);
    vTangent = normalize(normalMatrix * tangent);
    // Assumes a right-handed basis; many engines store a handedness
    // sign in tangent.w and multiply it in here.
    vBitangent = cross(vNormal, vTangent);
    vPosition = (modelViewMatrix * vec4(position, 1.0)).xyz;
    gl_Position = projectionMatrix * vec4(vPosition, 1.0);
}
```

```glsl
// Fragment Shader
uniform sampler2D normalMap;
uniform vec3 lightPosition;
uniform float normalStrength;

varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec3 vPosition;

void main() {
    // Sample and unpack from [0, 1] to [-1, 1]
    vec3 mapN = texture2D(normalMap, vUv).rgb * 2.0 - 1.0;
    mapN.xy *= normalStrength;
    mapN = normalize(mapN);

    // Build the TBN matrix and transform the normal to view space
    mat3 tbn = mat3(vTangent, vBitangent, vNormal);
    vec3 N = normalize(tbn * mapN);

    // Diffuse lighting with a small ambient term
    vec3 lightDir = normalize(lightPosition - vPosition);
    float diff = max(dot(N, lightDir), 0.0);
    vec3 color = vec3(0.8, 0.6, 0.4) * (diff * 0.8 + 0.2);
    gl_FragColor = vec4(color, 1.0);
}
```

The critical line is `vec3 N = normalize(tbn * mapN);`. It replaces the smooth interpolated vertex normal with a detailed per-pixel normal from the texture. Every fragment now has a unique normal direction, and the result is the illusion of complex surface geometry.
Generating Normal Maps
From a Height Map
The most intuitive way to create a normal map is from a grayscale height map (also called a bump map), where brighter pixels represent higher points on the surface. The normal at each pixel is computed from the partial derivatives of the height field:
$$\hat{N} = \text{normalize}\!\left(-\frac{\partial h}{\partial u},\; -\frac{\partial h}{\partial v},\; 1\right)$$
In practice, these derivatives are approximated with finite differences between neighboring pixels; the simplest form uses central differences, while Sobel-style kernels are a common, slightly smoother alternative:
```javascript
// Read a height sample, clamping coordinates to the image bounds.
function getHeight(heightData, x, y, width) {
  const rows = heightData.length / width;
  const cx = Math.min(Math.max(x, 0), width - 1);
  const cy = Math.min(Math.max(y, 0), rows - 1);
  return heightData[cy * width + cx];
}

function heightToNormal(heightData, x, y, width, strength) {
  const left = getHeight(heightData, x - 1, y, width);
  const right = getHeight(heightData, x + 1, y, width);
  const up = getHeight(heightData, x, y - 1, width);
  const down = getHeight(heightData, x, y + 1, width);

  // Central differences approximate the height-field gradient.
  const dx = (left - right) * strength;
  const dy = (down - up) * strength;
  const dz = 1.0;

  // Normalize, then remap from [-1, 1] to the [0, 1] color range.
  const len = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return {
    r: (dx / len) * 0.5 + 0.5,
    g: (dy / len) * 0.5 + 0.5,
    b: (dz / len) * 0.5 + 0.5
  };
}
```

The `strength` parameter controls how pronounced the bumps appear. Higher values exaggerate the slopes, producing sharper surface detail.
Baking from High-Poly Models
The most common professional workflow is to sculpt a high-polygon model with full geometric detail, then bake the normals onto a low-polygon version. Tools like Substance Painter, xNormal, and Blender's bake system cast rays from each point on the low-poly surface to the high-poly surface and record the difference in normals. This captures fine detail that actual geometry couldn't render in real time: wrinkles, pores, bolts, seams.
Object-Space vs. Tangent-Space Normal Maps
There are two flavors of normal maps used in practice:
- Tangent-space normal maps are the most common. Normals are defined relative to the surface, making them reusable across different meshes and compatible with deforming or animated geometry. These maps appear predominantly blue.
- Object-space normal maps define normals in the object's local coordinate system. They appear rainbow-colored and are baked for a specific mesh. They can produce slightly better quality because they avoid tangent-space interpolation artifacts, but they cannot be tiled or reused on different geometry.
Most game engines default to tangent-space normal maps for their flexibility.
Real-World Examples
Normal mapping became mainstream with Doom 3 (2004), one of the first games to apply the technique to virtually every surface. Its relatively modest polygon counts combined with aggressive normal mapping produced more surface detail than players had seen before. John Carmack's id Tech 4 engine made normal mapping a standard expectation rather than a novelty.
Today, normal mapping is ubiquitous. The Uncharted series, The Last of Us, God of War, and virtually every modern AAA title use normal maps on nearly every material. In Fortnite, character skins rely on normal maps to add sculpted detail to relatively simple meshes. Minecraft shader packs like SEUS and BSL add normal maps to the game's flat block textures, giving them dynamic lighting and a sense of surface depth.
Even 2D games benefit. Unity and Godot both support normal maps on 2D sprites, giving flat pixel art dynamic per-pixel lighting. Owlboy and Celeste both used this to add depth without changing their art style.
Limitations and What Comes Next
Normal mapping has two well-known weaknesses:
- Silhouettes remain flat. Because the actual geometry is unchanged, the outline of the object is still smooth. A normal-mapped brick wall looks convincing head-on, but at a grazing angle, the perfectly straight edge betrays the illusion.
- No self-occlusion or parallax. Bumps cannot cast shadows on neighboring bumps, and there is no depth shift when the camera moves laterally across the surface.
Several techniques extend normal mapping to address these shortcomings:
- Parallax Mapping offsets texture coordinates based on the viewing angle and a height map, creating the illusion of depth. Parallax Occlusion Mapping (POM) uses ray marching through the height field for even more convincing results at steep angles.
- Displacement Mapping actually moves vertices based on a height map, producing real geometric detail, but at the cost of additional polygons, typically generated through hardware tessellation.
- Detail Normal Maps blend a second, finely-tiled normal map at a smaller scale to add micro-surface texture like material grain or skin pores.
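To illustrate the first of these extensions, the core of basic parallax mapping is a single texture-coordinate offset along the view direction. A simplified CPU-side sketch (hypothetical names; real implementations run per-fragment in a shader and sample a height texture):

```javascript
// Basic parallax mapping: shift texture coordinates along the
// tangent-space view direction, proportional to the sampled height.
// viewDirTangent is a unit vector with z pointing away from the surface;
// scale controls the apparent depth of the effect.
function parallaxOffset(uv, viewDirTangent, height, scale) {
  const [vx, vy, vz] = viewDirTangent;
  // Project the view direction onto the surface plane, scaled by height.
  return [
    uv[0] + (vx / vz) * height * scale,
    uv[1] + (vy / vz) * height * scale,
  ];
}

// Viewed straight on (view direction along +z), the offset vanishes.
console.log(parallaxOffset([0.5, 0.5], [0, 0, 1], 1.0, 0.05)); // [0.5, 0.5]
```

At grazing angles, `vz` shrinks and the offset grows, which is exactly where the depth illusion is needed most; it is also where basic parallax mapping breaks down and POM's ray marching takes over.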
Despite these more advanced alternatives, normal mapping remains the workhorse of game surface detail. Its cost-to-quality ratio is hard to beat: a single texture lookup per pixel transforms flat surfaces into convincing, richly detailed materials. If you are building a game, normal mapping will be one of the first rendering techniques you reach for.