
Game Dev Mechanics: Metaballs & Implicit Surfaces — How It Works


What Are Metaballs?

Imagine dropping two blobs of mercury onto a table. As they drift close together, they merge smoothly into a single larger blob. Pull them apart and they stretch into a thin bridge before snapping back into separate drops. This behavior is what metaballs simulate in computer graphics.

Introduced by Jim Blinn in 1982 under the name "blobby objects," metaballs define surfaces not through explicit geometry (vertices and triangles) but through an underlying scalar field. Any point in space has a field value determined by its proximity to nearby metaball centers. The visible surface appears wherever that field crosses a chosen threshold, creating shapes that blend, merge, and separate naturally.

You've seen metaballs in action even if you didn't know the name: the lava in a lava lamp, the goo in World of Goo, the character in LocoRoco, the paint blobs in de Blob, and countless fluid and plasma effects across decades of games.

Implicit Surfaces and Scalar Fields

To understand metaballs, you first need to understand implicit surfaces. In traditional 3D graphics, surfaces are explicit: defined by lists of vertices connected by edges and faces. An implicit surface, by contrast, is defined by a mathematical function:

$$f(\mathbf{p}) = T$$

where $\mathbf{p}$ is any point in space, $f$ is a scalar-valued function, and $T$ is a threshold constant. The surface is the set of all points where $f$ equals exactly $T$. Points where $f(\mathbf{p}) > T$ are considered "inside" the surface, and points where $f(\mathbf{p}) < T$ are "outside."

A simple example: a sphere of radius $r$ centered at the origin can be defined implicitly by $f(\mathbf{p}) = |\mathbf{p}|^2$ with threshold $T = r^2$. Every point at distance $r$ from the origin satisfies $|\mathbf{p}|^2 = r^2$ and lies on the sphere's surface. (For this particular $f$ the field grows with distance, so the inside/outside convention is flipped: points inside the sphere have $f < T$. Metaball fields, which peak at their centers, follow the convention above.)
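As a concrete sketch of the inside/on/outside test (with a hypothetical `classify` helper, not code from the demo):

```javascript
// Implicit sphere: f(p) = |p|^2 tested against threshold T = r^2.
// `classify` is an illustrative helper name, not part of the demo code.
function classify(p, r) {
  const f = p[0] * p[0] + p[1] * p[1] + p[2] * p[2]; // f(p) = |p|^2
  const T = r * r;
  if (Math.abs(f - T) < 1e-9) return "surface"; // f(p) = T
  return f < T ? "inside" : "outside"; // polarity flipped vs. metaball fields
}

console.log(classify([3, 4, 0], 5)); // "surface" — a 3-4-5 point lies on the radius-5 sphere
console.log(classify([1, 1, 1], 5)); // "inside"
console.log(classify([10, 0, 0], 5)); // "outside"
```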

The scalar field is the function $f$ evaluated at every point in space. You can think of it as an invisible "energy" that fills the world. Each metaball center radiates energy that falls off with distance, and the surface appears wherever the total accumulated energy hits the threshold.

The Metaball Energy Function

Each metaball $i$ has a center position $\mathbf{c}_i$ and a radius of influence $r_i$. It contributes to the scalar field based on the distance from any query point $\mathbf{p}$:

$$f_i(\mathbf{p}) = \frac{r_i^2}{|\mathbf{p} - \mathbf{c}_i|^2}$$

This is the inverse-square falloff model: the energy at distance $d$ from the center is $\frac{r^2}{d^2}$. At exactly $d = r$, the contribution is $1.0$. Closer to the center, the value grows toward infinity. Farther away, it rapidly diminishes toward zero.

The total field value at any point is the sum of contributions from all $n$ metaballs:

$$f(\mathbf{p}) = \sum_{i=1}^{n} \frac{r_i^2}{|\mathbf{p} - \mathbf{c}_i|^2}$$

With a threshold $T = 1.0$, each isolated metaball produces a perfect sphere (or circle in 2D) of radius $r_i$. But when two metaballs get close enough that their fields overlap, the combined field exceeds the threshold in the region between them, and the surface bulges outward, merging the two shapes seamlessly.

Merging is an emergent property of field summation, not something you have to code explicitly.
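That emergence is easy to verify numerically. In the sketch below (illustrative helper names, not the demo's code), a point 1.1 units from a lone radius-1 ball is outside the surface, but the same point becomes inside once a second ball sits 1.1 units away on its other side:

```javascript
// Sum the inverse-square contributions of n 2D metaballs at a query point.
function fieldAt(px, py, balls) {
  let f = 0;
  for (const { x, y, r } of balls) {
    const dx = px - x, dy = py - y;
    f += (r * r) / (dx * dx + dy * dy); // r^2 / d^2
  }
  return f;
}

const T = 1.0;
// One isolated ball: a point at distance 1.1 > r = 1 is outside (field ≈ 0.83).
const soloField = fieldAt(1.1, 0, [{ x: 0, y: 0, r: 1 }]);
// Two balls 2.2 apart: the midpoint, 1.1 from each center, is now inside
// (field ≈ 1.65) — the bridge between the balls appears for free.
const pairField = fieldAt(0, 0, [{ x: -1.1, y: 0, r: 1 }, { x: 1.1, y: 0, r: 1 }]);
console.log(soloField > T, pairField > T); // false true
```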

Alternative Falloff Functions

The inverse-square model isn't the only option. Blinn's original formulation used a Gaussian (exponential) falloff:

$$f_i(\mathbf{p}) = e^{-a \cdot |\mathbf{p} - \mathbf{c}_i|^2}$$

where $a$ controls how quickly the field decays. The Gaussian produces smoother blending but is more expensive to compute. Another popular choice is the polynomial falloff used by the Wyvill brothers, which provides compact support: the field drops to exactly zero beyond radius $r$. This means distant metaballs don't affect each other at all, which is useful for performance since you can skip metaballs that are too far from the query point.

In practice, most real-time implementations use the inverse-square model for its simplicity and visual quality, accepting that it has infinite range (the field never truly reaches zero, though it becomes negligible very quickly).
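For comparison, here's a sketch of the three kernels side by side. Note the compact-support polynomial shown is one commonly used variant, $(1 - d^2/r^2)^3$, rather than the Wyvills' exact formulation:

```javascript
// Three falloff kernels as functions of distance d (illustrative sketches).
const inverseSquare = (d, r) => (r * r) / (d * d);
const gaussian = (d, a) => Math.exp(-a * d * d); // Blinn-style exponential decay
const compactPoly = (d, r) => {
  if (d >= r) return 0; // compact support: exactly zero beyond r
  const t = 1 - (d * d) / (r * r);
  return t * t * t; // (1 - d^2/r^2)^3 — a common variant, not Wyvill's exact polynomial
};

// Compact support lets you cull distant metaballs outright:
console.log(compactPoly(2.0, 1.5)); // 0 — no contribution at all
console.log(inverseSquare(2.0, 1.5) > 0); // true — never quite reaches zero
```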

From Field to Pixels: Rendering Metaballs

Once you have the scalar field defined, you need to actually render it. There are two main approaches.

Approach 1: Isosurface Extraction

The classic approach extracts a polygonal mesh from the scalar field using algorithms like Marching Squares (in 2D) or Marching Cubes (in 3D). These algorithms divide space into a grid, evaluate the field at each grid vertex, and generate triangles wherever the field crosses the threshold. The resulting mesh can then be rendered with standard rasterization.

This approach works well for CPU-based rendering and situations where you need the actual mesh geometry, for example physics collisions against a metaball surface. The mesh resolution is limited by grid density, and updating the mesh every frame when metaballs move can be expensive.
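A minimal marching-squares sketch in JavaScript might look like the following (simplified for clarity: ambiguous saddle cells, which have four edge crossings, are paired arbitrarily here):

```javascript
// Minimal marching squares: sample the field on a grid and, in each cell
// where it crosses the threshold, emit a line segment whose endpoints are
// linearly interpolated along the crossing cell edges.
function marchingSquares(field, threshold, xMin, yMin, cellSize, cols, rows) {
  const segments = [];
  // Interpolate the crossing point between two corner samples.
  const lerp = (pa, pb, fa, fb) => {
    const t = (threshold - fa) / (fb - fa);
    return [pa[0] + t * (pb[0] - pa[0]), pa[1] + t * (pb[1] - pa[1])];
  };
  for (let j = 0; j < rows; j++) {
    for (let i = 0; i < cols; i++) {
      const x = xMin + i * cellSize, y = yMin + j * cellSize;
      // Corner positions, counter-clockwise from the bottom-left.
      const corners = [
        [x, y], [x + cellSize, y],
        [x + cellSize, y + cellSize], [x, y + cellSize],
      ];
      const f = corners.map(([cx, cy]) => field(cx, cy));
      // Collect crossing points on each of the four edges.
      const pts = [];
      for (let e = 0; e < 4; e++) {
        const a = e, b = (e + 1) % 4;
        if ((f[a] > threshold) !== (f[b] > threshold)) {
          pts.push(lerp(corners[a], corners[b], f[a], f[b]));
        }
      }
      for (let k = 0; k + 1 < pts.length; k += 2) {
        segments.push([pts[k], pts[k + 1]]);
      }
    }
  }
  return segments;
}

// Contour a single metaball of radius 1 at the origin with threshold 1;
// every extracted vertex should sit near the unit circle.
const f = (x, y) => 1 / (x * x + y * y + 1e-9);
const segs = marchingSquares(f, 1.0, -2, -2, 0.25, 16, 16);
```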

Approach 2: Per-Pixel Evaluation (GPU Shader)

The modern, real-time approach evaluates the scalar field directly in a fragment shader. For every pixel on screen, the shader computes the total field value and colors the pixel based on whether it's inside or outside the surface. This is massively parallel on the GPU and produces pixel-perfect results at any resolution.

In 2D, this is straightforward: each pixel's screen coordinates map to a position in the field, and you sum the contributions from all metaballs. In 3D, you can combine this with ray marching: cast a ray from the camera through each pixel and step along it until the field value exceeds the threshold, then shade the surface at that intersection point.
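The ray-marching step can be sketched on the CPU like this (a naive fixed-step march for illustration, not a production sphere tracer):

```javascript
// Sum the 3D inverse-square field at point p = [x, y, z].
function fieldAt3D(p, balls) {
  let f = 0;
  for (const b of balls) {
    const dx = p[0] - b.c[0], dy = p[1] - b.c[1], dz = p[2] - b.c[2];
    f += (b.r * b.r) / (dx * dx + dy * dy + dz * dz + 1e-9);
  }
  return f;
}

// March from `origin` along unit `dir` in fixed steps; return the distance
// at which the field first exceeds the threshold, or null on a miss.
function march(origin, dir, balls, threshold, maxDist = 10, step = 0.01) {
  for (let t = 0; t < maxDist; t += step) {
    const p = [origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]];
    if (fieldAt3D(p, balls) > threshold) return t; // entered the surface
  }
  return null; // ray missed every blob
}

// A single metaball of radius 1 centered at z = 5: a ray cast down +z
// should hit the front of its surface at roughly z = 4.
const t = march([0, 0, 0], [0, 0, 1], [{ c: [0, 0, 5], r: 1 }], 1.0);
```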

The per-pixel approach is what powers the interactive demo at the top of this article. Here's a simplified GLSL fragment shader that renders 2D metaballs:

uniform vec2 balls[8];
uniform float radii[8];
uniform float threshold;
varying vec2 vUv;

void main() {
    vec2 p = vUv;
    float field = 0.0;

    for (int i = 0; i < 8; i++) {
        vec2 diff = p - balls[i];
        float d2 = dot(diff, diff);
        field += (radii[i] * radii[i]) / (d2 + 0.00001);
    }

    if (field > threshold) {
        float intensity = smoothstep(threshold, threshold * 5.0, field);
        gl_FragColor = vec4(mix(vec3(0.0, 0.4, 1.0), vec3(1.0), intensity), 1.0);
    } else {
        float glow = field / threshold;
        gl_FragColor = vec4(vec3(0.02, 0.05, 0.15) * glow, 1.0);
    }
}

The whole thing boils down to a simple loop: for each pixel, sum up the field contributions from every metaball, then color based on the result. The GPU executes this for millions of pixels simultaneously.

Implementation: A Complete 2D Metaball System

Here's a practical JavaScript implementation that manages a collection of metaballs and evaluates the scalar field on the CPU. While you'd typically use a GPU shader for real-time rendering, this illustrates the core logic clearly:

class MetaballSystem {
    constructor() {
        this.balls = [];
    }

    addBall(x, y, radius) {
        this.balls.push({ x, y, radius });
    }

    // Evaluate the scalar field at point (px, py)
    evaluate(px, py) {
        let field = 0;
        for (const ball of this.balls) {
            const dx = px - ball.x;
            const dy = py - ball.y;
            const distSq = dx * dx + dy * dy;
            field += (ball.radius * ball.radius) / (distSq + 0.00001);
        }
        return field;
    }

    // Render to a canvas using per-pixel evaluation
    render(ctx, width, height, threshold = 1.0) {
        const imageData = ctx.createImageData(width, height);
        const data = imageData.data;

        for (let y = 0; y < height; y++) {
            for (let x = 0; x < width; x++) {
                const px = x / width;
                const py = y / height;
                const field = this.evaluate(px, py);
                const idx = (y * width + x) * 4;

                if (field > threshold) {
                    const t = Math.min((field - threshold) / (threshold * 4), 1);
                    data[idx]     = Math.floor(t * 255);
                    data[idx + 1] = Math.floor(100 + t * 155);
                    data[idx + 2] = 255;
                } else {
                    const glow = field / threshold;
                    data[idx]     = Math.floor(5 * glow);
                    data[idx + 1] = Math.floor(13 * glow);
                    data[idx + 2] = Math.floor(38 * glow);
                }
                data[idx + 3] = 255;
            }
        }
        ctx.putImageData(imageData, 0, 0);
    }
}

This CPU-based approach works but is slow for large resolutions. On an 800 by 450 canvas, you're evaluating the field function 360,000 times per frame, and each evaluation loops through all metaballs. A GPU shader does this same work in parallel across thousands of cores, making it vastly faster.

Controlling the Merge Behavior

The threshold value $T$ is your primary creative lever for tuning the metaball look:

  • Lower threshold ($T = 0.5$): Metaballs appear larger and merge at greater distances. The surfaces are puffy and overlap easily, creating a blobbier, more connected look.
  • Higher threshold ($T = 2.0$): Metaballs appear smaller and only merge when very close. The surfaces are tighter and more defined, showing distinct individual blobs.

Try adjusting the threshold slider in the demo above to see this effect in real time. Many games expose this as a visual quality parameter or animate it dynamically. A slime creature, for instance, might lower its threshold when absorbing smaller blobs, making the merge appear more dramatic.

You can also vary the radii of individual metaballs to create asymmetric merging. A large "parent" blob will absorb smaller "child" blobs at a distance, while same-sized blobs merge symmetrically. Radius variation matters for convincing organic shapes; real-world fluids and organisms are rarely uniform.
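For two equal metaballs, the threshold's effect on merging can be made exact. Along the line between the centers (a distance $D$ apart), the summed field is smallest at the midpoint, where it equals $8r^2/D^2$; the bridge between the balls survives exactly while that value stays above $T$. Solving $8r^2/D^2 = T$ gives the pinch-off distance, sketched below with illustrative names:

```javascript
// Two equal metaballs of radius r stay merged while their center distance D
// satisfies 8 r^2 / D^2 >= T, i.e. up to D = r * sqrt(8 / T).
function mergeDistance(r, T) {
  return r * Math.sqrt(8 / T);
}

// Lower thresholds merge from farther away, as the slider demonstrates:
console.log(mergeDistance(1, 1.0).toFixed(3)); // "2.828"
console.log(mergeDistance(1, 2.0).toFixed(3)); // "2.000"
console.log(mergeDistance(1, 0.5).toFixed(3)); // "4.000"
```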

Computing Normals from the Gradient

To light a metaball surface properly, you need surface normals. Since the surface is defined implicitly by $f(\mathbf{p}) = T$, the normal at any surface point is the gradient of the field:

$$\hat{n} = \frac{\nabla f}{|\nabla f|}$$

For the inverse-square falloff, the partial derivatives are:

$$\frac{\partial f}{\partial x} = \sum_{i=1}^{n} \frac{-2r_i^2(x - x_i)}{d_i^4}$$

In a shader, you can approximate the gradient numerically using finite differences, evaluating the field at small offsets from the current point:

float eps = 0.002;
float fx = field(p + vec2(eps, 0.0)) - field(p - vec2(eps, 0.0));
float fy = field(p + vec2(0.0, eps)) - field(p - vec2(0.0, eps));
vec3 normal = normalize(vec3(-fx, -fy, 0.3));

The $z$ component in the normal vector creates a pseudo-3D shading effect on a 2D metaball rendering, giving the blobs a rounded, glossy appearance. The demo above uses this technique: the gradient of the scalar field drives diffuse and specular lighting, making the flat 2D blobs appear three-dimensional.
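The analytic gradient and the finite-difference approximation should agree closely; here's a quick JavaScript sanity check (mirroring the article's field code, in 2D):

```javascript
// Inverse-square field and its analytic gradient for a set of 2D metaballs.
function field(px, py, balls) {
  let f = 0;
  for (const b of balls) {
    const dx = px - b.x, dy = py - b.y;
    f += (b.r * b.r) / (dx * dx + dy * dy);
  }
  return f;
}

function analyticGradient(px, py, balls) {
  let gx = 0, gy = 0;
  for (const b of balls) {
    const dx = px - b.x, dy = py - b.y;
    const d2 = dx * dx + dy * dy;
    const k = (-2 * b.r * b.r) / (d2 * d2); // -2 r^2 / d^4 per component
    gx += k * dx;
    gy += k * dy;
  }
  return [gx, gy];
}

const balls = [{ x: 0, y: 0, r: 1 }, { x: 1.5, y: 0.5, r: 0.8 }];
const eps = 1e-5;
const [gx, gy] = analyticGradient(0.7, 0.3, balls);
// Central finite differences, like the shader snippet above.
const fx = (field(0.7 + eps, 0.3, balls) - field(0.7 - eps, 0.3, balls)) / (2 * eps);
const fy = (field(0.7, 0.3 + eps, balls) - field(0.7, 0.3 - eps, balls)) / (2 * eps);
// gx ≈ fx and gy ≈ fy to within finite-difference error.
```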

Games and Applications

Metaballs appear across many genres and contexts in game development:

  • LocoRoco (PSP, 2006): The player character is a blob that splits into smaller blobs and merges back together. The 2D metaball-style rendering creates the smooth, organic look the game is known for.
  • World of Goo (2D Boy, 2008): While the goo balls are discrete entities connected by springs, the visual rendering uses metaball-like techniques to make the goo structures look fluid and organic.
  • de Blob (Blue Tongue, 2008): The main character is a paint blob that absorbs color and grows, with metaball-inspired visuals for the merging paint effects.
  • Mushroom 11 (Untame, 2015): The player controls an amorphous organism that grows and erodes, with metaball-style rendering creating the organic cellular appearance.
  • Fluid and Particle Effects: Many games use metaballs to render fluid simulations, lava, slime, and plasma effects. The particles from a physics-based fluid simulation (such as SPH) are individually tiny, but rendering them as metaballs creates a smooth, connected liquid surface.

Beyond games, metaballs appear in scientific visualization (molecular modeling, medical imaging), procedural modeling tools (sculpting organic forms in Blender, Houdini), and demoscene productions where compact code must produce impressive visuals.

Optimizations for Real-Time Performance

For real-time applications with dozens or hundreds of metaballs, several optimizations become important:

Spatial Partitioning

Since each metaball's influence falls off with distance, you don't need to evaluate every metaball for every pixel. Use a spatial data structure (a uniform grid, quadtree, or spatial hash) to quickly find which metaballs are close enough to meaningfully affect a given point. With falloff functions that have compact support (like the Wyvill polynomial), you can skip distant metaballs entirely.
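A minimal uniform-grid sketch (illustrative class and method names, assuming a falloff that can be cut off at radius `r`) that buckets metaballs by cell so a query only sees nearby balls:

```javascript
// Uniform-grid culling: insert each ball into every cell its influence
// region overlaps, then answer point queries from that point's cell only.
class MetaballGrid {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = new Map(); // "i,j" -> array of balls
  }
  insert(ball) {
    const s = this.cellSize;
    const i0 = Math.floor((ball.x - ball.r) / s), i1 = Math.floor((ball.x + ball.r) / s);
    const j0 = Math.floor((ball.y - ball.r) / s), j1 = Math.floor((ball.y + ball.r) / s);
    for (let i = i0; i <= i1; i++) {
      for (let j = j0; j <= j1; j++) {
        const key = `${i},${j}`;
        if (!this.cells.has(key)) this.cells.set(key, []);
        this.cells.get(key).push(ball);
      }
    }
  }
  // Only the balls whose influence region reaches this point's cell.
  candidatesAt(px, py) {
    const key = `${Math.floor(px / this.cellSize)},${Math.floor(py / this.cellSize)}`;
    return this.cells.get(key) || [];
  }
}

const grid = new MetaballGrid(1.0);
grid.insert({ x: 0.5, y: 0.5, r: 0.4 });
grid.insert({ x: 9.5, y: 9.5, r: 0.4 });
// A query near the first ball never iterates over the second:
console.log(grid.candidatesAt(0.6, 0.6).length); // 1
```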

Bounding Volume Culling

In the shader, you can pass a bounding rectangle for each metaball and skip the field computation if the current pixel is outside all bounding rectangles. This reduces the per-pixel cost from $O(n)$ to roughly $O(k)$ where $k$ is the number of nearby metaballs, a significant savings when $n$ is large and the metaballs are spread across the screen.

Reduced Resolution and Upscaling

For isosurface extraction, a coarser grid still produces smooth results because you interpolate vertex positions along edges. For shader-based rendering, you can render metaballs at half or quarter resolution and upscale with a bilateral filter or temporal anti-aliasing. This trades a small amount of sharpness for a 4x to 16x reduction in fragment shader invocations.

Compute Shader Approaches

On modern hardware, compute shaders can build spatial acceleration structures on the GPU itself. A common approach tiles the screen into cells and assigns each cell a list of relevant metaballs. The fragment shader then only iterates over the metaballs in its cell, combining spatial partitioning with GPU parallelism for maximum throughput.

Wrapping Up

Metaballs are one of the more durable techniques in game graphics. The core idea, defining shapes through overlapping scalar fields and rendering the isosurface, produces organic behavior that would be difficult to replicate with explicit geometry. Whether you're building a slime creature, a fluid simulation, a lava lamp effect, or an abstract art game, metaballs give you smooth merging and splitting for free, driven by nothing more than the positions and radii of your field sources.

The interactive demo above lets you drag the metaballs around and watch them merge and separate in real time. Experiment with the threshold slider to see how it changes the isosurface: lowering it makes the blobs bigger and more eager to merge, while raising it tightens them up. That single parameter does more to shape the look of your metaball effects than almost anything else.
