Gaussian Splatting on macOS — a primer for app developers
03/24/2026 · Created by Björn Kindler
Radiance fields and Gaussian splatting are no longer a research-only demo. Here is the lay of the land for Mac developers — what they actually are, what the rendering looks like, and where RadianceKit fits in.
What a radiance field is
A radiance field is a representation of a 3D scene that, for every point in space and every viewing direction, tells you the colour and density you would see. Train it from a few dozen photographs of a real object or room, and you can render that scene from arbitrary new viewpoints — the original NeRF paper (Mildenhall et al., 2020) made this concrete with a small MLP doing the heavy lifting.
NeRF rendering is slow because every pixel of every frame is a ray-march through the MLP. Beautiful results, but minutes per frame on big GPUs. Real-time on consumer hardware was off the table.
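The cost is easy to see in a toy numerical volume-rendering loop. This is a pure-Python sketch, not any particular NeRF implementation; `field` stands in for the trained MLP, and even this simplified version makes 64 network queries for a single pixel:

```python
import math

def render_pixel(field, origin, direction, t_near=0.1, t_far=5.0, steps=64):
    """Numerical volume rendering along one camera ray: sample the field,
    accumulate colour weighted by local opacity and remaining transmittance."""
    dt = (t_far - t_near) / steps
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light not yet absorbed along the ray
    for i in range(steps):
        t = t_near + (i + 0.5) * dt
        point = tuple(o + t * d for o, d in zip(origin, direction))
        rgb, density = field(point, direction)  # one MLP query per sample
        alpha = 1.0 - math.exp(-density * dt)   # opacity of this segment
        for c in range(3):
            color[c] += transmittance * alpha * rgb[c]
        transmittance *= 1.0 - alpha
    return tuple(color)
```

Multiply those queries by every pixel of every frame and the minutes-per-frame figure stops being surprising.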
What Gaussian splatting changes
Three years later, 3D Gaussian Splatting (Kerbl, Kopanas et al., 2023) replaced the implicit MLP with an explicit cloud of millions of 3D Gaussians — each one a position, an anisotropic covariance, a colour and an opacity. To render, you project these Gaussians into screen space, sort by depth, and alpha-blend them. That is a rasterizer-friendly operation. Modern GPUs eat it for breakfast.
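The blend itself is simple once the Gaussians are projected and depth-sorted. A toy per-pixel sketch in Python (the `Splat` fields here are illustrative assumptions; a real renderer does this in parallel on the GPU, after projecting and culling the cloud):

```python
from dataclasses import dataclass

@dataclass
class Splat:
    depth: float   # distance from the camera after projection
    color: tuple   # (r, g, b), already evaluated for this view direction
    alpha: float   # opacity of this Gaussian at the pixel being shaded

def composite(splats):
    """Front-to-back alpha blending of depth-sorted splats for one pixel."""
    splats = sorted(splats, key=lambda s: s.depth)  # nearest first
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # how much light still passes through
    for s in splats:
        weight = s.alpha * transmittance
        for c in range(3):
            color[c] += weight * s.color[c]
        transmittance *= 1.0 - s.alpha
        if transmittance < 1e-4:  # early exit once the pixel is opaque
            break
    return tuple(color), 1.0 - transmittance  # final colour and coverage
```

No network queries, no ray-marching; just a sort and a blend, which is exactly the kind of work rasterization hardware is built for.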
For a Mac developer the practical consequence is that you can render a real captured scene at 60 fps on any Apple-silicon Mac, with the same visual fidelity that NeRFs offered offline. The format is dramatically smaller than a mesh + texture pipeline for the same level of detail, and it captures view-dependent effects (specular highlights, soft caustics) that meshes struggle with.
What it does not do
Splats are a rendering format, not a geometry format. You cannot reliably collide against a Gaussian cloud, extract a clean mesh from it, or edit it the way you would edit a polygonal model. The files are also still large — a high-quality capture of a single object runs to tens of megabytes; a room is in the hundreds. Compression schemes are improving fast, but the problem is not solved.
And training is not interactive at consumer scale. Today, you capture imagery on a phone, run the training on a desktop GPU or in the cloud, and bring the resulting .ply / .splat file back to your Mac for playback. On-device training is a research target, not yet a shipping technique.
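Reading those assets back is straightforward. As a sketch, here is a parser for one common community .splat layout — 32 bytes per splat: three float32 positions, three float32 scales, four RGBA bytes, and a rotation quaternion packed into four bytes. Treat the layout as an assumption and check it against the files your capture tool actually emits:

```python
import struct

SPLAT_STRIDE = 32  # bytes per splat in this assumed layout

def read_splats(buf: bytes):
    """Yield (position, scale, rgba, quaternion) tuples from a .splat buffer."""
    usable = len(buf) - len(buf) % SPLAT_STRIDE
    for off in range(0, usable, SPLAT_STRIDE):
        px, py, pz, sx, sy, sz = struct.unpack_from("<6f", buf, off)
        rgba = struct.unpack_from("4B", buf, off + 24)
        # rotation components are stored as bytes mapped from [-1, 1] to [0, 255]
        quat = tuple((q - 128) / 128.0 for q in struct.unpack_from("4B", buf, off + 28))
        yield (px, py, pz), (sx, sy, sz), rgba, quat
```

A real loader would upload the result straight into a GPU buffer rather than materialising Python tuples, but the framing is the same.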
Where RadianceKit fits
RadianceKit is the macOS-side piece of that pipeline: a Swift toolkit that loads splat assets, renders them with Metal, and gives you camera controls, lighting hooks and a viewer component to drop into your own Mac app. It does not do training. It does playback well. That separation has been the right call — viewer apps need predictable performance more than they need novel research features.
If you are a Mac developer eyeing this space, the honest advice is: try splatting before you commit to any architecture. Render a few public datasets in a small Metal sandbox. Get a feel for memory usage, fill rate and frame pacing on the Macs you actually target. Once you do, picking between “mesh + texture”, “neural field” and “Gaussian splat” for a given product gets dramatically clearer.
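A back-of-envelope calculation is enough to start that memory conversation. The per-splat size and the scene sizes below are illustrative assumptions, and the figure ignores sort buffers and other renderer working memory:

```python
def splat_memory_mb(num_splats: int, bytes_per_splat: int = 32) -> float:
    """Rough resident footprint of a splat cloud at a given per-splat size."""
    return num_splats * bytes_per_splat / (1024 ** 2)

# a few representative scene sizes (splat counts are illustrative guesses)
for label, n in [("single object", 500_000), ("room", 3_000_000), ("large scene", 10_000_000)]:
    print(f"{label}: ~{splat_memory_mb(n):.0f} MB")
```

Numbers like these make it obvious why frame pacing on an 8 GB base-model machine deserves testing before the architecture is locked in.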
Where to read more
RadianceKit has its own product site at radiancekit.de with docs, sample scenes and the SDK download. Engineering notes on the renderer and on capture-side workflows will land here on this blog as the SDK evolves.