Renderer Module API Reference
This document captures the public surface of src/renderer/. It tracks the actual TypeScript exports so integrators can configure, drive, and inspect the GaussianRenderer.
Exports
import {
GaussianRenderer,
DEFAULT_KERNEL_SIZE,
type RendererConfig,
type RenderArgs,
type RenderStats,
type IRenderer,
} from 'src/renderer';
Interfaces
RenderArgs
interface RenderArgs {
camera: PerspectiveCamera; // required view/projection provider
viewport: [number, number]; // canvas width/height in pixels
clippingBox?: { min: vec3; max: vec3 };
maxSHDegree?: number;
showEnvMap?: boolean;
mipSplatting?: boolean;
kernelSize?: number;
walltime?: number;
sceneExtend?: number;
sceneCenter?: vec3;
}
These values are merged with per-cloud metadata inside buildRenderSettings() before each dispatchModel call. Any omitted field falls back to the PointCloud defaults (bbox, center, PointCloud.kernelSize, etc.).
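A minimal call only needs camera and viewport; the sketch below adds a few illustrative overrides. It assumes gl-matrix vec3 values and reuses camera, canvas, encoder, and pointClouds from the surrounding frame code; all concrete numbers are hypothetical:
import { vec3 } from 'gl-matrix';

const args: RenderArgs = {
  camera,                                   // PerspectiveCamera supplying view/projection
  viewport: [canvas.width, canvas.height],  // render target size in pixels
  maxSHDegree: 2,                           // evaluate SH only up to degree 2 this frame
  kernelSize: 0.3,                          // overrides PointCloud.kernelSize / DEFAULT_KERNEL_SIZE
  clippingBox: {                            // restrict rendering to this axis-aligned box
    min: vec3.fromValues(-1, -1, -1),
    max: vec3.fromValues(1, 1, 1),
  },
};
renderer.prepareMulti(encoder, device.queue, pointClouds, args);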
RendererConfig
interface RendererConfig {
device: GPUDevice;
format: GPUTextureFormat; // swapchain / render target format
shDegree: number; // maximum SH degree supported by this renderer
compressed?: boolean; // reserved for future compressed data paths
debug?: boolean; // enable verbose logging & global handles
}
The legacy constructor (device, format, shDegree, compressed?) is still supported, but the config object is preferred.
RenderStats
interface RenderStats {
gaussianCount: number; // pointCloud.numPoints
visibleSplats: number; // latest sorter keys_size / cached num_points
memoryUsage: number; // coarse estimate (splat + sorter buffers)
}
IRenderer
interface IRenderer {
initialize(): Promise<void>;
prepareMulti(
encoder: GPUCommandEncoder,
queue: GPUQueue,
pointClouds: PointCloud[],
args: RenderArgs,
): void;
render(pass: GPURenderPassEncoder, pointCloud: PointCloud): void;
renderMulti(pass: GPURenderPassEncoder, pointClouds: PointCloud[]): void;
getPipelineInfo(): { format: GPUTextureFormat; bindGroupLayouts: GPUBindGroupLayout[] };
}
GaussianRenderer implements this interface and adds a handful of convenience/debug helpers documented below.
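Engine code can therefore hold the renderer through the interface and only narrow to the concrete class when the debug helpers are needed. A minimal sketch, where encoder, pointClouds, and args are placeholders from the surrounding frame code:
const renderer: IRenderer = new GaussianRenderer({ device, format, shDegree: 3 });
await renderer.initialize();
renderer.prepareMulti(encoder, device.queue, pointClouds, args);
// Debug helpers such as readInstanceCountDebug() live on GaussianRenderer, not IRenderer.
if (renderer instanceof GaussianRenderer) {
  await renderer.readInstanceCountDebug();
}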
GaussianRenderer class
Constructors
new GaussianRenderer({ device, format, shDegree, compressed?, debug? }: RendererConfig)
new GaussianRenderer(device: GPUDevice, format: GPUTextureFormat, shDegree: number, compressed?: boolean)
Both forms are equivalent; the config version supports future options without breaking call sites.
Lifecycle
- initialize(): Promise<void> - creates the sorter, dual preprocessors, pipeline layout, render/depth pipelines, indirect draw buffer, and an initial 1M-splat global buffer.
- ensureSorter(): Promise<void> - legacy alias that simply calls initialize() if needed.
Frame entry points
prepareMulti(encoder, queue, pointClouds, args)
- Ensures global capacity of at least the sum of pointCloud.numPoints (1.25× growth factor).
- Resets the sorter's indirect buffer and keys_size.
- For each point cloud: selects the SH vs RGB preprocessor and calls dispatchModel with baseOffset plus an optional ONNX countBuffer.
- Runs a single sorter.recordSortIndirect(...) and copies the visible splat count into the indirect draw buffer.
render(pass, pointCloud)
- Uses per-cloud cached sort resources (WeakMap).
- Binds pointCloud.renderBindGroup() at @group(0) and the cached sorter render bind group at @group(1).
- Issues one drawIndirect using the shared indirect buffer.
renderMulti(pass, pointClouds)
- Requires that prepareMulti was called beforehand.
- Binds the global renderBG (global splat buffer) and the global sorter render bind group, then calls drawIndirect once.
Pipeline control
- getPipelineInfo() - returns { format, bindGroupLayouts: [PointCloud.renderBindGroupLayout(device), GPURSSorter.createRenderBindGroupLayout(device)] } for external render passes.
- setDepthEnabled(enabled: boolean) - toggles whether the depth-aware pipeline variant is used in subsequent draws.
- setDepthFormat(format: GPUTextureFormat) - updates the depth attachment format (depth24plus by default); recreates the depth pipeline to match the new attachment format.
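For example, an engine that builds its own pipelines can derive a compatible layout from the reported bind group layouts; a sketch of one possible use (the external pipeline itself is out of scope here):
const { format, bindGroupLayouts } = renderer.getPipelineInfo();
// Layout compatible with the renderer's @group(0)/@group(1) expectations.
const layout = device.createPipelineLayout({ bindGroupLayouts });
// `format` should also be used as the color target format of the external pass.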
Diagnostics & stats
- getRenderStats(pointCloud) - wraps RenderStats for UI overlays or logging.
- readInstanceCountDebug() - GPU→CPU readback of the current indirect instance count.
- readPayloadSampleDebug(n = 8) - dumps the first n payload indices from the global sorter buffers (requires prepareMulti / global buffers).
- debugONNXCount() - hooks into the preprocessor's debug routine when ONNX-driven counts are active.
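A sketch of polling stats for an on-screen overlay, assuming the call is synchronous, memoryUsage is reported in bytes, and overlay is a placeholder DOM element:
const stats: RenderStats = renderer.getRenderStats(pointCloud);
overlay.textContent =
  `gaussians: ${stats.gaussianCount} | ` +
  `visible: ${stats.visibleSplats} | ` +
  `memory: ${(stats.memoryUsage / (1024 * 1024)).toFixed(1)} MiB`;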
Utilities
DEFAULT_KERNEL_SIZE - exported constant (0.3) used whenever neither RenderArgs.kernelSize nor PointCloud.kernelSize is provided.
Usage patterns
Multi-model frame
const renderer = new GaussianRenderer({ device, format, shDegree: 3 });
await renderer.initialize();
renderer.prepareMulti(encoder, device.queue, pointClouds, {
camera,
viewport: [canvas.width, canvas.height],
maxSHDegree: 3,
});
const pass = encoder.beginRenderPass(passDesc);
renderer.renderMulti(pass, pointClouds);
pass.end();
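The encoder and passDesc above are ordinary WebGPU objects created by the caller each frame; the surrounding boilerplate looks roughly like this:
const encoder = device.createCommandEncoder();
// ... prepareMulti and the render pass from the snippet above ...
device.queue.submit([encoder.finish()]);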
Per-model rendering (legacy path)
Legacy Path refers to the per-model rendering approach used before the introduction of multi-model batching (prepareMulti/renderMulti). While still supported, the batched approach is recommended even for single models.
Legacy Path Characteristics:
- Uses render(pass, pointCloud) method, called separately for each model
- Uses each point cloud's own splat2DBuffer (managed by PointCloud module)
- Uses cached per-cloud sort resources (WeakMap<PointCloud, PointCloudSortStuff>)
- Executes separate draw calls for each model
const pointCloud = loadPointCloud();
renderer.prepareMulti(encoder, device.queue, [pointCloud], args); // still recommended
renderer.render(pass, pointCloud); // uses per-cloud cache, separate draw
Note: Even for a single model, renderMulti() is recommended as it uses global buffers and offers better performance.
Depth pipeline toggle
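A minimal sketch combining the two depth-related setters from Pipeline control; creating the depth texture and attaching it to the pass remains the caller's responsibility:
renderer.setDepthFormat('depth32float');  // match the pass's depth attachment format
renderer.setDepthEnabled(true);           // subsequent draws use the depth-aware pipeline
// ... record the render pass with a matching depthStencilAttachment ...
renderer.setDepthEnabled(false);          // revert to the depth-less variant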
Debug helpers
await renderer.readInstanceCountDebug();
await renderer.readPayloadSampleDebug(16);
await renderer.debugONNXCount();
Notes
- Always call prepareMulti before renderMulti; preprocessing populates the sorter buffers and indirect draw counts.
- If you only render a single point cloud, caching is still active, but capacity management may skip global buffers until prepareMulti is used.
- Statistics and debug utilities read back GPU buffers; they should be used sparingly in production builds.