Preprocessing Module API Reference

This document describes the public API surface of src/preprocess, focusing on the concrete GaussianPreprocessor that the renderer uses for multi-model dispatch.

Module Exports

// src/preprocess/index.ts
export interface IPreprocessor {
  initialize(device: GPUDevice, shDegree: number): Promise<void>;
  getBindGroupLayout(device: GPUDevice): GPUBindGroupLayout;
}
export interface PreprocessResults { /* reserved for future use */ }
export { GaussianPreprocessor } from './gaussian_preprocessor';

Render Settings Shape

The settings field passed to dispatchModel uses the following structure:

interface RenderSettings {
  gaussianScaling: number;
  maxSHDegree: number;
  showEnvMap: boolean;
  mipSplatting: boolean;
  kernelSize: number;
  walltime: number;
  sceneExtend: number;
  center: Float32Array;        // vec3
  clippingBoxMin: Float32Array; // vec3
  clippingBoxMax: Float32Array; // vec3
}

Values are packed into the 80‑byte render-settings uniform exactly in the order above (see architecture doc for offsets).
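The packing can be sketched as follows. The exact offsets are defined in the architecture doc; the ones below are illustrative assumptions derived only from the field order, the 80-byte total, and the 16-byte alignment of the vec3 fields.

```typescript
// Hypothetical sketch of the 80-byte render-settings packing; offsets are
// assumptions inferred from the field order, not taken from the real code.
function packRenderSettings(s: {
  gaussianScaling: number; maxSHDegree: number; showEnvMap: boolean;
  mipSplatting: boolean; kernelSize: number; walltime: number;
  sceneExtend: number; center: Float32Array;
  clippingBoxMin: Float32Array; clippingBoxMax: Float32Array;
}): ArrayBuffer {
  const buf = new ArrayBuffer(80);
  const dv = new DataView(buf);
  dv.setFloat32(0, s.gaussianScaling, true);   // little-endian throughout
  dv.setUint32(4, s.maxSHDegree, true);
  dv.setUint32(8, s.showEnvMap ? 1 : 0, true);
  dv.setUint32(12, s.mipSplatting ? 1 : 0, true);
  dv.setFloat32(16, s.kernelSize, true);
  dv.setFloat32(20, s.walltime, true);
  dv.setFloat32(24, s.sceneExtend, true);
  // bytes 28-31 stay zero as padding so the vec3s sit on 16-byte boundaries
  for (let i = 0; i < 3; i++) dv.setFloat32(32 + 4 * i, s.center[i], true);
  for (let i = 0; i < 3; i++) dv.setFloat32(48 + 4 * i, s.clippingBoxMin[i], true);
  for (let i = 0; i < 3; i++) dv.setFloat32(64 + 4 * i, s.clippingBoxMax[i], true);
  return buf;
}
```

Each vec3 occupies 12 bytes followed by 4 bytes of padding, which is what brings the seven leading scalars plus three vec3s to exactly 80 bytes.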

DispatchModelArgs

dispatchModel receives an argument object of the following shape (mirroring how GaussianRenderer.prepareMulti calls it):

interface DispatchModelArgs {
  camera: PerspectiveCamera;
  viewport: [number, number];
  pointCloud: PointCloud;
  sortStuff: PointCloudSortStuff; // from GPURSSorter
  settings: RenderSettings;
  modelMatrix: Float32Array;      // 4×4 transform
  baseOffset: number;             // slice inside global splat buffer
  global: { splat2D: GPUBuffer }; // shared output buffer
  countBuffer?: GPUBuffer;        // optional ONNX-produced count
}

GaussianPreprocessor

class GaussianPreprocessor implements IPreprocessor {
  constructor();
  initialize(device: GPUDevice, shDegree: number, useRawColor?: boolean): Promise<void>;
  dispatchModel(args: DispatchModelArgs, encoder: GPUCommandEncoder): void;
  getBindGroupLayout(device: GPUDevice): GPUBindGroupLayout;
  debugCountValues(): Promise<void>;
}

initialize(device, shDegree, useRawColor = false)

  • Creates two UniformBuffer instances (camera: 272 B, settings: 80 B).
  • Builds the pipeline layout described in the architecture doc (4 bind groups).
  • Injects the requested SH degree into preprocess.wgsl and compiles the shader module.
  • Sets the USE_RAW_COLOR pipeline constant when useRawColor is true (bypasses SH evaluation and interprets the SH buffer as direct RGBA).
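A minimal sketch of how the raw-color toggle could map onto the pipeline constant (the helper name is hypothetical; the real initialize() also builds layouts and compiles the WGSL module):

```typescript
// Sketch only: deriving the WebGPU pipeline-constant record from the
// useRawColor flag. The helper name is hypothetical.
function shaderConstants(useRawColor: boolean): Record<string, number> {
  // When set, preprocess.wgsl skips SH evaluation and reads the SH buffer
  // as direct RGBA values.
  return useRawColor ? { USE_RAW_COLOR: 1 } : {};
}
```

The record would then be passed as `constants` in the compute-stage descriptor of createComputePipeline.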

dispatchModel(args, encoder)

Records a compute pass that writes one point cloud into the global splat buffer slice defined by baseOffset. Internally it:

  1. Packs and flushes camera + settings uniforms from scratch buffers.
  2. Calls PointCloud.updateModelParamsWithOffset(modelMatrix, baseOffset) and, if present, pointCloud.setPrecisionForShader().
  3. Flushes the point cloud's model-params buffer and optionally overwrites num_points (byte offset 68) from countBuffer using encoder.copyBufferToBuffer.
  4. Creates temporary bind groups for group 1 (gaussian/SH buffers + global splat output) and group 3 (settings + model params) and reuses prebuilt bind groups for the other slots.
  5. Dispatches ceil(pointCloud.numPoints / 256) workgroups.

The method does not return a value; results are written to GPU buffers referenced in args.
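Step 5's workgroup count reduces to a one-liner, with 256 being the workgroup size stated above:

```typescript
// One shader invocation per splat, 256 invocations per workgroup (step 5).
const WORKGROUP_SIZE = 256;
function workgroupCount(numPoints: number): number {
  return Math.ceil(numPoints / WORKGROUP_SIZE);
}
```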

getBindGroupLayout(device)

Returns the camera uniform bind-group layout (group 0). This is mainly used by systems that need to embed the preprocessor's camera uniforms in their own pipeline layouts. Other bind-group layouts are internal to the class.

debugCountValues()

If dispatchModel ran with a dynamic countBuffer, the preprocessor stores references to the source/destination buffers. Calling debugCountValues() uses debugCountPipeline (see src/utils/debug-gpu-buffers.ts) to print the ONNX count alongside the model-params uniform, which is handy when diagnosing indirect draw mismatches.

Uniform Helpers

packCameraUniforms(camera, viewport)

  • Writes view matrix, view inverse, projection matrix (after applying VIEWPORT_Y_FLIP), projection inverse, viewport size, and camera.projection.focal(viewport) into the scratch buffer.
  • Calls UniformBuffer.setData(Float32Array) and flush(device).
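The 272-byte size follows from the listed order: four 4×4 matrices (64 B each) plus two vec2s. A hedged sketch of the scratch-buffer layout, with float offsets assumed from that order:

```typescript
// Hypothetical layout of the 272-byte camera scratch buffer, assuming the
// order listed above; matrices are 16 floats each in column-major order.
function packCameraScratch(
  view: Float32Array, viewInv: Float32Array,
  proj: Float32Array, projInv: Float32Array, // proj with VIEWPORT_Y_FLIP applied
  viewport: [number, number], focal: [number, number],
): Float32Array {
  const out = new Float32Array(68); // 68 floats * 4 B = 272 B
  out.set(view, 0);
  out.set(viewInv, 16);
  out.set(proj, 32);
  out.set(projInv, 48);
  out.set(viewport, 64);
  out.set(focal, 66);
  return out;
}
```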

packSettingsUniforms(pointCloud, settings)

  • Serializes the RenderSettings struct into a DataView with little-endian writes.
  • Pads to the required 16‑byte alignment before writing the scene center.
  • Calls UniformBuffer.setData(dataView) and flush(device).

Dependencies

  • PointCloud (see doc/modules/03-point_cloud) — owns the Gaussian/SH buffers, draw uniforms, and model-params uniform.
  • PointCloudSortStuff (from src/sort/radix_sort.ts) — provides sorter_bg_pre, sorter_uni, sorter_dis, and the ping-pong buffers used by preprocessing and rendering.
  • PerspectiveCamera — supplies the view/projection matrices and the focal computation consumed by packCameraUniforms.

Error Handling

  • Matrix inversion inside packCameraUniforms throws if the view or projection matrix is near-singular (|determinant| < 1e-6).
  • initialize propagates WebGPU pipeline creation errors (e.g., unsupported shader features).
  • dispatchModel assumes the supplied buffers are large enough; callers must ensure global capacities (splat buffer + sorter) exceed the sum of point counts.
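The last bullet's invariant can be validated up front on the caller side. A hypothetical guard (name and signature are illustrative, not part of the module):

```typescript
// Hypothetical caller-side guard: the shared splat buffer must hold the
// sum of all model point counts before any dispatchModel call is recorded.
function assertCapacity(pointCounts: number[], splatCapacity: number): void {
  const total = pointCounts.reduce((a, b) => a + b, 0);
  if (total > splatCapacity) {
    throw new RangeError(
      `splat buffer holds ${splatCapacity} splats but models need ${total}`,
    );
  }
}
```

Running such a check once per frame is cheap and turns a silent out-of-bounds GPU write into a clear CPU-side error.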

Usage Snapshot

const preprocessorSH = new GaussianPreprocessor();
await preprocessorSH.initialize(device, 3, false);

const preprocessorRGB = new GaussianPreprocessor();
await preprocessorRGB.initialize(device, 0, true);

const encoder = device.createCommandEncoder();
let offset = 0;
for (const pc of pointClouds) {
  const pre = pc.colorMode === 'rgb' ? preprocessorRGB : preprocessorSH;
  pre.dispatchModel({
    camera,
    viewport: [width, height],
    pointCloud: pc,
    sortStuff: globalSortStuff,
    settings: buildRenderSettings(pc, renderArgs),
    modelMatrix: pc.transform,
    baseOffset: offset,
    global: { splat2D: globalSplatBuffer },
    countBuffer: 'countBuffer' in pc ? pc.countBuffer?.() : undefined,
  }, encoder);
  offset += pc.numPoints;
}