Three.js Integration Module
The Three.js integration module is the glue between Visionary’s WebGPU Gaussian renderer and the regular Three.js scene graph. Instead of compositing two HTML canvases, Visionary runs everything on the same WebGPU device that powers `THREE.WebGPURenderer`. This keeps meshes, splats, depth, and gizmo overlays perfectly in sync while staying inside the familiar Three.js render loop.
Overview
- Single-device, single-canvas architecture. Gaussian splats are rendered into the same swap-chain texture (and optional depth texture) that Three.js uses.
- `GaussianThreeJSRenderer` orchestrates the hybrid pipeline: it captures scene depth, runs Gaussian preprocessing/splatting, and composites gizmos.
- `GaussianSplattingThreeWebGPU` exposes the low-level integration used by `GaussianThreeJSRenderer` and can be reused by tools that need direct render-pass control.
- `CameraAdapter` / `DirectCameraAdapter` translate `THREE.PerspectiveCamera` view/projection data into the WebGPU-friendly math used by the Gaussian renderer.
- `GaussianModel` extends `THREE.Object3D` so transformations, visibility, and animation settings automatically propagate to the GPU buffers.
```
┌──────────────────────────────┐         ┌──────────────────────────────┐
│ Three.js Scene Graph         │         │ Visionary Render Stack       │
│ - Meshes / Lights / Gizmos   │         │ - GaussianRenderer (compute) │
│ - GaussianModel instances    │◀───────▶│ - CameraAdapter              │
│ - GaussianThreeJSRenderer    │         │ - Overlay compositor         │
└──────────────────────────────┘         └──────────────────────────────┘
                ▲                                        │
                └──── shared GPUDevice / command encoders ┘
```
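The shared device is the linchpin: Visionary never creates its own `GPUAdapter`/`GPUDevice`, it borrows the one Three.js already initialized. A minimal sketch of that hand-off, assuming the `backend.device` access used in the example at the end of this page (an internal detail whose import path and location may shift between three.js releases):

```ts
import * as THREE from 'three/webgpu';

const canvas = document.querySelector('canvas')!;
const renderer = new THREE.WebGPURenderer({ canvas });
await renderer.init(); // adapter/device creation happens here

// Gaussian passes record onto this same device and queue, so splat data
// never has to cross a canvas or device boundary.
const device = (renderer.backend as { device: GPUDevice }).device;
console.log('shared device limits:', device.limits.maxBufferSize);
```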
Core Responsibilities
GaussianThreeJSRenderer (src/app/GaussianThreeJSRenderer.ts)
- Lives inside the scene graph (extends `THREE.Mesh`) so it receives `onBeforeRender` callbacks (see the sketch after this list).
- Captures the Three.js scene into an internal `RenderTarget` + `DepthTexture` via `renderThreeScene`.
- Schedules Gaussian preprocessing (compute + sorting) inside `onBeforeRender`, then draws splats in `drawSplats`, reusing the same `GPUCanvasContext`.
- Provides Auto Depth Mode (default), which feeds the captured depth buffer back into the WebGPU render pass so meshes occlude splats automatically.
- Offers optional gizmo overlay rendering (`renderOverlayScene`), which is composited after splats.
- Manages runtime parameters for every `GaussianModel`: SH degree, kernel size, opacity/cutoff/gain, animation clocks, visibility, and more.
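The `onBeforeRender` trick is plain Three.js: any visible object gets that callback while the renderer traverses the scene. A stripped-down sketch of the pattern (a hypothetical stand-in, not the real implementation):

```ts
import * as THREE from 'three';

// Hypothetical reduction of GaussianThreeJSRenderer's hook mechanism: an
// invisible Mesh whose onBeforeRender fires once per frame, giving it a
// spot inside the render loop without owning the loop itself.
class FrameHook extends THREE.Mesh {
  constructor(private onFrame: (camera: THREE.Camera) => void) {
    super(new THREE.BufferGeometry(), new THREE.MeshBasicMaterial());
    this.frustumCulled = false; // always traversed, even with empty geometry
  }

  onBeforeRender = (
    _renderer: unknown,
    _scene: THREE.Scene,
    camera: THREE.Camera
  ): void => {
    this.onFrame(camera); // e.g. run Gaussian preprocessing here
  };
}
```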
GaussianSplattingThreeWebGPU (src/three-integration/GaussianSplattingThreeWebGPU.ts)
- Minimal integration helper for consumers that already manage their own command encoders.
- Reuses the `GPUDevice` from `THREE.WebGPURenderer`, handles point-cloud loading, and exposes a `render()` method that expects the caller to supply a `GPUCommandEncoder`, color/depth views, and a synced camera (summarized as an interface after this list).
- Ships with `DirectCameraAdapter`, the standalone version of the camera conversion logic used elsewhere in the app.
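For orientation, the contract implied by the “Outside the App Shell” example near the bottom of this page can be summarized as an interface. This is an inferred sketch, not the verbatim source declaration:

```ts
import type * as THREE from 'three';

// Assumed summary of the surface area, inferred from the usage example below.
interface GaussianSplattingThreeWebGPULike {
  initialize(device: GPUDevice): Promise<void>;
  loadPLY(url: string): Promise<void>;
  render(
    encoder: GPUCommandEncoder,       // caller-owned command encoder
    colorView: GPUTextureView,        // usually the swap-chain view
    camera: THREE.PerspectiveCamera,  // converted via DirectCameraAdapter
    viewportSize: [number, number],   // drawing-buffer width / height
    depthView?: GPUTextureView        // optional occluder depth
  ): void;
}
```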
CameraAdapter & GaussianModel
`CameraAdapter` mirrors `DirectCameraAdapter` but is packaged for reuse across the Visionary app (dynamic models, editors, exporters). `GaussianModel` keeps Object3D transforms and Gaussian renderer buffers in lockstep, automatically syncing transforms whenever TRS changes (a sketch of the idea follows). It also proxies animation controls for ONNX/dynamic splats.
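The transform-sync idea fits in a few lines. A hypothetical reduction (names like `modelBuffer` are assumptions, not the real fields): override `updateMatrixWorld` and push the world matrix into a GPU uniform buffer so splat preprocessing sees exactly what the scene graph sees:

```ts
import * as THREE from 'three';

// Hypothetical sketch of the Object3D → GPU-buffer lockstep described above.
// The real GaussianModel manages its own buffers inside the renderer.
class SyncedModel extends THREE.Object3D {
  constructor(
    private device: GPUDevice,
    private modelBuffer: GPUBuffer // 64-byte uniform holding a mat4x4<f32>
  ) {
    super();
  }

  updateMatrixWorld(force?: boolean): void {
    super.updateMatrixWorld(force); // recompute matrixWorld from TRS
    // Upload the column-major world matrix; a real implementation would
    // skip redundant uploads when the transform has not changed.
    this.device.queue.writeBuffer(
      this.modelBuffer,
      0,
      new Float32Array(this.matrixWorld.elements)
    );
  }
}
```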
Frame Flow (Mixed Rendering Loop)
1. Update dynamics (optional) – `gaussianRenderer.updateDynamicModels(camera, time)` runs ONNX-backed point clouds so deformation always matches the current camera matrices.
2. Overlay / gizmo pass (optional) – Auxiliary Three.js scenes (gizmos, helpers, HUD) are rendered into `gizmoOverlayRT`.
3. Scene pass – `renderThreeScene(camera)` renders the main Three.js scene into a half-float render target, captures its depth buffer, then blits the color buffer to the canvas via a WebGPU fullscreen pass (linear → sRGB conversion included; sketched after this list).
4. Gaussian preprocess – Inside `onBeforeRender`, the renderer gathers visible `GaussianModel` instances, syncs transforms to the GPU, and runs `GaussianRenderer.prepareMulti(...)` on the shared device.
5. Gaussian draw + composite – `drawSplats(...)` renders splats directly into the current swap-chain view. When auto depth is enabled, the previously captured depth texture is plugged in so meshes occlude splats. If a gizmo overlay exists, it is composited as the final fullscreen pass.
This ordering guarantees deterministic depth, avoids redundant scene renders, and removes the need for dual canvases or WebGL fallbacks.
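Step 3’s blit deserves a closer look: because the scene lands in a half-float (linear) target, the fullscreen pass must encode to sRGB when writing to the canvas. Below is an assumed sketch of that conversion using the standard piecewise sRGB transfer function, not Visionary’s actual shader:

```ts
// Assumed sketch of the linear → sRGB fullscreen blit from step 3.
const blitFragmentWGSL = /* wgsl */ `
  @group(0) @binding(0) var srcTex: texture_2d<f32>;
  @group(0) @binding(1) var srcSampler: sampler;

  // Standard piecewise sRGB encode, applied per channel.
  fn linearToSrgb(c: vec3<f32>) -> vec3<f32> {
    let lo = c * 12.92;
    let hi = 1.055 * pow(c, vec3<f32>(1.0 / 2.4)) - 0.055;
    return select(hi, lo, c <= vec3<f32>(0.0031308));
  }

  @fragment
  fn fs(@location(0) uv: vec2<f32>) -> @location(0) vec4<f32> {
    let linearColor = textureSample(srcTex, srcSampler, uv);
    return vec4<f32>(linearToSrgb(linearColor.rgb), linearColor.a);
  }
`;
```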
Auto Depth Mode Highlights
- Uses `THREE.RenderTarget` + `THREE.DepthTexture` (HalfFloat color, Float depth) sized to the drawing buffer.
- Blits through a WebGPU render pass so we stay in control of color-space conversion and format compatibility.
- `setAutoDepthMode(false)` gives advanced users manual control: you can feed your own occluder meshes via `setOccluderMeshes`, but auto depth should be preferred. (A sketch of the depth hookup follows.)
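In WebGPU terms, auto depth amounts to attaching the captured depth texture with `depthLoadOp: 'load'` instead of clearing it. A minimal sketch, with `encoder`, `context`, and `capturedDepth` assumed to come from the shared device and the scene pass:

```ts
declare const encoder: GPUCommandEncoder;
declare const context: GPUCanvasContext;
declare const capturedDepth: GPUTexture;

// Loading — not clearing — the captured depth means mesh fragments written
// by the Three.js pass keep occluding the splats drawn here.
const splatPass = encoder.beginRenderPass({
  colorAttachments: [{
    view: context.getCurrentTexture().createView(),
    loadOp: 'load',   // preserve the blitted Three.js color
    storeOp: 'store',
  }],
  depthStencilAttachment: {
    view: capturedDepth.createView(),
    depthLoadOp: 'load',  // reuse scene depth instead of clearing to 1.0
    depthStoreOp: 'store',
  },
});
// ... set the splat pipeline + bind groups, draw, then splatPass.end()
```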
Why the Shared-Device Strategy?
- Zero copies – Gaussian data never leaves GPU memory between preprocess, sort, and draw.
- Consistent state – No divergent camera math or DOM overlays; everything uses the same matrices, pixel ratio, and visibility state.
- Extensibility – Because we run in Three.js’ render loop, post-processing stacks, XR sessions, OrbitControls, etc., continue to work (see the sketch after this list).
- Diagnostics – The renderer exposes hooks like `diagnoseDepth()` and `disposeDepthResources()` to inspect and reset the pipeline without tearing down the scene.
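Extensibility in practice: stock Three.js controls need no adapter shims, because the camera they mutate is the same camera the bridge converts each frame. A small sketch assuming the `camera` and `canvas` variables from the Quick Start below:

```ts
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

// OrbitControls mutates `camera` directly; CameraAdapter picks up the new
// view matrix on the next frame, so no extra wiring is needed.
const controls = new OrbitControls(camera, canvas);
controls.enableDamping = true;

// Inside the render loop, before renderThreeScene/drawSplats:
controls.update();
```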
Quick Start (Visionary App)
```ts
import { GaussianThreeJSRenderer } from 'src/app/GaussianThreeJSRenderer';
import { GaussianModel } from 'src/app/GaussianModel';

// `canvas`, `scene`, `camera`, `gizmoScene`, `loadedEntries`, and
// `initThreeContext` come from the surrounding app shell.
const threeRenderer = await initThreeContext(canvas); // r155+ WebGPU renderer

const gaussianModels = loadedEntries.map(entry => new GaussianModel(entry));
gaussianModels.forEach(model => scene.add(model));

const gaussianRenderer = new GaussianThreeJSRenderer(threeRenderer, scene, gaussianModels);
await gaussianRenderer.init();
scene.add(gaussianRenderer); // ensures onBeforeRender hooks fire

function animate(timeMs: number) {
  requestAnimationFrame(animate);
  gaussianRenderer.updateDynamicModels(camera, timeMs * 0.001);
  gaussianRenderer.renderOverlayScene(gizmoScene, camera); // optional
  gaussianRenderer.renderThreeScene(camera);
  gaussianRenderer.drawSplats(threeRenderer, scene, camera);
}
animate(0);
```
Outside the App Shell
If you already own the render loop and just need to draw splats into a WebGPU framebuffer:
```ts
import { GaussianSplattingThreeWebGPU } from 'src/three-integration/GaussianSplattingThreeWebGPU';

// `webgpuRenderer`, `context`, `camera`, `width`, `height`, and
// `depthTexture` are assumed to come from your own setup code.
const gs = new GaussianSplattingThreeWebGPU();
const device = webgpuRenderer.backend.device; // shared GPUDevice from THREE.WebGPURenderer
await gs.initialize(device);
await gs.loadPLY('/assets/room.ply');

// Per frame: record the splat pass into your own encoder.
const encoder = device.createCommandEncoder();
gs.render(
  encoder,
  context.getCurrentTexture().createView(),
  camera,
  [width, height],
  depthTexture?.createView() // optional: lets your meshes occlude splats
);
device.queue.submit([encoder.finish()]);
```
Related Docs
- Architecture – Command flow diagrams, auto-depth internals, and diagnostic tooling.
- API Reference – Constructor signatures, lifecycle hooks, and Three.js bridge helpers.
- Renderer Module – Shares the core WebGPU renderer that powers the bridge.
- Camera Module – Details the `CameraAdapter` interface mirrored here.
- Controls Module – Explains how DOM events feed the adapter pipeline.