Three.js Integration Module API Reference
This document covers the main classes that make up Visionary’s Three.js integration layer.
Table of Contents
- GaussianThreeJSRenderer
- GaussianSplattingThreeWebGPU
- CameraAdapter & DirectCameraAdapter
- GaussianModel
- GridHelper
- DebugHelpers
- ThreeJsCameraAdapter (legacy)
GaussianThreeJSRenderer
High-level orchestrator that lives inside a Three.js scene and drives the hybrid WebGPU pipeline.
`constructor(renderer: THREE.WebGPURenderer, scene: THREE.Scene, gaussianModels: GaussianModel[])`

```ts
const gaussianRenderer = new GaussianThreeJSRenderer(webgpuRenderer, scene, gaussianModels);
scene.add(gaussianRenderer);   // enables onBeforeRender hook
await gaussianRenderer.init(); // prepares GaussianRenderer sorter
```
Rendering lifecycle
| Method | Description |
|---|---|
| `init(): Promise<void>` | Ensures GPU sorter pipelines are compiled. Call once after construction. |
| `renderThreeScene(camera: THREE.Camera): void` | Renders the entire scene into an internal HalfFloat RenderTarget, captures its DepthTexture, and blits the color buffer to the canvas (linear→sRGB). Required when auto depth mode is enabled (default). |
| `drawSplats(renderer: THREE.WebGPURenderer, scene: THREE.Scene, camera: THREE.Camera): boolean` | Runs the Gaussian render pass inside the current swap chain. Returns `false` when no visible Gaussian models exist. Automatically composites gizmo overlays when present. |
| `renderOverlayScene(scene: THREE.Scene, camera: THREE.Camera): void` | Renders helper scenes (gizmos, HUD) into `gizmoOverlayRT`. Call before `drawSplats`. |
| `updateDynamicModels(camera: THREE.Camera, time?: number): Promise<void>` | Invokes `GaussianModel.update(...)` on every registered model so ONNX point clouds receive up-to-date view/projection transforms. |
Depth & diagnostics
- `setAutoDepthMode(enabled: boolean): void` – toggles automatic depth capture. When disabled, you must supply occluders via `setOccluderMeshes(meshes: THREE.Mesh[])`.
- `diagnoseDepth(): void` – logs the current depth configuration, render-target sizes, and GaussianRenderer depth state.
- `disposeDepthResources(): void` – releases cached render/overlay targets so the next frame recreates them (handy after device loss or canvas resize issues).
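For example, switching from automatic depth capture to explicit occluders might look like the following sketch (the mesh variables are illustrative):

```ts
// Sketch only – `floorMesh` and `wallMesh` are illustrative occluder meshes from your scene.
gaussianRenderer.setAutoDepthMode(false);                  // stop capturing depth automatically
gaussianRenderer.setOccluderMeshes([floorMesh, wallMesh]); // splats are now occluded by these meshes only
gaussianRenderer.diagnoseDepth();                          // log the resulting depth configuration
```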
Model management helpers
- `appendGaussianModel(model: GaussianModel): void`
- `removeModelById(modelId: string): boolean` – IDs follow the `model_{index}` pattern.
- `getGaussianModels(): GaussianModel[]` – returns a shallow copy of the internal list.
- `getModelParams(): { models: Record<string, { id: string; name: string; visible: boolean; gaussianScale: number; maxShDeg: number; kernelSize: number; opacityScale: number; cutoffScale: number; timeScale: number; timeOffset: number; timeUpdateMode: string | number; animationSpeed: number; isAnimationRunning: boolean; isAnimationPaused: boolean; }> }` – collects visibility and spline parameters for UI panels.
- `setModelVisible(modelId: string, visible: boolean): void`
- `getModelVisible(modelId: string): boolean`
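A minimal sketch of registering a model and driving a UI panel from `getModelParams()` (the `model_0` ID is illustrative):

```ts
// Sketch only – assumes `newModel` is an already-loaded GaussianModel.
gaussianRenderer.appendGaussianModel(newModel);

const { models } = gaussianRenderer.getModelParams();
for (const [id, params] of Object.entries(models)) {
  console.log(id, params.name, params.visible, params.gaussianScale);
}

gaussianRenderer.setModelVisible('model_0', false); // hide a model by ID
gaussianRenderer.removeModelById('model_0');        // or remove it entirely
```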
Per-model parameter controls
Each setter has a matching getter (`getModelGaussianScale`, `getModelOpacityScale`, …).

- `setModelGaussianScale(modelId: string, scale: number)`
- `setModelMaxShDeg(modelId: string, deg: number)`
- `setModelKernelSize(modelId: string, size: number)`
- `setModelOpacityScale(modelId: string, scale: number)`
- `setModelCutoffScale(modelId: string, scale: number)`
- `setModelRenderMode(modelId: string, mode: number)`
- `setModelAnimationIsLoop(modelId: string, loop: boolean)`
- `setModelTimeScale(modelId: string, scale: number)`
- `setModelTimeOffset(modelId: string, offset: number)`
- `setModelTimeUpdateMode(modelId: string, mode: string | number)` – forwarded to the underlying `GaussianModel`.
- `setModelAnimationTime(modelId: string, time: number)`
- `setModelAnimationSpeed(modelId: string, speed: number)`
- `startModelAnimation(modelId: string, speed?: number)`
- `pauseModelAnimation(modelId: string)`
- `resumeModelAnimation(modelId: string)`
- `stopModelAnimation(modelId: string)`
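A short sketch of tuning and animating a single model (the `model_0` ID and all values are illustrative):

```ts
// Sketch only – ID and values are illustrative.
gaussianRenderer.setModelGaussianScale('model_0', 1.2);
gaussianRenderer.setModelMaxShDeg('model_0', 2);
gaussianRenderer.setModelOpacityScale('model_0', 0.8);
console.log(gaussianRenderer.getModelGaussianScale('model_0')); // matching getter

// Animation controls for a single 4D model.
gaussianRenderer.startModelAnimation('model_0', 1.0);
gaussianRenderer.pauseModelAnimation('model_0');
gaussianRenderer.resumeModelAnimation('model_0');
```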
Global animation controls
- `setGlobalTimeScale(scale: number)`
- `setGlobalTimeOffset(offset: number)`
- `setGlobalTimeUpdateMode(mode: string | number)`
- `startAllAnimations(speed?: number)`
- `pauseAllAnimations()`
- `resumeAllAnimations()`
- `stopAllAnimations()`
- `setAllAnimationTime(time: number)`
- `setAllAnimationSpeed(speed: number)`
- `resetParameters()` – restores global defaults (scale = 1, SH degree = 3, kernel = 0.1, etc.).
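For instance, to drive every registered model at once (values are illustrative):

```ts
// Sketch only – values are illustrative and apply to all registered models.
gaussianRenderer.setGlobalTimeScale(0.5); // play all 4D models at half speed
gaussianRenderer.startAllAnimations();
// ...later
gaussianRenderer.stopAllAnimations();
gaussianRenderer.resetParameters();       // back to scale = 1, SH degree = 3, kernel = 0.1
```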
GaussianSplattingThreeWebGPU
Low-level helper for integrating Gaussian splats directly into a WebGPU render graph.
Lifecycle
```ts
const gs = new GaussianSplattingThreeWebGPU();
await gs.initialize(webgpuRenderer.backend.device);
await gs.loadPLY('/models/atrium.ply', p => console.log(p));
```

- `initialize(device: GPUDevice): Promise<void>` – must be called before any load or render.
- `loadPLY(url: string, onProgress?: (info: { progress: number }))`
- `loadFile(file: File, onProgress?: (info: { progress: number }))`
- `setDepthEnabled(enabled: boolean)` – toggles the internal pipeline variant.
- `setVisible(visible: boolean)`
- `render(commandEncoder: GPUCommandEncoder, textureView: GPUTextureView, camera: THREE.PerspectiveCamera, viewport: [number, number], depthView?: GPUTextureView): void` – updates the camera adapter, runs `prepareMulti`, and encodes a render pass that writes into `textureView`. Supply `depthView` when you need mesh occlusion.
- `numPoints: number` (getter) – returns the total splat count of the currently loaded point cloud.
- `dispose()` – releases GPU references.
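A minimal per-frame sketch of encoding the splat pass yourself; `device`, `context`, `camera`, `canvas`, and `depthView` are assumed to come from your own WebGPU/Three.js setup:

```ts
// Sketch only – plumbing variables come from your own setup.
function renderFrame() {
  const encoder = device.createCommandEncoder();
  const colorView = context.getCurrentTexture().createView();

  gs.render(
    encoder,
    colorView,
    camera,                        // THREE.PerspectiveCamera
    [canvas.width, canvas.height], // viewport in pixels
    depthView                      // optional: enables mesh occlusion
  );

  device.queue.submit([encoder.finish()]);
}
```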
CameraAdapter & DirectCameraAdapter
Two flavors of the same logic:
| Adapter | Location | Use case |
|---|---|---|
| `DirectCameraAdapter` | src/three-integration/GaussianSplattingThreeWebGPU.ts | Packaged with the low-level helper. |
| `CameraAdapter` | src/camera/CameraAdapter.ts | Reused throughout the Visionary app (dynamic models, editors). |
Shared API:
- `update(camera: THREE.PerspectiveCamera, viewport: [number, number]): void`
- `viewMatrix(): mat4`
- `projMatrix(): mat4`
- `position(): Float32Array`
- `frustumPlanes(): Float32Array`
- `projection.focal(viewport?: [number, number]): [number, number]`
- Flags: `transposeRotation`, `flipProjY`, `flipProjX`, `compensatePreprocessYFlip` (rarely changed; defaults work for WebGPU).
These adapters keep Three.js and Visionary in sync by applying the required Y-axis π rotation, focal-length derivation, and viewport-aware aspect ratio fixes.
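A minimal sketch of syncing the adapter each frame (assuming a named export and a no-argument constructor):

```ts
import { CameraAdapter } from './camera/CameraAdapter';

const adapter = new CameraAdapter();

// Per frame: pull view/projection state from the Three.js camera.
adapter.update(camera, [canvas.width, canvas.height]);
const view = adapter.viewMatrix();
const proj = adapter.projMatrix();
const [fx, fy] = adapter.projection.focal([canvas.width, canvas.height]);
```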
GaussianModel
Located in src/app/GaussianModel.ts. Extends THREE.Object3D so the editor can place Gaussian assets inside the scene hierarchy.
Key features:
- Automatic TRS → GPU sync (via intercepted setters and throttled `updateMatrix` overrides). `syncTransformToGPU()` / `forceSyncToGPU()` for manual control.
- `setGaussianScale`, `setOpacityScale`, `setCutoffScale`, `setKernelSize`, `setMaxShDeg`, etc., mirroring the renderer's per-model setters.
- Dynamic model support: `update(viewMatrix: mat4, time?: number, projectionMatrix?: mat4)` drives ONNX deformation.
- Visibility helpers (`setModelVisible`, `getModelVisible`, `isVisible`), animation controls (`startAnimation`, `pauseAnimation`, `resumeAnimation`, `stopAnimation`, `setAnimationTime`, `setAnimationSpeed`, `setAnimationIsLoop`).
- AABB utilities: `getLocalAABB`, `setOverrideAABB`, `getWorldAABB`.
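A minimal sketch of a bulk transform edit followed by a single GPU sync (see the transform-syncing note at the end of this document; the transform values are illustrative):

```ts
// Sketch only – assumes `model` is a GaussianModel already added to the scene.
model.setAutoSync(false);        // pause automatic TRS → GPU syncing
model.position.set(0, 1.5, -2);  // standard THREE.Object3D transforms
model.rotation.y = Math.PI / 4;
model.scale.setScalar(0.5);
model.forceSyncToGPU();          // push the final transform once
model.setAutoSync(true);         // re-enable automatic syncing

// Per-model appearance mirrors the renderer's setters.
model.setGaussianScale(1.2);
model.setMaxShDeg(2);
```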
GridHelper
Utility wrapper (see src/three-integration/GridHelper.ts) for spawning consistent ground grids inside WebGPU scenes.
```ts
const helper = new GridHelper(10, 10, colorCenterLine, colorGrid);
scene.add(helper.object);
helper.dispose();
```

Exposes `object: THREE.GridHelper` plus a `dispose()` method that forwards to the underlying helper.
DebugHelpers
WebGPU-based debug rendering class for visualizing axes, grids, and basic geometry in Gaussian Splatting scenes.
Path: src/three-integration/DebugHelpers.ts
`constructor(device: GPUDevice)`
Creates a new DebugHelpers instance with the provided WebGPU device.
Methods
- `initialize(format: GPUTextureFormat): Promise<void>` – initializes the debug helpers with the specified texture format. Must be called before rendering.
- `updateMatrices(viewMatrix: mat4, projMatrix: mat4): void` – updates the view and projection matrices for rendering.
- `render(passEncoder: GPURenderPassEncoder, options?: { showAxes?: boolean; showCube?: boolean; showCubeSolid?: boolean; showGrid?: boolean }): void` – renders debug helpers into the provided render pass. Options control which helpers are displayed.
- `setVisible(visible: boolean): void` – controls visibility of all debug helpers.
- `dispose(): void` – releases all GPU resources.
Usage
```ts
import { DebugHelpers } from './three-integration/DebugHelpers';

const debugHelpers = new DebugHelpers(device);
await debugHelpers.initialize(canvasFormat);

// In the render loop
debugHelpers.updateMatrices(viewMatrix, projMatrix);
debugHelpers.render(renderPass, {
  showAxes: true,
  showGrid: true,
  showCube: false
});
```
ThreeJsCameraAdapter (legacy)
src/three-integration/ThreeJsCameraAdapter.ts contains the original camera adapter that predates CameraAdapter. New code should use CameraAdapter or DirectCameraAdapter, but the class remains for backward compatibility.
Usage example (Visionary render loop)
```ts
const gaussianRenderer = new GaussianThreeJSRenderer(threeRenderer, scene, gaussianModels);
await gaussianRenderer.init();
scene.add(gaussianRenderer);

async function animate(timeMs: number) {
  requestAnimationFrame(animate);
  // Keep ONNX/4D point data in phase with the camera before drawing splats.
  await gaussianRenderer.updateDynamicModels(camera, timeMs * 0.001);
  gaussianRenderer.renderOverlayScene(gizmoScene, camera); // gizmos/HUD into the overlay RT
  gaussianRenderer.renderThreeScene(camera);               // meshes + depth capture
  gaussianRenderer.drawSplats(threeRenderer, scene, camera);
}
animate(0);
```
For toolchains that need direct control over command encoders (no Visionary shell), instantiate GaussianSplattingThreeWebGPU and call render(...) manually with your own color/depth views.
Notes
- Device sharing – every class in this module reuses the GPU device owned by `THREE.WebGPURenderer`. No additional canvases are created.
- Depth – auto depth mode is on by default. Disable it only if you really need custom occluder meshes.
- Coordinate conversion – always go through `CameraAdapter` / `DirectCameraAdapter`; duplicating the math elsewhere easily leads to mirrored renders.
- Transform syncing – `GaussianModel` handles TRS updates automatically, but you can disable auto-sync for bulk edits (`setAutoSync(false)` → mutate → `forceSyncToGPU()`).
- Dynamic content – `updateDynamicModels` must run before `drawSplats` when using ONNX/4D assets so point data stays in phase with the current camera.