My notes while reading the source code of the threejs Editor app, as I've been curious about :
- the editor architecture
- the undo/redo system
- the camera control behaviour & code
- the object transform gizmos behaviours & code
- The code is simple, easy to follow and straight to the point
- The editor components communicate through the concepts of `signals` and `command-objects`
- The `command-objects` are stored and replayed for undo/redo of operations
- Camera controls are basic but effective, though the code could be easier to follow
- The object transform gizmos code comes from a threejs 'example', is complex (complicated?), and I ran out of time for now - I'll try to get back to it
- Getting threejs source
- Editor JavaScript code structure
- Starting with `index.html`
- Editor state
- Communication architecture
- Viewport
- Camera control
- Object transform gizmos
I've been looking at the current `master` source code, not a specific version : github commit b3ce68b4, Sept. 2019.
The git repo is larger than I thought it would be, almost 800 MB.
A good point, IMHO : even if threejs uses nodejs for its build chain, it has no external npm dependencies for its sources - all used libraries are included in the git repo.
Once the mandatory `npm install` is done (using my probably outdated npm v3.10.5), the build goes fine, following the doc instructions.
Located in the `editor/` folder, each editor 'class' is neatly stored in its own source file.
The code does not use `require` or `import` statements :
- all objects and modules are sourced as globals in `index.html`
- this makes it less easy to find related code and definitions
The UI code often seems to :
- simply build HTML strings to feed `innerHTML` in the DOM
- use a `UI` module that relies on raw JavaScript DOM manipulation (without any lib)
The code is pretty clear, even if there are no comments.
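For illustration, here is a minimal sketch of those two UI styles side by side. The `UI.Panel` / `UI.Button` helpers and their methods are assumptions about the shape of the `UI` module, not code copied from `editor/js/libs/ui.js`.

```js
// Sketch of the two UI styles seen in the editor (assumed API shapes, not the actual source).

// 1. Raw HTML string fed to innerHTML
var container = document.createElement( 'div' );
container.innerHTML = '<span class="label">Objects :</span> <strong>3</strong>';
document.body.appendChild( container );

// 2. UI-module style : thin wrappers around DOM nodes, no external lib
var panel = new UI.Panel();                          // wraps a <div>
var button = new UI.Button( 'Export' ).onClick( function () {
	console.log( 'export clicked' );
} );
panel.add( button );
document.body.appendChild( panel.dom );              // each wrapper exposes its DOM node
```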
The general operation sequence in `index.html` is :
- include all (many, many) scripts from :
  - threejs itself
  - `examples/` for controls, loaders, exporters, renderers
  - `editor/js/libs` for external libs
  - `editor/js` for the actual editor code
- construct the editor objects, and add their `dom` elements to the HTML document
- initialize the storage
- hook some `signals` to a `saveState` function that can handle autosave on actions
  - `editor` exposes `toJSON` / `fromJSON` methods to handle document serialization
- hook on some DOM events, such as drag, drop, window resize
- load an optional initial 'document' given in the URL
- initialize the ServiceWorker
And that's it.
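As a rough sketch, the bootstrap in `index.html` looks something like this (simplified and from memory - the exact names, in particular the storage API, are approximations, not the actual file) :

```js
// Simplified sketch of the index.html bootstrap (names approximate).
var editor = new Editor();

var viewport = new Viewport( editor );
document.body.appendChild( viewport.dom );

var sidebar = new SideBar( editor );
document.body.appendChild( sidebar.dom );

// load a previously auto-saved document, if any
editor.storage.init( function () {
	editor.storage.get( function ( state ) {
		if ( state !== undefined ) editor.fromJSON( state );
	} );
} );

// autosave : serialize the whole document and push it to storage on changes
function saveState() {
	editor.storage.set( editor.toJSON() );
}

editor.signals.sceneGraphChanged.add( saveState );
```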
The editor top-level objects are structured as follows :
- `Editor`, the root object, holding the editor state and operations
- `Viewport`, handles 3D rendering, interactions, and reacting to change signals
- `SideBar`, sets up the panels UI and handles its actions
  - e.g. `SideBar.Scene.js` handles scene properties
  - `signals` are dispatched on value changes
- `MenuBar`, sets up the top menu UI and handles its actions
  - e.g. `MenuBar.File.js` handles UI & logic for file export actions
- `Toolbar`, dispatches `signals` on UI actions, and listens to signals to update the UI
- `Script`, handles the script editor
- `Player`, handles a threejs preview
I spotted a `historyChanged` signal that leads me to think the undo/redo system is named `History`.
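Based on the command-replay idea from the summary above, I would expect `History` to be little more than two stacks of command-objects. A hedged sketch of that assumed shape (not the actual implementation) :

```js
// Assumed shape of the History object : an undo stack and a redo stack of command-objects.
function History( editor ) {
	this.editor = editor;
	this.undos = [];
	this.redos = [];
}

History.prototype.execute = function ( cmd ) {
	cmd.execute();                                   // apply the change to the editor state
	this.undos.push( cmd );
	this.redos.length = 0;                           // a new action invalidates the redo stack
	this.editor.signals.historyChanged.dispatch( cmd );
};

History.prototype.undo = function () {
	var cmd = this.undos.pop();
	if ( cmd === undefined ) return;
	cmd.undo();
	this.redos.push( cmd );
	this.editor.signals.historyChanged.dispatch( cmd );
};

History.prototype.redo = function () {
	var cmd = this.redos.pop();
	if ( cmd === undefined ) return;
	cmd.execute();
	this.undos.push( cmd );
	this.editor.signals.historyChanged.dispatch( cmd );
};
```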
The editor state is held in multiple objects, members of the `Editor` object, such as :
- all the `Signal` objects, held in a `signals` object
- `scene` and `selected` for nodes storage and selection
- `history` for undo/redo
- `camera`, `cameras`, `viewportCamera`, `addCamera` will need some digging
- many methods to manipulate the state :
  - `Object` related : add, remove, move, name
  - `Geometry` related
  - `Material`
  - `Texture`
  - `Camera`
  - `Helper`, used to represent cameras, lights and skeletons in the 3D viewport
  - `Script`
  - Selection & focus
  - JSON serialization
  - undo/redo
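To make the 'methods dispatch signals' point concrete, here is a hedged sketch of what one of these state-changing methods could look like (approximate, not copied from `Editor.js`) :

```js
// Approximate shape of a state-changing Editor method : mutate the state members,
// then notify every interested component through signals.
Editor.prototype.addObject = function ( object ) {

	this.scene.add( object );

	// consumers (Viewport, SideBar outliner, ...) each react through their own callbacks
	this.signals.objectAdded.dispatch( object );
	this.signals.sceneGraphChanged.dispatch();

};
```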
While browsing through the top-level objects, I found that they communicate with the editor only through 2 channels :
- `signals`
- `commands`
The `editor.signals` are all declared in the same spot, at `Editor` construction.
They are used in all other objects :
- to `dispatch` change notifications (and values)
- to react to change notifications through a callback function
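The signals come from the small signals.js library bundled in `editor/js/libs`, whose API essentially boils down to `add` and `dispatch`. A minimal sketch of the pattern (the signal names are ones I saw in the editor, the surrounding code is illustrative) :

```js
// Declaration (done once, at Editor construction) - signals.js provides the Signal type.
var editorSignals = {
	objectSelected: new signals.Signal(),
	sceneGraphChanged: new signals.Signal()
	// ... one Signal per kind of change
};

// A consumer (e.g. the SideBar) reacts to change notifications through a callback.
editorSignals.objectSelected.add( function ( object ) {
	console.log( 'now selected :', object ? object.name : 'nothing' );
} );

// A producer (e.g. the Viewport) dispatches the notification, passing the new value.
editorSignals.objectSelected.dispatch( null );
```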
Commands are dispatched by `editor.execute`, passing a command-object.
Commands are the key elements of the undo/redo system, as described in the 'Implementing additional commands for undo-redo.md' doc.
Command-objects are responsible for directly calling `editor` methods to change its state, and for dispatching the appropriate `signals`.
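Putting the two channels together, a command-object pairs an `execute` with an `undo`, both going through the editor state and signals. The sketch below is a simplified illustration of that pattern, not the actual editor source :

```js
// Simplified command-object sketch : captures old and new values so it can be replayed both ways.
function SetPositionCommand( editor, object, newPosition ) {
	this.editor = editor;
	this.object = object;
	this.oldPosition = object.position.clone();
	this.newPosition = newPosition.clone();
}

SetPositionCommand.prototype.execute = function () {
	this.object.position.copy( this.newPosition );
	this.editor.signals.objectChanged.dispatch( this.object );
};

SetPositionCommand.prototype.undo = function () {
	this.object.position.copy( this.oldPosition );
	this.editor.signals.objectChanged.dispatch( this.object );
};

// UI code never mutates the state directly, it always goes through editor.execute :
// editor.execute( new SetPositionCommand( editor, mesh, new THREE.Vector3( 1, 0, 0 ) ) );
```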
The Viewport general behaviour is :
- helper geometries are created/managed :
  - an inert grid mesh
  - a selection box that reacts to selection changes
- object manipulation is done using the `THREE.TransformControls` 'example'
  - translation, rotation, scaling, etc.
- object selection is done using DOM events and `THREE.Raycaster` (see the sketch after this list)
- camera manipulation is done using `EditorControls`
  - including focus on an object, to adjust the camera to best fit the object on screen
- `render` is triggered only when needed (change signals, ...)
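For reference, the `THREE.Raycaster` picking pattern looks roughly like this - a generic sketch of the technique, not the Viewport code itself :

```js
// Generic picking sketch : convert a mouse event to normalized device coordinates,
// then intersect a ray from the camera with a list of candidate objects.
var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();

function pick( event, camera, objects, domElement ) {

	var rect = domElement.getBoundingClientRect();
	mouse.x = ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1;
	mouse.y = - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1;

	raycaster.setFromCamera( mouse, camera );

	// closest hit first ; each entry has .object, .point, .distance
	return raycaster.intersectObjects( objects );

}
```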
I saw a few things that are not too clear :
- `Viewport` maintains, for some reason, its own list of active `objects` used for `Raycaster` intersection
  - some special handling is done for Helpers, using special 'picker' `Mesh`es
- Some of the `scene` state seems to be updated in the viewport :
  - I saw a call to `object.updateProjectionMatrix()`
The camera control code is in `EditorControls.js` (link).
The viewport can only provide views from cameras in the scene - there are no default views such as 'perspective', 'bottom', 'top', etc.
Only the default Camera can be controlled; the additional ones have to be moved using the `SideBar` properties.
Maybe a bug ? The default Camera moves when trying to control another camera in the viewport.
The behaviour, seen from the user's side, is :
- left-button, orbits the camera around some center
- right-button, pans the camera in a left/right/up/down fashion
- mousewheel & middle-button, zooms the camera towards the center
- doubleclick, zooms on the target object
- one-finger touch, orbits the camera
- two-finger touch, zooms and pans
The code in `EditorControls` is not that clear... Let's look at the API :
- `THREE.EditorControls = function ( object, domElement )`
- `this.rotate( delta )`
- `this.zoom( delta )`
- `this.pan( delta )`
- `this.focus( target )`
- the rest of the methods handle DOM events (down, move, up events for mouse and touch)
Looking at the `new THREE.EditorControls` call, `object` is in fact a camera - that should clarify things.
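For context, the Viewport wires the controls up roughly like this. This is a hedged sketch : the 'change' event name follows the usual threejs controls pattern, I did not verify the exact wiring here.

```js
// Hedged sketch of how the controls are hooked up (not copied from Viewport.js).
var camera = new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 0.1, 1000 );
camera.position.set( 5, 5, 5 );

var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

// 'object' is the camera being driven ; domElement receives the mouse/touch events
var controls = new THREE.EditorControls( camera, renderer.domElement );

controls.addEventListener( 'change', function () {
	// the editor only renders on demand, so a camera change must trigger a render
	// (in the real code this presumably goes through a signal such as cameraChanged)
	console.log( 'camera moved, render needed' );
} );
```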
Looking at the `rotate`, `pan`, `zoom` call sites, I see the `delta` values are expressed in pixels, relative to the previous DOM event, and stored in the proper component(s) of a `THREE.Vector3`.
`rotate(delta)` :
- considers a sphere from `center` to `camera.position`
- computes spherical coordinates from the camera position
- adds an offset to these angles using `delta` and a `rotationSpeed`
  - does not seem to account for framerate
- derives a new camera position from the new angles and `center`
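A standalone sketch of that orbit logic, using `THREE.Spherical` for the coordinate conversion - my own reformulation of the steps above, not the `EditorControls` code :

```js
// Orbit sketch : offset the camera's spherical angles around 'center' by a pixel delta.
var center = new THREE.Vector3();                    // the point the camera orbits around
var rotationSpeed = 0.005;                           // radians per pixel (arbitrary value)
var spherical = new THREE.Spherical();
var offset = new THREE.Vector3();

function rotate( camera, delta ) {                   // delta.x / delta.y in pixels

	offset.copy( camera.position ).sub( center );
	spherical.setFromVector3( offset );

	spherical.theta += delta.x * rotationSpeed;      // horizontal angle
	spherical.phi += delta.y * rotationSpeed;        // vertical angle
	spherical.makeSafe();                            // keep phi away from the poles

	camera.position.copy( center ).add( offset.setFromSpherical( spherical ) );
	camera.lookAt( center );

}
```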
`pan(delta)` :
- scales `delta` according to a `panSpeed` and the distance from `camera` to `center`
  - not accounting for framerate or camera FOV
- transforms `delta` from eye-space to world-space (I suppose)
  - using `delta.applyMatrix3( normalMatrix.getNormalMatrix( camera.matrix ) );`
  - from the threejs docs : "normal matrix is the inverse transpose of the matrix"
- offsets the camera `position` and `center`
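A sketch of that pan step - again my own reformulation, with a made-up `panSpeed` value and scaling :

```js
// Pan sketch : move both the camera and its orbit center by a pixel delta,
// transformed from camera (eye) space into world space.
var panSpeed = 0.002;                                // arbitrary pixels-to-world scale
var normalMatrix = new THREE.Matrix3();

function pan( camera, center, delta ) {              // delta.x / delta.y in pixels

	var distance = camera.position.distanceTo( center );

	delta.multiplyScalar( distance * panSpeed );

	// rotate the offset from eye-space into world-space using the camera orientation
	delta.applyMatrix3( normalMatrix.getNormalMatrix( camera.matrix ) );

	camera.position.add( delta );
	center.add( delta );

}
```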
`zoom(delta)` :
- scales `delta` (which only has its Z component set) much like in `pan`
- ensures the offset will not move the camera past `center`
  - not accounting for the camera near clip plane
- transforms `delta` much like in `pan`
  - the camera looks at `center`, so `delta` moves towards/away from this point
- moves the camera `position` only
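A sketch of the zoom step under the same assumptions - dolly along the view direction, moving the camera position only :

```js
// Zoom sketch : dolly the camera along its view direction towards 'center'.
// Only the camera position moves ; 'center' stays put.
var zoomSpeed = 0.001;                               // arbitrary scale
var zoomMatrix = new THREE.Matrix3();

function zoom( camera, center, delta ) {             // delta.z in pixels (wheel / drag amount)

	var distance = camera.position.distanceTo( center );

	delta.multiplyScalar( distance * zoomSpeed );

	// refuse to dolly past the center point (no near-plane handling, as noted above)
	if ( delta.length() > distance ) return;

	delta.applyMatrix3( zoomMatrix.getNormalMatrix( camera.matrix ) );

	camera.position.add( delta );

}
```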
`focus(target)` :
- from the `target` object bounding box :
  - set the new camera `center` to the box center
  - find a view distance, using the bounding sphere radius and some constants
    - not accounting for camera FOV
- find the camera orientation to preserve it, giving a `delta` look vector
- place the camera `position` along the `delta` vector from `center`
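A sketch of a focus along those lines, using `THREE.Box3` for the bounds - my own reconstruction, with a made-up framing constant :

```js
// Focus sketch : frame 'target' by moving the orbit center to its bounding box center
// and backing the camera off along its current view direction.
var box = new THREE.Box3();
var sphere = new THREE.Sphere();
var lookDelta = new THREE.Vector3();

function focus( camera, center, target ) {

	box.setFromObject( target );
	box.getBoundingSphere( sphere );

	// keep the current view direction, only change the center and the distance
	lookDelta.copy( camera.position ).sub( center ).normalize();

	center.copy( sphere.center );

	var distance = sphere.radius * 4;                // made-up framing constant, ignores FOV
	camera.position.copy( center ).addScaledVector( lookDelta, distance );
	camera.lookAt( center );

}
```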
The object transform gizmos code is in the threejs `examples/js/controls/TransformControls.js`.
As I expected, the code is quite big and hard to follow.
I ran out of time, so I'll try to get back to it later.