My article Stage3D Pipeline in a Nutshell is a conceptual journey through the phases of the 3D pipeline, but today I’m going to talk about the practical side. What code do you write if you want to transform points and vectors from one space to another? This article will show you how to do that.

First, let’s list the “spaces” that are the steps of the 3D pipeline:

  1. Model Space – each model centered at the origin of its own coordinate system
  2. World Space – all models positioned relative to each other in one shared scene
  3. Camera/View Space – all models positioned relative to the camera
  4. Clip Space – all models positioned relative to the view frustum
  5. Viewport Space – all models positioned in pixels relative to the viewport

Model space is easy because that’s where models start. The first real transformation between coordinate systems is when points move from model space to world space. This transformation is done by applying a 4×4 matrix to each point (e.g. via Matrix3D.transformVector). That matrix is typically the result of multiplying several matrices together so that the single matrix includes all of their combined effects. For example, you might multiply a translation matrix (to position the model), a rotation matrix (to orient the model), and a scale matrix (to size it correctly) to form your model->world matrix. In any case, you’ll often rebuild this matrix every frame as the model moves about the scene and apply it to each point of each triangle in the model’s meshes.
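To make the composition concrete, here’s an illustrative TypeScript sketch (not the Flash API) of building one combined model->world matrix from scale, rotation, and translation. It uses row vectors and row-major matrices so transforms apply left to right, matching the multiplication order in the pseudocode later in this article; the matrix helpers are minimal, hypothetical stand-ins for Matrix3D:

```typescript
// Minimal 4x4 row-major matrices with row-vector points (v' = v * M),
// so the leftmost matrix in a product is applied first.
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, row-major

function multiply(a: Mat4, b: Mat4): Mat4 {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
  return out;
}

function transformVector(m: Mat4, v: Vec4): Vec4 {
  const out: Vec4 = [0, 0, 0, 0];
  for (let c = 0; c < 4; c++)
    for (let k = 0; k < 4; k++)
      out[c] += v[k] * m[k * 4 + c];
  return out;
}

const scale = (s: number): Mat4 =>
  [s, 0, 0, 0,  0, s, 0, 0,  0, 0, s, 0,  0, 0, 0, 1];
const rotationZ = (rad: number): Mat4 => {
  const c = Math.cos(rad), s = Math.sin(rad);
  return [c, s, 0, 0,  -s, c, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
};
const translation = (tx: number, ty: number, tz: number): Mat4 =>
  [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  tx, ty, tz, 1];

// One combined matrix: size the model, orient it, then position it
const modelToWorld = multiply(
  multiply(scale(2), rotationZ(Math.PI / 2)),
  translation(10, 0, 0)
);

// A model-space point (1,0,0): scaled to (2,0,0), rotated to (0,2,0),
// then translated to (10,2,0)
const worldPoint = transformVector(modelToWorld, [1, 0, 0, 1]);
```

The single modelToWorld matrix can then be applied to every vertex instead of running three separate transforms per point.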

Next, the model moves into camera/view space by multiplying its points with a matrix created by the camera. In the case of my Simple 3D Camera, this is its __worldToView matrix. The camera class has done all the hard work so you don’t have to.

After camera/view space comes clip space. Again, the camera class handles this so you don’t have to. Simply multiply points in camera/view space with Camera3D.__viewToClip and you’ll have them in clip space. In practice, there’s little use for points in camera/view space, so it’s very common to go straight from world space to clip space. With Camera3D, you can do this by multiplying your world space points with the Camera3D.__worldToClip matrix.
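To see why the single world->clip matrix gives the same result as transforming through view space first, here’s an illustrative TypeScript sketch. The two matrices are stand-ins, not Camera3D’s actual math: worldToView is a simple camera translation, and viewToClip is a toy projection that copies z into w (the component the perspective divide will later divide by):

```typescript
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, row-major; points are row vectors (v' = v * M)

function multiply(a: Mat4, b: Mat4): Mat4 {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
  return out;
}

function transformVector(m: Mat4, v: Vec4): Vec4 {
  const out: Vec4 = [0, 0, 0, 0];
  for (let c = 0; c < 4; c++)
    for (let k = 0; k < 4; k++)
      out[c] += v[k] * m[k * 4 + c];
  return out;
}

// Stand-in worldToView: a camera at (0,0,5) slides the world by -5 in z
const worldToView: Mat4 =
  [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, -5, 1];

// Toy viewToClip: keeps x, y, z and copies z into w
const viewToClip: Mat4 =
  [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 1,  0, 0, 0, 0];

// Pre-multiply once instead of transforming every point twice
const worldToClip = multiply(worldToView, viewToClip);

const world: Vec4 = [1, 2, 10, 1];

// Two hops: world -> view -> clip
const twoStep = transformVector(viewToClip, transformVector(worldToView, world));
// One hop: world -> clip
const oneStep = transformVector(worldToClip, world);
// Both paths produce the same clip-space point
```

This is just matrix associativity: v * (A * B) equals (v * A) * B, so combining the matrices once per frame saves a full matrix-vector multiply per point.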

At this point you are usually done with your transformations. In pseudo-code, your transformational journey has looked like this:

// AS3
for each model
    modelToClip = model.modelToWorld * camera.worldToClip
    upload modelToClip to vertex shader as a constant
// vertex shader
output vertex = modelToClip * vertex position

After that, the GPU will handle the clip->viewport transformation based on the parameters you’ve set via Context3D.configureBackBuffer and Stage3D.x/y, which specify the X, Y, width, and height of the viewport. However, what if you wanted to do that work yourself? The first step is to take the clip space point, which is a 4D point, and perform what’s called the perspective divide. This is actually very simple: just divide the X, Y, and Z components by the W component. It’s such a common operation that it’s already built into the Flash API in the form of Vector3D.project. Now you will have a point on the “viewplane”. Here, the point is in 2D in the range of [-1:1] in both X and Y. The remaining job is to transform this into pixels in the viewport rectangle like so:
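The perspective divide itself is one line per component. A hypothetical TypeScript sketch of roughly what Vector3D.project does (here normalizing w to 1 afterward for clarity) might look like:

```typescript
type Vec4 = [number, number, number, number];

// Perspective divide: collapse a 4D clip-space point onto the viewplane
// by dividing x, y, and z by w (analogous to Vector3D.project)
function project(clip: Vec4): Vec4 {
  const [x, y, z, w] = clip;
  return [x / w, y / w, z / w, 1];
}

// A clip-space point [4, 2, 8, 2] lands on the viewplane at (2, 1)
const viewplane = project([4, 2, 8, 2]);
```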

/**
 * Transform a point from world space, through clip space, to viewport space
 * @param viewportRect The viewport rectangle you specified via Stage3D.x/y and Context3D.configureBackBuffer
 * @param world World space point you want to transform to the viewport
 * @param camera Camera whose clip space is being transformed through
 */
function clipToViewport(viewportRect:Rectangle, world:Vector3D, camera:Camera3D): Vector3D
{
    // World -> clip
    var clip:Vector3D = camera.worldToClipMatrix.transformVector(world);

    // Clip -> viewplane via the perspective divide (x, y, z /= w)
    var viewplane:Vector3D = clip.clone();
    viewplane.project();

    // Viewplane [-1:1] -> viewport pixels
    var viewport:Vector3D = viewplane.clone();
    viewport.x = ((viewport.x+1)/2) * viewportRect.width + viewportRect.x;
    viewport.y = ((viewport.y+1)/2) * viewportRect.height + viewportRect.y;
    return viewport;
}

Note that the above function works, but it is not optimized. For example, it allocates three temporary Vector3D objects in the process: one from transformVector and two from clone. Optimizing is left as an exercise for the reader.
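As one possible direction for that exercise, here’s an illustrative allocation-free variant sketched in TypeScript rather than AS3 (the Rect interface, Mat4 type, and the row-vector multiply are stand-ins, not the Flash API): the caller supplies the output vector and the function mutates it in place.

```typescript
type Vec4 = [number, number, number, number];
type Mat4 = number[]; // 16 entries, row-major; points are row vectors (v' = v * M)

interface Rect { x: number; y: number; width: number; height: number; }

// Transform a world-space point all the way to viewport space, writing the
// result into a caller-provided vector so no temporaries are allocated.
// `world` and `out` must not be the same object.
function clipToViewport(
  viewportRect: Rect, world: Vec4, worldToClip: Mat4, out: Vec4
): Vec4 {
  // World -> clip (row-vector multiply, written into `out`)
  for (let c = 0; c < 4; c++) {
    out[c] = 0;
    for (let k = 0; k < 4; k++) out[c] += world[k] * worldToClip[k * 4 + c];
  }
  // Clip -> viewplane: perspective divide, [-1:1] in x and y
  const w = out[3];
  out[0] /= w; out[1] /= w; out[2] /= w; out[3] = 1;
  // Viewplane -> viewport pixels
  out[0] = ((out[0] + 1) / 2) * viewportRect.width + viewportRect.x;
  out[1] = ((out[1] + 1) / 2) * viewportRect.height + viewportRect.y;
  return out;
}

// With an identity worldToClip, the origin maps to the center of an
// 800x600 viewport
const identity: Mat4 = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
const result: Vec4 = [0, 0, 0, 0];
clipToViewport({ x: 0, y: 0, width: 800, height: 600 }, [0, 0, 0, 1], identity, result);
```

Reusing one scratch vector like this across all points in a mesh avoids per-vertex garbage, which matters when you’re transforming thousands of points per frame.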

This concludes the practical steps you can take to transform points through the 3D pipeline yourself, even those normally covered by the GPU. If you’ve spotted a bug or have a question or suggestion, please leave a comment!