Thursday 5 November 2009

Irkonomics

Tonight, I have done a bad thing. A very bad thing. Not bad evil bad. Or bad naughty bad. But bad depressing bad. I fired up my old laptop, my trusty Acer 5003WLMi. The little fella wasn't packing too much punch any more, especially trying to do Eclipse plugin development, and the battery had gone south, so a couple of months ago he was shunted under the sofa in favour of a new shiny Acer 6930G. Dual core Intel P7450, 4GB RAM, 320GB disk, Nvidia card - the new recruit is a perky number. Finally I can turn on Compiz effects. The wireless is rock solid. I've got more disk than I know what to do with. I certainly got bang for my buck.

But it has a dark side to it. It appears that in a bid to save money, Acer have manufactured the 6930G without any human ever laying a hand on its sorry little shell. For if they had, they would have found immediately that the Acer 6930G is probably the most uncomfortable laptop I've ever had the misfortune to use. For the last two months, I've convinced myself that it's not that bad. But getting the old 5003WLMi out again, with its smooth curves, has me weeping into my trackpad.

Of course, you don't need melodramatics and hyperbole, so here are the facts:

  • The trackpad. Oh the trackpad. The trackpad is so bad it demands... A SUBLIST....

    • It's left of centre. Fail.

    • It doesn't have a defined edge. It's just kind of sunken. Want edge scrolling? Forget it. Oh, except for there's a little ridge down one part that sort of tells you where the edge scrolling should be. Want tap zones? Really? Not here you don't. Of course, you can have them, just don't expect to find them without tapping 10 times.

    • The sensitivity is terrible

    • It's bumpy. Yeah, I know, you can't even understand what I'm trying to tell you, right? The plastic on the top panel has little raised dimples, and that texture is also on the trackpad. I'm a guitar player, so the tips of my fingers aren't exactly Fairy Soft, and yet still I get sore fingers.

  • The front edge of the laptop is sharp. You know, that bit where you rest the edge of your hand when you're using the trackpad. "Hey, you know what this laptop needs? Sharp bits where you put your hands! Yeah!"

  • The screen only tilts back to 45 degrees. "Hey you know what users of this laptop need? A sore neck when they're using it standing up! Yeah!"

  • My kids could use the keyboard as a trampoline, it's that bouncy

  • "Hey, you know what programmers will love? The PgUp/PgDown/Home/End keys out the way at the bottom of the number pad! While you're doing that, why not stick the PgUp key right next to the right arrow key. You know, so that you always hit the frickin' thing when you're trying to just move along the line! Yeah!"

So, in summary, you want a great performing laptop for not much money? Buy an Acer 6930G. Just don't ever use another laptop, lest you should remember what you're missing.

Monday 27 April 2009

Adventures in 3D: Part X - On the move

Yay, so we've finally got to the point where we can move a camera around in our scene, thanks to matrices. In the last post, we used a TransformMatrix to convert the world z-coordinate to something relative to the camera position, albeit with a hard coded view point. Now we just need to start hooking that into something that allows us to change that viewpoint. The ThreeDeePanel class already contains a worldToCamera variable that stores the transform matrix representing the camera view. Let's do things properly now - it's not hard to imagine that a camera is going to have a few different properties, so let's create a Camera class. Let's also represent the position as a Point instead of the raw transform matrix.

public class Camera {

    private Point position;

    public Camera(Point p) {
        position = p;
    }

    public Point getPosition() {
        return position;
    }
}

Then we keep an instance of the Camera in the ThreeDeePanel class, and provide a method for other classes to get at it. We also need to calculate the corresponding Matrix from the camera's viewpoint before calling project():

Camera camera = new Camera(new Point(0,0,-300));

public Camera getCamera() {
    return camera;
}

protected void paintComponent(Graphics g) {
    ...
    TransformMatrix worldToCameraTransform = TransformMatrix.getWorldToCamera(camera.getPosition());
    for (Primitive poly : renderScene) {
        poly.project(worldToCameraTransform);
        poly.draw(g2);
    }
    ...
}

In the ThreeDee class, which is doing all the input, I'm going to listen out for the cursor keys (assuming we're in the MOVE_CAMERA state), grab the camera and change its position:

private int CAMERA_SPEED = 3;

public ThreeDee() {
    ...
    panel.addKeyListener(new KeyAdapter() {
        public void keyPressed(KeyEvent e) {
            ....
            } else if (moveState == MoveState.MOVE_CAMERA) {
                // CTRL_DOWN_MASK is defined on InputEvent, the common parent of key and mouse events
                boolean ctrlDown = ((e.getModifiersEx() & InputEvent.CTRL_DOWN_MASK) == InputEvent.CTRL_DOWN_MASK);
                Camera camera = panel.getCamera();
                switch (e.getKeyCode()) {
                case KeyEvent.VK_UP:
                    if (ctrlDown) {
                        camera.getPosition().y += CAMERA_SPEED;
                    } else {
                        camera.getPosition().z += CAMERA_SPEED;
                    }
                    break;
                case KeyEvent.VK_DOWN:
                    if (ctrlDown) {
                        camera.getPosition().y -= CAMERA_SPEED;
                    } else {
                        camera.getPosition().z -= CAMERA_SPEED;
                    }
                    break;
                case KeyEvent.VK_RIGHT:
                    camera.getPosition().x += CAMERA_SPEED;
                    break;
                case KeyEvent.VK_LEFT:
                    camera.getPosition().x -= CAMERA_SPEED;
                    break;
                }
            }

Give that a go, and you should have a moving camera! However, move around a bit and you'll probably notice a problem. The canBeCulled() method in the Triangle class is still using a hard coded viewpoint to decide whether a face is away from the viewer, so if you move up alongside the object, you'll see that its rear end is missing. We just need to adjust that method to take the new viewpoint into account (again, don't forget to change the method signature in the Primitive interface):

public boolean canBeCulled(Point camera) {
    if (WIREFRAME) return false;

    Vector viewer = camera.vectorTo(getPosition());
    double cull = normal.dotProduct(viewer);
    return (cull > 0);
}



As funky as this is, there is the small problem of only being able to look straight ahead. As well as the point where the camera is, we also need to store some information about which way it's pointing. The camera ought to be free to rotate around any of the axes - look left and right, up and down, and roll side to side. These are commonly known as yaw, pitch and roll. The good news is that these are simply rotations, which we already know how to deal with. What's slightly different is that we're no longer rotating around the world origin, but instead treating the camera as the origin. Also, going right back to the discussion in Part I, the rotation of the objects in the world is opposite to the camera rotation - turning the camera 90 degrees right is like rotating the world 90 degrees left.

We'll store the rotation of the camera as three angles, alpha, beta and gamma, which represent the rotation around the X, Y and Z axes respectively. It's worth considering exactly what that means, to save confusion later. Rotation around the X axis is the looking up and down (pitch) - it's easy to see the "X" and immediately think it ought to be side-to-side motion. Likewise, rotation around Y is looking side-to-side (yaw), and around Z is roll. I'll add a rotate() method to the Camera to change those angles:

public void rotate(double da, double db, double dg) {
    alpha += da;
    beta += db;
    gamma += dg;
}

Now we just need to be able to get the appropriate RotationMatrix that represents those angles. Given rotation matrices Rx, Ry and Rz for each of the three axes, the total rotation is simply Rz.(Ry.Rx). Note that we multiply Rx and Ry first, then multiply by Rz. Why? Because applying rotations in different orders gives different results. Imagine you're looking at a point straight ahead. First, look up 45 degrees. Then look right 45 degrees. Then roll over 90 degrees. You're now looking at a point "top right" of where you were originally, and with your head tilted to one side. Start again, but this time do the "roll" first. You'll actually end up (if you've done it properly...) looking at a point "top left" of where you were. The secret is in the fact that "up" and "right" are relative to which way you're already facing. If you're on your feet and tilt your head backwards to look up, it really is "up". If you were lying on your side on the ground and tilted your head back, you'd actually still be looking along the ground. You would actually have to look "right" to look "up".

So, I'll add a new method to RotationMatrix to get an instance that represents alpha,beta,gamma, which we'll do by getting instances for each axis and multiplying them together.

public static RotationMatrix getInstance(double alpha, double beta, double gamma) {
    return RotationMatrix.getInstance(gamma, RotationAxis.Z).times(
           RotationMatrix.getInstance(beta, RotationAxis.Y).times(
           RotationMatrix.getInstance(alpha, RotationAxis.X)));
}

Finally, we implement a getCameraTransform() method on Camera to get that matrix, and multiply it by the position transform matrix we created above to give the final transform from world to camera. Note that the method signature for project() changes to take a base Matrix type. Also, we need to change our RotationMatrix class (and any code that calls it) to use homogeneous coordinates, so that we can multiply a RotationMatrix and a TransformMatrix. That's pretty simple: just add a row and column to each matrix, with a 1 in the bottom right corner. (In hindsight, there's no need for RotationMatrix and TransformMatrix to be different types - really we should just use the base Matrix class. Ah well, you live and learn.)

public Matrix getCameraTransform() {
    Matrix translation = TransformMatrix.getWorldToCamera(position);
    return RotationMatrix.getInstance(alpha, beta, gamma).times(translation);
}

and in ThreeDeePanel:
Matrix worldToCameraTransform = camera.getCameraTransform();
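
As an aside, the homogeneous padding is entirely mechanical. The X-axis rotation from Part VII, for example, becomes:

|1   0       0    0|
|0 cos(r) -sin(r) 0|
|0 sin(r)  cos(r) 0|
|0   0       0    1|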

This is all very well, but it doesn't actually do much yet. All we have to do though, is wire it up to the input listeners. Whilst in the MOVE_CAMERA state, dragging the mouse will look left,right,up,down, and moving the scroll wheel will roll.

public void mouseDragged(MouseEvent e) {
    ...
    case MOVE_CAMERA:
        panel.getCamera().rotate(dy * RADIAN, dx * RADIAN, 0.0);
    }
}

and the same sort of thing in mouseWheelMoved() for the roll. Note that dx (moving the mouse left and right) affects the Y axis rotation, and vice versa, for the reasons we discussed above.
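
For completeness, the wheel handler looks much the same - here's a minimal sketch, assuming the MoveState check and the RADIAN constant mirror the drag handler:

public void mouseWheelMoved(MouseWheelEvent e) {
    if (moveState == MoveState.MOVE_CAMERA) {
        // wheel clicks roll the camera around its Z axis
        panel.getCamera().rotate(0.0, 0.0, e.getWheelRotation() * RADIAN);
    }
}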

Run that, and you too can look around the back of your object! Take care to check whether the rotation is what you expect, as it's not always obvious. Matrix multiplication is not commutative - AB is not the same as BA - and if you've got something in the wrong order, it'll generally manifest itself here as the axis of rotation being wrong. For instance, you may find that instead of the object rotating around the camera, the camera rotates around the object.



You may also notice that the scene "stretches" when it's rotated towards the edge of the screen. This is related to the focal length. If the effect looks unnatural, try increasing the focal length - a shorter focal length effectively gives you more of a "fish-eye" look.

Now you've managed to get around the back, you've probably also noticed a few problems with the z-order. Remember that we're sorting, and therefore drawing, the screen by (world) z position, which is fine as long as we're looking in the +ve z direction. As soon as we move elsewhere, that becomes useless - we now need to sort by z order relative to the camera. That means we need to be converting from world space to camera space before doing the sorting.

Let's also take the opportunity to tidy up the Triangle class. We're currently marshalling vertex data between arrays and matrices, whereas we could just keep the vertex data as a matrix and be done with it. For convenience, I'll add methods to the Vector and Point classes to convert to/from matrices.
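
On Point, for example, those helpers might look something like this (a sketch - the names are my choice; toMatrix() builds the 4x1 homogeneous column vector we've been constructing by hand):

// in Point
public Matrix toMatrix() {
    return new Matrix(new double[][] {{x}, {y}, {z}, {1}});
}

public static Point fromMatrix(Matrix m) {
    return new Point(m.get(0,0), m.get(1,0), m.get(2,0));
}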

As well as storing the world coordinates, we'll also keep a matrix of the view coordinates in Triangle. These will be populated in the project() method, which now needs to move in the pipeline to a point before we do the z-order sorting.

public class Triangle implements Primitive {
    // vertices of the triangle in world space
    Matrix[] vertices = new Matrix[3];
    // the same vertices after projection into view space
    Matrix[] viewVertices = new Matrix[3];

    public void project(Matrix worldToCamera) {

        for (int i = 0; i < 3; i++) {
            viewVertices[i] = persMatrix.times(worldToCamera.times(vertices[i]));

            // skip polygons that end up behind the camera
            if (viewVertices[i].data[2][0] < 0) {
                draw = false;
                return;
            }

            xPoints[i] = (int) (viewVertices[i].data[0][0] / viewVertices[i].data[3][0]);
            yPoints[i] = (int) (viewVertices[i].data[1][0] / viewVertices[i].data[3][0]);
        }
    }
}

The doPipeline() method now consists of backfaceCulling(), project(), sortForZOrder(), lightScene(). All we need to do now is change getZOrder() in Triangle to return the z order from the (view coords) viewVertices array instead of the (world coords) vertices array.

public double getZOrder() {
    return getAverage(viewVertices, 2);
}

@Override
public Point getPosition() {
    return new Point(getAverage(vertices, 0), getAverage(vertices, 1), getAverage(vertices, 2));
}

private double getAverage(Matrix[] matrices, int index) {
    return (matrices[0].data[index][0] + matrices[1].data[index][0] + matrices[2].data[index][0]) / 3;
}
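
For reference, the reordered pipeline now reads something like this (a sketch - the exact loop is an assumption, but the ordering is the point):

private void doPipeline(Matrix worldToCamera) {
    backfaceCulling();
    // project into view space first, so the z-order sort sees camera-relative z values
    for (Primitive poly : renderScene) {
        poly.project(worldToCamera);
    }
    sortForZOrder();
    lightScene();
}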

There, all sorted. We're gradually getting there. Unfortunately, it seems that for every bit we add, it throws up another couple of problems to solve. But we love solving problems, right? Right?! Put the kettle on, make a cup of tea, download the source, and let's ponder.

Tuesday 21 April 2009

Adventures in 3D: Part IX - A Bit Of Perspective

Stick with it 3D fans, we're getting there.

One thing we cheated at way back in Part I was perspective. Until now, everything has been drawn using parallel projection. That is, there is no perspective, everything appears the same size regardless of how far away it is. That works fine when you're looking at a single object, where the difference in distance between the front and the back of the object is small enough to be negligible in terms of how your brain perceives the image, but when you start adding objects into the scene in the background, it's a problem.

Thankfully, perspective is very simple to do. We're actually going to do this twice. For a first pass, we'll do the simplest possible thing, which is to just do the calculations explicitly, and with a hardcoded viewpoint. Hopefully your alarm bells will be ringing at the sight of the word "hardcode", so then we'll look at the more proper solution, which involves our old friend, the matrix.

So, first solution. We already have a project() method in the Triangle class, which is used to convert the 3D model's x,y,z (double) coordinates into the screen's x,y (integer) coordinates. Remember that perspective does not affect the 3D model in any way, everything stays where it is. Perspective is simply an effect of projection, so this is exactly where we need to be doing the perspective calculations. And what does "perspective" actually mean for our projection? Think about a wireframe cube rendered in 3D with perspective. The back face of the box, which is at a greater Z distance, will appear smaller than the front face - the left and right sides of the back face have smaller x values (assume the x and y axes are through the centre of the box), and the top and bottom sides of the back face have smaller y values. So it's clearly an adjustment of X and Y coordinates as a function of Z, we just need to figure out what that adjustment is. Time for a diagram.



The vertical line in the middle represents the screen onto which the model is projected - the camera C is at some distance z' behind that screen, and the point P is at a distance z beyond that screen, and distance y above the axis. Drawing a line from C to P represents our line of sight, and you can see that it intersects the screen at a distance y' above the axis. Our job is to figure out the distance y'. Cast your mind back to school maths classes, and the idea of similar triangles. The theory of similar triangles says that two triangles with the same angles will have sides that are in proportion. Therefore, y'/z' = y/(z+z'), which rearranged slightly gives the equation

           z'
y' = y * ------
         z + z'

From that, you can see that as z tends towards zero (i.e. the object moves nearer the plane of projection), the second term becomes z'/z', which is 1, and so y' = y. Working out z is fairly easy, we just need to remember that it's the distance from the screen to the object, which, if we decide the screen is somewhere other than z=0, is not the same as the z coordinate in the world space. In other words, z = z_world - z_screen. We also need z' - this is actually a fairly arbitrary number, representing the focal length. The lower this number, the more pronounced the perspective effect. For example, with z' = 300, a point 300 units beyond the screen is drawn at y' = y * 300/600, i.e. half its actual height.

So, let's stick that into some code. We define a viewpoint that represents the position of the viewer, and a focal length (z' from the diagram), in this case determined pretty much by trial and error - this value gives a decent sense of depth without looking unrealistic. As the focal length is fixed, and the viewpoint will potentially move, we calculate the position of our "screen" as being the position of the camera plus the focal length. Then, the relative z distance is calculated (z in the diagram), being the distance from the screen to the object. Finally, we use those values to calculate the perspective correction as defined above.


private Point viewpoint = new Point(0,0,-300);
double focalLength = 300;

public void project() {
    double zScreen = viewpoint.z + focalLength;
    for (int i = 0; i < 3; i++) {
        double zDistance = z[i] - zScreen;
        double perspective = focalLength / (focalLength + zDistance);
        xPoints[i] = (int) (x[i] * perspective);
        yPoints[i] = (int) (y[i] * perspective);
    }
}

If you spin the scene objects now, you should get some sense of perspective. If you can't really see anything, you may want to lower the focal length value so the effect is more pronounced.

That's all well and good, but there's another way to achieve the same effect, and it's going to set us up a bit better for getting the camera moving around. We're going to use a matrix to perform the same sort of maths.

Last time we used a matrix it was for rotation, and was a 3x3 matrix which acted on a 3x1 column matrix (the point). Now we're going to use a transformation matrix to translate points, both in 3D and 2D. Recall that if we're working with a 3x3 matrix on a point x,y,z, then matrix multiplication means the output for each coordinate is of the form Ax + By + Cz. However, in the case of translation, we often need to just add or subtract a constant that is not a function of position. This may sound familiar, for this is the definition of an affine transformation, which we've already been happily using to move the origin into the centre of the screen. To do affine transforms, we need to introduce homogeneous coordinates. There is, I'm sure, lots of complicated geometry mathematics that can be used to describe homogeneous coordinates - see that Wikipedia page for starters - but really you can just think of it as a hack to allow transformations of the form Ax + By + Cz + D. You do two things: add a column to the translation matrix containing the constants to add to each coordinate, and add a 4th row, with the value 1, to the vector matrix. Easy. Here's an example:

|1 0 0  30||x|    |x + 30|
|0 1 0  10||y|    |y + 10|
|0 0 1 -10||z| => |z - 10|
|0 0 0   1||1|    |  1   |

Hopefully you can see how this can start getting us towards the idea of a moveable camera. The translation coordinates in the 4th column will come from the position of the camera, and the result will be to move the world coordinates to coordinates relative to the camera. We did the same thing in our first method in calculating (focalLength + zDistance), albeit only for the z axis. You can see that with the matrix method, we can very easily take the x and y coordinates into account as well.

Let's add some code. I create a new TransformMatrix class, and simply have a static method getWorldToCamera(Point view) that, given a camera position, will return a matrix of the form:

|1 0 0 -view.x|
|0 1 0 -view.y|
|0 0 1 -view.z|
|0 0 0    1   |

That code is:

public class TransformMatrix extends Matrix {

    private TransformMatrix(double[][] data) {
        super(data);
    }

    public static TransformMatrix getWorldToCamera(Point view) {
        return new TransformMatrix(new double[][] {
            {1, 0, 0, -view.x},
            {0, 1, 0, -view.y},
            {0, 0, 1, -view.z},
            {0, 0, 0, 1}});
    }
}


Note that the view coordinates are negated. If the camera is at z=10, and a world point is at z=20, the point will be 10 units from the camera i.e. z = z_world - z_camera. We'll pass in a matrix to the project() method to use for the transform from world to camera (don't forget to make that change in the Primitive interface too). For now you can just pick a camera position and hard code it in the call to project(). When we get round to moving the camera, that matrix will be recalculated each time.
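
For example (a sketch - the hard-coded Point stands in for the camera until then):

// hard-coded viewpoint for now; recalculated each frame once the camera can move
poly.project(TransformMatrix.getWorldToCamera(new Point(0, 0, -300)));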

So how does this help us with perspective? It doesn't yet. We also need to factor in that focalLength. In our world-to-camera transform, we're going to end up with a z-coordinate, z', that is the distance from the camera to the point. In the first effort above, we had zDistance, which was the distance from the screen to the point, and focalLength which was the distance from camera to screen. That means that:

z' = focalLength + zDistance

How very handy. The perspective calculation is now:

double perspective = focalLength / z';

We can express that in a matrix multiplication as well. The trick here is to use the homogeneous coordinate (normally called w) to store that perspective calculation (w') and then it's a simple case of applying that to x' and y'. Just one other thing we have to think about - as we're multiplying matrices, we need to express the perspective as a multiplication of z' rather than dividing by it, so we simply turn it upside down, and instead of multiplying x' by w', we divide.

That means the perspective calculation can be applied as a matrix, although in our simple case it's nothing more than a way of dividing z' by the focal length. The benefit of using the matrix is that you could potentially encode other operations in there in future to apply different effects. Here's what the matrix looks like, and the result of applying that to homogenous coordinates:

|1 0  0  0||x'|    | x'  |
|0 1  0  0||y'|    | y'  |
|0 0  1  0||z'| => | z'  |
|0 0 1/f 0||w'|    | z'/f| => wp

Let's recap:
  • Given a point x,y,z, we add the homogeneous coordinate (which is just a 1) to give a vector matrix x,y,z,w.

  • The world-to-camera transform matrix is applied to give the coordinates x',y',z',w', which are the coordinates of the point relative to the camera position, and where w' is still just a 1.

  • The perspective matrix is applied to calculate wp, which is the perspective correction factor.

  • Divide x' and y' by wp to give the final x and y coordinates.


Sounds slightly complicated, but it's really not doing anything more than we've already done. Again, the benefit is in being able to encode other transformations in the matrices, which should come in useful shortly.

In code, it's straightforward. We'll add a new method getPerspective(double focalLength) to the TransformMatrix class to return a matrix that divides z' by the focalLength:

public static TransformMatrix getPerspective(double focalLength) {
    return new TransformMatrix(new double[][] {
        {1, 0, 0, 0},
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {0, 0, 1/focalLength, 0}});
}

Then in the Triangle class:

public void project(TransformMatrix worldToCamera) {

    for (int i = 0; i < 3; i++) {
        Matrix point = new Matrix(new double[][] {{x[i]}, {y[i]}, {z[i]}, {1}});
        Matrix result = worldToCamera.times(point);

        Matrix finalPoints = TransformMatrix.getPerspective(FOCALLENGTH).times(result);

        xPoints[i] = (int) (finalPoints.get(0,0) / finalPoints.get(3,0));
        yPoints[i] = (int) (finalPoints.get(1,0) / finalPoints.get(3,0));
    }
}

Of course, result is just an intermediate, and the perspective matrix never changes given a fixed focal length, so if you're the sort of coder who hates to see waste, you can store the perspective matrix in the Triangle class, and do the whole lot in one go:

private final TransformMatrix persMatrix = TransformMatrix.getPerspective(focalLength);

public void project(TransformMatrix worldToCamera) {
    ...
    Matrix finalPoints = persMatrix.times(worldToCamera.times(point));
    ...
}

Download the source and see for yourself.

One final thing for this episode - I promise. If you move your camera to a position that puts objects behind the camera, you'll see things go a bit pear-shaped, because we're trying to render objects that should not be in the view. So there needs to be some sort of check to ensure polygons that are behind the camera are not drawn. That's easy enough - any object which has a negative z' (remember, z' is relative to the camera) should not be drawn. This is slightly tricky, because we need to tell the draw() method that. I'm going to hack it for now, and use an instance variable boolean draw = true;. So then in project(), we do the check:

if (finalPoints.get(2,0) < 0) {
    draw = false;
    return;
}

and in draw():


public void draw(Graphics2D graphics) {
    if (!draw) {
        draw = true;
        return;
    }
    ...
}

Note that we reset the draw variable once we've decided not to draw the polygon, so that it can be considered for drawing again in the next frame.

That's a fair slice of stuff for what was really a quite simple bit of functionality. Next time we'll start getting that camera moving, and also think a bit more seriously about that last point.

Sunday 19 April 2009

Adventures in 3D: Part VIII - A Light Touch

(Yet again, I'm deviating from the original aim of getting a camera perspective working. But this is pretty cool, so hopefully you'll forgive)

First, a confessional. There was a pretty fundamental error in the Point class, in that vectorTo() was implemented such that vectors were actually backwards. D'oh. Which explained why, when I actually took the time to think about where lights were coming from and how the scene was lit, things were the wrong way round. That's fixed now, along with a couple of other things that were also wrong in compensation for that error. At least it's a good lesson in taking the time to properly consider such fundamentals, rather than ploughing on with whatever works... On the positive side, all the concepts introduced thus far still stand.

Anyway, so far, the lighting on this object has been pretty dumb. The light we've modelled is just a Vector, so any object anywhere in the scene is lit from the same direction and at the same intensity, and it's also pretty boring white light. We're going to spice things up a bit. In a 3D scene, there can be various types of lighting with different characteristics of where and how the light is cast. In our case, we're going to implement two types of light - ambient light and spotlights.

Ambient lights are super super easy. They're just a background level of light that's present everywhere. It doesn't come from a point, doesn't point in any particular direction, doesn't change in intensity. Think of it like daylight on a cloudy day - the light is just kind of there, without coming from any particular place.

To make life easier, we have an abstract Light superclass. This says that a light usually has a colour, a position and a direction, although, in the case of an ambient light, we just ignore those last two. What the Light class doesn't define is how the light affects the surfaces it falls on. For that, we have an abstract light(Lightable s) method, which returns a Color, being the colour (remember that brightness is one component of a colour) that this light contributes to that surface. The Lightable interface defines two methods, getNormal() and getPosition() - any object (in our case, a Primitive) that wants to be lit must implement these two methods so that the lights can tell where the surface is and which way it's facing. You can easily see that this interface could need to define other methods in the future for more sophisticated lighting - for instance, the light() method may need to know the absorptive or reflective properties of a surface.
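
As a sketch, that pair looks something like this (the field names are my assumptions; light() and the two Lightable methods are as described above):

public abstract class Light {
    protected Color color;
    protected Point position;
    protected Vector direction;

    // the colour this light contributes to the given surface
    public abstract Color light(Lightable s);
}

public interface Lightable {
    Vector getNormal();
    Point getPosition();
}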

The AmbientLight class only holds one thing - the colour of the light. The implementation of the light() method is dead simple, because the light that the AmbientLight contributes to each surface is simply its colour. No need to worry about which way the surface is facing or how far away it is.

public class AmbientLight extends Light {

    public AmbientLight(Color color) {
        this.color = color;
    }

    @Override
    public Color light(Lightable s) {
        return this.color;
    }
}

Our Triangle class has a lightPolygon() method, which is where we ask all the lights in the scene to tell us what they will contribute to the final colour of this polygon. It's just a loop calling light(this) for each light. The colour from each light is added together to get a final colour to render.

public void lightPolygon(LightScene lights) {
    litColor = Color.black;
    for (Light light : lights) {
        litColor = addColor(litColor, light.light(this));
    }
}

The addColor() method is also very simple - just add each RGB component separately, naturally making sure that the component values don't go above 255.

private Color addColor(Color c1, Color c2) {
    return new Color(Math.min(255, c1.getRed() + c2.getRed()),
                     Math.min(255, c1.getGreen() + c2.getGreen()),
                     Math.min(255, c1.getBlue() + c2.getBlue()));
}

Put all that together, and define an AmbientLight with a muted colour. You don't want the ambient light to be too bright, or else it will just wash out all the other colours in the scene. I'm going to use RGB(0,0,30). The effect this has is to show up all the objects in the scene in a dark blue base light. If nothing else, it's handy for making sure that your objects are being rendered, where previously they would not have been painted if you got the lighting coordinates wrong.
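
Wiring that in might look something like this (assuming the LightScene exposes an add() method - that's my guess at the API):

lightScene.add(new AmbientLight(new Color(0, 0, 30)));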

Now let's try something far more interesting, the spotlight. Spotlights have a number of properties - the position of the light, the direction the light points, what colour it is, and the angle that the light spreads out at. For a more realistic representation, we also want to define how quickly the light falls off from full intensity around the edge. The first three are already taken care of in our Light superclass. The second two will be implemented in the Spotlight class, and I'll call them fullIntensityAngle and falloffAngle. For a light defined with a fullIntensityAngle of 20 and falloffAngle of 15, that means that surfaces within 20 degrees of the centre line of the light will be lit at full intensity, and surfaces another 15 degrees beyond that will be lit at an intensity proportional to their distance from the centre line. At 35 degrees from the centre and beyond, there's no light contributed from the spotlight.
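
Here's a sketch of the Spotlight shell - the exact names are my assumptions, but note the precomputed cosine of the full spread angle, which we'll make use of below:

public class Spotlight extends Light {
    private double fullIntensityAngle;
    private double falloffAngle;
    private double cosFullSpread;

    public Spotlight(Color color, Point position, Vector direction,
                     double fullIntensityAngle, double falloffAngle) {
        this.color = color;
        this.position = position;
        this.direction = direction;
        this.fullIntensityAngle = fullIntensityAngle;
        this.falloffAngle = falloffAngle;
        // precomputed so that light() can compare cosines instead of calling acos()
        this.cosFullSpread = Math.cos(Math.toRadians(fullIntensityAngle + falloffAngle));
    }
}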

There are two main calculations to do. The first is the standard calculation we're used to: work out which way the surface faces, and if it's facing away from the light, just return Color.BLACK (as far as adding lights is concerned, Color.BLACK is a null result).

double dtFace = s.getNormal().normalise().dotProduct(lightNormal);
if(dtFace >= 0) return Color.BLACK;

where lightNormal is the normalised vector pointing in the direction of the light.

Next, get a vector from the light to the surface, normalise it and calculate the dot product with lightNormal. For vectors of unit length, the dot product of the two gives the cosine of the angle between the two. At this point, we could use Math.acos() to convert back to an angle and figure out if it's within the spread of our light. But acos is a pretty expensive operation, so instead of comparing angles, we just compare raw cosine values (the cosine of the spread angle is calculated in the constructor or when the angles are changed) to see if the surface is outside the range. If it is, again, return Color.BLACK.

Point lightSource = this.getPosition();
Vector lightToPoly = lightSource.vectorTo(s.getPosition());
double dtPosition = lightToPoly.normalise().dotProduct(lightNormal);
if(dtPosition < cosFullSpread) return Color.BLACK;

Ok, now we're down to just the points that are actually lit. At this point, we will do that acos() operation to get the angle. This makes things simple as it's a straight comparison of angles to determine how to light the surface, and is also important because it means that when we calculate the falloff, it's linear with angle, rather than the cosine.

Within the spread of the fullIntensityAngle, the surfaces are lit at the brightness determined by the direction they face, as usual. In the "fall off zone", the intensity of the light dims the further away you get from the centre, so we calculate a falloffFactor, which is a number from 0.0 to 1.0, by which we'll multiply the brightness in the final colour. Note that in the final colour, we create a HSB colour, which has the same hue and saturation as the specified light colour, and just scale the brightness.

double angle = Math.acos(dtPosition) * (180/Math.PI);

double fullSpreadAngle = fullIntensityAngle + falloffAngle;
double falloffFactor = 1;
if (angle >= fullIntensityAngle && angle <= fullSpreadAngle) {
    falloffFactor -= ((angle - fullIntensityAngle) / falloffAngle);
}
litColor = Color.getHSBColor(colorHue,
                             colorSaturation,
                             (float) (Math.abs(dtFace) * falloffFactor * colorBrightness));
return litColor;

Throw all this together, sprinkle a few different colour lights around, use a bit of artistic licence to add some other code (see below), and what do you get?



There is no denying that's pretty damn sexy. We can also do one more thing to bring colour to the scene, and that's to give the polygons themselves some colour. We'll assign a base colour to each polygon, and adjust the final lit colour to account for the surface colour. That adjustment is not immediately obvious, but if you consider a few cases it becomes apparent, especially if you think about the colour components as floats 0.0-1.0 instead of the traditional integer 0-255. For instance, a pure white surface (1.0,1.0,1.0) lit by a pure red light (1.0,0.0,0.0) will appear pure red (1.0,0.0,0.0). A pure red surface lit by a pure blue light (0.0,0.0,1.0) will appear black (0.0,0.0,0.0) - a red surface absorbs all blue wavelengths. A black surface always appears black, even if lit with white light. If we write those out, it should become clear:

(1,1,1) lit by (1,0,0) = (1,0,0)
(1,0,0) lit by (0,0,1) = (0,0,0)
(0,0,0) lit by (x,y,z) = (0,0,0)

It is, of course, multiplication of the colour components. A quick multiplyColor method:

private Color multiplyColor(Color c1, Color c2) {
    float[] c1Comp = c1.getColorComponents(null);
    float[] c2Comp = c2.getColorComponents(null);
    return new Color(c1Comp[0] * c2Comp[0],
                     c1Comp[1] * c2Comp[1],
                     c1Comp[2] * c2Comp[2]);
}

and then apply that to the lit colour:

litColor = multiplyColor(litColor, surfaceColor);

and then you have coloured polygons:



As Shed Seven once sang, it's getting better all the time.

Cut out the middle man and just download the source. Not least because there's plenty of other tinkering I've done with the code. Of most interest:
  • There's a new BasicSceneObject, XYPlane, which provides the "back wall" effect. Notice that the rotate() method is overridden, with no implementation, which means it stays static whilst the other objects in the scene rotate in front of it.
  • The pipeline was previously using ArrayLists to store the list of polygons. The problem with this is that the backface culling does a remove() on the list, which is not very efficient for ArrayLists, because they then have to shuffle other objects in the list down the backing array. By changing to a LinkedList, for which removals are O(1) (simply change pointers), performance is improved (see the sketch after this list).
  • For some sexy debugging, the InfoPanel class allows us to draw some basic info in the top left of the panel
  • Now we've got spotlights in the scene, it's useful to be able to move them around. There are some extra controls:
    • Space cycles through modes of 1) rotating objects, 2) moving the focus point of the current light, 3) moving the position of the current light, 4) moving the camera (wait, not yet!).
    • In MOVE_OBJECT mode, clicking and dragging rotates around the X and Y axes. Using the scroll wheel (or edge drag on a touchpad) rotates around the Z axis.
    • In MOVE_LIGHT_POSITION mode, clicking and dragging moves the light source position in XY. Holding CTRL whilst doing so moves the light backwards and forwards.
    • In MOVE_LIGHT_DIRECTION mode, clicking and dragging moves the focus point of the light in XY. Holding CTRL whilst doing so moves the focus point backwards and forwards. Using the scroll wheel changes the size of the falloffAngle of the light, and holding CTRL while scrolling changes the fullIntensityAngle.
    • In both MOVE_LIGHT_POSITION and MOVE_LIGHT_DIRECTION, pressing N will cycle control through available spotlights (Warning: this is a bit of hackery - if you don't have a spotlight in the scene, this will go into an infinite loop...)
    • In all modes, clicking the mouse will toggle between wireframe and full mode.
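
To illustrate that ArrayList/LinkedList point from above, here's a minimal sketch of a cull pass that removes polygons through an iterator - renderScene and the canBeCulled() check are assumptions based on the series code:

List<Primitive> renderScene = new LinkedList<Primitive>();

// Iterator.remove() on a LinkedList just relinks pointers (O(1));
// on an ArrayList it shuffles the tail of the backing array (O(n))
for (Iterator<Primitive> it = renderScene.iterator(); it.hasNext(); ) {
    if (it.next().canBeCulled(cameraPosition)) {
        it.remove();
    }
}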
That's a decent slab of work. A simple one for next time - adding some perspective.

Wednesday 8 April 2009

Adventures in 3D: Part VII - Matrix Revolutions

(What dastardly cunning, a piece about matrices on the 10th anniversary of The Matrix)

I already warned you that there was matrix maths coming up, so hold on to your hats. Once you've got to know them, you'll see that the principle of matrices is actually pretty simple, in that they just encode relatively complex equations in a simple form. Teaching matrix maths is outside the scope of this series, so I'll trust you'll do your own reading and just dive on in. I've also borrowed a Matrix class, as building our own is sure to be more of an education in bugfixing than 3D graphics.

Our ultimate aim at the moment is to change our viewpoint - the code at the moment has us fixed in one position and able to spin the world. We want to fix the world and be able to move around it, you know, like this. But first we'll slide in gently with using matrices, as they're a handy way to handle rotation.

Naturally you can, at the drop of a hat, quote the formula for rotation of points around an axis in 3D. Around the X axis, that is

// capture the original y first - the second line needs the pre-rotation value
double yOld = y[i];
y[i] = (yOld * Math.cos(r)) - (z[i] * Math.sin(r));
z[i] = (z[i] * Math.cos(r)) + (yOld * Math.sin(r));


Assuming we represent our 3D points as a column matrix [x, y, z]^T, these two equations can be neatly made into a matrix:

|1    0       0   |
|0  cos(r) -sin(r)|
|0  sin(r)  cos(r)|


and we can also do the same for rotation around Y and Z axes:

| cos(r)  0  sin(r)|
|   0     1    0   |
|-sin(r)  0  cos(r)|

|cos(r)  -sin(r)  0|
|sin(r)   cos(r)  0|
|  0        0     1|


So with our new Matrix class we can, for any given angle r, construct a matrix that encodes the rotation around the appropriate axis. When rotate() is called on the BasicSceneObject, we can build that matrix, and I'll add an overloaded form of rotate() on the abstract SceneObject so we can pass in that matrix to do the rotation. We push our points into a 3x1 column matrix, multiply that by the 3x3 rotation matrix, then get the values from the result matrix and put those back into our points.

public void rotate(RotationMatrix rotationMatrix) {
    for (int i = 0; i < 3; i++) {
        Matrix point = new Matrix(new double[][] {{x[i]}, {y[i]}, {z[i]}});
        Matrix result = rotationMatrix.times(point);
        x[i] = result.get(0,0);
        y[i] = result.get(1,0);
        z[i] = result.get(2,0);
    }

    normal = getNormal().normalise();
}


To build the rotation matrix, I created a RotationMatrix class, which is really just a utility class for building the matrices specified above, given an angle and an axis of rotation.

public static RotationMatrix getInstance(double theta, RotationAxis axis) {
    switch(axis) {
    case X:
        return new RotationMatrix(new double[][] {
            {1, 0, 0},
            {0, cos(theta), -sin(theta)},
            {0, sin(theta), cos(theta)} });
    case Y:
        return new RotationMatrix(new double[][] {
            {cos(theta), 0, sin(theta)},
            {0, 1, 0},
            {-sin(theta), 0, cos(theta)} });
    case Z:
        return new RotationMatrix(new double[][] {
            {cos(theta), -sin(theta), 0},
            {sin(theta), cos(theta), 0},
            {0, 0, 1} });
    }
    return null;
}


The final piece is to create the matrix and pass it to rotate() when the mouse is moved.

RotationMatrix yRot = RotationMatrix.getInstance(xangle, RotationAxis.Y);
RotationMatrix xRot = RotationMatrix.getInstance(yangle, RotationAxis.X);
for (SceneObject d : scene) {
    d.rotate(yRot);
    d.rotate(xRot);
}


Give that a bash, and watch in amazement as your scene does exactly the same thing that it's always done. Except in a bit of a neater way. Which is no bad thing, right? But we can make it even better than that. One lovely property of matrices is that if you have two matrices to do two rotations, you can just multiply the two matrices and get a single matrix that does both rotations in one step:

RotationMatrix yRot = RotationMatrix.getInstance(xangle, RotationAxis.Y);
RotationMatrix xRot = RotationMatrix.getInstance(yangle, RotationAxis.X);
RotationMatrix totalRot = xRot.times(yRot);
for (SceneObject d : scene) {
    d.rotate(totalRot);
}


Great stuff. I love you matrices. If you love matrices too, download the source.

Saturday 4 April 2009

Adventures in 3D: Interlude

As mentioned in the intro to this series, this is largely an unplanned foray into 3D graphics, and the code presented here is as I write it, without necessarily being the best or neatest way to do things. At this point, I feel a need to do some refactoring on the code to try and make a solid base to build on further. These changes aren't necessarily instructive in themselves, so feel free to skip this bit if you like, although if you're going to carry on I suggest you download the source so that future entries make sense. Even if you do continue reading, there are some bits that I'm simply going to point out rather than describe why I've made those decisions.

So far we've only created one object, our lonely spheroid. Naturally you are already deep into planning your own FPS that will, like, totally make Halo look like Wolfenstein in comparison, and so we'll need to start thinking about creating and managing multiple objects. An object is simply a bunch of polygons that are glued together - when the object moves 3 units to the left, all the polygons in that object move 3 units to the left. Let's change the class hierarchies around a bit. Top of the tree is a SceneObject, something that can be a) rotated and b) drawn. Two classes extend it. A BasicSceneObject is an object such as a sphere or a cube, or some other 3D shape we may wish to create. BasicSceneObjects are really just a way of grouping together the other type of SceneObject, which is a Primitive, the 2D polygons that make up that object - to all intents and purposes, this is our Triangle class, although we could just as well render objects with squares, pentagons, or icosagons.

The BasicSceneObject class defines how to draw and rotate multiple Primitives - just loop over every Primitive in the object - and also provides a method to get at its polygons. This becomes important because when sorting for z-order, we need to consider all polygons in the scene together at the same time, for instance if two objects overlap. The BSO also defines offsets for x,y,z, which define where the object is in the world, and a translate() method to move the object.
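
For orientation, here's a rough sketch of that hierarchy - any field or method name beyond those mentioned above is my assumption:

public abstract class SceneObject {
    public abstract void rotate(double xAngle, double yAngle);
    public abstract void draw(Graphics2D g);
}

public class BasicSceneObject extends SceneObject {
    protected List<Primitive> polygons = new LinkedList<Primitive>();
    // where the object sits in the world
    protected double xOffset, yOffset, zOffset;

    public List<Primitive> getPolygons() {
        return polygons;
    }

    public void translate(double dx, double dy, double dz) {
        xOffset += dx;
        yOffset += dy;
        zOffset += dz;
        // applying the new offsets to each Primitive is elided here
    }

    public void rotate(double xAngle, double yAngle) {
        for (Primitive p : polygons) {
            p.rotate(xAngle, yAngle); // assumes Primitive shares the rotate() signature
        }
    }

    public void draw(Graphics2D g) {
        for (Primitive p : polygons) {
            p.draw(g);
        }
    }
}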

Now we've got a basis for objects, we can add some concrete object classes. We simply extend BasicSceneObject, and in the constructor define how to build it in terms of Primitives. So there's a Spheroid object, which just uses the code we had in createScene() previously, and a Cuboid. The Cuboid just defines 12 polygons (6 faces of 2 triangles). The createScene() method now just becomes pretty simple:

BasicSceneObject sphere = new Spheroid(100, 60, 60, 50);
sphere.translate(-100, -30, 100);
BasicSceneObject sphere2 = new Spheroid(40, 30, 10, 50);
sphere2.translate(50, 20, -10);
BasicSceneObject cube = new Cuboid(40, 40, 40);
cube.translate(100, 100, 30);

scene.add(sphere);
scene.add(sphere2);
scene.add(cube);



On the lighting front, until now the light has simply been a hardcoded vector in the Triangle class. Now there's a LightScene object, which could contain multiple Lights. A Light has a direction, a position (which is not yet taken into account), and a colour (also not used yet), and as things progress may have some other characteristics specific to a particular type of lighting (e.g. an ambient light, spotlight etc.). The LightScene is passed to a light() method on the Primitive class to decide what colour the polygon should be rendered with.

The final notable change is that you may have noticed the Y-axis problem. That is, traditionally the Y axis points up. But in Java 2D, the Y axis goes down the screen, so essentially we're rendering the scene upside down. There's a simple answer to this, and it's back in the AffineTransform class we first met way back in Part I. That time, we cheated by moving the axis origin to the centre of the screen with the Graphics2D.translate() method, so we never had to actually touch the AffineTransform ourselves. This time, we'll create an actual AffineTransform which represents a matrix:

|1  0  width/2 |
|0 -1  height/2|
|0  0     1    |


which, once you've familiarised yourself with matrix maths, you'll see means

x' = x + (width/2)
y' = -y + (height/2)


Although we're not explicitly stating it as such, this is a model-to-device transformation, the last step in our pipeline. As a minor optimisation, the AffineTransform object is created ahead of time and reused in each call to paintComponent(), rather than a new Transform object being created each time. However, as the panel can be resized, we catch the call to setBounds() and recreate the Transform when required.
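
A sketch of that wiring (the names are mine; the AffineTransform constructor takes the matrix entries column by column):

private AffineTransform modelToDevice;

@Override
public void setBounds(int x, int y, int width, int height) {
    super.setBounds(x, y, width, height);
    // {{1, 0, width/2}, {0, -1, height/2}} - flips Y and centres the origin
    modelToDevice = new AffineTransform(1, 0, 0, -1, width / 2.0, height / 2.0);
}

protected void paintComponent(Graphics g) {
    Graphics2D g2 = (Graphics2D) g;
    // concatenate the model-to-device transform before drawing the scene
    g2.transform(modelToDevice);
    // ...render the scene...
}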

Now that things are a bit better defined, download the source and let's move on.

Tuesday 31 March 2009

Adventures in 3D: Part VI - Faster Pussycat! Cull! Cull!

Oh look, the return of the Adventures in 3D series - you might want to start at the beginning

Last time out, we had a nice shaded sphere spinning around. First things first, a sphere is very boring to rotate - if it weren't for the polygons, you wouldn't even notice. So let's jam a little bit of maths in there so that it's a bit more obvious. In the createScene() method, add a factor to the calculation of each Point position to modify the rotundness. Note the 0.8 and 0.4. This will give you something akin to a Trebor Softmint.

points[phi][theta] = new Point(
    radius * Math.cos(stp * phi) * Math.sin(stp * theta),
    radius * 0.8 * Math.sin(stp * phi) * Math.sin(stp * theta),
    radius * 0.4 * Math.cos(stp * theta));


If you have any sense of adventure, you will probably already have played around with the line

int STP = 60;


in the createScene() method. This line determines how many latitude/longitude sections the sphere is divided into, and therefore, the higher this number, the more polygons that make up the sphere, and the smoother it will appear. With a value of 10, the sphere has a total of 100 polygons and will appear very obviously made up of triangles, much like the 3D graphics of yore. A value of 100 will give you something much better looking, using 10000 polygons. Of course, this is all a trade-off - the time to render 10000 polygons is not insignificant, and you'll find that the rotation is much less responsive.

Polygon count is the rawest measure of 3D graphics performance. More polygons, in theory, equals better graphics. But you can't just up the numbers, because it all takes time. So you can try to find a way to render polygons on the screen quicker, or you can reduce the number of polygons you have to render. If you want to take the latter, you can either make your graphics less detailed, or you can just plain cheat. Guess which option we're going for?

In any given frame in our 3D scene, there are a bunch of polygons that we're just not going to see, which are those on the other side of the shape, facing away from us. In the case of a sphere, that's fully half of all the polygons we're drawing every frame which we don't need to draw. So there's a simple conclusion - let's not draw them. This is Backface Culling.

Working out which faces you don't have to draw turns out to be very simple. You'll remember that in Part IV we discussed normals, which are vectors telling us which way a polygon is facing, and the dot product, which tells us something about whether two vectors are pointing the same way or not. To check for a backwards facing polygon, we'll compare its vector normal to a vector that represents the direction we're looking in. In the Triangle class:

private Vector viewer = new Vector(0, 0, -1);

public void draw(Graphics2D graphics) {
    Vector normal = getNormal().normalise();

    // backface culling
    double cull = normal.dotProduct(viewer);
    if (cull > 0) return;

    ...rest of the method...
}


Quite simply, get the vector normal for this polygon, normalise it (not strictly necessary here, but we'll reuse it later), and then find the dot product with the view vector. Remember that a result of zero means that one vector is orthogonal (i.e. at 90 degrees) to the other. A result of greater than zero means that they form an angle less than 90 degrees, and so if the view vector is pointing away from the viewer, so is the vector normal, to some extent. If that's the case here, we return from the method, so this polygon is not drawn at all on screen.

There's one gotcha here - for the normal to point in the expected direction, the points in your polygons must be defined in a clockwise direction. If you define the points anti-clockwise, things can still work, but the vector normal for polygons facing away from you will point towards you, so the check becomes if(cull < 0) instead.

Let's do some unscientific testing. I added some code (the static getAndResetDrawnCount() method) to the Triangle class so we can keep count of how many polygons have actually been drawn, and retrieve that to display at the end of each frame render. I also added some very simple timing code to time how long each frame takes to display.
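
The counting code is nothing fancy - something like this (a sketch; the increment sits at the end of a successful draw() call):

private static int drawnCount = 0;

// call this at the end of draw(), once the culling check has passed
private static void incrementDrawnCount() {
    drawnCount++;
}

public static int getAndResetDrawnCount() {
    int count = drawnCount;
    drawnCount = 0;
    return count;
}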

Without backface culling present (and using an STP value of 100), the output shows that all 10000 polygons were rendered, and the time varies but a quick eye grep suggests it averages (on my machine) around 27-30ms. When the two lines of culling code are added, the average number of objects actually drawn to the screen drops to around 5080, and the average time is somewhere around 15ms - as you might expect if it's doing half the work.

By now you must surely be taking your seat in the pantheon of the 3D gods. If not, cheat by downloading the source. In either case, we'll be coming back to earth with a bump next time when we look at introducing a camera. Warning: contains matrix maths.

Wednesday 11 February 2009

Hot Gigolo (#324)

For many years I've led an extraordinary double life. You wouldn't think it to look at me, but hiding under my mild mannered exterior is the equally mild mannered Reverend Bingo. The good Reverend goes back at least 13 years now, to my first steps on the net during my university years. Heck, I even used it on some Telnet MUD type thing, the name of which is lost to time and memory. Over the years many have found deep meaning to the name - some kind of comment on the church and their relationship with modern culture - but the name wasn't my invention. I seem to have borrowed it and never quite got round to giving it back.

Generally I shorten it to revbingo, as in the url for this blog, and as far as I'm concerned it's everything a netname should be. It's short, it's fairly memorable, it's interestingly cryptic and it's slightly silly without being downright embarrassing ("Ok Grandma, write this down - it's H-O-T-G-I-G-O-L-O-3-2-4 at gmail.com"). Most importantly, it's unique - I haven't found a website yet where I couldn't register with it as a username.

At least, I thought it was unique. One thing I've never really done in those 13 years is to google for "revbingo". Let's give it a go, eh? First two results, Amazon review. That's me alright - that book was terrible. Flickr, me. RateMyCover, me. Ubuntu Forums, me. Accidentally formatted my Sansa, m... wait! I've never owned a Sansa, let alone wiped one accidentally. Twitter, me. Spring RCP (ugh), me. Ubuntu again, me. Launchpad, me. Zoomr, me. MoneySupermarket - not me. Xandros - not me. Rudius - not me. Someone somewhere is polluting my brand.

Now, having borrowed the name in the first place, I guess I ought to not be surprised if a few other people use it too. But it brings home the importance we place on identity, especially in a context like the internet where real identity is all too difficult to establish. How does someone googling for me on the net know what's me and what's not? There's nothing about the other posts that particularly concerns me. I guess I could live with folks thinking that I had reversed into a van or tried to install Half-Life on Linux (although let's be clear, I would never admit to having been a Linux n00b :). It's just that it's not me. The fact that these other uses only pop up here and there in amongst my own usage makes it even more likely that someone would believe it's me.

Which makes me wonder, where have these people gone since? Have they discovered that I've already taken that username on gmail, twitter, flickr et al and gone for something else? (In a slight panic, I've just bought up www.revbingo.com). Did they decide that revbingo was a really poor username?

I guess there's nothing else for it. Just call me HotGigolo324.