Sunday 19 April 2009

Adventures in 3D: Part VIII - A Light Touch

(Yet again, I'm deviating from the original aim of getting a camera perspective working. But this is pretty cool, so hopefully you'll forgive me.)

First, a confessional. There was a pretty fundamental error in the Point class, in that vectorTo() was implemented such that vectors were actually backwards. D'oh. Which explained why, when I actually took the time to think about where lights were coming from and how the scene was lit, things were the wrong way round. That's fixed now, along with a couple of other things that were also wrong in compensation for that error. At least it's a good lesson in taking the time to properly consider such fundamentals, rather than ploughing on with whatever works... On the positive side, all the concepts introduced thus far still stand.

Anyway, so far, the lighting on this object has been pretty dumb. The light we've modelled is just a Vector, so any object anywhere in the scene is lit from the same direction and at the same intensity, and it's also pretty boring white light. We're going to spice things up a bit. In a 3D scene, there can be various types of lighting with different characteristics of where and how the light is cast. In our case, we're going to implement two types of light - ambient light and spotlights.

Ambient lights are super super easy. They're just a background level of light that's present everywhere. It doesn't come from a point, doesn't point in any particular direction, doesn't change in intensity. Think of it like daylight on a cloudy day - the light is just kind of there, without coming from any particular place.

To make life easier, we have an abstract Light superclass. This says that a light usually has a colour, a position and a direction, although, in the case of an ambient light, we just ignore those last two. What the Light class doesn't define is how the light affects the surfaces it falls on. For that, we have an abstract light(Lightable s) method, which returns a Color, being the colour (remember that brightness is one component of a colour) that this light contributes to that surface. The Lightable interface defines two methods, getNormal() and getPosition() - any object (in our case, a Primitive) that wants to be lit must implement these two methods so that the lights can tell where the surface is and which way it's facing. You can easily see that this interface could need to define other methods in the future for more sophisticated lighting - for instance, the light() method may need to know the absorptive or reflective properties of a surface.
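
In rough outline (the exact field names don't matter much, and the choice of a Vector for the direction is just my shorthand for the structure described above), the two types look something like this:

public abstract class Light {

    protected Color color;      // the colour (and brightness) of the light
    protected Point position;   // ignored by an ambient light
    protected Vector direction; // ignored by an ambient light

    public Point getPosition() {
        return position;
    }

    // the colour this light contributes to the given surface
    public abstract Color light(Lightable s);
}

public interface Lightable {
    Vector getNormal();   // which way the surface is facing
    Point getPosition();  // where the surface is
}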

The AmbientLight class only holds one thing - the colour of the light. The implementation of the light() method is dead simple, because the light that the AmbientLight contributes to each surface is simply its colour. No need to worry about which way the surface is facing or how far away it is.

public class AmbientLight extends Light {

    public AmbientLight(Color color) {
        this.color = color;
    }

    @Override
    public Color light(Lightable s) {
        return this.color;
    }
}

Our Triangle class has a lightPolygon() method, which is where we ask all the lights in the scene to tell us what they will contribute to the final colour of this polygon. It's just a loop calling light(this) for each light. The colour from each light is added together to get a final colour to render.

public void lightPolygon(LightScene lights) {
    litColor = Color.black;
    for(Light light : lights) {
        litColor = addColor(litColor, light.light(this));
    }
}

The addColor() method is also very simple - just add each RGB component separately, naturally making sure that the component values don't go above 255.

private Color addColor(Color c1, Color c2) {
    return new Color(Math.min(255, c1.getRed() + c2.getRed()),
                     Math.min(255, c1.getGreen() + c2.getGreen()),
                     Math.min(255, c1.getBlue() + c2.getBlue()));
}

Put all that together, and define an AmbientLight with a muted colour. You don't want the ambient light to be too bright, or else it will just wash out all the other colours in the scene. I'm going to use RGB(0,0,30). The effect this has is to show up all the objects in the scene in a dark blue base light. If nothing else, it's handy for making sure that your objects are being rendered, where previously they would not have been painted if you got the lighting coordinates wrong.
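
In code, that's just a one-liner (add() here is a stand-in for however your LightScene collects its lights):

Light ambient = new AmbientLight(new Color(0, 0, 30)); // a dim blue base light
lights.add(ambient); // add() is an assumed method for registering the light with the LightScene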

Now let's try something far more interesting, the spotlight. Spotlights have a number of properties - the position of the light, the direction the light points, what colour it is, and the angle that the light spreads out at. For a more realistic representation, we also want to define how quickly the light falls off from full intensity around the edge. The first three are already taken care of in our Light superclass. The other two will be implemented in the Spotlight class, and I'll call them fullIntensityAngle and falloffAngle. For a light defined with a fullIntensityAngle of 20 and a falloffAngle of 15, surfaces within 20 degrees of the centre line of the light will be lit at full intensity, and surfaces up to another 15 degrees beyond that will be lit at an intensity that falls off linearly as the angle from the centre line increases. At 35 degrees from the centre and beyond, the spotlight contributes no light at all.
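
Sketched out, the Spotlight ends up holding something like this (the constructor shape is incidental; the precomputed cosine is explained below):

public class Spotlight extends Light {

    private double fullIntensityAngle; // degrees from the centre line lit at full intensity
    private double falloffAngle;       // further degrees over which the light fades to nothing
    private double cosFullSpread;      // cos(fullIntensityAngle + falloffAngle), precomputed

    public Spotlight(Color color, Point position, Vector direction,
                     double fullIntensityAngle, double falloffAngle) {
        this.color = color;
        this.position = position;
        this.direction = direction;
        setAngles(fullIntensityAngle, falloffAngle);
    }

    public void setAngles(double fullIntensityAngle, double falloffAngle) {
        this.fullIntensityAngle = fullIntensityAngle;
        this.falloffAngle = falloffAngle;
        // recalculate the cosine of the full spread so light() can compare
        // raw cosines rather than calling Math.acos() for every surface
        this.cosFullSpread = Math.cos(Math.toRadians(fullIntensityAngle + falloffAngle));
    }

    @Override
    public Color light(Lightable s) {
        return Color.BLACK; // placeholder - the real implementation is built up below
    }
}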

There are two main calculations to do. The first is the standard calculation we're used to: work out which way the surface faces, and if it's facing away from the light, just return Color.BLACK (as far as adding lights is concerned, Color.BLACK is a null result).

double dtFace = s.getNormal().normalise().dotProduct(lightNormal);
if(dtFace >= 0) return Color.BLACK;

where lightNormal is the normalised vector pointing in the direction of the light.
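
In this sketch that's simply the light's stored direction, normalised once:

Vector lightNormal = this.direction.normalise(); // assumes the direction lives on the Light superclass as a Vector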

Next, get a vector from the light to the surface, normalise it and calculate the dot product with lightNormal. For vectors of unit length, the dot product of the two gives the cosine of the angle between the two. At this point, we could use Math.acos() to convert back to an angle and figure out if it's within the spread of our light. But acos is a pretty expensive operation, so instead of comparing angles, we just compare raw cosine values (the cosine of the spread angle is calculated in the constructor or when the angles are changed) to see if the surface is outside the range. If it is, again, return Color.BLACK.

Point lightSource = this.getPosition();
Vector lightToPoly = lightSource.vectorTo(s.getPosition());
double dtPosition = lightToPoly.normalise().dotProduct(lightNormal);
if(dtPosition < cosFullSpread) return Color.BLACK;

Ok, now we're down to just the points that are actually lit, so we do that acos() operation to get the angle. This makes things simple, as it's a straight comparison of angles to determine how to light the surface, and it's also important because it means that when we calculate the falloff, it's linear with the angle rather than with its cosine.

Within the spread of the fullIntensityAngle, the surfaces are lit at the brightness determined by the direction they face, as usual. In the "fall off zone", the intensity of the light dims the further away you get from the centre, so we calculate a falloffFactor, which is a number from 0.0 to 1.0, by which we'll multiply the brightness in the final colour. Note that for the final colour, we create an HSB colour with the same hue and saturation as the specified light colour, and just scale the brightness.

double angle = Math.acos(dtPosition) * (180/Math.PI);

double fullSpreadAngle = fullIntensityAngle + falloffAngle;
double falloffFactor = 1;
if (angle >= fullIntensityAngle && angle <= fullSpreadAngle) {
    falloffFactor -= ((angle - fullIntensityAngle)/(falloffAngle));
}
litColor = Color.getHSBColor(colorHue,
                             colorSaturation,
                             (float) (Math.abs(dtFace) * falloffFactor * colorBrightness));
return litColor;
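
The colorHue, colorSaturation and colorBrightness values used there are just the HSB components of the light's colour; converting them once up front (rather than per surface) looks something like this:

// convert the light's RGB colour to HSB once, so only the brightness
// needs scaling for each surface that gets lit
float[] hsb = Color.RGBtoHSB(color.getRed(), color.getGreen(), color.getBlue(), null);
float colorHue = hsb[0];
float colorSaturation = hsb[1];
float colorBrightness = hsb[2];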

Throw all this together, sprinkle a few different colour lights around, use a bit of artistic licence to add some other code (see below), and what do you get?



There is no denying that's pretty damn sexy. We can also do one more thing to bring colour to the scene, and that's to give the polygons themselves some colour. We'll assign a base colour to each polygon, and adjust the final lit colour to account for the surface colour. That adjustment is not immediately obvious, but if you consider a few cases it becomes apparent, especially if you think about the colour components as floats 0.0-1.0 instead of the traditional integer 0-255. For instance, a pure white surface (1.0,1.0,1.0) lit by a pure red light (1.0,0.0,0.0) will appear pure red (1.0,0.0,0.0). A pure red surface lit by a pure blue light (0.0,0.0,1.0) will appear black (0.0,0.0,0.0) - a red surface absorbs all blue wavelengths. A black surface always appears black, even if lit with white light. If we write those out, it should become clear:

(1,1,1) lit by (1,0,0) = (1,0,0)
(1,0,0) lit by (0,0,1) = (0,0,0)
(0,0,0) lit by (x,y,z) = (0,0,0)

It is, of course, multiplication of the colour components. A quick multiplyColor() method:

private Color multiplyColor(Color c1, Color c2) {
    float[] c1Comp = c1.getColorComponents(null);
    float[] c2Comp = c2.getColorComponents(null);
    return new Color(c1Comp[0] * c2Comp[0],
                     c1Comp[1] * c2Comp[1],
                     c1Comp[2] * c2Comp[2]);
}

and then apply that to the lit colour:

litColor = multiplyColor(litColor, surfaceColor);

and then you have coloured polygons:



As Shed Seven once sang, it's getting better all the time.

Cut out the middle man and just download the source. Not least because there's plenty of other tinkering I've done with the code. Of most interest:
  • There's a new BasicSceneObject, XYPlane, which provides the "back wall" effect. Notice that the rotate() method is overridden with no implementation, which means it stays static whilst the other objects in the scene rotate in front of it.
  • The pipeline was previously using ArrayLists to store the list of polygons. The problem with this is that the backface culling does a remove() on the list, which is not very efficient for ArrayLists, because they then have to shuffle the remaining objects down the backing array. Switching to a LinkedList, where removal through an iterator is O(1) (just re-link the pointers), improves performance - there's a small sketch of the pattern after this list.
  • For some sexy debugging, the InfoPanel class allows us to draw some basic info in the top left of the panel.
  • Now we've got spotlights in the scene, it's useful to be able to move them around. There are some extra controls for manipulating the scene:
  • Space cycles through modes of 1) rotating objects, 2) moving the focus point of the current light, 3) moving the position of the current light, 4) moving the camera (wait, not yet!).
  • In MOVE_OBJECT mode, clicking and dragging rotates the objects about the X and Y axes. Using the scroll wheel (or edge drag on a touchpad) rotates about the remaining axis.
  • In MOVE_LIGHT_POSITION mode, clicking and dragging moves the light source position in XY. Holding CTRL whilst doing so moves the light backwards and forwards.
  • In MOVE_LIGHT_DIRECTION mode, clicking and dragging moves the focus point of the light in XY. Holding CTRL whilst doing so moves the focus point backwards and forwards. Using the scroll wheel changes the size of the falloffAngle of the light, and holding CTRL while scrolling changes the fullIntensityAngle.
  • In both MOVE_LIGHT_POSITION and MOVE_LIGHT_DIRECTION, pressing N will cycle control through available spotlights (Warning: this is a bit of hackery - if you don't have a spotlight in the scene, this will go into an infinite loop...)
  • In all modes, clicking the mouse will toggle between wireframe and full mode.
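
On that LinkedList point, the culling removal is just the standard iterator pattern - a rough sketch, where isFacingAway() and viewDir are stand-ins for whatever test the pipeline actually uses:

// backface culling over a LinkedList via an Iterator: remove() just
// re-links the neighbouring nodes instead of shuffling an array
// (uses java.util.Iterator, java.util.LinkedList and java.util.List)
List<Triangle> polygons = new LinkedList<Triangle>();
// ... populate the list ...
for (Iterator<Triangle> it = polygons.iterator(); it.hasNext(); ) {
    Triangle t = it.next();
    if (t.isFacingAway(viewDir)) {
        it.remove();
    }
}
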
That's a decent slab of work. A simple one for next time - adding some perspective.
