3D Rendering Explained

Tom Arah sheds some light on the dark arts of 3D rendering.


After you’ve created and arranged your models, applied your materials and set up your lights, it’s only when you hit your application’s Render command that your mathematically-defined 3D scene emerges slowly into more or less realistic life. Successful rendering is the key to successful 3D, but few users have much idea of just what is involved. And for good reason – the underlying theory is technical and the implementations complex. However, to take full control of such a central process you really need to understand the principles at work. Here then is a simplified layperson’s guide to rendering, right through from basic ray casting to today’s state-of-the-art global illumination, image based lighting and ambient occlusion.

[Figure: HDRI example]

These days even budget 3D applications such as Carrara 5 offer advanced rendering features such as IBL.

Ultimately rendering, like vision itself, is all about light. Light in the form of photons is emitted from light sources, bounces around real world objects and eventually is recorded on the eye’s retina. The problem in rendering terms is that the sheer number of photons buzzing around the world in front of you is unimaginably large, making them impossible to model directly. Fortunately, however, because light travels in predictable straight paths or “rays”, we can reverse engineer the problem. By working backwards we can slash the number of rays that we are interested in to the tiny fraction that actually end up recorded on our eye. Put this in computer terms and we only need to track those rays that pass through each pixel of the onscreen image on their way to the camera/eye. To do this we effectively treat the camera as a light source and cast out rays from it through the bitmap grid of the screen (or of the rendered image if you aren’t rendering to screen) until they hit the nearest object. The rendered pixel value is then worked out based on the interaction of light and material properties at that point.
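
To make this concrete, here’s a minimal Python sketch of the eye-first loop, assuming a toy scene of spheres and a camera fixed at the origin looking down the z axis – the Sphere class and the pinhole camera maths are illustrative inventions, not any particular renderer’s API:

    import math

    # A toy primitive: a sphere with a centre, a radius and a flat RGB colour.
    class Sphere:
        def __init__(self, centre, radius, colour):
            self.centre, self.radius, self.colour = centre, radius, colour

        def intersect(self, origin, direction):
            # Distance t along the ray to the nearest hit, or None for a miss.
            ox, oy, oz = (origin[i] - self.centre[i] for i in range(3))
            b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
            c = ox * ox + oy * oy + oz * oz - self.radius ** 2
            disc = b * b - 4 * c         # direction is unit length, so a == 1
            if disc < 0:
                return None
            t = (-b - math.sqrt(disc)) / 2
            return t if t > 0 else None

    def ray_cast(scene, width, height):
        image = []
        for y in range(height):
            row = []
            for x in range(width):
                # Cast a ray from the eye through this pixel of the bitmap grid.
                dx = (x + 0.5) / width - 0.5
                dy = 0.5 - (y + 0.5) / height
                length = math.sqrt(dx * dx + dy * dy + 1)
                direction = (dx / length, dy / length, 1 / length)
                # Test every object and keep the hit nearest the eye.
                nearest, colour = float("inf"), (0, 0, 0)   # black background
                for obj in scene:
                    t = obj.intersect((0, 0, 0), direction)
                    if t is not None and t < nearest:
                        nearest, colour = t, obj.colour
                row.append(colour)
            image.append(row)
        return image

    image = ray_cast([Sphere((0, 0, 5), 1, (255, 0, 0))], 80, 60)

The nested pixel and object loops are exactly where the millions of intersection tests discussed below come from.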

The massive simplification involved in this backwards, eye-first “ray casting” approach cuts out more than 99.9% of all calculations and makes 3D rendering possible. However don’t be fooled into thinking it’s a simple operation. Say you are producing an 800 x 600 render of a scene containing 30 objects. Before each pixel’s colour can be calculated each object’s geometry must be tested against each ray cast to determine just which point is nearest the eye. This means that you are immediately talking about 14,400,000 (800 x 600 x 30) complex intersection tests! Breaking down the scene into blocks can cut out some unnecessary processing, but it’s no wonder that rendering tends to be the perfect time to put the kettle on.

And ray casting is only the beginning – things are about to get a lot more complex and time-consuming. To begin with, the image quality of a simple render like this would be unacceptably poor due to the pixelated stair-step effect, or “jaggies”, that appear along edges that aren’t completely horizontal or vertical. The solution is to anti-alias the render, which involves breaking down each pixel into subpixels – grids typically vary between 2x2 and 16x16 depending on the desired end accuracy – working out the returned colour for each and then averaging the results. If you were to do this for every pixel you might as well just increase your image dimensions accordingly and take the massive performance hit involved. The big advantage of anti-aliasing is that it can be applied intelligently in a final render pass based on the geometry of the scene and, where necessary, on marked colour shifts in textures.
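
As a sketch of the idea, here’s how a single pixel might be supersampled on a 4x4 grid; sample_colour stands in for the full ray cast above, and the hard diagonal edge is just a toy stand-in for scene geometry:

    def antialias_pixel(x, y, sample_colour, grid=4):
        # Average a grid x grid block of sub-pixel samples for pixel (x, y).
        total = [0.0, 0.0, 0.0]
        for sy in range(grid):
            for sx in range(grid):
                # Aim each sample at the centre of its sub-pixel cell.
                r, g, b = sample_colour(x + (sx + 0.5) / grid,
                                        y + (sy + 0.5) / grid)
                total[0] += r; total[1] += g; total[2] += b
        n = grid * grid
        return (total[0] / n, total[1] / n, total[2] / n)

    # Toy stand-in for a full ray cast: a hard black/white diagonal edge.
    edge = lambda x, y: (1.0, 1.0, 1.0) if x > y else (0.0, 0.0, 0.0)
    print(antialias_pixel(10, 10, edge))   # an in-between grey along the edge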

Anti-aliasing produces much smoother results, but frankly at the moment this is the least of our worries. The real problem is the lack of realism in our image. A ray cast render just doesn’t begin to do justice to the real world scene – everything looks strangely flat and unnatural as there is no light-based interaction between the elements of the scene. This is most immediately apparent in the lack of reflection, which matters much more than it might at first appear. Look around you and you’ll see that it’s not just mirrors that reflect but metal surfaces, polished wood, ceramics, glass and so on.

The solution for rendering reflection accurately is to follow the cast ray on from the object it first hits, tracing its journey in reverse to find the object it bounced off previously – an extension of the ray casting principle known as “ray tracing”. And because light travels in predictable straight lines, it’s easy enough to work out in which direction to send out a secondary “reflection ray” based on the angle at which the primary ray intersected the reflective surface. This intersection-seeking reflection ray is used to find the nearest point in the new direction; the colour of this secondary sample is then calculated and the value fed back into the calculation for the primary sample. If the secondary ray hits another reflective surface the process is simply repeated recursively until a fixed limit on the number of secondary rays, also known as the “ray depth”, is reached or until the input into the final pixel value falls below a certain threshold.
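
A minimal sketch of the recursion, assuming a scene function that returns the nearest hit; the Hit record and the toy floor scene are made up for illustration:

    from dataclasses import dataclass

    @dataclass
    class Hit:
        point: tuple
        normal: tuple
        colour: tuple         # the directly lit ("local") colour at the hit
        reflectivity: float   # 0 = matt, 1 = perfect mirror

    def reflect(d, n):
        # Mirror direction d about the unit surface normal n: r = d - 2(d.n)n.
        k = 2 * sum(di * ni for di, ni in zip(d, n))
        return tuple(di - k * ni for di, ni in zip(d, n))

    MAX_DEPTH = 4             # the "ray depth" limit

    def trace(scene, origin, direction, depth=0):
        hit = scene(origin, direction)    # nearest-hit function, assumed
        if hit is None:
            return (0.0, 0.0, 0.0)        # the ray left the scene
        if depth >= MAX_DEPTH or hit.reflectivity == 0:
            return hit.colour
        # Recurse along the secondary reflection ray and feed its colour
        # back into the primary sample.
        secondary = trace(scene, hit.point, reflect(direction, hit.normal),
                          depth + 1)
        k = hit.reflectivity
        return tuple((1 - k) * c + k * s for c, s in zip(hit.colour, secondary))

    # Toy scene: a half-reflective floor under a black "sky".
    def floor_scene(origin, direction):
        if direction[1] < 0:              # looking downwards: hit the floor
            return Hit((0, 0, 0), (0, 1, 0), (0.8, 0.4, 0.2), 0.5)
        return None

    print(trace(floor_scene, (0, 2, 0), (0, -1, 0)))  # floor blended with sky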

[Figure: ray tracing]

Ray tracing extends the ray casting approach to vary the primary sample’s colour based on secondary samples.

That’s reflectivity catered for, but that still leaves another major range of materials that we need to be able to replicate. Semi-transparent surfaces such as glass, water and see-through plastic might be less common than reflective surfaces, but they stand out like sore thumbs if they aren’t handled correctly – imagine a solid white glass of water. Again the solution for handling transparency is ray tracing: sending out a secondary ray (or rays) through the surface to find the nearest object the light would have arrived from, and then propagating the colour value back to determine the final rendered pixel value.

This means that the glass of water no longer appears solid: instead you can see the objects behind it. However, to make the effect truly believable another characteristic of light has to be taken into account: refraction. When light passes between materials of different optical density, such as from air into glass and back to air again, it changes direction based on its angle of incidence and the ratio of the two materials’ refractive indices (Snell’s law). To reproduce this effect the ray tracing model has to be tweaked to allow the direction of “refraction rays” to be deflected based on a refraction setting in the object’s material.
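
A sketch of the deflection itself, using the standard Snell’s law construction; the 1.5 index for glass is just the usual textbook value:

    import math

    def refract(d, n, eta):
        # Bend unit direction d as it crosses a surface with normal n, where
        # eta is the ratio of refractive indices (old medium / new medium).
        cos_i = -sum(di * ni for di, ni in zip(d, n))
        sin2_t = eta * eta * (1 - cos_i * cos_i)
        if sin2_t > 1:
            return None                   # total internal reflection
        cos_t = math.sqrt(1 - sin2_t)
        return tuple(eta * di + (eta * cos_i - cos_t) * ni
                     for di, ni in zip(d, n))

    # A ray hitting glass (index 1.5) at 45 degrees bends towards the normal.
    d = (math.sin(math.radians(45)), -math.cos(math.radians(45)), 0.0)
    print(refract(d, (0.0, 1.0, 0.0), 1.0 / 1.5))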

Thanks to ray tracing the realism of our rendered scene is certainly improving, but now we hit another major issue. So far the lights in our scene are casting light, but crucially they aren’t casting shadows. For this to happen our primary sample points need to know if they are actually being lit by each of the scene’s light sources or not. To find this out you need to imagine that you are standing at a sample point and then look at each light in turn to see if you can see it – if you can’t then you’re in that light’s shadow. In terms of ray tracing, this means sending out a further set of intersection-seeking secondary rays known as “shadow rays”. Unlike reflection and refraction rays, however, shadow rays don’t need to find the nearest object or bring back colour information; they just need to know if they hit any opaque object along the path to the light. If they don’t, the sample is lit; otherwise the light is ignored and the sample is treated as “occluded”, or in shadow.
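
Because a shadow ray only needs an any-hit answer rather than a nearest-hit search, the test is simpler and cheaper. A sketch, reusing the toy Sphere from the ray casting example:

    def in_shadow(sample, light, occluders):
        # True if any opaque object blocks the path from sample point to light.
        d = tuple(l - s for l, s in zip(light, sample))
        dist = sum(c * c for c in d) ** 0.5
        direction = tuple(c / dist for c in d)
        for obj in occluders:
            t = obj.intersect(sample, direction)
            if t is not None and t < dist:   # hit before reaching the light,
                return True                  # so we can stop at the first one
        return False

    blocker = Sphere((2.5, 2.5, 0.0), 1, (0, 0, 0))
    print(in_shadow((0.0, 0.0, 0.0), (5.0, 5.0, 0.0), [blocker]))  # True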

Again this is a step forward, but the resulting ray traced shadow is hardly realistic. To begin with, its edge is just far too hard, with each pixel either completely lit or completely unlit. Real shadows aren’t pin-sharp: because real light sources have physical size, shadows have a border of “penumbra”, or partial shadow. To produce this softening effect some applications let you use area light sources, which are effectively collections of distributed point sources. Using a 10x10 area light, for example, would produce the desired soft shadow edge.
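
A sketch of the 10x10 idea, built on the in_shadow test above; the fraction of the panel’s point sources that can be seen gives the penumbra directly:

    def area_light_visibility(sample, centre, size, occluders, grid=10):
        # Fraction of a square area light visible from the sample point:
        # 1.0 = fully lit, 0.0 = umbra, anything in between = penumbra.
        lit = 0
        for i in range(grid):
            for j in range(grid):
                # One point source in the panel (lying flat in the x/z plane).
                lx = centre[0] + size * ((i + 0.5) / grid - 0.5)
                lz = centre[2] + size * ((j + 0.5) / grid - 0.5)
                if not in_shadow(sample, (lx, centre[1], lz), occluders):
                    lit += 1
        return lit / (grid * grid)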

[Figure: ray traced shadows]

Ray traced shadows are too hard to be realistic.

However the use of area lights massively ramps up the number of shadow ray intersection tests required for each sample, and so the rendering time. A much more common solution is to move beyond pure ray tracing in a process known as “shadow mapping”. First the view from a light source is recorded as a greyscale “shadow map” containing depth information. During rendering this map of all the objects and areas lit by the light can be consulted to see if each sample should be lit or shadowed, and also to soften the effect near the shadow edge. The result is more realistic soft shadows produced much more quickly than ray traced hard shadows.
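
A sketch of the two passes, with an orthographic light looking straight down; the 64-texel map, the table-top scene and both helpers are invented for illustration:

    # Orthographic light looking straight down the y axis over a 10 x 10 area.
    SIZE, RES = 10.0, 64

    def to_texel(x, z):
        # Map world (x, z) into shadow map coordinates.
        return int((x / SIZE) * RES), int((z / SIZE) * RES)

    def build_shadow_map(surface_points):
        # Pass 1: from the light's view, keep the nearest (highest) surface
        # at each texel. A real renderer would rasterise the scene here.
        shadow_map = [[float("-inf")] * RES for _ in range(RES)]
        for (x, y, z) in surface_points:
            u, v = to_texel(x, z)
            shadow_map[u][v] = max(shadow_map[u][v], y)
        return shadow_map

    def is_lit(shadow_map, point, bias=0.01):
        # Pass 2: a sample is lit if nothing in the map sits above it. The
        # small bias stops surfaces shadowing themselves ("shadow acne").
        u, v = to_texel(point[0], point[2])
        return point[1] + bias >= shadow_map[u][v]

    # A table top at height 1 hovering over a floor at height 0.
    table = [(x / 10, 1.0, z / 10) for x in range(30, 70) for z in range(30, 70)]
    shadow_map = build_shadow_map(table)
    print(is_lit(shadow_map, (5.0, 0.0, 5.0)))   # False: under the table
    print(is_lit(shadow_map, (0.5, 0.0, 0.5)))   # True: open floor

The softening then comes from filtering several neighbouring texels per lookup rather than taking a single hard yes/no answer.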

Using area lights or shadow mapping we can more or less accurately reproduce the shadow’s soft edge – but that still leaves the shadow itself looking completely unrealistic. Imagine we have set up a scene of a table lit from above. Ray trace the scene and everything under the table comes out pitch black! In real life of course we can still see the area under the table because light is reflected off the ceiling, walls, floor and the objects in the room. And it’s not just the intensity of the light that is reflected, its colour is too – that’s why a white object near a green wall takes on a greenish hue and vice versa. To produce a realistic render for scenes like these, the fundamental importance of indirect lighting has to be recognised.

Within a ray trace environment the simplest and most common workaround for this is simply to add a constant ambient light factor. However such a uniform ambient addition is clearly an appallingly crude approximation of the varying intensities and colours of real world indirect lighting and has the undesirable effect of flattening out the image. For many years the best practical solution was to manually fake the effect of natural indirect lighting by creating a customised lighting setup with multiple light sources of varied intensity and varied colour.

It’s a laborious job demanding huge skill, and even then the results are still only a crude approximation to the subtleties of real world indirect lighting. Clearly there has to be a better way that recognises that real world scenes and their objects aren’t lit only by direct light sources, but by the light that is constantly being reflected onto them by the objects around them. In other words “diffuse reflection” is the norm not the exception and every surface needs to be treated as a light source. The various attempts to take indirect lighting into account during rendering all tend to use the overriding title of “global illumination” or “GI”.

The most well-known GI approach is a light-forward system called “radiosity”. Effectively this works by breaking a scene down into representative patches and then recording how much light arrives at each patch from all parts of the scene that are visible to it. This light is then reflected out into the scene and used as the basis for a second pass, and so on. Eventually, over multiple passes, the radiosity process converges on a solution. Realistic under-table lighting at last!
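
A toy gathering version of the multi-pass idea; the three-patch scene and the hand-picked form factors (the fraction of one patch’s light that reaches another) are purely illustrative:

    def radiosity(emission, reflectance, form_factors, passes=8):
        # emission[i]: light a patch emits itself; reflectance[i]: how much
        # incoming light it re-radiates; form_factors[i][j]: fraction of
        # patch j's light that arrives at patch i.
        n = len(emission)
        current = emission[:]              # pass 0: direct emission only
        for _ in range(passes):
            gathered = []
            for i in range(n):
                incoming = sum(form_factors[i][j] * current[j]
                               for j in range(n) if j != i)
                gathered.append(emission[i] + reflectance[i] * incoming)
            current = gathered             # feed each bounce into the next pass
        return current

    # Three patches: a light, a ceiling, and a spot under the table that no
    # direct light reaches - only what the ceiling reflects back down.
    emission     = [1.0, 0.0, 0.0]
    reflectance  = [0.0, 0.7, 0.5]
    form_factors = [[0, 0, 0],
                    [0.6, 0, 0.1],
                    [0.0, 0.4, 0]]
    print(radiosity(emission, reflectance, form_factors))

Run it and the third patch ends up lit purely by what the ceiling reflects down on it.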

[Figure: radiosity]

Radiosity is particularly effective for indoor scenes with restricted lighting.

Radiosity is ideal for those indoor scenes where most lighting is indirect and where ray tracing copes badly. However, it’s intrinsically slow and not very good for scenes involving transparency and non-diffuse reflection, where ray tracing excels. To get the best of both worlds we really want to be able to graft some indirect light handling onto our existing eye-forward ray tracing model. And that’s exactly what a GI system often referred to as “Monte Carlo” offers (the name derives from the casino and is applied to systems that involve an element of random chance).

As you’d probably expect by now, this ray tracing extension involves tracing yet more intersection-seeking secondary rays from the original ray cast sample and then using their feedback to determine the final pixel value. The problem is that, unlike shadow rays looking for a simple sighting of a limited number of point light sources, we’d really like to measure the indirect light coming into every sample from the nearest point in every direction – and then repeat the whole process to find the indirect light coming into those points, and so on!

Of course this simply isn’t feasible, so compromises have to be made, trading off accuracy for speed. To begin with, only a percentage of samples are usually checked. These GI samples aren’t picked completely at random but rather are focused where they are most needed: in areas of strong contrast and where surfaces meet. Most importantly, indirect lighting rays can’t be sent out in all directions from these samples. Instead a limited number (300 is a good starting place) are sent out randomly in a dome shape. Further sprays of these random “stochastic samples” are then sent out from where they first land, usually limited to a ray depth of three or until a particular input threshold is reached. All the indirect lighting information is then propagated back for each GI sample and these values are interpolated across the scene as a whole. Once this GI pass has finished, normal ray tracing begins, but with an indirect lighting factor fed into the calculation for each pixel’s final colour value.
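
A sketch of one GI sample’s spray, reusing the Hit/scene shape from the reflection example; the 300-ray budget and depth of three follow the figures above, while the cut-down per-bounce ray count is an invented economy:

    import math, random

    RAYS_PER_SAMPLE = 300     # size of the initial stochastic "spray"
    MAX_GI_DEPTH = 3          # typical limit on indirect bounces

    def random_hemisphere_direction(normal):
        # A uniform random direction in the dome above the surface normal.
        while True:
            d = tuple(random.uniform(-1, 1) for _ in range(3))
            if 0 < sum(c * c for c in d) <= 1:
                break
        length = math.sqrt(sum(c * c for c in d))
        d = tuple(c / length for c in d)
        # Flip any ray that points down into the surface.
        if sum(di * ni for di, ni in zip(d, normal)) < 0:
            d = tuple(-c for c in d)
        return d

    def indirect_light(scene, point, normal, depth=0):
        # Average the light brought back by a random spray of rays.
        if depth >= MAX_GI_DEPTH:
            return (0.0, 0.0, 0.0)
        rays = RAYS_PER_SAMPLE if depth == 0 else 16   # economise per bounce
        total = [0.0, 0.0, 0.0]
        for _ in range(rays):
            hit = scene(point, random_hemisphere_direction(normal))
            if hit is None:
                continue              # the ray exited the scene
            bounce = indirect_light(scene, hit.point, hit.normal, depth + 1)
            for i in range(3):
                total[i] += hit.colour[i] + bounce[i]
        return tuple(c / rays for c in total)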

The resulting render can’t compete with multiple-pass radiosity for scenes where indirect lighting dominates, and the stochastic element of the process often leads to a certain graininess. However, by providing a more realistic ambient light factor that varies in intensity and colour for each sample, the results tend to be much more convincing than a simple ray trace that only takes direct lighting into account. The softer shadows and diffuse colour reflections offered by GI might not be consciously registered by the viewer, but the results just look more natural.

However this GI approach doesn’t work equally well for all scenes. Without surrounding objects – walls, floors and ceilings in particular – most GI rays will exit a scene without bringing back any indirect lighting information to feed into the end pixel value. This isn’t just unhelpful, it’s unrealistic: in the real world indirect lighting arrives from every angle, even outdoors. The solution is to surround the scene with a “sky dome” so that each stochastic sample ray contributes. Naturally sky domes are a GI feature that proves particularly important for dedicated outdoor scene renderers, where each sampled point in the realistic atmosphere can act as an indirect light source, varying in colour and intensity.

[Figure: global illumination]

GI handling can also be used by naturalistic outdoor scene renderers.

Tied in with the use of sky domes is another important GI technology called “image based lighting” or “IBL”. By specifying a special omni-directional HDRI (high dynamic range image) called a “light probe” as a luminance texture map for the sky dome, the scene is both surrounded and indirectly illuminated by a bitmap in which each pixel corresponds linearly to the light levels in the original scene. Once set up, you often only have to add one direct light source to replicate the sun, or can get away without any.
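
A sketch of the lookup at the heart of IBL, assuming a latitude/longitude light probe already loaded as rows of linear RGB values (real HDRI loading is beyond the scope of this sketch):

    import math

    def probe_lookup(hdri, direction):
        # Convert an outgoing ray direction into latitude/longitude map
        # coordinates; hdri[v][u] is a linear RGB radiance value.
        height, width = len(hdri), len(hdri[0])
        x, y, z = direction
        u = (math.atan2(z, x) / (2 * math.pi) + 0.5) * (width - 1)
        v = (math.acos(max(-1.0, min(1.0, y))) / math.pi) * (height - 1)
        return hdri[int(v)][int(u)]

    # Toy two-row "probe": bright sky above, dim ground below. Values over
    # 1.0 are the point of HDR - real light levels rather than clipped ones.
    sky = [[(3.0, 3.0, 3.5)] * 8, [(0.2, 0.2, 0.2)] * 8]
    print(probe_lookup(sky, (0.0, 1.0, 0.0)))   # straight up: bright sky

Any GI ray that exits the scene then returns the probe colour for its direction instead of black.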

Even better, IBL is unique in enabling the lighting of a computer-generated 3D model to be accurately matched to a real world scene, and the results really have to be seen to be believed. Nor is IBL just useful for outdoor scenes; it can be just as effective for indoor work. In fact the HDRI doesn’t even have to match the current scene. If you hide the image itself and just use it for indirect illumination, it’s worth experimenting with multiple light probes to see just what effects can be produced. There’s nothing to stop you rendering a model of your desk as if illuminated by Westminster Abbey!

With GI and IBL we’ve seen how ray tracing can be extended to tackle the crucial task of rendering indirect illumination. However there’s a huge downside. As we’ve seen, ray tracing itself is a massively complicated and time-consuming process. With GI added to the mix, the number of processor-intensive intersection tests and lighting calculations increases dramatically. Forget about putting the kettle on, it’s time to dig out your pyjamas. There are some pluses in that the GI solution can be saved and re-used if the indirect lighting in the scene hasn’t changed. And the indirect lighting can even be baked into texture maps where appropriate to speed up future renders and especially animation.

Even so, with current computer power GI isn’t something to be undertaken lightly. Which is where another ray tracing extension comes in, called “ambient occlusion” or “AO”. Rather than working out indirect lighting like GI, AO works out indirect shadowing, or occlusion. Again it does this by sending out a random hemispherical spray of stochastic samples to see if they hit an object. Unlike GI rays, however, AO rays don’t need to find the nearest intersection or bother about multiple passes. Instead, like shadow rays, they just need to find whether the ray is occluded, ideally within a set distance or “ray length”. Better still, there’s no need for complex colour/lighting calculations to be undertaken; the sample can simply be shaded based on the ratio of occluded to non-occluded rays. And to speed things up further you don’t need to process AO globally across the scene as a whole, but can apply it as a shader to selected objects.
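
A sketch of the shader, reusing the hemisphere sampler from the GI example; scene_blocked is an assumed any-hit test like a shadow ray, and the ray budget and length are illustrative defaults:

    def ambient_occlusion(scene_blocked, point, normal, rays=64, ray_length=2.0):
        # Fraction of the hemisphere open within ray_length: 1.0 = fully
        # open, 0.0 = fully occluded. No colour calculations needed at all.
        open_rays = 0
        for _ in range(rays):
            direction = random_hemisphere_direction(normal)
            if not scene_blocked(point, direction, ray_length):
                open_rays += 1
        return open_rays / rays

    # Shading is then just a matter of scaling the surface colour:
    # colour = tuple(c * ambient_occlusion(...) for c in base_colour)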

[Figure: ambient occlusion]

Ambient occlusion produces more natural soft shadows.

The result of ambient occlusion is that corners, holes, cracks and the areas between objects that are close to each other appear darkened, just as they do in real life. There’s none of the diffuse colour reflection offered by GI, but the soft shading effect is very realistic and natural, which is why AO is usually presented as a speedier, more practical alternative. However there’s no reason why you shouldn’t combine both GI and AO for even better end quality. In fact with ray casting, subpixel anti-aliasing, ray tracing, reflection rays, refraction rays, shadow rays, area lights, shadow maps, GI, IBL and AO all working together, you’ll be pleased to know that we’ve finally got the foundations in place for some seriously impressive rendering. Just don’t expect it to happen in real time.

Tom Arah

February 2006

