This semester I'm taking one of my requisite natural science classes. In trying to find a class which looked the least boring, I managed to find an Optics class. In Optics, we learn about the dual nature of light, a subject which has always fascinated me. In much of mathematics and physics there is an inherent duality, a separation between two objects which simultaneously brings them together. When you get to Optics, we find that light, in one of the weirdest twists ever, manages to be its own dual, separate from itself, if you will. Light, as we (maybe) well know, has both wave-like and particle-like properties. My Optics professor calls it the "Packet of Wiggling String" interpretation. This interpretation helps to explain things like the double slit experiment, and it is this particular experiment that I want to talk about now.
I've fiddled with ray tracers before in my life, but I'd never thought to try the double slit experiment in one of them, so I cooked up a little pov-ray script to test my theory that ray tracing is, in fact, classical. Granted, that's an obvious result to most, but think about it: raytracing is classical. You can't replicate the double slit experiment in Povray; more accurately, you can't treat light as a wave in Povray, only as a particle stream.[1] As far as I can tell, this gross approximation of light (limited reflection calculation, particle stream instead of wave, etc.) was based on the limitations of hardware when the technique was invented; they simply _couldn't_ simulate things like the double slit experiment. Perhaps more surprisingly, prisms don't work the way they should either, since they won't separate light based on wavelength.
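To make the contrast concrete, here is a minimal sketch (in Python, with a made-up slit spacing and wavelength, my own example rather than anything from Povray) of the fringe pattern the standard two-slit interference formula predicts. A ray tracer, treating light as straight-line particle streams, would instead just render two bright patches behind the slits, with no dark fringes between them.

```python
import math

# Two-slit interference on a distant screen:
#   I(theta) ~ cos^2(pi * d * sin(theta) / lambda)
# The slit spacing and wavelength below are made up for illustration.
WAVELENGTH = 532e-9    # metres (roughly a green laser pointer)
SLIT_SPACING = 50e-6   # metres between the two slits

def fringe_intensity(theta):
    """Relative intensity the wave picture predicts at angle theta."""
    return math.cos(math.pi * SLIT_SPACING * math.sin(theta) / WAVELENGTH) ** 2

# Sweep across a small range of screen angles and draw a crude text plot:
# alternating bright and dark bands, which no classical ray tracer produces.
for i in range(-12, 13):
    theta = i * 1.5e-3  # radians
    print(f"{theta:+.4f} rad  {'#' * int(30 * fringe_intensity(theta))}")
```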
NOTE: As an aside, probably the most fascinating thing we have learned in Optics so far is how prisms separate light into its component colors. I thought a short description might pique your interest, so here you have it.
When light travels through certain substances (usually called media (singular: medium)), it slows down and actually bends, due to a phenomenon called refraction. Refraction is really just an application of Fermat's Principle (that light will always take the fastest path between two points[2]). The easiest way to see refraction is by looking through a magnifying glass. A magnifying glass is a medium made of glass, which has a special, dimensionless number called an "index of refraction" (IOR) of around 1.5; air has an IOR of about 1, and a vacuum (like space, not like a hoover) has an IOR of exactly 1. There is no substance with an IOR below 1.[3] Within a single uniform medium, light just travels in a straight line[4]; the interesting part happens at the boundary between two media. The classic way to picture it is the lifeguard problem: a lifeguard on the beach can run over sand faster than she can swim through water, so the quickest route to a drowning swimmer is not a straight line but one that changes direction right at the waterline. Light crossing into a new medium does exactly the same thing, and the change of direction is governed by
n * sin(I) = n' * sin(I')
where n and n' are the IORs of the two media (n being the IOR of the medium from which the light originated), and I and I' are the angles, measured from the normal to the boundary, at which the light (or lifeguard) "impacts" the medium.
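For what it's worth, the equation is easy to play with numerically. Below is a minimal Python sketch (again my own, not Povray's) that solves Snell's Law for the refracted angle; the values 1.0 and 1.5 are just the air and glass IORs from above.

```python
import math

def refract_angle(n1, n2, incidence_deg):
    """Solve n1*sin(I) = n2*sin(I') for I'. Angles are measured from the
    normal to the boundary, in degrees."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# Air (n ~ 1.0) into glass (n ~ 1.5): the ray bends toward the normal.
print(refract_angle(1.0, 1.5, 45.0))  # about 28.1 degrees
# Glass back into air at the same angle: we're past the critical angle
# (about 41.8 degrees), so the light never leaves the glass.
print(refract_angle(1.5, 1.0, 45.0))  # None
```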
This equation, called Snell's Law (not snail, Snell; it rhymes with "sell"), gives us a simple way to solve the lifeguard problem. Knowing how far from the shore both we and the victim are,[5] we can determine the fastest path using some trigonometry, which I'll leave as an exercise to the reader, since I don't have any good visualization software to draw all the necessary pictures (xpaint will _not_ be sufficient, and the 15 minute time limit on Cinderella2 is beyond annoying). Regardless, this is the same math that governs refraction.
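Since I left the trigonometry as an exercise, here is at least a brute-force check: a little Python sketch with entirely made-up distances and speeds. It scans crossing points along the waterline, keeps the one that minimizes total travel time, and confirms that the winner satisfies Snell's Law, with the run and swim speeds playing the role of the light speeds in the two media.

```python
import math

# The lifeguard problem, brute-forced. The guard starts on the sand, the
# victim is out in the water, and running is faster than swimming. We scan
# crossing points x along the waterline (y = 0) and keep the one that
# minimizes total travel time. All numbers are made up for illustration.
RUN_SPEED, SWIM_SPEED = 5.0, 2.0   # metres per second
GUARD = (0.0, 30.0)                # 30 m up the beach from the waterline
VICTIM = (40.0, -20.0)             # 40 m down the shore, 20 m out to sea

def travel_time(x):
    run = math.hypot(x - GUARD[0], GUARD[1]) / RUN_SPEED
    swim = math.hypot(VICTIM[0] - x, VICTIM[1]) / SWIM_SPEED
    return run + swim

best_x = min((i / 1000.0 for i in range(0, 40001)), key=travel_time)

# Fermat's Principle says the fastest path obeys Snell's Law, with
# 1/speed standing in for the index of refraction:
#   sin(I)/run_speed == sin(I')/swim_speed  at the crossing point.
sin_run = (best_x - GUARD[0]) / math.hypot(best_x - GUARD[0], GUARD[1])
sin_swim = (VICTIM[0] - best_x) / math.hypot(VICTIM[0] - best_x, VICTIM[1])
print(best_x, sin_run / RUN_SPEED, sin_swim / SWIM_SPEED)  # last two agree
```

However, there is something I have not explained.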
n is not a constant.
This shook my soul at first: how can n not be a constant? If we have one uniform material, we assume no inconsistencies in the material when we do our math, so the only possible thing n could depend on would be the light itself; but if light is a uniform particle stream, then this couldn't be the case.
Shocking revelation number two: light isn't a particle stream.
Refraction works on a particle stream; it makes _sense_ on a particle stream. In fact, the very reason for refraction really doesn't make sense for a standing wave, because how can an infinitely long wave slow down? That's just silly. So really, this whole refraction business leads us to a more quantum interpretation, but for simplicity, we'll pretend it all works with waves.
n is a function of the wavelength of the light approaching the medium. This is important, because it tells us something interesting about light. Consider the prism: we have all seen how a prism, in an amazing display of party trickery, can split light apart into all its very pretty colors. Prisms truly are the life of the optical party, useful for all sorts of stuff, from periscopes to spectrometers. In any case, how can a prism split white light into a bunch of different wavelengths? We can't create something out of nothing, so we are left with only one explanation: white light _is_ all of the component colors. When we see white light, we are seeing the superposition of many different wavelengths of light, and this tells us why a prism works. If n is just a function of wavelength, and white light is a superposition of different wavelengths, then each wavelength will bend more or less depending on _its_ own value of n. This means that when the light exits the prism, due to the prism's shape,[6] the wavelengths remain separated and create a beautiful collage of colorfulness on whatever the light happens to land on.
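To see the effect in numbers, here is another small Python sketch. The n_glass function is a crude Cauchy-style model, n(lambda) = A + B/lambda^2, with ballpark coefficients for a crown-glass-like material (chosen only to show the trend; don't trust the exact digits). Feeding a few wavelengths through Snell's Law at the first face of a prism shows blue bending more than red.

```python
import math

def n_glass(wavelength_nm):
    """Crude Cauchy-style dispersion model: n(lambda) = A + B / lambda^2.
    The coefficients are ballpark crown-glass values, used only to show
    the trend that shorter wavelengths see a larger n."""
    A, B = 1.5046, 4200.0  # B in nm^2
    return A + B / wavelength_nm ** 2

def refract_angle(n1, n2, incidence_deg):
    """Snell's Law again, angles measured from the normal, in degrees."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# White light hitting the first face of a prism at 50 degrees from the
# normal: every wavelength gets its own n, so every wavelength bends by a
# slightly different amount, and the colors start to fan out.
for name, lam in [("red", 650), ("green", 550), ("blue", 450)]:
    n = n_glass(lam)
    print(f"{name:5s} {lam} nm  n = {n:.4f}  -> {refract_angle(1.0, n, 50.0):.2f} deg")
```

The spread at one face is only a fraction of a degree, but the tilted second face of the prism adds to it rather than undoing it, which is why the fan of colors survives all the way to the wall.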
So, enough rambling about Optics; what has all this got to do with raytracing? Well, I realized that you can't build a prism in a raytracer, because it treats its light not only as a simple particle stream, but also as if each of its colors had a single fixed wavelength (of sorts). In Povray, you specify color as a vector, nothing special, just a vector. Why not treat color as a series of wavelengths? Heck, we don't even need to give up our lovely RGB method; there's very likely a way to convert from wavelengths to RGB and back. We would have a problem with the way raytracers currently treat light and color, since we say that objects and light _have_ color, when in reality light is usually white, and the things it touches _absorb_ color and have some amount of transparency, which gives the illusion of colored light. Potentially we could specify the color of light which is emitted and the color(s) of light which are absorbed by the surfaces we create, but the latter bit might be more difficult. This is beside the point.[7] I suppose what I am suggesting is that we consider ways to incorporate the wave nature of light into our raytracers, since doing so could add quite a bit of very interesting new capability, like prisms, interference effects, etc. It would also add to the wonderful photorealism effects, I think, since the light would be specified in a way that is more like reality.
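On the "wavelengths to RGB" point: a proper conversion would go through the CIE colour-matching functions, but even a crude piecewise-linear approximation (sketched below in Python; the break points are rough and purely illustrative) shows that the forward direction is not hard. Going back from an arbitrary RGB colour to a single wavelength is not well defined, of course, since as noted above, white isn't any one wavelength at all.

```python
def wavelength_to_rgb(nm):
    """Crude piecewise-linear approximation of a visible wavelength (in nm)
    as an RGB triple in [0, 1]. A real conversion would use the CIE
    colour-matching functions; this only shows the idea."""
    if 380 <= nm < 440:    r, g, b = (440 - nm) / 60, 0.0, 1.0
    elif 440 <= nm < 490:  r, g, b = 0.0, (nm - 440) / 50, 1.0
    elif 490 <= nm < 510:  r, g, b = 0.0, 1.0, (510 - nm) / 20
    elif 510 <= nm < 580:  r, g, b = (nm - 510) / 70, 1.0, 0.0
    elif 580 <= nm < 645:  r, g, b = 1.0, (645 - nm) / 65, 0.0
    elif 645 <= nm <= 780: r, g, b = 1.0, 0.0, 0.0
    else:                  r, g, b = 0.0, 0.0, 0.0  # outside the visible range
    return (r, g, b)

# A handful of spot checks: violet, blue, green, yellow-ish, red.
for nm in (400, 470, 530, 590, 680):
    print(nm, tuple(round(c, 2) for c in wavelength_to_rgb(nm)))
```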
Just thoughts, I suppose; I'm certainly no expert in raytracing. However, oh dear Lazyintarweb, if you are, please tell me whether this could actually work. Maybe I'll try to build it, someday.
[1] In fact, only as a particle, since we only view the ray's reflections once.
[2] In reality, the principle is stated (mostly) as follows: Light will always
seek the path which minimizes its travel time. There is a subtle difference, but I think for our purposes, the simpler statement suffices. Also note that fastest doesn't necessarily mean shortest, since we're dealing with speed changes too.
[3] I don't think I'm wrong, but maybe exotic substances or whatever creates wormhole things might? I'm not sure how that works, I'm just a mathematician who likes pretty lightshows, not a physicist.
[4] Yes, I know there is the whole non-euclidean geometry of space thing, geodesics and whatnot, but bear with me.
[5] I never realized lifeguarding was such a deep realm of math.
[6] Namely, the triangular shape of the classic prism prevents the light from bending back toward the normal and re-forming the original white light. A thoroughly less satisfying party trick, to be sure.
[7] In fact, at this point, I have practically forgotten what the point was.