This semester I'm taking one of my requisite natural science classes. In trying to find a class that looked the least boring, I managed to find an Optics class. In Optics, we learn about the dual nature of light, a subject which has always fascinated me. In much of mathematics and physics there is an inherent duality, a separation between two objects which simultaneously brings them together. When you get to Optics, we find that light, in one of the weirdest twists ever, manages to be its own dual, separate from itself, if you will. Light, as we (maybe) well know, has both wave-like and particle-like properties. My Optics professor calls it the "Packet of Wiggling String" interpretation. This interpretation helps to explain things like the double slit experiment, and it is this particular experiment that I want to talk about now.
I've fiddled with ray tracers before, but I'd never thought to try the double slit experiment in one of them, so I cooked up a little POV-Ray script to test my theory that ray tracing is, in fact, classical. Granted, that's an obvious result to most, but think about it: you can't replicate the double slit experiment in POV-Ray. More accurately, you can't treat light as a wave in POV-Ray, only as a particle stream.[1] As far as I can tell, this gross approximation of light (limited reflection calculation, particle stream instead of wave, etc.) was based on limitations of hardware when the technique was invented; they simply _couldn't_ simulate things like the double slit experiment. Perhaps more surprisingly, prisms don't work the way they should either, since they won't separate light based on wavelength.
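To see what a wave model predicts (and what a classical particle-stream renderer therefore misses), here's a minimal Python sketch of the far-field two-slit fringe pattern. The wavelength, slit separation, and angular range are made-up numbers purely for illustration:

```python
import math

# Far-field (Fraunhofer) two-slit interference, ignoring the single-slit
# envelope. All quantities are illustrative: 500 nm light, slits 50 um apart.
wavelength = 500e-9   # meters
slit_gap   = 50e-6    # meters

def intensity(theta):
    """Relative intensity at viewing angle theta (radians).

    The two paths differ in length by slit_gap * sin(theta); when that
    difference is a whole number of wavelengths the waves add, and when
    it's a half number they cancel. A particle stream has no phase, so it
    can never produce these alternating bright and dark fringes.
    """
    phase = math.pi * slit_gap * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

# Crude ASCII plot of the fringes across a small angular range.
for i in range(-30, 31):
    theta = i * 1e-3  # milliradian steps
    print(f"{theta:+.3f} rad  " + "#" * int(40 * intensity(theta)))
```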
NOTE: As an aside, probably the most fascinating thing we have learned in Optics so far is how prisms separate light into its component colors. I thought a short description might pique your interest, so here you have it.
When light travels through certain substances (usually called media (singular medium)), it slows down and actually bends due to a phenomenon called refraction. Refraction is really just an application of Fermat's Principle (that light will always take the fastest path between two points[2]). The easiest way to see refraction is by looking through a magnifying glass. A magnifying glass is a medium made of glass, which has a special, dimensionless number called an "index of refraction" of around 1.5; air has an IOR of about 1, and a vacuum (like space, not like a Hoover) has an IOR of exactly 1. There is no substance with an IOR less than 1.[3]
The classic way to picture Fermat's Principle is the lifeguard problem: a lifeguard on the beach sees a drowning swimmer, and since running on sand is faster than swimming, the quickest route to the victim isn't the straight line[4] but a bent path that spends more time on sand and less in water. Light crossing from one medium into another does exactly the same thing, and the bend obeys a simple equation:
n * sin(I) = n' * sin(I')
where n and n' are the IORs of the two media (n being the IOR of the medium from which the light originated), and I and I' are the angles at which the light (or lifeguard) "impacts" the medium.
This equation, called Snell's Law (not snail; Snell, it rhymes with "sell"), gives us a simple way to solve the lifeguard problem. By knowing the distances from shore of both ourselves and the victim[5], we can determine the fastest path using some trigonometry, which I'll leave as an exercise to the reader, since I don't have any good visualization software to draw all the necessary pictures (xpaint will _not_ be sufficient, and the 15 minute time limit on Cinderella2 is beyond annoying). Regardless, this is the same math that governs refraction. However, there is something I have not explained.
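Since I'm skipping the pictures, here's a little Python sketch of the calculation instead: a hypothetical air-to-glass boundary with made-up angles, just to show Snell's Law doing its job.

```python
import math

def refracted_angle(n1, n2, incident_deg):
    """Angle of the transmitted ray, via Snell's Law: n1*sin(I) = n2*sin(I').

    Angles are measured from the normal to the boundary, in degrees.
    Returns None for total internal reflection (no transmitted ray).
    """
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1.0:
        return None  # the light can't escape; it reflects back instead
    return math.degrees(math.asin(s))

# Air (n ~ 1.0) into glass (n ~ 1.5): the ray bends toward the normal.
print(refracted_angle(1.0, 1.5, 45.0))   # ~28.1 degrees
# Glass back into air at a steep angle: total internal reflection.
print(refracted_angle(1.5, 1.0, 60.0))   # None
```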
n is not a constant.
This shook my soul at first. How can n not be a constant? If we have one uniform material, we assume no inconsistencies in the material when we do our math, so the only possible thing n could depend on would be the light itself; but if light is a uniform particle stream, then this couldn't be the case.
Shocking revelation number two: light isn't a particle stream.
Refraction works on a particle stream; it makes _sense_ on a particle stream. In fact, the very reason for refraction really doesn't make sense on a standing wave, because how can an infinitely long wave slow down? That's just silly. So really, this whole refraction business leads us to a more quantum interpretation, but for simplicity, we'll pretend it all works with waves.
n is a function of the wavelength of the light approaching the medium. This is important, because it tells us something interesting about light. Consider the prism: we have all seen how the prism, in an amazing display of party trickery, can split light apart into all its very pretty colors. Prisms truly are the life of the optical party, useful for all sorts of stuff, from periscopes to spectrometers. In any case, how can a prism split white light into a bunch of different wavelengths? We can't create something out of nothing, so we are left with only one explanation: white light _is_ all of the component colors. When we see white light, we are really seeing the superposition of many different wavelengths of light, and this tells us why a prism works. If n is just a function of wavelength, and white light is a superposition of different wavelengths, then each wavelength will bend more or less depending on _its_ own value of n. This means that when the light exits the prism, due to the prism's shape[6], the wavelengths remain separated, and create a beautiful collage of colorfulness on whatever the light happens to land on.
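To make that concrete, here's a little Python sketch using Cauchy's empirical dispersion formula, n = A + B/wavelength^2, with coefficients in the right ballpark for an ordinary crown glass (the exact values are illustrative, not measured):

```python
import math

def n_glass(wavelength_um):
    """Cauchy's empirical dispersion formula n = A + B / wavelength^2.

    A and B here are ballpark values for a common crown glass; real
    glasses are characterized by measured coefficient tables.
    """
    return 1.505 + 0.00420 / wavelength_um ** 2

def refracted_angle(n1, n2, incident_deg):
    """Snell's Law again: n1 * sin(I) = n2 * sin(I')."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

# White light hitting the first face of a prism at 45 degrees: each
# wavelength sees a slightly different n, so each bends differently.
for name, lam in [("red", 0.70), ("green", 0.55), ("violet", 0.40)]:
    n = n_glass(lam)
    print(f"{name:>6} (n = {n:.4f}): refracts to "
          f"{refracted_angle(1.0, n, 45.0):.2f} degrees")
```

Violet comes out with the largest n and therefore bends the most, which is exactly the fanning-out the prism shows off.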
So, enough rambling about Optics; what has all this got to do with raytracing? Well, I realized that you can't build a prism in a raytracer, because it treats its light not only as a simple particle stream, but also as having a unique wavelength (of sorts) for each of its colors. In POV-Ray, you specify color as a vector; nothing special, just a vector. Why not treat color as a series of wavelengths? Heck, we don't even need to give up our lovely RGB method; there's very likely a way to convert from wavelengths to RGB and back. We would have a problem with the way raytracers currently treat light and color, since we say that objects and light _have_ color, when in reality light is usually white, and the things it touches _absorb_ color and have some amount of transparency, which gives the illusion of colored light. Potentially we could specify the color of light which is emitted and the color(s) of light which are absorbed by the surfaces we create, though the latter bit might be more difficult. But this is beside the point[7]. I suppose what I am suggesting is that we consider ways to incorporate the wave nature of light into our raytracers, since we could potentially add quite a few very interesting new capabilities, like prisms, interference effects, etc. It would also add to the wonderful photorealism effects, I think, since the light would be specified in a way that is more like reality.
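On the "wavelengths to RGB" point, here's a rough Python sketch of one direction, using the widely circulated piecewise linear approximation (often attributed to Dan Bruton); the band boundaries are approximate, not colorimetrically exact:

```python
def wavelength_to_rgb(nm):
    """Very rough visible-wavelength (380-780 nm) to RGB, 0.0-1.0 per channel.

    A piecewise linear approximation; a real spectral renderer would
    integrate against the CIE color matching curves instead.
    """
    if 380 <= nm < 440:
        return ((440 - nm) / 60, 0.0, 1.0)   # violet: blue with some red
    if 440 <= nm < 490:
        return (0.0, (nm - 440) / 50, 1.0)   # blue fading toward cyan
    if 490 <= nm < 510:
        return (0.0, 1.0, (510 - nm) / 20)   # cyan toward green
    if 510 <= nm < 580:
        return ((nm - 510) / 70, 1.0, 0.0)   # green toward yellow
    if 580 <= nm < 645:
        return (1.0, (645 - nm) / 65, 0.0)   # yellow toward red
    if 645 <= nm <= 780:
        return (1.0, 0.0, 0.0)               # red
    return (0.0, 0.0, 0.0)                   # outside the visible range

for nm in (400, 475, 550, 600, 700):
    r, g, b = wavelength_to_rgb(nm)
    print(f"{nm} nm -> ({r:.2f}, {g:.2f}, {b:.2f})")
```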
Just thoughts, I suppose; I'm certainly no expert in raytracing. However, oh dear Lazyintarweb, if you are, please tell me whether this could actually work. Maybe I'll try to build it, someday.
[1] In fact, only as a particle, since we only follow each ray's reflections once.
[2] In reality, the principle is stated (mostly) as follows: light will always seek the path which minimizes its travel time. There is a subtle difference, but I think for our purposes, the simpler statement suffices. Also note that fastest doesn't necessarily mean shortest, since we're dealing with speed changes too.
[3] I don't think I'm wrong, but maybe exotic substances or whatever creates wormhole things might? I'm not sure how that works; I'm just a mathematician who likes pretty lightshows, not a physicist.
[4] Yes, I know there is the whole non-euclidean geometry of space thing, geodesics and whatnot, but bear with me.
[5] I never realized lifeguarding was such a deep realm of math.
[6] Namely, the triangular shape of the classic prism prevents the light from bending back towards the normal and reforming the original white light. A thoroughly less satisfying party trick, to be sure.
[7] In fact, at this point, I have practically forgotten what the point was.
8 comments:
Actually, you can simulate dispersion in POV-Ray. See "dispersion" in the documentation.
Of course, POV-Ray works by ray tracing. It wouldn't make much sense to render a macroscopic scene by adding countless waves.
It's perfectly possible to explain Snell's law with a wave model of light. See Huygens' Principle. Certainly the speed of a wave can vary locally.
Since taking difeq, I've thought it would be cool to try to write a sort of "wave tracer", rendering the scene using the wave equation. Of course, I'm also pretty sure I'm nowhere near smart enough to do it.
I don't know if that gives you anything as far as rendering prisms (I think you'd probably still trace only one wavelength at a time...), but it would give you a working double slit experiment.
Alfredo,
Hmm, I didn't know POV-Ray had a dispersion effect; I guess that's just the nature of such a big system. That said, that's wicked cool. WRT the prism effect, I suppose it just seemed counterintuitive to me that a wave could slow down; to be honest, I have a hard time grasping how waves (that is, an infinitely long wavetrain) can move at all. Back to dispersion: I'll have to read up on it some. Thanks!
Hey there, found this via Reddit. To paraphrase Fark, "I research ray tracing, so I'm really getting a kick out of these replies."
Anyway, you're absolutely correct that standard ray tracing is based around classical optics. Specifically, it's based on the branch called "geometric optics."
As for what you describe about ray tracing based on the wavelengths, we call that "spectral rendering". There are various ways to handle the representation of the spectra for it. But once you have that, the conversion to RGB is fairly well known. It basically involves integrating the product of the computed spectra for the ray with each of the CIE tristimulus curves to reduce it to a color in the CIE XYZ color space. From there, ignoring color calibration, it's a simple linear transformation to get to one of the RGB color spaces.
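(For the curious, a toy Python version of that pipeline might look like the following. The Gaussian stand-ins for the CIE 1931 matching functions and the sample spectrum are made up purely for illustration; the XYZ-to-linear-sRGB matrix is the standard one.)

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-((x - mu) / sigma) ** 2 / 2)

# Crude single-Gaussian stand-ins for the CIE 1931 color matching functions.
# Real spectral renderers use the tabulated CIE data (or published analytic
# fits); these exist only to show the shape of the computation.
def xbar(nm): return 1.06 * gauss(nm, 600, 38) + 0.36 * gauss(nm, 445, 22)
def ybar(nm): return gauss(nm, 555, 48)
def zbar(nm): return 1.78 * gauss(nm, 445, 22)

def spectrum_to_xyz(spd, lo=380, hi=780, step=5):
    """Integrate a spectral power distribution against the matching curves."""
    X = Y = Z = 0.0
    for nm in range(lo, hi + 1, step):
        p = spd(nm)
        X += p * xbar(nm) * step
        Y += p * ybar(nm) * step
        Z += p * zbar(nm) * step
    return X, Y, Z

def xyz_to_linear_srgb(X, Y, Z):
    """Standard CIE XYZ -> linear sRGB matrix (D65 white point)."""
    r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

# A made-up spectrum: a broad bump around 600 nm (should come out orange-ish).
spd = lambda nm: gauss(nm, 600, 30)
X, Y, Z = spectrum_to_xyz(spd)
scale = 1.0 / Y  # normalize so the result lands in a sensible range
print(xyz_to_linear_srgb(X * scale, Y * scale, Z * scale))
```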
In addition to the prismatic dispersion, spectral renderers can often handle things like chromatic aberration, thin film interference, and some degree of diffraction grating type effects.
As for the anonymous commenter's wave tracer idea, there's been a tiny amount of work in that area. I know there was a paper in SIGGRAPH 94 about simulating wavefronts to compute global illumination. More recently, this year's SIGGRAPH had a paper on eikonal rendering which seems to be related. Neither one will really reproduce the double slit effect, but they are at least a tad closer than the standard geometric optics approach.
Ah yes. Sorry for the double post, but I found the better paper I was originally looking for. It was "3D graphics and the wave theory" in the SIGGRAPH '81. I believe the algorithm described there could reproduce the double slit effects.
*Points at Boojum*
Look, someone who knows what they're doing!
That's wicked cool. I'll have to read up on spectral rendering. Optics is fun :)
Most of what you talk about has been done in raytracers before. People who write raytracers are not ignorant of optics; rather, we concentrate on getting results as realistic as possible with as little machine time as needed. It is always a trade-off. Since most of the wave-like optical effects appear only rarely, we avoid calculating them if they take a long time. Most of these (such as the slit and prism) can only be calculated using a light-forwards algorithm, and since most everyone uses a camera-forwards algorithm, they rarely get implemented.
Given time, I would like to add prism effects to my own raytracer, since I think they add a lot to some scenes. I likely will never add the double slit effect, since I cannot imagine a scene I'd want to render that really needs it.
Brad,
I certainly wasn't trying to imply that people who write raytracers are ignorant of optics; I'm pretty sure the two are mutually exclusive. It just seemed like something that had been passed over, and like you said, for good reason, since it was so slow. My goal was more to expose the issue to those who don't write or work on raytracers. People like me. :)
Out of curiosity, is your raytracer publicly available? If so, where can I find it? I always enjoy reading through all the different approaches to the raytracing problem.
~~Joe