the origin of the idea to use an RGB texture instead of a depth texture was that glCopyTexSubImage(.. RGB(A) ..) was faster than
glCopyTexSubImage(... DEPTH_COMPONENT ...) (the depth copy is very slow if not hardware supported, but the fastest of all if it is, even faster than the RGB copy, regardless of 32/16 bit)
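to make the comparison concrete, here is a minimal sketch of the two copy-to-texture paths (shadowSize and the two texture names are placeholders; which buffer gets copied depends on the internal format the texture was created with):

// depth path: texture created with a GL_DEPTH_COMPONENT internal format,
// so glCopyTexSubImage2D reads from the depth buffer
glBindTexture(GL_TEXTURE_2D, depthtex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowSize, shadowSize);

// RGB(A) path: same call, but the texture was created with GL_RGBA,
// so the color buffer is read instead
glBindTexture(GL_TEXTURE_2D, colortex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowSize, shadowSize);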
but with depth textures and GL_COMPARE_R_TO_TEXTURE_ARB, and with Frame Buffer Objects, we don't need the above optimisation anymore (which, by the way, calculated shadow pixels into the alpha channel and discarded them with GL_ALPHA_TEST)
but there is still use for fog...
the goal is to produce a distance fade-out, and the trick is in rendering to FBOs.
why fade out?
here is what [www.cbloom.com](www.cbloom.com) says:
"E. DISTANCE FADE-OUT
It's useful to make your shadows fade out in the distance. It makes them
look better, because it does a sort of fake simulation of the fact that
in the distance other lights than the shadow caster are contributing
illumination. It also allows you to put a far clip plane on your shadow
frustum, which limits the number of things you project onto. It also
reduces the anomalies due to projecting through walls and such. You can
implement distance fade out in a few ways. You could do it with a per-
vertex computation in a vertex shader. You could also do it using the
trilinear mip-mapping hardware of the GPU (simply by giving your entire
shadow map a white mipmap in the second level); this technique is pretty
cool except for the memory waste of making a big all-white texture.
You could also do it using the clipper..."
it's a good read.
so, as he says later, you can do the fade-out by projecting a 1D texture, a 1x2 black/white linearly interpolated one or, say, a 1x256 grayscale ramp; at the same time you get rid of the back-projection aka reverse-projection problem. BUT there is a better way, one that opens room for quite a few new tricks, like shadows of transparent colored bodies...
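a minimal sketch of building such a ramp texture (fadeTex is an assumed, already generated texture object; the 1x256 size is the one mentioned above):

GLubyte ramp[256];
for (int i = 0; i < 256; i++)
    ramp[i] = (GLubyte)i;   // black at 0, white at 255

glBindTexture(GL_TEXTURE_1D, fadeTex);
glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE, 256, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, ramp);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// clamping means coordinates projected outside [0,1], including the
// back-projected side, just sample an edge texel instead of repeating
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);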
first, I want to point out one fact: if you render your depth texture, convert it to GL_INTENSITY or GL_LUMINANCE and show it on the screen, it will be the SAME grayscale image fading to white as if you render your scene using glFog, GL_LINEAR, with fog start at your near plane and fog end at your far plane. you can also shift the fog near and far to achieve effects like with glPolygonOffset. of course, with fog you have to render all objects black on white, with white fog, unless you want to render some objects in color because they are transparent
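a sketch of that fog setup (nearPlane and farPlane stand in for your light frustum's planes):

// white linear fog: fragments go from black at the near plane to
// pure white at the far plane, i.e. a grayscale depth map
GLfloat white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogf(GL_FOG_START, nearPlane);   // shift these two to bias the map,
glFogf(GL_FOG_END, farPlane);      // glPolygonOffset-style
glFogfv(GL_FOG_COLOR, white);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);   // clear to white
glColor3f(0.0f, 0.0f, 0.0f);            // draw occluders black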
so now we know they are the same: pixels containing information about distance from the light. you can think of them as depth, as color, or as both, for some interesting effects...
or, in short - if it's faster on your hardware to get RGB values than depth values, use fog.....
back to the distance fade-out....
if we now use a hardware supported depth texture and render to an FBO, why do we need the same thing in an RGB texture as well? why not use that depth texture as the fog texture, if they are the same?
unfortunately you can't bind the same texture as depth and then later, in the same ARB multitexturing combo, as alpha, luminance or intensity:
bind (shadow, depth_component)
bind (shadow, intensity)
but fortunately we don't lose any fps if we attach an RGB(A) texture as well, so we get our shadow in RGB anyway, and now we have 32 brand new bits to play with for free in the same render pass!
back to the point, again...
this is important stuff if you're new to Frame Buffer Objects (like me):
- you don't need a depth buffer render context; as long as you have a depth texture attached, your polygons will get z-sorted, and the funny thing is it's FASTER! the other funny thing is that if you have only 1 texture attached to the FBO, and it's your depth texture, and you render with glReadBuffer(GL_NONE) and glDrawBuffer(GL_NONE), it will NOT be faster than if you attach an RGB(A) texture on top. so, in other words, if you render to FBOs:
- don't use a depth renderbuffer; instead attach some depth texture and your depth test will work
- if you want to render only to a depth texture, you might as well attach a color texture too, for no extra cost when rendering; binding is fast with FBOs
when I say "depth buffer render context", I mean this stuff:
glGenRenderbuffersEXT(1, &depthbuffer);   // plus the gen/bind it needs
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthbuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width, height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthbuffer);
you can skip all that and instead have this, which turns out to be faster:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depthtex, 0);
then attach some RGB(A) texture like this (a different texture object than the depth one):
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colortex, 0);
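putting it all together, here is a minimal sketch of such an FBO setup, texture creation included (GL_DEPTH_COMPONENT24, GL_RGBA8 and the 512 size are just assumptions):

GLuint fbo, depthtex, colortex;

// depth texture, later also sampled with GL_COMPARE_R_TO_TEXTURE_ARB
glGenTextures(1, &depthtex);
glBindTexture(GL_TEXTURE_2D, depthtex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// RGB(A) texture: the 'free' color attachment, our fog/shadow image
glGenTextures(1, &colortex);
glBindTexture(GL_TEXTURE_2D, colortex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthtex, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colortex, 0);
// note: no renderbuffer anywhere, the depth texture attachment alone
// gives us a working depth test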
so now we have 2 textures, depth and fog, and they are both the same grayscale fading to white.. or to black, it doesn't matter, just use GL_ONE_MINUS.. operands to invert grayscale images. one is a depth texture and the other is RGB (or 8/16/32bit INTENSITY, ALPHA or LUMINANCE...)
[quick tip: it's faster to work with RGB(A) textures than with any 1-channel format like alpha, intensity or luminance. even between rendering to 8bit luminance and 32bit RGBA, the latter will be faster. correct me if I'm wrong]
so for each light, in only one pass, we combine 3 (4) or more ARB multitexture stages (a setup sketch follows at the end):
1. depth texture is projected with GL_COMPARE_R_TO_TEXTURE_ARB
2. RGB fog texture is blended over it, to fade out, with the same projective transform as 1
3. spotlight circle texture is blended with the same projective transform as 1
4. (or folded into 1) blend whatever else; it could be the mesh texture or another projective texture
stages 3 and 2 can be combined, and we can further exploit the fact that we now have shadows in the color buffer as well. what that means is that besides projecting light/shadow onto the scene, we can project images when creating the light view and then have projected 'projections'... I'm not really sure what else can be achieved, except combining the spot circle and supporting colored shadows for transparent colored surfaces, which is by itself enough. any other ideas?
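to close, a rough sketch of stages 1 and 2 of that combine (depthtex, colortex and lightMatrix are assumptions carried over from above; stages 3 and 4 would repeat the same pattern on further units):

// hypothetical helper: eye-linear texgen on the current unit, with the
// projective transform (bias * lightProjection * lightView) loaded into
// its texture matrix. call it while the camera's view matrix is on the
// modelview stack, since GL folds the inverse modelview into eye planes.
static void setupProjectiveUnit(const GLfloat *lightMatrix)
{
    static const GLfloat sP[4] = {1,0,0,0}, tP[4] = {0,1,0,0},
                         rP[4] = {0,0,1,0}, qP[4] = {0,0,0,1};
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, sP);
    glTexGenfv(GL_T, GL_EYE_PLANE, tP);
    glTexGenfv(GL_R, GL_EYE_PLANE, rP);
    glTexGenfv(GL_Q, GL_EYE_PLANE, qP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_GEN_Q);
    glMatrixMode(GL_TEXTURE);
    glLoadMatrixf(lightMatrix);
    glMatrixMode(GL_MODELVIEW);
}

// per-light setup for the single pass: shadow compare + distance fade
static void setupLightPass(const GLfloat *lightMatrix)
{
    // stage 1 (unit 0): depth texture compared against the projected R
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, depthtex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                    GL_COMPARE_R_TO_TEXTURE_ARB);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    setupProjectiveUnit(lightMatrix);

    // stage 2 (unit 1): RGB fog texture modulated on top, same projection,
    // so the shadow term fades toward white with distance from the light
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, colortex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    setupProjectiveUnit(lightMatrix);
}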