Well, from what I understand, when you use light volumes you don't necessarily have to do the view-frustum calculations to find the full-screen quad's corner coordinates, because you can just use the light volume's own position, transformed into view space. However, I don't know the specifics of how to accomplish that.
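To make it concrete, here's roughly what I picture on the CPU side, sketched with GLM (the names `viewMatrix` and `lightPosWorld` are just placeholders for whatever the real setup uses):

```cpp
#include <glm/glm.hpp>

// Sketch of what I mean: take the light's world-space position and
// move it into view space, so the lighting shader can work there.
glm::vec3 lightToViewSpace(const glm::mat4& viewMatrix,
                           const glm::vec3& lightPosWorld)
{
    // w = 1 so the translation part of the view matrix applies.
    glm::vec4 lightPosView = viewMatrix * glm::vec4(lightPosWorld, 1.0f);
    return glm::vec3(lightPosView); // would be passed as a uniform to the light pass
}
```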
I think my bigger problem is that I'm having trouble understanding some of the math involved in this type of deferred shading. My first attempt was in clip space (post-projection screen space), which made sense: each pixel I sampled was an actual pixel on the screen. But, due to the nature of projection, my lights came out distorted to match the shape of the window.
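Here's a little numeric check of what I think was happening, sketched with GLM (the 60° FOV and 16:9 aspect are made-up values): the same 1-unit offset in view space comes out with different NDC extents in x and y, differing by exactly the aspect ratio, so a round light becomes an ellipse in screen space.

```cpp
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    const float aspect = 16.0f / 9.0f;
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);

    // Three view-space points at the same depth, offset by 1 unit in x and y.
    glm::vec4 center = proj * glm::vec4(0.0f, 0.0f, -10.0f, 1.0f);
    glm::vec4 right  = proj * glm::vec4(1.0f, 0.0f, -10.0f, 1.0f);
    glm::vec4 up     = proj * glm::vec4(0.0f, 1.0f, -10.0f, 1.0f);

    // Perspective divide to reach NDC.
    float dx = right.x / right.w - center.x / center.w;
    float dy = up.y / up.w - center.y / center.w;
    std::printf("NDC dx = %f, dy = %f (dy/dx = aspect = %f)\n", dx, dy, dy / dx);
}
```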
Now I'm trying to redo it in view space (camera space, before projection), but I'm having trouble imagining how that relates to the final pixels on the screen. In the first (geometry) pass, where I render the scene geometry, I transform the geometry as normal, into projection space, and write the results to textures. Then in the lighting pass, I take that per-pixel data, which sits in a window-sized G-buffer texture laid out according to the projection matrix, and try to perform the lighting calculations in view space instead.
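If I write out the per-vertex math the way I currently understand it (a CPU-side GLM sketch; in the actual vertex shader, `viewPos` would be written to the G-buffer and `clipPos` would go to `gl_Position`), it looks like this:

```cpp
#include <glm/glm.hpp>

struct GeomPassOut {
    glm::vec4 viewPos; // stored in the G-buffer, used later for lighting
    glm::vec4 clipPos; // used by the rasterizer to place the pixel
};

// Geometry pass: both spaces coexist. The projection is applied only
// to decide where the fragment lands on screen; the view-space value
// is what actually gets stored for the lighting math.
GeomPassOut transformVertex(const glm::mat4& model, const glm::mat4& view,
                            const glm::mat4& proj, const glm::vec3& vertex)
{
    GeomPassOut out;
    out.viewPos = view * model * glm::vec4(vertex, 1.0f);
    out.clipPos = proj * out.viewPos;
    return out;
}
```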
I know some of the data written to the G-buffer textures is calculated in view space rather than projection space, but what is the coordinate range of view space? If I calculate that a fragment is positioned at (-0.12, 0.34) in view space, where is that on the projected window?
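Here's my attempt at tracing that point through by hand, again as a GLM sketch. The projection parameters, the depth of z = -2 (view space is in world units, so the point needs some depth to be meaningful), and the 1280x720 window size are all arbitrary values I picked just to get numbers out:

```cpp
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

    // The view-space fragment from my question, with an assumed depth.
    glm::vec4 viewPos(-0.12f, 0.34f, -2.0f, 1.0f);
    glm::vec4 clip = proj * viewPos;
    glm::vec3 ndc  = glm::vec3(clip) / clip.w; // each axis now in [-1, 1]

    // Viewport transform to the hypothetical 1280x720 window.
    float px = (ndc.x * 0.5f + 0.5f) * 1280.0f;
    float py = (ndc.y * 0.5f + 0.5f) * 720.0f;
    std::printf("window coords: (%.1f, %.1f)\n", px, py);
}
```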
I can understand why the calculations need to be done in view space (to eliminate the projection distortion), but I'm having trouble visualizing the result and what it means.