Are you thinking of Light Space Perspective Shadow Maps, maybe?

OK, I know someone has done this before, I just can't remember what it's called.

What I want to do is use the camera's view frustum and the scene's geometry to create a projection matrix that closely maps to the on-screen meshes, but from the point of view of the light.

Problem: I need more resolution in my depth map.

Solution: only render the portion of the scene that is actually visible.

I can't just use the view frustum; that would just recreate the problem I already have.

How can I explain it?

Take a scene, and draw the camera's view frustum on it.

Place the light and draw a frustum from the light that just fits the on-screen geometry.

I want the intersection of the two.

Then I can calculate the near and far planes so they closely match what is on screen, hopefully giving me enough accuracy in the depth buffer to do my shadows.

Can you push the near plane out when you zoom in, to get back more precision?

No. Think of a plane flying above a town; tower blocks can come between the camera and the plane.

The other problem I'd expect to see is shadows "swimming" as you animate the FOV, due to the cascades changing size to fit tightly around the view frustum.

Yes, I hadn't thought of that, but I think you are right.

Time to come up with a different approach.

As some of you know, a while ago I implemented cascaded shadow maps for one of the games I am working on.

They worked OK, but now that I have added an auto-zoom feature to my cameras, they are totally broken.

Doing the zoom was easy: I just calculated a field of view that made the bounding box of the target fill a proportion of the screen.

What this effectively does is squeeze the view volume down from our traditional pyramid into a thin inverted spear.

To illustrate the problem, consider a camera 2 meters above the ground pointing horizontally along a flat plane.

The nearest visible point is easy to calculate.

`float near = 2.0f / tanf(FOV * 0.5f); // FOV in radians; 2 = camera height in meters`

So for a 45 degree field of view it is 4.82842712, for a 22.5 degree field of view it is 10.05467, but for a 5 degree field of view it is 45.8075310.

The effect of this is that the bit of the screen you are displaying gets pushed further down the depth buffer.

In fact, when I modified my shader to just draw the homogeneous Z coordinate, all the values were greater than 0.9.

Sadly I have a 16-bit display and a 16-bit depth buffer; neither has the accuracy to give me nice shadows.

So I'm looking for ideas again.

Anyone had this issue and come up with a solution?