.oisyn you are terrible!
Can you explain that again in a more high-level language?
I'm interested in what you said, but I'm having trouble understanding it.
I think there is a sample included in the DirectX 9 SDK that does exactly this.
For fogging, you need to know how far a ray from the eye travels through the fog, so you can calculate how much of the light is scattered and how much will pass through. Classical fogging does this by taking the z-values either at the vertices or at the pixels when rendering polygons, but this obviously doesn't work with custom volumes. Fortunately, it isn't that hard to calculate the total distance that a ray travels through fog at every pixel.
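To make "how much of the light passes through" concrete: classical fog attenuates light exponentially with the distance travelled through the medium (the Beer-Lambert model). A minimal sketch, where `density` is a hypothetical tuning parameter not taken from the post:

```python
import math

def fog_factor(distance, density=0.5):
    # Fraction of light that survives `distance` units of fog.
    # Exponential (Beer-Lambert) falloff: 1.0 at distance 0,
    # approaching 0.0 as the in-fog distance grows.
    return math.exp(-density * distance)
```

Everything below is about computing that `distance` per pixel for arbitrary volumes.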
Suppose you want to solve this with raytracing. Think of a convex volume, like a sphere. Shoot the ray through the sphere and calculate the intersection points where the ray enters and exits it. A raytracer calculates the actual z-values: the distance a ray travels until it reaches a surface. So if you subtract the z-value where the ray enters the volume from the z-value where it exits, you are left with the total distance (in the z direction) the ray travels through the fog.
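For the sphere case, this is just the standard ray-sphere intersection: solve the quadratic for the two hit distances and take their difference. A small sketch (function name and argument layout are my own, not from the post):

```python
import math

def fog_thickness_sphere(origin, direction, center, radius):
    # Distance a ray travels inside a sphere of fog.
    # `direction` is assumed normalized; solves |o + t*d - c|^2 = r^2 for t.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc <= 0.0:
        return 0.0                      # ray misses (or just grazes) the sphere
    t_enter = (-b - math.sqrt(disc)) / 2.0
    t_exit = (-b + math.sqrt(disc)) / 2.0
    if t_exit < 0.0:
        return 0.0                      # sphere is entirely behind the ray
    # Clamp the entry to the ray origin in case we start inside the fog.
    return t_exit - max(t_enter, 0.0)
```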
Since you only need the z-values where a ray enters and exits the volume, you can simply render the actual volume geometry on the GPU. All polygons that face the camera are polygons where rays enter the volume; all polygons that face away from the camera are polygons where rays exit it. So if you add the z-values of all back-facing polygons of a volume to a buffer, and subtract the z-values of all front-facing polygons from that buffer, you are left with the total distance a ray travels through the volume at every pixel. Note that this also works for concave volumes, as every volume entry has a corresponding volume exit.
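The accumulation for a single pixel can be sketched on the CPU. Here each list holds the z-values of the fog-volume surfaces a ray crosses at that pixel (helper name is mine); for a concave volume there are simply several entry/exit pairs:

```python
def fog_distance(front_z, back_z):
    # Per-pixel fog distance from the z-values of the volume's
    # front-facing (ray entry) and back-facing (ray exit) polygons.
    # Summing exits and subtracting entries gives the total in-fog
    # distance, even for concave volumes with multiple entry/exit pairs.
    return sum(back_z) - sum(front_z)
```

On the GPU this would typically be done with additive blending into an offscreen buffer, flipping the sign (or the blend op) for front versus back faces.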
Of course, a ray stops as soon as it hits actual geometry. If the geometry is between the camera and the fog volume, the volume polygons will get z-tested away. If the geometry is completely behind the volume, you'll get fog values as expected. But if a ray enters the volume and then hits geometry without exiting the volume first, your back-facing volume polygons will get z-tested away while the front-facing polygons won't, which leaves you with incorrect values in the buffer. This can be solved by taking, at each pixel, the minimum of the fog polygon's z-value and the scene depth already stored in the depth buffer.
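That clamping step can be added to the per-pixel sketch from above (again, the function name and the scalar `scene_z` parameter are my own simplifications):

```python
def fog_distance_clamped(front_z, back_z, scene_z):
    # Like the plain accumulation, but every fog-surface z-value is
    # clamped against the opaque scene depth at this pixel, so a ray
    # that hits geometry inside the volume stops contributing there.
    entries = sum(min(z, scene_z) for z in front_z)
    exits = sum(min(z, scene_z) for z in back_z)
    return exits - entries
```

Note that when the scene depth lies in front of the whole volume, both sums clamp to the same value and the fog distance correctly comes out as zero.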
Another problem is clipping against the near and far planes; you obviously don't want that to happen. Far-plane clipping can be resolved by using an infinite far plane, but I'm not sure how to solve the near-plane clipping problem.