I've run into another odd problem. I need to access a large number of different textures in an OpenCL program (20, 30 or even more). Is there any way to access that many textures? (As far as I know, in OpenCL you can't pass an array of image2d_t or any other texture object.)
The only solutions I've come up with are either a texture atlas or one huge buffer that stores all the textures (which is effectively the same as an atlas). The second option is unacceptable, because I need CL-GL interop for these textures.
Any hints, ideas, or advice?
I hit the same problem when I tried to implement an OpenCL raytracer. There is a related problem too: you can't pass an array of buffer objects, so it's impossible to freely load/unload models. For textures I think the only solution is a texture atlas (or a fixed number of texture atlases with different pixel formats). For a dynamic world, something like the megatexture approach works best.
Yup, that's exactly the problem I'm facing right now. I've thought about a megatexture/megamesh approach, which in the end amounts to much the same thing: storing everything in one big texture map.
So now the problem reduces to how to use a megatexture-like approach effectively here. I don't know in advance which textures I'll need (in path tracing the rays go in random directions), so I'll probably need every texture in the scene. The second question is which LOD of each texture I'll need (with megatextures you can analyze the visible scene, but path tracing complicates this); basically I could use a LOD mechanism based on distance from the camera.
As for the meshes, I store the triangle buffer and the tree (either an n-ary BVH or a kd-tree) separately, in both places: on the graphics card and in CPU RAM. So modifying them is quite simple (and I can easily stream from the hard drive).
For non-texture data of any type I'm thinking about a special memory manager. I can pass several (4 or 8) generic memory buffers (pages) to the kernel and then use uint32 "pointers" to address sub-buffers: with 4 pages, (ptr & 3) is the page number and (ptr & ~3) is the offset within that page.
As for textures, I think the advantage of image2d_t over a generic data buffer is not large in the context of a raytracer (poor data locality), so generic buffers may be better for textures too (with the bonus that you can implement bicubic sampling yourself). Personally, I also lean toward procedural generation and hi-poly models with vertex colors.
LOD based on distance from the viewer is a must-have, but it may work quite differently than in rasterizers. For example, in my raytracer it may be better to keep the same hi-poly model for both near and far hits than to hold two different versions of the model in memory at the same time.