I am developing an application for work that deals with sensor performance. I need to calculate the standard deviation of an entire color buffer (a 1-channel, 16-bit floating-point framebuffer object) in OpenGL. For testing purposes, I am simply reading the buffer back into main memory and calculating the standard deviation on the CPU. Of course, as you might expect, this is a killer on the FPS.
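For concreteness, what I'm doing today is essentially the following, with NumPy standing in for the buffer read back via glReadPixels (the random data is just a placeholder for the sensor frame):

```python
import numpy as np

# Stand-in for the pixels read back from the FBO: a 1-channel
# 16-bit float buffer.  Random data is only for illustration.
rng = np.random.default_rng(0)
buf = rng.random((512, 512)).astype(np.float16)

# CPU-side standard deviation -- correct but slow, since the readback
# stalls the pipeline every frame.  Widen to float32 before summing so
# the fp16 accumulation doesn't lose precision.
pixels = buf.astype(np.float32)
mean = pixels.mean()
std = np.sqrt(np.mean((pixels - mean) ** 2))
```

This is the computation I want to move onto the GPU.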
I have googled and googled, searched everywhere, but I cannot find any references or leads on this. The only reference anyone mentioned was an article by Horn about calculating the sum of an entire buffer using shaders, but I could not find the article itself.
My first idea is to simply use the non-programmable pipeline and do additive blending onto another 16-bit f.p. surface: draw vertical lines on top of each other over and over again, with the texture coordinates referencing the buffer for each vertical line, once for each pixel column in the x direction (I may skip pixels to just approximate the s.d.), and then draw single points on top of each other referencing the row of column sums. That would give me the mean; then I would somehow repeat the process using the squared differences from the mean. This doesn't seem terribly efficient, but it would be better than reading the buffer back into main memory. Also, I don't know how to do the second pass, which needs the mean, without ping-ponging between textures.
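To make the reduction structure I have in mind explicit, here is a NumPy sketch of it (the arrays are only stand-ins for the blended render targets; the actual passes would be done with additive blending as described):

```python
import numpy as np

# Placeholder frame; in practice this lives in the 16-bit f.p. FBO.
rng = np.random.default_rng(1)
buf = rng.random((256, 256)).astype(np.float32)

# First pass: the "vertical lines" blended on top of each other
# collapse each column to one sum.
col_sums = buf.sum(axis=0)

# Second pass: the "single points" blended on top of each other
# collapse the row of column sums to one value.
total = col_sums.sum()
mean = total / buf.size

# The second round of passes would blend (x - mean)^2 the same way.
var = ((buf - mean) ** 2).sum() / buf.size
std = np.sqrt(var)
```

The open question is how to feed `mean` into that second round without the texture ping-pong.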
Does anybody have better (more efficient) ideas for this?
Thanks in advance,