Well, based on your posts, it's really not clear either what your approach is or what problem you are trying to solve in the first place.
If you want to determine what point(s) on a 3D model lie underneath a specific screen point, tracing a ray from the camera through the screen point and finding its intersection(s) with the model is one way to do it. (gluUnProject alone is insufficient here, as it just performs the matrix operation that is the inverse of projection: it takes a 2D screen position plus a depth and gives you the corresponding 3D position, but if you don't know the depth ahead of time it doesn't help.) Another way would be to rasterize the model into a buffer - either a floating-point buffer that stores the 3D position directly at each pixel, or a depth buffer that can be used to reconstruct the 3D position of a pixel on demand, via gluUnProject or equivalent.
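To make the ray construction concrete, here's a rough sketch (in Python/NumPy, purely for illustration) of the same math gluUnProject does: unproject the screen point once at the near plane and once at the far plane, and the line through those two world-space points is your picking ray. The function names and the combined view-projection matrix argument are my own choices, not any particular API.

```python
import numpy as np

def unproject(win_x, win_y, win_z, view_proj, viewport):
    """Map a window-space point back to world space (what gluUnProject does).

    viewport is (x, y, width, height); win_z is in [0, 1], 0 = near plane.
    """
    vx, vy, w, h = viewport
    # Window coordinates -> normalized device coordinates in [-1, 1].
    ndc = np.array([
        2.0 * (win_x - vx) / w - 1.0,
        2.0 * (win_y - vy) / h - 1.0,
        2.0 * win_z - 1.0,
        1.0,
    ])
    # Invert the combined view-projection transform, then do the
    # perspective divide to get back a 3D world position.
    world = np.linalg.inv(view_proj) @ ndc
    return world[:3] / world[3]

def picking_ray(win_x, win_y, view_proj, viewport):
    """Return (origin, unit direction) of the ray under a screen point."""
    near = unproject(win_x, win_y, 0.0, view_proj, viewport)
    far = unproject(win_x, win_y, 1.0, view_proj, viewport)
    d = far - near
    return near, d / np.linalg.norm(d)
```

With an identity view-projection matrix and an 800x600 viewport, the center pixel (400, 300) unprojects to the NDC origin at both depths, so the ray runs from (0, 0, -1) straight down +z.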
There are upsides and downsides to both, so which is better will depend on how you want to use it. The raytracing approach is relatively costly in time, but cheap in memory (discounting acceleration structures you might need, such as BSP trees), is able to give sub-pixel precision, and can return all surfaces under the 2D screen point (not just the nearest one). The rasterization approach is less costly on balance if you want to query a large number of points, because you can rasterize once up-front and re-use the buffer for as many queries as you like. But it needs memory to store the buffer, the precision is limited to the resolution at which it was rasterized, and it can give you only the nearest surface at each pixel.
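For the raytracing side, the core operation is a ray/triangle intersection test that you run against every triangle of the model (or just those surviving whatever acceleration structure you use), collecting every hit rather than only the nearest. A common choice is the Möller-Trumbore algorithm; here's an illustrative sketch, again in Python/NumPy with names of my own invention:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore: distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:            # ray parallel to the triangle's plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)       # distance along the ray
    return t if t > eps else None

def all_hits(origin, direction, triangles):
    """Every surface under the ray, nearest first (unlike a depth buffer,
    which keeps only the nearest)."""
    hits = [t for tri in triangles if (t := ray_triangle(origin, direction, *tri)) is not None]
    return sorted(hits)
```

Each returned t gives a sub-pixel-precise hit point as origin + t * direction, which is exactly the precision advantage over the rasterized buffer; the price is that you pay this per-triangle cost on every query instead of once up front.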