I'm trying to write an application myself without a graphics API (no DirectX or OpenGL). I wrote my own 3D math library for this.
I set up a C++ structure for the vertex buffer and render a 3D image of a simple cube with perspective-correct texture mapping. The cube renders correctly when static, and I can rotate the camera around it. But I have one bug.
Take the first triangle of the cube, the first triangle of the front face. When I rotate the camera from right to left by 360 degrees, the triangle disappears behind the left edge of the monitor and reappears from the right edge, which is correct. But when it reappears after the rotation, it looks inverted: flipped both top-to-bottom and left-to-right. What could be the cause? I think the whole problem is in my software camera. Does anyone know why the cube looks inverted? My camera is built in the traditional way: vRight, vUp, and vLook vectors, which the mouse rotates about vUp (left/right) and vRight (up/down); the view matrix is then filled from these vectors. When the camera is static there is no bug, but after the camera moves through 360 degrees the bug appears. Why?
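One camera-side issue worth ruling out: if vRight, vUp, and vLook are rotated incrementally every frame, floating-point drift accumulates, and after a full revolution the basis can end up skewed or flipped. A common safeguard is to re-orthonormalize the basis each frame before filling the view matrix. A minimal sketch, using the vector names from the post; the left-handed convention (vRight = vUp × vLook) is an assumption:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Rebuild an orthonormal camera basis from the (possibly drifted) axes.
// Call this every frame before constructing the view matrix so that
// incremental mouse rotations cannot skew or flip the basis over time.
void ReorthonormalizeCamera(Vec3& vRight, Vec3& vUp, Vec3& vLook) {
    vLook  = Normalize(vLook);
    vRight = Normalize(Cross(vUp, vLook));  // left-handed: right = up x look
    vUp    = Cross(vLook, vRight);          // unit length by construction
}
```

If the basis is clean and the bug persists, the problem is more likely in visibility testing, as the answers below-the-fold usually point out for software renderers.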
That sounds strange. Check your visibility function: normally you take the dot product of the surface normal and the camera's view direction. Chances are that when you rotate the camera or the model, something corrupts the normals. Log your surfaces and normals and check them on paper; that's what I usually do when I face this kind of problem. Do you recompute surface normals on the fly? You shouldn't. The common approach is to compute the shared vertex normals once for each vertex and rotate them along with the camera transformations.
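The visibility test described above can be sketched as follows. This is an illustration, not the poster's actual code: the face normal comes from the cross product of two edges, and visibility from the dot product of that normal with the eye-to-triangle direction. Counter-clockwise front-face winding is an assumption:

```cpp
struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}
static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Back-face test in world space. The triangle is visible when its normal
// faces the camera, i.e. when the dot product of the normal and the
// camera-to-triangle direction is negative. Assumes counter-clockwise
// winding for front faces.
bool IsFrontFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                   const Vec3& eye) {
    Vec3 normal = Cross(Sub(v1, v0), Sub(v2, v0));
    Vec3 toFace = Sub(v0, eye);         // from the camera toward the triangle
    return Dot(normal, toFace) < 0.0f;  // normal faces the camera -> visible
}
```

A triangle that passes this test from one side of the cube will fail it from the other side, which is exactly the situation in the question.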
Sounds like a depth buffer problem to me.
When you look at it from one direction you are seeing the outside of the cube, so it looks correct.
When you look at it from the other direction, you end up seeing the inside of the cube.
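In a software rasterizer this is easy to catch after projection: a triangle wound counter-clockwise on screen flips to clockwise when you see it from behind, which matches the mirrored appearance the question describes. A sketch, assuming screen coordinates with the y axis pointing up:

```cpp
// Twice the signed area of a screen-space triangle. Positive when the
// vertices run counter-clockwise (with y pointing up); the sign flips
// when the same face is viewed from behind, so culling can key off it.
float SignedArea2x(float x0, float y0,
                   float x1, float y1,
                   float x2, float y2) {
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
}
```

Skipping triangles whose signed area has the wrong sign discards the back faces before rasterization, so the inside of the cube is never drawn.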