Description

I am going to present the engine I have developed for all my experiments during my PhD thesis in virtual reality. My subject is: "Using Gaze Tracking Systems and Visual Attention Models to Improve Interaction and Rendering of Virtual Environments".
My PhD is about virtual reality, so I needed an engine to render my virtual environments (VE) and conduct my experiments. I like to do things on the GPU, so almost everything is computed on the GPU. I wanted the engine to display point and spot lights that could be static or dynamic, and I wanted the models inside the environment to be physically simulated. Every experiment was going to be very different, so I needed the engine to be easily scriptable. I also needed to replay all experiment sessions, so I wanted to be able to replay recorded navigations and interactions in order to apply costly algorithms to study users' gaze behavior. Finally, I wanted to be able to create my own VEs very easily. I developed this engine from scratch over two months in summer 2008.
Here are the features:
Virtual environments are created in Maya. Mental Ray computes a lightmap containing global illumination only. I developed my own exporter/file format for the meshes, Phong materials, and point and spot lights.
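To give an idea, such an exporter format boils down to a few tightly packed records. This is a hypothetical sketch of what the structures could look like, not the actual layout I used:

```cpp
#include <cstdint>

// Illustrative records for a minimal VE export format.
// All fields are plain floats/ints so the structs can be
// written and read back as raw binary chunks.
struct Vertex {
    float position[3];
    float normal[3];
    float uv[2];        // texture / lightmap coordinates
};

struct PhongMaterial {
    float diffuse[3];
    float specular[3];
    float shininess;
};

struct PointLight {
    float position[3];
    float color[3];
    float radius;
};

// A mesh chunk header: counts followed by packed arrays in the file.
struct MeshHeader {
    uint32_t vertexCount;
    uint32_t indexCount;
    uint32_t materialIndex;
};
```

Each mesh chunk in the file would then be a `MeshHeader` followed by `vertexCount` vertices and `indexCount` indices.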
The renderer is an OpenGL/GLSL-based z-prepass renderer. Static lights come from the exported VE, and dynamic lights can then be added from the script. Concerning shadows, I use a simple depth map for spot lights and a virtual depth cube map for point lights. I only use native hardware shadow map filtering. Not very eye candy, but it is enough for the experiments I needed to conduct. Luminance adaptation is also implemented.
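For a virtual depth cube map, the point light renders six depth maps (one per cube face), and at lookup time the face is chosen from the light-to-fragment direction by its dominant axis. A minimal sketch of that face selection, in the usual +X,-X,+Y,-Y,+Z,-Z order (illustrative, not the engine's exact code):

```cpp
#include <cmath>

// Map a direction vector to a cube map face index 0..5
// (+X,-X,+Y,-Y,+Z,-Z) by taking the axis of largest magnitude.
int cubeFaceFromDirection(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1; // +X / -X
    if (ay >= az)             return y >= 0.0f ? 2 : 3; // +Y / -Y
    return z >= 0.0f ? 4 : 5;                           // +Z / -Z
}
```

The shadow test then samples the selected face's depth map with the fragment's distance to the light.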
Physics simulation is done using the PhysX API (rigid bodies only).
Scripting is done using Lua together with LuaBind. Lua is a very powerful scripting language, and I did a lot of things with it through my engine interface: simple navigation, a shooting game, objects following waypoints, etc.
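As one example, a waypoint-following script reduces to a simple per-frame update: move toward the current waypoint and advance to the next one when close enough. A hedged sketch of that logic (written in C++ here for illustration; the actual scripts were in Lua):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Move 'pos' toward waypoints[current] by speed*dt, clamped so it
// never overshoots; advance the index once within 'epsilon'.
// Returns the (possibly advanced) waypoint index.
size_t followWaypoints(Vec3& pos, const std::vector<Vec3>& waypoints,
                       size_t current, float speed, float dt,
                       float epsilon = 0.05f) {
    if (current >= waypoints.size()) return current; // path finished
    const Vec3& target = waypoints[current];
    float d = dist(pos, target);
    if (d <= epsilon) return current + 1;
    float step = std::fmin(speed * dt, d); // do not overshoot
    pos.x += (target.x - pos.x) / d * step;
    pos.y += (target.y - pos.y) / d * step;
    pos.z += (target.z - pos.z) / d * step;
    return current;
}
```

Calling this once per frame from the script layer is enough to make props or cameras patrol along a path.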
The engine is able to record and replay any session using an event-based process.
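The idea behind event-based record/replay is to timestamp every input and interaction event during a session, serialize the list, and re-inject each event at the same simulation time on playback. A minimal sketch of the replay side (hypothetical event type, not the engine's actual API):

```cpp
#include <cstddef>
#include <vector>

// A recorded event: when it happened plus an opaque payload id
// (illustrative only; the real engine's event set is richer).
struct Event {
    double time; // seconds since session start
    int    id;   // e.g. key press, gaze sample, object grab...
};

// Fire every event whose timestamp falls inside the current frame
// window [lastTime, nowTime). Returns the cursor for the next frame.
template <typename Handler>
size_t replayStep(const std::vector<Event>& events, size_t cursor,
                  double lastTime, double nowTime, Handler&& handle) {
    while (cursor < events.size() && events[cursor].time < nowTime) {
        if (events[cursor].time >= lastTime)
            handle(events[cursor]);
        ++cursor;
    }
    return cursor;
}
```

Because replay is deterministic, expensive gaze-analysis algorithms can be run offline on the exact same session the user experienced.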
Because my PhD is about gaze tracking, the engine currently supports the Tobii X50 gaze-tracking hardware.
I hope that was not too long to read. There is a short video on my YouTube channel: http://www.youtube.com/user/hillairesebastien (not very demonstrative, I have to admit).
I am not releasing the source code right now, but if you still want it, or just some pieces of code, contact me.
There are more details and screenshots about the engine on my blog: http://sebh-blog.blogspot.com/.
To conclude, I had a lot of fun doing this, and it was very interesting to create a simple engine that uses all these technologies together! I am currently changing the renderer to a light pre-pass renderer, and the performance is really impressive.
My website: sebastien.hillaire.free.fr