The thing is, with lightmaps the process takes too long, since I have around 400 objects in my scene, so you can imagine how much time it takes to bake the lightmaps.
Very true. This is why I never particularly liked lightmaps. As for implementing it, I tried once, got it semi-working, but then decided against it. The pre-process time was absurd (my environment was always changing), and it's strictly for static scenes. It can't even handle doors opening and closing. I personally use Instant Radiosity, which works for semi-dynamic environments, and the preprocess time is usually < 5 seconds. Unfortunately, though, if you use this technique deferred shading is almost required, and deferred shading is not usually the best for super-realism (although Killzone 2 used it).
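To make the Instant Radiosity idea concrete: the short preprocess traces a handful of rays from the primary light, and each hit point becomes a dim "virtual point light" (VPL) tinted by the surface it landed on; the renderer then treats those VPLs like ordinary lights. Here's a minimal sketch of that preprocess. The data layout and the random-surface pick (standing in for real ray tracing) are illustrative, not from any particular engine:

```python
import random

def generate_vpls(light_color, surfaces, num_rays=64):
    """Spawn one virtual point light per traced ray; each VPL is tinted
    by the surface it hit and scaled down so the VPLs together carry
    roughly the primary light's energy."""
    vpls = []
    for _ in range(num_rays):
        # Stand-in for an actual ray cast: pick a random lit surface point.
        surf = random.choice(surfaces)
        color = tuple(lc * sc / num_rays
                      for lc, sc in zip(light_color, surf["color"]))
        vpls.append({"pos": surf["point"], "color": color})
    return vpls

surfaces = [
    {"point": (0.0, 1.0, 0.0), "color": (0.9, 0.2, 0.2)},  # red wall
    {"point": (1.0, 0.0, 0.0), "color": (0.8, 0.8, 0.8)},  # grey floor
]
vpls = generate_vpls((1.0, 1.0, 1.0), surfaces, num_rays=8)
print(len(vpls))  # 8 secondary lights, rendered like normal point lights
```

Because this produces dozens of extra point lights per frame, you can see why deferred shading (cheap per-light cost) is almost required.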
Are there any "ready-made" solutions around the web (like a .fx/HLSL file) for a GI-like system?
I think you'd be hard-pressed to find one. Shaders for light source models are absolutely everywhere. They're relatively easy to make and straightforward to implement. This is because shaders only have access to local information (one vertex, one pixel, one light source), while global illumination (as the name implies) needs access to global information about the scene, so CPU work is absolutely necessary. The reason that GI is such a challenge to create can be summed up in 3 statements:
The GPU natively only has access to local information, while the CPU has access to everything.
The GPU is extremely fast and the CPU is extremely slow (by comparison), so even though the CPU has global info, computing GI on the CPU isn't a feasible option for games.
The trick is to gather and compress scene information on the CPU, send it to the GPU (effectively making it "look like" local information), and use that to compute GI.
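The third statement is exactly what most real-time GI systems boil down to. A tiny sketch of that idea: the CPU flattens per-light scene data into one contiguous float array, which is what you would then upload as a 1D texture or uniform buffer so the shader can sample "global" information as if it were local. The field layout here (3 floats position, 3 floats color) is just an illustrative convention:

```python
def pack_lights(lights):
    """Flatten scene light data into one float array, ready to upload
    to the GPU as a texture or uniform buffer."""
    buf = []
    for light in lights:
        buf.extend(light["pos"])    # 3 floats: world position
        buf.extend(light["color"])  # 3 floats: RGB intensity
    return buf

lights = [
    {"pos": (0.0, 2.0, 0.0),  "color": (1.0, 0.9, 0.8)},
    {"pos": (4.0, 1.0, -2.0), "color": (0.2, 0.2, 1.0)},
]
buf = pack_lights(lights)
print(len(buf))  # 12 floats -> e.g. two RGB float texels per light
```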
The most GPU-independent GI method I've ever seen is GPU-based raytracing. However, it still involves a lot of pre-processing (a static scene with dynamic light sources is required), and saying that it's a memory hog is quite the understatement. A good-sized scene (such as, I dunno, an architectural visualization) would require a graph in the form of a 3D texture, at the very least 1024x1024x1024 at 32 bits. That's about 4.3 gigs of video memory(!!!!!).
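The memory figure is straight arithmetic: a cubic 3D texture at 32 bits (4 bytes) per voxel grows with the cube of its resolution.

```python
def voxel_grid_bytes(dim, bytes_per_voxel=4):
    """Video memory for a dim x dim x dim 3D texture at 32 bits/voxel."""
    return dim ** 3 * bytes_per_voxel

size = voxel_grid_bytes(1024)
print(size)                  # 4294967296 bytes
print(round(size / 1e9, 1))  # 4.3 (decimal gigabytes, i.e. exactly 4 GiB)
```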
Sorry about that; when it comes to programming I really don't know that much.
That's ok. Here's what I suggest:
I think you're going to have quite the adventure trying to find a true GI engine. I think the best option may be to use a "fake GI" solution. It involves strategically placing lights of different colors around the room. For example, if you have a red-colored wall, you put a dimly lit red light source right next to it to light other objects. This solution is not the most efficient, but it will certainly do. This approach was actually used in STALKER.
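The red-wall example above can even be automated instead of hand-placed. A rough sketch, assuming you can get each wall's center, normal, and color from your engine (the offset and dimming factors here are made-up tuning values):

```python
def fake_gi_lights(walls, offset=0.3, dimming=0.15):
    """For each colored wall, place a dim point light just in front of
    it (along its normal) so the wall's color bleeds onto nearby objects."""
    lights = []
    for wall in walls:
        # Nudge the light off the wall surface along the wall normal.
        pos = tuple(p + offset * n
                    for p, n in zip(wall["center"], wall["normal"]))
        color = tuple(c * dimming for c in wall["color"])
        lights.append({"pos": pos, "color": color})
    return lights

walls = [{"center": (0.0, 1.5, -5.0), "normal": (0.0, 0.0, 1.0),
          "color": (0.9, 0.1, 0.1)}]  # a red wall facing +z
lights = fake_gi_lights(walls)
print(lights[0]["pos"])  # light pushed 0.3 units off the wall
```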
To use this method, the most efficient approach is to find an engine that uses deferred shading or lighting. If you're wondering why, it's because a forward renderer requires one scene render (sort of) per light. Deferred shading requires only one scene render... ever (unless you're doing shadows).
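You can see why this matters once the fake-GI lights pile up. Roughly speaking (classic multi-pass forward vs. deferred; real engines complicate both sides), forward cost scales with objects × lights while deferred scales with objects + lights:

```python
def forward_draws(num_objects, num_lights):
    """Classic multi-pass forward lighting: re-draw every object per light."""
    return num_objects * num_lights

def deferred_draws(num_objects, num_lights):
    """Deferred shading: one geometry pass into the G-buffer, then one
    cheap full-screen (or light-volume) pass per light."""
    return num_objects + num_lights

# With the 400 objects mentioned earlier and, say, 50 fake-GI lights:
print(forward_draws(400, 50))   # 20000 draws
print(deferred_draws(400, 50))  # 450 draws
```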
Here's hoping this helps.