It's been years since I was last active here, and I suddenly decided to try to make a contribution of some kind.
What you see here is a comparison of the same mesh textured a bit differently.
The mesh is completely flat; any feeling of volume comes only from good old parallax mapping and normal mapping (a shader that is fairly easy to write, cheap, and gives good results as long as you stay away from the cases that break it completely).
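For reference, the parallax offset itself boils down to shifting the texture lookup along the tangent-space view direction, scaled by the sampled height. A minimal Python sketch of that step (the function name and the scale constant are my own picks; in practice this runs per-fragment in GLSL):

```python
def parallax_uv(u, v, height, view_dir, scale=0.04):
    """Classic parallax mapping: shift the texture lookup along the
    view direction, proportionally to the sampled height.
    view_dir is the normalized tangent-space view vector (x, y, z),
    with z pointing away from the surface."""
    vx, vy, vz = view_dir
    # Offset the uv toward the viewer; higher texels "rise" toward the eye.
    du = height * scale * vx / vz
    dv = height * scale * vy / vz
    return u + du, v + dv
```

Viewed head-on (vx = vy = 0) the uv is unchanged; the more grazing the view, the larger the shift, which is also exactly where the technique starts to fall apart.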
It is lit by a single light; the blue and orange tones come from some simple environment mapping. Remember the usual equation albedo * (ambient + diffuse)?
Usually ambient is a constant in lighting tutorials, but, inspired by the spherical harmonics papers, I tried generating a simple square texture describing the color of the ambient light: longitude along x, latitude along y.
In the shader it becomes:
float invPi = 1. / 3.14159265358979;
vec3 wNormal = (vec4(nNormal, 0.) * cameraMatrix).xyz;
vec2 uv = vec2(0.5 * invPi * atan(wNormal.x, wNormal.z), invPi * atan(length(wNormal.xz), wNormal.y));
return vec4(vec3(texture2D(aMap, uv)), 1.);
* nNormal is the normal;
* cameraMatrix will allow us to get wNormal, the normal in world coordinates (otherwise the ambient light will seem to be attached to the camera);
* aMap is the texture the ambient map is bound onto.
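The same longitude/latitude lookup can be sketched outside the shader, which is handy for sanity-checking the mapping on the CPU (the function name is mine; Python's atan2 and hypot mirror the two-argument atan and length calls in the GLSL above):

```python
import math

def ambient_uv(nx, ny, nz):
    """World-space normal -> (u, v) into the longitude/latitude
    ambient map. u wraps around the horizontal plane (longitude),
    v goes from the +Y pole (0) down to the -Y pole (1)."""
    inv_pi = 1.0 / math.pi
    u = 0.5 * inv_pi * math.atan2(nx, nz)        # longitude, in [-0.5, 0.5]
    v = inv_pi * math.atan2(math.hypot(nx, nz), ny)  # latitude, in [0, 1]
    return u, v
```

A normal pointing straight up lands at v = 0, straight down at v = 1, and anything on the horizon at v = 0.5, so the top row of the texture colors upward-facing surfaces and the bottom row downward-facing ones.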
Now the part being compared in the image is the fake ambient occlusion. Notice how edges look lighter and inner corners look darker on the right? Even the round parts look a bit darker than the flat ones. The effect is very faint, and you probably wouldn't notice it on its own, but when you compare the images, there is a difference. The left one looks flatter, and somehow less interesting.
This is done very easily, during normal map creation: the normal map is generated from a heightmap, and so is a fake AO map. Fake, because it doesn't rely on any math, just intuition. The idea is that a higher element will, to some extent, prevent the surrounding lower elements from being lit. So a fake shadow is generated from the heightmap with a convolution filter, for example:
0 -1 0
-1 4 -1
0 -1 0
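Applied on the CPU, that filter looks something like this Python sketch (function names are mine; borders are clamped, and the signed response is kept so it can be added to the ambient term later — positive means an edge that catches light, negative a hollow that should darken):

```python
def filter_height(hmap, x, y):
    """Apply the cross-shaped kernel above at one texel of a 2D
    heightmap (list of rows), clamping coordinates at the borders."""
    h, w = len(hmap), len(hmap[0])
    def at(i, j):
        return hmap[max(0, min(h - 1, i))][max(0, min(w - 1, j))]
    return (4 * at(y, x)
            - at(y - 1, x) - at(y + 1, x)
            - at(y, x - 1) - at(y, x + 1))

def fake_ao(hmap):
    """Run the filter over the whole heightmap to get the
    fake shadow/highlight map."""
    h, w = len(hmap), len(hmap[0])
    return [[filter_height(hmap, x, y) for x in range(w)] for y in range(h)]
```

On a flat heightmap the response is zero everywhere, so flat areas keep the plain constant ambient; only bumps and dents get touched.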
Also, the farther away the elements, the softer the effect. So this is done repeatedly, each time with a blurrier version of the heightmap and an increased sample distance for the filter (at which point it's not strictly a convolution anymore). It would probably also work by reducing the size of the texture each time; I haven't tested that.
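The whole multi-pass idea can be sketched like this in Python (the box blur, the doubling sample distance, and the 1/(pass+1) falloff are my own choices for illustration, not the exact values I used):

```python
def blur(hmap):
    """Cheap 3x3 box blur, clamping at the borders."""
    h, w = len(hmap), len(hmap[0])
    def at(i, j):
        return hmap[max(0, min(h - 1, i))][max(0, min(w - 1, j))]
    return [[sum(at(y + dy, x + dx) for dy in (-1, 0, 1)
                                    for dx in (-1, 0, 1)) / 9.0
             for x in range(w)] for y in range(h)]

def sample_cross(hmap, x, y, d):
    """The same cross-shaped filter, but sampling d texels away."""
    h, w = len(hmap), len(hmap[0])
    def at(i, j):
        return hmap[max(0, min(h - 1, i))][max(0, min(w - 1, j))]
    return (4 * at(y, x)
            - at(y - d, x) - at(y + d, x)
            - at(y, x - d) - at(y, x + d))

def multiscale_ao(hmap, passes=3):
    """Accumulate the fake shadow over several passes: each pass
    works on a blurrier heightmap with a wider sample distance,
    so distant occluders contribute a softer term."""
    h, w = len(hmap), len(hmap[0])
    result = [[0.0] * w for _ in range(h)]
    current = hmap
    for p in range(passes):
        d = 1 << p  # sample distance doubles each pass
        for y in range(h):
            for x in range(w):
                result[y][x] += sample_cross(current, x, y, d) / (p + 1)
        current = blur(current)
    return result
```

Shrinking the texture between passes instead of widening d would give roughly the same effect for less work, mip-map style.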
Anyway, this is all a hack, aimed at getting a fairly interesting result quickly.