My experience with TBN vectors is that you definitely should normalize them. I have thought about it for some time and these are my conclusions:
The reasoning is similar to why we need special matrices to transform normals.
We use the TBN matrix to transform a _direction_ vector (the light vector) into tangent space! Matrices that contain non-uniform scaling (as non-normalized T and B would introduce) are supposed to transform direction vectors this way (transposed inverse of M):
n' = (M^-1)^T * n
(this kind of removes the non-uniformity in scaling, leaving us with a vector that has the right direction but may not have the right length, so a renormalization of n' is needed, too)
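To make that concrete, here is a minimal C++ sketch (my own illustration, not taken from any particular source) of transforming a direction with the inverse-transpose of a 3x3 matrix and renormalizing afterwards:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // row-major

// (M^-1)^T = cofactor(M) / det(M), so we never have to build M^-1 explicitly
Mat3 inverseTranspose(const Mat3& a)
{
    const float (*m)[3] = a.m;
    Mat3 c;
    c.m[0][0] =   m[1][1]*m[2][2] - m[1][2]*m[2][1];
    c.m[0][1] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]);
    c.m[0][2] =   m[1][0]*m[2][1] - m[1][1]*m[2][0];
    c.m[1][0] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]);
    c.m[1][1] =   m[0][0]*m[2][2] - m[0][2]*m[2][0];
    c.m[1][2] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]);
    c.m[2][0] =   m[0][1]*m[1][2] - m[0][2]*m[1][1];
    c.m[2][1] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]);
    c.m[2][2] =   m[0][0]*m[1][1] - m[0][1]*m[1][0];
    float det = m[0][0]*c.m[0][0] + m[0][1]*c.m[0][1] + m[0][2]*c.m[0][2];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            c.m[i][j] /= det;
    return c;   // this already is (M^-1)^T
}

// n' = (M^-1)^T * n, then renormalize because the length is not preserved
Vec3 transformDirection(const Mat3& invT, Vec3 v)
{
    Vec3 r = { invT.m[0][0]*v.x + invT.m[0][1]*v.y + invT.m[0][2]*v.z,
               invT.m[1][0]*v.x + invT.m[1][1]*v.y + invT.m[1][2]*v.z,
               invT.m[2][0]*v.x + invT.m[2][1]*v.y + invT.m[2][2]*v.z };
    float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z);
    return { r.x/len, r.y/len, r.z/len };
}
```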
Eric Lengyel's derivation in fact produces a matrix that transforms a vector from tangent space to object space (let's call it matrix 'A'). But the lighting code needs the inverse of that matrix ('B'). Instead of doing a real matrix inversion, Lengyel assumes that A is nearly orthogonal (or even forces orthogonality using that Gram-Schmidt step) and normalizes T, B and N. This way he gets an orthonormal (rotation) matrix that is perfectly suited for transforming the light vector.
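As far as I understand it, that orthonormalization step looks roughly like this per vertex (my own sketch of the Gram-Schmidt / handedness variant, not Lengyel's exact code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3  scale(Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
static Vec3  cross(Vec3 a, Vec3 b)  { return { a.y*b.z - a.z*b.y,
                                               a.z*b.x - a.x*b.z,
                                               a.x*b.y - a.y*b.x }; }
static Vec3  normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// n: vertex normal, t/b: accumulated (possibly unnormalized) tangent and bitangent
void buildOrthonormalTBN(Vec3 n, Vec3 t, Vec3 b, Vec3& outT, Vec3& outB, Vec3& outN)
{
    outN = normalize(n);
    // Gram-Schmidt: remove the part of t that points along n, then normalize
    outT = normalize(sub(t, scale(outN, dot(outN, t))));
    // rebuild b from n x t; flip it if the original b pointed the other way (handedness)
    outB = cross(outN, outT);
    if (dot(outB, b) < 0.0f)
        outB = scale(outB, -1.0f);
}
```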
Let's see what my "real" solution would yield:
l' = (B^-1)^T * l
l' = ((A^-1)^-1)^T * l
l' = A^T * l
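In code (nothing engine-specific assumed, just plain dot products), multiplying by A^T means projecting l onto the rows of A^T, which are T, B and N:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// l' = A^T * l: the rows of A^T are the basis vectors T, B, N
Vec3 toTangentSpace(Vec3 T, Vec3 B, Vec3 N, Vec3 l)
{
    return { dot(T, l), dot(B, l), dot(N, l) };
}
```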
Doh! This is the same thing that Lengyel is using, except that no normalization or orthogonalization is taking place. But why doesn't this work?
I forgot that I created the per-vertex TB vectors by summing up the face TB vectors of all faces this vertex belongs to! This creates vectors that are way too long. So I would have to either a) normalize the vectors again or b) divide by the number of faces we summed up (in order to get some kind of "average").
Actually, I did not test whether averaging instead of normalizing results in a different look.
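For completeness, here is a little sketch of the two options as I mean them (the structure and names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

struct VertexBasis {
    Vec3 tangentSum   = { 0, 0, 0 };
    Vec3 bitangentSum = { 0, 0, 0 };
    int  faceCount    = 0;
};

// called once for every face that touches this vertex
void accumulate(VertexBasis& v, Vec3 faceT, Vec3 faceB)
{
    v.tangentSum.x   += faceT.x; v.tangentSum.y   += faceT.y; v.tangentSum.z   += faceT.z;
    v.bitangentSum.x += faceB.x; v.bitangentSum.y += faceB.y; v.bitangentSum.z += faceB.z;
    ++v.faceCount;
}

// option a): normalize the sum -> always unit length
Vec3 normalized(Vec3 a)
{
    float l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return { a.x/l, a.y/l, a.z/l };
}

// option b): divide by the number of contributing faces -> an "average",
// which is only unit length if all face vectors point the same way
Vec3 averaged(Vec3 a, int count)
{
    return { a.x/count, a.y/count, a.z/count };
}
```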
But then it hit me. What is the basic reason why we do that TBN stuff at all? We want to transform l into the space where the normalmap vectors live, so we can "safely" calculate dot(n, l). They (usually?) live in "unstretched" texture space.
Here's the thing that people arguing with "stretched texture space" seem to miss: stretching the normalmap only changes the positions where the normal texels get fetched. But it DOES NOT stretch the direction of the fetched normals. They still live in unstretched space. And this is why the incoming l vector must not get stretched either, so TBN should be orthonormal. Otherwise we would combine a stretched l vector with an unstretched n, which would result in weird lighting.
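A tiny numeric check of that claim (my own toy example): take n = l = (1,0,1)/sqrt(2) in tangent space, so the correct dot(n, l) is 1, and leave T twice as long as it should be:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const float s = 1.0f / std::sqrt(2.0f);
    // normal fetched from the map and the correct tangent-space light vector
    float n[3] = { s, 0.0f, s };
    float l[3] = { s, 0.0f, s };

    // tangent-space light via dot products with T, B, N, but with T twice too long
    // (TBN = identity here, so dot(2T, l) is just 2 * l.x)
    float lt[3] = { 2.0f * l[0], l[1], l[2] };
    float len = std::sqrt(lt[0]*lt[0] + lt[1]*lt[1] + lt[2]*lt[2]);
    for (float& c : lt) c /= len;                        // renormalize the stretched vector

    float d = n[0]*lt[0] + n[1]*lt[1] + n[2]*lt[2];
    std::printf("dot(n, l) = %f instead of 1.0\n", d);   // ~0.949: the direction got skewed toward T
    return 0;
}
```

Even after renormalizing, the light direction is skewed toward T, so the lighting is simply wrong.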
Of course, if my assumption is wrong and the normalmap vectors are kind of "pre-stretched", the reasoning above does not hold. But that would drastically reduce the re-usability of normalmaps (since the normals would only be right for triangles with a specific texture stretching), and we could switch to object-space normalmaps anyway.