@Luz Reyes

> Yes, good to point out that one of these functions is undefined at 0. Then again, we should keep in mind that log(0) is never defined, no matter what code you use, because there simply is no such thing as log(0) in mathematics. What could you possibly raise 10 to in order to get 0? Solution DNE.

While this is true over the reals, it's not true over the extended reals (R ∪ {-inf, +inf}). There, log(0) is defined as -inf, which is what I think my two-years-younger self was hinting at. Also, the log function in `<cmath>` returns -inf for log(0), and that behavior may be preferable.

Luz, this is a two-year-old thread...

Yes, good to point out that one of these functions is undefined at 0. Then again, we should keep in mind that log(0) is never defined, no matter what code you use, because there simply is no such thing as log(0) in mathematics. What could you possibly raise 10 to in order to get 0? Solution DNE.

> Interesting. How about casting like (float&)i ? No (explicit) use of pointers here so possibly no aliasing trouble (plus, references are always constant anyway). Or are references treated like pointers with syntactic sugar?

References are a form of typed aliasing, so they (should) have the same problem. Unions, on the other hand, are a language-supported method for aliasing the same memory with different types.

While the strict aliasing rule rarely causes problems in practice, and thus many people ignore it, it DOES cause problems sometimes, and they can be extremely hard and time-consuming to find. Always try to write the code using unions from the start.

@Nils Pipenbrinck

> Be aware that casting float* to int* and similar stuff will break the strict aliasing rule, and you'll run into serious trouble if you compile your code with newer GCC compilers.

Interesting. How about casting like (float&)i ? No (explicit) use of pointers here so possibly no aliasing trouble (plus, references are always constant anyway). Or are references treated like pointers with syntactic sugar?

> You need to keep in mind Nick's version is undefined for 0.

Yes, but log(0) is always undefined anyway. :happy:

The easiest way to always get a well-defined result is to bitwise-OR the input with 1. But of course the programmer should carefully check, case by case, whether that behavior is acceptable.

Nils, just out of interest: would GCC react the same way to both the C++ `reinterpret_cast` and a C-style cast, or would it not complain about `reinterpret_cast` (the same response as to union type-punning)?

Be aware that casting float* to int* and similar stuff will break the strict aliasing rule, and you'll run into serious trouble if you compile your code with newer GCC compilers.

One way around this is to use unions; another is to go through `char*` in the cast, since `char*` is allowed to alias any type. That makes a "magical" connection between the two addresses and tells the optimizer not to assume that floats and ints never share the same memory.

Compilers are becoming aggressive when it comes to aliasing... I'm sure the next VS will be much more aggressive as well.

You need to keep in mind Nick's version is undefined for 0.

Slightly slower, but more accurate, version for floats (posted on flipcode many moons ago).

```cpp
inline float fast_log2(float val)
{
    int *const exp_ptr = reinterpret_cast<int *>(&val);
    int x = *exp_ptr;
    const int log_2 = ((x >> 23) & 255) - 128;  // extract the biased exponent
    x &= ~(255 << 23);                          // clear the exponent bits
    x += 127 << 23;                             // force exponent to 0, so val is in [1, 2)
    *exp_ptr = x;
    val = ((-1.0f/3) * val + 2) * val - 2.0f/3; // (1)
    return (val + log_2);
}
```

Line (1) computes 1 + log2(m), with m ranging from 1 to 2. The proposed formula is a second-degree polynomial chosen to keep the first derivative continuous across power-of-two boundaries. A higher degree could be used for more accuracy. For faster results, one can remove this line if accuracy is not a concern (the result is then a linear interpolation between powers of 2).

> While that's vastly faster than the standard log2 function, there's an even faster approach using a processor-independent intrinsic in Visual C++:

Ahh, I didn't think of using BSR, thanks for the tip!

JF

While that's vastly faster than the standard log2 function, there's an even faster approach using a processor-independent intrinsic in Visual C++:

```cpp
#include <intrin.h>

inline unsigned long log2(int x)
{
    unsigned long y;
    _BitScanReverse(&y, x);  // y = index of the highest set bit; undefined if x == 0
    return y;
}
```

We often need to compute the log2 value for a particular number: finding the required power-of-two size of a texture, computing how many bits we need to send over the network for a particular value range, etc.

Here's my implementation, taking advantage of the FPU's internal representation: converting the number to floating-point puts it into the sign/exponent/mantissa form of a float, so we only need to extract the exponent and we're done:

```cpp
unsigned long kFloorLogTwo(unsigned long Value)
{
    unsigned long Result = 0;
    if (Value)
    {
        const float FloatValue = Value;  // exact for Value < 2^24
        Result = (((*(unsigned long *)&FloatValue) & 0x7f800000) >> 23) - 127;
    }
    return Result;
}

unsigned long kCeilLogTwo(unsigned long Value)
{
    unsigned long Result = 0;
    if (Value)
    {
        const float FloatValue = Value;  // exact for Value < 2^24
        const unsigned long Bits = *(unsigned long *)&FloatValue;
        Result = ((Bits & 0x7f800000) >> 23) - 127;
        if (Bits & 0x007fffff)  // round up unless Value is an exact power of two
            ++Result;
    }
    return Result;
}
```

JF