Oh dear. I spent far too much time trying to find out why a Windows build of GLUT was not working. It turns out I had changed the parameters for glutInit(argc, argv), where the program arguments are passed in. Since I was working between platforms I just made some mock parameters: char *argv[] = {""}; argc = 0;. On Linux/OSX, having argc = 0 was fine; inside GLUT it simply translated to a zero-sized memory copy of the parameters. Not so on Windows! While I am forever grateful for having source code to dive into, it can also be the wrong place to spend time finding out why things went wrong...

I also solved a precision issue when computing a reconstruction. The problem was that the computation used single-precision floats, whereas doubles were used in one initial transformation of the encode/decode process:

    (gdb) print dwx1        (encoding stage)
    $3 = 191.49999834373966
    (gdb) print fwx1        (decoding stage)
    $4 = 191.5

In this case the discrepancy does no harm, but it still has to be treated carefully, because the error can cause a different scan line to be used. I can foresee this affecting correctness once certain optimisations are applied, since they rely on knowing exactly how many points lie on a given scan line. The algorithm depends on homogeneous data types: all floats or all doubles!
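
Going back to the GLUT problem, here is a minimal sketch of a safer way to mock the arguments. It assumes a GLUT/freeglut-style <GL/glut.h>; the window and display callback are only there to make the example self-contained.

    /* Sketch: mocking the arguments passed to glutInit(). */
    #include <GL/glut.h>

    static void display(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        glutSwapBuffers();
    }

    int main(int argc, char *argv[]) {
        /* What I had before -- tolerated on Linux/OSX, not on Windows:
             int fakeArgc = 0;
             char *fakeArgv[] = { "" };
             glutInit(&fakeArgc, fakeArgv);
        */

        /* Safer mock: keep argc >= 1 and supply a program name,
           or simply forward the real argc/argv from main(). */
        int   fakeArgc   = 1;
        char  appName[]  = "app";
        char *fakeArgv[] = { appName, NULL };
        glutInit(&fakeArgc, fakeArgv);

        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("demo");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }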
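
And to make the float/double mismatch concrete, here is a small standalone sketch. The variable names mirror the gdb session above, but picking the scan line by rounding is my assumption about where the one-line disagreement shows up; the real transformation code is more involved.

    /* Sketch of the float/double mismatch between encode and decode. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Encoding stage: the initial transformation done in double precision. */
        double dwx1 = 191.49999834373966;   /* value observed in gdb */

        /* Decoding stage: the same quantity held as a single float. */
        float fwx1 = (float)dwx1;           /* rounds to 191.5 */

        /* Rounding to pick a scan line gives 191 from the double
           but 192 from the float -- one line apart. */
        long fromDouble = lround(dwx1);
        long fromFloat  = lroundf(fwx1);

        printf("double: %.17g -> scan line %ld\n", dwx1, fromDouble);
        printf("float : %.9g  -> scan line %ld\n", (double)fwx1, fromFloat);
        return 0;
    }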