Different math rounding behaviour between Linux, Mac OS X and Windows

#1
Hi,

I developed some mixed C/C++ code with some intensive numerical calculations. When compiled on Linux and Mac OS X, I get very similar results after the simulation ends. On Windows the program compiles as well, but I get very different results and sometimes the program does not seem to work.

I used GNU compilers on all systems. A friend recommended adding -frounding-math, and now the Windows version seems to run more stably, but the Linux and OS X results do not change at all.

Could you recommend other options to get better agreement between the Windows and Linux/OS X versions?

Thanks

P.S. I also tried -O0 (no optimizations) and specified -m32.


#2
I can't speak to the implementation on Windows, but Intel chips contain 80-bit floating-point registers and can give greater precision than the IEEE-754 standard specifies. You can try calling this routine in the *main()* of your application (on Intel chip platforms):

#include <fpu_control.h>   // glibc x87 control-word macros (_FPU_*)

inline void fpu_round_to_IEEE_double()
{
    unsigned short cw = 0;
    _FPU_GETCW(cw);        // get the FPU control word
    cw &= ~_FPU_EXTENDED;  // mask out 80-bit extended register precision
    cw |= _FPU_DOUBLE;     // mask in 64-bit double register precision
    _FPU_SETCW(cw);        // set the FPU control word
}
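
For example, a minimal driver (a sketch; <fpu_control.h> is the glibc header that defines the _FPU_* macros, so this assumes an x86 GNU toolchain):

int main()
{
    fpu_round_to_IEEE_double();  // drop x87 precision to 64 bits before any math
    // ... run the numerical simulation as usual ...
    return 0;
}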

I *think* this is distinct from the rounding modes discussed by @Alok.

#3
In addition to the runtime rounding settings others have mentioned, you can control the Visual Studio compiler settings under Properties > C/C++ > Code Generation > Floating Point Model. I've seen cases where setting this to "Fast" causes bad numerical behavior (e.g. iterative methods fail to converge).

These settings correspond to the /fp:precise, /fp:fast, and /fp:strict compiler switches, which are explained in Microsoft's documentation.
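
If you need finer granularity, a sketch of the per-function override (assuming MSVC, where #pragma float_control can locally override the project-wide /fp setting):

#pragma float_control(precise, on, push)
double accumulate(const double* v, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; ++i)
        s += v[i];   // kept in source order under 'precise'
    return s;
}
#pragma float_control(pop)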


#4
There are four rounding modes for floating-point numbers: round toward zero, round up (toward positive infinity), round down (toward negative infinity), and round to nearest. The default may differ between compilers and operating systems. To change the rounding mode programmatically, see [`fesetround`][1]. It is specified by the C99 standard, but your compiler may provide it in C++ as well.
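
A minimal sketch of what that looks like (standard C99 <fenv.h> interface; the volatile operands, plus the -frounding-math flag you are already using, keep the compiler from constant-folding across the mode changes):

#include <fenv.h>
#include <stdio.h>

int main(void)
{
    volatile double x = 1.0, y = 3.0;
    fesetround(FE_DOWNWARD);    // round toward negative infinity
    printf("down:    %.20f\n", x / y);
    fesetround(FE_UPWARD);      // round toward positive infinity
    printf("up:      %.20f\n", x / y);
    fesetround(FE_TONEAREST);   // restore the usual default
    printf("nearest: %.20f\n", x / y);
    return 0;
}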

You can also try gcc's `-ffloat-store` option. It tries to prevent gcc from keeping intermediate floating-point values in 80-bit x87 registers, forcing them out to memory (and thus rounding them to 64-bit doubles) instead.

Also, if your results change depending upon the rounding method, and the differences are significant, it means your calculations may not be numerically stable. Consider doing interval analysis, or using some other method, to find the problem. For more information, see [How Futile are Mindless Assessments of Roundoff in Floating-Point Computation?][2] (PDF) and [The pitfalls of verifying floating-point computations][3] (ACM link, but the PDF is available from many other places if that doesn't work for you).


[1]: fesetround, the C99 <fenv.h> rounding-mode control
[2]: "How Futile are Mindless Assessments of Roundoff in Floating-Point Computation?" (PDF)
[3]: "The pitfalls of verifying floating-point computations" (ACM)


#5
The IEEE and C/C++ standards leave some aspects of floating-point math unspecified. Yes, the precise result of adding two floats is determined, but any more complicated calculation is not. For instance, if you add three floats, the compiler can do the evaluation at float precision, double precision, or higher. Similarly, if you add three doubles, the compiler may do the evaluation at double precision or higher.
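
A small illustration of that latitude (a sketch; what you actually observe depends on FLT_EVAL_METHOD, the target, and the compiler flags):

#include <stdio.h>

int main(void)
{
    // 1e8 is exactly representable as a float, but 1e8 + 1 is not:
    // evaluated in float the 1.0f is absorbed, while evaluated in
    // double (or 80-bit x87) it survives to the final rounding.
    volatile float big = 1e8f, tiny = 1.0f;
    float  f = big + tiny - big;            // 0 with SSE floats, 1 if widened
    double d = (double)big + tiny - big;    // always 1
    printf("float expression:  %g\n", f);
    printf("double expression: %g\n", d);
    return 0;
}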

VC++ defaults to setting the x87 FPU's precision to double; I believe gcc leaves it at 80-bit extended precision. Neither is clearly better, but they can easily give different results, especially if there is any instability in your calculations. In particular, 'tiny + large - large' may give very different results if you have extra bits of precision (or if the order of evaluation changes). The implications of varying intermediate precision, and the challenges of deterministic floating-point in general, have been discussed at length elsewhere.

Floating-point math is tricky. You need to find out when your calculations diverge and examine the generated code to understand why. Only then can you decide what actions to take.


