Problems with /Op option used to compile proj in Microsoft Visual Studio


Problems with /Op option used to compile proj in Microsoft Visual Studio

Calogero Mauceri
I've been struggling for a few days to understand some very odd behavior with proj.
I have two versions of the proj libraries and utilities: one compiled with Microsoft Visual C++ 2008 (Express Edition) and the other compiled with Microsoft Visual C++ 2003.

Analyzing some strange results in our processing, I found that executing certain transformations with the two proj libraries returns two (very) different results.
Here is an example of a transformation that gives me problems, using the cs2cs utility:

cs2cs -f "%.5f" +proj=tpeqd +a=2440e3 +b=2440e3 +lat_1=-22.960070000 +lon_1=65.546300000 +lat_2=-22.959730000 +lon_2=65.546160000 +no_defs +to +proj=longlat +a=2440e3 +b=2440e3 +lon_0=0 +no_defs
-2385.33232 0.00000

The cs2cs compiled with Microsoft Visual C++ 2008 returns a meaningful result for that input point:
65.56780        -23.01227 0.00000
while the version compiled with Microsoft Visual C++ 2003 returns
*       * 0.00000

The same thing happens with a few other sets of coordinates.
Looking at the error returned in the latter case, pj_transform exits with error
-19: acos/asin: |arg| >1.+1e-14

The error happens in the aacos function: in the first case 0.999999999999996 is passed to the function, while in the other case the value passed to aacos is 1.000000000000012.
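
For reference, the guard that produces that error looks roughly like the sketch below (from memory, so take it as illustrative; the real code lives in PROJ's aasincos.c and may differ in details). Arguments that overshoot ±1 by less than a small tolerance are clamped; anything beyond the tolerance raises error -19.

#include <math.h>

#define PI      3.14159265358979323846
#define ONE_TOL 1.00000000000001   /* 1 + 1e-14, as in the error text */

static int my_errno = 0;

/* Sketch of a guarded arccosine in the style of PROJ's aacos(). */
static double my_aacos(double v)
{
    double av = fabs(v);
    if (av >= 1.0) {
        if (av > ONE_TOL)
            my_errno = -19;           /* "acos/asin: |arg| > 1.+1e-14" */
        return v < 0.0 ? PI : 0.0;    /* clamp the near-miss */
    }
    return acos(v);
}

So 0.999999999999996 goes through acos() normally, while 1.000000000000012 exceeds the tolerance and trips the error.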

Now, looking (by chance) at the compilation options of proj, I noticed that among the options passed to the cl compiler is the floating-point consistency option /Op. As stated in "Microsoft Visual C++ Floating-Point Optimization" (http://msdn.microsoft.com/en-us/library/aa289157(v=vs.71).aspx):

"The consistency model can seriously reduce efficiency while simultaneously providing no guarantee of increased accuracy. To serious numerical programmers, this doesn't seem like a very good tradeoff and is the primary reason that the model is not generally well received."

Disabling that option, the result returned by the version of proj compiled with Microsoft Visual C++ 2003 is consistent with the other build.
Is there any reason why the /Op option is used to compile proj?

Regards,
    Calogero



Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

support.mn
Hello,

Calogero Mauceri [[hidden email]] wrote:

> Disabling that option, the result returned by the version of proj
> compiled with Microsoft Visual C++ 2003 is consistent with the other build.
> Is there any reason why the /Op option is used to compile proj?
>

I think that the /Op option only works with the 2003 compiler, and what it does is as follows
(text from MSDN):


"This option improves the consistency of floating-point tests for equality and inequality by disabling optimizations that could change the precision of floating-point calculations.

By default, the compiler uses the coprocessor's 80-bit registers to hold the intermediate results of floating-point calculations. This increases program speed and decreases program size. However, because the calculation involves floating-point data types that are represented in memory by less than 80 bits, carrying the extra bits of precision (80 bits minus the number of bits in a smaller floating-point type) through a lengthy calculation can produce inconsistent results."

http://msdn.microsoft.com/en-us/library/aa984742(v=vs.71).aspx


So it seems rather clear that MS compilers use the full 80-bit FPU registers of PC processors by default in most cases, and the reason for using that switch in the first place must have been the idea of getting results similar to other ANSI compiler environments? I am not sure if this answers the question?
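
Incidentally, on x87 there is also a runtime knob for this (a hedged sketch of mine, not something PROJ's build uses as far as I know): the MSVC runtime can drop the FPU itself from 80-bit to 53-bit rounding, which removes the extra-precision effect without forcing every intermediate down to float the way /Op does.

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Ask the x87 FPU to round intermediate results to 53 bits
       (double precision) instead of the default 64-bit mantissa. */
    unsigned int cw = _controlfp(_PC_53, _MCW_PC);
    printf("FPU control word is now 0x%08x\n", cw);
    /* ... subsequent x87 arithmetic rounds intermediates to double ... */
    return 0;
}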

regards: Janne.




Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Calogero Mauceri
Hi Janne,

>> Disabling that option, the result returned by the version of proj
>> compiled with Microsoft Visual C++ 2003 is consistent with the other build.
>> Is there any reason why the /Op option is used to compile proj?
>
> I think that the /Op option only works with the 2003 compiler, and what it does is as follows
> (text from MSDN):
>
> "This option improves the consistency of floating-point tests for equality and inequality by disabling optimizations that could change the precision of floating-point calculations.
>
> By default, the compiler uses the coprocessor's 80-bit registers to hold the intermediate results of floating-point calculations. This increases program speed and decreases program size. However, because the calculation involves floating-point data types that are represented in memory by less than 80 bits, carrying the extra bits of precision (80 bits minus the number of bits in a smaller floating-point type) through a lengthy calculation can produce inconsistent results."
>
> http://msdn.microsoft.com/en-us/library/aa984742(v=vs.71).aspx
>
> So it seems rather clear that MS compilers use the full 80-bit FPU registers of PC processors by default in most cases, and the reason for using that switch in the first place must have been the idea of getting results similar to other ANSI compiler environments? I am not sure if this answers the question?
>
> regards: Janne.


Ok, so the /Op option is meant to produce consistent and repeatable results.
The problem is that the cost of that repeatability is that the intermediate results of computations *lose accuracy*.
That explains the problem proj (compiled with Visual Studio 2003) has with the conversion I described before.


It is well explained here (text from MSDN):

http://msdn.microsoft.com/en-us/library/aa289157%28v=vs.71%29.aspx

"Many C++ compilers offer a "consistency" floating-point model, (through a /Op or /fltconsistency switch) which enables a developer to create programs compliant with strict floating-point semantics. When engaged, this model prevents the compiler from using most optimizations on floating-point computations [...]. The consistency model, however, has a dark-side. In order to return predictable results on different FPU architectures, nearly all implementations of /Op round intermediate expressions to the user specified precision; for example, consider the following expression:

float a, b, c, d, e;
. . .
a = b*c + d*e;

In order to produce consistent and repeatable results under /Op, this expression gets evaluated as if it were implemented as follows:

float x = b*c;
float y = d*e;
a = x+y;

The final result now suffers from single-precision rounding errors at each step in evaluating the expression. Although this interpretation doesn't strictly break any C++ semantics rules, it's almost never the best way to evaluate floating-point expressions. It is generally more desirable to compute the intermediate results in as high a precision as is practical. For instance, it would be better to compute the expression a=b*c+d*e in a higher precision as in,
double x = b*c;
double y = d*e;
double z = x+y;
a = (float)z;

or better yet

long double x = b*c;
long double y = d*e;
long double z = x+y;
a = (float)z;

When computing the intermediate results in a higher precision, the final result is *significantly more accurate*. Ironically, by adopting a consistency model, the likelihood of error is increased precisely when the user is trying to reduce error by disabling unsafe optimizations. Thus the consistency model can seriously reduce efficiency while simultaneously providing no guarantee of increased accuracy. To serious numerical programmers, this doesn't seem like a very good tradeoff and is the primary reason that the model is not generally well received.
"

From Visual C++ 2005 onwards, Microsoft suggests using the newer options /fp:precise, /fp:fast, or /fp:strict instead.

"Under fp:precise, only safe optimizations are performed on floating-point code and, unlike /Op, intermediate computations are consistently performed at the highest practical precision."

Calogero


Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Mikael Rittri

Here is an interesting article:

David Monniaux. The pitfalls of verifying floating-point computations. ACM Transactions on Programming Languages and Systems 30, 3 (2008) 12.
http://hal.archives-ouvertes.fr/hal-00128124

Best regards,

Mikael Rittri
Carmenta
Sweden
http://www.carmenta.com



Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Glynn Clements
In reply to this post by Calogero Mauceri

Calogero Mauceri wrote:

> The same thing happens with a few other sets of coordinates.
> Looking at the error returned in the latter case, pj_transform exits
> with error
> -19: acos/asin: |arg| >1.+1e-14
>
> The error happens in the aacos function: in the first case
> 0.999999999999996 is passed to the function, while in the other case
> the value passed to aacos is 1.000000000000012.

Calculating the arcsine or arccosine of a value very close to 1 is
bound to be numerically unstable.

Discussion of compiler optimisations is something of a red herring
here. If the calculation is relying upon the 80-bit extended
floating-point support in the x87 FPU, it won't work on architectures
which "only" use 64 bits. Similarly, just about any change to the way
that the compiler generates code may produce a different result, as
may changes to the FPU state or the system's math library.

Ultimately, I suspect that the calculation for the two-point
equidistant projection just wasn't designed for the case where the two
points are around one arc-second apart (approx. 15 metres; would be
approx. 40 metres on Earth).

--
Glynn Clements <[hidden email]>

Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Calogero Mauceri
Glynn Clements wrote:
> Ultimately, I suspect that the calculation for the two-point
> equidistant projection just wasn't designed for the case where the two
> points are around one arc-second apart (approx. 15 metres; would be
> approx. 40 metres on Earth).

Glynn, so you are suggesting that the problem lies not so much in the
floating-point approximations as in why the number passed to aacos is
so close to 1 in the first place.
Well, then my question is: in which cases does the two-point
equidistant projection misbehave? Is it a problem with the definition of
the two-point equidistant projection, or with the way it is implemented
in the proj lib?

I can provide a long list of cases where the conversion from tpeqd to
longlat fails; I just need a bit of time to run some data processing.

Thanks,
     Calogero



Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Frank Warmerdam
On Thu, Mar 15, 2012 at 8:09 AM, Calogero Mauceri <[hidden email]> wrote:
> Well, then my question is, which are the cases where the two point
> equidistant projection could not behave well? Is it a problem of the two
> point equidistant definition or a problem of the way it is implemented
> in proj lib?

Calogero,

I don't claim to understand the tpeqd projection well, but your parameters:

 +lat_1=-22.960070000 +lon_1=65.546300000
 +lat_2=-22.959730000 +lon_2=65.546160000

seem suspect to me in that they are so close.  Do you find similar
stability issues when the two points used are not so very close
together?

I must admit I'm not sure why /Op is used, and I suspect it is not
really all that appropriate as a default for PROJ.4.

Best regards,
--
---------------------------------------+--------------------------------------
I set the clouds in motion - turn up   | Frank Warmerdam, [hidden email]
light and sound - activate the windows | http://pobox.com/~warmerdam
and watch the world go round - Rush    | Geospatial Software Developer

Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

strebe
In reply to this post by Calogero Mauceri

> Well, then my question is: in which cases does the two-point equidistant projection misbehave? Is it a problem with the definition of the two-point equidistant projection, or with the way it is implemented in the proj lib?

The problem is in the implementation. These cases are properly dealt with by reformulation and series expansion. The question is whether it's worth the bother.
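
For the aacos-near-1 case at hand, for example, the reformulation idea amounts to something like this (my sketch of the standard trick, not anything proj currently contains): for an argument $1-\varepsilon$ just below 1,
\[
\arccos(1-\varepsilon) \;=\; \sqrt{2\varepsilon}\,\Bigl(1 + \tfrac{\varepsilon}{12} + O(\varepsilon^{2})\Bigr),
\qquad 0 \le \varepsilon \ll 1,
\]
which remains perfectly well behaved as $\varepsilon \to 0$, exactly where acos itself loses all accuracy.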

Regards,
— daan Strebe



Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Glynn Clements
In reply to this post by Calogero Mauceri

Calogero Mauceri wrote:

> > Ultimately, I suspect that the calculation for the two-point
> > equidistant projection just wasn't designed for the case where the two
> > points are around one arc-second apart (approx. 15 metres; would be
> > approx. 40 metres on Earth).
>
> Glynn, so you are suggesting that the problem lies not so much in the
> floating-point approximations as in why the number passed to aacos is
> so close to 1 in the first place.
> Well, then my question is: in which cases does the two-point
> equidistant projection misbehave? Is it a problem with the definition of
> the two-point equidistant projection, or with the way it is implemented
> in the proj lib?

It's presumably possible to reformulate the calculation to be more
numerically stable; the question is whether it's worthwhile. AFAICT,
the projection seems to be more commonly used at larger scales.

From the information you provided, I'm assuming that it's failing at
either:

        lp.phi = aacos(hypot(P->thz0 * s, d) * P->rhshz0);
or:
        P->z02 = aacos(P->sp1 * P->sp2 + P->cp1 * P->cp2 * cos(P->dlam2));

Looking at how those parameters are derived:

        P->dlam2 = adjlon(lam_2 - lam_1);

        P->cp1 = cos(phi_1);
        P->cp2 = cos(phi_2);
        P->sp1 = sin(phi_1);
        P->sp2 = sin(phi_2);

        P->z02 = aacos(P->sp1 * P->sp2 + P->cp1 * P->cp2 * cos(P->dlam2));
        P->hz0 = .5 * P->z02;

        P->thz0 = tan(P->hz0);
        P->rhshz0 = .5 / sin(P->hz0);

        cz1 = cos(hypot(xy.y, xy.x + P->hz0));
        cz2 = cos(hypot(xy.y, xy.x - P->hz0));
        s = cz1 + cz2;
        d = cz1 - cz2;

When lam_1 ~= lam_2 and phi_1 ~= phi_2, then:

        P->dlam2 ~= 0
        cos(P->dlam2) ~= cos(0) ~= 1
        P->cp1 ~= P->cp2
        P->sp1 ~= P->sp2
        P->sp1 * P->sp2 + P->cp1 * P->cp2 * cos(P->dlam2) ~= sin²(phi) + cos²(phi) ~= 1
        P->z02 ~= aacos(1) ~= 0

So the calculation of P->z02 is likely to be numerically unstable.

Also: cz1 ~= cz2, so d will have poor relative accuracy.

IOW, the formulation used is going to be unstable when the angle
between the reference points is small.
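
One standard fix for the z02 part (a hedged sketch of mine, not a patch against PROJ) is to compute the same central angle with the haversine form, which replaces the ill-conditioned acos of a value near 1 with a well-conditioned asin of a value near 0:

#include <math.h>

/* Great-circle angle between (phi1,lam1) and (phi2,lam2), haversine
   form: stable when the two points are close together, unlike
   acos(sp1*sp2 + cp1*cp2*cos(dlam)). */
static double central_angle(double phi1, double lam1,
                            double phi2, double lam2)
{
    double sdphi = sin(0.5 * (phi2 - phi1));
    double sdlam = sin(0.5 * (lam2 - lam1));
    double h = sdphi * sdphi + cos(phi1) * cos(phi2) * sdlam * sdlam;
    return 2.0 * asin(sqrt(h));
}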

More generally, any arccosine calculation should be seen as a warning
sign regarding numerical stability, as the derivative approaches
infinity as the result approaches zero, meaning that you get the
greatest error magnification at the point where you can least afford
it.
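
To put a rough number on that magnification (my own back-of-the-envelope figures, not from the PROJ code): with $\theta = \arccos x$,
\[
\left|\frac{d\theta}{dx}\right| \;=\; \frac{1}{\sqrt{1-x^{2}}} \;\to\; \infty
\quad\text{as } x \to 1,
\]
so for $x = 1-\varepsilon$ an absolute input error $\delta$ inflates to $|\delta\theta| \approx \delta/\sqrt{2\varepsilon}$. With $\delta \approx 10^{-16}$ and $\varepsilon$ itself of order $10^{-16}$, as in the failing cases above, the recovered angle is wrong in its leading digits.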

--
Glynn Clements <[hidden email]>

Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Calogero Mauceri
Sorry for replying so late, but I've been quite busy lately.

Frank Warmerdam wrote:
> I don't claim to understand the tpeqd projection well, but your parameters:
>
>   +lat_1=-22.960070000 +lon_1=65.546300000
>   +lat_2=-22.959730000 +lon_2=65.546160000
>
> seem suspect to me in that they are so close.  Do you find similar
> stability issues when the two points used are not so very close
> together?

Unfortunately we are mostly using the tpeqd projection with the two
points very close together; we need minimal distortion between the reference points.


Glynn Clements wrote:
> IOW, the formulation used is going to be unstable when the angle
> between the reference points is small.

Glynn, thank you very much for your explanation. So the smaller the
distance between the two points of the tpeqd, the less accurate the
conversions. That is really bad news :(.


Calogero

Re: Problems with /Op option used to compile proj in Microsoft Visual Studio

Charles Karney
On 03/20/12 13:09, Calogero Mauceri wrote:
> Glynn, thank you very much for your explanation. So the smaller the
> distance between the two points of the tpeqd, the less accurate the
> conversions. That is really bad news :(.

In the limit where the distance between the two central points A and B
vanishes, tpeqd degenerates to the azimuthal equidistant projection (the
spherical variety, because tpeqd is only implemented for a sphere).  So
if you're projecting a point C with CA >> AB and CB >> AB, you might get
more accurate results using aeqd.
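
For the parameters from the start of this thread, that substitution would look something like the line below (a sketch only: the midpoint is computed naively, and the x/y axes of tpeqd are rotated relative to aeqd, so projected coordinates are not directly interchangeable between the two):

cs2cs -f "%.5f" +proj=aeqd +a=2440e3 +b=2440e3 +lat_0=-22.959900 +lon_0=65.546230 +no_defs +to +proj=longlat +a=2440e3 +b=2440e3 +lon_0=0 +no_defs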

   --Charles