As noted in a recent thread, it's tricky to create unit tests for overlay operations. One reason for this is that the order of the result components and coordinates is undefined, and so can vary between different algorithms and even different library releases. (The lack of well-defined ordering is deliberate, since imposing an ordering would reduce performance.)
The solution to this is to normalize the actual result (and possibly the expected value, to ensure it is stable). This imposes a well-defined ordering. It works well for cases which are hand-crafted to test overlay functionality.

But overlay results can vary in other ways as well, due to the use of different algorithms, different handling of numerical precision, and the heuristics used to ensure robustness. This is much harder to handle. In GEOS we deal with this in a limited way by testing the area and length of the result, rather than the actual result geometry. This is obviously a fairly weak test, and it's used mostly for checking robustness (which tends to produce either grossly wrong results or outright failure).

A further option would be to use some kind of similarity metric. JTS has used Area of Symmetric Difference and Hausdorff distance to unit test the buffer operation (which has this problem as well). I've also experimented with a "pinhole" method, involving generating points within a given distance of the input and result linework.

To keep things simple, it's easiest to use relatively simple geometry to test functionality, and cruder comparisons to test robustness.

_______________________________________________
geos-devel mailing list
[hidden email]
https://lists.osgeo.org/mailman/listinfo/geos-devel
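A minimal sketch of the two approaches above, using Shapely 2.x (which binds GEOS). The WKT literals are made-up illustrations, not actual overlay output; the point is that normalization makes coordinate-order differences comparable, while similarity metrics (symmetric-difference area, Hausdorff distance) tolerate small numeric differences:

```python
from shapely import wkt, normalize

# Two representations of the same square, differing only in ring start
# point and orientation -- as two overlay implementations might return.
a = wkt.loads("POLYGON ((0 0, 0 10, 10 10, 10 0, 0 0))")
b = wkt.loads("POLYGON ((10 10, 10 0, 0 0, 0 10, 10 10))")

# Textual comparison of the raw results fails...
assert a.wkt != b.wkt

# ...but after normalization the coordinate sequences match exactly.
assert normalize(a).equals_exact(normalize(b), 0)

# When coordinates may differ slightly (robustness heuristics, different
# precision handling), compare with similarity metrics instead:
assert a.symmetric_difference(b).area < 1e-9
assert a.hausdorff_distance(b) < 1e-9
```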
Forgot about the area and length tests. I think that is what I settled on in some tests in PostGIS where the answers truly were different, but different in an unmeaningful way; by that I mean they both achieved satisfactory answers. This was done for GDAL as well, because GDAL 3.2 changed its polygonize algorithm, which broke one of our raster tests, and the results were different even after normalize.

From: geos-devel [mailto:[hidden email]] On Behalf Of Martin Davis
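The "different but equally satisfactory" comparison described in this reply can be sketched as a scalar-summary check: instead of comparing geometries, compare their area and length within a relative tolerance. The helper name and tolerance below are illustrative, not actual PostGIS test code:

```python
import math

def summaries_match(expected_area, actual_area,
                    expected_length, actual_length, rel_tol=1e-6):
    """True if area and length each agree to within a relative tolerance."""
    return (math.isclose(expected_area, actual_area, rel_tol=rel_tol)
            and math.isclose(expected_length, actual_length, rel_tol=rel_tol))

# Two results that differ only negligibly pass:
assert summaries_match(100.0, 100.0000001, 40.0, 40.0000001)

# A grossly wrong result (the typical robustness failure mode) is caught:
assert not summaries_match(100.0, 57.3, 40.0, 40.0)
```

This is deliberately a weak test, as noted above: it cannot distinguish two different geometries with the same area and length, so it is best reserved for robustness checks where failures are gross.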