Numerical Robustness for Geometric Calculations


Numerical Robustness for Geometric Calculations
Christer Ericson, Sony Computer Entertainment. What an incredibly dry and boring title I picked for my GDC’05 talk!

Numerical Robustness for Geometric Calculations
EPSILON is NOT 0.00001! Christer Ericson, Sony Computer Entertainment. Chris Hecker gave a talk the day before mine wherein he felt free to change the title of his talk. If he can, so can I! This is much catchier, no?

Takeaway An appreciation of the pitfalls inherent in working with floating-point arithmetic. Tools for addressing the robustness of floating-point based code. Probably something else too.

Somewhat shameless plug!
More detail on what I cover in this presentation can be found in my book Real-Time Collision Detection (Morgan Kaufmann, 2005). Okay, so I didn’t actually plug this during the presentation. That would have been rather low; even I wouldn’t stoop to that. However, that little devil on my shoulder made me add this slide to the downloadable after-the-fact slides.

Floating-point arithmetic
THE PROBLEM Floating-point arithmetic Let’s start with a brief recap of floating-point numbers.

Floating-point numbers
Real numbers must be approximated Floating-point numbers Fixed-point numbers (integers) Rational numbers Homogeneous representation If we could work in real arithmetic, I wouldn’t be having this talk!

Floating-point numbers
IEEE-754 single precision: 1 bit sign (bit 31), 8 bit exponent (biased, bits 30–23), 23 bits fraction (bits 22–0; 24 bit mantissa including the hidden bit). This presentation only discusses single-precision floating-point numbers. However, double-precision numbers are defined analogously and have similar issues as single-precision numbers. Note that for nonzero numbers, the leading bit of the mantissa of a normalized number is always set. Thus, because this bit does not need to be stored, the normalization effectively results in one extra mantissa bit for free. This bit is called the hidden or implicit bit. (Internally, arithmetic is performed with additional bits, known as the guard, round, and sticky bits.) This is a normalized format.
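The field layout above can be made concrete with a small decoder; a minimal sketch, assuming IEEE-754 binary32 floats (DecodeFloat and FloatFields are hypothetical names, not from the talk):

```c
/* Sketch: decode the three fields of an IEEE-754 single-precision float.
   memcpy is used for well-defined type punning. */
#include <stdint.h>
#include <string.h>

typedef struct { uint32_t sign, exponent, fraction; } FloatFields;

FloatFields DecodeFloat(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    FloatFields r;
    r.sign     = bits >> 31;          /* bit 31 */
    r.exponent = (bits >> 23) & 0xFF; /* bits 30..23, biased by 127 */
    r.fraction = bits & 0x7FFFFF;     /* bits 22..0; the hidden bit is not stored */
    return r;
}
```

For example, DecodeFloat(-6.25f) yields sign 1, biased exponent 129 (i.e. 2^2), and fraction 0x480000, since -6.25 = -1.5625 · 2^2.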

Floating-point numbers
IEEE-754 representable numbers:

  Exponent      Fraction   Sign    Value
  0 < e < 255   any        s       (–1)^s · 1.f · 2^(e–127)   (normalized)
  e = 0         f = 0      s = 0   +0
  e = 0         f = 0      s = 1   –0
  e = 0         f ≠ 0      s       (–1)^s · 0.f · 2^(–126)    (denormalized)
  e = 255       f = 0      s       (–1)^s · Inf
  e = 255       f ≠ 0      any     NaN

Floating-point numbers
In IEEE-754, the domain is extended with: –Inf, +Inf, NaN. Some examples: a/0 = +Inf, if a > 0; a/0 = –Inf, if a < 0; 0/0 = NaN; Inf – Inf = NaN; ±Inf · 0 = NaN. Known as Infinity Arithmetic (IA)
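These IA rules can be verified directly in C, assuming IEEE-754 (C Annex F) semantics; CheckInfinityArithmetic is a hypothetical helper, not from the talk. Dividing by a zero held in a variable avoids a compile-time error:

```c
/* Sketch: verify the infinity-arithmetic rules quoted above. */
#include <math.h>

/* Returns 1 if this platform exhibits the quoted IA behavior. */
int CheckInfinityArithmetic(void)
{
    float zero = 0.0f, a = 2.0f;
    int ok = 1;
    ok &= (a / zero == INFINITY);        /* a/0 = +Inf for a > 0 */
    ok &= (-a / zero == -INFINITY);      /* a/0 = -Inf for a < 0 */
    ok &= isnan(zero / zero);            /* 0/0 = NaN */
    ok &= isnan(INFINITY - INFINITY);    /* Inf - Inf = NaN */
    ok &= isnan(INFINITY * 0.0f);        /* +/-Inf * 0 = NaN */
    return ok;
}
```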

Floating-point numbers
IA is a potential source of robustness errors! +Inf and –Inf compare as normal, but NaN compares as unordered: NaN != NaN is true; all other comparisons involving NaN are false. Thus, these expressions are not equivalent: if (a > b) X(); else Y(); versus if (a <= b) Y(); else X(); The two expressions work fine for, say, a = 2, b = 1. However, consider what happens if a = NaN: now both tests fail!
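The divergence of the two seemingly equivalent branches can be demonstrated with a small sketch (BranchGreater/BranchNotLessEq are hypothetical names standing in for the slide’s X()/Y() branches):

```c
/* Sketch: the two branch forms from the slide, returning 1 when the
   X() branch would run and 0 when the Y() branch would run. They agree
   for ordinary inputs but diverge when a is NaN, because every ordered
   comparison involving NaN is false. */
#include <math.h>

int BranchGreater(float a, float b)   { if (a > b)  return 1; else return 0; }
int BranchNotLessEq(float a, float b) { if (a <= b) return 0; else return 1; }
```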

Floating-point numbers
But IA provides a nice feature too Allows not having to test for div-by-zero Removes test branch from inner loop Useful for SIMD code (Although same approach usually works for non-IEEE CPUs too.)

Floating-point numbers
Irregular number line Spacing increases the farther away from zero a number is located. The number range for exponent k+1 has twice the spacing of the one for exponent k. There are equally many representable numbers from one exponent to the next. Note, for example, that there are as many representable numbers between 1.0 and 2.0 as there are between any other two consecutive powers of two.

Floating-point numbers
Consequence of irregular spacing: 1e30 – (1e30 – 1.0) = 0.0, yet (1e30 – 1e30) + 1.0 = 1.0. Thus, floating-point addition is not associative (in general): (a + b) + c != a + (b + c). Source of endless errors!
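Non-associativity is easy to reproduce; a minimal sketch (SumLeft/SumRight are hypothetical names). At magnitude 1e30 the spacing between floats is far larger than 1.0, so the 1.0 vanishes unless the large terms cancel first:

```c
/* Sketch: the same three terms summed with different groupings. */
float SumLeft(float a, float b, float c)  { return (a + b) + c; }
float SumRight(float a, float b, float c) { return a + (b + c); }
```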

Floating-point numbers
All discrete representations have non-representable points. For any discrete representation (whether integers, floating-point numbers, or something else) the intersection point P between segments AB and CD typically does not lie on the grid of representable points. Thus, P must be snapped to some nearby (usually the nearest) representable point, here Q. The output generally requires more precision to represent than the input, regardless of the representation.

The floating-point grid
In floating-point, behavior changes based on position, due to the irregular spacing! Because the floating-point grid is nonuniform, we generally get different results at different positions for the same problem. Here this is illustrated by the rounded intersection point (red circle) of the two segments moving relative to the problem configuration as the configuration is translated (even though all segment endpoints are machine-representable throughout). Note also how the red circle moves from one side of the longer line segment to the other. Consistent sidedness is not guaranteed. This particular problem is exhibited a lot in large-world games. No time to go into it here, but avoid performing computations far away from the origin.

EXAMPLE Polygon splitting
Now an example of how to split a polygon (into two parts) against a plane in a robust fashion, highly relevant to various tree construction methods over polygonal data.

Polygon splitting Sutherland-Hodgman clipping algorithm
Shown here are the four cases of the Sutherland-Hodgman (SH) clipping algorithm. SH only retains the portion of the polygon in front of the plane. For splitting of a polygon against a plane, both parts must be retained. This is a simple modification of the SH algorithm (see e.g. [Ericson]).

Polygon splitting Enter floating-point errors!
Left: the exact intersection point I of the segment PQ with a plane. Right: in a floating-point representation, the computed intersection point IF is highly unlikely to lie on the plane (or on the segment).

Polygon splitting ABCD split against a plane
Left: The quad ABCD is split against a plane. Right: I is the intersection point of BC with the plane; J is the intersection point of DA with the plane. Note how both resulting quads ABIJ and JIDC straddle the plane that ABCD was split against! Note also that ABIJ is concave!

Polygon splitting Thick planes to the rescue! Desired invariant:
OnPlane(I, plane) = true, where: I = IntersectionPoint(PQ, plane). By making the plane thick, the “bad” intersection point is made to lie on the plane again.
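Classifying points against a thick plane can be sketched as below. This is an assumption-laden sketch, not the talk’s code: PLANE_THICKNESS_EPSILON is a hypothetical tuning constant, and the plane normal is assumed to have unit length so the dot product is a true signed distance:

```c
/* Sketch: point-vs-thick-plane classification. Plane is n.x = d. */
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;  /* n assumed unit length */

#define PLANE_THICKNESS_EPSILON 0.01f  /* hypothetical; tune per application */

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns +1 in front of, -1 behind, 0 on the (thick) plane. */
int ClassifyPointToPlane(Vec3 p, Plane plane)
{
    float dist = Dot(plane.n, p) - plane.d;  /* signed distance since |n| = 1 */
    if (dist >  PLANE_THICKNESS_EPSILON) return +1;
    if (dist < -PLANE_THICKNESS_EPSILON) return -1;
    return 0;
}
```

With this classification, an intersection point that lands slightly off the infinitely thin plane still classifies as on the thick plane, restoring the invariant.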

Polygon splitting Thick planes also help bound the error
Thick planes limit how close the endpoints P and Q can come to a plane with PQ still being subject to intersection against the plane. Thus, assuming a finite length PQ, the thick plane also limits how parallel to the plane PQ can become. This is good, because the more parallel to the plane PQ becomes, the larger the error in the intersection point. To see this, let’s consider the error in the computed intersection point of a segment PQ under displacement (to P’Q’). We look at two representative cases. Top: when PQ is close to perpendicular to the plane, the error distance e is about the same as the displacement of PQ to P’Q’. Bottom: as PQ becomes increasingly parallel to the plane, the error distance grows drastically (approaching infinity). Note: here in 2D the plane is just another line (segment). Considered as the intersection of two line segments, it is possible to view the geometric configuration of the two lines instead as a 2x2 system of linear equations. A system of linear equations is said to be ill-conditioned when a small perturbation in the system can produce large changes in the exact solution. (A system that is not ill-conditioned is called well-conditioned.)

Polygon splitting ABCD split against a thick plane
Left: The quad ABCD is split against a thick plane. Right: Only edge BC straddles the plane, and I is the intersection point of BC with the plane. The result of the split operation is the triangle ABI (on the left) and the quad AICD (on the right). Note how both resulting pieces are convex and correctly classify as lying to the left and right, respectively, of the plane.

Polygon splitting Cracks introduced by inconsistent ordering
Left: Two counterclockwise triangles (ABC and BDC) being clipped/split against a plane. Right: Treating the shared edge as both BC and CB (as counterclockwise ordering does) will give rise to two different intersection points with the plane, causing a crack! The fix? Always clip edges in a consistent order, e.g. going from in front of to behind the plane!
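The consistent-order fix can be sketched as follows (IntersectEdgePlane is a hypothetical helper, not the talk’s code; distA and distB are the signed distances of the endpoints to the plane and are assumed to have opposite signs). By canonicalizing the direction before computing, both triangles sharing the edge run the identical computation and get a bitwise-identical point:

```c
/* Sketch: intersect an edge with a plane in a canonical direction
   (always from the in-front endpoint to the behind endpoint). */
typedef struct { float x, y, z; } Vec3;

Vec3 IntersectEdgePlane(Vec3 a, float distA, Vec3 b, float distB)
{
    if (distA < distB) {  /* canonical order: in-front endpoint first */
        Vec3 tv = a; a = b; b = tv;
        float td = distA; distA = distB; distB = td;
    }
    float t = distA / (distA - distB);  /* parametric intersection, in [0,1] */
    Vec3 p = { a.x + t*(b.x - a.x),
               a.y + t*(b.y - a.y),
               a.z + t*(b.z - a.z) };
    return p;
}
```

Calling this with the endpoints in either order now yields exactly the same point, so the two triangles cannot disagree about where the shared edge crosses the plane.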

EXAMPLE (K-d) tree robustness

(K-d) tree robustness Robustness problems for:
Insertion of primitives. Querying (collision detection). The same problems apply to all spatial partitioning schemes! (BSP trees, grids, octrees, quadtrees, …)

(K-d) tree robustness Query robustness
For this two-level k-d tree, the segment/ray PQ should traverse all three leaves of the tree (in the order B, C, A). However, it is possible for floating-point errors to move the actual intersection point I of PQ with plane 1 to lie at IF, which means that leaf C would never be visited. (The scale of the error is exaggerated to make it more obvious.) NOTE: After the lecture, someone pointed out that for a k-d tree specifically you wouldn’t compute the x component of I here; you’d just set it to the plane’s x component. Thus IF would still lie on the plane, the error only occurring in y. That’s very true. Having said that, rather than moving IF to the plane in this slide, I decided to keep the slide as-is in these published slides. While I used k-d trees as an example (because they were easier to draw!), I really intended for the slide to illustrate the general problem (hence the parentheses around the k-d label in the title). For a BSP tree, for example, the point would not in general lie on the plane. I wanted to stress this. (The same comment applies to the next slide.)

(K-d) tree robustness Insertion robustness
On the left: triangles have to be clipped/split as they are inserted into the tree to ensure they are only inserted into the leaves they overlap, here leaves A and B. If the triangle were not split, it would additionally end up in leaf C, because the triangle as a whole straddles plane 2. On the right (with the triangle in a different position): because we are splitting the triangles, we can get the same situation we had with the query ray, namely that the intersection point of a triangle edge with plane 1 is moved such that the triangle is not inserted into a leaf it actually overlaps (leaf C).

(K-d) tree robustness How to achieve robustness?
Insert primitives conservatively, accounting for errors in both querying and insertion. The problem can then be ignored for queries.

EXAMPLE Ray-triangle test
Intersecting a ray against a triangle is far more problematic than common “solutions” assume. Here it is shown how an incorrect computation can lead to catastrophic failures (e.g. botched collision tests causing objects to fall outside the world). It is also shown how to perform the calculation robustly.

Ray-triangle test Common approach:
Compute the intersection point P of ray R with the plane of triangle T. Test if P lies inside the boundaries of T. Alas, this is not robust! The reason is that it fails for intersections near an edge shared between two triangles, as shown in the next couple of slides.

Ray-triangle test A problem configuration
The problem configuration: a ray R strikes the shared edge AB of triangles ABC and ABD.

Ray-triangle test Intersecting R against one plane
First we compute the intersection point P of the ray R with the plane of triangle ABC. Let’s say small errors in the computation put P slightly to the right of AB, such that the containment test for P being inside ABC fails.

Ray-triangle test Intersecting R against the other plane
We then compute the intersection point P of the ray R with the plane of triangle ABD. This computation uses completely different floating-point values than the previous computation, so we get a point P slightly different from the previous one. Let’s say this time small errors put P slightly to the left of AB, such that the containment test for P being inside ABD fails. Conclusion: R misses both triangles, seemingly passing through the shared edge!

Ray-triangle test A robust test must share calculations for the shared edge AB. Perform the test directly in 3D! Let the ray be R(t) = O + t·d. Then the sign of the scalar triple product [d, A – O, B – O] = d · ((A – O) × (B – O)) says whether the ray passes to the left or to the right of AB. If R is left of all edges, R intersects the CCW triangle. Only then compute P. Still errors, but manageable.
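The shared-calculation idea can be sketched as follows, adapted from the line-triangle test in Real-Time Collision Detection; function names are illustrative, and degenerate cases (ray lying in the triangle’s plane, where all triple products are zero) are not handled here:

```c
/* Sketch: edge-sign ray-triangle test via scalar triple products. */
typedef struct { float x, y, z; } Vec3;

static Vec3 Sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 Cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float ScalarTriple(Vec3 a, Vec3 b, Vec3 c) { return Dot(a, Cross(b, c)); }

/* Returns 1 if ray O + t*d passes through front-facing (CCW as seen from O)
   triangle ABC. A neighbor sharing edge AB computes the same triple product
   (merely negated), so the ray cannot slip between the two triangles. */
int RayThroughCCWTriangle(Vec3 o, Vec3 d, Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 pa = Sub(a, o), pb = Sub(b, o), pc = Sub(c, o);
    if (ScalarTriple(d, pc, pb) < 0.0f) return 0;  /* right of edge BC */
    if (ScalarTriple(d, pa, pc) < 0.0f) return 0;  /* right of edge CA */
    if (ScalarTriple(d, pb, pa) < 0.0f) return 0;  /* right of edge AB */
    return 1;  /* left of all edges: only now compute the point P */
}
```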

Ray-triangle test “Fat” tests are also robust!
An alternative to trying to make a ray test robust is to instead switch to a “fat” query primitive. For example, here a sphere is swept against the triangles. Fat tests allow intersections to be found even if the triangles have a gap between them, as long as the width of the gap is smaller than the “fatness diameter” of the query object. Fat tests are more expensive to compute than the basic ray test, however.

EXAMPLES SUMMARY Achieve robustness through…
(Correct) use of tolerances Sharing of calculations Use of fat primitives

TOLERANCES

Tolerance comparisons
Absolute tolerance Relative tolerance Combined tolerance Integer test

Absolute tolerance Comparing two floats for equality:
if (Abs(x – y) <= EPSILON) … Almost never used correctly! What should EPSILON be? Typically some arbitrary small number is used! OMFG!!

Absolute tolerances Delta step to next representable number:

  Decimal        Hex           Next representable number
  10.0           0x41200000    x + 0.000001
  100.0          0x42C80000    x + 0.000008
  1,000.0        0x447A0000    x + 0.000061
  10,000.0       0x461C4000    x + 0.000977
  100,000.0      0x47C35000    x + 0.007813
  1,000,000.0    0x49742400    x + 0.0625
  10,000,000.0   0x4B189680    x + 1.0

Absolute tolerances Möller-Trumbore ray-triangle code:
#define EPSILON 0.000001
#define DOT(v1,v2) (v1[0]*v2[0]+v1[1]*v2[1]+v1[2]*v2[2])
...
/* if determinant is near zero, ray lies in plane of triangle */
det = DOT(edge1, pvec);
if (det > -EPSILON && det < EPSILON) // Abs(det) < EPSILON
    return 0;

The code was written using doubles. Change to float without changing the epsilon, and DOT({10,10,10},{10,10,10}) breaks the test! DOT({10,10,10},{10,10,10}) = 300, which needs an epsilon of at least 0.0000305 (the delta step to the next representable number after 300, cf. the previous page). Because EPSILON = 0.000001 < 0.0000305, the epsilon test effectively becomes a test for “det == 0”.

Relative tolerance Comparing two floats for equality:
if (Abs(x – y) <= EPSILON * Max(Abs(x), Abs(y))) … Epsilon scaled by the magnitude of the inputs. But consider Abs(x) < 1.0, Abs(y) < 1.0: the allowed difference shrinks below EPSILON itself, making the test stricter than intended near zero!

Combined tolerance Comparing two floats for equality:
if (Abs(x – y) <= EPSILON * Max(1.0f, Abs(x), Abs(y))) … An absolute test for Abs(x) ≤ 1.0, Abs(y) ≤ 1.0; a relative test otherwise!

Integer test Also suggested, an integer-based test:
bool AlmostEqual2sComplement(float A, float B, int maxUlps)
{
    // Make sure maxUlps is non-negative and small enough that the
    // default NAN won't compare as equal to anything.
    assert(maxUlps > 0 && maxUlps < 4 * 1024 * 1024);
    int aInt = *(int*)&A;
    // Make aInt lexicographically ordered as a twos-complement int
    if (aInt < 0) aInt = 0x80000000 - aInt;
    // Make bInt lexicographically ordered as a twos-complement int
    int bInt = *(int*)&B;
    if (bInt < 0) bInt = 0x80000000 - bInt;
    int intDiff = abs(aInt - bInt);
    if (intDiff <= maxUlps) return true;
    return false;
}

It is possible to compare floating-point numbers based on their integer representation. However, it is hard to correctly handle the signed zeros and the quasi numbers (plus and minus infinity and NaN) of infinity arithmetic. This particular code is from Bruce Dawson’s internet article “Comparing floating point numbers”. I’m not a big fan of trying to compare floats as integers.

Floating-point numbers
Caveat: Intel x87 uses an 80-bit format internally (unless told otherwise). The errors you get depend on what code the compiler generated, which gives different results in debug and release builds.

EXACT ARITHMETIC (and semi-exact ditto)

Exact arithmetic Hey! Integer arithmetic is exact
As long as there is no overflow, integers are closed under +, –, and *. They are not closed under /, but divisions can often be removed through cross multiplication.

Exact arithmetic Example: Does C project onto AB? Floats:
float t = Dot(AC, AB) / Dot(AB, AB); if (t >= 0.0f && t <= 1.0f) ... /* do something */ Integers: int tnum = Dot(AC, AB), tdenom = Dot(AB, AB); if (tnum >= 0 && tnum <= tdenom) ... /* do something */

Exact arithmetic Another example: segments AB and CD.

Exact arithmetic Tests: Boolean, can be evaluated exactly.
Constructions: non-Boolean, cannot be done exactly.

Exact arithmetic Tests are often expressed as determinant predicates; Shewchuk’s predicates are a well-known example. They evaluate using extended-precision arithmetic (EPA). EPA is expensive to evaluate, so limit EPA use through a “floating-point filter”. A common filter is interval arithmetic.

Exact arithmetic Interval arithmetic x = [1,3] = { x ∈ R | 1 ≤ x ≤ 3 }
Rules: [a,b] + [c,d] = [a+c, b+d] [a,b] – [c,d] = [a–d, b–c] [a,b] * [c,d] = [min(ac,ad,bc,bd), max(ac,ad,bc,bd)] [a,b] / [c,d] = [a,b] * [1/d, 1/c] for 0 ∉ [c,d] E.g. [100,101] + [10,12] = [110,113]
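The rules above translate directly into code; a minimal sketch that deliberately ignores directed (outward) rounding, which a truly reliable implementation must add (IAdd/ISub/IMul are hypothetical names):

```c
/* Sketch: interval arithmetic without outward rounding. */
typedef struct { double lo, hi; } Interval;

Interval IAdd(Interval a, Interval b)
{ Interval r = { a.lo + b.lo, a.hi + b.hi }; return r; }

Interval ISub(Interval a, Interval b)
{ Interval r = { a.lo - b.hi, a.hi - b.lo }; return r; }

static double Min4(double a, double b, double c, double d)
{ double m = a; if (b < m) m = b; if (c < m) m = c; if (d < m) m = d; return m; }
static double Max4(double a, double b, double c, double d)
{ double m = a; if (b > m) m = b; if (c > m) m = c; if (d > m) m = d; return m; }

Interval IMul(Interval a, Interval b)
{
    double p1 = a.lo*b.lo, p2 = a.lo*b.hi, p3 = a.hi*b.lo, p4 = a.hi*b.hi;
    Interval r = { Min4(p1,p2,p3,p4), Max4(p1,p2,p3,p4) };
    return r;
}
```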

Exact arithmetic Interval arithmetic
Interval bounds must be rounded down/up to the nearest machine-representable number (outward rounding). The resulting calculation is reliable: the computed interval is guaranteed to contain the exact answer.

References

BOOKS
Ericson, Christer. Real-Time Collision Detection. Morgan Kaufmann, 2005.
Hoffmann, Christoph. Geometric and Solid Modeling: An Introduction. Morgan Kaufmann, 1989.
Ratschek, Helmut, and Jon Rokne. Geometric Computations with Interval and New Robust Methods. Horwood Publishing, 2003.

PAPERS
Hoffmann, Christoph. “Robustness in Geometric Computations.” JCISE 1, 2001.
Santisteve, Francisco. “Robust Geometric Computation (RGC), State of the Art.” Technical report.
Schirra, Stefan. “Robustness and precision issues in geometric computation.” Research Report, Max Planck Institute for Computer Science.
Shewchuk, Jonathan. “Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates.” Discrete & Computational Geometry 18(3):305–363, October 1997.

There’s really not a lot of information available on robust geometric calculations, especially targeted at real-time use. My book, Real-Time Collision Detection, probably contains the most coverage of the problem of any book out there right now. (No, really. If anyone disagrees, I’d love to learn of books that contain more. Please email me!) All the material presented in these slides is covered in more detail in Real-Time Collision Detection. Some other good book references are Hoffmann (for an overview, now slightly dated) and Ratschek and Rokne (for interval arithmetic, and the exact computation of dot products). Most computational geometry papers have a tendency to scoff at the use of tolerances (epsilons), but they’re also mostly written by academics with no practical experience (of real-time systems), so we can scoff at them in turn. Having said that, there are some good papers out there; the four listed above are some of the better papers on robustness available.