Ticket #35803

Floating-point instability

Open Date: 2015-12-01 19:36    Last Update: 2015-12-01 19:39

Reporter:
Owner:
Type:
Status: Open [Owner assigned]
Component:
Milestone: (None)
Priority: 6
Severity: 5 - Medium
Resolution:
File:

Details

First off: spline geometry, and especially overlap removal, are mission-critical for FontAnvil. Removing these features, or reducing them temporarily to an "under construction" state less usable than the current big hot mess, is not an acceptable course of action.

But the spline geometry inherited from FontForge includes a lot of numerical computation that seems to have been written cowboy-style, without a deep understanding of its behaviour. In particular:

* There are fuzz factors on comparisons all over the place, apparently chosen at random but presumably with a lot of careful trial and error behind getting them to work. Some of the fuzz factors are centralized in functions and others are hardcoded in different places.

* There are comparisons that ought to have fuzz factors and don't, a couple of them with butthurt code comments from George "explaining" that the complaints about the resulting infinite loops are "incorrect." Sorry, I've observed the infinite loops reliably.

* Infinities and NaNs occur frequently in practice and are seldom properly handled.

* Near-variations of the same algorithm (e.g. cubic equation solving) are implemented in multiple places to achieve minor differences in external interface, and look like they could be better factored together.

* Many algorithms attempt to set and recognize an "undefined" float value, and they use -1.0 (usually), -99999 (with varying numbers of nines), or -88888888 (observed once) instead of infinities or NaNs, so the markers collide with genuine defined floating-point values that occasionally hit those numbers. (A sketch of the collision follows this list.)

* Floating-point constants with integer values are almost invariably coded as integer-type constants, leaving the compiler to cast them implicitly; this is useful as a marker of where code is likely to be bad in other ways.

* Algorithms that attempt to be bit-perfect about their results use magic constants based on assumptions about exactly how many bits are in a given numeric type.

* Irrational-valued constants, such as multiples of pi, are hardcoded, usually to IEEE 754 "single" precision, even though in most configurations they will be used to compute "double" values.

* Most floating-point numbers are of type "real," "bigreal," or "extended," all of which are preprocessor-defined to mean "double" in a default configuration; until I removed them, there were compile-time options to change these defines, with unpredictable results.

* The assumptions about numeric precision interact badly with GCC's horribly broken handling of intermediate-result precision, so the instability of the algorithms varies in a complicated way with compiler, architecture, optimization settings, possibly CPU family (such as Intel Atom, Intel desktop, or AMD desktop), and "volatile" (liberal use of which seems to be the only simple fix for some of the infinite-loop bugs).
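
To make the reserved-value problem concrete, here is a hypothetical illustration of the collision it causes; this is a sketch of the pattern described above, not a quote from the FontForge/FontAnvil sources.

    /* Hypothetical illustration, not actual FontAnvil source. */
    #include <stdio.h>

    #define NO_EXTREMUM  -1.0   /* "undefined" marker that is also a legal coordinate */

    /* Pretend analysis routine: the x coordinate of an extremum, or the
     * sentinel when there is none. */
    static double extremum_x(int has_extremum, double where)
    {
        return has_extremum ? where : NO_EXTREMUM;
    }

    int main(void)
    {
        double a = extremum_x(1, -1.0);   /* a genuine extremum at x == -1.0 */
        double b = extremum_x(0, 0.0);    /* no extremum at all */
        printf("%d %d\n", a == NO_EXTREMUM, b == NO_EXTREMUM);
        /* prints "1 1": the genuine result is indistinguishable from "undefined" */
        return 0;
    }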

This situation should be cleaned up. In particular:

* The implementation should use doubles throughout and remain agnostic about exactly how precise they actually are.

* Loops should not depend for their termination on a floating point number reaching a machine-exact value, especially not if the trajectory to get there will take it through denormals and rounding. In particular, there should be no binary searches on floating-point values that depend for their termination on the endpoints comparing equal to each other, or on the length of an interval comparing equal to zero. (A termination-safe bisection is sketched after this list.)

* Fundamental operations like solving cubics should be defined in only one place, where they are implemented with full generality, correct handling of special (such as infinite) input and output values, correct handling in particular of ill-conditioned but otherwise non-special input, current best-practice algorithms, and documentation of exactly what they do and what their limitations may be. More specialized operations should call the general ones instead of reimplementing the fundamental algorithms. (The shape of such an interface is sketched, for the simpler quadratic case, after this list.)

* Normal real numbers (such as "-1.0") should not be used as reserved values to represent conditions like "no solution," NOT EVEN IF they are outside the range of expected non-reserved values. Reserved values should be avoided where possible, but if used, should be NaN or infinite. (The quadratic sketch after this list uses NaN for unused root slots.)

* Fuzzes, epsilons, and so on should be determined by examining the applications (for instance: how precise does a font control-point coordinate actually need to be, given the precision of the file formats?) instead of by trial and error. They should not be tighter than necessary, and they should usually be much looser than the tightest precision the machine can handle. (A sketch follows this list.)

* Constants whose exact values are irrational should be coded to the precision of an IEEE 754 80-bit "extended". (See the pi example after this list.)

* The pedigree of every nontrivial real constant should be documented.
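
On loop termination: a minimal sketch of a bisection that cannot spin forever, using an interval-width tolerance plus a hard iteration cap instead of exact floating-point equality. The function name and the cap of 200 iterations are illustrative choices, not existing FontAnvil code.

    /* Sketch only: termination-safe bisection.  Assumes f is continuous
     * and that f(lo) and f(hi) have opposite signs. */
    #include <math.h>

    static double fa_bisect(double (*f)(double), double lo, double hi, double tol)
    {
        double flo = f(lo);
        for ( int i=0; i<200; ++i ) {              /* hard cap: cannot loop forever */
            double mid = lo + (hi-lo)/2.0;
            if ( fabs(hi-lo)<=tol )
                return mid;                        /* width test, never lo == hi */
            double fmid = f(mid);
            if ( (flo<=0.0)==(fmid<=0.0) ) {       /* no sign change: root is in [mid,hi] */
                lo = mid; flo = fmid;
            } else {
                hi = mid;                          /* sign change: root is in [lo,mid] */
            }
        }
        return lo + (hi-lo)/2.0;                   /* best estimate when the cap is hit */
    }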
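
On the single general implementation and on reserved values: a sketch of what one documented entry point could look like, shown here for the simpler quadratic case. fa_solve_quadratic is an illustrative name, not an existing function; the points are that degenerate and non-finite input is handled explicitly and that unused root slots are NaN rather than -1.0 or -99999.

    #include <math.h>

    /* Real roots of a*x^2 + b*x + c == 0.  Returns the number of real
     * roots (0, 1, or 2; a double root is reported twice); unused slots
     * in roots[] are set to NaN, so callers test with isnan() and never
     * collide with a genuine root value. */
    static int fa_solve_quadratic(double a, double b, double c, double roots[2])
    {
        roots[0] = roots[1] = nan("");
        if ( !isfinite(a) || !isfinite(b) || !isfinite(c) )
            return 0;
        if ( a==0.0 ) {                   /* degenerate: really linear */
            if ( b==0.0 )
                return 0;                 /* constant: no unique root */
            roots[0] = -c/b;
            return 1;
        }
        double disc = b*b - 4.0*a*c;
        if ( disc<0.0 )
            return 0;
        /* compute the larger-magnitude root first to avoid cancellation */
        double q = -0.5*(b + copysign(sqrt(disc), b));
        roots[0] = q/a;
        roots[1] = ( q!=0.0 ) ? c/q : -b/(2.0*a);
        return 2;
    }

A cubic solver with the same shape would be the one everything else calls.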
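
On application-derived fuzzes: a sketch of deriving a comparison tolerance from what the output format can actually represent. The 1e-3 quantum here is an assumption made up for illustration, not a measured property of any particular font format.

    #include <math.h>

    /* Smallest coordinate distinction the target file format preserves
     * (illustrative value; derive the real one from the format specs). */
    #define COORD_QUANTUM 1e-3
    #define COORD_FUZZ    (COORD_QUANTUM/2.0)

    static int coords_equal(double a, double b)
    {
        return fabs(a-b) <= COORD_FUZZ;   /* loose, application-derived tolerance */
    }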
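
On irrational constants and their pedigree: an example of coding pi to the precision of an IEEE 754 80-bit "extended" and recording where the digits come from. FA_PI is an illustrative name.

    /* pi to 21 significant digits, enough for the 64-bit significand of
     * an IEEE 754 80-bit "extended"; digits as in the standard published
     * value (cf. C's M_PI and OEIS A000796). */
    #define FA_PI 3.14159265358979323846L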

Ticket History (2/2 Histories)

2015-12-01 19:36 Updated by: mskala
  • New Ticket "Floating-point instability" created
2015-12-01 19:39 Updated by: mskala
  • Details Updated

Attachment File List

No attachments
