(Title taken from my own assessment of my spelling.)
It's been known for a while that rather than directly comparing two floating-point numbers for equality, you should instead subtract one from the other and compare the absolute difference to some epsilon value. The reason for this is that two numbers might be very, very close, but not exactly equal. "Epsilon" in this sense means "the minimum precision that you care about". The epsilon value for money, for example, is usually 0.01: differences smaller than this are thrown away.
So of course I went scrounging in the headers, found macros named FLT_EPSILON, DBL_EPSILON, and LDBL_EPSILON, and recommended to all my programmer friends that they use these constants for comparisons of floating-point values rather than comparing directly with ==.
From time to time, facts just float up to the top of my head for no obvious reason. I have a sheet taped to my wall called “Word of the Day”; when a word pops into my head like this, completely unrelated to any previous thoughts, I write it down on that sheet to look up later. I consider this a more advanced (if slow) form of self-education. They might be long-forgotten memories, or something else; I don't know, I just write them down and look them up.
About half an hour ago, this happened to me again. Except this time, the thought was definitely a memory of something I'd read in float.h:
/* The difference between 1 and the least value greater than 1 that is representable in the given floating point type, b**1-p. */
Another thought had bubbled up with this one, and it was an epiphany: technically, this means that the expression x != (x + FOO_EPSILON) should evaluate to 1. In other words, the whole subtract-and-compare-against-FOO_EPSILON dance isn't necessary.
So, as is my wont, I wrote a test app. Sure enough, that expression does evaluate to 1.
So forget what I said.
x != y is directly equivalent to the comparison against FOO_EPSILON, and it's easier to read, too. So just use that.