While I don't know why the behavior changed, comparing floating point
values has been a problem since the earliest computers. Comparisons have
gotten better over the last 10-15 years, but it's still an issue. The
problem is that most decimal fractions have no exact binary representation.
Actually, I'd expect the example of 4.5 to compare cleanly, since 0.5 is a
power of two and converts exactly to binary floating point - but that is
academic.
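You can see the difference between the two cases with Python's decimal
module, which can display the exact binary value a float literal is
actually stored as (a minimal sketch, not anything specific to MySQL):

```python
from decimal import Decimal

# Decimal(float) reveals the exact value the binary double holds.
print(Decimal(4.5))  # exactly 4.5, since 4.5 = 2^2 + 2^-1
print(Decimal(4.3))  # a long string of digits, not exactly 4.3
```

4.5 round-trips exactly; 4.3 does not, which is why equality tests on it
are fragile.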
In the 1970s I got used to writing floating point comparisons as
x = 4.3
if(abs(x - 4.3) .lt. <some tolerance>)
The tolerance is how close the values needed to be for me to consider them
equal. An example might be
if(abs(x - 4.3) .lt. 0.00001)
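The same idiom in a modern language looks like this (a minimal Python
sketch; the 1e-5 tolerance just mirrors the 0.00001 above):

```python
def approx_equal(a, b, tol=1e-5):
    """Treat two floats as equal when they differ by less than tol."""
    return abs(a - b) < tol

# The classic demonstration: 0.1 + 0.2 is not exactly 0.3 in binary,
# but a tolerance comparison treats them as numerically equal.
print(0.1 + 0.2 == 0.3)              # False
print(approx_equal(0.1 + 0.2, 0.3))  # True
```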
Someone else might be able to shed some light on why the behavior changed.
On Monday, November 08, 2010 12:50:16 pm Larry Martell wrote:
> I have a client that asked me to look into a situation where they were
> seeing different behavior with the same data and same sql on 2
> different servers.
> They have some SQL that was comparing a double to a varchar in a where
> clause - something like this:
> where (doubleCol > varcharCol and some other conditions) or
> (doubleCol < varcharCol and some other conditions)
> Let's take an example where the varcharCol has a string in like
> '4.5000' and the doubleCol has 4.5.
> On their 5.0.45 server this was working as 'expected' (i.e. the
> comparisons in the where clause were false since they are numerically
> equal). But on their 5.0.18 server the > clause was resolving to true.
> I changed the query to cast both columns to binary, so the query is now:
> where (cast(doubleCol as binary) > cast(varcharCol as binary) and some
> other conditions) or
> (cast(doubleCol as binary) < cast(varcharCol as binary) and
> some other conditions)
> And now on both servers the < clause of the query is resolving to true.
> I know I can strip off the trailing zeros from the varchar, but there
> must be a generic way to cast these so they compare properly as
> numbers.
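For what it's worth, the result after the BINARY cast is easy to reproduce
outside of MySQL: once both sides are byte strings, the comparison is
lexicographic, so '4.5' sorts before '4.5000' even though the two are
numerically equal. A sketch in Python (not the client's actual data):

```python
# Byte-wise (lexicographic) comparison, as after CAST(... AS BINARY):
# "4.5" is a proper prefix of "4.5000", so it compares as smaller.
print(b"4.5" < b"4.5000")       # True  - the < clause fires
# Numeric comparison once both sides are converted to numbers:
print(float("4.5000") == 4.5)   # True  - trailing zeros don't matter
```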