On Mon, April 18, 2005 9:16 pm, Hank said:
>> Let's say you've got, oh, 2000 records to search through.
>> You're gonna end up doing a JOIN with:
>> 2,000 X 65,000 == 130,000,000 tuples (records/results).
>> 130 MILLION tuples is *way* too many for your basic $20/month site.
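The blow-up quoted above is just the cross product of the two row counts; a quick sketch (row counts taken from the figures in the quote):

```python
# Naive distance search: every candidate record gets compared against
# every row of the zip table, so the join produces |A| * |B| tuples.
records = 2_000   # rows in the source data table (from the thread)
zips = 65_000     # rows in a full US zip code table (approximate)

tuples = records * zips
print(tuples)     # 130 million comparisons before any distance filter
```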
> I'd say take some easy shortcuts first... like limit the join to the
> zip code table by the target state (which of course is indexed), then
You might not have the target state, and not all countries have states at all.
> add the one, two or at most three neighboring states, if you're near a border.
Well, now, THAT just complicated things all to hell...
How would I know if 60601 in IL is "near" a border or not?
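The border problem is easy to demonstrate; here's a minimal sketch using a made-up three-row zip table (the schema and sample coordinates are mine, not from the thread):

```python
import sqlite3

# Hypothetical schema: a zip table with state and lat/long columns.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE zips (zip TEXT, state TEXT, lat REAL, lon REAL);
    CREATE INDEX idx_zips_state ON zips (state);
    INSERT INTO zips VALUES
        ('60601', 'IL', 41.886, -87.618),
        ('46320', 'IN', 41.620, -87.510),
        ('90210', 'CA', 34.090, -118.406);
""")

# Restricting the search to the target state uses the index and shrinks
# the candidate set -- but it silently drops 46320 (IN), which is only
# a few miles from 60601 across the state line.
rows = conn.execute(
    "SELECT zip FROM zips WHERE state = ?", ("IL",)
).fetchall()
print(rows)   # [('60601',)] -- the nearby Indiana zip is filtered out
```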
> Or just limit the join to all the neighboring states in one query.
So now I need to look up the neighboring states, but depending on the
target distance and/or the state size (Rhode Island, anyone?) maybe
neighboring states are *NOT* a good filter at all. Take a look at the top
of Idaho some time. Want to bet there are zips in the states on each side
that are closer than, say, two arbitrary zips in Idaho and a neighboring
state?
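The neighbor lookup is itself extra data you'd have to build and maintain; a sketch with a hand-made, deliberately partial adjacency map (not a complete one):

```python
# Partial state-adjacency table, hand-entered for illustration only.
NEIGHBORS = {
    "IL": {"IN", "WI", "IA", "MO", "KY"},
    "RI": {"CT", "MA"},
}

def candidate_states(state):
    # Target state plus its neighbors -- the filter the quoted post proposes.
    return {state} | NEIGHBORS.get(state, set())

print(sorted(candidate_states("RI")))   # ['CT', 'MA', 'RI']
```

Note that whether this set is a *correct* filter still depends on the search radius and the state's size, which is exactly the Rhode Island / Idaho objection above.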
> One, two, or three short running queries all in SQL is a
> whole-lot better than adding un-normalized and redundant fields to the
> source data table and populating it in a cron job and triggers. Talk
> about taking the long way around.
It works. :-)
It's incredibly efficient.
The "un-normalized" data is derived and refreshed automatically, so it
never suffers from the update anomalies that make un-normalized data
"bad" in the first place.
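The precompute-and-cache approach can be sketched like this; the table names, the haversine cutoff, and the sample rows are all invented for illustration, and the pair-building loop is the part a cron job would run:

```python
import math
import sqlite3

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance; good enough for "nearby zips" precomputation.
    r = 3959.0  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE zips (zip TEXT PRIMARY KEY, lat REAL, lon REAL);
    -- The denormalized cache: zip pairs within the cutoff, refreshed offline.
    CREATE TABLE zip_near (zip_a TEXT, zip_b TEXT, miles REAL);
    INSERT INTO zips VALUES ('60601', 41.886, -87.618),
                            ('46320', 41.620, -87.510),
                            ('90210', 34.090, -118.406);
""")

CUTOFF = 50.0  # miles; the O(n^2) cost is paid here, once, not per search
for za, lat_a, lon_a in conn.execute("SELECT * FROM zips").fetchall():
    for zb, lat_b, lon_b in conn.execute("SELECT * FROM zips").fetchall():
        if za < zb:  # store each pair once
            d = haversine_miles(lat_a, lon_a, lat_b, lon_b)
            if d <= CUTOFF:
                conn.execute("INSERT INTO zip_near VALUES (?, ?, ?)",
                             (za, zb, d))

# Search time: one cheap lookup instead of a 130-million-tuple join.
near = conn.execute(
    "SELECT zip_b FROM zip_near WHERE zip_a = ?", ("46320",)
).fetchall()
print(near)   # the Indiana zip's precomputed in-range neighbors
```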