>>>>> "Peter" == Peter Zaitsev <pz@stripped> writes:
>> When scanning tables, MYSQL reads everything in 'record_buffer'
>> hunks. When using keys MySQL just reads the needed row from the data.
Peter> And here is the problem: if you have a large table (several million
Peter> records) and need to do a range-index scan producing, say, 10000
Peter> records used in a join or whatever. In my task the data is "clustered"
Peter> on the index, as the index is a timestamp and records are only
Peter> inserted at the end of the table.
Peter> So MySQL works like this: read a key block (or find it in the cache),
Peter> then read a comparatively small piece of data, then search the key
Peter> blocks again. So I get a huge number of reads of data (and fewer of
Peter> key blocks) instead of several big reads of data. This of course slows
Peter> everything down a lot when this is not the only task running, so the
Peter> disk head has to fly around a lot. That's why I was asking about
Peter> configuring read-ahead on Linux - I can provide enough cache memory to
Peter> allow, for example, 128K read-aheads.
Peter> Now I'm thinking of buying a hardware RAID which has its own
Peter> read-ahead, so that should help with this problem in addition to
Peter> spreading requests to the
Peter> The other question I haven't found the answer to yet: does MySQL
Peter> flush data all the way to disk (by calling fsync() or whatever), or
Peter> does it just do fflush() or the like to flush data into the OS cache?
I mailed an answer to this a while ago. It just puts things into
the OS cache (if you are not using the --flush startup option).
I have even updated the manual about this :)
>> Before MySQL 3.23.12, when flushing key blocks, MySQL collected
>> 2000 blocks at a time, sorted these and wrote them to disk.
>> In MySQL 3.23.12 we changed that to sort and write all blocks at once,
>> and only use the 2000-block batches as a fallback if there isn't
>> enough memory to do the sort on everything.
Peter> Thank you, this is a really nice change.
Peter> A nice feature would be allowing one to specify, with CREATE TABLE,
Peter> something like a "growth chunk", which may decrease fragmentation.
>> The problem with the above is mainly that this would confuse myisamchk
>> a lot. We had this in an earlier ISAM release but removed it a couple
>> of years ago because we thought we wouldn't need it for what we were
>> doing. It may be time to rethink that and do it again.
>> (The MyISAM table format has actually reserved space for relocation
Peter> I think it would be a nice feature, allowing fragmentation to be
Peter> reduced a lot in several cases.
Peter> For now I think I'll rewrite the application to make more use of huge
Peter> multi-row inserts, so the file will grow in a few bigger pieces. (Now
Peter> I use multi-row inserts, but not such huge ones.)