>>>>> "Peter" == Peter Zaitsev <pz@stripped> writes:
Peter> The manual discusses at length how MySQL does optimizations, but
Peter> says nothing about how it really works with the filesystem in cases
Peter> other than a full table scan. The questions are as follows:
Peter> 1) When MySQL updates a fragmented row, does it access only the first
Peter> part of it when possible? For example, frequently changed integer
Peter> values at the beginning of the row and a varchar at the end of it. If
Peter> so, the better choice would be to put all variable-length columns at
Peter> the end of the field list, in the order they are frequently updated.
Currently MySQL always reads the whole row into memory.
Peter> 2) How does mysqld scan a table on an index? Does it just read the
Peter> value in the index, then go to the table file to read the row there,
Peter> and repeat, or does it scan the index file to build a list of needed
Peter> rows and then read them sorted by offset?
MySQL scans the index in sorted order; for each index entry it goes to
the row file and reads the data. If the row matches, it is sent to the
client (or put in a temporary table, depending on context). Then MySQL
does a 'read_next' on the index; doing this, it continues the scan from
the last used index block (which is cached by the thread). In effect
MySQL gets a block of keys at a time and reads all matching rows for
these keys before continuing with the next block of keys.
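The loop described above might be sketched roughly as follows. This is a hypothetical Python illustration, not MySQL's actual C code; `index_scan`, its arguments, and the toy data layout are all invented for the example.

```python
def index_scan(index_blocks, data_file, predicate):
    """Walk the index in key order, one block at a time, fetching each
    row from the data file before moving to the next index block.

    index_blocks: list of index blocks, each a list of (key, row_offset)
    data_file:    dict mapping row_offset -> row tuple
    predicate:    WHERE-style filter applied to each fetched row
    """
    for block in index_blocks:        # one cached index block at a time
        for key, offset in block:     # keys within a block are sorted
            row = data_file[offset]   # seek into the table (row) file
            if predicate(row):
                yield row             # send to client (or a temp table)

# Toy usage: two index blocks over a four-row "table".
data = {0: ("a", 1), 10: ("b", 2), 20: ("c", 3), 30: ("d", 4)}
blocks = [[("a", 0), ("b", 10)], [("c", 20), ("d", 30)]]
rows = list(index_scan(blocks, data, lambda r: r[1] >= 2))
```

Note that all rows for the first block of keys are read before the second index block is touched, matching the batching behaviour described above.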
Peter> 3) How does MySQL perform joins (natural and star)? Does it select
Peter> the table to start with by the smallest number of rows (or by some
Peter> other criterion), then scan it somehow (full or by index) and, for
Peter> every row, search the other tables by index for the needed rows, or
Peter> does it build a list of records it has to fetch from every table and
Peter> then read them out?
This depends totally on the join type. I have tried to describe this
in the EXPLAIN section of the manual.
- If you don't have any usable keys, then MySQL will read # rows from
  the larger table, do a full scan of the smaller table, and join all
  matching rows to each other. Then it will read the next # rows from
  the first table and do another full scan of the smaller table (this
  is done recursively if you have many tables).
- In the normal case MySQL will first find one matching row from the
first table and then find, based on keys, the first matching row
from the second table and so on.
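The two strategies above can be sketched as follows. This is a hypothetical Python illustration of the general techniques (block nested-loop join without keys, and index-based nested-loop join in the normal case); the function names, `block_size`, and the toy data are invented for the example and are not MySQL's actual implementation.

```python
def block_nested_loop(outer, inner, match, block_size=2):
    """No usable keys: read block_size rows from the larger (outer)
    table, then full-scan the smaller (inner) table once per block,
    joining all matching rows to each other."""
    result = []
    for i in range(0, len(outer), block_size):
        block = outer[i:i + block_size]
        for inner_row in inner:          # full scan of the smaller table
            for outer_row in block:
                if match(outer_row, inner_row):
                    result.append((outer_row, inner_row))
    return result

def indexed_join(outer, inner_index, key):
    """Normal case: for each matching row of the first table, find the
    matching rows in the second table via its key (index) lookup."""
    return [(o, i) for o in outer for i in inner_index.get(key(o), [])]

# Toy usage: join two small "tables" on equality.
pairs_nokey = block_nested_loop([1, 2, 3], [2, 3, 4], lambda a, b: a == b)
pairs_key = indexed_join([1, 2, 3], {2: [2], 3: [3], 4: [4]}, lambda o: o)
```

The keyed variant touches only the inner rows it needs, which is why having an index on the joined column matters so much.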
Sasha asked the following question:
> And while you are at it, can you also explain if delete + insert could cause
> fragmentation on a table with variable length records?
DELETE + INSERT of rows of different lengths will normally cause some
fragmentation of the table. With MyISAM this is normally not that bad
if you delete many rows at a time, as MySQL automatically joins
adjacent empty blocks into bigger empty blocks on delete and
automatically extends old blocks on update if the next block is a
deleted block.
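The merging of adjacent empty blocks can be sketched like this. This is a hypothetical Python illustration of the coalescing idea, not MyISAM's actual free-list code; `merge_free_blocks` and the (offset, length) representation are invented for the example.

```python
def merge_free_blocks(free):
    """free: list of (offset, length) deleted blocks in the data file.
    Returns the list with adjacent blocks coalesced, so deleting many
    rows at once leaves a few large reusable holes rather than many
    small ones."""
    merged = []
    for off, length in sorted(free):
        if merged and merged[-1][0] + merged[-1][1] == off:
            # This block starts exactly where the previous hole ends:
            # extend the previous hole instead of recording a new one.
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((off, length))
    return merged

# Toy usage: three deleted blocks, two of them adjacent.
holes = merge_free_blocks([(20, 4), (0, 10), (10, 5)])
```

A later INSERT or UPDATE that needs a larger block can then reuse one big hole instead of splitting the row across several small fragments.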