List: General Discussion
From: Paul DuBois
Date: July 30 2002 2:43pm
Subject: Re: Mysql and Variable Length Records
At 16:27 +0200 7/30/02, Daniel Brockhaus wrote:
>Hi there,
>here's something everyone using variable-length records (varchar,
>text, blob) should know:
>Create a table containing at least one blob:
>>  create table db_test (
>>	ID int not null,
>>	vara blob,
>>	primary key (ID)
>>  );
>Insert two records:
>>  insert db_test values(1,'');
>>  insert db_test values(2,'');
>Now, keep updating the blobs, increasing their size by 16 bytes each time:
>>  update db_test set vara='AAAAAAAAAAAAAAAA' where ID=1;
>>  update db_test set vara='BBBBBBBBBBBBBBBB' where ID=2;
>>  update db_test set vara='AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' where ID=1;
>>  update db_test set vara='BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB' where ID=2;
>>  update db_test set vara='AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' where ID=1;
>>  update db_test set vara='BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB' where ID=2;
>Repeat this a few hundred times, then run myisamchk -ci. You'll
>get (after 1000 iterations):
>Checking MyISAM file: db_test
>- check record links
>Records:                 2    M.recordlength:    16008   Packed:
>Recordspace used:      100%   Empty space:           0%  Blocks/Record: 1001.00
>Record blocks:        2002    Delete blocks:         0
>Record data:         32016    Deleted data:          0
>Lost space:              4    Linkdata:          22012
>Whoa. Each record has been split into 1000 (one thousand!) blocks. 
>Reading one of these records would require 1000 reads from your
>hard disk. At roughly 14 ms per seek, that's about 14 seconds to
>read a record of 16K length! (You might get lucky and get better
>numbers from the OS's read-ahead logic and disk cache.)
>Now sit back and marvel at the efficiency of mysql's dynamic record handling.

Use OPTIMIZE TABLE periodically to defragment your table.
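
For example (a sketch; OPTIMIZE TABLE rewrites the table's data file, coalescing each fragmented record into a contiguous block and reclaiming deleted space):

```sql
-- Rewrite db_test's data file so each record is stored
-- in a single contiguous block again.
OPTIMIZE TABLE db_test;
```

Running myisamchk -ci afterwards should show Blocks/Record back near 1.00.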

>(Tested on mysql 3.23.33 and 3.23.41, known not to be fixed in the
>latest 3.23.X)
>I know this is a worst-case scenario. But still: I was using a table
>with blobs of changing size, and it was getting slower over time,
>roughly doubling its access time every week.
>You can work around the problem by making sure that your records 
>change size as seldom as possible. One way to do this is to add 
>another blob and change its size as needed to keep the total record 
>length constant. This would fix the problem, but it adds a lot of
>overhead to the application, and it does at the application level
>something the database should do itself.
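
A sketch of that padding workaround (the column name `pad` and the 1024-byte total payload are hypothetical choices; the application must recompute the filler on every update so the total row length never changes):

```sql
-- Hypothetical schema: a second blob used purely as filler.
create table db_test (
    ID int not null,
    vara blob,
    pad blob,
    primary key (ID)
);

-- On each update, shrink the filler by as much as vara grew,
-- keeping length(vara) + length(pad) constant:
update db_test
set vara = 'AAAAAAAAAAAAAAAA',
    pad  = repeat(' ', 1024 - 16)   -- 1024 = chosen fixed total payload
where ID = 1;
```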
>What I'd like to have is an addition to the myisam table handler 
>(and possibly to the other table handlers too, if those have the 
>same problem) which allows you to set the minimum block size mysql 
>will allocate for any record. That is, if you set that value to 512, 
>every block allocated would be of 512, 1024, 1536, ... bytes length.
>If this was added and you'd use a minimum block size of 512, the 
>above example would have created 32 blocks for each record. Still 
>bad, but a hell of an improvement over the 1000 without it.
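
To make the arithmetic above concrete, using the 16008-byte record length reported by myisamchk and the proposed (hypothetical, not an existing MyISAM option) 512-byte minimum block size:

```sql
-- A 16008-byte record split into 512-byte-aligned blocks needs
-- CEILING(16008 / 512) = 32 blocks, versus ~1001 in the example above.
SELECT CEILING(16008 / 512) AS blocks_per_record;
```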
>Does anybody know whom I have to write to ask for this to be added?
>I apologize for the lengthy post, the bad English, and the sarcasm.
>Daniel Brockhaus
