On Jul 29, Niels Larsen wrote:
> I had two column indexes of type BTREE where the .MYI file was ~5 GB.
> Then I decided to drop one of them and replace it with a fulltext index. The
> dropping took 4 hours and the fulltext indexing also 4 hours. There were about
You should've done it in one command:
ALTER TABLE table DROP INDEX index1, ADD FULLTEXT INDEX (col1, col2);
> 30,000,000 rows. I found very good query speeds; the only problem is large
> outputs, but maybe that's a temp-file thing. This gives me hope that we can
> use MySQL. Now I will try the other, much bigger column, which is the
> long-running one that I killed.
> The final problem (I hope) is how to add to a 30-50 GB database that has
> fulltext indices. I need to add 100-500 MB of updates once a day; but if I
> just say "load data infile ..." as when I first built the database, the
> loading crawls and will never finish. I haven't found a way to do this; can
> you think of a working one? That is, I want both efficient word search and
> the ability to frequently add to the database.
There's little I can suggest for now :(
Try experimenting with the bulk_insert_buffer_size variable. You need
both a big key cache and a big bulk insert buffer.
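A minimal sketch of what that tuning could look like, assuming a server
recent enough to set these variables at runtime; the sizes and the file
name are invented for illustration, so adjust them to your RAM and data:

```sql
-- Illustrative only: enlarge the MyISAM key cache and the per-thread
-- bulk-insert tree buffer before the daily load, then load.
SET GLOBAL key_buffer_size = 1024 * 1024 * 1024;         -- 1 GB key cache
SET GLOBAL bulk_insert_buffer_size = 256 * 1024 * 1024;  -- 256 MB tree buffer
LOAD DATA INFILE '/path/to/daily_update.txt' INTO TABLE t;
```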
When MERGE tables support fulltext searches, you will be able to use them
to keep a small table where you insert data, and from time to time move
the data from it to the big one. Actually you can do it with UNION even
now, but it's not a nice solution at all.
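As a sketch of that UNION workaround (all table and column names here are
invented for illustration):

```sql
-- Small staging table with the same structure and its own fulltext
-- index; daily inserts go here, so only the small index is updated.
CREATE TABLE docs_new (
  id  INT NOT NULL,
  doc TEXT,
  FULLTEXT (doc)
) TYPE=MyISAM;

-- Until MERGE tables support fulltext, a search spans both tables:
SELECT id FROM docs     WHERE MATCH(doc) AGAINST ('keyword')
UNION
SELECT id FROM docs_new WHERE MATCH(doc) AGAINST ('keyword');

-- From time to time, fold the small table into the big one:
INSERT INTO docs SELECT * FROM docs_new;
TRUNCATE TABLE docs_new;
```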
Note: when you insert more than 1% of the old table size, it's faster to
use ALTER TABLE t DISABLE KEYS before loading (and ALTER TABLE t ENABLE
KEYS afterwards).
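Spelled out, the load pattern would be (table name and file path are
placeholders):

```sql
ALTER TABLE t DISABLE KEYS;   -- stop maintaining non-unique indexes
LOAD DATA INFILE '/path/to/update.txt' INTO TABLE t;
ALTER TABLE t ENABLE KEYS;    -- rebuild the indexes in one pass
```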
Sergei Golubchik <serg@stripped>
MySQL AB, Senior Software Developer
Osnabrueck, Germany