Even if you use disk-based fields to get around the memory issue,
remember that all indexed fields are always stored in memory.
So, if the index data alone is more than the available memory, it is
not going to fit.
Even if you were to declare no indexed fields at all (which would
just be odd for that much data, or any amount of data for that matter),
NDB will still create a hidden primary key, using 8 bytes of memory
per record regardless.
Plus, text and blob fields are not stored completely on disk: the
first 256 bytes of EVERY such FIELD of EVERY RECORD are also kept in
memory, and only the remainder goes to disk.
So, a record of 10 text/blob fields with no indexes (silly), all
stored to disk, will still use a minimum of 2,568 bytes of memory
(8 + (256 * 10)). Add your indexes and the number goes up.
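To put that in perspective (the row count here is purely
illustrative): if such a table held 100 million records, the in-memory
portion alone would be 2,568 * 100,000,000 bytes, roughly 256 GB,
before counting any real indexes. That is why 16 GB (or even 32 GB) on
one box disappears fast.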
So, if you are sure that you have enough memory to hold just the
indexed fields in memory while the rest goes to disk, then try the
following (a rough command sketch follows these steps):
mysqldump the table definitions
mysqldump the data
alter the table definitions to match what is needed for NDB, with each
field stored to disk
import the table defs into a new database
import the data (try just a few records first to make sure)
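For example, something along these lines (a rough sketch only; the
database names, tablespace/logfile names, and file sizes are made up
and will need tuning for your data and hardware):

  # 1. dump the schema and the data separately
  mysqldump --no-data mydb > schema.sql
  mysqldump --no-create-info mydb > data.sql

  # 2. create an undo logfile group and a tablespace for NDB disk data
  mysql -e "CREATE LOGFILE GROUP lg1 ADD UNDOFILE 'undo1.log'
            INITIAL_SIZE 1G ENGINE NDBCLUSTER;"
  mysql -e "CREATE TABLESPACE ts1 ADD DATAFILE 'data1.dat'
            USE LOGFILE GROUP lg1 INITIAL_SIZE 10G ENGINE NDBCLUSTER;"

  # 3. edit schema.sql so each table definition ends with:
  #      ) TABLESPACE ts1 STORAGE DISK ENGINE=NDBCLUSTER;

  # 4. load the schema into a new database, then test with a small
  #    slice of the data before attempting the full 300 GB load
  mysql -e "CREATE DATABASE newdb;"
  mysql newdb < schema.sql
  mysqldump --no-create-info --where="1 LIMIT 100" mydb > sample.sql
  mysql newdb < sample.sql
  mysql newdb < data.sql

Keep in mind that STORAGE DISK only affects non-indexed columns;
anything indexed still lives in memory no matter what you put in the
table definition.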
A second note on memory:
You say that you have one server with 16 GB of memory. Well, NDB is
meant to run on a cluster of servers. Although it is possible to run
multiple data nodes on a single machine (killing redundancy and
performance), you don't really have as much memory as you think, as
all sorts of buffers, etc. need room as well.
Ricardo Soares Guimarães wrote:
> I have one database with 300 GB of data, and I want to try NDB.
> Is that possible?
> I tried several configurations and keys, but it seems that I can't
> import that much data.
> My server has (now) 16 GB of RAM, but soon it will have 32.
> Has anyone tried such a configuration?
> I am using MySQL Cluster 7.1.x on a 64-bit Linux server.