We run a VERY I/O-intensive file application service. Currently, our
problem is that our disk spindles are being completely killed by
excessive SEEK time on the hard drives (NOT physical read/write speeds).
We have a directory structure where the files are stored based on the MD5
checksum of the file name, e.g. /0/00/000/000044533779fce5cf3497f87de1d060
The majority of these files are between 256K and 800K, with the ODD
exception (say less than 15%) being more than 1M but no more than 5M in
size. The content of the files is pure text (MIME-encoded).
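For reference, the nesting in the example path appears to follow the first one, two, and three hex characters of the digest; here is a minimal sketch of that layout, assuming the hash really is the MD5 of the file name (the function name is ours):

```python
import hashlib

def storage_path(file_name: str) -> str:
    """Derive the nested storage path from the MD5 of the file name.

    Assumes the /X/XX/XXX/ prefixes are the first 1, 2, and 3 hex
    characters of the digest, as in the example path above.
    """
    digest = hashlib.md5(file_name.encode("utf-8")).hexdigest()
    return "/{}/{}/{}/{}".format(digest[0], digest[:2], digest[:3], digest)
```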
We believe that storing these files in an InnoDB table may actually give
us better performance:
- There is one large file being read/written, instead of BILLIONS of
small files
- We can split the structure so that each directory (4096 in total) sits in
its own database
- We can move the databases as load increases, which means that we could
potentially run 2 physical database servers, each with 2048 databases
- It's easy to move/migrate the data thanks to MySQL replication; the same
can be said for redundancy of the data
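Since 4096 = 16^3, the first three hex characters of the digest identify the database, and a simple range split covers the two-server scenario above. A sketch of the routing we have in mind (function names are ours):

```python
def shard_for(digest: str) -> int:
    """Map a 32-char hex MD5 digest to one of the 4096 databases.

    4096 == 16**3, so the first three hex characters pick the shard.
    """
    return int(digest[:3], 16)

def server_for(shard: int, servers: int = 2) -> int:
    """Range-split the 4096 shards across physical servers.

    With servers=2 this gives 2048 databases per server, as proposed.
    """
    return shard // (4096 // servers)
```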
We are more than likely looking at BLOB columns, of course, and we need to
read/write from the DB in excess of 100 Mbit/s.
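As a rough sanity check on that figure, 100 Mbit/s against the file sizes above works out to only a few dozen files per second (the ~512K average is our assumption, taken as a midpoint of the 256K-800K range):

```python
# Back-of-envelope: sustained 100 Mbit/s against ~512 KB average files.
mbit_per_s = 100
bytes_per_s = mbit_per_s * 1_000_000 / 8   # 12.5 MB/s
avg_file_bytes = 512 * 1024                # assumed midpoint of 256K-800K
files_per_s = bytes_per_s / avg_file_bytes # roughly 24 files per second
```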
Would the experts consider something like this feasible? Is it worth
going down this avenue, or are we just going to run into different
problems? If we are facing different problems, what can we expect
to go wrong here?
Many thanks, and I look forward to any input.