I read through the article and ran some more tests. The new scripts and
tables provide similar initial latency, but I think the test results show
them to be faster overall.
When it comes to latency, direct file access is still the champion without
caching. I think you made a good point about throughput which makes MySQL
more appealing for storing larger files. That actually surprised me,
because I always figured I'd have to store things like PDFs on disk and
control access to them by keeping them outside of the document root.
There's still a question of whether caching provides the edge and at what
cost. I haven't set up caching, so I'm not sure if it's complicated or not.
It would provide performance boosts to more than just images, though, so it
seems worthwhile. That's what I'll be exploring next. =)
> -----Original Message-----
> Most people make the mistake of using the biggest blob size to store
> files.. That blob size is capable of storing just HUGE files.. What
> we do is store files in 64K (medium blob) chunks..
> So if the image was, say, 200K in size, the metadata for the image would
> be 1 row in a table, and the image data would be 4 rows in the data
> table: 3 full 64K rows + 1 partially used row.
> There is a good article/sample code here on the kind of technique we
> started with:
> Using chunked data, apache/php only needs to pull row by row(64k) and
> deliver to the client, keeping the resultset size low = memory
> overhead low.
> The storage servers (mysql storage) I have tested on the LAN; they
> store and retrieve data from mysql (using an FTP gateway) at rates of
> 4600K/sec, which is I think the fastest speed my laptop network card
> could deliver.
> That's pretty fast.. Rare day when most internet users can talk to
> servers at those speeds.
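To make the chunking scheme from the quoted message concrete, here's a minimal sketch in Python. It uses the standard-library sqlite3 in place of MySQL just so it runs self-contained; the table and column names (`files`, `chunks`, etc.) are my own invention, not from the original. The idea is the same: one metadata row per file, one 64K data row per chunk, and retrieval row by row so only one chunk's worth of data is in memory at a time.

```python
import sqlite3

CHUNK = 64 * 1024  # 64K per data row, matching the medium-blob chunk size

def store_file(db, name, data):
    # One metadata row per file...
    cur = db.execute("INSERT INTO files (name, size) VALUES (?, ?)",
                     (name, len(data)))
    fid = cur.lastrowid
    # ...and one data row per 64K chunk (the last one may be partial).
    for seq, off in enumerate(range(0, len(data), CHUNK)):
        db.execute("INSERT INTO chunks (file_id, seq, data) VALUES (?, ?, ?)",
                   (fid, seq, data[off:off + CHUNK]))
    return fid

def fetch_file(db, fid):
    # Pull row by row, keeping the live result small (~64K per fetch),
    # as a web tier would when streaming chunks straight to the client.
    rows = db.execute("SELECT data FROM chunks WHERE file_id = ? ORDER BY seq",
                      (fid,))
    return b"".join(bytes(r[0]) for r in rows)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT, size INTEGER)")
db.execute("CREATE TABLE chunks (file_id INTEGER, seq INTEGER, data BLOB)")

blob = b"x" * (200 * 1024)  # a 200K "image", as in the example above
fid = store_file(db, "photo.jpg", blob)
n_rows = db.execute("SELECT COUNT(*) FROM chunks WHERE file_id = ?",
                    (fid,)).fetchone()[0]
assert n_rows == 4                  # 3 full 64K rows + 1 partial row
assert fetch_file(db, fid) == blob  # round-trips intact
```

The row count works out exactly as the quoted message says: 200K divided into 64K chunks gives 3 full rows plus one 8K remainder row.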