List: General Discussion
From: Colin McKinnon
Date: March 19, 1999 8:42am
Subject: Re: Storing large files in the database
At 11:06 18/03/99 -0600, "Fred Lindberg" <lindberg@stripped> wrote:
>[long]
>
>On Thu, 18 Mar 1999 09:55:44 -0600, Ed Carp wrote:
>
>>Can someone explain to me why this will not *always* be slower than storing
>>the actual message in the database?  Most filesystems are not optimized for
<snip>
>1. The file system is optimized for accessing reasonable numbers of
>files in a directory. Storing data in the database involves the
>transfer and parsing + overhead from interactions between the database
>and the file system (to make the db larger, etc). Not much code to
>write. All you need is a hash function. I have a hard time imagining
>that it would be faster to get 2 MB from a 20 GB database directly,
>than to get a file name from a 20 MB database, and then the 2MB file
>from the file system.

Just to throw in my 2 cents worth as this thread goes completely off topic:

It all really boils down to what a *reasonable* number of files is. As I
recall, NFS places an upper limit on the number of files in a directory
(something to do with connectionless handles); since a lot of filesystems are
written with NFS in mind, they too have this limit. Before anyone asks, no, I
can't remember what the limit is.

When I've written applications in the past using BLOBs I've kept them as
files, using a generated name - but also generating a path e.g.
	filename - 0123.ext  ->   root_dir/0/1/2/0123.ext
It's marginally slower under sensible operating systems, but makes a LOT of
difference under MS-DOS (which is very poor at finding files in large directories).
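
The scheme above can be sketched in a few lines of Python (a minimal
illustration; the `blob_path` name and the nesting depth of 3 are my own
assumptions, not from the original post):

```python
import os

def blob_path(root_dir, filename, depth=3):
    """Turn a generated file name into a nested path by splitting its
    leading characters into subdirectories, so that no single directory
    accumulates an unreasonable number of entries.
    e.g. 0123.ext -> root_dir/0/1/2/0123.ext
    """
    stem = os.path.splitext(filename)[0]   # "0123"
    parts = list(stem[:depth])             # ["0", "1", "2"]
    return os.path.join(root_dir, *parts, filename)

print(blob_path("root_dir", "0123.ext"))
```

The database then stores only the generated name; the path is recomputed on
each access, so no extra column is needed for the directory layout.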

Colin
