List: General Discussion
From: Chris Wilson  Date: January 9 2002 9:34am
Subject:Re: 2 GB limit reached
On Tue, 08 Jan 2002 20:03:07 -0500
Dennis <dennis@stripped> wrote:

> At 07:07 PM 01/08/2002, you wrote:
> >Dennis,
> >
> >You may want to look into using InnoDB tables.  I believe InnoDB tables
> >are immune to the 2 GB limit (which usually comes from the filesystem).
> >Also, InnoDB claims that InnoDB tables are faster than MyISAM tables
> >in some cases.  See www.innodb.com or
> >http://www.mysql.com/doc/I/n/InnoDB_overview.html for further detail.
> 
> 
> thanks, but that doesn't tell me how to recover THIS file... the right
> answer is "use a different OS", but that's out of my control here.
> 

You could use a mysqld that has been configured with --with-raid, then do
something along the lines of:

ALTER TABLE bigtable RAID_TYPE=STRIPED RAID_CHUNKS=16
RAID_CHUNKSIZE=524288;

This splits the file into 16 chunks and stripes the data across them. If
all 16 chunks are going to be on the same disk, then I guess you'd want a
very large chunk size (like the 512 MB above) so that your disk heads
aren't continually seeking :)

Also bear in mind that you'll need more than 2 GB free to perform the above
operation, since all it really does is create a new table for you and copy
the data across.
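As a rough pre-flight check for that copy, something like the following
should do (the /var/lib/mysql data directory path is an assumption; your
installation may differ):

```shell
# Hedged sketch: verify free space before the ALTER TABLE copy.
# /var/lib/mysql is an assumed datadir; fall back to / if it's absent.
DATADIR=/var/lib/mysql
[ -d "$DATADIR" ] || DATADIR=/
# Available space, in kilobytes, on the filesystem holding the datadir:
FREE_KB=$(df -k "$DATADIR" | awk 'NR==2 {print $4}')
echo "free: ${FREE_KB} KB"
# You want this to exceed the current size of the table's data file,
# since the rewrite builds a complete copy before swapping it in.
```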

The 2 GB limit is a problem that I'm going to hit fairly shortly - perhaps
someone with a little more knowledge can tell me what the performance will
be like using MySQL's RAID rather than OS large-file support? Also, where
can one find good information about Linux large-file support? On my
Slackware 8, 2.4.17, ext2 test box I can create files larger than 4 GB
using dd, but MySQL failed to create a table greater than that size (not
quite sure why it's 4 GB rather than 2 GB - suggests something's working :).
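For anyone wanting to repeat that dd check, here's a quick sketch (Linux
with GNU dd/stat assumed; seeking past 4 GiB makes the file sparse, so no
real 4 GB of data is written):

```shell
# Write 1 MiB at an offset of 4 GiB; the resulting sparse file has an
# apparent size just past the 4 GiB mark if large files work.
dd if=/dev/zero of=/tmp/large.test bs=1M count=1 seek=4096 2>/dev/null
SIZE=$(stat -c %s /tmp/large.test)
echo "apparent size: $SIZE bytes"   # 4 GiB + 1 MiB = 4296015872
rm -f /tmp/large.test
```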

Regards,

Chris

> >-----Original Message-----
> >From: Dennis [mailto:dennis@stripped]
> >Sent: Tuesday, January 08, 2002 3:31 PM
> >To: mysql@stripped
> >Subject: RE: 2 GB limit reached
> >
> >
> >
> >We have a database that seems to have grown too large, and now any
> >operation fails on it. How can we fix this?
> >
> >Dennis
> 

-- 
Chris Wilson <chris@stripped>
http://www.wapmx.com
