List: General Discussion
From: Sudhir N
Date: November 5 2009 11:04am
Subject: Re: Fw: 50M records each year.. how to handle
Hi Johan

Thanks, this is a really helpful response.

As I said, there will be more read operations and relatively slow
writes. I may use application-level caching of data to reduce the
number of select queries.
I am totally against manual partitioning, as it would create lots of
problems with primary keys, joins, clustering, replication and so on.
It would also make it difficult to migrate to another database or
upgrade MySQL in the future. If we later upgrade MySQL or move to
Oracle, we won't be able to use the built-in partitioning support
without modifying the application code, because the application knows
about and handles the partitions itself.

However, my biggest concern is: will MySQL 4.1 be able to handle such a big table?
Around 50M records each year, so there could be 250M records after five years.

I guess the table structure would be something like this:

- message_folders
-- id
-- user_id (FK)
-- name varchar
-- created_at timestamp

- conversations
-- id
-- folder_id (FK)


- messages
-- id
-- author_id (FK)
-- recipient_id (FK)
-- subject varchar
-- body varchar
-- attachment_name
-- author_deleted boolean
-- recipient_deleted boolean
-- date_sent timestamp

- conversation_messages
-- conversation_id
-- message_id (unique)

Any suggestions on the table structure or anything else are welcome.
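In concrete terms, I imagine DDL something like the following. The column types, sizes, and index choices are just my guesses and are open to suggestion:

```sql
-- Sketch of the schema above; sizes and indexes are assumptions.
CREATE TABLE message_folders (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id    INT UNSIGNED NOT NULL,
    name       VARCHAR(100) NOT NULL,
    created_at TIMESTAMP,
    INDEX (user_id)
) ENGINE=InnoDB;

CREATE TABLE conversations (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    folder_id INT UNSIGNED NOT NULL,
    INDEX (folder_id)
) ENGINE=InnoDB;

CREATE TABLE messages (
    id                INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    author_id         INT UNSIGNED NOT NULL,
    recipient_id      INT UNSIGNED NOT NULL,
    subject           VARCHAR(255),
    body              TEXT,
    attachment_name   VARCHAR(255),
    author_deleted    TINYINT(1) NOT NULL DEFAULT 0,
    recipient_deleted TINYINT(1) NOT NULL DEFAULT 0,
    date_sent         TIMESTAMP,
    INDEX (author_id),
    INDEX (recipient_id),
    INDEX (date_sent)
) ENGINE=InnoDB;

-- Junction table; the UNIQUE key keeps a message in one conversation.
CREATE TABLE conversation_messages (
    conversation_id INT UNSIGNED NOT NULL,
    message_id      INT UNSIGNED NOT NULL,
    PRIMARY KEY (conversation_id, message_id),
    UNIQUE (message_id)
) ENGINE=InnoDB;
```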

Thanks again
SN
Sudhir Nimavat
Senior software engineer
Quick start global PVT LTD.
Baroda - 390007
Gujarat, India

Personally I'm always ready to learn, although I do not always like being taught
  




________________________________
From: Johan De Meersman <vegivamp@stripped>
To: || Sudhir Nimavat || <sudhir_nimavat@stripped>
Cc: Mysql <mysql@stripped>
Sent: Thu, 5 November, 2009 4:14:30 PM
Subject: Re: Fw: 50M records each year.. how to handle

I did some quick math, and it comes down to 1.5 records per second on
average. Even with peak loads, this is doable without the annoying
manual partitioning, I think. Just make sure your storage is optimal;
think RAID 10. If possible, use INSERT DELAYED. If you also do updates
and deletes, use InnoDB; if you only select and insert, use MyISAM, or
if you can live with really slow selects, use the Archive engine.

It depends a lot on your planned use, really, but the above should help.
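For illustration, an INSERT DELAYED is just the ordinary statement with the DELAYED keyword added; the table and column names here are made up:

```sql
-- The client returns immediately; the server queues the row in memory
-- and writes it when the table is not in use by other threads.
-- Queued rows are lost if the server crashes before they are flushed,
-- so only use this for data you can afford to lose. Works for MyISAM
-- tables, not InnoDB.
INSERT DELAYED INTO messages (author_id, recipient_id, subject, body, date_sent)
VALUES (42, 99, 'Hello', 'Message body here', NOW());
```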

On 11/5/09, || Sudhir Nimavat || <sudhir_nimavat@stripped> wrote:
> Ok,
>
> Now the problem is that we are currently using MySQL 4.1, and it is not
> possible to upgrade at this time. As far as I know, MySQL 4.1 does not
> support partitioning, so the only solution would be to use manual
> partitioning and handle it through code.
>
> Can anyone tell me how this requirement might be satisfied with MySQL
> 4.1?
>
> Any ideas, inputs ?
>
> Thanks
> SN
>
>
> ----- Forwarded Message ----
> From: Rob <admin@stripped>
> To: || Sudhir Nimavat || <sudhir_nimavat@stripped>
> Sent: Mon, 2 November, 2009 3:18:05 PM
> Subject: Re: 50M records each year.. how to handle
>
> Hi,
>
> Yes, pretty much, depending on the type of access to the records, of
> course. But 50 million sounds like historical, semi-static data. It
> will function pretty much like a sort of RAID: more storage devices to
> seek through all the data.
>
> (See your other mail.) Partitioning support is fine; see the example there:
>
> CREATE TABLE ti (id INT, amount DECIMAL(7,2), tr_date DATE)
>     ENGINE=INNODB
>     PARTITION BY HASH( MONTH(tr_date) )
>     PARTITIONS 6;
>
>
> The partitioning takes place based on the tr_date column, by month,
> and chops the table up into 6 partitions.
>
> There are some quirks with partitioning; for instance, it does not yet
> automatically add new partitions over time, which I think is strange,
> and something that you would want (especially for time-based data). But
> if performance is a big issue at the moment, then partitioning is worth
> the effort.
>
> Kind Regards
>
>
> On 11/02/2009 10:35 AM, || Sudhir Nimavat || wrote:
>> Thanks Rob for quick response..
>>
>> Will that give decent performance?
>>
>> Note: there will be more read operations than writes. The table
>> is very simple, with only a few columns.
>>
>>
>> ------------------------------------------------------------------------
>> From: Rob <admin@stripped>
>> To: || Sudhir Nimavat || <sudhir_nimavat@stripped>
>> Sent: Mon, 2 November, 2009 3:01:39 PM
>> Subject: Re: 50M records each year.. how to handle
>>
>> On 11/02/2009 10:27 AM, || Sudhir Nimavat || wrote:
>> > I have come across a requirement where I need to store a very large
>> > amount of data in a table. In one of our apps we can have around 50
>> > million records each year. Can anyone guide me in choosing a strategy
>> > that can handle this load?
>> >
>> >
>> >
>> >
>> > Thanks
>> > SN
>> >
>> Consider vertical partitioning with separate storage devices.
>>
>
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql?unsub=1


Thread
  • Fw: 50M records each year.. how to handle (|| Sudhir Nimavat ||, 5 Nov)
    • Re: Fw: 50M records each year.. how to handle (Johan De Meersman, 5 Nov)
      • Re: Fw: 50M records each year.. how to handle (Sudhir N, 5 Nov)
        • Re: Fw: 50M records each year.. how to handle (Jay Ess, 5 Nov)
          • Re: Fw: 50M records each year.. how to handle (sudhir543-nimavat, 5 Nov)
            • Re: Fw: 50M records each year.. how to handle (Joerg Bruehe, 5 Nov)
              • Re: Fw: 50M records each year.. how to handle (sudhir543-nimavat, 5 Nov)