Thanks for your reply.
I will try to explain what I did before, based on your ideas:
I use XFS as the file system to maximize I/O performance on disk.
Disks are hardware-RAIDed with BBWC:
2x 600GB (RAID 1) 15K
** operating system
** relay logs
6x 300GB (RAID 10) 15K
** MySQL data (ibdata and table data)
I enabled innodb_file_per_table, partitioned the large tables, and set:
innodb_flush_log_at_trx_commit = 0 ( so the log is not flushed per transaction )
innodb_support_xa = 0 ( so the file system and log files are not synced per transaction )
innodb_flush_method = O_DSYNC ( to improve disk utilization )
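Put together, the my.cnf fragment would look roughly like this (a sketch, not my exact file; note that `innodb_support_xa` is the actual variable name, and that O_DIRECT is the more common flush method on hardware RAID with BBWC):

```ini
[mysqld]
innodb_file_per_table          = 1
innodb_flush_log_at_trx_commit = 0       # flush the log ~once per second, not per commit
innodb_support_xa              = 0       # skip the extra fsync used for XA/binlog consistency
innodb_flush_method            = O_DSYNC # O_DIRECT is often preferred with a BBWC controller
large-pages                              # back the buffer pool with explicit huge pages
innodb_buffer_pool_size        = 100G
```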
and enabled large-pages (huge pages) support:
cat /proc/meminfo | grep -i huge
AnonHugePages: 6326272 kB
Hugepagesize: 2048 kB
and set the InnoDB buffer pool to 100GB (which uses the huge pages in
memory shown above).
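One caveat worth checking: AnonHugePages in /proc/meminfo counts *transparent* huge pages that the kernel allocates on its own, while MySQL's large-pages option needs *explicit* huge pages (HugePages_Total/HugePages_Free). A rough sketch of sizing the explicit reservation for a 100GB pool (the headroom value is my assumption):

```shell
# Size an explicit huge-page reservation for a 100 GB buffer pool.
BUFFER_POOL_MB=102400           # 100 GB buffer pool
HUGEPAGE_SIZE_MB=2              # from "Hugepagesize: 2048 kB"
PAGES=$(( BUFFER_POOL_MB / HUGEPAGE_SIZE_MB + 1024 ))  # plus some headroom
echo "$PAGES"

# To actually reserve them (as root), something like:
#   sysctl -w vm.nr_hugepages="$PAGES"
#   grep -i HugePages_Total /proc/meminfo   # verify the reservation took effect
```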
Right now, when I start the replication, disk utilization is fine, but
just one of the 24 cores is under load, at about 70% :-S and I don't
know why it does not reach 100% (this core seems to be used by the SQL
thread). Since I have just one DB being replicated (about 800GB in total),
the new multi-thread capability does not change anything for me.
Regarding what you explained:
Plan A is done
Plan B is done
Plan C is in progress
** but I still haven't split the tables into separate DBs ( software team
challenges :-) )
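Once the tables are split into separate databases, enabling 5.6's multi-threaded slave should be just a config change, roughly (a sketch; in 5.6 the parallelism is per-database, so it only helps after the split):

```ini
[mysqld]
slave_parallel_workers = 8   # SQL thread hands events to up to 8 workers, one DB each
```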
Plan D is what I was reading about yesterday and would like to test.
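For Plan D, the script Rick mentions isn't named here, but the core idea can be sketched in a few lines of Python: take statements from the relay log (e.g. via mysqlbinlog output) and rewrite UPDATE/DELETE into SELECTs that touch the same rows, so a separate thread can warm the buffer pool ahead of the SQL thread. The function name and the simple regexes are my own illustration, not the real script:

```python
import re

def to_prefetch_select(statement):
    """Rewrite an UPDATE or DELETE into a SELECT over the same rows,
    so running it primes the buffer pool ahead of the SQL thread.
    Returns None for statements that are not worth prefetching."""
    m = re.match(r"UPDATE\s+(\S+)\s+SET\s+.+?\s+(WHERE\s+.+)$",
                 statement, re.IGNORECASE | re.DOTALL)
    if m:
        return "SELECT 1 FROM {} {}".format(m.group(1), m.group(2))
    m = re.match(r"DELETE\s+FROM\s+(\S+)\s+(WHERE\s+.+)$",
                 statement, re.IGNORECASE | re.DOTALL)
    if m:
        return "SELECT 1 FROM {} {}".format(m.group(1), m.group(2))
    return None  # INSERTs and DDL read no existing rows, so skip them
```

For example, `to_prefetch_select("UPDATE accounts SET balance = 0 WHERE id = 7")` yields `"SELECT 1 FROM accounts WHERE id = 7"`.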
On Tue, Jan 8, 2013 at 1:45 AM, Rick James <rjames@stripped> wrote:
> Plan A: Minimize I/O on Slaves:
> innodb_flush_log_at_trx_commit = 2
> sync_binlog = 0
> innodb_doublewrite = OFF
> (If a slave crashes, it may be best to reclone it rather than hope that
> these performance settings did not lose data.)
> Plan B: Faster Hardware on Slaves.
> Plan C: Pave the way for 5.6:
> Move tables to different databases.
> Change the code to reference the tables thus: dbname.tblname
> Plan D: If the Slaves are I/O-bound:
> Seems like there is a script (in Python?) that peeks in the relay log,
> turns queries into SELECTs, does the SELECTs -- thereby priming the
> buffer_pool using a separate thread.
> > -----Original Message-----
> > From: shayne.alone@stripped [mailto:shayne.alone@stripped]
> > Sent: Monday, January 07, 2013 12:29 PM
> > To: replication@stripped
> > Subject: Replicating on multiple Slaves DB from one Master
> > Dears;
> > I have been faced with a case of replication which most of you may
> > have faced before...
> > I did some checks and tests to find ways to solve it, but I'm not
> > quite sure about the pros and cons.
> > The problem is as follows:
> > Master:
> > A single MySQL server with about ~30K QPS, mainly working as an AAA
> > data back end.
> > Not all, but a lot of these queries are writes (INSERT/UPDATE).
> > The matter is that the slave executes transactions with
> > one thread! This leads to a lot of replication delay...
> > I'm looking for a way to rewrite the DB used by queries depending on
> > the table name, not just rename whole statements.
> > In such a way, I hope to be able to use the multi-threaded replication
> > ability of MySQL 5.6 for a single DB with multiple independent tables.
> > --
> > Regards,
> > Ali R. Taleghani <a.taleghani@stripped>
Ali R. Taleghani