At 6:33 PM +0000 12/2/99, miso@stripped wrote:
>On Thu, 2 Dec 1999, Tonu Samuel wrote:
>> miso@stripped wrote:
>> > I have a rather small table with 4,000 rows and some 20
>> > columns. The table is being updated heavily by one thread
>> > all the time using UPDATE command. There's like 30 threads
>> > that run SELECT queries on the table from web based
>> > applications. Since there's so many SELECTs on the table the
>> > UPDATEs are backing up. In other words those updates take
>> > ages to execute.
>> 1. Probably your UPDATE clause has some "WHERE" in it? Check the
>> indexes and the optimization of the UPDATE.
>> 2. Check out the SELECT HIGH_PRIORITY, UPDATE LOW_PRIORITY, and
>> INSERT DELAYED keywords.
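Suggestion 2 would look roughly like this in practice; the table and column names here are made up for illustration:

```sql
-- LOW_PRIORITY: this UPDATE waits until no SELECTs want the table.
UPDATE LOW_PRIORITY counters SET hits = hits + 1 WHERE id = 42;

-- HIGH_PRIORITY: this SELECT is served ahead of queued write locks.
SELECT HIGH_PRIORITY hits FROM counters WHERE id = 42;

-- DELAYED: the server queues the row and returns to the client
-- immediately, writing the row when the table is free.
INSERT DELAYED INTO counters (id, hits) VALUES (43, 0);
```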
>All queries are super optimized. I don't think you can
>get anything better than this EXPLAIN output:
> table: BAR
> type: const
> key: PRIMARY
> key_len: 4
> ref: BAZ
> rows: 1
>It's just the huge number of SELECTs on the table that is
>causing the trouble, because they all compete for the locks
>with the UPDATEs.
>Would it be possible to run 2 mysql daemons on the same
>physical files? One would be used for updating, the other
>for selecting only, so that the UPDATEs would run
>independently of the SELECTs and not compete for the same
>locks. Occasionally I have to run some DELETE and INSERT
>statements - would this cause any trouble in the 2-daemon
>setup?
Yes, it would. The two servers must cooperate anyway,
or you'll get garbage back from your SELECTs while the
tables are being updated. The difference is that the
servers must cooperate using external locking, and that
would be a lot slower than having a single server that
manages its own locks.
Your desire not to compete for locks runs afoul of the
fact that locks are exactly what keep a SELECT from
reading the table in the middle of an update.
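To make the two-server scenario concrete, here is a sketch of what it would mean; the data directory, ports, and socket paths are illustrative assumptions, and this is shown to explain the problem, not as a recommendation:

```shell
# Hypothetical two-server setup sharing one data directory.
# Both servers touch the same table files on disk, so they would
# have to cooperate via external (file-system level) locking.
mysqld --datadir=/var/lib/mysql --port=3306 --socket=/tmp/mysql1.sock &
mysqld --datadir=/var/lib/mysql --port=3307 --socket=/tmp/mysql2.sock &
# Without that cooperation, server 2 can read a table file while
# server 1 is in the middle of rewriting it - garbage results.
```

Acquiring and releasing a file-system lock on every access is far more expensive than the in-memory table locks a single server uses, which is why this setup loses rather than gains performance.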
Paul DuBois, paul@stripped