On Thu, 2011-01-13 at 13:51 -0700, Steve Meyers wrote:
> On 1/13/11 1:21 PM, Steve Staples wrote:
> > The table type is MyISAM; it is a "customer_account" table, which holds
> > the email address and the customer_id field. The query that is
> > constantly being run is "select customer_id from customer_account
> > where customer_email = '<email address>' and `status`='1';"
> Do you have a key on customer_email? If not, create one ASAP. What
> kind of writes happen on this table?
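A minimal sketch of adding such a key, assuming the table and column names from the query above; a composite index on (customer_email, status) would let MySQL satisfy both the WHERE conditions from the index:

```sql
-- Assumes the customer_account table and column names quoted above.
-- The index name idx_email_status is illustrative.
ALTER TABLE customer_account
    ADD INDEX idx_email_status (customer_email, `status`);
```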
> With MyISAM, serial-like behavior can occur under certain circumstances.
> All queries must obtain either a read lock or a write lock. A table
> can have multiple read locks simultaneously, but a write lock must have
> exclusive access. Write locks have higher priority than read locks.
> What this means in practice is that if a query requests a write lock, it
> is placed at the front of the queue. Any currently-running queries are
> allowed to finish, but any new lock requests are queued. If the
> currently-running queries take very long to complete (and "very long"
> could be fractions of a second, depending on the application), then lots
> of queries will get queued up behind the write lock that is pending.
> All new read requests would need to wait for the write lock queue to
> clear out before they can get back to reading from the table again.
> Usually, this results in inconsistent performance under high load. One
> solution is to switch to InnoDB, since it does row-level locking. This
> is not a perfect solution for every situation, and you should do some
> research before doing this. It's probably the best solution in the long
> term for you, but I can't guarantee that.
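If you do decide to try InnoDB, the conversion itself is a one-line ALTER, though it rebuilds the table and should be tested on a copy first:

```sql
-- Rebuilds the table with row-level locking; run on a copy/backup first.
ALTER TABLE customer_account ENGINE=InnoDB;
```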
> Another possibility would be to queue writes in a separate table (or
> memcache, or something like that). Then do the writes in batches.
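One hedged sketch of that batching idea, with a purely hypothetical staging table (all names are illustrative, not from the original thread):

```sql
-- Hypothetical queue table for pending status changes.
CREATE TABLE customer_account_queue (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    customer_id INT NOT NULL,
    new_status  TINYINT NOT NULL
);

-- Periodically flush queued changes in a single batch,
-- holding the write lock once instead of per-update:
UPDATE customer_account ca
JOIN customer_account_queue q ON q.customer_id = ca.customer_id
SET ca.`status` = q.new_status;
TRUNCATE TABLE customer_account_queue;
```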
> Steve Meyers
The only writes that happen are when a customer has their "status"
changed, updates their password, or changes their name, OR when a new
customer is created. I would say a new customer gets created (insert
into) about 30 times per day, a customer gets updated (update) probably
about 20 times per day, and customers get deleted or suspended about 20
times per day (these are rough guesses). Customer searches happen about
200-300 times per day, and then the other part that does all the
searching runs probably 100-200 times per minute (or so). I am really
guessing here, but I am probably pretty close.
The query that runs the most is:
mysql> EXPLAIN SELECT customer_id FROM customer_account WHERE
`full_login`='<email address>' ORDER BY `status` DESC LIMIT 1 \G
*************************** 1. row ***************************
Extra: Using where
1 row in set (0.00 sec)
NewIndex5 is 'full_login', 'status'
NewIndex4 is 'full_login' (fulltext)
Maybe my indexes are out of line here... maybe I just set them up in
haste, since the two indexes are kind of the same.
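If NewIndex5 really is a composite index on (full_login, status), it should already cover both the WHERE equality and the ORDER BY in that query, and the FULLTEXT NewIndex4 would not be used for an equality lookup anyway. A hedged cleanup (verify with EXPLAIN before and after, since other queries may rely on the FULLTEXT index) might be:

```sql
-- Assumes NewIndex4 is the FULLTEXT index on full_login and that no
-- other query performs MATCH ... AGAINST on that column.
ALTER TABLE customer_account
    DROP INDEX NewIndex4;
```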
I guess maybe it is just the sheer number of queries per second/minute
that is the issue?