In the last episode (Nov 25), John Kopanas said:
> Sorry about these questions. I am used to working with DBs with less
> than 10K rows and now I am working with tables with over 500K rows,
> which changes a lot for me. I was hoping I could get some
> people's advice.
> I have a 'companies' table with over 500K rows and a
> 'purchased_services' table with over 650K rows.
> The following query takes over 6 seconds:
> SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)
How many rows does it return, and what's its explain plan?
> purchased_services.company_id has an index on it.
> The following query takes over 3 seconds:
> SELECT count(id) FROM companies;
An explain plan here would be useful too.
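For reference, the plans for both queries can be gathered like this (table and column names taken from your queries above); the `key` and `rows` columns will show whether the index on company_id is actually being used and how many rows MySQL expects to examine:

    EXPLAIN SELECT * FROM purchased_services
     WHERE purchased_services.company_id = 535263;

    EXPLAIN SELECT count(id) FROM companies;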
> To me the time it takes to run these queries makes no sense. I would
> imagine both of these queries would take a fraction of a second.
> When running some of these queries and looking at 'SHOW processlist'
> I was getting a lot of 'Writing to tmp'.
That doesn't make much sense, since both queries should simply be doing
index scans (or a full table scan, depending on how many rows are
expected to match in the first query), but the explain plans will show
what's actually happening.
> My DB engine is InnoDB. I am running this on my laptop that is a
> PowerBook 867 with 756 MB of RAM.
What's your innodb_buffer_pool_size set to?
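You can check the current value with:

    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

The default is quite small, and if the InnoDB data for those two tables doesn't fit in the buffer pool, every query hits disk. Something like the following in my.cnf would be a reasonable starting point on a 756 MB machine (256M is just an illustrative value, not a recommendation for your workload):

    [mysqld]
    innodb_buffer_pool_size = 256M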