James Rogers wrote:
> At 03:15 PM 7/6/99 +1000, you wrote:
> >Before I spend more time on trying to get query times of <2 seconds on
> >tables with between 500,000 and 1,250,000 rows, I thought I'd ask if
> >anyone has done any web-based statistical pages which are driven by
> >mysql databases with tables this size.
> >Is it feasible to try and directly query tables this size from a web
> >application to return graphs, etc based on the returned recordsets
> >provided my mysql server is configured correctly and the tables are
> >constructed correctly, or would it be better to create scheduled
> >queries and dump the summarised data into smaller tables for querying by
> >a web-based application? Obviously I'm trying to avoid waits of longer
> >than around 15 seconds for a stats page.
> If your cache is good, you should (in theory) be able to run your query in
> acceptable time on a table this size. However, there are a lot of "if"s
> here. If you have a lot of cache misses, or you have a high level of
> continuous writes to the table in question, or if you have a relatively
> processor intensive grouping function, you'll need to create batched
> pre-calculation tables to get anything resembling acceptable performance.
> Of course, hardware plays a big role here too. A lot of these types of
> queries run as indexed table scans and exercise your I/O pretty hard. The
> performance can vary by a few orders of magnitude, depending on the
> specific parameters of your situation (as described above). I've used
> batch-mode precalc tables for grouping web queries on really large tables
> (e.g. 70 million rows) and gotten queries to execute in a couple seconds
> that normally took minutes. However, preprocessing a table in batch mode
> can be very resource intensive; in my 70 million row example, the fastest
> possible batch code I could produce took about two hours to run (the first
> attempt took *days* to run), and this was on really fast hardware.
> -James Rogers
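[The batched pre-calculation approach James describes can be sketched roughly as follows. This uses Python's sqlite3 module as a stand-in for MySQL, and the table and column names (`hits`, `hits_daily`, `page`, `bytes`) are invented for illustration, not taken from his actual system:]

```python
import sqlite3

# Sketch of a batched pre-calculation ("precalc") table, using SQLite
# as a stand-in for MySQL.  A cron job would periodically rebuild the
# small summary table; the web pages then query the summary instead of
# scanning the big table with a GROUP BY on every request.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hits (page TEXT, hit_date TEXT, bytes INTEGER);
    INSERT INTO hits VALUES
        ('/index.html', '1999-07-01', 1200),
        ('/index.html', '1999-07-01', 900),
        ('/stats.html', '1999-07-02', 3100);

    -- The expensive grouping scan runs once per batch, not per page view.
    CREATE TABLE hits_daily AS
        SELECT hit_date, page,
               COUNT(*)   AS hit_count,
               SUM(bytes) AS total_bytes
        FROM hits
        GROUP BY hit_date, page;
""")

# The web application now issues a cheap indexed lookup.
row = conn.execute(
    "SELECT hit_count, total_bytes FROM hits_daily "
    "WHERE hit_date = '1999-07-01' AND page = '/index.html'"
).fetchone()
print(row)  # (2, 2100)
```

[The trade-off is exactly the one James notes: the summary query is fast, but the batch rebuild itself can be very expensive on large tables.]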
This raises an interesting point, guys... We have just started running a
system that uses MySQL as the backend. It now runs on a Sun with a
333MHz CPU, 256MB RAM and plenty of disk space; previously it ran on a
PII-400 Linux box.
Now I am interested in how well the system will scale. Currently we have
about 900 'tickets', plus various other tables (status codes and so on).
(Btw, is there a way of finding out the total number of rows in the
whole database?)
What sort of row counts can MySQL handle comfortably? The figure of 70
million was mentioned earlier; was that with MySQL?
Thanks in advance...
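[On the row-count question: as far as I know MySQL has no single "total rows in the database" command, but you can list the tables (`SHOW TABLES` in MySQL) and sum `COUNT(*)` over them. A minimal sketch of that loop, again using Python's sqlite3 module with SQLite's catalog table standing in for `SHOW TABLES`, and an invented two-table schema:]

```python
import sqlite3

# Count the total rows across every table in a database by looping
# over the table list and summing COUNT(*).  In MySQL you would get
# the table names from SHOW TABLES; here SQLite's sqlite_master
# catalog plays that role.  The schema below is made up for the demo.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tickets (id INTEGER);
    CREATE TABLE status_codes (code INTEGER);
    INSERT INTO tickets VALUES (1), (2), (3);
    INSERT INTO status_codes VALUES (200);
""")

total = 0
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
for (name,) in tables:
    (count,) = conn.execute("SELECT COUNT(*) FROM %s" % name).fetchone()
    total += count
print(total)  # 4
```

[Note that on large MyISAM tables `COUNT(*)` without a WHERE clause is cheap, since the row count is stored in the table header, so this loop stays fast even as tables grow.]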
Matt Duggan, Internet Network Services Ltd.
Tel: +44 (0)1203 723 030, Fax: +44 (0)1203 723 049