It also depends on your data access pattern.
If your selects can take advantage of clustering by primary key,
then InnoDB could do it for you.
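For example, a rough sketch of what I mean (hypothetical table; the point is that InnoDB stores rows in primary-key order, so a PK range scan reads adjacent pages):

```
-- Hypothetical schema: InnoDB clusters rows by the primary key,
-- so choosing (event_time, id) as the PK makes time-range scans
-- read physically contiguous pages.
CREATE TABLE events (
  event_time DATETIME NOT NULL,
  id INT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (event_time, id)
) ENGINE=InnoDB;

-- This range select benefits from the clustered index:
SELECT * FROM events
WHERE event_time BETWEEN '2010-01-01' AND '2010-01-31';
```

With MyISAM the same query would bounce between the index and the data file, since MyISAM indexes always point at separately stored rows.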
My suggestion would be to write some queries based on projected
workload, build 2 tables with lots and lots of data, and do some
isolated testing. For work, I do a lot of query profiling using
maatkit. Be sure to clear out as much of the caching as possible,
including the OS page cache, so timings reflect cold reads.
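A sketch of the MySQL-side resets I run between timed queries (the OS page cache is outside SQL; on Linux, dropping it takes root, noted in the comment):

```
-- Reset server-side caches between profiling runs so repeated
-- queries don't just replay cached results:
RESET QUERY CACHE;   -- empty the query cache
FLUSH TABLES;        -- close open tables and release table caches
-- The OS page cache still skews results; on Linux, as root:
--   sync && echo 3 > /proc/sys/vm/drop_caches
```

Without this step the second run of any query looks artificially fast and the comparison between engines is meaningless.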
BTW, I've never had much luck storing large documents in MySQL. If
you can compromise on data integrity, consider storing them on the
filesystem and keeping only metadata in the database.
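A hypothetical layout for that approach; the integrity compromise is that the file and the row can drift apart without the database noticing, which is why I keep a checksum:

```
-- Hypothetical: large docs live on disk; the table holds only
-- metadata. The file can change or vanish without the row
-- knowing -- that is the integrity trade-off.
CREATE TABLE documents (
  id INT NOT NULL AUTO_INCREMENT,
  file_path VARCHAR(512) NOT NULL,  -- location on the filesystem
  sha1 CHAR(40) NOT NULL,          -- checksum to detect drift
  PRIMARY KEY (id)
) ENGINE=InnoDB;
```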
On Fri, Apr 2, 2010 at 5:50 PM, Mitchell Maltenfort <mmalten@stripped> wrote:
>>> You want the crash safety and data integrity that come with InnoDB,
>>> more so as your dataset grows. Its performance is far better than
>>> MyISAM's for most OLTP users, and as your number of concurrent
>>> readers and writers grows, the improvement in performance from
>>> using InnoDB over MyISAM becomes more pronounced.
>> His scenario is "perhaps updated once a year", though, so crash
>> recovery and multiple-writer performance are not important.
> And the concurrent reader and writer number is set at one, unless I
> undergo mitosis or something.
> MySQL General Mailing List