I think you'll need to do a lot of testing yourself to find the
right answer to that. The number of disks, the type of disks, and the RAID
configuration will have the biggest effect on performance.
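As a starting point for that kind of testing, here is a rough sequential-write
sketch using plain dd (assumes a Linux box; TESTDIR is a placeholder you'd
point at a filesystem on the array under test, and the sizes are arbitrary):

```shell
# Crude sequential-write throughput check.
# conv=fdatasync forces the data to disk before dd reports a rate,
# so you measure the array rather than the page cache.
TESTDIR=${TESTDIR:-/tmp}        # assumed scratch location; change to the array's mount point
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=256 conv=fdatasync
rm -f "$TESTDIR/ddtest"
```

For anything serious you'd want a real benchmark tool that can mix reads,
writes, and concurrency the way your database does, but even this will show
gross differences between arrays.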
Personally, we saw roughly a 15% performance increase from ditching
our EMC CLARiiON and going with external SCSI arrays for each server. For
us it was cost-driven: eight SCSI disks (in a RAID 10 configuration) plus the
arrays were cheaper than four Fibre Channel disks. The extra heads and lower
latency made a noticeable difference; our database has a very high write rate.
On Thu, 2007-05-31 at 09:15 -0400, B. Keith Murphy wrote:
> So here is the brief situation. We have a coraid (www.coraid.com) SAN
> unit - the 1520 I believe. It is ATA-over-ethernet.
> Right now we have about 500 GB of data spread across five servers.
> To simplify things I would like to implement the coraid on the backend
> of these servers. Then all the data is served up out of the same
> place. Of course I would like to improve I/O throughput also.
> Googling shows that these units have good read speeds, but the write
> speeds don't seem to be that impressive.
> Does anyone have any experience with these? Good? Bad? Maybe other SAN
> suggestions? Am I barking up the wrong tree?