[olug] SATA Drives

Rob Townley rob.townley at gmail.com
Fri Aug 10 21:57:55 UTC 2007


On 8/9/07, Carl Lundstedt <clundst at unlserve.unl.edu> wrote:
>
> Curtis LaMasters wrote:
> > Some of the guys in my company complain that SATA drives are worthless
> > and should NEVER be installed in a server platform.  I just wanted to
> > know your take on the situation.  My personal belief is to match the
> > server's specs to its requirements.  Enough said.
> >
> >
> We currently have about 120 TB of disk (around 300 drives) on the floor,
> all SATA-based.  Some are in SCSI enclosures, some are in 3U storage
> servers with 12-drive 3ware cards, a few are in a fiber enclosure and a
> small minority are located in 1U servers.  We lose around a drive a
> week.  The majority of our usage is large-file reads and writes to SATA
> RAID arrays, but we also have SATA RAID arrays running on our 3U,
> 3ware-based NFS servers (which get hammered by all kinds of usage) and
> SATA drives standing alone in our database servers.
>
> I'll regret saying this, I'm sure, but I don't recall having lost a SATA
> disk out of our 1U database servers, and our NFS arrays seem to have far
> fewer failures than our large-file storage servers.  This may just be an
> illusion of scale, since we have far more storage servers than NFS
> servers.
>
> We only use the RAID-edition drives; we do not accept bids with
> standard desktop drives (although, I guess, we have a couple scattered
> through the cluster for logs and archives).
>
> For cost per GB, I have no idea why you wouldn't go with SATA in a
> RAID'ed system.
>
> Carl Lundstedt
> UNL

Carl, I am glad to hear someone is successfully rebuilding a SATA drive
each week.
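
By Carl's numbers, that works out to roughly a 17% annualized failure
rate.  A quick back-of-envelope (the drive count and failure rate come
from his post; the rest is arithmetic):

    # Rough annualized failure rate from the numbers above:
    # ~300 drives on the floor, ~1 failure per week.
    drives = 300
    failures_per_week = 1
    weeks_per_year = 52

    afr = failures_per_week * weeks_per_year / drives
    print(f"annualized failure rate ~= {afr:.1%}")   # ~17.3%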

From someone who has had 5 SCSI / SCA drives die in the last year, I am
not sure why they have such a good name.  Hot-pluggable SCA with hardware
RAID made them very easy to rebuild, but why pay $800.00 per drive when
they crash so often?  They often have more restrictive environmental
specs, so that may be part of the problem.
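
Putting Carl's cost-per-GB point in rough numbers (the $800 figure is
what we paid per SCSI drive; the SATA price and both capacities below
are illustrative guesses, not quotes):

    # Rough cost-per-GB comparison.  The $800 SCSI price is what we paid;
    # the SATA price and both capacities are illustrative guesses.
    scsi_price, scsi_gb = 800.00, 146    # e.g. a 146 GB SCA drive
    sata_price, sata_gb = 250.00, 500    # hypothetical 500 GB RAID-edition SATA

    print(f"SCSI: ${scsi_price / scsi_gb:.2f}/GB")   # ~$5.48/GB
    print(f"SATA: ${sata_price / sata_gb:.2f}/GB")   # ~$0.50/GB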

However, I was reluctant to switch over to an all-SATA infrastructure
because there is so much that has to be just right for RAID to do its job
well - hardware RAID cards, firmware revisions at all levels, drivers that
come on the install CD, application software, and just plain
documentation.  I was concerned that SATA might not be as thoroughly
tested.  There must be SATA drives that do not meet SAS specifications,
but how do you know until you find that your array will not rebuild?
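
That worry can be put in rough numbers.  Desktop-class SATA drives are
commonly specced at one unrecoverable read error (URE) per 1e14 bits,
and a rebuild has to read every surviving disk end to end.  A sketch,
with the URE spec and the array geometry both assumed for illustration:

    # Rough odds that a RAID 5 rebuild completes without hitting an
    # unrecoverable read error (URE), assuming the commonly quoted
    # desktop SATA spec of 1 URE per 1e14 bits.  Geometry is illustrative.
    ure_per_bit = 1 / 1e14
    drive_bytes = 400e9          # 400 GB drives
    surviving_drives = 7         # 8-drive RAID 5 with one failed disk

    bits_read = surviving_drives * drive_bytes * 8
    p_clean = (1 - ure_per_bit) ** bits_read
    print(f"chance of a clean rebuild: {p_clean:.1%}")   # ~80%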

So thanks for letting us know about real-world, heavily tested SATA RAID
experience.

Has anyone tested SATA drives in a RAID 10/TEN (not 1+0) configuration?
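
For anyone who has not run into the distinction: Linux md's raid10 is
its own layout rather than a mirror of stripes, so it even works with an
odd number of drives.  A sketch of the default near-2 layout as I
understand it (illustrative, not tied to any particular kernel version):

    # Sketch of Linux md raid10 "near-2" layout as I understand it: each
    # chunk is stored twice on adjacent devices, so an odd drive count
    # works (plain nested 1+0 cannot do that).
    def near2_layout(num_devices, rows):
        for row in range(rows):
            chunks = [(row * num_devices + dev) // 2
                      for dev in range(num_devices)]
            print("  ".join(f"C{c}" for c in chunks))

    near2_layout(3, 4)    # 3 drives: impossible as nested 1+0
    # C0  C0  C1
    # C1  C2  C2
    # C3  C3  C4
    # C4  C5  C5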


