[olug] SAN Information

Rogers, John C NWD02 John.C.Rogers at nwd02.usace.army.mil
Fri Sep 27 21:51:13 UTC 2002


Bill is right for the most part; the only processing we have is in the RAID
controllers and the FC switch.  The controller has its own language and CLI,
and we can manage it over RS232, the network, or FC/AL.  Some of the other
controller vendors have GUIs; ours is CLI only.  If I loaded Veritas volume
management on the Sun it would see the LUNs, and I could then create logical
volumes on them and do the other stuff Bill describes.  We chose not to
purchase the Veritas solution, so to our Sun it is a big disk that happens
to be RAID.  To the NT servers it is the same, just a disk.  The WWN or LUN
masking keeps the different OSes from seeing each other's disks on the FC/AL
and trying to mount them.  It is nice to consolidate storage on an FC
network to make it fast and redundant, that is, unless your SAN goes down.
That is another topic; check out redundant SANs across geographic areas
(SANs on WANs).  To change RAID levels or block sizes on our unit you must
reinitialize the controller via the CLI, and the vendor warns you that data
may be lost.  My guess is that inside Bill's box is a controller from one of
the manufacturers I mentioned before.  There are a finite number of
controller manufacturers that sell their products to integrators for their
systems.
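
For anyone who has not run into LUN masking before, here is a rough sketch
of the idea in Python.  The WWNs and LUN numbers are invented for
illustration; a real controller keeps this table in its own firmware, but
the principle is just a lookup that filters what each initiator is allowed
to discover on the loop:

    # Toy model of LUN masking: the controller filters which LUNs each
    # initiator (host WWN) is allowed to see.  The WWNs and LUN numbers
    # here are made up for illustration only.
    MASKING_TABLE = {
        "10:00:00:00:c9:2b:aa:01": {0, 1},   # the Sun host sees LUNs 0 and 1
        "10:00:00:00:c9:2b:bb:02": {2},      # an NT host sees only LUN 2
    }

    ALL_LUNS = {0, 1, 2, 3}

    def report_luns(initiator_wwn):
        """Return only the LUNs this initiator is allowed to discover."""
        return ALL_LUNS & MASKING_TABLE.get(initiator_wwn, set())

    for wwn in MASKING_TABLE:
        print(wwn, "->", sorted(report_luns(wwn)))

Each host only ever learns about its own LUNs, so it never tries to mount
the other OS's disks.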

I always thought of JBOD as having no RAID smarts (like a Sun external disk
pack), and ours is definitely not that.  DigiData claims they get the most
raw performance of any RAID, and I know that under RAID 3, with GIS-type
data in large files (typically 100MB or more), we can move data at close to
100MB/s, which is about the maximum capacity of 1Gb FC/AL.  They just came
out with a 2Gb FC model, and with the new 320MB/s drives I bet you can
really copy stuff fast.  The Sun T3 is very close in design to Bill's
Magnitude from what it sounds like.  I know Sun is using a Veritas
implementation on QLogic chipsets.
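
If anyone wants the back-of-the-envelope math on why ~100MB/s is about the
ceiling for 1Gb FC/AL, here it is in a few lines of Python (assuming the
usual 1.0625 Gbaud line rate and 8b/10b encoding):

    # Rough throughput ceiling for 1Gb Fibre Channel: 1.0625 Gbaud on the
    # wire, and 8b/10b encoding leaves 8 data bits out of every 10 line bits.
    line_rate_baud = 1.0625e9
    payload_bits_per_sec = line_rate_baud * 8 / 10
    print(payload_bits_per_sec / 8 / 1e6, "MB/s")  # ~106 MB/s before protocol overhead

So our 100MB/s number really is bumping up against the wire.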

Changing RAID levels on the fly is a very cool feature.  What I am wondering
is whether it really changes the physical storage blocks on disk, or only
changes the blocking factors in the controller OS settings.  Most of the big
smart SANs I have seen and read about really implement RAID in software and
do not ever change the data on the physical disks.  The algorithms used to
place the blocks on the metal are proprietary and very specialized to their
requirements and SAN OS specifics.
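
To make the question concrete, here is a toy Python sketch of the two
possibilities.  This is pure guesswork on my part, not a description of how
the Magnitude or anyone else actually does it:

    # Hypothetical illustration: does an on-the-fly RAID change rewrite the
    # data, or only the controller's metadata?  No real controller works
    # exactly like this.
    from dataclasses import dataclass, field

    @dataclass
    class Vdisk:
        raid_level: str
        extents: list = field(default_factory=list)  # (disk, physical block) pairs

    def change_label_only(vdisk, new_level):
        """Flip the RAID-level field; the blocks stay exactly where they were."""
        vdisk.raid_level = new_level

    def restripe(vdisk, new_level, new_layout):
        """A true migration copies every block out to a new physical layout."""
        vdisk.raid_level = new_level
        vdisk.extents = new_layout  # data physically moved on the spindles

If the box only does the first, the "RAID level" is really a property of the
software view, not of the bits on the platters.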

The mirroring can be accomplished with a snapshot group in Solaris, at
least.  It is sort of cool that you have two filesystems looking at the same
inodes on disk.  You can snap one, and then you have a complete copy of the
disk for backups or testing.
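
The copy-on-write trick behind those snapshots looks roughly like this toy
Python sketch.  It only illustrates the idea; it is not how Solaris actually
lays anything out on disk:

    # Toy copy-on-write snapshot: the snapshot shares blocks with the live
    # filesystem until a block is overwritten, at which point the old copy
    # is preserved for the snapshot.  Illustration only.
    class Volume:
        def __init__(self, blocks):
            self.blocks = dict(enumerate(blocks))
            self.snapshot = None            # block number -> preserved old data

        def snap(self):
            self.snapshot = {}              # starts empty: everything is shared

        def write(self, blockno, data):
            if self.snapshot is not None and blockno not in self.snapshot:
                self.snapshot[blockno] = self.blocks[blockno]  # save old copy
            self.blocks[blockno] = data

        def read_snapshot(self, blockno):
            if self.snapshot is not None and blockno in self.snapshot:
                return self.snapshot[blockno]
            return self.blocks[blockno]     # unchanged blocks are read in place

    vol = Volume(["a", "b", "c"])
    vol.snap()
    vol.write(1, "B")
    print(vol.blocks[1], vol.read_snapshot(1))   # prints: B b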

I am not familiar with RAID 5 with different parities.  Can you explain that
to the group?  I always knew it as RAID 0-5, with some new versions like 10
and 7 coming out now.  Is parity RAID changing the parity calculations or
just the striping locations?  When we use RAID 5 or RAID 3 it physically
changes the controller setup and the way the blocks are put to disk.  For
example, RAID 3 writes parity to one dedicated disk, while RAID 5 stripes it
across all five disks.
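
For what it is worth, here is the picture I have in my head of that
difference, as a toy Python sketch: parity is the bytewise XOR of the data
blocks in a stripe, RAID 3 always writes it to one dedicated disk, and
RAID 5 rotates it across all of them:

    from functools import reduce

    def parity(blocks):
        """Parity block = bytewise XOR of the data blocks in one stripe."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def place_stripe(stripe_no, data_blocks, ndisks, raid_level):
        """Return a disk -> block mapping for one stripe (toy illustration)."""
        p = parity(data_blocks)
        if raid_level == 3:
            parity_disk = ndisks - 1           # dedicated parity disk
        else:                                  # RAID 5: parity rotates per stripe
            parity_disk = stripe_no % ndisks
        layout, i = {}, 0
        for disk in range(ndisks):
            if disk == parity_disk:
                layout[disk] = ("P", p)
            else:
                layout[disk] = ("D", data_blocks[i])
                i += 1
        return layout

    # A 4+1 stripe: note where the parity lands for each level.
    data = [bytes([n] * 4) for n in range(4)]
    print(place_stripe(0, data, 5, raid_level=3))
    print(place_stripe(1, data, 5, raid_level=5))

Whether "RAID 5/parity N" changes that calculation or just the rotation is
exactly what I am asking about.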

Thanks for the thread; it has been interesting, for me at least.

John




-----Original Message-----
From: bbrush at unlnotes.unl.edu [mailto:bbrush at unlnotes.unl.edu]
Sent: Friday, September 27, 2002 3:02 PM
To: olug at olug.org
Subject: RE: [olug] SAN Information



This is an interesting idea, but (I think) it kind of boils down to a
remote RAID attached via FC.  There's no processing unit, right?  While
this gives you some of the benefits of a SAN appliance, it doesn't give you
a lot of the really useful features I've seen (at least I don't see how it
does; please correct me if I'm wrong).  I think this is referred to as a
JBOD (just a bunch of disks).

For instance, using the software with a Magnitude I could create a new
virtual disk (vdisk) from unused space.  I could then mirror it to an
existing disk.  Once it was mirrored I could break the mirror, and I would
have a perfect copy of that data.  I could back it up, assign it to another
server, upgrade it, or do anything else I wanted to it, and my original
data is unaffected; the server OS never even knows anything about this.
Let's say I make a mirror, then upgrade the mirror copy to a newer version
of software and test it.  If it works, I can then SWAP the two disks with
my production server.  If it doesn't, no biggie, delete the vdisk and start
over.

Oh, one thing I forgot to mention that's unique to the Xiotech Magnitude is
the ability to change RAID levels on the fly.  You can just select a vdisk
and tell it to change from RAID 5 to RAID 10, or from RAID 5/parity 3 to
RAID 5/parity 9, etc.

As pretty much everyone has said, it all boils down to what you need.  :-)
The more you need, the more money you'd better have.

Bill



From: "Rogers, John C NWD02" <John.C.Rogers at nwd02.usace.army.mil>
Sent by: olug-admin at olug.org
To: "'olug at olug.org'" <olug at olug.org>
Date: 09/27/2002 02:09 PM
Subject: RE: [olug] SAN Information
Please respond to olug

The software depends on how fancy you want to be.  If you use LUN masking,
etc., in the controller or switch, then each host can only see the LUNs the
controller lets it see via hardware addressing on the loop.  Some say that
is not a true SAN, but I use the definition that multiple hosts are using
one physical disk array, so I call it a SAN.  Now if you add software to
the mix, a whole bucket of possibilities opens up.  You can choose Veritas
or the software from the SAN vendor.  In this case the SAN is more like
another host on the loop, but its job is to store and retrieve data to/from
the disks.  These more advanced SANs allow you to do all sorts of cool
things like have been discussed (volume management, filesystem resizing,
cache stripe optimization, disk block size masking and the like).  The way
I look at it, in that environment the host never really "owns" the data;
the data belongs to the SAN and is served to the host as it requests it.


In my environment the host really owns the data, because it probes the loop
for the LUNs, attaches to them, and will get really mad if it does not see
them (it is not a virtual volume).  What is cool is that if you use Fibre
Channel drives and fibre controllers, they are all dual-attached by design.
So you can build a completely separate data path to the disks for
redundancy if you want to.  Again, the more you add, the more you pay.  In
Sun's case they have redundant interface software that watches for hardware
failure and can switch to the secondary path if needed.  This is basically
Veritas software under the covers.  The really big arrays spend a lot of
effort to make them fast.  Some big Hitachi units have over 10GB of disk
cache (RAM) that the CPU manages to allocate the writes in the most
efficient manner.  There is a world of difference between what I use and
those units.  We wanted something reasonable in cost, expandable, reliable
and vendor-neutral for upgrading in the future, so we built our own system.
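
The failover idea in that redundant interface software is conceptually
simple; here is a toy Python sketch of the concept (this has nothing to do
with how the actual Sun/Veritas code is written):

    # Toy multipath failover: try the primary path to the array, fall back
    # to the secondary if it fails.  Purely illustrative of the concept.
    class PathFailed(Exception):
        pass

    def read_block(paths, blockno):
        """Try each path to the array in order until one succeeds."""
        for path in paths:
            try:
                return path.read(blockno)
            except PathFailed:
                continue        # hardware failure: move on to the next path
        raise PathFailed("all paths to the array are down")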


John






-----Original Message-----
From: roger schmeits [mailto:schmeits at clarksoncollege.edu]
Sent: Friday, September 27, 2002 1:25 PM
To: olug at olug.org
Subject: Re: [olug] SAN Information





OK, let's say we build the SAN and buy all the hardware and so forth.
Don't you need software to interface with the different OSes?


That's where it gets pricey, right?  I understand the hardware part, but I
thought there had to be something in between the servers.


Please correct me if I am wrong.


Congrats on building your own SAN... impressive.










_______________________________________________
OLUG mailing list
OLUG at olug.org
http://lists.olug.org/mailman/listinfo/olug