Mailing List Archive

Oracle using NetApp
Has anyone got experience with running Oracle on NetApp filers? I am looking for information for a project I am involved in.

1. How does performance really stack up compared with a DAS or SAN solution? Does the IP stack -vs- SCSI stack cause a significant performance lag?

2. Are there compelling reasons why this would be a preferred solution?

3. Are there compelling reasons why this solution should be avoided?

4. What are the gotchas of such a solution? Are there configuration issues that can trip you up?

I've read the whitepapers on the NetApp and Oracle sites, but I am looking for someone who has actually done this.

Thanks,

Robin
winslett@ev1.net
Re: Oracle using NetApp [ In reply to ]
> 1. How does performance really stack up compared with a DAS or SAN
> solution? Does the IP stack -vs- SCSI stack cause a significant
> performance lag?

Robin,

It all depends on which Unix platform you run Oracle. Solaris' NFS client
in any release before Solaris 9 has a huge bug that slows down
performance. Try exporting a filesystem on your Solaris host and mounting
localhost:/filesystem. Then try some benchmarks...they stink. As I
understand it, the HP-UX nfs client does not have these issues and
performs well with NAS.
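
If you want to see it for yourself, the loopback test is only a couple of
commands - something along these lines (the export path and mount point
here are made up):

    # share a scratch filesystem and mount it back over NFS
    share -F nfs -o rw /export/nfstest
    mkdir -p /mnt/nfstest
    mount -F nfs -o vers=3 localhost:/export/nfstest /mnt/nfstest

    # crude write/read timing through the local NFS client
    time dd if=/dev/zero of=/mnt/nfstest/testfile bs=32k count=10000
    time dd if=/mnt/nfstest/testfile of=/dev/null bs=32k

Compare that against the same dd runs on the local filesystem and the
client overhead shows up pretty quickly.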

> 2. Are there compelling reasons why this would be a preferred solution?

NetApp features: instant snapshots, SnapMirror, SnapRestore. These
features are critical in an Oracle (and ClearCase) implementation.

> 3. Are there compelling reasons why this solution should be avoided?
>
> 4. What are the gotchas of such a solution? Are there configuration issues that can trip you up?

NFS over UDP is sometimes recommended by NetApp SEs to avoid TCP
overhead. TCP is preferable in my opinion.
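
For what it's worth, the transport is just a mount option, so it's easy to
test both; a typical Solaris mount for an Oracle volume might look roughly
like this (filer name, export and option values are only illustrative):

    mount -F nfs -o vers=3,proto=tcp,hard,intr,rsize=32768,wsize=32768 \
        filer1:/vol/oradata /u02/oradata

Swap proto=tcp for proto=udp to try the other transport.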

I have not personally implemented Oracle on NAS, but I've been in contact
with folks in my company who have played with it. The Solaris gotcha is
the big problem. If you're using Oracle on HP-UX, try it out.

/Brian/
--
Brian Long | | |
Americas IT Hosting Sys Admin | .|||. .|||.
Phone: (919) XXX-XXXX | ..:|||||||:...:|||||||:..
Pager: (888) XXX-XXXX | C i s c o S y s t e m s
Re: Oracle using NetApp [ In reply to ]
Hello,

We have 6 servers hitting our NetApp 760 with about 25 instances.
Love love love the filer.

I have not seen a problem with the database even if the network goes
down for some time (20 minutes and more).

The only hiccup I have seen is if the filer is rebooted while databases
are operating. A control file located on the filer may have a lock held
against it when the filer comes back up. We run one filer-based control
file and two local control files. If that happens, delete the filer-based
control file and replace it with a copy from the local set; then your
database will restart.
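
In shell terms the recovery is roughly this, with the instance name and
paths made up, and the database shut down first:

    # filer is back, but its control file copy has a stale lock
    rm /filer/oradata/PROD/control03.ctl
    cp /u01/oradata/PROD/control01.ctl /filer/oradata/PROD/control03.ctl
    # then, from sqlplus as sysdba:  startup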

Cheers,

Joe




At 09:35 AM 8/20/02 -0400, Brian Long wrote:
> > 1. How does performance really stack up compared with a DAS or SAN
> > solution? Does the IP stack -vs- SCSI stack cause a significant
> > performance lag?
>
>Robin,
>
>It all depends on which Unix platform you run Oracle. Solaris' NFS client
>in any release before Solaris 9 has a huge bug that slows down
>performance. Try exporting a filesystem on your Solaris host and mounting
>localhost:/filesystem. Then try some benchmarks...they stink. As I
>understand it, the HP-UX nfs client does not have these issues and
>performs well with NAS.
>
> > 2. Are there compelling reasons why this would be a preferred solution?
>
>Netapp features: instant snapshot, snapmirror, snaprestore. These
>features are critical in an Oracle (and Clearcase) implementation.
>
> > 3. Are there compelling reasons why this solution should be avoided?
> >
> > 4. What are the gotchas of such a solution? Are there configuration
> issues that can trip you up?
>
>NFS over UDP is sometimes recommended by Netapp SE's to avoid TCP
>overhead. TCP is preferable in my opinion.
>
>I have not personally implemented Oracle on NAS, but I've been in contact
>with folks in my company who have played with it. The Solaris gotcha is
>the big problem. If you're using ORacle on HP-UX, try it out.
>
>/Brian/
>--
> Brian Long | | |
> Americas IT Hosting Sys Admin | .|||. .|||.
> Phone: (919) XXX-XXXX | ..:|||||||:...:|||||||:..
> Pager: (888) XXX-XXXX | C i s c o S y s t e m s
Re: Oracle using NetApp [ In reply to ]
Brian,

In regard to the Solaris NFS issue, I've heard it is the other way
around: UDP with Solaris and the filers is a loser, but TCP works well.
I have actually experienced that myself, and my SE said it was a known
issue with Solaris, NetApp, and <insert switch vendor here>. It's only
a problem with the Solaris/NetApp combo, though; NFS over UDP from
Solaris to Solaris worked fine for me.

And, since this has been opened up to people who haven't actually
implemented Oracle on NAS: the only real issues I've heard of are when
people use automount maps for Oracle mounts. Sometimes the mounts go away,
and Oracle doesn't like that. But I've heard from numerous sources running
Oracle on a filer that if you hard-mount, it's a great solution.
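
To make the "hard mount" concrete, that just means a static /etc/vfstab
entry instead of an automount map - something like this (names and option
values are only illustrative):

    filer1:/vol/oradata  -  /u02/oradata  nfs  -  yes  hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768

With hard mounts (the Solaris default) and no automounter, the mount can't
expire out from under Oracle.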

ymmv, obviously.

~JK

Brian Long wrote:
>
> > 1. How does performance really stack up compared with a DAS or SAN
> > solution? Does the IP stack -vs- SCSI stack cause a significant
> > performance lag?
>
> Robin,
>
> It all depends on which Unix platform you run Oracle. Solaris' NFS client
> in any release before Solaris 9 has a huge bug that slows down
> performance. Try exporting a filesystem on your Solaris host and mounting
> localhost:/filesystem. Then try some benchmarks...they stink. As I
> understand it, the HP-UX nfs client does not have these issues and
> performs well with NAS.
>
> > 2. Are there compelling reasons why this would be a preferred solution?
>
> Netapp features: instant snapshot, snapmirror, snaprestore. These
> features are critical in an Oracle (and Clearcase) implementation.
>
> > 3. Are there compelling reasons why this solution should be avoided?
> >
> > 4. What are the gotchas of such a solution? Are there configuration issues that can trip you up?
>
> NFS over UDP is sometimes recommended by Netapp SE's to avoid TCP
> overhead. TCP is preferable in my opinion.
>
> I have not personally implemented Oracle on NAS, but I've been in contact
> with folks in my company who have played with it. The Solaris gotcha is
> the big problem. If you're using ORacle on HP-UX, try it out.
>
> /Brian/
> --
> Brian Long | | |
> Americas IT Hosting Sys Admin | .|||. .|||.
> Phone: (919) XXX-XXXX | ..:|||||||:...:|||||||:..
> Pager: (888) XXX-XXXX | C i s c o S y s t e m s

--
=====================
Jeff Kennedy
Unix Administrator
AMCC
jlkennedy@amcc.com
Re: Oracle using NetApp [ In reply to ]
On Tue, Aug 20, 2002 at 07:10:23AM -0700, Jeff Kennedy wrote:
> Brian,
>
> In reagards to the Solaris NFS issue, I've heard it is the other way
> around. UDP with Solaris and the filers is a loser but TCP works well.
> I have actually experienced that myself and my SE said it was a known
> issue with Solaris, Netapp, and <insert switch vendor here>. It's only
> a problem with the Solaris NetApp combo though, NFS over UDP that is
> Solaris to Solaris worked fine for me.

As far as I'm concerned, this is an urban legend :-) I use UDP v3 on
Solaris with NetApp, and I've always had good performance, no worse
than Linux or HPUX. When you say you experienced it yourself, what
exactly did you see? Was it reproducible with iozone? What were the
numbers?
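
For anyone who wants to compare notes, an iozone run like this one (file
size, record length and path are just examples) is enough to show the
effect:

    # sequential write (0) and read (1) tests against the filer mount
    iozone -i 0 -i 1 -s 1g -r 32k -f /mnt/filer/iozone.tmp

Run it once with the filesystem mounted proto=udp and once with proto=tcp.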

Thanks

Igor
Re: Oracle using NetApp [ In reply to ]
Reproducible on every Sun server running gigabit for 4 days. We
upgraded code on the Foundry gear we use on a Monday and, wham!, all Sun
gigabit performance went to hell. It was odd that the 100mb connections
still worked as before.

I spoke with Foundry and NetApp as well as Sun. My local SE said "turn
on TCP" since it was a known issue (he was surprised support didn't tell
me that). Once we did that and remounted the filesystems under TCP, it
went back to normal.

Now I know what you're going to say: "it was the Foundry code, idiot!"
And yes, that is partially correct. But under that same Foundry code,
before we turned on TCP on the filers, we did Sun-to-Sun NFS and it
worked fine under both UDP and TCP. My SE said it was specifically a
Sun/NetApp/<Foundry?> issue.

As to why it worked under UDP pre-code upgrade I couldn't say.

~JK

Igor Schein wrote:
>
> On Tue, Aug 20, 2002 at 07:10:23AM -0700, Jeff Kennedy wrote:
> > Brian,
> >
> > In reagards to the Solaris NFS issue, I've heard it is the other way
> > around. UDP with Solaris and the filers is a loser but TCP works well.
> > I have actually experienced that myself and my SE said it was a known
> > issue with Solaris, Netapp, and <insert switch vendor here>. It's only
> > a problem with the Solaris NetApp combo though, NFS over UDP that is
> > Solaris to Solaris worked fine for me.
>
> As far as I'm concerned, this is an urban legend :-) I use UDP v3 on
> Solaris with NetApp, and I've always had a good performance, no worse
> than Linux or HPUX. When you say you experienced it yourself, what
> exactly did you see? Was it reproducible with iozone? What were the
> numbers?
>
> Thanks
>
> Igor

--
=====================
Jeff Kennedy
Unix Administrator
AMCC
jlkennedy@amcc.com
RE: Oracle using NetApp [ In reply to ]
-----Original Message-----
As far as I'm concerned, this is an urban legend :-) I use UDP v3 on
Solaris with NetApp, and I've always had a good performance, no worse
than Linux or HPUX. When you say you experienced it yourself, what
exactly did you see? Was it reproducible with iozone? What were the
numbers?
-----Original Message-----

In our environment (YMMV), Solaris (100mb) to NetApp (100mb) works fine
over UDP. Solaris (100mb) to NetApp (gigabit) was horribly slow for certain
operations. Normal file reads were fine, but a compare (`cmp`) of two files
from the same filer was unbelievably slow. A cmp that would take 15 seconds
against the 100mb interface of the NetApp would take an hour or more against
the gigabit interface. To get Sun to actually admit it was their problem, we
did a Solaris (100mb) to Solaris (gigabit) test over UDP, and it took 5
minutes or so. Not as bad as to NetApp, but bad enough for them to file a
bug, which Sun Engineering ignored since there is a work-around (use TCP or
the 100mb interface of the filer). This was with Solaris 8. Solaris 7 did
not have this problem in our environment.
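
(The comparison itself was nothing exotic - just timing a cmp of two files
on the filer, once through each interface, along the lines of

    time cmp /filer_gige/qa/fileA /filer_gige/qa/fileB
    time cmp /filer_100/qa/fileA /filer_100/qa/fileB

where those automount paths are, of course, hypothetical.)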

Since nearly all our clients are 100mb, and only our QA group was running
into this, we set their automount entries to point to the 100mb interface.

John
Re: Oracle using NetApp [ In reply to ]
On Tue, Aug 20, 2002 at 10:47:13AM -0700, Jeff Kennedy wrote:
> Reproduceable on every Sun server running gigabit for 4 days. We
> upgraded code on the Foundry gear we use on a Monday and, wham!, all Sun
> gigabit performance went to hell. It was odd that the 100mb connections
> still worked as before.

There are issues with flow control, back-to-back network packets and
buffer space on switches. This is not related to Solaris boxes,
Foundry switches or NetApp filers in particular, it's just a place
where people notice the issues. I have experienced the same
with Cisco switches.


--
i.A. Michael van Elst / phone: +49 721 9652 330
Xlink - Network Information Centre \/ fax: +49 721 9652 349
Emmy-Noether-Strasse 9 /\ link http://nic.xlink.net/
D-76131 Karlsruhe, Germany /_______ email: hostmaster@xlink.net
[ KPNQwest Germany GmbH, Sitz Karlsruhe ]
[ Amtsgericht Karlsruhe HRB 8161, Geschaeftsfuehrer: Michael Mueller-Berg ]
Re: Oracle using NetApp [ In reply to ]
Robin Winslett wrote:
>
> Has anyone got experience with running Oracle on NetApp filers? I am looking for information for a project I am involved in.
>
> 1. How does performance really stack up compared with a DAS or SAN solution? Does the IP stack -vs- SCSI stack cause a significant performance lag?

In our environment, we have typically seen faster writes to the filer
than to local disk (Sun UltraSCSI1 10000 rpm). We have GbE fibre
between our servers and our filers and we have not seen a problem with
performance. We have one F740 running about 8 instances and another two
clusters of F760C's running a couple more.

> 2. Are there compelling reasons why this would be a preferred solution?

Backups - put the database in backup mode for about five minutes and
that's all. During that time you can create a snapshot of the entire
filesystem and start the backup (using NDMP-capable backup software).
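
The skeleton of that backup is short - roughly the following, with the
tablespace, volume and snapshot names made up (and rsh access to the filer
assumed):

    # put the tablespace(s) into hot backup mode
    echo "alter tablespace users begin backup;" | sqlplus -s "/ as sysdba"

    # take the instant snapshot on the filer
    rsh filer1 snap create oradata nightly.oracle

    # take the tablespace(s) out of backup mode a few minutes later
    echo "alter tablespace users end backup;" | sqlplus -s "/ as sysdba"

The NDMP dump of the snapshot can then run at its leisure.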

> 3. Are there compelling reasons why this solution should be avoided?

We did move one database off the filers because it was doing many small
reads and writes (about 4 bytes each). Since the typical Oracle write
size is 8K, there was a lot of wasted space in each network packet; we
broke down and bought a Sun A5000.

> 4. What are the gotchas of such a solution? Are there configuration issues that can trip you up?

We are looking at converting some of our instances from UDP to TCP, but
we're not seeing immediate pressure to make the move, so we're going to
hold off until it looks like it might be a problem.

> I've read the whitepapers on the NetApp and Oracle sites. I am looking for someone though that has actually done this.
>
> Thanks,
>
> Robin
> winslett@ev1.net

Geoff Hardin
geoff.hardin@dalsemi.com
Ignorance can be cured by learning, Stupidity usually can't.
Re: Oracle using NetApp [ In reply to ]
Not urban legend for us. We've experienced this just last week.

We have a new F810 with a dedicated GigE connection to an E880 (Sol8),
intending to use the F810 as our mail spool device.

NFS mounting across UDP we can write to the NetApp at 60MB/s. Going
from the NetApp to the SUN it goes at 25-30MB/s once or twice, and
then slows right down to 1-2MB/s.

Mounting NFS across TCP, we see 50MB/s to the NetApp, and 25MB/s going back
from the NetApp to the SUN.

With our production F740 mounting NFS/UDP across 100mb to our SUNs (Sol 2.6 and 8)
we've had no performance problems.


> -----Original Message-----
> As far as I'm concerned, this is an urban legend :-) I use UDP v3 on
> Solaris with NetApp, and I've always had a good performance, no worse
> than Linux or HPUX. When you say you experienced it yourself, what
> exactly did you see? Was it reproducible with iozone? What were the
> numbers?
> -----Original Message-----
>
> In our environment, YMMV, Solaris (100mb) to NetApp (100mb) works fine over
> UDP. Solaris (100mb) to NetApp (gigabit) was horribly slow for certain
> operations.
> Normal file reads were fine, but a compare (`cmp`) of two files from the
> same filer
> were unbelievably slow. A cmp that would take 15 seconds to the 100mb
> interface of
> the NetApp would take an hour or more to the gigabit interface. To get Sun
> to actually
> admit it was their problem, we did a Solaris (100mb) to Solaris (gigabit)
> test over UDP,
> and it took 5minutes or so. Not as bad as to NetApp, but bad enough for
> them to file a
> bug, which Sun Engineering ignored since there is a work-around (use TCP or
> the 100mb
> interface of the filer). This was with Solaris 8. Solaris 7 did not have
> this problem
> in our environment.
>
> Since nearly all our clients are 100mb, and only our QA group was running
> into this,
> we set their automount entries to point to the 100mb interface.
>
> John
>

--
Jeff Bryer bryer@sfu.ca
Systems Administrator (604) 291-4935
Academic Computing, Simon Fraser University
Re: Oracle using NetApp [ In reply to ]
One plea, from the heart:

If your site has seen performance issues with NFS between Suns and NetApps,
please, please log support calls with Sun, NetApp and the switch vendors.

And don't let any of them off the hook until you get patches, workarounds,
explanations and reliable performance all the time.

It'll help everyone else - particularly if you have the chutzpah to drive
the calls hard and make lots of noise. When this stuff gets back to the
engineering groups *then* and *only* then will the decent work get done.

Sorry if this is egg-sucking 101, but I find, as a conslutant (or whatever
you want to call me), that people are way too backward about pushing the
vendors, and far too willing to just moan to friends.

This Sun-NetApp discussion crops up again and again on this list, and
it's obviously a real bugbear for lots of people, so I'd like to get it
squished once and for all.

Message ends.


--
-Mark ... an Englishman in Delft ...
Re: Oracle using NetApp [ In reply to ]
On Tue, Aug 20, 2002 at 01:56:08PM -0700, Jeff Bryer wrote:
> Not urban legend for us. We've experienced this just last week.
>
> We have a new F810 with a dedicated GigE connection to an E880 (Sol8),
> intending to use the F810 as our mail spool device.
>
> NFS mounting across UDP we can write to the NetApp at 60MB/s. Going
> from the NetApp to the SUN it goes at 25-30MB/s once or twice, and
> then slows right down to 1-2MB/s.
>
> Mounting NFS across TCP, we see 50MB/s to the NetApp, and 25MB/s going back
> from the NetApp to the SUN.
>
> With our production F740 mounting NFS/UDP across 100mb to our SUNs (Sol 2.6 and 8)
> we've had no performance problems.

After reading through the responses, I get the feeling that

1) the alleged Solaris bug only appears when a Gb connection is involved
(it wasn't clear in the original messages). I only have 100Mb Suns, with
one exception.

2) the problem only occurs with NFS reads - NFS writes are fine (only
Jeff Bryer said that explicitly)

There are some things I still don't understand. Jeff says that his
F810 is connected directly to a Sun - no switch in between, I presume.
If that's the case, it pretty much eliminates network switches as the
source of the problem.

I have a V880 server with a Gb interface, and it talks to a NetApp's
Gb interface through Extreme Network's Summit48 switch. Preliminary
testing didn't indicate any NFS read problem, and I'm still deciding
on the course of further testing.

I also found Sun bug #4434256, which talks about slow reads over UDP.

I'll quote it here:

\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
When Solaris 8 reads a remote file using NFSv3/UDP, it sometimes waits
0.8 sec before sending the READ request to the server.

Following is the network configuration of the customer's site.

        FastEther              Gigabit
  E250 ------------- SSR2000 ------------ NetApp
 (Sol 8)

  2332  0.00005  NetApp -> Sol8   UDP IP fragment ID=17850 Offset=31080 MF=1
  2333  0.00006  NetApp -> Sol8   UDP IP fragment ID=17850 Offset=32560 MF=0
  2334  0.84602  Sol8 -> NetApp   NFS C READ3 FH=5AA1 at 7503872 for 32768
        ~~~~~~~
  2335  1.73994  Sol8 -> NetApp   NFS C READ3 FH=5AA1 at 7503872 for 32768 (retransmit)
  2336  0.00051  NetApp -> Sol8   UDP IP fragment ID=18106 Offset=0 MF=1
  2337  0.00010  NetApp -> Sol8   UDP IP fragment ID=18106 Offset=1480 MF=1

Since there is a speed difference between the client and the server, the
switch caused a buffer overflow, which in turn caused retransmission of the
READ packet from Solaris 8. In order to lessen the transmission of read
request packets, the user set the nfs3_nra parameter to 1.

However, it is not necessary to wait before sending the FIRST read request
packet even in such an environment.
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
Can anyone confirm this? The workaround, of course, is to use NFS over TCP.

Igor
Re: Oracle using NetApp [ In reply to ]
On Tue, 20 Aug 2002, Igor Schein wrote:

> On Tue, Aug 20, 2002 at 01:56:08PM -0700, Jeff Bryer wrote:
> > Not urban legend for us. We've experienced this just last week.
/.../

> After reading through responses, I get a feel that
>
> 1) the alleged Solaris bug only appears with Gb connection involved (
> it wasn't clear in the original messages ). I only have 100Mb Suns
> with 1 exception.
/.../

Hmmm. I've had a "unique" situation, then. The symptoms that I first
reported (a year ago? longer?) were peculiar only to Oracle's I/O to the
filer, _not_ an issue with NFS per se.

I noticed that under extreme stress, UDP performance on Solaris 7 was
topping out around 40,000 packets/sec - a limit in the network stack or in
the switch, or in the Gigabit driver? We've switched to TCP and seen some
improvement, but the underlying cause is still a mystery. Since our
performance is acceptable and we don't have the time/equipment to really
do a thorough set of tests to get to the bottom of it, we've decided to
ignore it. :-)

We're (still) running 5.3.7R3 on an F760 with six FC9's, two volumes (8x
36GB each) for Oracle on two FC-AL loops, Gigabit-II card; Cisco 3508G
switch; Sun E4500, Sbus Gigabit 2.0, Solaris 7 HW11/99+latest patches,
driver updates, etc. Using "bonnie" or "cpio" or various other tools to
test plain old NFS performance, the E4500 can easily saturate the single
PCI bus in the F760, getting write speeds of 35-37MB/sec, read speeds
sometimes into the 50MB's range. Our 420R's with PCI Gigabit cards can
top it out too. We've never had a problem with typical NFS loads.

Oracle, on the other hand, seems to throttle itself - we typically see a
single Oracle (8.1.7.3) I/O slave reading or writing 3-5MB/sec (UDP) or
now about 7-8MB/sec (TCP). We've concluded that this is NOT a problem
with the filer, or the network - with multiple Oracle I/O procs running on
multiple CPUs and doing parititioned table scans, for instance, Oracle
*can* saturate the filer. But for small non-partitioned apps where you
get no parallelism, Oracle - or its interaction with Solaris - introduces
a bottleneck that nobody has been able to identify...

Anyway, we'll hopefully be jumping up to Solaris 8 (or 9) and Oracle 9i
and ONTAP 6.x, at which point we might be able to take the time to do some
tests and see if newer versions of all the various software bits will make
things go faster...

As to the original poster's question, though, our experience with Oracle
on the filer has been overwhelmingly positive. I have no hesitation
recommending it - subject, of course, to the caveat that you should test
your particular application to determine if it's a good fit or not.

-- Chris

--
Chris Lamb, Unix Guy
MeasureCast, Inc.
503-241-1469 x247
<skeezics@measurecast.com>
RE: Oracle using NetApp [ In reply to ]
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Igor Schein
Sent: Tuesday, August 20, 2002 6:24 PM
To: Jeff Bryer
Cc: toasters@mathworks.com
Subject: Re: Oracle using NetApp


After reading through responses, I get a feel that

1) the alleged Solaris bug only appears with Gb connection involved (
it wasn't clear in the original messages ). I only have 100Mb Suns
with 1 exception.

---------------------

Not in our case. We experienced problems with NFS over UDP on Sun Ultra
1, Ultra 2 and Ultra 10 boxes but not on Linux boxes, all of which are
connected to an Extreme Black Diamond over 100Mbps links. We had the
problem with both a 100Mbps connection to the filer and a 1Gbps
connection.
Re: Oracle using NetApp [ In reply to ]
On Tue, Aug 20, 2002 at 09:24:24PM -0400, Igor Schein wrote:
>
> I have a V880 server with a Gb interface, and it talks to a NetApp's
> Gb interface through Extreme Network's Summit48 switch. Preliminary
> testing didn't indicate any NFS read problem, and I'm still deciding
> on the course of further testing.

I ran NFS read tests with the aforementioned V880 server as the NFS client.
There was no significant difference between NFS over TCP and NFS over UDP.
The average read speed was 32MB/s, which is weak, but I think it can be
improved by tweaking some parameters on both the Solaris and NetApp sides.
Can anyone recommend any specific settings that improve NFS reads?

Thanks

Igor
Re: Oracle using NetApp [ In reply to ]
On Wed, Aug 21, 2002 at 02:10:12PM -0400, Igor Schein wrote:
> I ran NFS read tests with the forementioned V880 server as NFS client.
> There was no significant difference between NFS TCP and NFS UDP. The average
> read speed was 32MBs, which is weak, but I think this can be configured by
> tweaking some parameters on both Solaris and NetApp side. Can anyone recommend
> any specific settings that can improve NFS read?

NetApp has documented the parameters used in one of their benchmarks at
<http://www.netapp.com/tech_library/3145.html#5.>

In particular, we have found that nfs:nfs3_max_threads and nfs:nfs3_nra
can make quite a difference.
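
For reference, those are /etc/system settings (the values below are only
examples, not recommendations - see the NetApp paper for theirs):

    set nfs:nfs3_max_threads = 48
    set nfs:nfs3_nra = 16

A reboot is needed for /etc/system changes to take effect.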

--
Deron Johnson
djohnson@amgen.com
RE: Oracle using NetApp [ In reply to ]
Beg to differ, gents. We had an install with UDP/Solaris/1Gb over private
circuits, Sun to filer - so no switch or hubs in the way at all.

Performance was worse than the kit it replaced. Specifically, backing up
over NFS (I know - not a good idea - but that's another tale <g>) took 36
hours for 225GB to go to tape.

Switching to TCP, with the TCP tuning I published to the list a while back,
made performance scream - file I/O via a Sybase control database went from
30-60 seconds down to 3 or 4, and the backups dropped to 4-5 hours.

NetApp definitely stated that UDP/Solaris/1Gb was not a good combination
and to stick with TCP; they implied it's a Solaris implementation issue. I
posted their notes here as well.

TTFN

Peter


-----Original Message-----
From: John Clear [mailto:jclear@ati.com]
Sent: 20 August 2002 19:02
To: 'igor@txc.com'; Jeff Kennedy
Cc: Brian Long; Robin Winslett; toasters@mathworks.com
Subject: RE: Oracle using NetApp



-----Original Message-----
As far as I'm concerned, this is an urban legend :-) I use UDP v3 on
Solaris with NetApp, and I've always had a good performance, no worse
than Linux or HPUX. When you say you experienced it yourself, what
exactly did you see? Was it reproducible with iozone? What were the
numbers?
-----Original Message-----

In our environment, YMMV, Solaris (100mb) to NetApp (100mb) works fine over
UDP. Solaris (100mb) to NetApp (gigabit) was horribly slow for certain
operations.
Normal file reads were fine, but a compare (`cmp`) of two files from the
same filer
were unbelievably slow. A cmp that would take 15 seconds to the 100mb
interface of
the NetApp would take an hour or more to the gigabit interface. To get Sun
to actually
admit it was their problem, we did a Solaris (100mb) to Solaris (gigabit)
test over UDP,
and it took 5minutes or so. Not as bad as to NetApp, but bad enough for
them to file a
bug, which Sun Engineering ignored since there is a work-around (use TCP or
the 100mb
interface of the filer). This was with Solaris 8. Solaris 7 did not have
this problem
in our environment.

Since nearly all our clients are 100mb, and only our QA group was running
into this,
we set their automount entries to point to the 100mb interface.

John



RE: Oracle using NetApp [ In reply to ]
Does this only apply to Sun GigE cards? Has anyone seen this type of issue
using SysKonnect GigE?

Thanks

-Mark
-----Original Message-----
From: Peter Bryant [mailto:PBryant@burallplastec.com]
Sent: Thursday, August 22, 2002 4:12 AM
To: toasters@mathworks.com
Subject: RE: Oracle using NetApp




Beg to differ Gents. We had an install with UDP/Solaris/1GB over private
circuits Sun to Filer - so no switch or hubs in the way at all.

Performance was worse than the kit it replaced. Specifically backing up (I
know - over NFS not a good idea - but that's another tale <g>) it took 36
hours for 225GB to go to tape.

Switch to TCP and with TCP tuning (that I published to the list a while
back) and performance screamed - file I/O via a Sybase control database went
from 30-60 seconds down to 3 or 4. And the backups dropped to 4-5 hours.

NetApp definitely stated UDP/Solaris/1GB was not a good combination and
stick with TCP. It's a Solaris implementation they implied. I posted their
notes here as well

TTFN

Peter


-----Original Message-----
From: John Clear [mailto:jclear@ati.com]
Sent: 20 August 2002 19:02
To: 'igor@txc.com'; Jeff Kennedy
Cc: Brian Long; Robin Winslett; toasters@mathworks.com
Subject: RE: Oracle using NetApp



-----Original Message-----
As far as I'm concerned, this is an urban legend :-) I use UDP v3 on
Solaris with NetApp, and I've always had a good performance, no worse
than Linux or HPUX. When you say you experienced it yourself, what
exactly did you see? Was it reproducible with iozone? What were the
numbers?
-----Original Message-----

In our environment, YMMV, Solaris (100mb) to NetApp (100mb) works fine over
UDP. Solaris (100mb) to NetApp (gigabit) was horribly slow for certain
operations.
Normal file reads were fine, but a compare (`cmp`) of two files from the
same filer
were unbelievably slow. A cmp that would take 15 seconds to the 100mb
interface of
the NetApp would take an hour or more to the gigabit interface. To get Sun
to actually
admit it was their problem, we did a Solaris (100mb) to Solaris (gigabit)
test over UDP,
and it took 5minutes or so. Not as bad as to NetApp, but bad enough for
them to file a
bug, which Sun Engineering ignored since there is a work-around (use TCP or
the 100mb
interface of the filer). This was with Solaris 8. Solaris 7 did not have
this problem
in our environment.

Since nearly all our clients are 100mb, and only our QA group was running
into this,
we set their automount entries to point to the 100mb interface.

John


