Mailing List Archive

single or multiple aggregates?
Hello All,

We're running a pair of FAS8020s with Flash Cache modules, 600GB SAS
disks, and ONTAP 9.1P6 to serve two SVMs: one for an NFSv3 storage resource
for our Xen VM farm and one for NFS/CIFS file storage for our users.

We currently run one SVM on one node and one on the other to avoid any
impact from mixing IO loads on the cache modules. We also have two
aggregates, one for the VMs and one for the file server.

We're getting tight on space and also getting tired of needing to
withhold unallocated disks and hand them out as needed to each
aggregate. Right now we have too much free space on one aggregate and
not enough on the other. We're leaning toward using the system the way an
enterprise storage cluster is meant to be used and migrating both onto a
single aggregate.

On the other hand, we've been bitten before when mixing different
performance loads on the same gear; admittedly, that was for a larger
pool of very different loads on SATA disks in a SAN behind a V-Series filer.

We're not much worried about cache poisoning as long as we keep each SVM
on its own node. Our bigger concern is mixing loads at the actual disk IO
level. Does anyone have any guidance? The total cluster has 144 600GB 10K
RPM SAS disks, and we'd be mixing Xen VMs with NFS/CIFS file services that
tend to run heavy on "getattr" reads.

Let us know your thoughts and if you need any more information,


Randy in Seattle

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters
Re: single or multiple aggregates?
Randy -

Have you heard of FlexGroups? They're in a later rev of ONTAP and allow you to collapse disparate volumes on different aggregates into a single large volume. You can even spread the workloads across controllers: for instance, create two 67-drive aggregates, each on its own controller, then build a FlexGroup out of two constituent volumes per controller into a single usable namespace per application (say, one for your CIFS/SMB use case and one for your NFSv3 use case). The only difficulty would lie in obtaining swing storage so you can move the existing volumes over and reconfigure the current storage; you can migrate the volumes from one aggregate to another with the 'vol move' command.
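To make that concrete, here's a rough sketch from the clustershell. The aggregate, SVM, and volume names are only placeholders, and the disk counts and sizes are illustrative, so size the pieces to your own layout:

  storage aggregate create -aggregate aggr_data_n1 -node cluster-01 -diskcount 67 -maxraidsize 23
  storage aggregate create -aggregate aggr_data_n2 -node cluster-02 -diskcount 67 -maxraidsize 23
  volume create -vserver svm_files -volume fg_files -aggr-list aggr_data_n1,aggr_data_n2 -aggr-list-multiplier 2 -size 40TB -space-guarantee none -junction-path /fg_files
  volume move start -vserver svm_files -volume vol_users -destination-aggregate aggr_data_n2
  volume move show

The 'volume create' with an aggregate list is what makes it a FlexGroup rather than a plain FlexVol, and 'vol move' runs non-disruptively in the background while you watch it with 'volume move show'.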

Something to think about anyway, and it does allow for more flexibility with how you use the disk.

Here’s a link to the TR on FlexGroups: https://www.netapp.com/us/media/tr-4571.pdf

I would suggest upgrading to the latest OS rev, 9.7P5, so you have access to qtrees, quotas, and so on with FlexGroups; these features were added to the FlexGroup toolkit incrementally.

Anthony Bar

Re: single or multiple aggregates?
>>>>> "Randy" == Randy Rue <randyrue@gmail.com> writes:

Randy> We're running a pair of FAS8020's with Flash Cache modules, SAS
Randy> 600GB disks and DOT9.1P6 to serve two SVM's, one for an NFSv3
Randy> storage resource for our Xen VM farm and one for NFS/CIFS file
Randy> storage for our users.

How many volumes are you running in each SVM? Basically, in my
experience it's not too big a deal to have different SVMs sharing
aggregates. The bigger deal is having SATA vs. SAS on the same head,
at least with older versions. I'm running 9.3 these days and not
seeing problems.

Randy> We currently run one SVM on one node and one on the other to
Randy> avoid any impact from mixing IO loads on the cache modules. We
Randy> also have two aggregates, one for the VMs and one for the file
Randy> server.

Are you running OnCommand Performance Manager and watching your loads
and metrics? Personally, I don't think you'll run into any problems
serving data from either SVM on either head or either aggregate.

Randy> We're getting tight on space and also getting tired of needing
Randy> to withhold unallocated disks and hand them out as needed to
Randy> each aggregate. Right now we have too much free space on one
Randy> aggregate and not enough on the other. We're leaning toward
Randy> using the system like an enterprise storage cluster is meant to
Randy> be and migrating both to a single aggregate.

I'd just move some volumes from the full aggregate to the emptier one.
Using Performance Advisor, you can look for lower-performing but higher
disk-usage volume(s) to move over and balance the load.
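If it helps, something along these lines (the SVM, volume, and aggregate names are made up) will show where the space is actually going and which volumes are busiest, and the move itself is non-disruptive:

  volume show -vserver svm_files -fields aggregate,size,used,percent-used
  statistics volume show -interval 5 -iterations 1 -max 25
  volume move start -vserver svm_files -volume vol_archive -destination-aggregate aggr_data_n2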

Randy> On the other hand, we've been bitten before when mixing
Randy> different performance loads on the same gear, admittedly that
Randy> was for a larger pool of very different loads on SATA disks in
Randy> a SAN behind a V-Series filer.

That is a different situation, since a NetApp V-Series controller can't
really control the backend array nearly as well.

Randy> We're not much worried about cache poisoning as long as we keep
Randy> each SVM on each node. Our bigger concern is mixing loads at
Randy> the actual disk IO level. Does anyone have any guidance? The
Randy> total cluster is 144 600GB SAS 10K RPM disks and we'd be mixing
Randy> Xen VMs and NFS/CIFS file services that tend to run heavy on
Randy> "get attr" reads.

Randy> Let us know your thoughts and if you need any more information,

How many volumes, and how many files per volume, do you have? I would
expect the Xen VMs not to generate nearly as much metadata load, and the
getattr() traffic will depend heavily on how big and busy those volumes
are.

What sort of load do you see on your 8020 pair? Can you post some
output of 'statistics aggregate show' and 'statistics volume show'?
Basically, just get some data before you start moving stuff around.
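For example (the sampling interval, iteration count, and SVM name below are just suggestions), a quick baseline per aggregate and per volume, plus file counts for the getattr question, could look like:

  statistics aggregate show -interval 5 -iterations 1
  statistics volume show -interval 5 -iterations 1 -max 25
  volume show -vserver svm_files -fields files,files-used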

John
Re: single or multiple aggregates?
Hi Randy

We tend to split the disks between the controllers and allocate all the
disks (minus spares) to the aggregates at setup. I don't like to wafliron /
reallocate if I can avoid it! I also don't like to put all the load onto a
single controller.

As others have said, keep an eye on performance and move volumes around as
needed to balance space and IOPS.

For your Xen NFS storage repositories, I would use several volumes. You
might lose some de-dupe efficiency but you'll gain a lot of flexibility.
Create a separate LIF for each volume and home the LIF with the controller
that has the data. If you need to move one of the volumes to the other
controller's aggregate in the future, you can move and re-home the LIF to
keep traffic off the cluster network without affecting the traffic to the
other volumes.
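A sketch of that pattern (the LIF name, port, and address below are made up, so adjust them to your network): create one data LIF per storage repository volume up front, and if you later 'vol move' that volume to the other node's aggregate, re-home the LIF to follow it:

  network interface create -vserver svm_xen -lif lif_sr01 -role data -data-protocol nfs -home-node cluster-01 -home-port a0a-100 -address 192.0.2.11 -netmask 255.255.255.0
  network interface modify -vserver svm_xen -lif lif_sr01 -home-node cluster-02 -home-port a0a-100
  network interface revert -vserver svm_xen -lif lif_sr01

The revert just sends the LIF back to its (new) home port so client traffic stops crossing the cluster interconnect.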

Good luck
Steve

Re: single or multiple aggregates?
9.7 P4/P5 is definitely the place to be right now, if your system can run it (check https://hwu.netapp.com to see). There are a couple of ugly bugs in the GA 9.7 release and P1-P3, but P4/P5 ironed those out and it's very stable, and there are a lot of performance improvements (especially if you run on an AFF). The new low-end AFF (C190) is a really great deal price-wise too for a small deployment.
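If you do go to 9.7, the automated update flow from the clustershell is roughly this (the web server URL is just a placeholder for wherever you stage the image):

  cluster image show
  cluster image package get -url http://webserver.example.com/ontap/97P5_q_image.tgz
  cluster image validate -version 9.7P5
  cluster image update -version 9.7P5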

Anthony Bar
tbar@berkcom.com
www.berkcom.com


On Jun 29, 2020, at 3:38 PM, Michael Bergman <michael.bergman@ericsson.com> wrote:

On 2020-06-29 21:15, John Stoffel wrote:
How many volumes are you running in each SVM? Basically, in my
experience it's not too big a deal to have different SVMs sharing
aggregates. The bigger deal is having SATA vs. SAS on the same head,
at least with older versions. I'm running 9.3 these days and not
seeing problems.

Indeed, there's no problem having many vservers share aggregates. That was
never really an issue, certainly not with ONTAP 9 and up. And the problem you
mention with CP processing (WAFL Consistency Points) when mixing 10K rpm and
7.2K rpm aggregates on the same controller was solved long ago.
Really, 9.7 is a good place to be all in all. It has many carefully thought-out
performance tunings in WAFL and other places which earlier versions do not have.

/M
Re: single or multiple aggregates?
Two aggregates give you more available WAFL affinities (CPU threads for parallel write and metadata operations), which will help performance, especially with FlexGroup volumes. But you do lose disks to parity, so you'll want to weigh capacity against potential performance.
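As a very rough illustration of the parity cost (hypothetical layout, assuming RAID-DP and about 67 data-bearing SAS disks per node after spares): one big aggregate per node built as three raid groups of 22-23 disks gives up 6 disks to parity, while splitting those same disks into two aggregates of 33-34 disks each generally means four smaller raid groups and 8 parity disks, i.e. roughly two extra drives of capacity per node traded for the additional affinities.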
