Mailing List Archive

hardware or software based disk ownership?
Hello all, I sure do have a lot of questions lately,

I'm hoping to consolidate two aggregates into one for better space use,
knowing that we might lose some performance.

We're currently running two SVMs, each with its own aggregate, one on
each node of a FAS8020 cluster running 9.1P6 (I know, I know).

I inherited ownership of this cluster and didn't set it up.

When I try to add a new volume to one SVM I can only see one aggregate
but I'd like to put that volume on the other aggr.

How can I tell if I'm running HW or SW based ownership?

To confirm, if I was running SW based should I be able to use either
aggr on either SVM?

And last, is there any way to change from HW to SW without tearing it
all up?


Randy


_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters
Re: hardware or software based disk ownership?
>>>>> "Randy" == Randy Rue <randyrue@gmail.com> writes:

Randy> Hello all, I sure do have a lot of questions lately,
Randy> I'm hoping to consolidate two aggregates into one for better space use,
Randy> knowing that we might lose some performance.

Randy> We're currently running two SVMs, each with its own aggregate, one on
Randy> each node of a FAS8020 cluster running 9.1P6 (I know, I know).

Randy> I inherited ownership of this cluster and didn't set it up.

Randy> When I try to add a new volume to one SVM I can only see one aggregate
Randy> but I'd like to put that volume on the other aggr.

This doesn't make any sense to me, since aggregates aren't owned by
the SVM. Unless whoever set this up did something funky and I'm
about to learn something new.

From the sound of it, you're doing this from the web interface? Can
you maybe show some of the output from the CLI commands instead?

Can you do:

storage aggregate show -fields aggregate,node,is-home,volcount
vserver show -fields aggregate
vol create -vserver <foo> -size 10g -aggregate <aggregate1> \
-volume test1

And then show us the errors.

Randy> How can I tell if I'm running HW or SW based ownership?

This is more at the node/disk level, not the aggregate level.
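
If you do want to check disk ownership anyway, a quick:

storage disk show -fields owner

from the cluster shell should show a software owner for every disk.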

Randy> To confirm, if I was running SW based should I be able to use either
Randy> aggr on either SVM?

It doesn't matter what you're running; aggregates are visible to all
nodes and vservers in a cluster unless (I think!) they've been locked
down in some way.

Randy> And last, is there any way to change from HW to SW without
Randy> tearing it all up?

No need... I think you really need to look at:

storage aggregate show -instance

and see how things look there, then do:

vserver show -instance

and see if they are locked to allocating only on a specific aggregate
somehow. God knows why anyone does this normally...

John
Re: hardware or software based disk ownership?
On 2020-07-07 20:51, Rue, Randy wrote:
> We're currently running two SVMs, each with its own aggregate, one on
> each node of a FAS8020 cluster running 9.1P6 (I know, I know).

Conceptually, SVMs do not own aggregates. SVMs can generally use
resources from all over the cluster. For aggregates, you can restrict
that with the "aggr-list" property of the vserver object.

> When I try to add a new volume to one SVM I can only see one aggregate
> but I'd like to put that volume on the other aggr.

That's probably the aggr-list property then. You can modify that list
with the vserver modify command:

vserver modify -vserver Vserver_name -aggr-list aggr_name[, aggr_name]
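
To see what an SVM is currently restricted to, check the same property
with:

vserver show -vserver Vserver_name -fields aggr-list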

> How can I tell if I'm running HW or SW based ownership?

That won't be the issue. HW-based disk ownership was deprecated back in
the 7.3 days, IIRC, and ONTAP 9 doesn't support it at all anymore.

> To confirm, if I was running SW based should I be able to use either
> aggr on either SVM?
>
> And last, is there any way to change from HW to SW without tearing it
> all up?

As mentioned above, irrelevant.

Hope that helps,
Oliver
Re: hardware or software based disk ownership?
Randy,
Check your configuration; you need to assign the aggregates to the
vservers.



https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html

shows you the steps to take.

vserver show -fields aggr-list

vserver modify -vserver <vserver> -aggr-list <aggr1>,<aggr2>[,aggrN]

John
Re: hardware or software based disk ownership?
I do not think HW ownership has been around since ONTAP 8.
Somewhere between 6.x and the end of 7, it all became SW ownership.

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>




On Tue, Jul 7, 2020 at 2:53 PM Rue, Randy <randyrue@gmail.com> wrote:

> Hello all, I sure do have a lot of questions lately,
>
> I'm hoping to consolidate two aggregates into one for better space use,
> knowing that we might lose some performance.
>
> We're currently running two SVMs, each with its own aggregate, one on
> each node of a FAS8020 cluster running 9.1P6 (I know, I know).
>
> I inherited ownership of this cluster and didn't set it up.
>
> When I try to add a new volume to one SVM I can only see one aggregate
> but I'd like to put that volume on the other aggr.
>
> How can I tell if I'm running HW or SW based ownership?
>
> To confirm, if I was running SW based should I be able to use either
> aggr on either SVM?
>
> And last, is there any way to change from HW to SW without tearing it
> all up?
>
>
> Randy
>
>
Re: hardware or software based disk ownership?
Also, if you are doing NAS data (and unstructured, like home dirs, NOT
databases or VMDKs) you should upgrade to 9.7P5 and use FlexGroups, which
would utilize your entire system: aggregates on both nodes, networking on
both nodes, CPU/RAM on both nodes. It can actually improve performance!
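
As a rough sketch (check the volume create docs for whatever release you
land on), a FlexGroup spanning both nodes gets created with something
like:

volume create -vserver <svm> -volume <fg> -aggr-list <aggr1>,<aggr2> \
-aggr-list-multiplier 4 -size 10TB -junction-path /<fg>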

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



On Tue, Jul 7, 2020 at 3:12 PM Oliver Brakmann <oliver.brakmann+toasters@posteo.de> wrote:

> On 2020-07-07 20:51, Rue, Randy wrote:
> > We're currently running two SVMs, each with its own aggregate, one on
> > each node of a FAS8020 cluster running 9.1P6 (I know, I know).
>
> Conceptually, SVMs do not own aggregates. SVMs can generally use
> resources from all over the cluster. For aggregates, you can restrict
> that with the "aggr-list" property of the vserver object.
>
> > When I try to add a new volume to one SVM I can only see one aggregate
> > but I'd like to put that volume on the other aggr.
>
> That's probably the aggr-list property then. You can modify that list
> with the vserver modify command:
>
> vserver modify -vserver Vserver_name -aggr-list aggr_name[, aggr_name]
>
> > How can I tell if I'm running HW or SW based ownership?
>
> That won't be the issue. HW-based disk ownership has been deprecated in
> the 7.3 days, IIRC, and ONTAP 9 doesn't support it at all anymore.
>
> > To confirm, if I was running SW based should I be able to use either
> > aggr on either SVM?
> >
> > And last, is there any way to change from HW to SW without tearing it
> > all up?
>
> As mentioned above, irrelevant.
>
> Hope that helps,
> Oliver
Re: hardware or software based disk ownership?
>>>>> "tmac" == tmac <tmacmd@gmail.com> writes:

tmac> Also, if you are doing NAS data (and unstructured, like home
tmac> dirs, NOT databases or VMDKs) you should upgrade to 9.7P5 and
tmac> use FlexGroups which would utilize your entire system:
tmac> aggregates on both nodes, Networking on both nodes, CPU/RAM on
tmac> both nodes. Actually can improve performance!

Unfortunately there's no hope of me getting to that release any time
soon, but do DBs and VMDKs lose performance with FlexGroups? And does
it really help on just two-node clusters? I would assume it might
help on four-node clusters and up.

John


Re: hardware or software based disk ownership?
I was misremembering from my early days in pre-cluster mode.

Forget any mention of disk ownership :)

Can an aggregate be used by more than one SVM? If so, how? When I try to
add the aggregate to the other SVM, the command returns without error
but vserver show still shows the SVMs and their assigned aggregates
unchanged.


On 7/7/2020 12:10 PM, John Stoffel wrote:
> Randy,
> Check your configuration, you need to assign the aggregates to the
> vservers.
>
>
>
> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
>
> shows you the steps to take.
>
> vserver show -fields aggr-list
>
> vserver modify -vserver <vserver> -aggr-list <aggr1>,<aggr2>[,aggrN]
>
> John
Re: hardware or software based disk ownership?
In the current versions, it does not help. The goal of FlexGroups is to
distribute. DBs and VMDKs are generally large(r) files. FlexGroups work
with files and will distribute each file to a constituent, monitoring
space usage to help with the distribution.

When large files are in play, the whole file drops onto a single
FlexGroup member, and that skews the algorithms for placement of normal
(smaller) files a little.
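
If you're curious how the placement worked out, you should be able to
see the constituents and where they landed with something like this (if
I remember the flags right):

volume show -vserver <svm> -volume <fg>* -is-constituent true \
-fields aggregate,size,used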

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



On Tue, Jul 7, 2020 at 4:21 PM John Stoffel <john@stoffel.org> wrote:

> >>>>> "tmac" == tmac <tmacmd@gmail.com> writes:
>
> tmac> Also, if you are doing NAS data (and unstructured, like home
> tmac> dirs, NOT databases or VMDKs) you should upgrade to 9.7P5 and
> tmac> use FlexGroups which would utilize your entire system:
> tmac> aggregates on both nodes, Networking on both nodes, CPU/RAM on
> tmac> both nodes. Actually can improve performance!
>
> Unfortunately there's no hope of me getting to that release any time
> soon, but do DBs and VMDKs lose performance with FlexGroups? And does
> it really help on just two node clusters? I would assume it might
> help on four node clusters on up.
>
> John
Re: hardware or software based disk ownership?
Something may be getting lost here. Are you able to send any output?

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



On Tue, Jul 7, 2020 at 4:47 PM Rue, Randy <randyrue@gmail.com> wrote:

> I was misremembering from my early days in pre-cluster mode.
>
> Forget any mention of disk ownership :)
>
> Can an aggregate be used by more than one SVM? If so, how? When I try to
> add the aggregate to the other SVM, the command returns without error
> but vserver show still shows the SVMs and their assigned aggregates
> unchanged.
>
>
> On 7/7/2020 12:10 PM, John Stoffel wrote:
> > Randy,
> > Check your configuration, you need to assign the aggregates to the
> > vservers.
> >
> >
> >
> >
> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
> >
> > shows you the steps to take.
> >
> > vserver show -fields aggr-list
> >
> > vserver modify -vserver <vserver> -aggr-list
> <aggr1>,<aggr2>[,aggrN]
> >
> > John
Re: hardware or software based disk ownership?
I think I may just be looking at a limit of the system. If an aggregate
is already assigned to one SVM, can it also be assigned to another? That
is, can two different SVMs access the same aggregate?

Output:

netapp4::> vserver modify -vserver scharp_vm_storage -aggr-list
scharpdata,scharp_vm_storage

netapp4::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
netapp4     admin   -          -          -           -          -
netapp4-a   node    -          -          -           -          -
netapp4-b   node    -          -          -           -          -
scharp_kube data    default    running    running     scharp_ scharpdata
                                                      kube_root
scharp_vm_storage
            data    default    running    running     scharp_vm_ scharp_vm_
                                                      storage_ storage
                                                      root
scharpdata  data    default    running    running     scharpdata scharpdata
                                                      _root
6 entries were displayed.

netapp4::>

On 7/7/2020 2:06 PM, tmac wrote:
> Something may be getting lost here. Are you able to send any output?
>
> --tmac
>
> Tim McCarthy, Principal Consultant
>
> Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>
>
> I Blog at TMACsRack <https://tmacsrack.wordpress.com/>
>
>
>
>
> On Tue, Jul 7, 2020 at 4:47 PM Rue, Randy <randyrue@gmail.com> wrote:
>
> I was misremembering from my early days in pre-cluster mode.
>
> Forget any mention of disk ownership :)
>
> Can an aggregate be used by more than one SVM? If so, how? When I
> try to
> add the aggregate to the other SVM, the command returns without error
> but vserver show still shows the SVMs and their assigned aggregates
> unchanged.
>
>
> On 7/7/2020 12:10 PM, John Stoffel wrote:
> > Randy,
> > Check your configuration, you need to assign the aggregates to the
> > vservers.
> >
> >
> >
> >
> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
> >
> > shows you the steps to take.
> >
> >        vserver show -fields aggr-list
> >
> >        vserver modify -vserver <vserver> -aggr-list
> <aggr1>,<aggr2>[,aggrN]
> >
> > John
Re: hardware or software based disk ownership?
Yes, aggregates can of course be shared between multiple SVMs… (erhm… I think a little polite RTFM is in order here?)
And before you ask, you cannot use the two aggr0 root aggregates for any data volumes…

/Heino


From: Toasters <toasters-bounces@teaparty.net> on behalf of "Rue, Randy" <randyrue@gmail.com>
Date: Wednesday, 8 July 2020 at 00.23
To: tmac <tmacmd@gmail.com>
Cc: Toasters <toasters@teaparty.net>
Subject: Re: hardware or software based disk ownership?


I think I may just be looking at a limit of the system. If an aggregate is already assigned to one SVM, can it also be assigned to another? That is, can two different SVMs access the same aggregate?

Output:

netapp4::> vserver modify -vserver scharp_vm_storage -aggr-list scharpdata,scharp_vm_storage

netapp4::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
netapp4     admin   -          -          -           -          -
netapp4-a   node    -          -          -           -          -
netapp4-b   node    -          -          -           -          -
scharp_kube data    default    running    running     scharp_    scharpdata
                                                      kube_root
scharp_vm_storage
            data    default    running    running     scharp_vm_ scharp_vm_
                                                      storage_   storage
                                                      root
scharpdata  data    default    running    running     scharpdata scharpdata
                                                      _root
6 entries were displayed.

netapp4::>
On 7/7/2020 2:06 PM, tmac wrote:
Something may be getting lost here. Are you able to send any output?

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



On Tue, Jul 7, 2020 at 4:47 PM Rue, Randy <randyrue@gmail.com> wrote:
I was misremembering from my early days in pre-cluster mode.

Forget any mention of disk ownership :)

Can an aggregate be used by more than one SVM? If so, how? When I try to
add the aggregate to the other SVM, the command returns without error
but vserver show still shows the SVMs and their assigned aggregates
unchanged.


On 7/7/2020 12:10 PM, John Stoffel wrote:
> Randy,
> Check your configuration, you need to assign the aggregates to the
> vservers.
>
>
>
> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
>
> shows you the steps to take.
>
> vserver show -fields aggr-list
>
> vserver modify -vserver <vserver> -aggr-list <aggr1>,<aggr2>[,aggrN]
>
> John
Re: hardware or software based disk ownership?
Randy> I think I may just be looking at a limit of the system. If an
Randy> aggregate is already assigned to one SVM, can it also be
Randy> assigned to another? That is, can two different SVMs access the
Randy> same aggregate?

Yes. All you should need to do is:

vserver modify -vserver scharp_vm_storage -aggr-list scharpdata,scharp_vm_storage
vserver modify -vserver scharpdata -aggr-list scharpdata,scharp_vm_storage
vserver modify -vserver scharp_kube -aggr-list scharpdata,scharp_vm_storage

And now all three of your SVMs should be able to create volumes on
both aggregates. You can then test with:

vol create -vserver scharp_vm_storage -aggregate scharpdata -volume \
test -size 1g

But if it doesn't, send the full command you used and the output for
us to look at with you.

John


Randy> Output:

Randy> netapp4::> vserver modify -vserver scharp_vm_storage -aggr-list scharpdata,scharp_vm_storage

Randy> netapp4::> vserver show
Randy>                                Admin      Operational Root
Randy> Vserver     Type    Subtype    State      State       Volume     Aggregate
Randy> ----------- ------- ---------- ---------- ----------- ---------- ----------
Randy> netapp4     admin   -          -          -           -          -
Randy> netapp4-a   node    -          -          -           -          -
Randy> netapp4-b   node    -          -          -           -          -
Randy> scharp_kube data    default    running    running     scharp_    scharpdata
Randy>                                                       kube_root
Randy> scharp_vm_storage
Randy>             data    default    running    running     scharp_vm_ scharp_vm_
Randy>                                                       storage_   storage
Randy>                                                       root
Randy> scharpdata  data    default    running    running     scharpdata scharpdata
Randy>                                                       _root
Randy> 6 entries were displayed.

Randy> netapp4::>

Randy> On 7/7/2020 2:06 PM, tmac wrote:

Randy> Something may be getting lost here. Are you able to send any output?

Randy> --tmac

Randy> Tim McCarthy, Principal Consultant

Randy> Proud Member of the #NetAppATeam

Randy> I Blog at TMACsRack

Randy> On Tue, Jul 7, 2020 at 4:47 PM Rue, Randy <randyrue@gmail.com> wrote:

Randy> I was misremembering from my early days in pre-cluster mode.

Randy> Forget any mention of disk ownership :)

Randy> Can an aggregate be used by more than one SVM? If so, how? When I try to
Randy> add the aggregate to the other SVM, the command returns without error
Randy> but vserver show still shows the SVMs and their assigned aggregates
Randy> unchanged.

Randy> On 7/7/2020 12:10 PM, John Stoffel wrote:
>> Randy,
>> Check your configuration, you need to assign the aggregates to the
>> vservers.
>>
>>
>>
>>
Randy> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
>>
>> shows you the steps to take.
>>
>>         vserver show -fields aggr-list
>>
>>         vserver modify -vserver <vserver> -aggr-list <aggr1>,<aggr2>[,aggrN]
>>
>> John
Re: hardware or software based disk ownership?
gah.

Previously I added the aggr to the vm_storage SVM on the CLI and then
ran vserver show again and only saw the previous aggr listed. Also, if I
tried to create a new volume on that SVM in the webUI I was only
presented with one option, the original aggregate.

Now when I look in the webUI I can see both aggrs as an option for a new
volume. Maybe I just needed to wait for the dust to settle.

I'm calling this pair of n00b issues resolved (shared aggregates and
disk ownership). The painful part is that this n00b has been running
toasters for about fifteen years now. For the last five years I've been
branching into other things and clearly my toaster skills are aging.

Thanks all,

Randy


On 7/8/2020 6:34 AM, John Stoffel wrote:
>
> Randy> I think I may just be looking at a limit of the system. If an
> Randy> aggregate is already assigned to one SVM, can it also be
> Randy> assigned to another? That is, can two different SVMs access the
> Randy> same aggregate?
>
> Yes. All you should need to do is:
>
> vserver modify -vserver scharp_vm_storage -aggr-list scharpdata,scharp_vm_storage
> vserver modify -vserver scharpdata -aggr-list scharpdata,scharp_vm_storage
> vserver modify -vserver scharp_kube -aggr-list scharpdata,scharp_vm_storage
>
> And now all three of your SVMs should be able to create volumes on
> both aggregates. You can then test with:
>
> vol create -vserver scharp_vm_storage -aggregate scharpdata -volume \
> test -size 1g
>
> But if it doesn't, send the full command you used and the output for
> us to look at with you.
>
> John
>
>
> Randy> Output:
>
> Randy> netapp4::> vserver modify -vserver scharp_vm_storage -aggr-list scharpdata,scharp_vm_storage
>
> Randy> netapp4::> vserver show
> Randy>                                Admin      Operational Root
> Randy> Vserver     Type    Subtype    State      State       Volume     Aggregate
> Randy> ----------- ------- ---------- ---------- ----------- ---------- ----------
> Randy> netapp4     admin   -          -          -           -          -
> Randy> netapp4-a   node    -          -          -           -          -
> Randy> netapp4-b   node    -          -          -           -          -
> Randy> scharp_kube data    default    running    running     scharp_    scharpdata
> Randy>                                                       kube_root
> Randy> scharp_vm_storage
> Randy>             data    default    running    running     scharp_vm_ scharp_vm_
> Randy>                                                       storage_   storage
> Randy>                                                       root
> Randy> scharpdata  data    default    running    running     scharpdata scharpdata
> Randy>                                                       _root
> Randy> 6 entries were displayed.
>
> Randy> netapp4::>
>
> Randy> On 7/7/2020 2:06 PM, tmac wrote:
>
> Randy> Something may be getting lost here. Are you able to send any output?
>
> Randy> --tmac
>
> Randy> Tim McCarthy, Principal Consultant
>
> Randy> Proud Member of the #NetAppATeam
>
> Randy> I Blog at TMACsRack
>
> Randy> On Tue, Jul 7, 2020 at 4:47 PM Rue, Randy <randyrue@gmail.com> wrote:
>
> Randy> I was misremembering from my early days in pre-cluster mode.
>
> Randy> Forget any mention of disk ownership :)
>
> Randy> Can an aggregate be used by more than one SVM? If so, how? When I try to
> Randy> add the aggregate to the other SVM, the command returns without error
> Randy> but vserver show still shows the SVMs and their assigned aggregates
> Randy> unchanged.
>
> Randy> On 7/7/2020 12:10 PM, John Stoffel wrote:
>>> Randy,
>>> Check your configuration, you need to assign the aggregates to the
>>> vservers.
>>>
>>>
>>>
>>>
> Randy> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
>>> shows you the steps to take.
>>>
>>>         vserver show -fields aggr-list
>>>
>>>         vserver modify -vserver <vserver> -aggr-list <aggr1>,<aggr2>[,aggrN]
>>>
>>> John
Re: hardware or software based disk ownership?
I always cringe when I see a timid newcomer get spanked and I appreciate
the tact and diplomacy of your gentle RTFM :)

In this case I'd read the docs on managing aggrs and was confused by the
system not behaving like I'd expect. Admittedly whatever was going wrong
was likely carbon-based.

All is well...


On 7/7/2020 3:28 PM, Heino Walther wrote:
>
> Yes, aggregates can of course be shared between multiple SVMs… (erhm… I
> think a little polite RTFM is in order here?)
>
> And before you ask, you cannot use the two aggr0 root aggregates for
> any data volumes…
>
> /Heino
>
> From: Toasters <toasters-bounces@teaparty.net> on behalf of "Rue, Randy" <randyrue@gmail.com>
> Date: Wednesday, 8 July 2020 at 00.23
> To: tmac <tmacmd@gmail.com>
> Cc: Toasters <toasters@teaparty.net>
> Subject: Re: hardware or software based disk ownership?
>
> I think I may just be looking at a limit of the system. If an
> aggregate is already assigned to one SVM, can it also be assigned to
> another? That is, can two different SVMs access the same aggregate?
>
> Output:
>
> netapp4::> vserver modify -vserver scharp_vm_storage -aggr-list
> scharpdata,scharp_vm_storage
>
> netapp4::> vserver show
>                                Admin      Operational Root
> Vserver     Type    Subtype    State      State       Volume     Aggregate
> ----------- ------- ---------- ---------- ----------- ---------- ----------
> netapp4     admin   -          -          -           -          -
> netapp4-a   node    -          -          -           -          -
> netapp4-b   node    -          -          -           -          -
> scharp_kube data    default    running    running     scharp_    scharpdata
>                                                       kube_root
> scharp_vm_storage
>             data    default    running    running     scharp_vm_ scharp_vm_
>                                                       storage_   storage
>                                                       root
> scharpdata  data    default    running    running     scharpdata scharpdata
>                                                       _root
> 6 entries were displayed.
>
> netapp4::>
>
> On 7/7/2020 2:06 PM, tmac wrote:
>
> Something may be getting lost here. Are you able to send any output?
>
>
> --tmac
>
> Tim McCarthy, Principal Consultant
>
> Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>
>
> I Blog at TMACsRack <https://tmacsrack.wordpress.com/>
>
> On Tue, Jul 7, 2020 at 4:47 PM Rue, Randy <randyrue@gmail.com> wrote:
>
> I was misremembering from my early days in pre-cluster mode.
>
> Forget any mention of disk ownership :)
>
> Can an aggregate be used by more than one SVM? If so, how?
> When I try to
> add the aggregate to the other SVM, the command returns
> without error
> but vserver show still shows the SVMs and their assigned
> aggregates
> unchanged.
>
>
> On 7/7/2020 12:10 PM, John Stoffel wrote:
> > Randy,
> > Check your configuration, you need to assign the aggregates
> to the
> > vservers.
> >
> >
> >
> >
> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
> >
> > shows you the steps to take.
> >
> >        vserver show -fields aggr-list
> >
> >        vserver modify -vserver <vserver> -aggr-list
> <aggr1>,<aggr2>[,aggrN]
> >
> > John
Re: hardware or software based disk ownership?
>>>>> "Randy" == Randy Rue <randyrue@gmail.com> writes:

Randy> gah.

Randy> Previously I added the aggr to the vm_storage SVM on the CLI and then
Randy> ran vserver show again and only saw the previous aggr listed. Also, if I
Randy> tried to create a new volume on that SVM in the webUI I was only
Randy> presented with one option, the original aggregate.

Randy> Now when I look in the webUI I can see both aggrs as an option for a new
Randy> volume. Maybe I just needed to wait for the dust to settle.

Randy> I'm calling this pair of n00b issues resolved (shared aggregates and
Randy> disk ownership). The painful part is that this n00b has been running
Randy> toasters for about fifteen years now. For the last five years I've been
Randy> branching into other things and clearly my toaster skills are aging.

I suspect that someone tried to get a little too smart and limit
VServers to specific aggregates in the mistaken belief that it would
improve performance.

In my environment, all but one of my vservers has '-' for the
aggr-list setting, and the final one has all of my aggregates listed.
So I think the even *better* answer for you, especially if you add
aggregates in the future, is to just do:

vserver modify -vserver <VSERVER> -aggr-list -

instead for all of your aggregates.
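
and then confirm the result with:

vserver show -fields aggr-list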

John




Randy> On 7/8/2020 6:34 AM, John Stoffel wrote:
>>
Randy> I think I may just be looking at a limit of the system. If an
Randy> aggregate is already assigned to one SVM, can it also be
Randy> assigned to another? That is, can two different SVMs access the
Randy> same aggregate?
>>
>> Yes. All you should need to do is:
>>
>> vserver modify -vserver scharp_vm_storage -aggr-list scharpdata,scharp_vm_storage
>> vserver modify -vserver scharpdata -aggr-list scharpdata,scharp_vm_storage
>> vserver modify -vserver scharp_kube -aggr-list scharpdata,scharp_vm_storage
>>
>> And now all three of your SVMs should be able to create volumes on
>> both aggregates. You can then test with:
>>
>> vol create -vserver scharp_vm_storage -aggregate scharpdata -volume \
>> test -size 1g
>>
>> But if it doesn't, send the full command you used and the output for
>> us to look at with you.
>>
>> John
>>
>>
Randy> Output:
>>
Randy> netapp4::> vserver modify -vserver scharp_vm_storage -aggr-list scharpdata,scharp_vm_storage
>>
Randy> netapp4::> vserver show
Randy>                                Admin      Operational Root
Randy> Vserver     Type    Subtype    State      State       Volume     Aggregate
Randy> ----------- ------- ---------- ---------- ----------- ---------- ----------
Randy> netapp4     admin   -          -          -           -          -
Randy> netapp4-a   node    -          -          -           -          -
Randy> netapp4-b   node    -          -          -           -          -
Randy> scharp_kube data    default    running    running     scharp_    scharpdata
Randy>                                                       kube_root
Randy> scharp_vm_storage
Randy>             data    default    running    running     scharp_vm_ scharp_vm_
Randy>                                                       storage_   storage
Randy>                                                       root
Randy> scharpdata  data    default    running    running     scharpdata scharpdata
Randy>                                                       _root
Randy> 6 entries were displayed.
>>
Randy> netapp4::>
>>
Randy> On 7/7/2020 2:06 PM, tmac wrote:
>>
Randy> Something may be getting lost here. Are you able to send any output?
>>
Randy> --tmac
>>
Randy> Tim McCarthy, Principal Consultant
>>
Randy> Proud Member of the #NetAppATeam
>>
Randy> I Blog at TMACsRack
>>
Randy> On Tue, Jul 7, 2020 at 4:47 PM Rue, Randy <randyrue@gmail.com> wrote:
>>
Randy> I was misremembering from my early days in pre-cluster mode.
>>
Randy> Forget any mention of disk ownership :)
>>
Randy> Can an aggregate be used by more than one SVM? If so, how? When I try to
Randy> add the aggregate to the other SVM, the command returns without error
Randy> but vserver show still shows the SVMs and their assigned aggregates
Randy> unchanged.
>>
Randy> On 7/7/2020 12:10 PM, John Stoffel wrote:
>>>> Randy,
>>>> Check your configuration, you need to assign the aggregates to the
>>>> vservers.
>>>>
>>>>
>>>>
>>>>
Randy> https://library.netapp.com/ecmdocs/ECMP1196912/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
>>>> shows you the steps to take.
>>>>
>>>>         vserver show -fields aggr-list
>>>>
>>>>         vserver modify -vserver <vserver> -aggr-list <aggr1>,<aggr2>[,aggrN]
>>>>
>>>> John

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters