Mailing List Archive

AFF-A220 9.8P1 default aggregates and root volumes
Hello,

We're setting up a new and relatively small SSD NAS and it arrived
configured with two equally sized tiers, with each node and its root
aggr/vol in its own tier. Each tier is about 15TB in size before compression.

We're hoping to avoid having to manage two equally sized data aggrs and
move data volumes around to balance them. For one thing, our largest
data volumes are larger than 15TB, and SnapMirror doesn't seem to want to
let us set up a relationship to load the new volumes from our old
cluster, even if the target has ample room after the guaranteed 3X
compression.

We're willing to accept the wasted space involved in creating one
tier/partition/aggr/root volume with the minimum number of disks for
RAID-DP (3?) for one node, if that will allow us to put the rest in the
other set and have a single large container for our unstructured file
volumes.

We tried moving all volumes to one tier and deleting the other. But one
node is still sitting on those disks.

Our old cluster is at 9.1P6 and I'm clear that some basic concepts have
changed with the introduction of partitions and whatnot. So bear with me
if I'm asking n00b questions even after a few years running NetApp gear.

* Is what I've proposed above reasonable? (one minimum aggr and one
large one) Is it commonly done? Is it a good idea?
* Can you point me to any "change notes" type doc that explains these
new terms/objects to an otherwise experienced NetApp admin?
* If the above is viable, what do I need to do to get there?

For what it's worth, I've been noodling a bit with the "system node
migrate-root" command (this new AFF is not in production yet) and got a
warning that my target disks don't have a spare root partition (I
specified some of the disks on the "old" aggr). That warning says I can
find available disks with the command "storage disk partition show
-owner-node-name redacted-a -container-type spare -is-root true", but the
CLI then complains that "partition" is not a command (I'm at "advanced"
privilege level). Is the given command correct?

Hope to hear from you,


Randy in Seattle
Re: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
If it is an all-SSD unit (affectionately known as an AFF, or All Flash FAS),
then what you likely have is something called root-data-data partitioning.

There are 3 partitions:
P1 -> node 1
P2 -> node 2
P3 -> root

What happens (in most cases) is that half of the P3 partitions are given to
each node. A minimal root aggregate is created, leaving at least one (but in
most cases two) spare root partitions.
All the P1 partitions end up belonging to node 1 and all the P2 partitions
belong to node 2. This balances performance and capacity.

Should you want to break that, you can use the "disk option" command to
turn auto-assign off for both nodes.
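Something like this should do it (a sketch; the node names are placeholders,
and exact option names can vary a little by ONTAP release):

storage disk option modify -node node1 -autoassign off
storage disk option modify -node node2 -autoassign off
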
Destroy your two data aggregates, then remove ownership of all "data 2"
partitions:

set advanced
disk removeowner -data2 true -disk x.y.z
(repeat for all disks in the system)

Then change the ownership:
disk assign -data2 true -node node1 -disk x.y.z
(repeat for all disks in the system)

You could then try auto-provisioning:
aggregate auto-provision -node node1 -verbose true
(if you don't like the raid layout, you can manually create your aggregate)
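If you do end up building the aggregate by hand, it looks roughly like this
(a sketch; the aggregate name, disk count and RAID type are placeholders you
would adjust to your layout):

storage aggregate create -aggregate aggr1_node1 -node node1 -diskcount 20 -raidtype raid_dp
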

I suspect by default, if the partition size is >6TB (or maybe 8TB), the
system will automatically use RAID-TEC (triple-parity RAID).
Otherwise, the system will use RAID-DP and limit the RAID group size.

With that said, why? With newer versions of ONTAP, you could take advantage
of FlexGroups (a volume that spans one or more controllers and one or more
aggregates).
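As a rough sketch (the SVM, volume and aggregate names here are made up), a
FlexGroup spread across both data aggregates would be created with something
like:

volume create -vserver svm1 -volume fg_data -aggr-list aggr1_node1,aggr1_node2 -aggr-list-multiplier 4 -size 20TB -junction-path /fg_data
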

Are you able to update the source to ONTAP 9.3 (to take advantage of XDP),
which might let you squeeze that data in?
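For reference, an XDP relationship is created the same way as before, just
with the type changed. A sketch with placeholder SVM/volume names:

snapmirror create -source-path src_svm:big_vol -destination-path dst_svm:big_vol -type XDP -policy MirrorAllSnapshots
snapmirror initialize -destination-path dst_svm:big_vol
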

Personally, it seems like a FAS unit with SATA drives would be a better fit
for a large SnapMirror destination.

Summary:

- Is what I've proposed above reasonable? (one minimum aggr and one
  large one) Is it commonly done? Is it a good idea?
  - It can be done. I personally avoid it whenever possible. Only
    implement it for edge cases.
  - Think of it this way: you have two controllers in active/active and
    you are forcing them to basically be active/passive.
  - You are unable to take full advantage of the second controller!
- Can you point me to any "change notes" type doc that explains these
  new terms/objects to an otherwise experienced NetApp admin?
  - https://library.netapp.com/ecmdocs/ECMLP2492508/html/frameset.html
    (release notes for ONTAP, 9.0 through 9.8)
- If the above is viable, what do I need to do to get there?
  - Yes (if you really want to). Clues above!




--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
Hi Randy

There are two ways you can go… the correct way, and the “hacky” way…
When you boot up the system with the serial console or Service Processor attached, you can press “Ctrl-C” to get into the Maintenance menu, from where you can zero all disks on the system.
This will build two root aggregates, one for each controller. They are typically built from 4 disks that are partitioned as “root-data-data” disks, that is, a small root partition and two equal-sized data partitions.
Disk sizes may vary depending on your disks and your controller model. (I think more disks are used… maybe 8…)
All other disks are not touched and are therefore spares, and the 2 x 4 data partitions are also presented as spare “disks” half the size of the physical disk (minus the small root partition).
Each controller requires a root aggregate, no matter what.
If you would like to have just one data aggregate on one of the controllers, you can do so, but be aware that if you start your aggregate with the partitioned disks and add non-partitioned disks later on, the new disks will most likely be partitioned by default, and the other partition will be assigned to the partner controller.
One way to get around this is to not use the partitioned disks at all and just start your new aggregate with unpartitioned disks, which you will have to assign to one of the controllers.
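Assigning an unpartitioned spare to one controller looks something like this (the disk ID and node name are placeholders):

storage disk assign -disk 1.0.23 -owner controller-a
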
If you would like to use the partitioned disks, you can create a new raidgroup in the same aggregate using the partitions from one of the systems…
You will then have the partitions that are assigned to the passive node… (this is where it gets hacky)…
You are in fact able to assign these partitions to the same node, and you are able to add them to the same RAID group as the other partitions… so a RAID group consisting of partitions of the same disk.
If one disk fails you will have two partitions fail inside your RAID group… which is a bit scary to me… so I would suggest creating a separate RAID group for them…

So, an example… a system with 24 disks… let's say the disks are 10TB each…
Ctrl-A:
RootAggr: Consists of 4 partitions (10G each)
Data Aggr: Consists of:
RAID Group 0: 19 x Physical disks (RAID-DP) 170TB
RAID Group 1: 4 x Partitions (RAID-DP) 9.98TB (10/2 minus 10G)
RAID Group 2: 4 x Partitions (RAID-DP) 9.98TB (10/2 minus 10G)
Ctrl-B:
RootAggr: Consists of 4 partitions (10G each)

I hope this makes sense… keep in mind that this example is not exactly how it is going to look, as there will be more partitioned disks, because as a minimum the system needs 4 partitions for the root aggregate… so it will more likely be 8 disks that are partitioned…
(I just didn’t want to correct the numbers above)

Now… that being said… there are ways to limit the number of partitions or disks used for the root aggregate…. Once you are up and running with the defaults, you can create two new root aggregates that are either smaller in size or use RAID4 instead of RAID-DP… of course with the increased risk if a disk dies…. (the information on the root aggregate can and should be backed up, and is pretty easy to restore if it should fail). Your data aggregates should be no less than RAID-DP.
The way to create smaller root aggregates and to create your own partition sizes involves fairly hairy commands which require “priv set diag”… so unless you know what you are doing, I would advise against it. (but it is possible)

Basically I would suggest the default two-node half-and-half setup… maybe you can use some other features in order to spread your load across both aggregates? I am pretty sure that if you are just using CIFS or NFS, you should be able to “merge” two volumes (one from each controller) into one logical workspace… But since I have not been working with this that much, I would let someone else explain this part… (I’m pretty sure you can even set this up from the GUI…)

/Heino




Re: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
I'm really glad I asked, and that you answered!

This is indeed an AFF and matches what you describe below.

Sounds like FlexGroups are a perfect example of the kind of new
architecture I needed to know about, allowing us to stick with the
default disk setup and still have the volumes we need.

Note we're only using snapmirror to populate the new AFF. If the old
cluster had a newer version on it, I could set the targets as caching
volumes instead of DP volumes, but we're trying to retire the old array
ASAP.

I'll re-create that tier and start reading up on FlexGroups.

Will holler here if I hit any more bumps.

Thanks!

Re: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
Using "whole" or non-partitioned drives will use a minimum of 6 full drives
(at least to start, P+P+D with RAID-DP and drive size <10TB).
If the drive size is 10T or larger, RAID-TEC is used instead.

Not knowing the size of the drives/SSDs in question makes this a little
difficult to answer.
It is trivial to get around the whole-drive/partitioned-drive behavior when
adding drives in the future.
Set the "maxraidsize" to whatever is in the raidgroup now. Any future
drives are always added as whole drives unless they are pre-partitioned.
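Something along these lines (a sketch; the aggregate name and the size value
are placeholders):

storage aggregate modify -aggregate aggr1_node1 -maxraidsize 20
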

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
I think we agree that FlexGroups are the way forward, but one will have to take care and verify that stuff like SnapMirror/SnapVault still works (if it is used)… also, I think FlexGroups do not support LUNs?

Anyway, you are correct about the partitions… there are several ways to change the configuration… I have had many more or less hacky setups, so normally it is possible to go outside the “box” NetApp has set as default.
An example is the possibility to create a root aggregate with RAID4… and a way more hacky and “internal only” way is to create your own partition sizes, which involves undocumented commands…. But it’s possible..
But I have had issues with these hacky setups… for example, it is possible to force two partitions from the same disk into the same RAID group… it’s not clever, but possible nonetheless. But this will cause issues if you were to play around with “disk replace” later on… I had an issue with this, and NetApp support was required in order to fix it… basically the disk replace command hangs, waiting for something… which causes all rebuild tasks etc. to stall…. I think this was a bug in ONTAP, but I bet they are not making much effort to fix it, because if you choose to “hack” this much, you are basically on your own.. but as always, NetApp support is ready to help, no questions asked.

One other remark… I really hope that NetApp moves the root aggregate onto local flash cards in the future, so that we can get rid of these partitions. Or that we could get back to the good old days with root volumes (“vol0”), which is not likely to happen.

/Heino

Re: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
We're not serving any LUNs and have no plans to.

For all my weird-ass questions I'm actually a big fan of sticking with
defaults unless there's a true need to "go snowflake" and our
application is not exotic, pretty much just a plain old NFS NAS that
serves up files and Xen datastores.

Testing now whether I can create a flexgroup as a DP type and make it a
snapmirror target.

Re: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
Your source and destination both have to be FlexGroups for SnapMirror.

I've asked a long time ago to not have that requirement.


Plus you have to match geometry (so the same number of constituent volumes).
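
For what it's worth, the destination is typically created as a DP-type
FlexGroup with the same constituent count as the source, and then the
relationship is created as usual. A rough sketch with placeholder names
(check the FlexGroup/SnapMirror docs for your exact release):

volume create -vserver dst_svm -volume fg_dst -aggr-list aggr1_node1,aggr1_node2 -aggr-list-multiplier 4 -size 20TB -type DP
snapmirror create -source-path src_svm:fg_src -destination-path dst_svm:fg_dst -type XDP -policy MirrorAllSnapshots
snapmirror initialize -destination-path dst_svm:fg_dst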


Re: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
You didn't say...
What is the current source hardware? Can it be upgraded to 9.3?
You can use XDP for replication at that point. It is usually better in most
cases.

--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



On Mon, Apr 26, 2021 at 12:14 PM Randy Rue <randyrue@gmail.com> wrote:

> We're not serving any LUNs and have no plans to.
>
> For all my weird-ass questions I'm actually a big fan of sticking with
> defaults unless there's a true need to "go snowflake" and our application
> is not exotic, pretty much just a plain old NFS NAS that serves up files
> and Xen datastores.
>
> Testing now whether I can create a flexgroup as a DP type and make it a
> snapmirror target.
> On 4/26/2021 7:58 AM, Heino Walther wrote:
>
> I think we agree that FlexGroups are the way forward, but one will have to
> take care and verify that stuff like SnapMirror/Vault still works (if it
> is used)… also, I think FlexGroups do not support LUNs?
>
>
>
> Anyway, you are correct about the partitions… there are several ways to
> change the configuration… I have had many more or less hacky setups, so
> normally it is possible to go outside the “box” NetApp has set as the default.
>
> One example is the possibility of creating a root aggregate with RAID4… and a
> far more hacky, “internal only” way is to create your own partition
> sizes, which involves undocumented commands… but it’s possible.
>
> But I have had issues with these hacky setups… for example, it is possible
> to force two partitions from the same disk into the same RAID group… it’s
> not clever, but possible nonetheless. But this will cause issues if
> you were to play around with “disk replace” later on… I had an issue with
> this, and NetApp support was required in order to fix it… basically the
> disk replace commands hang, waiting for something… which causes all
> rebuild tasks etc. to stall… I think this was a bug in ONTAP, but I bet they
> are not making much effort to fix it, because if you choose to “hack” this
> much, you are basically on your own… but as always, NetApp support is
> ready to help, no questions asked.
>
>
>
> One other remark… I really hope that NetApp moves the root aggregate
> onto local flash cards in the future, so that we can get rid of these
> partitions. Or that we can get back to the good old days with root
> volumes “vol0” (which is not likely to happen).
>
>
>
> /Heino
>
>
>
> *Fra: *tmac <tmacmd@gmail.com> <tmacmd@gmail.com>
> *Dato: *mandag, 26. april 2021 kl. 16.46
> *Til: *Heino Walther <hw@beardmann.dk> <hw@beardmann.dk>
> *Cc: *Rue, Randy <randyrue@gmail.com> <randyrue@gmail.com>, Toasters
> <toasters@teaparty.net> <toasters@teaparty.net>
> *Emne: *Re: AFF-A220 9.8P1 default aggregates and root volumes
>
> Using "whole" or non-partitioned drives will use a minimum of 6 full
> drives (at least to start, P+P+D with RAID-DP and drive size <10TB).
>
> If the drive size is 10T or larger, RAID-TEC is used instead.
>
>
>
> Not knowing the size of the drives/SSDs in question makes this a little
> difficult to answer.
>
> It is trivial to get around the whole-drive/partitioned-drive split when
> adding drives in the future.
>
> Set the "maxraidsize" to whatever is in the raid group now. Any future
> drives are always added as whole drives unless they are pre-partitioned.
>
>
> --tmac
>
>
>
> *Tim McCarthy, **Principal Consultant*
>
> *Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>*
>
> *I Blog at **TMACsRack <https://tmacsrack.wordpress.com/>*
>
>
>
>
>
>
>
Re: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
And our source FAS is at 9.1, and it's no longer on NetApp support (we have
HW support only from a third party), so no upgrades.


I'm liking rsync more and more.
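
If it comes to that, a host-based copy is straightforward; a rough sketch, assuming both the old and new NFS exports are mounted on a Linux host at the placeholder paths /mnt/old/vol1 and /mnt/new/vol1:

rsync -aHAX --numeric-ids /mnt/old/vol1/ /mnt/new/vol1/
(repeat per volume; run it again shortly before cutover so the final pass only has to move recent changes)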




On 4/26/2021 9:20 AM, Jeff Bryer wrote:
>
> Your source and destination both have to be a FlexGroup for SnapMirror.
>
> I asked a long time ago for that requirement to be dropped.
>
>
> Plus you have to match geometry (so the same number of constituent volumes).
>
Re: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
If you like rsync...check out XDP on the NetApp support site.

--tmac

*Tim McCarthy, **Principal Consultant*

*Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>*

*I Blog at TMACsRack <https://tmacsrack.wordpress.com/>*



SV: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
If you are migrating from one NetApp system to a new one, I think that if you ask NetApp, they will let you upgrade ONTAP on your old system.
It's the same ONTAP image files as you use on your new system… so you might just "fix" it yourself. A rough outline is below.
But of course maybe the system cannot be upgraded… I think some of the older models cannot go past 9.1.
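
Roughly, assuming the old cluster can reach a web server where you have staged the image (the URL, file name and target version below are placeholders):

cluster image package get -url http://webserver/ontap/93_image.tgz
cluster image package show-repository
cluster image validate -version 9.3
cluster image update -version 9.3
cluster image show-update-progress
(check the Hardware Universe first to confirm the old controllers actually support the target release)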

/Heino

RE: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
XCP. XDP is the SnapMirror data protection engine.

http://xcp.netapp.com
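
A rough sketch of an XCP-based NFS migration, assuming XCP is installed and licensed on a Linux host and using placeholder export paths:

xcp scan -stats old-filer:/vol1
xcp copy -newid vol1_migr -parallel 8 old-filer:/vol1 new-filer:/vol1
xcp sync -id vol1_migr
xcp verify old-filer:/vol1 new-filer:/vol1
(run the sync again just before cutover so the final catch-up is small, then verify; XCP walks the tree in parallel, so it is typically much faster than a single rsync stream for large file counts)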



Re: SV: AFF-A220 9.8P1 default aggregates and root volumes [ In reply to ]
Yeah... thanks, Justin. Blame auto-correct for the mistake, again!
--tmac

*Tim McCarthy, **Principal Consultant*

*Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>*

*I Blog at TMACsRack <https://tmacsrack.wordpress.com/>*


