Mailing List Archive

volume move performance issues
Hi there

We are in the process of migrating from an AFF8060 to an AFF220.
We have attached two CN1610 switches and we have started to migrate several volumes over… (NFS and CIFS)
Because we want to do the cut-over in a service window we have done it with the command:

volume move start -vserver vserver -volume volme -cutover-action wait

We currently have 10 volumes waiting.
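
For reference, the waiting moves can be listed, and the cutover later triggered per volume, with something like this (exact field names may vary a bit between ONTAP releases; "vserver" and "volme" are just the placeholders from the command above):

volume move show -vserver vserver -fields volume,state,percent-complete
volume move trigger-cutover -vserver vserver -volume volme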

Users have started to complain about poor performance…

I think this is because the volume move, while in “wait” status, takes a snapshot every minute and updates the mirror.
(I can verify this with a “snapshot show”.)
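
The snapshots and their creation times show up with something like:

volume snapshot show -vserver vserver -volume volme -fields snapshot,create-time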

Is there a way to modify this, so that it does not update so often?
Of course, when we get closer to our service window we would like to set it back to 1 minute again…

There is a “volume move pause”, but I would rather modify the frequency of the snapmirror updates if possible?
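
For completeness, pausing a move and resuming it later would look roughly like this:

volume move pause -vserver vserver -volume volme
volume move resume -vserver vserver -volume volme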

/Heino


Heino Walther<https://www.linkedin.com/in/heinowalther/>
Beardmann ApS<http://beardmann.dk/>
Jellingvej 9 - 7100 Vejle<https://goo.gl/maps/xQVPFMHXpXu>

D: 7199 9060 M: 2075 7501
--
Re: volume move performance issues
>>>>> "Heino" == Heino Walther <hw@beardmann.dk> writes:

Heino> We are in the process of migrating from an AFF8060 to an AFF220.

Heino> We have attached two CN1610 switches and we have started to
Heino> migrate several volumes over… (NFS and CIFS)

Heino> Because we want to do the cut-over in a service window we have
Heino> done it with the command:

Heino> volume move start -vserver vserver -volume volme -cutover-action wait

Heino> We currently have 10 volumes waiting.

Why are you waiting? Are they *that* active that users will notice
the cutover interruption?

Heino> Users have started to complain about poor performance…

Heino> I think this is due to the fact that the volume move while in
Heino> “wait” status does a snapshot every minute, and updates the
Heino> mirror.

Ugh, this is a new feature I don't see in my 9.3 cluster...

Heino> (I can verify this with a “snapshot show”..

Heino> Is there a way to modify this, so that it does not update so often?

Heino> Of course when we get closer to our service window we would like
Heino> to set it back to 1 minute again…

Heino> There is a “volume move pause” but I would rather modify the
Heino> frequency of the snapmirror updates if possible?

Again, why wait? I've always found that the cutover impact is mostly
from the cloning onto the new destination, not the cutover.

But are you adding the AFF220 into the existing cluster, or just doing
snapmirrors? It implies it's all one big cluster now... so why wait
for the window?

John

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
https://www.teaparty.net/mailman/listinfo/toasters
SV: volume move performance issues
Hi John

We were waiting mostly because the customer wanted to do the cut-over in a service window… so we did the initial mirroring first, which takes the longest, and then the cut-overs in a service window… But as it turns out, the cut-overs actually take quite some time on larger volumes, even with AFF systems at both ends…

Basically, “vol move show -instance” reports “Preparing source volume for cutover: Deduplication is not ready for volmove cutover”.
The Estimated Remaining Time just keeps climbing… but it does complete eventually.
Tonight I have managed to move about 40TB so far in three hours… It might be possible if I did the cut-over on all volumes at once…
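
For anyone who hits the same message, the efficiency (deduplication) state that is holding up the cutover can be checked on the source volume with something like:

volume efficiency show -vserver vserver -volume volme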

I think the -cutover-action option is also there in 9.3, but you have to be in “diag” or “advanced” mode to get it…
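
In other words, something along these lines should expose it there:

set -privilege advanced
volume move start -vserver vserver -volume volme -cutover-action wait
set -privilege admin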

Again, all the waiting with the cut-over is mainly because the customer wants to be 100% sure that there is no disruption outside the service windows…
(if it were my own setup, I'd have been done months ago)

BTW: The performance issue was not because of the storage system at all… some network guy *ucked up some DNS records… but as always, everyone looks angrily towards the storage guy.

/Heino


From: John Stoffel <john@stoffel.org>
Date: Tuesday, 2 March 2021 at 19:39
To: Heino Walther <hw@beardmann.dk>
Cc: toasters@teaparty.net <toasters@teaparty.net>
Subject: Re: volume move performance issues
>>>>> "Heino" == Heino Walther <hw@beardmann.dk> writes:

Heino> We are in the process of migrating from an AFF8060 to an AFF220.

Heino> We have attached two CN1610 switches and we have started to
Heino> migrate several volumes over… (NFS and CIFS)

Heino> Because we want to do the cut-over in a service window we have
Heino> done it with the command:

Heino> volume move start -vserver vserver -volume volme -cutover-action wait

Heino> We currently have 10 volumes waiting.

Why are you waiting? Are they *that* active that users will notice
the cutover interruption?

Heino> Users have started to complain about poor performance…

Heino> I think this is due to the fact that the volume move while in
Heino> “wait” status does a snapshot every minute, and updates the
Heino> mirror.

Ugh, this is a new feature I don't see in my 9.3 cluster...

Heino> (I can verify this with a “snapshot show”..

Heino> Is there a way to modify this, so that it does not update so often?

Heino> Of course when we get closer to our service window we would like
Heino> to set it back to 1 minute again…

Heino> There is a “volume move pause” but I would rather modify the
Heino> frequency of the snapmirror updates if possible?

Again, why wait? I've always found that the cutover impact is mostly
from the cloning onto the new destination, not the cutover.

But are you adding the AFF220 into the existing cluster, or just doing
snapmirrors? It implies it's all one big cluster now... so why wait
for the window?

John