Mailing List Archive

Distribution Function & Jumbo Frames
I've got a four-node cluster where each node has a 10Gb ifgrp connected to
LACP port-channels on Nexus 9Ks. Each of the ifgrps has its distribution
function set to ip. My first question is: should I tear these ifgrps down
to change their distribution function to port? Next, all of these
port-channels and ifgrps have MTU 1500. I'd of course like to set them to
9000. The process, I believe, is to raise the MTU on the port-channels to
9216, raise the MTU on the ifgrps to 9000, and then raise the MTUs on the
broadcast domains to 9000. My question is: can I do this without an
outage? I've tested this on another cluster, but it unfortunately doesn't
have ESXi (NFS), Oracle (NFS), or SQL Server (iSCSI) clients on it. There
seems to be a small outage when the ifgrp is changed, but I can't
determine whether that will cause any issues on the client side.

Thanks as always for the help.

--Carl
Re: Distribution Function & Jumbo Frames [ In reply to ]
It is very minor. I would VERIFY, though, that the switch ports are set
properly:
"spanning-tree port type edge trunk"
--> I presume you are using VLAN tags for ESXi/NFS.

This allows the ports to come up quickly rather than waiting for STP to do
its thing (45 seconds or so).
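As a sketch, the relevant Nexus-side config would look something like this
(the port-channel number and description are hypothetical; adjust to your
environment):

  interface port-channel10
    description To-NetApp-node1-a0a
    switchport mode trunk
    spanning-tree port type edge trunk

The "edge trunk" type tells the switch this is a host-facing trunk, so the
port goes straight to forwarding instead of walking through the STP
listening/learning states.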

You would need to modify the switch first. Most NX-OS code already has
jumbo frames enabled. You should verify just in case.
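A quick way to check, and fix if needed, on a Nexus 9K (port-channel
number again hypothetical):

  show interface port-channel10 | include MTU
  --> if it reports 1500, raise it under the port-channel:
  interface port-channel10
    mtu 9216

9216 on the switch leaves headroom above the 9000 you will set on the
NetApp side, so the switch never has to fragment or drop a full-size
jumbo frame.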

Then you will need to set the base interface (ifgrp/a0a) to an MTU of
9000. If the port is in a broadcast domain but not in use by anything, I
would remove it from the broadcast domain.
Then set the port to an MTU of 9000 (net port modify -node xxx -port a0a
-mtu 9000) and wait. When it comes back, wait for the ifgrp to return to
"full" (ifgrp show).
After your ifgrps are set, give it a go and change ONTAP (broadcast-domain
modify -broadcast-domain esx -mtu 9000), which will change all ports at
the same time.
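Putting those ONTAP steps together, one node at a time (node, ifgrp, and
broadcast-domain names here are hypothetical):

  network port modify -node node-01 -port a0a -mtu 9000
  network port ifgrp show -node node-01 -ifgrp a0a
  --> wait until the ifgrp shows all member ports active before moving on
  network port broadcast-domain modify -broadcast-domain esx -mtu 9000
  --> this last command raises the MTU on every member port of the
      domain at once

Doing the base ports node by node, and only then touching the broadcast
domain, keeps any blip confined to one node's uplinks at a time.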

After the ONTAP side is set: VERIFY you at least have jumbo frames working
across the switch:
net ping -lif esx-nfs-01 -vserver esxi-svm -destination xx.xx.xx.xx (where
that is the IP of the esx-nfs-02 LIF)
net ping -lif esx-nfs-01 -vserver esxi-svm -disable true -packet 5000
-destination xx.xx.xx.xx (where that is the IP of the esx-nfs-02 LIF)
-> this will test a 5000-byte packet between NetApp nodes.
If that works, continue on.

Modify the vSwitch on ESX to 9000, then go to each VMK and set the MTU to
9000 as well.
From the NetApp you should then be able to repeat that last ping and test
jumbo frames end to end.
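A sketch of the ESXi side from the CLI (vSwitch and vmk names are
hypothetical; on a distributed switch you would set the MTU in vCenter
instead):

  esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000
  vmkping -d -s 8972 xx.xx.xx.xx
  --> -d sets don't-fragment; 8972 = 9000 minus 20 bytes of IP header
      and 8 bytes of ICMP header, so a reply proves a full jumbo frame
      passes unfragmented from the host to the NFS LIF

Testing from both directions (vmkping from the host, net ping from the
NetApp) rules out a one-sided MTU mismatch.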

If all is well...great. If not, revert to 1500 and try again. NFS is
slightly forgiving.


--tmac

Tim McCarthy, Principal Consultant

Proud Member of the #NetAppATeam <https://twitter.com/NetAppATeam>

I Blog at TMACsRack <https://tmacsrack.wordpress.com/>



On Tue, Oct 5, 2021 at 8:53 AM Carl Howell <chowell@uwf.edu> wrote:

> I've got a four node cluster where each node has a 10Gb ifgrp connected to
> LACP port-channels on Nexus 9k's. Each of the ifgrp's has its distribution
> function set to ip. My first question is, should I tear these ifgrp's down
> to change their distribution functions to port? Next, all of these
> port-channels and ifgrp's have MTU 1500. I'd like to of course set them to
> 9000. The process to do this I believe is to raise the MTU on the
> port-channels to 9216. Raise the MTU on the ifgrp's to 9000, and then raise
> the MTU's on the Broadcast domains to 9000. My question is, can I do this
> without an outage? I've tested this on another cluster, but it
> unfortunately doesn't have ESXi(NFS), Oracle(NFS), SQL Server(iSCSi)
> clients on it. There seems to be a small outage when the ifgrp is changed,
> but I can't determine if that will cause any issues on the client side.
>
> Thanks as always for the help.
>
> --Carl
> _______________________________________________
> Toasters mailing list
> Toasters@teaparty.net
> https://www.teaparty.net/mailman/listinfo/toasters