Mailing List Archive

QFX CRB
Does anyone use EVPN-VXLAN in a Centrally-Routed Bridging (CRB) topology?
I have two spine switches and two leaf switches. When I use the
virtual-gateway in active/active mode on the spines, the servers
connected only to leaf1 see a large increase in IRQs, which causes
higher CPU consumption on the servers.
As a test I deactivated spine2, leaving only the gateway on spine1, and
the IRQ count dropped to zero.
Has anyone run into this?
Re: QFX CRB
Can you show your irb and protocols evpn configuration, please?

Nitzan

On Tue, Nov 10, 2020 at 3:26 PM Cristian Cardoso <cristian.cardoso11@gmail.com> wrote:

Re: QFX CRB
> show configuration protocols evpn
vni-options {
    vni 810 {
        vrf-target target:888:888;
    }
    vni 815 {
        vrf-target target:888:888;
    }
    vni 821 {
        vrf-target target:888:888;
    }
    vni 822 {
        vrf-target target:888:888;
    }
    vni 827 {
        vrf-target target:888:888;
    }
    vni 830 {
        vrf-target target:888:888;
    }
    vni 832 {
        vrf-target target:888:888;
    }
    vni 910 {
        vrf-target target:666:666;
    }
    vni 915 {
        vrf-target target:666:666;
    }
    vni 921 {
        vrf-target target:666:666;
    }
    vni 922 {
        vrf-target target:666:666;
    }
    vni 927 {
        vrf-target target:666:666;
    }
    vni 930 {
        vrf-target target:666:666;
    }
    vni 932 {
        vrf-target target:666:666;
    }
    vni 4018 {
        vrf-target target:4018:4018;
    }
}
encapsulation vxlan;
default-gateway no-gateway-community;
extended-vni-list all;


An example of the interface configuration follows; all the irb units
follow this pattern, with more or fewer IPs.
> show configuration interfaces irb.810
proxy-macip-advertisement;
virtual-gateway-accept-data;
family inet {
    mtu 9000;
    address 10.19.11.253/22 {
        preferred;
        virtual-gateway-address 10.19.8.1;
    }
}
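
Since every irb unit follows the same pattern, the stanzas can be generated rather than hand-typed. A minimal Python sketch that prints the equivalent set commands (the irb 815 values below are illustrative placeholders, not taken from the real network):

# Illustrative sketch: print set-commands for irb units that follow the
# pattern shown above. The irb 815 entry is a made-up placeholder.
UNITS = [
    # (irb unit, interface address/prefix, virtual-gateway address)
    (810, "10.19.11.253/22", "10.19.8.1"),
    (815, "10.19.15.253/22", "10.19.12.1"),   # hypothetical values
]

TEMPLATE = """\
set interfaces irb unit {u} proxy-macip-advertisement
set interfaces irb unit {u} virtual-gateway-accept-data
set interfaces irb unit {u} family inet mtu 9000
set interfaces irb unit {u} family inet address {addr} preferred
set interfaces irb unit {u} family inet address {addr} virtual-gateway-address {vga}"""

for u, addr, vga in UNITS:
    print(TEMPLATE.format(u=u, addr=addr, vga=vga))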

On Tue, Nov 10, 2020 at 3:16 PM Nitzan Tzelniker <nitzan.tzelniker@gmail.com> wrote:
Re: QFX CRB
Looks OK to me.
Which Junos version are you running, and which devices?
Did you capture traffic on the servers to see what is causing the high
CPU utilization?


On Tue, Nov 10, 2020 at 9:07 PM Cristian Cardoso <cristian.cardoso11@gmail.com> wrote:

Re: QFX CRB
I'm running Junos 19.1R2.8.
Today I was in contact with Juniper support about a route-depletion
problem, and it seems to be related to the IRQ problem: when the
IPv4/IPv6 routes in the LPM table are exhausted, the IRQ increase begins.
I analyzed the packets on the servers but found nothing out of the
ordinary.

On Tue, Nov 10, 2020 at 5:47 PM Nitzan Tzelniker <nitzan.tzelniker@gmail.com> wrote:
Re: QFX CRB
How are you measuring IRQs on the servers? If they are network-related
IRQs, the traffic that triggers them should be visible in a packet capture.
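
On a Linux host the per-interrupt counters are exposed in /proc/interrupts, so a quick way to see which IRQ lines are actually incrementing is to diff two samples. A minimal sketch, assuming a standard Linux /proc layout (sample interval and output size chosen arbitrarily):

# Minimal sketch: sample /proc/interrupts twice and print the fastest-growing
# IRQ lines, to see whether the load is on the NIC queues or elsewhere.
import time

INTERVAL = 5   # seconds between samples
TOP = 10       # how many IRQ lines to report

def read_counts():
    counts = {}
    with open("/proc/interrupts") as f:
        cpus = len(f.readline().split())          # header row: CPU0 CPU1 ...
        for line in f:
            fields = line.split()
            if not fields or not fields[0].endswith(":"):
                continue
            irq = fields[0].rstrip(":")
            per_cpu = fields[1:1 + cpus]
            total = sum(int(x) for x in per_cpu if x.isdigit())
            label = " ".join(fields[1 + cpus:])   # driver / device name
            counts[irq] = (total, label)
    return counts

before = read_counts()
time.sleep(INTERVAL)
after = read_counts()

deltas = []
for irq, (total, label) in after.items():
    prev = before.get(irq, (0, label))[0]
    deltas.append((total - prev, irq, label))

for delta, irq, label in sorted(deltas, reverse=True)[:TOP]:
    print(f"IRQ {irq:>8}: {delta / INTERVAL:10.1f}/s  {label}")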

On Tue, Nov 10, 2020 at 4:40 PM Cristian Cardoso <cristian.cardoso11@gmail.com> wrote:

Re: QFX CRB
When the IPv4 LPM routes on spine2 were exhausted, I noticed an even
larger increase in IRQs. I analyzed a tcpdump and found nothing strange
on the server itself.
Reducing the L3 routes sent to the spines released more slots in the L2
profile shown below.

Profile active: l2-profile-three
Type               Max      Used    Free     % free
----------------------------------------------------
IPv4 Host          147456   1389    144923   98.28
IPv4 LPM           24576    18108   6286     25.58
IPv4 Mcast         73728    0       72462    98.28

IPv6 Host          73728    572     72462    98.28
IPv6 LPM(< 64)     12288    91      3143     25.58
IPv6 LPM(> 64)     2048     7       2041     99.66
IPv6 Mcast         36864    0       36231    98.28
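
If LPM exhaustion really is the trigger, output like the table above is easy to watch programmatically. A rough Python sketch, assuming the table text has already been collected from the switch and is shaped exactly like the output above (the 30% threshold is arbitrary):

# Rough sketch: parse the "Profile active" table above and flag any row whose
# free percentage drops below a threshold. Assumes the text is already
# collected (e.g. saved from the switch CLI) and shaped like the output above.
import re
import sys

THRESHOLD = 30.0   # warn when "% free" falls below this (arbitrary value)

ROW = re.compile(
    r"^(?P<type>IPv[46] [\w()<> ]+?)\s+"
    r"(?P<max>\d+)\s+(?P<used>\d+)\s+(?P<free>\d+)\s+(?P<pct>[\d.]+)\s*$"
)

def check(text: str) -> None:
    for line in text.splitlines():
        m = ROW.match(line.strip())
        if not m:
            continue
        pct_free = float(m.group("pct"))
        status = "LOW" if pct_free < THRESHOLD else "ok"
        print(f"{m.group('type'):<16} used {m.group('used'):>7}/{m.group('max'):>7}"
              f"  {pct_free:6.2f}% free  [{status}]")

if __name__ == "__main__":
    check(sys.stdin.read())   # e.g. pipe in the saved table text

With the numbers above, the IPv4 LPM and IPv6 LPM(< 64) rows would be flagged as LOW.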

On Wed, Nov 11, 2020 at 11:36 PM Laurent Dumont <laurentfdumont@gmail.com> wrote:
Re: QFX CRB
Reviewing my old emails, I noticed that I never gave a final overview of
the case. The problem was resolved in 2021 by the team that handles
server virtualization. The root cause was an update to a networking
package in the XCP-NG system. After they isolated that update, the IRQ
problem no longer occurred, and there have been no major problems with
more recent system updates.

On Thu, Nov 12, 2020 at 10:20 AM Cristian Cardoso <cristian.cardoso11@gmail.com> wrote:

_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp