Mailing List Archive

LACP is not running between two VMX
Hello all,

I tried to set up an MC-LAG between two vMXs (using EVE-NG), but I noticed that
the LACP protocol is not operational.
I did some research (including on this forum), but the explanations I found are
a little complicated. That's why I am posting this message: does anyone have a
simple solution for this problem?

Thanks

Omar
_______________________________________________
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: LACP is not running between two VMX
Sorry I don't have mc-lag configs for vMX, but I did do mc-lag on vQFX...

Here are some quick outputs from my EVE-NG lab...

I have mc-lag between (2) vQFX devices... and actually, the lag client side
is one vMX node...

Here's one side of the mc-lag pair... I grabbed some commands that I recall
being important to make this work... forgive me, it's been a while... lemme
know if you need anything else from this...

{master:0}
root@stlr-qfx-01> show configuration interfaces ae1 | display set
set interfaces ae1 mtu 9216
set interfaces ae1 aggregated-ether-options lacp active
set interfaces ae1 aggregated-ether-options lacp system-id 00:01:02:03:04:05
set interfaces ae1 aggregated-ether-options lacp admin-key 3
set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 3
set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
set interfaces ae1 aggregated-ether-options mc-ae chassis-id 0
set interfaces ae1 aggregated-ether-options mc-ae mode active-active
set interfaces ae1 aggregated-ether-options mc-ae status-control active
set interfaces ae1 aggregated-ether-options mc-ae init-delay-time 240
set interfaces ae1 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae1 unit 0 family ethernet-switching vlan members ten

set multi-chassis multi-chassis-protection 1.1.1.15 interface ae0

set protocols iccp local-ip-addr 1.1.1.5
set protocols iccp peer 1.1.1.15 session-establishment-hold-time 50
set protocols iccp peer 1.1.1.15 redundancy-group-id-list 1
set protocols iccp peer 1.1.1.15 backup-liveness-detection backup-peer-ip 10.207.64.233
set protocols iccp peer 1.1.1.15 liveness-detection minimum-receive-interval 60
set protocols iccp peer 1.1.1.15 liveness-detection transmit-interval minimum-interval 60
set protocols rstp bpdu-block-on-edge
set switch-options service-id 1
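
For completeness, the other member of the MC-LAG pair carries a near-mirror
config. I don't have it pasted here, but from memory it should look roughly
like this (a sketch, not a verbatim copy from my lab): same LACP system-id,
admin-key, mc-ae-id and service-id, but chassis-id 1, status-control standby,
and the ICCP local/peer addresses swapped:

set interfaces ae1 mtu 9216
set interfaces ae1 aggregated-ether-options lacp active
set interfaces ae1 aggregated-ether-options lacp system-id 00:01:02:03:04:05
set interfaces ae1 aggregated-ether-options lacp admin-key 3
set interfaces ae1 aggregated-ether-options mc-ae mc-ae-id 3
set interfaces ae1 aggregated-ether-options mc-ae redundancy-group 1
set interfaces ae1 aggregated-ether-options mc-ae chassis-id 1
set interfaces ae1 aggregated-ether-options mc-ae mode active-active
set interfaces ae1 aggregated-ether-options mc-ae status-control standby
set interfaces ae1 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae1 unit 0 family ethernet-switching vlan members ten

set multi-chassis multi-chassis-protection 1.1.1.5 interface ae0

set protocols iccp local-ip-addr 1.1.1.15
set protocols iccp peer 1.1.1.5 redundancy-group-id-list 1
set protocols iccp peer 1.1.1.5 liveness-detection minimum-receive-interval 60
set protocols iccp peer 1.1.1.5 liveness-detection transmit-interval minimum-interval 60
set switch-options service-id 1

The things that have to match on both peers are the LACP system-id and
admin-key, the mc-ae-id, the redundancy-group and the service-id; chassis-id
must differ (0 vs 1), and status-control is active on one box and standby on
the other. The shared LACP system-id is what makes the two boxes look like a
single LACP partner to the client.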


{master:0}
root@stlr-qfx-01> show interfaces mc-ae
Member Link : ae1
Current State Machine's State: mcae active state
Local Status : active
Local State : up
Peer Status : active
Peer State : up
Logical Interface : ae1.0
Topology Type : bridge
Local State : up
Peer State : up
Peer Ip/MCP/State : 1.1.1.15 ae0.0 up

{master:0}
root@stlr-qfx-01> show iccp

Redundancy Group Information for peer 1.1.1.15
TCP Connection : Established
Liveliness Detection : Up
Backup liveness peer status: Up
  Redundancy Group ID          Status
    1                          Up

Client Application: lacpd
Redundancy Group IDs Joined: 1

Client Application: MCSNOOPD
Redundancy Group IDs Joined: None

Client Application: l2ald_iccpd_client
Redundancy Group IDs Joined: 1

{master:0}
root@stlr-qfx-01>
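
On the vMX that acts as the LAG client there's nothing MC-LAG specific; it's
just a standard LACP bundle whose two member links land on the two different
vQFX peers. From memory it's along these lines (interface names, VLAN ID and
addressing here are illustrative, not pasted from my lab):

set chassis aggregated-devices ethernet device-count 2
set interfaces ge-0/0/0 gigether-options 802.3ad ae1
set interfaces ge-0/0/1 gigether-options 802.3ad ae1
set interfaces ae1 vlan-tagging
set interfaces ae1 aggregated-ether-options lacp active
set interfaces ae1 unit 10 vlan-id 10
set interfaces ae1 unit 10 family inet address 10.0.10.1/24

Since both vQFX peers advertise the same LACP system-id, the client sees a
single partner and bundles both links.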


-Aaron


Re: LACP is not running between two VMX
> omar sahnoun
> Sent: Wednesday, April 24, 2019 8:55 AM
>
> Hello all,
>
> I tried to set up an MC-LAG between two vMXs (using EVE-NG), but I noticed
> that the LACP protocol is not operational.
> I did some research (including on this forum), but the explanations I found
> are a little complicated. That's why I am posting this message: does anyone
> have a simple solution for this problem?
>
I haven't tried MC-LAG, but I have used standard LAG (with LACP).
The problem I faced was that the standard Linux bridges (usually used to
simulate virtual p2p links between vMX-es) won't forward BPDUs, including
LACP, and I did not find a way to hack around that at the time.
However, I then started to use OVS instead of Linux bridges, and it has a
setting to enable BPDU forwarding; that did the trick.
So what I had was all ~50 interfaces of all ~30 vMX-es connected to a single
OVS, where putting two ports on the same VLAN creates a virtual p2p link
between a pair of interfaces/nodes (very easy to change the topology or
add/remove links or nodes). This OVS has BPDU forwarding enabled, so now I
can bundle any links between any two vMX-es.
I then use another OVS for the internal VCP-VFP connections, and another one
for the fxp0 management interface and connectivity outside of the lab.
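
For reference, the OVS knob and the VLAN-based p2p trick look roughly like
this (bridge and port names below are just examples, not from my actual lab):

# enable forwarding of reserved link-local control frames (LACP, STP, LLDP, ...)
ovs-vsctl set Bridge lab-br0 other_config:forward-bpdu=true

# a virtual p2p link: put exactly two vMX-facing ports in the same access VLAN
ovs-vsctl set Port vmx1-ge-0-0-0 tag=101
ovs-vsctl set Port vmx2-ge-0-0-0 tag=101

With forward-bpdu set on the bridge, frames sent to 01-80-C2-00-00-0x (LACP
included) pass through, so the bundles negotiate just like on a real wire.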

Disclaimer: I never used the Juniper scripts to define and start the VMs and
their connectivity; I created the VM configs and the whole lab setup myself
(later automated with Ansible).

adam

Re: LACP is not running between two VMX
On 25 April 2019 09:31 +01, <adamv0025@netconsultings.com> wrote:

> I haven't tried MC-LAG, but I used standard LAG (with LACP).
> The problem I faced was that the standard Linux bridges (usually used to
> simulate virtual p2p links between vMX-es won't forward BPDUs including LACP
> (and I did not find a way to hack around at that time)

You can play with /sys/class/net/br0/bridge/group_fwd_mask. Well, in
fact, you can't:

What: /sys/class/net/<bridge iface>/bridge/group_fwd_mask
Date: January 2012
KernelVersion: 3.2
Contact: netdev@vger.kernel.org
Description:
Bitmask to allow forwarding of link local frames with address
01-80-C2-00-00-0X on a bridge device. Only values that set bits
not matching BR_GROUPFWD_RESTRICTED in net/bridge/br_private.h
allowed.
Default value 0 does not forward any link local frames.

Restricted bits:
0: 01-80-C2-00-00-00 Bridge Group Address used for STP
1: 01-80-C2-00-00-01 (MAC Control) 802.3 used for MAC PAUSE
2: 01-80-C2-00-00-02 (Link Aggregation) 802.3ad

Any values not setting these bits can be used. Take special
care when forwarding control frames e.g. 802.1X-PAE or LLDP.
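
So on a stock kernel you can open up some of these groups, but not LACP. A
quick illustration (br0 here is just a placeholder bridge name):

# allowed: forward LLDP frames (01-80-C2-00-00-0E is bit 14 -> 0x4000)
echo 0x4000 > /sys/class/net/br0/bridge/group_fwd_mask

# rejected: bit 2 (01-80-C2-00-00-02, LACP/Slow Protocols) is restricted,
# so the write fails with "Invalid argument"
echo 0x4 > /sys/class/net/br0/bridge/group_fwd_mask
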
--
Go not to the elves for counsel, for they will say both yes and no.
-- J.R.R. Tolkien