Mailing List Archive

no inet.2 for multicast RPF etc. by default?
Hi,

It seems to me that Juniper does not use inet.2 at all by default.

That is, even if I configure routes for inet.2 or use MBGP to distribute
multicast routes when unicast/multicast topology is not congruent, Juniper
only uses inet.0 by default.

This seems *odd*. The desired behaviour should be to always look for
inet.2 first, and if it fails, look at inet.0 (or so I've been believing).
Otherwise, being transit using MBGP (for example) would likely break RPF
quite badly.

Am I missing something?

--8<--
Configure PIM RPF Routing Table
By default, PIM uses inet.0 as its Reverse Path Forwarding (RPF) routing
table group. PIM uses an RPF routing table group to resolve its RPF
neighbor for a particular multicast source address and to resolve the RPF
neighbor for the RP address. PIM can optionally use inet.2 as its RPF
routing table group. To do this, add the rib-groups statement at the [edit
routing-options] hierarchy level.

routing-options {
    rib-groups {
        pim-rg {
            import-rib inet.2;
        }
    }
}
protocols {
    pim {
        rib-group inet pim-rg;
    }
}
--8<--
blah@foo> show route x.y.0.0

inet.0: 153 destinations, 215 routes (152 active, 0 holddown, 1 hidden)
+ = Active Route, - = Last Active, * = Both

x.y.0.0/16         *[BGP/170] 16:10:57, MED 0, localpref 200, from 172.31.5.199
                      AS path: 65300 I
                    > to a.b.c.177 via ge-1/0/0.0

inet.2: 130 destinations, 180 routes (129 active, 0 holddown, 1 hidden)
+ = Active Route, - = Last Active, * = Both

x.y.0.0/16         *[BGP/170] 5d 02:15:43, MED 0, localpref 50, from 172.31.5.198
                      AS path: 65300 I
                    > to a.b.c.1 via fe-0/3/1.0
                      to a.b.c.25 via fe-0/3/3.0

blah@foo> show multicast rpf x.y.0.0
Multicast RPF table: inet.0, 153 entries

x.y.0.0/16
    Protocol: BGP
    Interface: ge-1/0/0.0
--8<--

--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
no inet.2 for multicast RPF etc. by default? [ In reply to ]
On Tue, 15 Apr 2003, Ashutosh Thakur wrote:
> Have you configured interface routes to be installed into inet.2?
> Try this and see if inet.2 is used
>
> regress@grenadine# show routing-options
> interface-routes {
>     rib-group inet if-rib;
> }
> rib-groups {
>     if-rib {
>         import-rib [ inet.0 inet.2 ];
>     }
> }

Yes, this has been configured (except using the name 'ifrg' rather than
'if-rib'). The router is running 5.6R2.

If this seems like a problem, I can raise a case with JTAC.


--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
no inet.2 for multicast RPF etc. by default? [ In reply to ]
Pekka,

Yes, inet.0 is used by default. If you want pim and/or msdp to use
inet.2, you just need to configure these protocols to do so.

protocols {
    msdp {
        rib-group inet pim-rg;
    }
    pim {
        rib-group inet pim-rg;
    }
}


-Lenny

no inet.2 for multicast RPF etc. by default? [ In reply to ]
On Tue, 15 Apr 2003, Leonard Giuliano wrote:
> Yes, inet.0 is used by default. If you want pim and/or msdp to use
> inet.2, you just need to configure these protocols to do so.
>
> protocols {
>     msdp {
>         rib-group inet pim-rg;
>     }
>     pim {
>         rib-group inet pim-rg;
>     }
> }

But there is *no* way to try inet.2 first, and if it fails, only then go
back to inet.0 ?

This would really, *really* simplify things in scenarios when most routes
are unicast-multicast congruent, but some of them are not.

Otherwise you have to copy all the routes in inet.0 to inet.2 and even
then you get breakage if you have multicast routers which do not speak
MBGP in your network.


--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
no inet.2 for multicast RPF etc. by default? [ In reply to ]
On Tue, 15 Apr 2003, Pekka Savola wrote:

-) But there is *no* way to try inet.2 first, and if it fails, only then go
-) back to inet.0 ?
-)

I believe you can do this by copying the rest of the inet.0 routes into
inet.2 and somehow giving them a lower preference. I think the behavior
you want is possible, but it may take some configuration.
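
Something along these lines might work -- note this is only an untested
sketch, and the group name, policy name, and preference value are invented
for illustration:

routing-options {
    rib-groups {
        /* copy contributing routes into both inet.0 and inet.2 */
        inet0-to-inet2 {
            import-rib [ inet.0 inet.2 ];
            /* adjust attributes only on the copies headed for inet.2 */
            import-policy demote-inet2-copies;
        }
    }
}
policy-options {
    policy-statement demote-inet2-copies {
        term copies-into-inet2 {
            to rib inet.2;
            then {
                /* worse than BGP/MBGP's default preference of 170 */
                preference 180;
                accept;
            }
        }
        term everything-else {
            then accept;
        }
    }
}

The idea would be that MBGP-learned routes stay preferred in inet.2, and the
copied unicast routes only win the RPF lookup when no multicast route covers
the source. You would still have to apply the rib-group under each protocol
whose routes should be mirrored (interface-routes, the IGP, and so on).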


no inet.2 for multicast RPF etc. by default? [ In reply to ]
At 09:58 AM 4/15/2003 +0300, Pekka Savola wrote:
>Hi,
>
>It seems to me that Juniper does not use inet.2 at all by default.
>
>That is, even if I configure routes for inet.2 or use MBGP to distribute
>multicast routes when unicast/multicast topology is not congruent, Juniper
>only uses inet.0 by default.
>
>This seems *odd*. The desired behaviour should be to always look for
>inet.2 first, and if it fails, look at inet.0 (or so I've been believing).
>Otherwise, being transit using MBGP (for example) would likely break RPF
>quite badly.
>
>Am I missing something?

I disagree that the "desired behavior" should be the default.

You can configure your "desired behaviour" with an appropriate rib-group,
applied to the various multicast routing protocols. For example:

rib-groups {
    mcast-rib {
        export-rib inet.2;
        import-rib [ inet.2 inet.0 ];
    }
}
protocols {
    msdp {
        rib-group mcast-rib;
    }
    pim {
        rib-group inet mcast-rib;
    }
}



===
Bill Nickless http://www.mcs.anl.gov/people/nickless +1 630 252 7390
PGP:0E 0F 16 80 C5 B1 69 52 E1 44 1A A5 0E 1B 74 F7 nickless@mcs.anl.gov
no inet.2 for multicast RPF etc. by default? [ In reply to ]
On Tue, 15 Apr 2003, Leonard Giuliano wrote:
> -)
> -) But there is *no* way to try inet.2 first, and if it fails, only then go
> -) back to inet.0 ?
> -)
>
> I believe you can do this by copying the rest of the inet.0 routes into
> inet.2, and somehow give them a lower preference. I think the behavior
> you want is possible, but it may take some configuration.

Yes, this would probably be preferable -- but the "somehow" seems like the
tricky part. It would seem to require special-casing in all the protocol
configurations and policies, which seems clearly out of the question.

I don't know how the copying works, but being able to modify some
attributes (preferences between the copy sources, in particular) would be
highly valuable, I think.
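
If I understand Ashutosh's earlier suggestion correctly, the copying itself
is simply driven by where the rib-group is applied -- every protocol whose
routes should land in both tables has to reference it. A rough, unverified
sketch, reusing the 'ifrg' group I already have:

routing-options {
    interface-routes {
        /* direct/local routes into inet.0 and inet.2 */
        rib-group inet ifrg;
    }
}
protocols {
    ospf {
        /* IGP routes into both tables as well */
        rib-group ifrg;
    }
}

(ifrg has import-rib [ inet.0 inet.2 ] as shown earlier in the thread.
Whether BGP routes can be mirrored the same way, and whether the per-table
preference tweak from the sketch in Lenny's reply really works, is exactly
the part I have not verified.)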

--
Pekka Savola "You each name yourselves king, yet the
Netcore Oy kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings