Mailing List Archive

Optimizing the FIB on MX
Hey!

Being a bit unsatisfied with a pair of MX104s turning themselves into a
blackhole during BGP convergence, I am trying to reduce the size of the
FIB.

I am in a simple situation: one upstream on each router, an iBGP session
between the two routers. I am also receiving a default route along with
the full feed.

I have tried the simple approach of rejecting routes learned from BGP
with a combination of prefix length and AS path length:

https://github.com/vincentbernat/network-lab/blob/c4e7647b65fb954afbfc67378171451e967a4b9b/lab-vmx-fullview/vMX2.conf#L63-L122

I haven't tried it for real, but in a small lab using vMX, the FIB size
is divided by 20, which should be quite enough.
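In outline, the filter amounts to something like this (a simplified sketch with illustrative names and thresholds, not the exact configuration linked above); applied as a forwarding-table export policy, routes rejected here stay in the RIB but are not installed in the FIB:

```
policy-options {
    /* paths of 4 or more ASes are considered "far away" (threshold is illustrative) */
    as-path long-as-path ".{4,}";
    policy-statement fib-trim {
        /* always install the default route */
        term keep-default {
            from route-filter 0.0.0.0/0 exact;
            then accept;
        }
        /* do not install small prefixes */
        term reject-small {
            from route-filter 0.0.0.0/0 prefix-length-range /25-/32;
            then reject;
        }
        /* do not install routes with a long AS path */
        term reject-far {
            from as-path long-as-path;
            then reject;
        }
    }
}
routing-options {
    forwarding-table {
        export fib-trim;
    }
}
```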

I have tried a smarter approach:

https://github.com/vincentbernat/network-lab/blob/c4e7647b65fb954afbfc67378171451e967a4b9b/lab-vmx-fullview/vMX1.conf#L71-L121

Unfortunately, the condition system does not seem powerful enough to
express what I want:

1. Accept the default route.

2. Reject any small route (ge /25).

3. Reject any route with the same next-hop as the default route.

4. Accept everything else.

Currently, I am only able to achieve this:

3. Reject any route using the upstream as next-hop (under the assumption
that we have a default route to the upstream, since both would come from
the same eBGP session).

4. Accept everything else.

This is not satisfactory because if upstream becomes unavailable, a lot
of routes will be programmed in the FIB.

If the condition system allowed me to match a next-hop or an
interface in addition to a route, I could do:

3. Reject any route with upstream as next-hop if there is a default
route to upstream.

4. Reject any route with peer as next-hop if there is a default route
to peer.

5. Accept everything else.

This way, only routes through the peer would be put in the FIB (and they
are far less numerous than routes through the upstream). Those routes
could additionally be trimmed down with prefix-length and AS path-length
filters.

The condition could look like this:

#v+
policy-options {
    condition default-to-upstream {
        if-route-exists {
            0.0.0.0/0;
            next-hop 192.0.2.0;
        }
    }
    condition default-to-peer {
        if-route-exists {
            0.0.0.0/0;
            next-hop 192.0.2.129;
        }
    }
}
#v-

I think that I will simply keep the first approach (just using AS
path-length and prefix-length of individual routes), but I would welcome
any comments and tips on how to optimize the FIB (notably pointers to
prior work).

Thanks!
--
Make sure all variables are initialised before use.
- The Elements of Programming Style (Kernighan & Plauger)
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: Optimizing the FIB on MX
On Wed, Feb 17, 2016 at 08:51:23PM +0100, Vincent Bernat wrote:
> Being a bit unsatisfied with a pair of MX104 turning themselves as a
> blackhole during BGP convergence, I am trying to reduce the size of the
> FIB.
>
> I am in a simple situation: one upstream on each router, an iBGP session
> between the two routers. I am also receiving a default route along the
> full feed.

Can you use Junos 15.1? Try this:

http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html
Re: Optimizing the FIB on MX
❦ 17 February 2016 15:18 -0500, Chuck Anderson <cra@WPI.EDU>:

>> Being a bit unsatisfied with a pair of MX104 turning themselves as a
>> blackhole during BGP convergence, I am trying to reduce the size of the
>> FIB.
>>
>> I am in a simple situation: one upstream on each router, an iBGP session
>> between the two routers. I am also receiving a default route along the
>> full feed.
>
> Can you use Junos 15.1? Try this:
>
> http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html

Thanks for the tip. I can't upgrade right now (I am on 13.3), but I have
put that on my todo list!
--
Make it right before you make it faster.
- The Elements of Programming Style (Kernighan & Plauger)
Re: Optimizing the FIB on MX
Hello,

On 17/02/2016 19:51, Vincent Bernat wrote:
> Hey!
>
>
> If the condition system would allow me to match a next-hop or an
> interface in addition to a route, I could do:
>
> 3. Reject any route with upstream as next-hop if there is a default
> route to upstream.
>
> 4. Reject any route with peer as next-hop if there is a default route
> to peer.
>
> 5. Accept everything else.

True, one cannot match on a "next-hop" in a "condition", only on an
exact prefix + table name.
But this can be done using a "route isolation" approach.
So, the overall approach is:
1/ create a separate table and leak the 0/0 route there, matching on 0/0
exact + next-hop ("isolate the route of interest"). Use "instance-import"
+ a policy.
2/ create a condition:

policy-options {
    condition default-to-upstream {
        if-route-exists {
            0.0.0.0/0;
            table isolate-0/0.inet.0;
        }
    }
}

3/ use the condition to match & reject the specifics:

policy-options {
    policy-statement reject-same-nh-as-0/0 {
        term 1 {
            from {
                protocol bgp;
                route-filter 0/0 longer;
                condition default-to-upstream;
                next-hop 198.18.1.1;
            }
            then reject;
        }
        term 2 {
            from {
                protocol bgp;
                route-filter 0/0 longer;
                next-hop 198.18.1.1;
            }
            then accept;
        }
    }
}
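For completeness, step 1/ is not shown in config form; following the description, it might look roughly like this (untested; instance and policy names are illustrative, chosen to match the isolate-0/0.inet.0 table and the 198.18.1.1 next-hop used above):

```
routing-instances {
    isolate-0/0 {
        instance-type no-forwarding;
        routing-options {
            /* leak the matching default route from the master table */
            instance-import leak-default-via-upstream;
        }
    }
}
policy-options {
    policy-statement leak-default-via-upstream {
        /* only the 0/0 route, and only when it points at the upstream */
        term 1 {
            from {
                instance master;
                route-filter 0.0.0.0/0 exact;
                next-hop 198.18.1.1;
            }
            then accept;
        }
        term 2 {
            then reject;
        }
    }
}
```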

Disclaimer - I haven't tested this myself.

HTH
Thx
Alex
Re: Optimizing the FIB on MX
❦ 17 February 2016 21:07 GMT, Alexander Arseniev <arseniev@btinternet.com>:

>> If the condition system would allow me to match a next-hop or an
>> interface in addition to a route, I could do:
>>
>> 3. Reject any route with upstream as next-hop if there is a default
>> route to upstream.
>>
>> 4. Reject any route with peer as next-hop if there is a default route
>> to peer.
>>
>> 5. Accept everything else.
>
> True, one cannot match on "next-hop" in "condition", only on exact
> prefix+table name.
> But this can be done using "route isolation" approach.
> So, the overall approach is:
> 1/ create a separate table and leak a 0/0 route there matching on 0/0
> exact + next-hop ("isolate the interested route"). Use
> "instance-import" + policy.

Thanks for the suggestion. I tried to do that but was unable to create a
separate table and do the leak.

policy-options {
    rib XXXX.0 { ... }
}

In this case, XXXX.0 is not recognized as a table in the condition. But I
only tried with XXXX.0; from your example, maybe I should have tried
XXXX.0.inet.0?

So, I tried to create a routing instance for that purpose, but I used a
generated route and wasn't able to get it to come up. I can try to use
instance-import instead.

So, is a separate table defined with policy-options rib or with a
routing-instance?

> Disclaimer - I haven't tested this myself.

I'll try that.
--
Nothing so needs reforming as other people's habits.
-- Mark Twain, "Pudd'nhead Wilson's Calendar"
Re: Optimizing the FIB on MX
Hello,
a/ please create an instance of type no-forwarding (the default) or
virtual-router. Both accept the "instance-import <policy-name>" knob.
b/ please don't use a generated route, use a true BGP 0/0 route.
You can use logical systems for testing if you lack actual physical routers.
Thx
Alex

On 17/02/2016 21:50, Vincent Bernat wrote:
> ❦ 17 February 2016 21:07 GMT, Alexander Arseniev <arseniev@btinternet.com>:
>
>>> If the condition system would allow me to match a next-hop or an
>>> interface in addition to a route, I could do:
>>>
>>> 3. Reject any route with upstream as next-hop if there is a default
>>> route to upstream.
>>>
>>> 4. Reject any route with peer as next-hop if there is a default route
>>> to peer.
>>>
>>> 5. Accept everything else.
>> True, one cannot match on "next-hop" in "condition", only on exact
>> prefix+table name.
>> But this can be done using "route isolation" approach.
>> So, the overall approach is:
>> 1/ create a separate table and leak a 0/0 route there matching on 0/0
>> exact + next-hop ("isolate the interested route"). Use
>> "instance-import" + policy.
> Thanks for the suggestion. I tried to do that but was unable to create a
> separate table and do the leak.
>
> policy-options {
> rib XXXX.0 { ... }
> }
>
> In this case XXXX.0 is not recognized as a table in condition. But I
> only tried with XXXX.0. From example, maybe I should have tried
> XXXX.0.inet.0?
>
> So, I tried to create a routing instance for that purpose but I did try
> to use a generated route and I wasn't able to have it goes up. I can try
> to use import instead.
>
> So, is a separate table defined with policy-options rib or with a
> routing-instance?
>
>> Disclaimer - I haven't tested this myself.
> I'll try that.

Re: Optimizing the FIB on MX
> Vincent Bernat
> Sent: Wednesday, February 17, 2016 7:51 PM
>
> Hey!
>
> Being a bit unsatisfied with a pair of MX104 turning themselves as a blackhole
> during BGP convergence, I am trying to reduce the size of the FIB.
>
You mentioned earlier that this is a new installation, so why not use a routing instance for the Internet table? That allows you to use PIC with your current version of code and saves you all the trouble of duct-taping a solution together.


adam


Adam Vitkovsky
IP Engineer

T: 0333 006 5936
E: Adam.Vitkovsky@gamma.co.uk
W: www.gamma.co.uk

This is an email from Gamma Telecom Ltd, trading as “Gamma”. The contents of this email are confidential to the ordinary user of the email address to which it was addressed. This email is not intended to create any legal relationship. No one else may place any reliance upon it, or copy or forward all or any of it in any form (unless otherwise notified). If you receive this email in error, please accept our apologies, we would be obliged if you would telephone our postmaster on +44 (0) 808 178 9652 or email postmaster@gamma.co.uk

Gamma Telecom Limited, a company incorporated in England and Wales, with limited liability, with registered number 04340834, and whose registered office is at 5 Fleet Place London EC4M 7RD and whose principal place of business is at Kings House, Kings Road West, Newbury, Berkshire, RG14 5BY.


Re: Optimizing the FIB on MX
❦ 17 February 2016 22:56 GMT, Adam Vitkovsky <Adam.Vitkovsky@gamma.co.uk>:

>> Being a bit unsatisfied with a pair of MX104 turning themselves as a blackhole
>> during BGP convergence, I am trying to reduce the size of the FIB.
>>
> You mentioned earlier that this is a new installation so why not use
> routing instance for Internet which allows you to use PIC with your
> current version of code and save you all this trouble duck-taping the
> solution together.

You are right. I didn't understand your answer the first time, as I
thought that PIC stood for "programmable integrated circuit", so I thought
this was a plan for Juniper to fix the problem with some dedicated piece
of hardware.
--
Truth is the most valuable thing we have -- so let us economize it.
-- Mark Twain
Re: Optimizing the FIB on MX
Hi Chuck!

I followed the problem, and especially your solution, with interest, and
I have looked into the documentation, BUT:

The documentation says:
" Before you begin:

Configure the device interfaces.
Configure OSPF or any other IGP protocol.
Configure MPLS and LDP. <---------------------------------- REALLY????
Configure BGP.
"

Why do you need to enable MPLS and LDP for PIC?

IMHO this is a documentation error, or am I missing something?

Regarding your suggestion of using it in a routing instance with versions
<15.1, I am not sure that works, as the documentation says it only works
for VPNv4 BGP routes.

The documentation says:
"Before you begin:

Configure LDP.
Configure an IGP, either OSPF or IS-IS.
Configure a Layer 3 VPN.
Configure multiprotocol BGP for either an IPv4 VPN or an IPv6 VPN.
<---------------- this seems to be a restriction regarding your proposed
solution
"

Is any more info on that available?

Regards

alexander

-----Original Message-----
From: juniper-nsp [mailto:juniper-nsp-bounces@puck.nether.net] On Behalf Of
Chuck Anderson
Sent: Wednesday, 17 February 2016 21:19
To: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] Optimizing the FIB on MX

On Wed, Feb 17, 2016 at 08:51:23PM +0100, Vincent Bernat wrote:
> Being a bit unsatisfied with a pair of MX104 turning themselves as a
> blackhole during BGP convergence, I am trying to reduce the size of
> the FIB.
>
> I am in a simple situation: one upstream on each router, an iBGP
> session between the two routers. I am also receiving a default route
> along the full feed.

Can you use Junos 15.1? Try this:

http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html
Re: Optimizing the FIB on MX
❦ 17 February 2016 21:07 GMT, Alexander Arseniev <arseniev@btinternet.com>:

> True, one cannot match on "next-hop" in "condition", only on exact
> prefix+table name.
> But this can be done using "route isolation" approach.
> So, the overall approach is:
> 1/ create a separate table and leak a 0/0 route there matching on 0/0
> exact + next-hop ("isolate the interested route"). Use
> "instance-import" + policy.
> 2/ create condition
>
> policy-options {
> condition default-to-upstream {
> if-route-exists {
> 0.0.0.0/0;
> table isolate-0/0.inet.0;
> }
> }
>
> 3/ use condition to match & reject the specifics:
>
> policy-options {
> policy-statement reject-same-nh-as-0/0 {
> term 1 {
> from {
> protocol bgp;
> route-filter 0/0 longer;
> condition default-to-upstream;
> next-hop 198.18.1.1;
> }
> then reject;
> }
> term 2 {
> from {
> protocol bgp;
> route-filter 0/0 longer;
> next-hop 198.18.1.1;
> }
> then accept;
> }

Just out of curiosity, I tried your approach and it almost works.
However, for some reason, the condition can match even when there is no
route in the associated table. I didn't do exactly as you proposed, so
maybe I am doing something wrong. I am not really interested in getting
to the bottom of this matter; I'll just post my current configuration in
case somebody is interested:

https://github.com/vincentbernat/network-lab/blob/d984d6c5f847b96a131b240d91346b46bfaecac9/lab-vmx-fullview/vMX1.conf#L106-L115

If I enable term 4, it catches all routes whose next-hop is
192.0.2.129 despite the condition being false. In the RIB, I have many
routes whose next-hop is 192.0.2.129:

root@vMX1# run show route next-hop 192.0.2.129

inet.0: 1110 destinations, 1869 routes (1110 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          [BGP/140] 00:38:12, MED 10, localpref 100
                      AS path: 65002 ?, validation-state: unverified
                    > to 192.0.2.129 via ge-0/0/1.0
                   [OSPF/150] 00:37:31, metric 10, tag 0
                    > to 192.0.2.129 via ge-0/0/1.0
1.0.240.0/20      *[BGP/140] 00:38:12, MED 10, localpref 100
                      AS path: 65002 3257 3356 4651 9737 23969 I, validation-state: unverified
                    > to 192.0.2.129 via ge-0/0/1.0
1.1.1.0/24        *[BGP/140] 00:38:12, MED 10, localpref 100
                      AS path: 65002 8758 15576 6772 13030 226 I, validation-state: unverified
                    > to 192.0.2.129 via ge-0/0/1.0
[...]

But none of them make it to the FIB:

root@vMX1# run show route forwarding-table matching 1.1.1.0/24
Routing table: default.inet
Internet:

Routing table: __master.anon__.inet
Internet:

The peer.inet.0 table is empty:

root@vMX1# run show route summary
Autonomous system number: 64512
Router ID: 192.0.2.128

inet.0: 1110 destinations, 1869 routes (1110 active, 0 holddown, 0 hidden)
              Direct:      3 routes,      3 active
               Local:      3 routes,      3 active
                OSPF:      2 routes,      1 active
                 BGP:   1861 routes,   1103 active

upstream.inet.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
                 BGP:      1 routes,      1 active

Adding a static route to peer.inet.0 doesn't help (I added a discard
route). Switching the default to the peer doesn't change anything (term
3 also matches anything). Tested on vMX 14.1R1. Maybe a bug in
if-route-exists?
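For what it's worth, the state of such conditions can usually be inspected from the CLI (assuming the command behaves the same on this release):

```
root@vMX1# run show policy conditions detail
```

which should list each configured condition, the route and table it depends on, and whether the condition currently evaluates to true.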
--
Use the fundamental control flow constructs.
- The Elements of Programming Style (Kernighan & Plauger)
Re: Optimizing the FIB on MX
> Alexander Marhold
> Sent: Thursday, February 18, 2016 9:31 AM
>
> Hi Chuck !
>
> Followed with interest the problem and especially your solution and I have
> looked into the docu BUT:
>
> DOCU says:
> " Before you begin:
>
> Configure the device interfaces.
> Configure OSPF or any other IGP protocol.
> Configure MPLS and LDP. <---------------------------------- REALLY
> ????????????????
> Configure BGP.
> "
>
> Why do you need to enable MPLS and LDP for PIC ?????
>
Well, because the Junos implementation of PIC sucks big time :)

> IMHO this is a documentation error , or do I miss something ?
>
> Regarding you suggestion of using it in a routing instance with version
> <15.1 I am not sure if that works as documentation says that it only works for
> vpnv4-BGP routes
>
> DOCU says
> "Before you begin:
>
> Configure LDP.
> Configure an IGP, either OSPF or IS-IS.
> Configure a Layer 3 VPN.
> Configure multiprotocol BGP for either an IPv4 VPN or an IPv6 VPN.
> <---------------- this seems to be a restriction regarding your proposed
> solution "
>
And here I was hoping that Junos would finally support BGP PIC for v4 and v6 (not VRF-lite).


adam


Re: Optimizing the FIB on MX
> Vincent Bernat
> Sent: Wednesday, February 17, 2016 11:14 PM
>
> 17 February 2016 22:56 GMT, Adam Vitkovsky <Adam.Vitkovsky@gamma.co.uk>:
>
> >> Being a bit unsatisfied with a pair of MX104 turning themselves as a
> >> blackhole during BGP convergence, I am trying to reduce the size of the
> FIB.
> >>
> > You mentioned earlier that this is a new installation so why not use
> > routing instance for Internet which allows you to use PIC with your
> > current version of code and save you all this trouble duck-taping the
> > solution together.
>
> You are right. I didn't understand your answer the first time as I thought that
> PIC was for "programmable integrated circuit", so I thought this was a plan
> for Juniper to fix the problem with some dedicated piece of hardware.
> --
Sorry about that, I'll try to be more explicit in my future posts.

The setup is really easy


1) carve up the FIB so that it allows for multiple next-hops (in our case pointer to a backup path)
set routing-options forwarding-table export lb
set policy-options policy-statement lb then load-balance per-packet


2)advertise the best external routes
set protocols bgp group MX140-IBGP advertise-external <<<configured on iBGP session between the MX140s

- BGP by default advertises only the overall best path to neighbours (to save memory). So if MX140-A has a prefix X with, say, a shorter AS_PATH and advertises it to the neighbouring MX140-B, then MX140-B, even though it has its own route for prefix X (albeit with a longer AS_PATH), would by default keep quiet and not tell MX140-A about it. So MX140-A wouldn't know there's a possible backup path it could use for BGP PIC fast reroute.
So the "advertise best external" feature modifies the default behaviour: in addition to the overall best path, the best path among all the eBGP paths is selected and advertised.


3)enable "provider edge link protection"
set routing-instances Internet protocols bgp group toTransit family inet unicast protection
set routing-instances Internet protocols bgp group toTransit family inet6 unicast protection


4)check
show route 1.1.1.6 extensive
-at the last tabbed section concerning the indirect next-hops
Next hop: ELNH Address 0x9240a74 weight 0x1, selected <<<<<your primary next-hop to Transit ISP
Next hop: 10.1.1.26 via ge-2/0/2.0

Next hop: ELNH Address 0x92413a8 weight 0x4000 <<<<your backup next-hop to the other MX104
Next hop: 10.1.1.17 via ge-2/0/1.0
Label operation: Push 299840, Push 299776(top)


5) always prefer eBGP over iBGP*
set policy-options policy-statement FROM_TRANSIT term INGRESS_POLICY_A then preference 169 <<< or whatever works for you

- By default, if MX140-A from our previous example loses its transit link, it will (via BGP PIC) immediately reroute traffic to MX140-B.
However, by default MX140-B has a best path via MX140-A, so until it receives a withdraw from MX140-A, it will loop traffic back to MX140-A.
That's why you want MX140-B to prefer its local exit.

* Not sure what Juniper and ALU were thinking when they came up with the same protocol preference for eBGP and iBGP routes; there's a ton of reasons why you always want to prefer the closest AS exit.


Caveats:
"vrf-table-label" must be enabled on the routing instance on the MX140s - just another stupidity in this script-kiddie OS that is Junos.
Please note that the advertise-external feature will cause increased RP RAM utilization.

Sorry about the rants, just can't help it.
Why Juniper, when reinventing the wheel, wouldn't make it circle-shaped is just beyond my comprehension.

adam


Re: Optimizing the FIB on MX
Just commenting on a couple things:

> If the MX140-A from our previous example loses its Transit link it will (via BGP-PIC) immediately reroute traffic to MX140-B
> However by default MX140-B has a best path via MX140-A -so until it receives withdrawn from MX140-A it'll loop traffic back to MX140-A.
> That's why you want MX140-B to prefer it's local exit.
>
> *not sure what was Juniper and ALU thinking when they came up with the same protocol preference for eBGP and iBGP routes, there's a ton of reasons why you always want to prefer closest AS-EXIT.

Probably the same as Cisco, which has on multiple occasions promoted
using the same administrative distance (200) for both eBGP and iBGP as
"best practice".

> Caveats:
> "vrf-table-label" must be enabled at the routing-instance on the MX140s - just another stupidity in this script kiddie OS of Junos

You are of course free to call JunOS whatever you want. Calling JunOS a
"script kiddie OS" may not be the best way to be taken seriously.

In any case, vrf-table-label is *much* older than PIC (around 10 years,
if I remember correctly).

Steinar Haug, Nethelp consulting, sthaug@nethelp.no
Re: Optimizing the FIB on MX
❦ 18 February 2016 10:50 GMT, Adam Vitkovsky <Adam.Vitkovsky@gamma.co.uk>:

>> You are right. I didn't understand your answer the first time as I thought that
>> PIC was for "programmable integrated circuit", so I thought this was a plan
>> for Juniper to fix the problem with some dedicated piece of hardware.

> Sorry about that, I'll try to be more explicit in my future posts.
>
> The setup is really easy
[...]

Oh, many thanks for the detailed setup! I'll need some time to upgrade to
15.1 and I'll get back to you with the results once this is done.

> 5)always prefer eBGP over iBGP*
> set policy-options policy-statement FROM_TRANSIT term INGRESS_POLICY_A then preference 169 <<or whatever works for you
>
> -by default,
> If the MX140-A from our previous example loses its Transit link it will (via BGP-PIC) immediately reroute traffic to MX140-B
> However by default MX140-B has a best path via MX140-A -so until it
> receives withdrawn from MX140-A it'll loop traffic back to MX140-A.
> That's why you want MX140-B to prefer it's local exit.
>
> *not sure what was Juniper and ALU thinking when they came up with the
> same protocol preference for eBGP and iBGP routes, there's a ton of
> reasons why you always want to prefer closest AS-EXIT.

Unfortunately, I don't have the same upstream on both MXes, and for some
routes one of them may have a better route than the other. The two MXes
are advertising just a default, so they can attract traffic that would
be better routed by their neighbor. I'll have to think a bit about what's
more important.
--
Make sure your code "does nothing" gracefully.
- The Elements of Programming Style (Kernighan & Plauger)
Re: Optimizing the FIB on MX
So is the MX104 processor really that underpowered? I have heard reports
that it was too underpowered for its price point, and now I am starting to
believe it. Vincent, what are your thoughts?

On Wed, Feb 17, 2016 at 5:14 PM, Vincent Bernat <bernat@luffy.cx> wrote:

> ❦ 17 February 2016 22:56 GMT, Adam Vitkovsky <Adam.Vitkovsky@gamma.co.uk>:
>
> >> Being a bit unsatisfied with a pair of MX104 turning themselves as a
> blackhole
> >> during BGP convergence, I am trying to reduce the size of the FIB.
> >>
> > You mentioned earlier that this is a new installation so why not use
> > routing instance for Internet which allows you to use PIC with your
> > current version of code and save you all this trouble duck-taping the
> > solution together.
>
> You are right. I didn't understand your answer the first time as I
> thought that PIC was for "programmable integrated circuit", so I thought
> this was a plan for Juniper to fix the problem with some dedicated piece
> of hardware.
> --
> Truth is the most valuable thing we have -- so let us economize it.
> -- Mark Twain
Re: Optimizing the FIB on MX
I've not used the MX104, but the MX80 is incredibly slow to commit
changes and, from discussion on this mailing list, slow to converge as
well. As has been mentioned, though, this can be worked around by using
things like BGP PIC and LFA to maintain a valid forwarding path while
the control plane sorts itself out.

What I would be interested to know, though, is: do these technologies use
additional FIB slots? I.e., by enabling BGP PIC, are you effectively
cutting your FIB capacity in half, from 1 million routes to 0.5
million?

Regards,
Dave

On 18 February 2016 at 13:31, Colton Conor <colton.conor@gmail.com> wrote:
> So is the MX-104 processor really that underpowered? I have heard reports
> that is was too underpowered for its pricepoint, and now I am starting to
> believe it. Vincent what are your thoughts?
>
> On Wed, Feb 17, 2016 at 5:14 PM, Vincent Bernat <bernat@luffy.cx> wrote:
>
>> ❦ 17 February 2016 22:56 GMT, Adam Vitkovsky <Adam.Vitkovsky@gamma.co.uk>:
>>
>> >> Being a bit unsatisfied with a pair of MX104 turning themselves as a
>> blackhole
>> >> during BGP convergence, I am trying to reduce the size of the FIB.
>> >>
>> > You mentioned earlier that this is a new installation so why not use
>> > routing instance for Internet which allows you to use PIC with your
>> > current version of code and save you all this trouble duck-taping the
>> > solution together.
>>
>> You are right. I didn't understand your answer the first time as I
>> thought that PIC was for "programmable integrated circuit", so I thought
>> this was a plan for Juniper to fix the problem with some dedicated piece
>> of hardware.
>> --
>> Truth is the most valuable thing we have -- so let us economize it.
>> -- Mark Twain
Re: Optimizing the FIB on MX
❦ 18 February 2016 07:31 -0600, Colton Conor <colton.conor@gmail.com>:

> So is the MX-104 processor really that underpowered? I have heard
> reports that is was too underpowered for its pricepoint, and now I am
> starting to believe it. Vincent what are your thoughts?

Well, I don't have enough experience with it. However, it's unfortunate
that it needs a pricey license to handle a full view but is not able to
accommodate it efficiently (without BGP PIC).
--
Work consists of whatever a body is obliged to do.
Play consists of whatever a body is not obliged to do.
-- Mark Twain
Re: Optimizing the FIB on MX
On 18 February 2016 at 15:31, Colton Conor <colton.conor@gmail.com> wrote:
> So is the MX-104 processor really that underpowered? I have heard reports
> that it was too underpowered for its price point, and now I am starting to
> believe it. Vincent, what are your thoughts?

Define underpowered?

MX80 has the 8572, also sported by platforms such as sup7, sup2t,
nexus7k, me3600x, sfm3-12@alu.
RSP720 and EX8200 RE have an even slower-spec CPU, the 8548.

MX104 has a faster CPU than any of these, the P5021. Yet the RSP720 runs
circles around the MX104 in terms of BGP performance.

I'd say it is underpowered for JunOS (all PPCs are, though whether
that's a HW or SW issue is debatable), but it really can't be considered
a particularly slow CPU in this market generally, especially during its
launch year.

--
++ytti
Re: Optimizing the FIB on MX [ In reply to ]
Saku,

You seem to know a bit about processors, to say the least.

What processor is in the Cisco 9001, and how does it compare to the MX104
in terms of speed and BGP performance?

What about a Cisco 9010 ASR9K Route Switch Processor with 440G/slot Fabric
and 6GB?

On Thu, Feb 18, 2016 at 8:12 AM, Saku Ytti <saku@ytti.fi> wrote:

> On 18 February 2016 at 15:31, Colton Conor <colton.conor@gmail.com> wrote:
> > So is the MX-104 processor really that underpowered? I have heard reports
> > that is was too underpowered for its pricepoint, and now I am starting to
> > believe it. Vincent what are your thoughts?
>
> Define underpowered?
>
> MX80 has 8572, also sported by platforms such as sup7, sup2t, nexus7k,
> me3600x, sfm3-12@alu
> RSP720, EX8200 RE have even slower spec cpu 8548
>
> MX104 has faster cpu than any of these, P5021. Yet RSP720 runs circles
> around MX104 in terms of BGP performance.
>
> I'd say it is underpowered for JunOS (All PPC's are, but is that HW or
> SW issue, that's debatable), but it really can't be considered
> particularly slow cpu in this market generally, especially during its
> launch year.
>
> --
> ++ytti
>
Re: Optimizing the FIB on MX [ In reply to ]
On Wed, Feb 17, 2016 at 03:18:59PM -0500, Chuck Anderson wrote:
>Can you use Junos 15.1? Try this:
>
>http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html

From
http://www.juniper.net/techpubs/en_US/junos15.1/topics/task/configuration/bgp-configuring-bgp-pic-for-inet.html

"Note: The BGP PIC edge feature is supported only on routers with
MPC interfaces."

AIUI, this excludes MX80/MX104 - arguably where one would need it
most...

cheers,
s.

Re: Optimizing the FIB on MX [ In reply to ]
> >http://www.juniper.net/techpubs/en_US/junos15.1/topics/concept/use-case-for-bgp-pic-for-inet-inet6-lu.html
>
> From
> http://www.juniper.net/techpubs/en_US/junos15.1/topics/task/configuration/bgp-configuring-bgp-pic-for-inet.html
>
> "Note: The BGP PIC edge feature is supported only on routers with
> MPC interfaces."
>
> AIUI, this excludes MX80/MX104 - arguably where one would need it
> most...

MX80/MX104 is a "fixed config" MPC.
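
For anyone trying this, the linked 15.1 documentation configures the PIC
protection under the routing instance carrying the eBGP routes. A rough
sketch of the relevant stanza (instance name and interface are
placeholders, RD/RT plumbing omitted; exact support depends on platform
and release, as discussed above):

```
routing-instances {
    INTERNET {                      # placeholder instance name
        instance-type vrf;
        interface ge-0/0/0.0;       # placeholder upstream interface
        routing-options {
            protect core;           # precompute backup next-hops (BGP PIC)
        }
    }
}
```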

Steinar Haug, Nethelp consulting, sthaug@nethelp.no
Re: Optimizing the FIB on MX [ In reply to ]
On 18 February 2016 at 16:21, Colton Conor <colton.conor@gmail.com> wrote:

Hey Colton,

> What processor is in the Cisco 9001, and how does it compare to a MX104 in
> terms of speed and BGP Performance?

ASR9001 has a P4040 on the RP, with lower single-core performance than
the MX104's P5021. But the problem this thread addresses is not a
problem IOS-XR has.

> What about a Cisco 9010 ASR9K Route Switch Processor with 440G/slot Fabric
> and 6GB?

RSP440 is a 4-core Intel, at about 2GHz. I'm actually not sure which
specific Intel CPU.

--
++ytti
Re: Optimizing the FIB on MX [ In reply to ]
Hi Ytti / Colton,

ASR9001-RP
cisco ASR9K Series (P4040) processor with 8388608K bytes of memory.
P4040 processor at 1500MHz, Revision 3.0

This box is only available as the SE (service enhanced) version.

A9K-RSP440-SE
cisco ASR9K Series (Intel 686 F6M14S4) processor with 12582912K bytes of memory.
Intel 686 F6M14S4 processor at 2135MHz, Revision 2.174

There is a TR (transport) version with half the memory:
http://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/data_sheet_c78-674143.html

A9K-RSP880-SE
cisco ASR9K Series (Intel 686 F6M14S4) processor with 33554432K bytes of memory.
Intel 686 F6M14S4 processor at 1904MHz, Revision 2.174

There is a TR (transport) version with half the memory:
http://www.cisco.com/c/en/us/products/collateral/routers/asr-9000-series-aggregation-services-routers/datasheet-c78-733763.html

As the ASR9001 and ASR9006/9010 have a different CPU architecture than the MX104 and MX240/480/960, the comparison is not easy just by the type of the CPU itself.

--
Sebastian Becker
sb@lab.dtag.de

> On 18.02.2016 at 16:06, Saku Ytti <saku@ytti.fi> wrote:
>
>
> On 18 February 2016 at 16:21, Colton Conor <colton.conor@gmail.com> wrote:
>
> Hey Colton,
>
>> What processor is in the Cisco 9001, and how does it compare to a MX104 in
>> terms of speed and BGP Performance?
>
> ASR9001 is P4040 on RP, lower single core performance than MX104
> P5021. But the problem this thread addresses is not a problem IOS-XR
> has.
>
>> What about a Cisco 9010 ASR9K Route Switch Processor with 440G/slot Fabric
>> and 6GB?
>
> RSP440 is 4 core Intel, at about 2GHz. I'm actually sure which
> specific Intel CPU.
>
> --
> ++ytti

Re: Optimizing the FIB on MX [ In reply to ]
On 18 February 2016 at 17:29, Sebastian Becker <sb@lab.dtag.de> wrote:

Hey Sebastian,

> As AS9001 and AS9006/9010 have a different cpu architecture as MX104 and MX240/480/960 the comparison is not easy just by the type of the cpu itself.

ASR9001 and MX104 use the same Freescale QorIQ family, so it's a very
direct comparison. Spec sheets are publicly available.
Larger MX and ASR9k use Intel x86/AMD64, so they are easy to compare.

But of course the software is very different, and the MX104 issue this
thread is about does not exist in IOS-XR, regardless of how slow a CPU
it is rocking.

--
++ytti
Re: Optimizing the FIB on MX [ In reply to ]
> sthaug@nethelp.no [mailto:sthaug@nethelp.no]
> Sent: Thursday, February 18, 2016 11:13 AM
>
> Just commenting on a couple things:
>
> > If the MX140-A from our previous example loses its Transit link it
> > will (via BGP-PIC) immediately reroute traffic to MX140-B. However, by
> > default MX140-B has a best path via MX140-A, so until it receives a
> > withdraw from MX140-A it'll loop traffic back to MX140-A.
> > That's why you want MX140-B to prefer its local exit.
> >
> > *not sure what Juniper and ALU were thinking when they came up with the
> > same protocol preference for eBGP and iBGP routes; there's a ton of reasons
> > why you always want to prefer the closest AS exit.
>
> Probably the same as Cisco, when Cisco has on multiple occasions
> promoted using the same administrative distance (200) for both EBGP and
> IBGP as "best practice".
>
Well, the bottom line is that sanity won.
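
As an aside, Junos does let you break the eBGP/iBGP tie per group; a
hedged sketch of preferring the local AS exit (the group name is a
placeholder):

```
protocols {
    bgp {
        group TRANSIT {             # placeholder eBGP group name
            type external;
            preference 160;         # below the 170 default shared with iBGP,
                                    # so the local exit wins path selection
        }
    }
}
```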


> > Caveats:
> > "vrf-table-label" must be enabled at the routing-instance on the
> > MX140s - just another stupidity in this script kiddie OS of Junos
>
> You are of course free to call JunOS whatever you want. Calling JunOS a
> "script kiddie OS" may not be the best way to be taken seriously.
>
> In any case, vrf-table-label is *much* older than PIC (around 10 years, if I
> remember correctly).
>
And that makes vrf-table-label a prerequisite for PIC?
My point is that if a packet is received with a VRF label, the label
points to an indirect next-hop pointer, and that pointer points to a
forwarding next-hop, i.e. an IP edge interface and the adjacent L2
rewrite. As a backup it could point to an MPLS core interface and an
adjacent label stack (NH label to the backup PE, if any, and the VRF
label that backup PE advertised) - basically an ASBR option B label-swap
operation.
Why would I need to run an unnecessary IP lookup in the VRF table to
derive info about the backup path?
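
For context, the caveat being debated is the one-liner below; with it the
PE advertises a single per-VRF label and performs an IP lookup in the VRF
table on incoming MPLS packets, which is the extra lookup objected to
above (the instance name is a placeholder):

```
routing-instances {
    INTERNET {                      # placeholder instance name
        vrf-table-label;            # one label per VRF; labelled packets get
                                    # an IP lookup in the VRF table
    }
}
```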

adam

Adam Vitkovsky
IP Engineer

T: 0333 006 5936
E: Adam.Vitkovsky@gamma.co.uk
W: www.gamma.co.uk
