Mailing List Archive

SGBP/MMP limit on numbers of peers
I was wondering if there is any limit on the number of peers configured for SGBP.

We have 5 until now and we are thinking of making them 25. Is there any problem with this?
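
For context, here is a minimal sketch of the kind of per-box SGBP/MMP
configuration involved (the group name, peer names and addresses are made
up, not from the thread). Each member carries one "sgbp member" line per
other peer, so going to 25 peers means 24 such lines on every box:

    multilink virtual-template 1
    sgbp group STACK1
    sgbp member nas2 10.0.0.2
    sgbp member nas3 10.0.0.3
    ! ...one "sgbp member" line for each remaining peer...
    username STACK1 password <shared-secret>
    !
    interface Virtual-Template1
     ip unnumbered Loopback0
     ppp multilink
     ppp authentication chap
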
RE: SGBP/MMP limit on numbers of peers
Tassos Chatzithomaoglou <> wrote on Wednesday, October 26, 2005 5:07 PM:

> I was wondering if there is any limit on the number of peers
> configured for SGBP.
>
> We have 5 until now and we are thinking of making them 25. Is there
> any problem with this?

Hmm.. I have never seen 25 peers, but why do you need that many? Are
you planning to run SGBP between different pops to bundle multilink
links? Generally I would not recommend this, as the MLP link members
can end up with different serialization delays, which adds overhead at
the far end re-assembling fragments and putting the frames back in
order.
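
To put rough numbers on that skew (illustrative link speeds, not from
the thread): serialization delay is frame size divided by link rate, so
a 1500-byte frame takes 1500 * 8 / 64000 = 187.5 ms on a 64 kbps member
but roughly 1500 * 8 / 56000 = 214 ms on a 56 kbps member. The receiving
side then has to buffer fragments for the difference, about 27 ms per
full-size frame, before it can put them back in order.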

oli

Re: SGBP/MMP limit on numbers of peers
Oliver Boehmer (oboehmer) wrote:

>Tassos Chatzithomaoglou <> wrote on Wednesday, October 26, 2005 5:07 PM:
>
>>I was wondering if there is any limit on the number of peers
>>configured for SGBP.
>>
>>We have 5 until now and we are thinking of making them 25. Is there
>>any problem with this?
>
>Hmm.. I have never seen 25 peers, but why do you need that many?
>

I dimly recall hearing of an SGBP group that comprised some 100 or more
5300s, which reportedly worked well.

>Are
>you planning to run SGBP between different pops to bundle multilink
>links? Generally I would not recommend this, as the MLP link members
>can end up with different serialization delays, which adds overhead at
>the far end re-assembling fragments and putting the frames back in
>order.
>
> oli
>
>

I've seen an SGBP group that spanned pops working OK ... the idea is
that each pop has its own lead number, which hunts into other pops only
if all of the local pop's trunks are full or down. That way you get
hardly any MLPPP bundles that actually span pops ... seems to me that
this is a pretty good design wrt scalability and resiliency.

Cheers,

Aaron


_______________________________________________
cisco-nas mailing list
cisco-nas@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nas