Dear list,
we have never seen this issue before, but here it is:
A CPE connects two channels (supervectoring VDSL) using MLPPPoE.
The combined downstream bandwidth should be 400+ Mbps, but each
session seems limited to almost exactly 100 Mbps.
Single sessions do well above 200 Mbps on each link.
First seen on an ASR1001-X in production and verified on an ASR1002
afterwards with no other clients connected.
In case you'd argue against MLPPP here: its main use would be
bundling the 30+30 Mbps upstream - but losing half the downstream
on the way is not expected.
Has anyone ever heard of some magic per-session MLPPP speed limit
built into the QFP or any other ASR hardware?
(Fun fact: broadband_4k is licensed on the production box.)
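For anyone wanting to check the same thing on their box: these are the standard IOS-XE show commands that should reveal whether a hidden shaper or the QFP itself is dropping traffic (run them while a transfer is in progress; any interface/session names are placeholders, not from our setup):

```
! Bundle and member-link state, fragment counters, lost fragments
show ppp multilink

! Any policy-map attached to the virtual-access / bundle interface
show policy-map interface

! QFP drop counters - a shaper or tail drop should show up here
show platform hardware qfp active statistics drop

! QFP load, to rule out the forwarding processor simply running hot
show platform hardware qfp active datapath utilization
```

If the ~100 Mbps ceiling were a configured or licensed limit, I would expect it to surface as drops in one of these rather than stay invisible.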
Regards,
hk
_______________________________________________
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/