Mailing List Archive

Re: Offload PPPoE processing from DSL aggregation 7206 to another 7206?
On Fri, Jul 21, 2006 at 11:54:22AM -0400, vince@cisco.com wrote:
> Scott,
>
> When looking at CPU, it's important to look at both numbers and the
> processes.

This is the current status. We haven't hit the peak time of day just
yet.

router-7204#show proc cpu | exclude 0.00% 0.00%
CPU utilization for five seconds: 67%/35%; one minute: 65%; five minutes: 65%
PID Runtime(ms)   Invoked  uSecs   5Sec   1Min   5Min TTY Process
  3      879936   3434053    256  0.08%  0.15%  0.16%   0 OSPF Hello
  4      874024     75920  11512  0.00%  0.24%  0.18%   0 Check heaps
 15     2331748   2152819   1083  0.57%  0.40%  0.38%   0 ARP Input
 16      155060    104160   1488  0.00%  0.02%  0.00%   0 HC Counter Timer
 22    16505324   3536298   4667  3.43%  3.47%  3.45%   0 Net Background
 25      564820    453492   1245  0.00%  0.05%  0.05%   0 Per-Second Jobs
 40     2317800  13995077    165  0.32%  0.56%  0.53%   0 IP Input
 41     1152344   5346371    215  0.16%  0.20%  0.18%   0 PPP auth
 49      361768   5269743     68  0.24%  0.06%  0.06%   0 IP Background
 63      363544    748701    485  0.08%  0.07%  0.08%   0 CEF process
 71      393668      6970  56480  0.00%  0.09%  0.06%   0 IP Cache Ager
 76    54270392   3276067  16565 12.42% 10.94% 10.91%   0 VTEMPLATE Backgr
 93      256388  13600790     18  0.08%  0.03%  0.01%   0 Net Input
 94      865600     83490  10367  0.08%  0.15%  0.16%   0 Compute load avg
 95       86016     10069   8542  0.00%  0.01%  0.00%   0 Per-minute Jobs
108    79760492   5828914  13683 14.28% 14.33% 15.54%   0 PPPOE discovery
114     2772132   8902392    311  0.24%  0.28%  0.26%   0 PPP manager
121      597352   4526689    131  0.00%  0.08%  0.08%   0 RADIUS
126     1067848   6702179    159  0.16%  0.26%  0.22%   0 OSPF Router
128          28       146    191  0.00%  0.01%  0.00%   2 Virtual Exec
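
For anyone reading along, the two numbers on the utilization line mean
different things: the left-hand figure is total CPU over the interval,
and the right-hand figure is the portion of that spent at interrupt
level doing packet switching.

    CPU utilization for five seconds: 67%/35%
                                       |   |
                                       |   '-- at interrupt level (packet switching)
                                       '------ total CPU

So roughly 67 - 35 = 32% is process-level work, which lines up with the
big entries in the list (PPPOE discovery ~14%, VTEMPLATE Backgr ~12%,
Net Background ~3%, plus the small ones).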

> The left and the right. A fair amount of the time the CPU is high
> because of fragmentation.
>
> 1000 users on a NPE400 sounds a little low, but this also depends on
> the throughput.
>
> Can you post your config? Do you have any MTU adjust commands in your
> config?

I'll attach a privacy-modified version of the config that RANCID keeps.

Just in the PPPoE virtual template:

interface Virtual-Template3
description PPPoE Template
mtu 1492
ip unnumbered FastEthernet0/0.1
no ip route-cache
ip ospf database-filter all out
no logging event link-status
peer default ip address pool dsl
ppp authentication pap callin
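
Since fragmentation came up as a CPU eater: one common mitigation is to
clamp TCP MSS on the virtual template so end hosts never generate
1500-byte segments that have to be fragmented to fit the 1492-byte
PPPoE MTU. A sketch, not tested on this box (1452 = 1492 minus 40 bytes
of IP+TCP headers; exact command support depends on IOS version):

    interface Virtual-Template3
     mtu 1492
     ip tcp adjust-mss 1452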


> > -----Original Message-----
> > From: cisco-nas-bounces@puck.nether.net
> > [mailto:cisco-nas-bounces@puck.nether.net] On Behalf Of Scott Lambert
> > Sent: Thu Jul 20, 2006 2:04 PM
> > To: cisco-nas@puck.nether.net
> > Subject: Re: [cisco-nas] Offload PPPoE processing from DSL
> > aggregation 7206 to another 7206?
> >
> > On Thu, Jul 20, 2006 at 05:40:08AM +0200, Oliver Boehmer
> > (oboehmer) wrote:
> > > Scott Lambert <> wrote on Thursday, July 20, 2006 1:40 AM:
> > >
> > > > I have about 1000 PPPoE users on a 7206VXR with an NPE400. The
> > > > CPU load is at about 75% according to the MRTG 1- and 5-minute
> > > > averages. According to sho proc cpu, the load is much higher
> > > > than that for tens of seconds at a time. I'm thinking that is
> > > > about as high a load as I want on a router.
> > >
> > > Right, looks too high.
> > > Are you terminating PPPoE (over ATM) directly on the box, or are
> > > you terminating PPP sessions forwarded to you via L2TP?
> >
> > Sorry, it is PPPoE over ATM.
> >
> > > > I have another 500 users I need to migrate over from the
> > > > acquisition of another ISP. My connection to the Telco is an
> > > > OC3, and the migrated users will be brought in over the same
> > > > OC3.
> > >
> > > If you terminate the PPPoE sessions directly, you definitely need
> > > faster hardware. You could still forward the sessions via L2TP,
> > > but this will not really decrease the load compared to terminating
> > > them directly.
> >
> > I would like to thank everyone for their advice. I will be
> > investigating what it takes to do the L2TP to a cluster of *nix
> > boxes. If it doesn't take the same amount of horsepower to go from
> > ATM to L2TP tunnels as it does to go from ATM to PPPoE, it sounds
> > like a nice idea for future scalability.
> >
> > I now have an NPE-G1 on order. I hope that will hold us until we
> > run out of bandwidth on the ATM OC3. Or, at least, until it's
> > feasible to get another circuit we can terminate in a separate box.
> >
> > --
> > Scott Lambert KC5MLE
> > Unix SysAdmin
> > lambert@lambertfam.org
> >
> > _______________________________________________
> > cisco-nas mailing list
> > cisco-nas@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/cisco-nas
> >
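
For the L2TP-offload idea discussed above, the 7206 would act as a LAC
and forward the PPP sessions to the *nix LNS boxes. A rough sketch of
the IOS side only; the group name, domain, LNS address, and password
are all made up for illustration:

    vpdn enable
    !
    vpdn-group FORWARD-TO-LNS
     request-dialin
      protocol l2tp
      domain example.net
     initiate-to ip 192.0.2.10
     local name lac-7206
     l2tp tunnel password 0 SECRET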

--
Scott Lambert KC5MLE Unix SysAdmin
lambert@lambertfam.org