Mailing List Archive

Netflow config for MX204
Hi,

Just wondering if someone here has a working netflow config for an MX204
they might be able to share.

Last time I did netflow on a Juniper router it was a J2320.

--
Kind Regards


Liam Farr

Maxum Data
+64-9-950-5302
_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
Re: Netflow config for MX204 [ In reply to ]
On 8/Apr/20 11:26, Liam Farr wrote:
> Hi,
>
> Just wondering if someone here has a working netflow config for an MX204
> they might be able to share.
>
> Last time I did netflow on a Juniper router it was a J2320.

https://www.juniper.net/documentation/en_US/junos/topics/example/inline-sampling-configuring.html

Mark.

Re: Netflow config for MX204 [ In reply to ]
On Wed, 8 Apr 2020 09:26:10 +0000
Liam Farr <liam@maxumdata.com> wrote:

> Just wondering if someone here has a working netflow config for an MX204
> they might be able to share.

I've used IPFIX before; here is an example of how that might be set up.
Whether it is good or not I'll let others judge, and I can fix it if
there is feedback:

<https://github.com/jtkristoff/junos/blob/master/flows.md>

John
Re: Netflow config for MX204 [ In reply to ]
On 8/Apr/20 14:42, John Kristoff wrote:

>
> I've used IPFIX before; here is an example of how that might be set up.
> Whether it is good or not I'll let others judge, and I can fix it if
> there is feedback:
>
> <https://github.com/jtkristoff/junos/blob/master/flows.md>

Looks good.

The only issue we've found is that you can't export flows over IPv6. Not
a big issue, since you can export IPv6 flows over IPv4, but still :-)...

Also, in some versions of Junos, you can't export flows to more than one
collector at the same time for the same address family. But this is
fixed in Junos 17 onward.
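
For reference, a sketch of what dual-collector export for one family
looks like on Junos 17 and later (the collector addresses here are
made-up documentation addresses, not from this thread):

forwarding-options {
    sampling {
        instance {
            default {
                family inet {
                    output {
                        flow-server 192.0.2.10 {
                            port 6363;
                            version-ipfix {
                                template {
                                    v4;
                                }
                            }
                        }
                        flow-server 192.0.2.20 {
                            port 6363;
                            version-ipfix {
                                template {
                                    v4;
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}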

Mark.
Re: Netflow config for MX204 [ In reply to ]
On 8/Apr/20 14:51, Mark Tinka wrote:

>
> Looks good.

The only other thing I would do differently is to sample directly on the
interface, rather than through a firewall filter:

xe-0/1/0 {
    unit 0 {
        family inet {
            sampling {
                input;
                output;
            }
        }
        family inet6 {
            sampling {
                input;
                output;
            }
        }
    }
}

But either works. I just haven't sampled via firewall filters for some
time now.
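
For comparison, a minimal sketch of the firewall-filter approach: you
mark traffic for sampling with "then sample" in a filter term, then
apply the filter to the interface. The filter name and interface here
are hypothetical:

firewall {
    family inet {
        filter SAMPLE-V4 {
            term all {
                then {
                    sample;
                    accept;
                }
            }
        }
    }
}
interfaces {
    xe-0/1/0 {
        unit 0 {
            family inet {
                filter {
                    input SAMPLE-V4;
                }
            }
        }
    }
}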
Re: Netflow config for MX204 [ In reply to ]
Hi,

IMHO, sampling directly on the interface permits the use of plugins in
(for example) Elastiflow to highlight odd traffic behavior (scans/DDoS).

-----
Alain Hebert ahebert@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911 http://www.pubnix.net Fax: 514-990-9443

On 2020-04-08 08:56, Mark Tinka wrote:
>
> On 8/Apr/20 14:51, Mark Tinka wrote:
>
>> Looks good.
> The only other thing I would do differently is to sample directly on the
> interface, rather than through a firewall filter:
>
> xe-0/1/0 {
>     unit 0 {
>         family inet {
>             sampling {
>                 input;
>                 output;
>             }
>         }
>         family inet6 {
>             sampling {
>                 input;
>                 output;
>             }
>         }
>     }
> }
>
> But either works. Just haven't sampled in firewall filters for some time
> now.

Re: Netflow config for MX204 [ In reply to ]
hey,

> I've used IPFIX before; here is an example of how that might be set up.
> Whether it is good or not I'll let others judge, and I can fix it if
> there is feedback:
>
> <https://github.com/jtkristoff/junos/blob/master/flows.md>

I don't have any 204s, but perhaps use flex-flow-sizing instead of
manual table sizes?

And if you do a lot of flows then you need to raise flow-export-rate
from the default as well.
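
Roughly, a sketch of both knobs together (values are illustrative, not
recommendations; flow-export-rate is in units of 1000 pps as I recall,
and the source address here is a made-up documentation address):

chassis {
    fpc 0 {
        sampling-instance default;
        inline-services {
            flex-flow-sizing;
        }
    }
}
forwarding-options {
    sampling {
        instance {
            default {
                input {
                    rate 100;
                }
                family inet {
                    output {
                        inline-jflow {
                            source-address 192.0.2.1;
                            flow-export-rate 10;
                        }
                    }
                }
            }
        }
    }
}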

--
tarko
Re: Netflow config for MX204 [ In reply to ]
On 8/Apr/20 16:33, Tarko Tikan wrote:

>
> I don't have any 204s, but perhaps use flex-flow-sizing instead of
> manual table sizes?
>
> And if you do a lot of flows then you need to raise flow-export-rate
> from the default as well.

Does one need to reboot the box if you switch to "flex-flow-sizing"? The
documentation seems to say so if you're going from the old format to the
new one.

Mark.

Re: Netflow config for MX204 [ In reply to ]
hey,

> Does one need to reboot the box if you switch to "flex-flow-sizing"? The
> documentation seems to say so if you're going from the old format to the
> new one.

AFAIR no. You can verify via "show jnh 0 inline-services
flow-table-info" from the PFE shell.

--
tarko
Re: Netflow config for MX204 [ In reply to ]
Hi,

I'm using the config example at
https://github.com/jtkristoff/junos/blob/master/flows.md (many thanks) with
a couple of exceptions.

However I am getting export packet failures.

Exceptions / changes from the example are the use of *flex-flow-sizing*
and *sampling on the interface* rather than a firewall filter.

Config is as follows:

chassis {
    fpc 0 {
        sampling-instance default;
        inline-services {
            flex-flow-sizing;
        }
    }
}
services {
    flow-monitoring {
        version-ipfix {
            template v4 {
                ipv4-template;
            }
            template v6 {
                ipv6-template;
            }
        }
    }
}
forwarding-options {
    sampling {
        sample-once;
        instance {
            default {
                input {
                    rate 100;
                }
                family inet {
                    output {
                        flow-server 103.247.xxx.xxx {
                            port 6363;
                            version-ipfix {
                                template {
                                    v4;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 43.252.xxx.xxx;
                        }
                    }
                }
                family inet6 {
                    output {
                        flow-server 103.247.xxx.xxx {
                            port 6363;
                            version-ipfix {
                                template {
                                    v6;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 43.252.xxx.xxx;
                        }
                    }
                }
            }
        }
    }
}
interfaces {
    xe-0/1/7 {
        unit 442 {
            vlan-id 442;
            family inet {
                mtu 1998;
                sampling {
                    input;
                    output;
                }
                address 111.69.xxx.xxx/30;
            }
            family inet6 {
                mtu 1998;
                sampling {
                    input;
                    output;
                }
                address 2406:xxxx:xxxx:xxxx::xxxx/64;
            }
        }
    }
}

For the source address I had originally used the internal management
network address on fxp0, but was receiving no flows at the collector, so
I changed to a loopback address in one of the VRFs. Both the internal
management IP and the VRF loopback have reachability to the flow-server
address.

Below is the error output:

show services accounting errors inline-jflow fpc-slot 0
Error information
FPC Slot: 0
Flow Creation Failures: 0
Route Record Lookup Failures: 0, AS Lookup Failures: 0
Export Packet Failures: 137
Memory Overload: No, Memory Alloc Fail Count: 0

IPv4:
IPv4 Flow Creation Failures: 0
IPv4 Route Record Lookup Failures: 0, IPv4 AS Lookup Failures: 0
IPv4 Export Packet Failures: 134

IPv6:
IPv6 Flow Creation Failures: 0
IPv6 Route Record Lookup Failures: 0, IPv6 AS Lookup Failures: 0
IPv6 Export Packet Failures: 3

show services accounting flow inline-jflow fpc-slot 0
Flow information
FPC Slot: 0
Flow Packets: 7976, Flow Bytes: 1129785
Active Flows: 83, Total Flows: 2971
Flows Exported: 1814, Flow Packets Exported: 1477
Flows Inactive Timed Out: 1020, Flows Active Timed Out: 1725
Total Flow Insert Count: 1246

IPv4 Flows:
IPv4 Flow Packets: 7821, IPv4 Flow Bytes: 951645
IPv4 Active Flows: 82, IPv4 Total Flows: 2912
IPv4 Flows Exported: 1776, IPv4 Flow Packets exported: 1439
IPv4 Flows Inactive Timed Out: 1003, IPv4 Flows Active Timed Out: 1687
IPv4 Flow Insert Count: 1225

IPv6 Flows:
IPv6 Flow Packets: 155, IPv6 Flow Bytes: 178140
IPv6 Active Flows: 1, IPv6 Total Flows: 59
IPv6 Flows Exported: 38, IPv6 Flow Packets Exported: 38
IPv6 Flows Inactive Timed Out: 17, IPv6 Flows Active Timed Out: 38
IPv6 Flow Insert Count: 21

show services accounting status inline-jflow fpc-slot 0
Status information
FPC Slot: 0
IPV4 export format: Version-IPFIX, IPV6 export format: Version-IPFIX
BRIDGE export format: Not set, MPLS export format: Not set
IPv4 Route Record Count: 1698135, IPv6 Route Record Count: 247572, MPLS
Route Record Count: 0
Route Record Count: 1945707, AS Record Count: 167101
Route-Records Set: Yes, Config Set: Yes
Service Status: PFE-0: Steady
Using Extended Flow Memory?: PFE-0: No
Flex Flow Sizing ENABLED?: PFE-0: Yes
IPv4 MAX FLOW Count: 5242884, IPv6 MAX FLOW Count: 5242884
BRIDGE MAX FLOW Count: 5242884, MPLS MAX FLOW Count: 5242884

Not sure specifically what I am doing wrong here; it seems to be
collecting the flows OK, but exporting is the issue?

I'd appreciate any advice or pointers thanks :)


On Thu, 9 Apr 2020 at 04:20, Tarko Tikan <tarko@lanparty.ee> wrote:

> hey,
>
> > Does one need to reboot the box if you switch to "flex-flow-sizing"? The
> > documentation seems to say so if you're going from the old format to the
> > new one.
>
> AFAIR no. You can verify via "show jnh 0 inline-services
> flow-table-info" from the PFE shell.
>
> --
> tarko
>


--
Kind Regards


Liam Farr

Maxum Data
+64-9-950-5302
Re: Netflow config for MX204 [ In reply to ]
On 8/Apr/20 18:17, Tarko Tikan wrote:

>  
>
> AFAIR no. You can verify via "show jnh 0 inline-services
> flow-table-info" from the PFE shell.

Okay.

To be honest, we are on the old method and don't notice any badness. One
of those "If it ain't broke" times :-).

Mark.

Re: Netflow config for MX204 [ In reply to ]
hey,

> To be honest, we are on the old method and don't notice any badness. One
> of those "If it ain't broke" times :-).

If you have your tables sized correctly then why would you notice
anything? They are the same tables after all.

I was just pointing out that if someone is distributing a template for
new users, perhaps include the newer automatic sizing. (It was not
available at first, so it's reasonable to be using manual sizing if you
started before it became available.)

--
tarko
Re: Netflow config for MX204 [ In reply to ]
Seems I can't just drop the forwarding-options into the VRF verbatim:

# show | compare
[edit]
- forwarding-options {
-     sampling {
-         sample-once;
-         instance {
-             default {
-                 input {
-                     rate 100;
-                 }
-                 family inet {
-                     output {
-                         flow-server 103.247.xxx.xxx {
-                             port 6363;
-                             version-ipfix {
-                                 template {
-                                     v4;
-                                 }
-                             }
-                         }
-                         inline-jflow {
-                             source-address 43.252.xxx.xxx;
-                         }
-                     }
-                 }
-                 family inet6 {
-                     output {
-                         flow-server 103.247.xxx.xxx {
-                             port 6363;
-                             version-ipfix {
-                                 template {
-                                     v6;
-                                 }
-                             }
-                         }
-                         inline-jflow {
-                             source-address 43.252.xxx.xxx;
-                         }
-                     }
-                 }
-             }
-         }
-     }
- }
[edit routing-instances myvrf_Intl]
+ forwarding-options {
+     sampling {
+         sample-once;
+         instance {
+             default {
+                 input {
+                     rate 100;
+                 }
+                 family inet {
+                     output {
+                         flow-server 103.247.xxx.xxx {
+                             port 6363;
+                             version-ipfix {
+                                 template {
+                                     v4;
+                                 }
+                             }
+                         }
+                         inline-jflow {
+                             source-address 43.252.xxx.xxx;
+                         }
+                     }
+                 }
+                 family inet6 {
+                     output {
+                         flow-server 103.247.xxx.xxx {
+                             port 6363;
+                             version-ipfix {
+                                 template {
+                                     v6;
+                                 }
+                             }
+                         }
+                         inline-jflow {
+                             source-address 43.252.xxx.xxx;
+                         }
+                     }
+                 }
+             }
+         }
+     }
+ }

[edit]
# commit check
[edit chassis fpc 0 sampling-instance]
'default'
Referenced sampling instance does not exist
[edit interfaces xe-0/1/7 unit 442 family inet]
'sampling'
Requires forwarding-options sampling or packet-capture config
[edit interfaces xe-0/1/7 unit 442 family inet6]
'sampling'
Requires forwarding-options sampling or packet-capture config
error: configuration check-out failed: (statements constraint check failed)


This would commit (i.e. not removing the base forwarding-options):

# show | compare
[edit routing-instances myvrf_Intl]
+ forwarding-options {
+     sampling {
+         instance {
+             default {
+                 input {
+                     rate 100;
+                 }
+                 family inet {
+                     output {
+                         flow-server 103.247.xxx.xxx {
+                             port 6363;
+                             version-ipfix {
+                                 template {
+                                     v4;
+                                 }
+                             }
+                         }
+                         inline-jflow {
+                             source-address 43.252.xxx.xxx;
+                         }
+                     }
+                 }
+                 family inet6 {
+                     output {
+                         flow-server 103.247.xxx.xxx {
+                             port 6363;
+                             version-ipfix {
+                                 template {
+                                     v6;
+                                 }
+                             }
+                         }
+                         inline-jflow {
+                             source-address 43.252.xxx.xxx;
+                         }
+                     }
+                 }
+             }
+         }
+     }
+ }

I then ran a *> clear services accounting flow inline-jflow fpc-slot 0*

But still getting export failures:

> show services accounting errors inline-jflow fpc-slot 0
Error information
FPC Slot: 0
Flow Creation Failures: 0
Route Record Lookup Failures: 0, AS Lookup Failures: 0
Export Packet Failures: 188
Memory Overload: No, Memory Alloc Fail Count: 0

IPv4:
IPv4 Flow Creation Failures: 0
IPv4 Route Record Lookup Failures: 0, IPv4 AS Lookup Failures: 0
IPv4 Export Packet Failures: 183

IPv6:
IPv6 Flow Creation Failures: 0
IPv6 Route Record Lookup Failures: 0, IPv6 AS Lookup Failures: 0
IPv6 Export Packet Failures: 5

Thoughts?

On Thu, 9 Apr 2020 at 19:35, Timur Maryin <timamaryin@mail.ru> wrote:

>
>
> On 09-Apr-20 08:20, Liam Farr wrote:
> > Hi,
> >
> > changed to a loopback address on one of the VRF's,
>
> ...
>
> > Not sure specifically what I am doing wrong here, it seems to be
> collecting
> > the flows ok, but exporting is the issue?
> >
> > I'd appreciate any advice or pointers thanks :)
>
>
> maybe this?
>
>
> https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/routing-instance-edit-forwarding-options-sampling.html
>
>

--
Kind Regards


Liam Farr

Maxum Data
+64-9-950-5302
Re: Netflow config for MX204 [ In reply to ]
On Thu, 9 Apr 2020 06:20:00 +0000
Liam Farr <liam@maxumdata.com> wrote:

> However I am getting export packet failures.

Some loss of flows being exported may be unavoidable depending on
your configuration and environment. If you want to see fewer errors
you may just have to sample less frequently. The numbers reported in
your "accounting errors" don't seem that large.

In my repo page where the example config is from, you'll see a couple of
images at the bottom that show the difference between the two modes. I
was aware of the flex mode when I originally did this. I think at the
time I was under the impression that setting the memory pools manually
offered some desirable predictability.

Looking back at my notes, I think it was when Juniper TAC told me this
that led me to that conclusion: "And regarding flex-flow-sizing; this
configuration results in a first-come-first-serve creation of flows.
Whichever flow comes first, that is allowed to occupy the flow-table if
there is space in the table. Otherwise, the flow is dropped and an
error count is created." Rightly or wrongly, I recall seeming to want
to ensure some amount of reasonable memory for both v4 and v6 flows.
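
For contrast with flex-flow-sizing, manual table sizing looks roughly
like this (the values here are hypothetical, not what I run; as I
recall the units are blocks of 256K flow entries):

chassis {
    fpc 0 {
        inline-services {
            flow-table-size {
                ipv4-flow-table-size 10;
                ipv6-flow-table-size 5;
            }
        }
    }
}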

John
Re: Netflow config for MX204 [ In reply to ]
Hi,

Got things working in the end, thanks everyone for their help and patience.

Also thanks @John Kristoff especially for the template at
https://github.com/jtkristoff/junos/blob/master/flows.md it was
very helpful.

As I suspected I was doing something dumb, or rather a combination of the
dumb.

1. I had initially tried to use fxp0 as my export interface, it seems this
is not supported.
2. I then tried to use an interface in a VRF to export the flows, I think
some additional config may be required for this (
https://kb.juniper.net/InfoCenter/index?page=content&id=KB28958).
3. It's always MTU... I suspect that in one of my various config attempts
flows were being sent, but dropped because of the 1500 MTU on the flow
collector and a larger MTU on the MX204 interface generating them.

In the end I set up a new link-net on a new VLAN interface attached to
inet.0 between the MX204 and the netflow collector, set the inet MTU to
1500, and everything started working.
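
(For anyone hitting the same thing, the fix amounted to something like
the below, with made-up interface and addressing; the key part is the
explicit 1500-byte IP MTU facing the collector:)

interfaces {
    xe-0/1/6 {
        unit 500 {
            vlan-id 500;
            family inet {
                mtu 1500;
                address 192.0.2.1/30;
            }
        }
    }
}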


Again thanks everyone for the help, I now have some really interesting flow
stats to examine :)



On Fri, 10 Apr 2020 at 07:10, John Kristoff <jtk@depaul.edu> wrote:

> On Thu, 9 Apr 2020 06:20:00 +0000
> Liam Farr <liam@maxumdata.com> wrote:
>
> > However I am getting export packet failures.
>
> Some loss of flows being exported may be unavoidable depending on
> your configuration and environment. If you want to see fewer errors
> you may just have to sample less frequently. The numbers reported in
> your "accounting errors" don't seem that large.
>
> In my repo page where the example config is from you'll see a couple of
> images at the bottom that show the difference between the two modes. I
> was aware of the flex mode when I originally did this. I think at the
> time I was under the impression that setting the memory pools manually
> offered some desirable predictability.
>
> Looking back at my notes, I think it was when Juniper TAC told me this
> that led me to that conclusion: "And regarding flex-flow-sizing; this
> configuration results in a first-come-first-serve creation of flows.
> Whichever flow comes first, that is allowed to occupy the flow-table if
> there is space in the table. Otherwise, the flow is dropped and an
> error count is created." Rightly or wrongly, I recall seeming to want
> to ensure some amount of reasonable memory for both v4 and v6 flows.
>
> John
>


--
Kind Regards


Liam Farr

Maxum Data
+64-9-950-5302

Re: Netflow config for MX204 [ In reply to ]
On 11/Apr/20 08:04, Nick Schmalenberger via juniper-nsp wrote:
> I had the same issue with first trying to export over fxp0, then
> trying with my routing instance, and I ended up making a static
> route in inet6.0 with next-table over to the instance table where
> the route into the LAN for my elastiflow collector is. Flow
> export over IPv6 does also seem to work.

We just export flows in-band. Just seems simpler, and has been reliable
for close to 10 years.

Mark.

Re: Netflow config for MX204 [ In reply to ]
On Sun, 12 Apr 2020 at 03:53, Mark Tinka <mark.tinka@seacom.mu> wrote:

> On 11/Apr/20 08:04, Nick Schmalenberger via juniper-nsp wrote:
> > I had the same issue with first trying to export over fxp0, then
>
> We just export flows in-band. Just seems simpler, and has been reliable
> for close to 10 years.

In-band is right; Trio can export the flows itself. You will kill your
performance if you do non-revenue port export.

In my mind JNPR non-revenue ports have no use-case. They are dangerous
with no utility. Cisco is much better here, as they offer true OOB
non-revenue ports. A JNPR non-revenue port is a convenient way to
quickly break a lot of your network at the same time, as they entirely
fate-share the control-plane. Cisco has non-revenue ports with their
own isolated management-plane, so the state of your control-plane will
not impact the management-plane, and vice versa.
I think Nokia has true OOB ports too. We should start pushing JNPR to
jump on board. RS232 is not true OOB either, as it fate-shares the
control-plane, but it's a lot better than JNPR non-revenue ports:
breaking the system is a lot harder from there, and the break is a HW
interrupt, which means you can potentially reload your host from RS232
even if the host Linux is halted/non-responsive, though that requires
non-standard, hidden config.

--
++ytti
Re: Netflow config for MX204 [ In reply to ]
On 12 April 2020 11:43 +03, Saku Ytti wrote:

>> We just export flows in-band. Just seems simpler, and has been reliable
>> for close to 10 years.
>
> in-band is right, Trio can export the flow itself, you will kill your
> performance if you do non-revenue port export.

What's a "non-revenue port"?
--
When in doubt, tell the truth.
-- Mark Twain
Re: Netflow config for MX204 [ In reply to ]
On Sun, 12 Apr 2020 at 12:13, Vincent Bernat <bernat@luffy.cx> wrote:

> What's a "non-revenue port"?

fxp0, em0, mgmtmethernet0, etc. Any port not hanging off of forwarding hardware.

--
++ytti
Re: Netflow config for MX204 [ In reply to ]
On 14/Apr/20 22:50, Nick Schmalenberger via juniper-nsp wrote:
> I am exporting in-band, the next-table is so the default table
> can access a port in my routing instance that has the in-band
> ports.

Well, we don't use VRFs for the Internet table. For us, it's always
seemed like an overly complex solution to routing/forwarding, but as
I've said before, it works for many others. So...


> What flow collector are you using? Any tips on the
> under-counting? Thanks!

We are using Kentik.

Haven't experienced any issues so far. My advice would be to make sure
both your SNMP and Netflow are working well together, as accuracy is
then improved.

Mark.
