Mailing List Archive

10Gbit Line Rate performance on Dell R520
Hi All,

We have a Dell R520 with a single processor (and one empty slot) and all the associated RAM slots filled.
numademo shows we can do 14,000 MB/s, which is apparently a little short of the 16,000 MB/s required for line-rate 10Gbit PF_RING NTop analysis.
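(As a back-of-the-envelope check, assuming the 16,000 MB/s figure is a PF_RING rule of thumb rather than a hard spec:

  10 Gbit/s / 8 bits  =  1,250 MB/s on the wire
  16,000 / 1,250      =  ~13x the wire rate

so presumably each captured byte crosses the memory bus on the order of a dozen times during capture and analysis.)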

Is there anything else we can do with the hardware to up potential performance?

We have previously installed NTop with PF_RING in a VM on a dedicated R710 (dual proc, 24GB RAM) and could only manage 4Gbit/s at best.
In the case of the R520, we don’t have to worry about NUMA allocation: there is only one CPU, all the correct RAM slots are filled, and the PCIe slot the NIC is using is directly connected to the populated CPU.
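A quick way to double-check the locality, assuming numactl is installed and eth4 is the capture port (on a single-socket box numa_node should read 0 or -1):

numactl --hardware                        # should list a single node holding all the RAM
cat /sys/class/net/eth4/device/numa_node  # NUMA node of the NIC's PCIe slot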

Would it be worth installing NTop on bare metal?

Regards,

Tim


_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Hi Tim
how many RAM slots did you fill in practice? “All” or “all channels”?
Please run n2membenchmark, included in the n2disk package, which is our benchmarking tool, and let us see some output.
Are you running a VM on this R520 or a native OS?

Alfredo

> On 5 Jan 2017, at 14:37, Tim Raphael <raphael.timothy@gmail.com> wrote:
>
> Hi All,
>
> We have a Dell R520 with a single processor (and one empty slot) and all the associated RAM slots filled.
> numademo shows we can do 14,000MB/s which is apparently a little short of the 16,000MB/s required for line rate 10Gbit PF_RING NTop analysis.
>
> Is there anything else we can do with the hardware to up potential performance?
>
> We have previously installed NTop with PF_RING on a VM on a dedicated R710 (dual Proc, 24GB RAM) and could only do 4Gbit/s tops.
> In the case of the R520, we don’t have to worry about NUMA allocation as there is only one CPU, all the correct RAM slots are filled and the PCIe slot the NIC is using is directly connected to the CPU filled.
>
> Would it be worth installing NTop on bare metal?
>
> Regards,
>
> Tim
>
>
> _______________________________________________
> Ntop mailing list
> Ntop@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop

_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Thanks Alfredo,

The installed NTop application is currently in a VM; however, the numademo numbers were generated via a live CD (an easy way to test performance without flattening the host).
The R520 has 12 RAM slots; we’ve filled the 6 (in triple-channel configuration) associated with the populated processor.
I’ll have a crack at the n2membenchmark tool and let you know.
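For the record, the slot population can also be confirmed from the live CD, assuming dmidecode is available:

sudo dmidecode -t memory | grep -E 'Locator|Size'   # populated slots show a size, empty ones "No Module Installed"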

Cheers,

Tim




> On 5 Jan 2017, at 10:12 pm, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> Hi Tim
> how many RAM slots did you fill in practice? “All” or “all channels”?
> Please run n2membenchmark, included in the n2disk package, which is our benchmarking tool and let us see some output.
> Are you running a VM on this R520 or a native OS?
>
> Alfredo
>
>> On 5 Jan 2017, at 14:37, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>
>> Hi All,
>>
>> We have a Dell R520 with a single processor (and one empty slot) and all the associated RAM slots filled.
>> numademo shows we can do 14,000MB/s which is apparently a little short of the 16,000MB/s required for line rate 10Gbit PF_RING NTop analysis.
>>
>> Is there anything else we can do with the hardware to up potential performance?
>>
>> We have previously installed NTop with PF_RING on a VM on a dedicated R710 (dual Proc, 24GB RAM) and could only do 4Gbit/s tops.
>> In the case of the R520, we don’t have to worry about NUMA allocation as there is only one CPU, all the correct RAM slots are filled and the PCIe slot the NIC is using is directly connected to the CPU filled.
>>
>> Would it be worth installing NTop on bare metal?
>>
>> Regards,
>>
>> Tim
>>
>>
>> _______________________________________________
>> Ntop mailing list
>> Ntop@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop
>
> _______________________________________________
> Ntop mailing list
> Ntop@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop

_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Hi All,

These are our n2membenchmark results:

user@mon03:~$ sudo n2membenchmark
43368699.838202 pps/22.204774 Gbps
42639209.533752 pps/21.831275 Gbps
42501135.455717 pps/21.760581 Gbps
43745856.911580 pps/22.397879 Gbps
35157099.401825 pps/18.000434 Gbps
32567529.758572 pps/16.674576 Gbps
43278821.125976 pps/22.158756 Gbps
42753771.110469 pps/21.889931 Gbps

This is on bare metal with ~32GB RAM and 12 logical cores (a hex-core CPU with HT enabled).

I plan on running ~8 virtual NIC queues to keep 4 cores free. Thoughts?
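The queue count would be pinned when loading the driver; a sketch, assuming the ZC ixgbe driver shipped with PF_RING and a single in-use port on the card:

sudo rmmod ixgbe
sudo insmod ./ixgbe.ko RSS=8    # 8 RSS queues instead of the default of one per core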

- Tim



> On 5 Jan 2017, at 10:18 pm, Tim Raphael <raphael.timothy@gmail.com> wrote:
>
> Thanks Alfredo,
>
> The installed NTop application is currently in a VM however the numademo numbers were generated via a live CD (an easy way to test performance without flattening the host).
> The R520 has 12 RAM slots, we’re filled the 6 (in triple-channel configuration) associated with the filled processor.
> I’ll have a crack at the n2membenchmark tool and let you know.
>
> Cheers,
>
> Tim
>
>
>
>
>> On 5 Jan 2017, at 10:12 pm, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>
>> Hi Tim
>> how many RAM slots did you fill in practice? “All” or “all channels”?
>> Please run n2membenchmark, included in the n2disk package, which is our benchmarking tool and let us see some output.
>> Are you running a VM on this R520 or a native OS?
>>
>> Alfredo
>>
>>> On 5 Jan 2017, at 14:37, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>>
>>> Hi All,
>>>
>>> We have a Dell R520 with a single processor (and one empty slot) and all the associated RAM slots filled.
>>> numademo shows we can do 14,000MB/s which is apparently a little short of the 16,000MB/s required for line rate 10Gbit PF_RING NTop analysis.
>>>
>>> Is there anything else we can do with the hardware to up potential performance?
>>>
>>> We have previously installed NTop with PF_RING on a VM on a dedicated R710 (dual Proc, 24GB RAM) and could only do 4Gbit/s tops.
>>> In the case of the R520, we don’t have to worry about NUMA allocation as there is only one CPU, all the correct RAM slots are filled and the PCIe slot the NIC is using is directly connected to the CPU filled.
>>>
>>> Would it be worth installing NTop on bare metal?
>>>
>>> Regards,
>>>
>>> Tim
>>>
>>>
>>> _______________________________________________
>>> Ntop mailing list
>>> Ntop@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>
>> _______________________________________________
>> Ntop mailing list
>> Ntop@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop
>

_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Hi Tim
I just realised you are using ntop (I guess you mean ntopng) for processing traffic; I thought you were running performance tests with PF_RING.
Please provide a bit more info about your configuration:
- ntopng version
- ntopng configuration
- traffic rate (pps and Gbps)

Best Regards
Alfredo

> On 8 Jan 2017, at 23:29, Tim Raphael <raphael.timothy@gmail.com> wrote:
>
> Hi All,
>
> These are our n2membenchmarks:
>
> user@mon03:~$ sudo n2membenchmark
> 43368699.838202 pps/22.204774 Gbps
> 42639209.533752 pps/21.831275 Gbps
> 42501135.455717 pps/21.760581 Gbps
> 43745856.911580 pps/22.397879 Gbps
> 35157099.401825 pps/18.000434 Gbps
> 32567529.758572 pps/16.674576 Gbps
> 43278821.125976 pps/22.158756 Gbps
> 42753771.110469 pps/21.889931 Gbps
>
> This is on bare metal with ~32GB RAM and 12 Cores on a Hex-core with HT enabled.
>
> I plan on running ~ 8 Virtual NIC queues to keep 4 cores free - thoughts?
>
> - Tim
>
>
>
>> On 5 Jan 2017, at 10:18 pm, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>
>> Thanks Alfredo,
>>
>> The installed NTop application is currently in a VM however the numademo numbers were generated via a live CD (an easy way to test performance without flattening the host).
>> The R520 has 12 RAM slots, we’re filled the 6 (in triple-channel configuration) associated with the filled processor.
>> I’ll have a crack at the n2membenchmark tool and let you know.
>>
>> Cheers,
>>
>> Tim
>>
>>
>>
>>
>>> On 5 Jan 2017, at 10:12 pm, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>
>>> Hi Tim
>>> how many RAM slots did you fill in practice? “All” or “all channels”?
>>> Please run n2membenchmark, included in the n2disk package, which is our benchmarking tool and let us see some output.
>>> Are you running a VM on this R520 or a native OS?
>>>
>>> Alfredo
>>>
>>>> On 5 Jan 2017, at 14:37, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> We have a Dell R520 with a single processor (and one empty slot) and all the associated RAM slots filled.
>>>> numademo shows we can do 14,000MB/s which is apparently a little short of the 16,000MB/s required for line rate 10Gbit PF_RING NTop analysis.
>>>>
>>>> Is there anything else we can do with the hardware to up potential performance?
>>>>
>>>> We have previously installed NTop with PF_RING on a VM on a dedicated R710 (dual Proc, 24GB RAM) and could only do 4Gbit/s tops.
>>>> In the case of the R520, we don’t have to worry about NUMA allocation as there is only one CPU, all the correct RAM slots are filled and the PCIe slot the NIC is using is directly connected to the CPU filled.
>>>>
>>>> Would it be worth installing NTop on bare metal?
>>>>
>>>> Regards,
>>>>
>>>> Tim
>>>>
>>>>
>>>> _______________________________________________
>>>> Ntop mailing list
>>>> Ntop@listgateway.unipi.it
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>>
>>> _______________________________________________
>>> Ntop mailing list
>>> Ntop@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>
>
> _______________________________________________
> Ntop mailing list
> Ntop@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop

_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Hi Alfredo,

This is our current version:

v.2.5.170109 [Enterprise/Professional Edition]
Pro rev: r870
Built on: Ubuntu 16.04.1 LTS

We are likely to see up to 8-9Gbit/sec of traffic from ~1000 hosts.

NTopng configuration:

user@mon03:~$ cat /etc/ntopng/ntopng.conf
-w=3000
-W=0
-g=-1
-F=es;flows;nprobe-%Y.%m.%d;http://localhost:9200/_bulk;
-m="138.0.0.0/22"
-d=/storage/ntopng
-G=/var/run/ntopng.pid
-U=root
-i=zc:eth4@0
-i=zc:eth4@1
-i=zc:eth4@2
-i=zc:eth4@3
-i=zc:eth4@4
-i=zc:eth4@5
-i=zc:eth4@6
-i=zc:eth4@7
-i=view:zc:eth4@0,zc:eth4@1,zc:eth4@2,zc:eth4@3,zc:eth4@4,zc:eth4@5,zc:eth4@6,zc:eth4@7
--online-license-check



I also want to confirm that PF_RING ZC is working correctly:

user@mon03:~$ cat /proc/net/pf_ring/info
PF_RING Version : 6.5.0 (dev:b07e3297700d70c836a626beee697c8fc9fad019)
Total rings : 9

Standard (non ZC) Options
Ring slots : 4096
Slot version : 16
Capture TX : Yes [RX+TX]
IP Defragment : No
Socket Mode : Standard
Cluster Fragment Queue : 0
Cluster Fragment Discard : 0


user@mon03:~$ cat /proc/net/pf_ring/dev/eth4/info
Name: eth4
Index: 8
Address: 00:1B:21:A4:86:10
Polling Mode: NAPI/ZC
Type: Ethernet
Family: Intel ixgbe 82599
TX Queues: 12
RX Queues: 12
Num RX Slots: 32768
Num TX Slots: 32768


Does the above indicate the device is actually running in ZC mode even though the polling mode says “NAPI/ZC”?
The documentation seems to be out of date with regard to confirming the NIC is actually running in ZC mode. A regular tcpdump on eth4 shows no packets (I assume this is correct, as the kernel shouldn’t be receiving packets), but the ifconfig counters for eth4 are still increasing. Is this expected when the packets shouldn’t be seen by the kernel?

Also, with the change from an 8-core VM to a 12-core bare metal host, PF_RING is now using 12 queues. Is it the default behaviour to match the queue count to the number of processor cores?
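As a sanity check, I could attach one of the PF_RING demo apps to a single queue with the zc: prefix, assuming they are installed alongside the driver:

sudo pfcount -i zc:eth4@0    # reports capture stats; the open fails if ZC isn't actually available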

Regards,

Tim




> On 10 Jan 2017, at 4:19 am, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> Hi Tim
> I just realised you are using ntop (I guess you mean ntopng) for processing traffic, I thought you were running performance tests with PF_RING,
> please provide a few more info about your configuration:
> - ntopng version
> - ntopng configuration
> - traffic rate (pps and gbps)
>
> Best Regards
> Alfredo
>
>> On 8 Jan 2017, at 23:29, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>
>> Hi All,
>>
>> These are our n2membenchmarks:
>>
>> user@mon03:~$ sudo n2membenchmark
>> 43368699.838202 pps/22.204774 Gbps
>> 42639209.533752 pps/21.831275 Gbps
>> 42501135.455717 pps/21.760581 Gbps
>> 43745856.911580 pps/22.397879 Gbps
>> 35157099.401825 pps/18.000434 Gbps
>> 32567529.758572 pps/16.674576 Gbps
>> 43278821.125976 pps/22.158756 Gbps
>> 42753771.110469 pps/21.889931 Gbps
>>
>> This is on bare metal with ~32GB RAM and 12 Cores on a Hex-core with HT enabled.
>>
>> I plan on running ~ 8 Virtual NIC queues to keep 4 cores free - thoughts?
>>
>> - Tim
>>
>>
>>
>>> On 5 Jan 2017, at 10:18 pm, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>>
>>> Thanks Alfredo,
>>>
>>> The installed NTop application is currently in a VM however the numademo numbers were generated via a live CD (an easy way to test performance without flattening the host).
>>> The R520 has 12 RAM slots, we’re filled the 6 (in triple-channel configuration) associated with the filled processor.
>>> I’ll have a crack at the n2membenchmark tool and let you know.
>>>
>>> Cheers,
>>>
>>> Tim
>>>
>>>
>>>
>>>
>>>> On 5 Jan 2017, at 10:12 pm, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>>
>>>> Hi Tim
>>>> how many RAM slots did you fill in practice? “All” or “all channels”?
>>>> Please run n2membenchmark, included in the n2disk package, which is our benchmarking tool and let us see some output.
>>>> Are you running a VM on this R520 or a native OS?
>>>>
>>>> Alfredo
>>>>
>>>>> On 5 Jan 2017, at 14:37, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>>>>
>>>>> Hi All,
>>>>>
>>>>> We have a Dell R520 with a single processor (and one empty slot) and all the associated RAM slots filled.
>>>>> numademo shows we can do 14,000MB/s which is apparently a little short of the 16,000MB/s required for line rate 10Gbit PF_RING NTop analysis.
>>>>>
>>>>> Is there anything else we can do with the hardware to up potential performance?
>>>>>
>>>>> We have previously installed NTop with PF_RING on a VM on a dedicated R710 (dual Proc, 24GB RAM) and could only do 4Gbit/s tops.
>>>>> In the case of the R520, we don’t have to worry about NUMA allocation as there is only one CPU, all the correct RAM slots are filled and the PCIe slot the NIC is using is directly connected to the CPU filled.
>>>>>
>>>>> Would it be worth installing NTop on bare metal?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Tim
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Ntop mailing list
>>>>> Ntop@listgateway.unipi.it
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>>>
>>>> _______________________________________________
>>>> Ntop mailing list
>>>> Ntop@listgateway.unipi.it
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>>
>>
>> _______________________________________________
>> Ntop mailing list
>> Ntop@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop
>
> _______________________________________________
> Ntop mailing list
> Ntop@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop

_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Hi Tim,
can you please check the core affinity

[--core-affinity|-g] <cpu core ids> | Bind the capture/processing threads to
| specific CPU cores (specified as a comma-
| separated list)

and give every ntopng interface a different core (better if physical), and report back?
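For example, replacing the -g=-1 line in the config you posted (the mapping below follows your -i order and is only a guess: cores 0-7 for the eight queues, core 8 for the view interface):

-g=0,1,2,3,4,5,6,7,8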

Thanks Luca


> On 9 Jan 2017, at 16:45, Tim Raphael <raphael.timothy@gmail.com> wrote:
>
> Hi Alfredo,
>
> This is our current version:
>
> v.2.5.170109 [Enterprise/Professional Edition]
> Pro rev: r870
> Built on: Ubuntu 16.04.1 LTS
>
> We are likely to see up to 8-9Gbit/sec off traffic from ~1000 hosts.
>
> NTopng configuration:
>
> user@mon03:~$ cat /etc/ntopng/ntopng.conf
> -w=3000
> -W=0
> -g=-1
> -F=es;flows;nprobe-%Y.%m.%d;http://localhost:9200/_bulk;
> -m="138.0.0.0/22"
> -d=/storage/ntopng
> -G=/var/run/ntopng.pid
> -U=root
> -i=zc:eth4@0
> -i=zc:eth4@1
> -i=zc:eth4@2
> -i=zc:eth4@3
> -i=zc:eth4@4
> -i=zc:eth4@5
> -i=zc:eth4@6
> -i=zc:eth4@7
> -i=view:zc:eth4@0,zc:eth4@1,zc:eth4@2,zc:eth4@3,zc:eth4@4,zc:eth4@5,zc:eth4@6,zc:eth4@7
> --online-license-check
>
>
>
> I also want to confirm that PF_RING ZC is working correctly:
>
> user@mon03:~$ cat /proc/net/pf_ring/info
> PF_RING Version : 6.5.0 (dev:b07e3297700d70c836a626beee697c8fc9fad019)
> Total rings : 9
>
> Standard (non ZC) Options
> Ring slots : 4096
> Slot version : 16
> Capture TX : Yes [RX+TX]
> IP Defragment : No
> Socket Mode : Standard
> Cluster Fragment Queue : 0
> Cluster Fragment Discard : 0
>
>
> user@mon03:~$ cat /proc/net/pf_ring/dev/eth4/info
> Name: eth4
> Index: 8
> Address: 00:1B:21:A4:86:10
> Polling Mode: NAPI/ZC
> Type: Ethernet
> Family: Intel ixgbe 82599
> TX Queues: 12
> RX Queues: 12
> Num RX Slots: 32768
> Num TX Slots: 32768
>
>
> Does the above indicate the device is actually running in ZC mode even though the polling mode says “NAPI/ZC”?
> The documentation seems to be out of date with regard to confirming the NIC is actually running in ZC mode. A regular TCPDump on eth4 shows no packets (i assume this is correct as the kernel shouldn’t be receiving packets) but ifconfig counters for eth4 seem to still be increasing - is this correct when the packets shouldn’t be seen by the kernel?
>
> Also, with the change from an 8-core VM to a 12-core bare mental hosts, PF_RING is now using 12 Queues, is this the default behaviour to increase the queues to the number of processor cores?
>
> Regards,
>
> Tim
>
>
>
>
>> On 10 Jan 2017, at 4:19 am, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>
>> Hi Tim
>> I just realised you are using ntop (I guess you mean ntopng) for processing traffic, I thought you were running performance tests with PF_RING,
>> please provide a few more info about your configuration:
>> - ntopng version
>> - ntopng configuration
>> - traffic rate (pps and gbps)
>>
>> Best Regards
>> Alfredo
>>
>>> On 8 Jan 2017, at 23:29, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>>
>>> Hi All,
>>>
>>> These are our n2membenchmarks:
>>>
>>> user@mon03:~$ sudo n2membenchmark
>>> 43368699.838202 pps/22.204774 Gbps
>>> 42639209.533752 pps/21.831275 Gbps
>>> 42501135.455717 pps/21.760581 Gbps
>>> 43745856.911580 pps/22.397879 Gbps
>>> 35157099.401825 pps/18.000434 Gbps
>>> 32567529.758572 pps/16.674576 Gbps
>>> 43278821.125976 pps/22.158756 Gbps
>>> 42753771.110469 pps/21.889931 Gbps
>>>
>>> This is on bare metal with ~32GB RAM and 12 Cores on a Hex-core with HT enabled.
>>>
>>> I plan on running ~ 8 Virtual NIC queues to keep 4 cores free - thoughts?
>>>
>>> - Tim
>>>
>>>
>>>
>>>> On 5 Jan 2017, at 10:18 pm, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>>>
>>>> Thanks Alfredo,
>>>>
>>>> The installed NTop application is currently in a VM however the numademo numbers were generated via a live CD (an easy way to test performance without flattening the host).
>>>> The R520 has 12 RAM slots, we’re filled the 6 (in triple-channel configuration) associated with the filled processor.
>>>> I’ll have a crack at the n2membenchmark tool and let you know.
>>>>
>>>> Cheers,
>>>>
>>>> Tim
>>>>
>>>>
>>>>
>>>>
>>>>> On 5 Jan 2017, at 10:12 pm, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>>>
>>>>> Hi Tim
>>>>> how many RAM slots did you fill in practice? “All” or “all channels”?
>>>>> Please run n2membenchmark, included in the n2disk package, which is our benchmarking tool and let us see some output.
>>>>> Are you running a VM on this R520 or a native OS?
>>>>>
>>>>> Alfredo
>>>>>
>>>>>> On 5 Jan 2017, at 14:37, Tim Raphael <raphael.timothy@gmail.com> wrote:
>>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> We have a Dell R520 with a single processor (and one empty slot) and all the associated RAM slots filled.
>>>>>> numademo shows we can do 14,000MB/s which is apparently a little short of the 16,000MB/s required for line rate 10Gbit PF_RING NTop analysis.
>>>>>>
>>>>>> Is there anything else we can do with the hardware to up potential performance?
>>>>>>
>>>>>> We have previously installed NTop with PF_RING on a VM on a dedicated R710 (dual Proc, 24GB RAM) and could only do 4Gbit/s tops.
>>>>>> In the case of the R520, we don’t have to worry about NUMA allocation as there is only one CPU, all the correct RAM slots are filled and the PCIe slot the NIC is using is directly connected to the CPU filled.
>>>>>>
>>>>>> Would it be worth installing NTop on bare metal?
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Tim
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Ntop mailing list
>>>>>> Ntop@listgateway.unipi.it
>>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>>>>
>>>>> _______________________________________________
>>>>> Ntop mailing list
>>>>> Ntop@listgateway.unipi.it
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>>>
>>>
>>> _______________________________________________
>>> Ntop mailing list
>>> Ntop@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop
>>
>> _______________________________________________
>> Ntop mailing list
>> Ntop@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop
>
> _______________________________________________
> Ntop mailing list
> Ntop@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Hi Tim,

I'm currently also planning a similar installation like yours.

Regarding your question:

On 10.01.2017 01:45, Tim Raphael wrote:
> Does the above indicate the device is actually running in ZC mode even though the polling mode says “NAPI/ZC”?

I've found some slides from the ntop users meeting which say that
"NAPI/ZC" means it's working correctly.

http://www.ntop.org/wp-content/uploads/2016/10/ntop-Users-Meeting-Sharkfest-2016-[pfring].pdf
--> slide 19

Well, they say "ZC/NAPI". Not sure if the order matters.
When I have my system running I'll let you know what mine says.


Cheers
Robert
_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop
Re: 10Gbit Line Rate performance on Dell R520
Hi Robert and Tim,
NAPI/ZC means that the ZC driver has been loaded; in any case, as long as you use the zc: prefix and the application does not complain, you are good.

Alfredo

> On 13 Jan 2017, at 03:05, Finze, Robert <robert.finze@uni-tuebingen.de> wrote:
>
> Hi Tim,
>
> I'm currently also planning a similar installation like yours.
>
> Regarding your question:
>
> On 10.01.2017 01:45, Tim Raphael wrote:
>> Does the above indicate the device is actually running in ZC mode even though the polling mode says “NAPI/ZC”?
>
> I've found some slides from the ntop users meeting which say that
> "NAPI/ZC" means it's working correctly.
>
> http://www.ntop.org/wp-content/uploads/2016/10/ntop-Users-Meeting-Sharkfest-2016-[pfring].pdf
> --> #19
>
> Well. They say "ZC/NAPI". Not sure if the order matters.
> When I have my system running I'll tell what mine says.
>
>
> Cheers
> Robert
> _______________________________________________
> Ntop mailing list
> Ntop@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop

_______________________________________________
Ntop mailing list
Ntop@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop