Mailing List Archive

Only 10Gbps in one direction?
Hello,

My first run-through with PF_RING. :)

I’ve compiled the ZC variant of PF_RING in an attempt to get line rate between two of the ports, which are looped together.

The NIC is a 6-port, 82599-based card.

root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# ./load_driver.sh
irqbalance: no process found
Configuring eth6
IFACE CORE MASK -> FILE
=======================
eth6 0 1 -> /proc/irq/87/smp_affinity
Configuring eth7
IFACE CORE MASK -> FILE
=======================
eth7 0 1 -> /proc/irq/89/smp_affinity
Configuring eth2
IFACE CORE MASK -> FILE
=======================
eth2 0 1 -> /proc/irq/96/smp_affinity
eth2 1 2 -> /proc/irq/97/smp_affinity
eth2 2 4 -> /proc/irq/98/smp_affinity
eth2 3 8 -> /proc/irq/99/smp_affinity
eth2 4 10 -> /proc/irq/100/smp_affinity
eth2 5 20 -> /proc/irq/101/smp_affinity
eth2 6 40 -> /proc/irq/102/smp_affinity
eth2 7 80 -> /proc/irq/103/smp_affinity
eth2 8 100 -> /proc/irq/104/smp_affinity
eth2 9 200 -> /proc/irq/105/smp_affinity
eth2 10 400 -> /proc/irq/106/smp_affinity
eth2 11 800 -> /proc/irq/107/smp_affinity
Configuring eth3
IFACE CORE MASK -> FILE
=======================
eth3 0 1 -> /proc/irq/109/smp_affinity
eth3 1 2 -> /proc/irq/110/smp_affinity
eth3 2 4 -> /proc/irq/111/smp_affinity
eth3 3 8 -> /proc/irq/112/smp_affinity
eth3 4 10 -> /proc/irq/113/smp_affinity
eth3 5 20 -> /proc/irq/114/smp_affinity
eth3 6 40 -> /proc/irq/115/smp_affinity
eth3 7 80 -> /proc/irq/116/smp_affinity
eth3 8 100 -> /proc/irq/117/smp_affinity
eth3 9 200 -> /proc/irq/118/smp_affinity
eth3 10 400 -> /proc/irq/119/smp_affinity
eth3 11 800 -> /proc/irq/120/smp_affinity
Configuring eth4
IFACE CORE MASK -> FILE
=======================
eth4 0 1 -> /proc/irq/91/smp_affinity
Configuring eth5
IFACE CORE MASK -> FILE
=======================
eth5 0 1 -> /proc/irq/93/smp_affinity
root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
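
For reference, the MASK column above is the hex CPU bitmask that load_driver.sh writes into each IRQ's smp_affinity file, so queue N ends up pinned to core N. A minimal sanity check, assuming the IRQ numbers shown above:

# core 4 corresponds to hex mask 10 (1 << 4); compare with what the script wrote for eth2 queue 4
printf '%x\n' $((1 << 4))
cat /proc/irq/100/smp_affinity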

My issue is that I only get 10Gbps in one direction. If zc:eth7 is the sender, zc:eth2 only sees Rx @ 0.54Gbps:

root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth7 -c 2

=========================
Absolute Stats: 111'707'057 pkts - 9'383'392'788 bytes
Actual Stats: 13'983'520.42 pps - 9.40 Gbps [1133946996 bytes / 1.0 sec]
=========================

root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth2 -c 1

=========================
Absolute Stats: 5'699'096 pkts (33'135'445 drops) - 478'724'064 bytes
Actual Stats: 802'982.00 pps (4'629'316.93 drops) - 0.54 Gbps
=========================

But if zc:eth2 is the sender, zc:eth7 sees rates more in line with what zc:eth2 is sending:

root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth2 -c 2

=========================
Absolute Stats: 28'285'274 pkts - 2'375'963'016 bytes
Actual Stats: 14'114'355.24 pps - 9.48 Gbps [1185800280 bytes / 1.0 sec]
=========================

root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth7 -c 1

=========================
Absolute Stats: 28'007'460 pkts (0 drops) - 2'352'626'640 bytes
Actual Stats: 14'044'642.54 pps (0.00 drops) - 9.44 Gbps
=========================

I’ve done some reading, but I haven’t found anything that points to a possible reason for this. Does anyone have any thoughts?

Thanks!
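
One quick check when Rx lags this far behind Tx is whether the missing packets ever reach the receiving port at all, or are lost before the capture. A loose sketch with ethtool (counter names vary by driver, hence the broad grep):

# hardware counters on the receiving port
ethtool -S eth2 | grep -iE 'rx_packets|drop|miss|error'
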
Re: Only 10Gbps in one direction? [ In reply to ]
My gosh! I’m so sorry for the way this is formatted. My mailer insists that this message was sent in plain-text, not whatever the heck this is!

I’m sorry this is so impossible to read :(

Re: Only 10Gbps in one direction? [ In reply to ]
Hi Jason
no problem, I was able to read something :-)
Please check that both interfaces are configured with a single RSS queue (take a look at /proc/net/pf_ring/dev/eth2/info).

Alfredo
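
A minimal way to check this for every port at once, assuming the per-device info files under /proc/net/pf_ring/dev that Alfredo mentions:

# print name and queue counts for each interface known to PF_RING
for f in /proc/net/pf_ring/dev/*/info; do
    echo "== $f"
    grep -E 'Name|Queues' "$f"
done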

Re: Only 10Gbps in one direction? [ In reply to ]
Thanks Alfredo,

If I’m reading this correctly, eth2 has 12 Tx and Rx queues while eth7 has 1 each:

root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth2/info
Name: eth2
Index: 86
Address: 00:0C:BD:08:80:98
Polling Mode: NAPI/ZC
Type: Ethernet
Family: Intel ixgbe 82599
Max # TX Queues: 12
# Used RX Queues: 12
Num RX Slots: 32768
Num TX Slots: 32768
root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth7/info
Name: eth7
Index: 83
Address: 00:0C:BD:08:80:9D
Polling Mode: NAPI/ZC
Type: Ethernet
Family: Intel ixgbe 82599
Max # TX Queues: 1
# Used RX Queues: 1
Num RX Slots: 32768
Num TX Slots: 32768
root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#

load_driver.sh seems to be set to disable multi-queue, so I’m not quite sure how it got this way or how to correct it:

root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# grep -i rss load_driver.sh
#insmod ./ixgbe.ko RSS=0,0,0,0
insmod ./ixgbe.ko RSS=1,1,1,1
#insmod ./ixgbe.ko RSS=1,1,1,1 low_latency_tx=1
#insmod ./ixgbe.ko MQ=1,1,1,1 RSS=16,16,16,16
#insmod ./ixgbe.ko RSS=1,1,1,1 FdirPballoc=3,3,3,3
#insmod ./ixgbe.ko RSS=1,1,1,1 numa_cpu_affinity=0,0,0,0
root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
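
For what it's worth, RSS here looks like a per-port array parameter: a four-entry RSS=1,1,1,1 covers only the first four ports the driver enumerates, and the remaining two appear to fall back to the driver default of one queue per logical core, which would explain the 12 queues above. The parameter description can be checked with modinfo (a sketch, assuming the ZC ixgbe.ko in this directory):

# show the RSS module parameter description for this build
modinfo ./ixgbe.ko | grep RSS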

Re: Only 10Gbps in one direction? [ In reply to ]
Please use RSS=1,1,1,1,1,1 as you have 6 ports.

Alfredo
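
A minimal reload sketch for applying that, assuming the active insmod line is exactly the one shown in the grep above and that nothing is currently holding the interfaces open:

# stop any zsend/zcount first so the module can be unloaded
sed -i 's|^insmod ./ixgbe.ko RSS=1,1,1,1$|insmod ./ixgbe.ko RSS=1,1,1,1,1,1|' load_driver.sh
rmmod ixgbe
./load_driver.sh

Re-running load_driver.sh (rather than a bare insmod) keeps the per-queue IRQ affinity setup shown earlier in the thread.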

Re: Only 10Gbps in one direction? [ In reply to ]
Oh, that makes perfect sense! Thanks!

Re: Only 10Gbps in one direction? [ In reply to ]
I don’t seem to be able to transmit the full 14.88Mpps needed for full line rate. Is there anything else I can tweak? I’ve added the -a option, which has gotten me a bit closer, but not all the way.

My CPU is a 6-core Intel(R) Xeon(R) E5-2620 v3 @ 2.40GHz.

Thanks once again!
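
Not sure it applies here, but one thing that often costs the last few hundred Kpps on a software sender is CPU frequency scaling. A minimal check, assuming the cpufreq sysfs interface is present and cpupower is installed:

# see which governor the cores are using
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# pin them to the performance governor for the test
cpupower frequency-set -g performance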

Re: Only 10Gbps in one direction? [ In reply to ]
Did you bind pfsend/zsend to a CPU core?

Alfredo
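
For reference, a sketch of what that binding looks like with the ZC examples; -g is the core-affinity option as I read the zsend/zcount help, and -a enables active polling (worth confirming with ./zsend -h on this build):

# sender pinned to core 1, receiver pinned to core 2 (separate physical cores)
./zsend -i zc:eth2 -c 2 -g 1 -a
./zcount -i zc:eth7 -c 1 -g 2 -a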

Re: Only 10Gbps in one direction? [ In reply to ]
I did: zsend to one core, zcount to another, but I can’t seem to quite get up there. zsend sometimes starts off strong, at about 14.86Mpps, but slowly ramps down to about 14.25Mpps after about 15 seconds.
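
A ramp-down over ~15 seconds looks more like turbo/thermal frequency decay on the pinned core than a PF_RING limit; one minimal check, assuming /proc/cpuinfo reports per-core clocks, is to watch the clock while the rate drops:

# refresh the per-core clock once a second while zsend runs
watch -n 1 "grep 'cpu MHz' /proc/cpuinfo"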

Re: Only 10Gbps in one direction? [ In reply to ]
It could be related to available memory bandwidth; do you see the same when zcount is not running?
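If you want to double-check the memory side, something along these lines while zsend is running should give a rough idea (just a sketch: the LLC event names vary by CPU, and the PID is whatever your zsend process reports):

perf stat -e LLC-loads,LLC-load-misses -p <zsend-pid> sleep 10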

Alfredo

> On 08 Jun 2016, at 23:40, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>
> I did. zsend to one core, zcount to another, but I can’t seem to quite get up there. zsend sometimes has the tendency to start off strong, about 14.86Mpps, but slowly ramps down to about 14.25Mpps after about 15 seconds.
>
> On Jun 8, 2016, at 4:48 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> Did you bind pfsend/zsend to a cpu core?
>
> Alfredo
>
> On 08 Jun 2016, at 22:44, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>
> I don’t seem to be able to transmit the full 14.88Mpps to get full linerate. Is there anything else I can tweak? I’ve added the -a option which has gotten me a bit closer, but not all the way.
>
> My CPU is a 6 core Intel® Xeon® CPU E5-2620 v3 @ 2.40GHz
>
> Thanks once again!
>
> On Jun 8, 2016, at 4:21 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> Please use RSS=1,1,1,1,1,1 as you have 6 ports.
>
> Alfredo
>
> On 08 Jun 2016, at 22:11, jason-ntop@lixfeld.ca wrote:
>
> Thanks Alfredo,
>
> If I’m reading this correctly, eth2 has 12 Tx and Rx queues while eth7 has 1 each:
>
> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth2/info
> Name: eth2
> Index: 86
> Address: 00:0C:BD:08:80:98
> Polling Mode: NAPI/ZC
> Type: Ethernet
> Family: Intel ixgbe 82599
> Max # TX Queues: 12
> # Used RX Queues: 12
> Num RX Slots: 32768
> Num TX Slots: 32768
> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth7/info
> Name: eth7
> Index: 83
> Address: 00:0C:BD:08:80:9D
> Polling Mode: NAPI/ZC
> Type: Ethernet
> Family: Intel ixgbe 82599
> Max # TX Queues: 1
> # Used RX Queues: 1
> Num RX Slots: 32768
> Num TX Slots: 32768
> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>
> load_driver.sh seems to be set to disable multi-queue, so I’m not quite sure how it got this way, or how to correct it?
>
> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# grep -i rss load_driver.sh
> #insmod ./ixgbe.ko RSS=0,0,0,0
> insmod ./ixgbe.ko RSS=1,1,1,1
> #insmod ./ixgbe.ko RSS=1,1,1,1 low_latency_tx=1
> #insmod ./ixgbe.ko MQ=1,1,1,1 RSS=16,16,16,16
> #insmod ./ixgbe.ko RSS=1,1,1,1 FdirPballoc=3,3,3,3
> #insmod ./ixgbe.ko RSS=1,1,1,1 numa_cpu_affinity=0,0,0,0
> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>
> On Jun 8, 2016, at 3:55 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> Hi Jason no problem, I was able to read something :-) Please check that both the interfaces are configured with a single RSS queue (take a look at /proc/net/pf_ring/dev/eth2/info)
>
> Alfredo
Re: Only 10Gbps in one direction? [ In reply to ]
Without zcount running, ./zsend -i zc:eth7 -g 1 -c 1 -a starts off at 14.88Mpps/10Gbps. It stays like that until I start doing stuff in another window (ssh’d in) like running top or changing directories, at which point it drops to about 14.5Mpps and seems to hover there.
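For what it’s worth, one thing I haven’t tried yet is keeping that interactive work off the core zsend is pinned to, something like this (core numbers are just an example, with zsend on core 1):

taskset -c 4,5 top
taskset -cp 4,5 <pid-of-my-shell>

That should at least rule out the obvious scheduler interference.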

> On Jun 8, 2016, at 5:43 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> It could be related to available memory bandwidth, do you see the same when zcount is not running?
>
> Alfredo
Re: Only 10Gbps in one direction? [ In reply to ]
Good day!

I’m just circling back to see if anyone has any further insights on how I might be able to get a steady 14.88Mpps?

Admittedly, I spec’d this box to be able to use MoonGen. According to their specs, I should be able to get a full 14.88Mpps across all 6 ports on this NIC. That said, after discovering that pf_ring:zc is able to perform as well as MoonGen, I’d rather go this route so I can (hopefully) use Ostinato as a front-end.

Thanks!

> On Jun 8, 2016, at 5:55 PM, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>
> Without zcount and ./zsend -i zc:eth7 -g 1 -c 1 -a, it starts off at 14.88/10Gbps. It stays like that until I start doing stuff in another window (ssh’d in) like running top or changing directories. At which point it dropped down to 14.5Mpps and seems to be hovering there now.
Re: Only 10Gbps in one direction? [ In reply to ]
Hi Jason
it seems that other applications are interfering with zsend and affecting the transmission rate.
This could depend on several factors, including core isolation (other applications using the
core where zsend is running) and memory bandwidth (since starting zcount on another core
affects zsend, the latter seems to be the case here).
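If you want to rule out the core isolation part, a typical setup would be something like this (untested on your box, core 1 is just an example, and update-grub assumes Debian/Ubuntu):

# reserve core 1 at boot so the scheduler leaves it alone
# (add to GRUB_CMDLINE_LINUX in /etc/default/grub, then update-grub and reboot)
isolcpus=1
# then bind zsend to the isolated core, same flags you used before
./zsend -i zc:eth7 -c 1 -g 1 -a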
Were you able to run MoonGen or other applications with better performance than zsend
on the same machine?

Thank you
Alfredo

> On 10 Jun 2016, at 13:27, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>
> Good day!
>
> I’m just circling back to see if anyone has any further insights on how I might be able to get a steady 14.88Mpps?
>
> Admittedly, I spec’d this box to be able to use MoonGen. According to their specs, I should be able to get a full 14.88Mpps across all 6 ports on this NIC. That said, after discovering that pf_ring:zc is able to perform as well as MoonGen, I’d rather go this route so I can (hopefully) us Ostinato as a front-end.
>
> Thanks!
Re: Only 10Gbps in one direction? [ In reply to ]
Hi Alfredo,

I can run MoonGen’s benchmark application (60 byte UDP packets) on one core and achieve full line rate.

root@pgen:~/MoonGen# ./build/MoonGen examples/benchmark/udp-throughput.lua 1:1
[INFO] Initializing DPDK. This will take a few seconds...
[INFO] Found 7 usable devices:
Device 0: 00:0C:BD:08:80:9C (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 1: 00:0C:BD:08:80:9D (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 2: 00:0C:BD:08:80:9A (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 3: 00:0C:BD:08:80:9B (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 4: 00:0C:BD:08:80:98 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 5: 00:0C:BD:08:80:99 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 6: 0C:C4:7A:AB:F9:C1 (I350 Gigabit Network Connection)
[WARN] You are running Linux >= 3.14, DDIO might not be working with DPDK in this setup!
[WARN] This can cause a huge performance impact (one memory access per packet!) preventing MoonGen from reaching line rate.
[WARN] Try using an older kernel (we recommend 3.13) if you see a low performance or huge cache miss ratio.
[INFO] Waiting for devices to come up...
[INFO] Device 1 (00:0C:BD:08:80:9D) is up: full-duplex 10000 MBit/s
[INFO] 1 devices are up.
[Device: id=1] Sent 14878261 packets, current rate 14.88 Mpps, 7617.60 MBit/s, 9998.09 MBit/s wire rate.
[Device: id=1] Sent 29758773 packets, current rate 14.88 Mpps, 7618.79 MBit/s, 9999.66 MBit/s wire rate.
[Device: id=1] Sent 44639285 packets, current rate 14.88 Mpps, 7618.79 MBit/s, 9999.66 MBit/s wire rate.
^C[Device: id=1] Sent 51834112 packets with 3317383168 bytes payload (including CRC).
[Device: id=1] Sent 14.880450 (StdDev 0.000001) Mpps, 7618.790143 (StdDev 0.000847) MBit/s, 9999.662143 (StdDev 0.000999) MBit/s wire rate on average.
PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
root@pgen:~/MoonGen#

I am also able to use 2 cores and get line rate on two ports:

root@pgen:~/MoonGen# ./build/MoonGen examples/benchmark/udp-throughput.lua 1:1 4:1
[INFO] Initializing DPDK. This will take a few seconds...
[INFO] Found 7 usable devices:
Device 0: 00:0C:BD:08:80:9C (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 1: 00:0C:BD:08:80:9D (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 2: 00:0C:BD:08:80:9A (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 3: 00:0C:BD:08:80:9B (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 4: 00:0C:BD:08:80:98 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 5: 00:0C:BD:08:80:99 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 6: 0C:C4:7A:AB:F9:C1 (I350 Gigabit Network Connection)
[WARN] You are running Linux >= 3.14, DDIO might not be working with DPDK in this setup!
[WARN] This can cause a huge performance impact (one memory access per packet!) preventing MoonGen from reaching line rate.
[WARN] Try using an older kernel (we recommend 3.13) if you see a low performance or huge cache miss ratio.
[INFO] Waiting for devices to come up...
[INFO] Device 1 (00:0C:BD:08:80:9D) is up: full-duplex 10000 MBit/s
[INFO] Device 4 (00:0C:BD:08:80:98) is up: full-duplex 10000 MBit/s
[INFO] 2 devices are up.
[Device: id=1] Sent 14876880 packets, current rate 14.88 Mpps, 7616.92 MBit/s, 9997.21 MBit/s wire rate.
[Device: id=4] Sent 14876732 packets, current rate 14.88 Mpps, 7616.78 MBit/s, 9997.02 MBit/s wire rate.
[Device: id=1] Sent 29757364 packets, current rate 14.88 Mpps, 7618.78 MBit/s, 9999.65 MBit/s wire rate.
[Device: id=4] Sent 29756984 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
[Device: id=1] Sent 44637898 packets, current rate 14.88 Mpps, 7618.80 MBit/s, 9999.68 MBit/s wire rate.
[Device: id=4] Sent 44637237 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
[Device: id=1] Sent 59518743 packets, current rate 14.88 Mpps, 7618.98 MBit/s, 9999.91 MBit/s wire rate.
[Device: id=4] Sent 59517493 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
^C[Device: id=1] Sent 69843968 packets with 4470013952 bytes payload (including CRC).
[Device: id=1] Sent 14.880561 (StdDev 0.000194) Mpps, 7618.853606 (StdDev 0.110259) MBit/s, 9999.743332 (StdDev 0.141375) MBit/s wire rate on average.
[Device: id=4] Sent 69767424 packets with 4465115136 bytes payload (including CRC).
[Device: id=4] Sent 14.880251 (StdDev 0.000002) Mpps, 7618.688334 (StdDev 0.001419) MBit/s, 9999.528438 (StdDev 0.001742) MBit/s wire rate on average.
PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
root@pgen:~/MoonGen#
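For an apples-to-apples comparison, I suppose the closest zsend equivalent would be two instances pinned to separate cores, something like this (same flags as earlier in the thread; the core and cluster numbers are just examples, and I haven’t run this exact pair yet):

./zsend -i zc:eth7 -c 1 -g 1 -a &
./zsend -i zc:eth2 -c 2 -g 2 -a &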

> On Jun 10, 2016, at 9:07 AM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> Hi Jason
> it seems that other applications are interfering with zsend affecting the transmission rate,
> this could depend on several factor including core isolation (other applications using the
> core where zsend is running) and memory bandwidth (it seems that starting pfcount on
> another core affects zsend, thus this seems to be the case).
> Were you able to run MoonGen or other applications with better performance than zsend
> on the same machine?
>
> Thank you
> Alfredo
Re: Only 10Gbps in one direction? [ In reply to ]
Hi Jason
I will have a look at what it does (how it generates traffic) and come back to you.

Alfredo

> On 10 Jun 2016, at 15:56, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>
> Hi Alfredo,
>
> I can run MoonGen’s benchmark application (60-byte UDP packets) on one core and achieve full linerate.
>
> root@pgen:~/MoonGen# ./build/MoonGen examples/benchmark/udp-throughput.lua 1:1
> [INFO] Initializing DPDK. This will take a few seconds...
> [INFO] Found 7 usable devices:
> Device 0: 00:0C:BD:08:80:9C (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 1: 00:0C:BD:08:80:9D (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 2: 00:0C:BD:08:80:9A (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 3: 00:0C:BD:08:80:9B (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 4: 00:0C:BD:08:80:98 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 5: 00:0C:BD:08:80:99 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 6: 0C:C4:7A:AB:F9:C1 (I350 Gigabit Network Connection)
> [WARN] You are running Linux >= 3.14, DDIO might not be working with DPDK in this setup!
> [WARN] This can cause a huge performance impact (one memory access per packet!) preventing MoonGen from reaching line rate.
> [WARN] Try using an older kernel (we recommend 3.13) if you see a low performance or huge cache miss ratio.
> [INFO] Waiting for devices to come up...
> [INFO] Device 1 (00:0C:BD:08:80:9D) is up: full-duplex 10000 MBit/s
> [INFO] 1 devices are up.
> [Device: id=1] Sent 14878261 packets, current rate 14.88 Mpps, 7617.60 MBit/s, 9998.09 MBit/s wire rate.
> [Device: id=1] Sent 29758773 packets, current rate 14.88 Mpps, 7618.79 MBit/s, 9999.66 MBit/s wire rate.
> [Device: id=1] Sent 44639285 packets, current rate 14.88 Mpps, 7618.79 MBit/s, 9999.66 MBit/s wire rate.
> ^C[Device: id=1] Sent 51834112 packets with 3317383168 bytes payload (including CRC).
> [Device: id=1] Sent 14.880450 (StdDev 0.000001) Mpps, 7618.790143 (StdDev 0.000847) MBit/s, 9999.662143 (StdDev 0.000999) MBit/s wire rate on average.
> PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
> root@pgen:~/MoonGen#
>
> I am also able to run it on 2 cores and get linerate on two ports:
>
> root@pgen:~/MoonGen# ./build/MoonGen examples/benchmark/udp-throughput.lua 1:1 4:1
> [INFO] Initializing DPDK. This will take a few seconds...
> [INFO] Found 7 usable devices:
> Device 0: 00:0C:BD:08:80:9C (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 1: 00:0C:BD:08:80:9D (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 2: 00:0C:BD:08:80:9A (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 3: 00:0C:BD:08:80:9B (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 4: 00:0C:BD:08:80:98 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 5: 00:0C:BD:08:80:99 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
> Device 6: 0C:C4:7A:AB:F9:C1 (I350 Gigabit Network Connection)
> [WARN] You are running Linux >= 3.14, DDIO might not be working with DPDK in this setup!
> [WARN] This can cause a huge performance impact (one memory access per packet!) preventing MoonGen from reaching line rate.
> [WARN] Try using an older kernel (we recommend 3.13) if you see a low performance or huge cache miss ratio.
> [INFO] Waiting for devices to come up...
> [INFO] Device 1 (00:0C:BD:08:80:9D) is up: full-duplex 10000 MBit/s
> [INFO] Device 4 (00:0C:BD:08:80:98) is up: full-duplex 10000 MBit/s
> [INFO] 2 devices are up.
> [Device: id=1] Sent 14876880 packets, current rate 14.88 Mpps, 7616.92 MBit/s, 9997.21 MBit/s wire rate.
> [Device: id=4] Sent 14876732 packets, current rate 14.88 Mpps, 7616.78 MBit/s, 9997.02 MBit/s wire rate.
> [Device: id=1] Sent 29757364 packets, current rate 14.88 Mpps, 7618.78 MBit/s, 9999.65 MBit/s wire rate.
> [Device: id=4] Sent 29756984 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
> [Device: id=1] Sent 44637898 packets, current rate 14.88 Mpps, 7618.80 MBit/s, 9999.68 MBit/s wire rate.
> [Device: id=4] Sent 44637237 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
> [Device: id=1] Sent 59518743 packets, current rate 14.88 Mpps, 7618.98 MBit/s, 9999.91 MBit/s wire rate.
> [Device: id=4] Sent 59517493 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
> ^C[Device: id=1] Sent 69843968 packets with 4470013952 bytes payload (including CRC).
> [Device: id=1] Sent 14.880561 (StdDev 0.000194) Mpps, 7618.853606 (StdDev 0.110259) MBit/s, 9999.743332 (StdDev 0.141375) MBit/s wire rate on average.
> [Device: id=4] Sent 69767424 packets with 4465115136 bytes payload (including CRC).
> [Device: id=4] Sent 14.880251 (StdDev 0.000002) Mpps, 7618.688334 (StdDev 0.001419) MBit/s, 9999.528438 (StdDev 0.001742) MBit/s wire rate on average.
> PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
> PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
> root@pgen:~/MoonGen#
>
>> On Jun 10, 2016, at 9:07 AM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>
>> Hi Jason
>> it seems that other applications are interfering with zsend and affecting the transmission rate.
>> This could depend on several factors, including core isolation (other applications using the
>> core where zsend is running) and memory bandwidth (the fact that starting pfcount on
>> another core affects zsend suggests this is the case).
>> Were you able to run MoonGen or other applications with better performance than zsend
>> on the same machine?
>>
>> Thank you
>> Alfredo
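
One way to rule out core contention is to pin the sender and the receiver to two different cores explicitly. A minimal sketch, run from the examples_zc directory (the -g core-binding flag is the one zsend is already given elsewhere in this thread; I am assuming zcount accepts the same option):

./zsend -i zc:eth7 -c 2 -g 1 -a    # TX bound to core 1, ZC cluster 2
./zcount -i zc:eth2 -c 1 -g 2      # RX bound to core 2, ZC cluster 1

Keeping both of those cores away from core 0, where the single-queue ports' IRQs are pinned, also avoids competing with interrupt processing.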
>>
>>> On 10 Jun 2016, at 13:27, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>
>>> Good day!
>>>
>>> I’m just circling back to see if anyone has any further insights on how I might be able to get a steady 14.88Mpps.
>>>
>>> Admittedly, I spec’d this box to be able to use MoonGen. According to their specs, I should be able to get a full 14.88Mpps across all 6 ports on this NIC. That said, after discovering that pf_ring:zc is able to perform as well as MoonGen, I’d rather go this route so I can (hopefully) use Ostinato as a front-end.
>>>
>>> Thanks!
>>>
>>>> On Jun 8, 2016, at 5:55 PM, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>>
>>>> Without zcount running, ./zsend -i zc:eth7 -g 1 -c 1 -a starts off at 14.88Mpps/10Gbps. It stays like that until I start doing stuff in another window (ssh’d in), like running top or changing directories, at which point it drops down to 14.5Mpps and seems to be hovering there now.
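
If ordinary shell activity is enough to pull the rate down, it may help to isolate the traffic-generation cores from the scheduler entirely. A rough sketch, assuming a Debian/Ubuntu-style GRUB setup and picking cores 1-3 purely for illustration:

# /etc/default/grub: append isolcpus to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="<existing options> isolcpus=1,2,3"

update-grub
reboot

After the reboot, only explicitly pinned tasks (e.g. zsend -g 1) run on the isolated cores; ssh sessions, top and the rest of userspace stay on the remaining cores. This does not address memory-bandwidth contention, but it keeps stray processes off the TX/RX cores.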
>>>>
>>>> On Jun 8, 2016, at 5:43 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>>
>>>> It could be related to available memory bandwidth; do you see the same when zcount is not running?
>>>>
>>>> Alfredo
>>>>
>>>> On 08 Jun 2016, at 23:40, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>>
>>>> I did. zsend to one core, zcount to another, but I can’t seem to quite get up there. zsend sometimes has a tendency to start off strong, at about 14.86Mpps, but slowly ramps down to about 14.25Mpps after about 15 seconds.
>>>>
>>>> On Jun 8, 2016, at 4:48 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>>
>>>> Did you bind pfsend/zsend to a cpu core?
>>>>
>>>> Alfredo
>>>>
>>>> On 08 Jun 2016, at 22:44, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>>
>>>> I don’t seem to be able to transmit the full 14.88Mpps to get full linerate. Is there anything else I can tweak? I’ve added the -a option, which has gotten me a bit closer, but not all the way.
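
For reference, 14.88Mpps is the theoretical ceiling for minimum-size frames at 10GbE, so there is no slack to recover elsewhere: each 64-byte frame occupies 64 + 8 (preamble/SFD) + 12 (inter-frame gap) = 84 bytes on the wire, i.e. 672 bits, and 10,000,000,000 bit/s ÷ 672 bit ≈ 14,880,952 packets per second.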
>>>>
>>>> My CPU is a 6 core Intel® Xeon® CPU E5-2620 v3 @ 2.40GHz
>>>>
>>>> Thanks once again!
>>>>
>>>> On Jun 8, 2016, at 4:21 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>>
>>>> Please use RSS=1,1,1,1,1,1 as you have 6 ports.
>>>>
>>>> Alfredo
>>>>
>>>> On 08 Jun 2016, at 22:11, jason-ntop@lixfeld.ca wrote:
>>>>
>>>> Thanks Alfredo,
>>>>
>>>> If I’m reading this correctly, eth2 has 12 Tx and Rx queues while eth7 has 1 each:
>>>>
>>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth2/info
>>>> Name: eth2
>>>> Index: 86
>>>> Address: 00:0C:BD:08:80:98
>>>> Polling Mode: NAPI/ZC
>>>> Type: Ethernet
>>>> Family: Intel ixgbe 82599
>>>> Max # TX Queues: 12
>>>> # Used RX Queues: 12
>>>> Num RX Slots: 32768
>>>> Num TX Slots: 32768
>>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth7/info
>>>> Name: eth7
>>>> Index: 83
>>>> Address: 00:0C:BD:08:80:9D
>>>> Polling Mode: NAPI/ZC
>>>> Type: Ethernet
>>>> Family: Intel ixgbe 82599
>>>> Max # TX Queues: 1
>>>> # Used RX Queues: 1
>>>> Num RX Slots: 32768
>>>> Num TX Slots: 32768
>>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>>>>
>>>> load_driver.sh seems to be set to disable multi-queue, so I’m not quite sure how it got this way, or how to correct it.
>>>>
>>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# grep -i rss load_driver.sh
>>>> #insmod ./ixgbe.ko RSS=0,0,0,0
>>>> insmod ./ixgbe.ko RSS=1,1,1,1
>>>> #insmod ./ixgbe.ko RSS=1,1,1,1 low_latency_tx=1
>>>> #insmod ./ixgbe.ko MQ=1,1,1,1 RSS=16,16,16,16
>>>> #insmod ./ixgbe.ko RSS=1,1,1,1 FdirPballoc=3,3,3,3
>>>> #insmod ./ixgbe.ko RSS=1,1,1,1 numa_cpu_affinity=0,0,0,0
>>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
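
Following Alfredo's RSS=1,1,1,1,1,1 suggestion from earlier in the thread, the likely fix is to give the RSS module parameter one value per port. A sketch of the change (assuming load_driver.sh is otherwise left as shipped; if the script does not unload the old module itself, an explicit rmmod is needed first):

# in load_driver.sh, change the active insmod line
# from: insmod ./ixgbe.ko RSS=1,1,1,1
# to:
insmod ./ixgbe.ko RSS=1,1,1,1,1,1

# then reload the driver
rmmod ixgbe
./load_driver.sh

After this, /proc/net/pf_ring/dev/eth2/info (and eth3) should report a single used RX queue, matching eth7.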
>>>>
>>>> On Jun 8, 2016, at 3:55 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>>
>>>> Hi Jason
>>>> no problem, I was able to read something :-)
>>>> Please check that both the interfaces are configured with a single RSS queue
>>>> (take a look at /proc/net/pf_ring/dev/eth2/info)
>>>>
>>>> Alfredo
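
A quick way to confirm the queue configuration across all six ports is to loop over the proc entries mentioned above (interface names taken from the load_driver.sh output earlier in the thread; adjust if they differ):

for i in eth2 eth3 eth4 eth5 eth6 eth7; do
  echo -n "$i: "; grep "Used RX Queues" /proc/net/pf_ring/dev/$i/info
done

Each port should show exactly one used RX queue once RSS is set to 1 for every port.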
>>>>
>>>> On 08 Jun 2016, at 21:09, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>>
>>>> My gosh! I’m so sorry for the way this is formatted. My mailer insists that this message was sent in plain-text, not whatever the heck this is!
>>>>
>>>> I’m sorry this is so impossible to read :(
>>>>
>>>> On Jun 8, 2016, at 3:05 PM, jason-ntop@lixfeld.ca wrote:
>>>>
>>>> Hello,
>>>>
>>>> My first run-through with pfring. :)
>>>>
>>>> I’ve compiled the zc variant of pfring in an attempt to get linerate between two of the ports which are looped together.
>>>>
>>>> The NIC is a 6 port 82599 based one.
>>>>
>>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# ./load_driver.sh
>>>> irqbalance: no process found
>>>> Configuring eth6
>>>> IFACE CORE MASK -> FILE
>>>> =======================
>>>> eth6 0 1 -> /proc/irq/87/smp_affinity
>>>> Configuring eth7
>>>> IFACE CORE MASK -> FILE
>>>> =======================
>>>> eth7 0 1 -> /proc/irq/89/smp_affinity
>>>> Configuring eth2
>>>> IFACE CORE MASK -> FILE
>>>> =======================
>>>> eth2 0 1 -> /proc/irq/96/smp_affinity
>>>> eth2 1 2 -> /proc/irq/97/smp_affinity
>>>> eth2 2 4 -> /proc/irq/98/smp_affinity
>>>> eth2 3 8 -> /proc/irq/99/smp_affinity
>>>> eth2 4 10 -> /proc/irq/100/smp_affinity
>>>> eth2 5 20 -> /proc/irq/101/smp_affinity
>>>> eth2 6 40 -> /proc/irq/102/smp_affinity
>>>> eth2 7 80 -> /proc/irq/103/smp_affinity
>>>> eth2 8 100 -> /proc/irq/104/smp_affinity
>>>> eth2 9 200 -> /proc/irq/105/smp_affinity
>>>> eth2 10 400 -> /proc/irq/106/smp_affinity
>>>> eth2 11 800 -> /proc/irq/107/smp_affinity
>>>> Configuring eth3
>>>> IFACE CORE MASK -> FILE
>>>> =======================
>>>> eth3 0 1 -> /proc/irq/109/smp_affinity
>>>> eth3 1 2 -> /proc/irq/110/smp_affinity
>>>> eth3 2 4 -> /proc/irq/111/smp_affinity
>>>> eth3 3 8 -> /proc/irq/112/smp_affinity
>>>> eth3 4 10 -> /proc/irq/113/smp_affinity
>>>> eth3 5 20 -> /proc/irq/114/smp_affinity
>>>> eth3 6 40 -> /proc/irq/115/smp_affinity
>>>> eth3 7 80 -> /proc/irq/116/smp_affinity
>>>> eth3 8 100 -> /proc/irq/117/smp_affinity
>>>> eth3 9 200 -> /proc/irq/118/smp_affinity
>>>> eth3 10 400 -> /proc/irq/119/smp_affinity
>>>> eth3 11 800 -> /proc/irq/120/smp_affinity
>>>> Configuring eth4
>>>> IFACE CORE MASK -> FILE
>>>> =======================
>>>> eth4 0 1 -> /proc/irq/91/smp_affinity
>>>> Configuring eth5
>>>> IFACE CORE MASK -> FILE
>>>> =======================
>>>> eth5 0 1 -> /proc/irq/93/smp_affinity
>>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>>>>
>>>> My issue is that I only get 10Gbps in one direction. If zc:eth7 is the sender, zc:eth2 only sees Rx @ 0.54Gbps:
>>>>
>>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth7 -c 2
>>>>
>>>> Absolute Stats: 111'707'057 pkts - 9'383'392'788 bytes
>>>> Actual Stats: 13'983'520.42 pps - 9.40 Gbps [1133946996 bytes / 1.0 sec]
>>>>
>>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth2 -c 1
>>>>
>>>> Absolute Stats: 5'699'096 pkts (33'135'445 drops) - 478'724'064 bytes
>>>> Actual Stats: 802'982.00 pps (4'629'316.93 drops) - 0.54 Gbps
>>>>
>>>> But, if zc:eth2 is the sender, zc:eth7 sees rates more in-line with what zc:eth2 is sending.
>>>>
>>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth2 -c 2
>>>>
>>>> Absolute Stats: 28'285'274 pkts - 2'375'963'016 bytes
>>>> Actual Stats: 14'114'355.24 pps - 9.48 Gbps [1185800280 bytes / 1.0 sec]
>>>>
>>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth7 -c 1
>>>>
>>>> Absolute Stats: 28'007'460 pkts (0 drops) - 2'352'626'640 bytes
>>>> Actual Stats: 14'044'642.54 pps (0.00 drops) - 9.44 Gbps
>>>>
>>>> I’ve done some reading, but I haven’t found anything that has pointed me towards a possible reason why this is happening. I’m wondering if anyone has any thoughts?
>>>>
>>>> Thanks!
>>>>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc