Hi Alfredo,
I can run MoonGen’s benchmark application (60-byte UDP packets) on one core and achieve full line rate:
root@pgen:~/MoonGen# ./build/MoonGen examples/benchmark/udp-throughput.lua 1:1
[INFO] Initializing DPDK. This will take a few seconds...
[INFO] Found 7 usable devices:
Device 0: 00:0C:BD:08:80:9C (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 1: 00:0C:BD:08:80:9D (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 2: 00:0C:BD:08:80:9A (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 3: 00:0C:BD:08:80:9B (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 4: 00:0C:BD:08:80:98 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 5: 00:0C:BD:08:80:99 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 6: 0C:C4:7A:AB:F9:C1 (I350 Gigabit Network Connection)
[WARN] You are running Linux >= 3.14, DDIO might not be working with DPDK in this setup!
[WARN] This can cause a huge performance impact (one memory access per packet!) preventing MoonGen from reaching line rate.
[WARN] Try using an older kernel (we recommend 3.13) if you see a low performance or huge cache miss ratio.
[INFO] Waiting for devices to come up...
[INFO] Device 1 (00:0C:BD:08:80:9D) is up: full-duplex 10000 MBit/s
[INFO] 1 devices are up.
[Device: id=1] Sent 14878261 packets, current rate 14.88 Mpps, 7617.60 MBit/s, 9998.09 MBit/s wire rate.
[Device: id=1] Sent 29758773 packets, current rate 14.88 Mpps, 7618.79 MBit/s, 9999.66 MBit/s wire rate.
[Device: id=1] Sent 44639285 packets, current rate 14.88 Mpps, 7618.79 MBit/s, 9999.66 MBit/s wire rate.
^C[Device: id=1] Sent 51834112 packets with 3317383168 bytes payload (including CRC).
[Device: id=1] Sent 14.880450 (StdDev 0.000001) Mpps, 7618.790143 (StdDev 0.000847) MBit/s, 9999.662143 (StdDev 0.000999) MBit/s wire rate on average.
PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
root@pgen:~/MoonGen#
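For context, 14.88 Mpps is exactly minimum-size-frame line rate on 10 GbE: each 64-byte frame (60-byte packet plus 4-byte CRC) costs 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted. A quick arithmetic check, not part of the run above:

```shell
# 10 Gbit/s divided by the on-wire cost of a minimum frame:
# 64 bytes (incl. CRC) + 8 preamble + 12 inter-frame gap = 84 bytes
echo $((10000000000 / (84 * 8)))           # => 14880952 frames/s
# Payload rate excluding preamble/IFG, close to the ~7619 MBit/s figures above:
echo $((10000000000 / (84 * 8) * 64 * 8))  # => 7619047424 bits/s
```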
I am also able to run it on two cores and get line rate on two ports:
root@pgen:~/MoonGen# ./build/MoonGen examples/benchmark/udp-throughput.lua 1:1 4:1
[INFO] Initializing DPDK. This will take a few seconds...
[INFO] Found 7 usable devices:
Device 0: 00:0C:BD:08:80:9C (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 1: 00:0C:BD:08:80:9D (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 2: 00:0C:BD:08:80:9A (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 3: 00:0C:BD:08:80:9B (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 4: 00:0C:BD:08:80:98 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 5: 00:0C:BD:08:80:99 (82599EB 10-Gigabit SFI/SFP+ Network Connection)
Device 6: 0C:C4:7A:AB:F9:C1 (I350 Gigabit Network Connection)
[WARN] You are running Linux >= 3.14, DDIO might not be working with DPDK in this setup!
[WARN] This can cause a huge performance impact (one memory access per packet!) preventing MoonGen from reaching line rate.
[WARN] Try using an older kernel (we recommend 3.13) if you see a low performance or huge cache miss ratio.
[INFO] Waiting for devices to come up...
[INFO] Device 1 (00:0C:BD:08:80:9D) is up: full-duplex 10000 MBit/s
[INFO] Device 4 (00:0C:BD:08:80:98) is up: full-duplex 10000 MBit/s
[INFO] 2 devices are up.
[Device: id=1] Sent 14876880 packets, current rate 14.88 Mpps, 7616.92 MBit/s, 9997.21 MBit/s wire rate.
[Device: id=4] Sent 14876732 packets, current rate 14.88 Mpps, 7616.78 MBit/s, 9997.02 MBit/s wire rate.
[Device: id=1] Sent 29757364 packets, current rate 14.88 Mpps, 7618.78 MBit/s, 9999.65 MBit/s wire rate.
[Device: id=4] Sent 29756984 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
[Device: id=1] Sent 44637898 packets, current rate 14.88 Mpps, 7618.80 MBit/s, 9999.68 MBit/s wire rate.
[Device: id=4] Sent 44637237 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
[Device: id=1] Sent 59518743 packets, current rate 14.88 Mpps, 7618.98 MBit/s, 9999.91 MBit/s wire rate.
[Device: id=4] Sent 59517493 packets, current rate 14.88 Mpps, 7618.69 MBit/s, 9999.53 MBit/s wire rate.
^C[Device: id=1] Sent 69843968 packets with 4470013952 bytes payload (including CRC).
[Device: id=1] Sent 14.880561 (StdDev 0.000194) Mpps, 7618.853606 (StdDev 0.110259) MBit/s, 9999.743332 (StdDev 0.141375) MBit/s wire rate on average.
[Device: id=4] Sent 69767424 packets with 4465115136 bytes payload (including CRC).
[Device: id=4] Sent 14.880251 (StdDev 0.000002) Mpps, 7618.688334 (StdDev 0.001419) MBit/s, 9999.528438 (StdDev 0.001742) MBit/s wire rate on average.
PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
PMD: ixgbe_dev_tx_queue_stop(): Could not disable Tx Queue 0
root@pgen:~/MoonGen#
> On Jun 10, 2016, at 9:07 AM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
> Hi Jason
> it seems that other applications are interfering with zsend, affecting the transmission rate.
> This could depend on several factors, including core isolation (other applications using the
> core where zsend is running) and memory bandwidth (starting pfcount on another core
> seems to affect zsend, so this does appear to be the case).
> Were you able to run MoonGen or other applications with better performance than zsend
> on the same machine?
>
> Thank you
> Alfredo
>
>> On 10 Jun 2016, at 13:27, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>
>> Good day!
>>
>> I’m just circling back to see if anyone has further insights on how I might get a steady 14.88Mpps.
>>
>> Admittedly, I spec’d this box to be able to use MoonGen. According to their specs, I should be able to get a full 14.88Mpps across all 6 ports on this NIC. That said, after discovering that pf_ring:zc can perform as well as MoonGen, I’d rather go this route so I can (hopefully) use Ostinato as a front-end.
>>
>> Thanks!
>>
>>> On Jun 8, 2016, at 5:55 PM, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>
>>> Without zcount running, ./zsend -i zc:eth7 -g 1 -c 1 -a starts off at 14.88Mpps/10Gbps. It stays like that until I start doing things in another window (ssh’d in), like running top or changing directories, at which point it drops to 14.5Mpps and seems to hover there now.
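A rate drop whenever other processes run is what you would expect if the scheduler occasionally places other work on the send core. One way to rule that out is to reserve a core at boot and pin zsend to it; the sketch below is illustrative only (the isolcpus setting and core number are not from this thread; the -g/-c/-a flags are the ones zsend already uses above):

```shell
# Illustrative configuration sketch, not from the thread:
# 1) reserve core 2 from the general scheduler at boot by adding
#    isolcpus=2 to the kernel command line (e.g. in GRUB_CMDLINE_LINUX),
# 2) then bind zsend's send thread to that core with -g:
./zsend -i zc:eth7 -c 1 -g 2 -a
```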
>>>
>>> On Jun 8, 2016, at 5:43 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>
>>> It could be related to available memory bandwidth, do you see the same when zcount is not running?
>>>
>>> Alfredo
>>>
>>> On 08 Jun 2016, at 23:40, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>
>>> I did. zsend on one core, zcount on another, but I can’t seem to quite get up there. zsend sometimes starts off strong, at about 14.86Mpps, but slowly ramps down to about 14.25Mpps after about 15 seconds.
>>>
>>> On Jun 8, 2016, at 4:48 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>
>>> Did you bind pfsend/zsend to a cpu core?
>>>
>>> Alfredo
>>>
>>> On 08 Jun 2016, at 22:44, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>
>>> I don’t seem to be able to transmit the full 14.88Mpps needed for line rate. Is there anything else I can tweak? I’ve added the -a option, which has gotten me a bit closer, but not all the way.
>>>
>>> My CPU is a 6-core Intel® Xeon® E5-2620 v3 @ 2.40GHz.
>>>
>>> Thanks once again!
>>>
>>> On Jun 8, 2016, at 4:21 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>
>>> Please use RSS=1,1,1,1,1,1 as you have 6 ports.
>>>
>>> Alfredo
>>>
>>> On 08 Jun 2016, at 22:11, jason-ntop@lixfeld.ca wrote:
>>>
>>> Thanks Alfredo,
>>>
>>> If I’m reading this correctly, eth2 has 12 Tx and Rx queues while eth7 has 1 each:
>>>
>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth2/info
>>> Name: eth2
>>> Index: 86
>>> Address: 00:0C:BD:08:80:98
>>> Polling Mode: NAPI/ZC
>>> Type: Ethernet
>>> Family: Intel ixgbe 82599
>>> Max # TX Queues: 12
>>> # Used RX Queues: 12
>>> Num RX Slots: 32768
>>> Num TX Slots: 32768
>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# cat /proc/net/pf_ring/dev/eth7/info
>>> Name: eth7
>>> Index: 83
>>> Address: 00:0C:BD:08:80:9D
>>> Polling Mode: NAPI/ZC
>>> Type: Ethernet
>>> Family: Intel ixgbe 82599
>>> Max # TX Queues: 1
>>> # Used RX Queues: 1
>>> Num RX Slots: 32768
>>> Num TX Slots: 32768
>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
>>>
>>> load_driver.sh seems to be set to disable multi-queue, so I’m not quite sure how it got this way, or how to correct it:
>>>
>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# grep -i rss load_driver.sh
>>> #insmod ./ixgbe.ko RSS=0,0,0,0
>>> insmod ./ixgbe.ko RSS=1,1,1,1
>>> #insmod ./ixgbe.ko RSS=1,1,1,1 low_latency_tx=1
>>> #insmod ./ixgbe.ko MQ=1,1,1,1 RSS=16,16,16,16
>>> #insmod ./ixgbe.ko RSS=1,1,1,1 FdirPballoc=3,3,3,3
>>> #insmod ./ixgbe.ko RSS=1,1,1,1 numa_cpu_affinity=0,0,0,0
>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
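Note that the one active insmod line passes only four RSS values, but this NIC has six ports, so the remaining ports fall back to the driver default number of queues (which would explain eth2 reporting 12 queues while eth7 has 1). A hedged sketch of building the six-port parameter Alfredo suggests; the rmmod/insmod step is commented out since it needs the ZC driver tree:

```shell
# Build an RSS=1,...,1 parameter with one entry per port (6 here):
N=6
RSS="RSS=$(printf '1,%.0s' $(seq "$N") | sed 's/,$//')"
echo "$RSS"   # => RSS=1,1,1,1,1,1
# Reload the ZC driver with it (requires the ixgbe-zc source tree):
# rmmod ixgbe && insmod ./ixgbe.ko "$RSS"
```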
>>>
>>> On Jun 8, 2016, at 3:55 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>
>>> Hi Jason, no problem, I was able to read something :-)
>>> Please check that both the interfaces are configured with a single RSS queue (take a look at /proc/net/pf_ring/dev/eth2/info).
>>>
>>> Alfredo
>>>
>>> On 08 Jun 2016, at 21:09, Jason Lixfeld <jason-ntop@lixfeld.ca> wrote:
>>>
>>> My gosh! I’m so sorry for the way this is formatted. My mailer insists that this message was sent in plain-text, not whatever the heck this is!
>>>
>>> I’m sorry this is so impossible to read :(
>>>
>>> On Jun 8, 2016, at 3:05 PM, jason-ntop@lixfeld.ca wrote:
>>>
>>> Hello,
>>>
>>> My first run-through with pf_ring. :)
>>>
>>> I’ve compiled the ZC variant of pf_ring in an attempt to get line rate between two of the ports, which are looped together.
>>>
>>> The NIC is a 6 port 82599 based one.
>>>
>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src# ./load_driver.sh
>>> irqbalance: no process found
>>> Configuring eth6
>>> IFACE CORE MASK -> FILE
>>> =======================
>>> eth6 0 1 -> /proc/irq/87/smp_affinity
>>> Configuring eth7
>>> IFACE CORE MASK -> FILE
>>> =======================
>>> eth7 0 1 -> /proc/irq/89/smp_affinity
>>> Configuring eth2
>>> IFACE CORE MASK -> FILE
>>> =======================
>>> eth2 0 1 -> /proc/irq/96/smp_affinity
>>> eth2 1 2 -> /proc/irq/97/smp_affinity
>>> eth2 2 4 -> /proc/irq/98/smp_affinity
>>> eth2 3 8 -> /proc/irq/99/smp_affinity
>>> eth2 4 10 -> /proc/irq/100/smp_affinity
>>> eth2 5 20 -> /proc/irq/101/smp_affinity
>>> eth2 6 40 -> /proc/irq/102/smp_affinity
>>> eth2 7 80 -> /proc/irq/103/smp_affinity
>>> eth2 8 100 -> /proc/irq/104/smp_affinity
>>> eth2 9 200 -> /proc/irq/105/smp_affinity
>>> eth2 10 400 -> /proc/irq/106/smp_affinity
>>> eth2 11 800 -> /proc/irq/107/smp_affinity
>>> Configuring eth3
>>> IFACE CORE MASK -> FILE
>>> =======================
>>> eth3 0 1 -> /proc/irq/109/smp_affinity
>>> eth3 1 2 -> /proc/irq/110/smp_affinity
>>> eth3 2 4 -> /proc/irq/111/smp_affinity
>>> eth3 3 8 -> /proc/irq/112/smp_affinity
>>> eth3 4 10 -> /proc/irq/113/smp_affinity
>>> eth3 5 20 -> /proc/irq/114/smp_affinity
>>> eth3 6 40 -> /proc/irq/115/smp_affinity
>>> eth3 7 80 -> /proc/irq/116/smp_affinity
>>> eth3 8 100 -> /proc/irq/117/smp_affinity
>>> eth3 9 200 -> /proc/irq/118/smp_affinity
>>> eth3 10 400 -> /proc/irq/119/smp_affinity
>>> eth3 11 800 -> /proc/irq/120/smp_affinity
>>> Configuring eth4
>>> IFACE CORE MASK -> FILE
>>> =======================
>>> eth4 0 1 -> /proc/irq/91/smp_affinity
>>> Configuring eth5
>>> IFACE CORE MASK -> FILE
>>> =======================
>>> eth5 0 1 -> /proc/irq/93/smp_affinity
>>> root@pgen:/home/jlixfeld/PF_RING/drivers/intel/ixgbe/ixgbe-4.1.5-zc/src#
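The MASK column in the load_driver.sh output above is simply the hex bitmask of the target core, i.e. (1 << core), which the script writes into /proc/irq/<n>/smp_affinity (core 4 gives mask 10, core 11 gives mask 800). A quick illustration of the mapping:

```shell
# smp_affinity masks are per-core bit masks, printed in hex:
for core in 0 4 11; do
  printf 'core %2d -> mask %x\n' "$core" "$((1 << core))"
done
# core  0 -> mask 1
# core  4 -> mask 10
# core 11 -> mask 800
```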
>>>
>>> My issue is that I only get 10Gbps in one direction. If zc:eth7 is the sender, zc:eth2 only sees Rx @ 0.54Gbps:
>>>
>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth7 -c 2
>>>
>>> Absolute Stats: 111'707'057 pkts – 9'383'392'788 bytes
>>> Actual Stats: 13'983'520.42 pps – 9.40 Gbps [1133946996 bytes / 1.0 sec]
>>>
>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth2 -c 1
>>>
>>> Absolute Stats: 5'699'096 pkts (33'135'445 drops) – 478'724'064 bytes
>>> Actual Stats: 802'982.00 pps (4'629'316.93 drops) – 0.54 Gbps
>>>
>>> But, if zc:eth2 is the sender, zc:eth7 sees rates more in-line with what zc:eth2 is sending.
>>>
>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zsend -i zc:eth2 -c 2
>>>
>>> Absolute Stats: 28'285'274 pkts – 2'375'963'016 bytes
>>> Actual Stats: 14'114'355.24 pps – 9.48 Gbps [1185800280 bytes / 1.0 sec]
>>>
>>> root@pgen:/home/jlixfeld/PF_RING/userland/examples_zc# ./zcount -i zc:eth7 -c 1
>>>
>>> Absolute Stats: 28'007'460 pkts (0 drops) – 2'352'626'640 bytes
>>> Actual Stats: 14'044'642.54 pps (0.00 drops) – 9.44 Gbps
>>>
>>> I’ve done some reading, but I haven’t found anything that has pointed me towards a possible reason why this is happening. I’m wondering if anyone has any thoughts?
>>>
>>> Thanks!
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc