Mailing List Archive

cluster_2_tuple not working as expected
Hi,

We are using the PF_RING cluster feature with cluster_2_tuple hashing, and 2
applications are reading from the same cluster id.

We have observed that packets with the same source and destination IP
addresses are being distributed across the 2 applications, which has
completely broken our logic, as we are trying to reassemble the fragments in
our applications.

Is there a bug in the PF_RING clustering mechanism that is causing this?

We are using PF_RING 6.2.0, and pf_ring is loaded with the command below:
insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0

I also tried this:
insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 enable_frag_coherence=1
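
A minimal sketch of how each of the two reader applications would typically
join the same cluster, assuming the standard PF_RING C API (the hash mode
called cluster_2_tuple here corresponds to the cluster_per_flow_2_tuple
value); the interface name and cluster id below are placeholders:

  #include "pfring.h"

  int main(void) {
    u_char *pkt;
    struct pfring_pkthdr hdr;

    pfring *ring = pfring_open("eth2", 1536 /* caplen */, PF_RING_PROMISC);
    if (ring == NULL) return 1;

    /* Both applications call this with the same cluster id (99 here); the
       kernel then balances packets between them by hashing on the
       source/destination IP pair. */
    if (pfring_set_cluster(ring, 99, cluster_per_flow_2_tuple) != 0) return 1;

    pfring_enable_ring(ring);
    while (pfring_recv(ring, &pkt, 0, &hdr, 1 /* wait for packet */) > 0) {
      /* reassemble fragments / process the packet here */
    }
    pfring_close(ring);
    return 0;
  }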


Regards,
Gautam
Re: cluster_2_tuple not working as expected
Hi Gautam
could you provide a pcap we can use to reproduce this?

Alfredo

Re: cluster_2_tuple not working as expected
Hi Alfredo

Please find attached the traces, one with VLAN and one without
(multiple_fragments_id35515.pcap and multiple_fragments_id35515_wo_vlan.pcap).

To add more detail, there are 2 observations:
1. We ran a larger file of 1 lakh (100,000) packets, and fragments of the
same packet were distributed across the applications.

2. We ran the attached file and observed that 2 packets went to one
application and the rest went to the other.

Thanks & Regards

Re: cluster_2_tuple not working as expected
Hi Gautam
your traffic is GTP traffic, and the hash was being computed on the inner headers when present.
I have changed the behaviour to compute the hash on the outer header when using cluster_per_flow_2_tuple, and introduced
new cluster_per_inner_* hash types for computing the hash on the inner headers, when present.
Please update from GitHub or wait for the new packages.
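
As a rough illustration of this choice (a sketch only; the exact enum name of
the inner variant is assumed from the cluster_per_inner_* naming above and
may differ), an application picks outer-header or inner-header hashing when
it joins the cluster:

  #include "pfring.h"

  /* Join cluster `cid` on `ring`, hashing on the outer IP pair so that all
     fragments of the same outer packet reach the same application. If
     `use_inner` is non-zero, hash on the tunneled (inner) headers instead;
     cluster_per_inner_flow_2_tuple is the assumed name of the new type. */
  static int join_cluster(pfring *ring, u_int cid, int use_inner) {
    return pfring_set_cluster(ring, cid,
        use_inner ? cluster_per_inner_flow_2_tuple
                  : cluster_per_flow_2_tuple);
  }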

Regards
Alfredo

Re: cluster_2_tuple not working as expected
Thanks, Alfredo, for the update.
I will update you once I merge with the latest PF_RING.
Regards,
Gautam

Sent from my iPhone

Re: cluster_2_tuple not working as expected
Hi Alfredo,

I tested with the latest pf_ring from GitHub, but packets are still being
segregated to different applications.
After your latest change, we only need to use cluster_per_flow_2_tuple to
segregate traffic on the outer IP addresses, right?

Should we load the pf_ring module with enable_frag_coherence=1? I have tested
both with and without it using the latest code from GitHub.


Regards,
Gautam

Re: cluster_2_tuple not working as expected
> On 11 Nov 2016, at 07:29, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>
> Hi Alfredo,
>
> I tested with the latest pf_ring from GitHub, but packets are still being segregated to different applications.

Please provide me the output of "cat /proc/net/pf_ring/info"

> After your latest change, we only need to use cluster_per_flow_2_tuple to segregate traffic on the outer IP addresses, right?

Correct

> Should we load the pf_ring module with enable_frag_coherence=1? I have tested both with and without it using the latest code from GitHub.

enable_frag_coherence is set to 1 by default

Alfredo

Re: cluster_2_tuple not working as expected
# cat /proc/net/pf_ring/info
PF_RING Version : 6.5.0 (unknown)
Total rings : 0

Standard (non ZC) Options
Ring slots : 409600
Slot version : 16
Capture TX : No [RX only]
IP Defragment : No
Socket Mode : Standard
Cluster Fragment Queue : 0
Cluster Fragment Discard : 0

Regards,
Gautam

Re: cluster_2_tuple not working as expected
Hi Gautam
for some reason I do not see the pf_ring revision here; please make sure the pf_ring.ko module you are using is built from the latest code.
If you are using packages, please remove the pfring package, manually delete all pf_ring.ko files on your system, and reinstall it to make
sure DKMS installs the new module.
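
As a quick cross-check from the application side, a minimal sketch assuming
the standard pfring_version() call (the same call behind the
"Using PF_RING v.x.y.z" line printed by the pfcount example used later in
this thread; it reports the numeric version for the open ring, not the git
revision shown in /proc/net/pf_ring/info):

  #include <stdio.h>
  #include "pfring.h"

  /* Print the PF_RING version reported for an already-open ring, as a quick
     way to confirm which PF_RING build the application is talking to. */
  static void print_pfring_version(pfring *ring) {
    u_int32_t v = 0;
    if (pfring_version(ring, &v) == 0)
      printf("PF_RING v.%u.%u.%u\n",
             (v & 0xFFFF0000) >> 16, (v & 0x0000FF00) >> 8, v & 0x000000FF);
  }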

Alfredo

Re: cluster_2_tuple not working as expected
Hi Alfredo,

I have not used any packages.
I downloaded the latest PF_RING from https://github.com/ntop/PF_RING,
selected the dev branch, and saved the zip file using the "Clone or
download" option.
I compiled the PF_RING source code and used all the necessary files. I can
also see the changes you made in pf_ring.c.

I think the revision is not displayed due to an issue with git; I received
this error while executing ./configure in the kernel directory:
fatal: Not a git repository (or any of the parent directories): .git

Have you used any pfring examples to verify these changes?

Regards,
Chandrika

Re: cluster_2_tuple not working as expected
> On 11 Nov 2016, at 10:31, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>
> Hi Alfredo,
>
> I have not used any packages.
> I downloaded the latest PF_RING from https://github.com/ntop/PF_RING, selected the dev branch, and saved the zip file using the "Clone or download" option.
> I compiled the PF_RING source code and used all the necessary files. I can also see the changes you made in pf_ring.c.
>
> I think the revision is not displayed due to an issue with git; I received this error while executing ./configure in the kernel directory:
> fatal: Not a git repository (or any of the parent directories): .git

Ok got it

> Have you used any pfring examples to verify these changes?

Yes, I ran 2 instances of pfcount using this command line:

./pfcount -i eth2 -c 99 -H 2 -v 1 -m

Alfredo

Re: cluster_2_tuple not working as expected
I tried the above and got the same result: one instance of pfcount
receives 2 packets and the other receives 6, for the shared file
multiple_fragments_id35515_wo_vlan.pcap.

Are you receiving all 6 packets in one pfcount instance?

Regards,
Chandrika

Re: cluster_2_tuple not working as expected
This is what I am receiving; it looks correct, as the packets are distributed by 2-tuple:

# ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
Using PF_RING v.6.5.0
Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
# Device RX channels: 1
# Polling threads: 1
pfring_set_cluster returned 0
Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
^CLeaving...

# ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
Using PF_RING v.6.5.0
Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
# Device RX channels: 1
# Polling threads: 1
pfring_set_cluster returned 0
Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152] [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80 -> 10.160.153.151:60856] [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 203.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0] [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
^CLeaving...

Alfredo

Re: cluster_2_tuple not working as expected [ In reply to ]
If you check, the outer src and dst IP addresses of all these 6 packets are
the same, so shouldn't all 6 packets go to one pfcount instance if we have
chosen cluster_per_flow_2_tuple as the cluster type?

Regards,
Gautam

On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <cardigliano@ntop.org>
wrote:

> This is what I am receiving, it looks correct as they are distributed by
> 2-tuple:
>
> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
> 10000Mb/s]
> # Device RX channels: 1
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][
> 216.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0]
> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_
> offset=34][payload_offset=42]
> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_
> offset=34][payload_offset=0]
> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][
> 216.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0]
> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_
> offset=34][payload_offset=42]
> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_
> offset=34][payload_offset=0]
> ^CLeaving...
>
> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
> 10000Mb/s]
> # Device RX channels: 1
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 ->
> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152]
> [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80
> -> 10.160.153.151:60856] [hash=2820071437][tos=104][tcp_seq_num=0]
> [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_
> offset=34][payload_offset=42]
> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 ->
> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 ->
> 203.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0]
> [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_
> offset=38][payload_offset=0]
> ^CLeaving...
>
> Alfredo
>
> On 11 Nov 2016, at 10:41, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
> wrote:
>
> I tried with above. I found the same result one instance of pfcount
> receiving 2 packets and 6 in other instance for the file shared
> multiple_fragments_id35515_wo_vlan.pcap.
>
> Are you receiving all 6 packets in one pfcount instance ?
>
> Regards,
> Chandrika
>
> On Fri, Nov 11, 2016 at 3:02 PM, Alfredo Cardigliano <cardigliano@ntop.org
> > wrote:
>
>>
>> On 11 Nov 2016, at 10:31, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
>> wrote:
>>
>> Hi Alfredo,
>>
>> I have not used any packages.
>> I downloaded the latest PFRING from https://github.com/ntop/PF_RING and
>> selected Branch as dev and Saved the Zip file using CloneorDownload
>> option.
>> I compiled the PFRING source code and used all the necessary files. I can
>> see the changes you have done in pf_ring.c also.
>>
>> I think version is not displayed due to some issues with git. Received
>> this error while executing ./configure in kernel directory.
>> fatal: Not a git repository (or any of the parent directories): .git
>>
>>
>> Ok got it
>>
>> Have you used any pfring examples to verify these changes?
>>
>>
>> Yes, I ran 2 instances of pfcount using this command line:
>>
>> ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>
>> Alfredo
>>
>>
>> Regards,
>> Chandrika
>>
>> On Fri, Nov 11, 2016 at 2:32 PM, Alfredo Cardigliano <
>> cardigliano@ntop.org> wrote:
>>
>>> Hi Gautam
>>> for some reason I do not see the pf_ring revision here, please make sure
>>> the pf_ring.ko module you are using is from latest code,
>>> if you are using packages, please remove the pfring package, manually
>>> delete all pf_ring.ko in your system, and reinstall it to make
>>> sure DKMS installs the new module.
>>>
>>> Alfredo
>>>
>>> On 11 Nov 2016, at 09:53, Chandrika Gautam <
>>> chandrika.iitd.rock@gmail.com> wrote:
>>>
>>> # cat /proc/net/pf_ring/info
>>> PF_RING Version : 6.5.0 (unknown)
>>> Total rings : 0
>>>
>>> Standard (non ZC) Options
>>> Ring slots : 409600
>>> Slot version : 16
>>> Capture TX : No [RX only]
>>> IP Defragment : No
>>> Socket Mode : Standard
>>> Cluster Fragment Queue : 0
>>> Cluster Fragment Discard : 0
>>>
>>> Regards,
>>> Gautam
>>>
>>> On Fri, Nov 11, 2016 at 2:18 PM, Alfredo Cardigliano <
>>> cardigliano@ntop.org> wrote:
>>>
>>>>
>>>> On 11 Nov 2016, at 07:29, Chandrika Gautam <
>>>> chandrika.iitd.rock@gmail.com> wrote:
>>>>
>>>> Hi Alfredo,
>>>>
>>>> I tested with latest pfring from github but still packets are
>>>> segregated to different applications.
>>>>
>>>>
>>>> Please provide me the output of "cat /proc/net/pf_ring/info"
>>>>
>>>> After your latest change, We need to use cluster_per_flow_2_tuple only
>>>> right to segregate traffic on outer ip addresses ?
>>>>
>>>>
>>>> Correct
>>>>
>>>> Should we load pfring module with enable_frag_coherence=1? I have
>>>> tested with using this or without this with the latest package from github.
>>>>
>>>>
>>>> enable_frag_coherence is set to 1 by default
>>>>
>>>> Alfredo
>>>>
>>>>
>>>>
>>>> Regrads,
>>>> Gautam
>>>>
>>>> On Fri, Nov 11, 2016 at 9:12 AM, Chandrika Gautam <
>>>> chandrika.iitd.rock@gmail.com> wrote:
>>>>
>>>>> Thanks Alfredo for an update.
>>>>> I will update you once merge with latest
>>>>> PFRing.
>>>>> Regards,
>>>>> Gautam
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Nov 10, 2016, at 10:38 PM, Alfredo Cardigliano <
>>>>> cardigliano@ntop.org> wrote:
>>>>>
>>>>> Hi Gautam
>>>>> your traffic is GTP traffic and the hash was computed on the inner
>>>>> headers when present,
>>>>> I did change the behaviour computing the hash on the outer header when
>>>>> using cluster_per_flow_2_tuple, and introduced
>>>>> new hash types cluster_per_inner_* for computing hash on inner header,
>>>>> when present.
>>>>> Please update from github or wait for new packages.
>>>>>
>>>>> Regards
>>>>> Alfredo
>>>>>
>>>>> On 10 Nov 2016, at 11:41, Chandrika Gautam <
>>>>> chandrika.iitd.rock@gmail.com> wrote:
>>>>>
>>>>> Hi Alfredo
>>>>>
>>>>> PFA the traces having vlan and not vlan.
>>>>>
>>>>> To add more details to this, there are 2 observations -
>>>>> 1. We ran a bigger file of 1 lakh packets, out of which fragments of
>>>>> same packet got distributed across application
>>>>>
>>>>> 2. We ran with the attached file and observed that the 2 packets were
>>>>> going to one application and rest of the packets were to other one.
>>>>>
>>>>> Thanks & Regards
>>>>>
>>>>> On Thu, Nov 10, 2016 at 4:04 PM, Alfredo Cardigliano <
>>>>> cardigliano@ntop.org> wrote:
>>>>>
>>>>>> Hi Gautam
>>>>>> could you provide a pcap we can use to reproduce this?
>>>>>>
>>>>>> Alfredo
>>>>>>
>>>>>> > On 10 Nov 2016, at 11:22, Chandrika Gautam <
>>>>>> chandrika.iitd.rock@gmail.com> wrote:
>>>>>> >
>>>>>> > Hi,
>>>>>> >
>>>>>> > We are using PFRING cluster feature and using cluster_2_tuple and 2
>>>>>> applications
>>>>>> > are reading from same cluster id.
>>>>>> >
>>>>>> > We have observed that the packets having same source and
>>>>>> destination ip addresses are getting distributed across 2 applications
>>>>>> which has completely tossed our logic as we are trying to assemble the
>>>>>> fragments in our applications.
>>>>>> >
>>>>>> > Is there any bug in PFRING clustering mechanism which is causing
>>>>>> this.
>>>>>> >
>>>>>> > Using PFRING 6.2.0 and pfring is loaded with below command -
>>>>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0
>>>>>> >
>>>>>> > I tried with this also.
>>>>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0
>>>>>> enable_frag_coherence=1
>>>>>> >
>>>>>> >
>>>>>> > Regards,
>>>>>> > Gautam
>>>>>> >
>>>>>> > _______________________________________________
>>>>>> > Ntop-misc mailing list
>>>>>> > Ntop-misc@listgateway.unipi.it
>>>>>> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>>
>>>>>> _______________________________________________
>>>>>> Ntop-misc mailing list
>>>>>> Ntop-misc@listgateway.unipi.it
>>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>>
>>>>>
>>>>> <multiple_fragments_id35515.pcap><multiple_fragments_id35515
>>>>> _wo_vlan.pcap>_______________________________________________
>>>>> Ntop-misc mailing list
>>>>> Ntop-misc@listgateway.unipi.it
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Ntop-misc mailing list
>>>>> Ntop-misc@listgateway.unipi.it
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>
>>>>>
>>>> _______________________________________________
>>>> Ntop-misc mailing list
>>>> Ntop-misc@listgateway.unipi.it
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Ntop-misc mailing list
>>>> Ntop-misc@listgateway.unipi.it
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>
>>>
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>
>>
>> _______________________________________________
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>
>>
>>
>> _______________________________________________
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
Re: cluster_2_tuple not working as expected [ In reply to ]
Gautam
they are not all the same: you have 4 packets of the flow 199.223.102.6 -> 49.103.1.132 and 2 packets of the flow 220.159.237.103 -> 203.118.242.166
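If useful, the outer 2-tuples in the capture can be double-checked with
something like the following (assuming tshark is available), e.g. on the
multiple_fragments_id35515_wo_vlan.pcap trace:

  # count packets per outer src/dst IP pair
  tshark -r multiple_fragments_id35515_wo_vlan.pcap -T fields \
    -e ip.src -e ip.dst | sort | uniq -c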

Alfredo

> On 11 Nov 2016, at 10:51, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>
>
> If you check the outer src and dst IP addresses of all these 6 packets are same, then shouldn't all these 6 packets go to 1 pfcount instance if we have chosen cluster_type as cluster_per_2_flow?
>
> Regards,
> Gautam
>
> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <cardigliano@ntop.org <mailto:cardigliano@ntop.org>> wrote:
> This is what I am receiving, it looks correct as they are distributed by 2-tuple:
>
> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
> # Device RX channels: 1
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 <http://199.223.102.6:2152/> -> 49.103.1.132:2152 <http://49.103.1.132:2152/>] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.110:443 <http://216.58.194.110:443/> -> 100.83.201.244:43485 <http://100.83.201.244:43485/>] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 <http://199.223.102.6:0/> -> 49.103.1.132:0 <http://49.103.1.132:0/>] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 <http://199.223.102.6:2152/> -> 49.103.1.132:2152 <http://49.103.1.132:2152/>] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.97:443 <http://216.58.194.97:443/> -> 100.83.201.244:55379 <http://100.83.201.244:55379/>] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 <http://199.223.102.6:0/> -> 49.103.1.132:0 <http://49.103.1.132:0/>] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
> ^CLeaving...
>
> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
> # Device RX channels: 1
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 <http://220.159.237.103:2152/> -> 203.118.242.166:2152 <http://203.118.242.166:2152/>] [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80 <http://49.96.0.26/> -> 10.160.153.151:60856 <http://10.160.153.151:60856/>] [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 <http://220.159.237.103:0/> -> 203.118.242.166:0 <http://203.118.242.166:0/>] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0] [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
> ^CLeaving...
>
> Alfredo
>
>> On 11 Nov 2016, at 10:41, Chandrika Gautam <chandrika.iitd.rock@gmail.com <mailto:chandrika.iitd.rock@gmail.com>> wrote:
>>
>> I tried with above. I found the same result one instance of pfcount receiving 2 packets and 6 in other instance for the file shared multiple_fragments_id35515_wo_vlan.pcap.
>>
>> Are you receiving all 6 packets in one pfcount instance ?
>>
>> Regards,
>> Chandrika
Re: cluster_2_tuple not working as expected [ In reply to ]
My bad!

I am checking this over a longer run and will update.

Thanks & Regards,
Gautam

On Fri, Nov 11, 2016 at 3:33 PM, Alfredo Cardigliano <cardigliano@ntop.org>
wrote:

> Gautam
> they are not all the same, you have 4 flows 199.223.102.6 -> 49.103.1.132
> and 2 flows 220.159.237.103 -> 203.118.242.166
>
> Alfredo
>
> On 11 Nov 2016, at 10:51, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
> wrote:
>
>
> If you check the outer src and dst IP addresses of all these 6 packets are
> same, then shouldn't all these 6 packets go to 1 pfcount instance if we
> have chosen cluster_type as cluster_per_2_flow?
>
> Regards,
> Gautam
>
> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <cardigliano@ntop.org
> > wrote:
>
>> This is what I am receiving, it looks correct as they are distributed by
>> 2-tuple:
>>
>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>> Using PF_RING v.6.5.0
>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>> 10000Mb/s]
>> # Device RX channels: 1
>> # Polling threads: 1
>> pfring_set_cluster returned 0
>> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
>> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>> 6.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0]
>> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=
>> 34][payload_offset=42]
>> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>> =34][payload_offset=0]
>> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>> 6.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0]
>> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=
>> 34][payload_offset=42]
>> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>> =34][payload_offset=0]
>> ^CLeaving...
>>
>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>> Using PF_RING v.6.5.0
>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>> 10000Mb/s]
>> # Device RX channels: 1
>> # Polling threads: 1
>> pfring_set_cluster returned 0
>> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
>> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 ->
>> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152]
>> [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80
>> <http://49.96.0.26/> -> 10.160.153.151:60856]
>> [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_off
>> set=0][l3_offset=14][l4_offset=34][payload_offset=42]
>> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 ->
>> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 20
>> 3.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0]
>> [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=
>> 38][payload_offset=0]
>> ^CLeaving...
>>
>> Alfredo
>>
>> On 11 Nov 2016, at 10:41, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
>> wrote:
>>
>> I tried with above. I found the same result one instance of pfcount
>> receiving 2 packets and 6 in other instance for the file shared
>> multiple_fragments_id35515_wo_vlan.pcap.
>>
>> Are you receiving all 6 packets in one pfcount instance ?
>>
>> Regards,
>> Chandrika
>>
>>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
Re: cluster_2_tuple not working as expected [ In reply to ]
Hi Alfredo,

There is a further observation on this.

PFA the new traces, which have 8 packets from the same source and destination.
On the first run they get segregated across different pfcount instances; when
I send the same file again, they all go to one instance of pfcount.
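For reference, the replay step on our side looks roughly like the following
(a sketch only; tcpreplay is assumed to be available and the file name is
just a placeholder for the attached trace):

  # replay the attached 8-packet trace onto the capture interface
  tcpreplay -i ens2f0 attached_8_packet_trace.pcap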




*Output of first run ----------------------------------------*

userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
Using PF_RING v.6.5.0
Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
10000Mb/s]
# Device RX channels: 16
# Polling threads: 1
pfring_set_cluster returned 0
Dumping statistics on /proc/net/pf_ring/stats/6442-ens2f0.2
15:25:05.593222239 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][*116.79.243.70*:0 ->* 49.103.84.212*:0] [l3_proto=UDP][
*hash=2780252203*
][tos=0][tcp_seq_num=0][caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:25:05.593439521 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0] [l3_proto=UDP][
*hash=2780252203*][tos=0][tcp_seq_num=0]
[caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:25:05.593618032 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0] [l3_proto=UDP][
*hash=2780252203*][tos=0][tcp_seq_num=0]
[caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]

userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
Using PF_RING v.6.5.0
Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
10000Mb/s]
# Device RX channels: 16
# Polling threads: 1
pfring_set_cluster returned 0
Dumping statistics on /proc/net/pf_ring/stats/6441-ens2f0.1
15:25:05.593070816 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0] [l3_proto=UDP][
*hash=2780252203*][tos=0][tcp_seq_num=0]
[caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:25:05.593123086 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4]*[116.79.243.70*:2152 -> *49.103.84.212*:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366
-> 172.217.25.241:443] [*hash=2780252186*][tos=0][tcp_seq_num=0]
[caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:25:05.593326381 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][*116.79.243.70*:2152 -> *49.103.84.212*:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366
-> 172.217.25.241:443] [*hash=2780252186*
][tos=0][tcp_seq_num=0][caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:25:05.593529674 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][*116.79.243.70*:2152 -> *49.103.84.212*:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367
-> 172.217.25.241:443] [*hash=2780252186*][tos=0][tcp_seq_num=0]
[caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:25:05.593776442 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][*116.79.243.70*:2152 -> *49.103.84.212*:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367
-> 172.217.25.241:443] [*hash=2780252186*
][tos=0][tcp_seq_num=0][caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]


*Output of second run ----------------------------------------*


15:28:03.255165805 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
[l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
[caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255217727 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366
-> 172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0]
[caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:28:03.255367715 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
[l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0][caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255416304 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366
-> 172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0]
[caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:28:03.255551827 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
[l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
[caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255616828 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367
-> 172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0]
[caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:28:03.255765232 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
[l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
[caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255917611 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01]
[vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152]
[l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367
-> 172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0]
[caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]




Regards,
Gautam

On Fri, Nov 11, 2016 at 3:42 PM, Chandrika Gautam <
chandrika.iitd.rock@gmail.com> wrote:

> My bad !!!
>
> I am checking this for longer run and will update.
>
> Thanks & Regards,
> Gautam
>
> On Fri, Nov 11, 2016 at 3:33 PM, Alfredo Cardigliano <cardigliano@ntop.org
> > wrote:
>
>> Gautam
>> they are not all the same, you have 4 flows 199.223.102.6 -> 49.103.1.132
>> and 2 flows 220.159.237.103 -> 203.118.242.166
>>
>> Alfredo
>>
>> On 11 Nov 2016, at 10:51, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
>> wrote:
>>
>>
>> If you check the outer src and dst IP addresses of all these 6 packets
>> are same, then shouldn't all these 6 packets go to 1 pfcount instance if we
>> have chosen cluster_type as cluster_per_2_flow?
>>
>> Regards,
>> Gautam
>>
>> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <cardigliano@ntop.
>> org> wrote:
>>
>>> This is what I am receiving, it looks correct as they are distributed by
>>> 2-tuple:
>>>
>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>> Using PF_RING v.6.5.0
>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>> 10000Mb/s]
>>> # Device RX channels: 1
>>> # Polling threads: 1
>>> pfring_set_cluster returned 0
>>> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
>>> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>>> 6.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0]
>>> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=
>>> 34][payload_offset=42]
>>> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>>> =34][payload_offset=0]
>>> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>>> 6.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0]
>>> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=
>>> 34][payload_offset=42]
>>> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>>> =34][payload_offset=0]
>>> ^CLeaving...
>>>
>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>> Using PF_RING v.6.5.0
>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>> 10000Mb/s]
>>> # Device RX channels: 1
>>> # Polling threads: 1
>>> pfring_set_cluster returned 0
>>> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
>>> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 ->
>>> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152]
>>> [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80
>>> <http://49.96.0.26/> -> 10.160.153.151:60856]
>>> [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_off
>>> set=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 ->
>>> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 20
>>> 3.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0]
>>> [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38
>>> ][payload_offset=0]
>>> ^CLeaving...
>>>
>>> Alfredo
>>>
>>> On 11 Nov 2016, at 10:41, Chandrika Gautam <
>>> chandrika.iitd.rock@gmail.com> wrote:
>>>
>>> I tried with above. I found the same result one instance of pfcount
>>> receiving 2 packets and 6 in other instance for the file shared
>>> multiple_fragments_id35515_wo_vlan.pcap.
>>>
>>> Are you receiving all 6 packets in one pfcount instance ?
>>>
>>> Regards,
>>> Chandrika
>>>
>>>
>>
>> _______________________________________________
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>
>
>
Re: cluster_2_tuple not working as expected [ In reply to ]
Hi Alfredo,

In my last email I highlighted the hash value calculated for each packet. The
hash value of all the fragments which are not the first one (Eth->IP->Data)
is *2780252203*, whereas it is *2780252186* for the first fragments
(Eth->IP->UDP->GTP->IP->TCP). Shouldn't this value be the same for fragments
with the same source and destination IP addresses if the clustering mechanism
uses the 2-tuple?


I tried changing the pfring code for hash_pkt_cluster to the one below; I
could see that all the fragments with the same source and destination IP now
generate the same hash, but packets still got segregated on the first run.

static inline u_int32_t hash_pkt_header(struct pfring_pkthdr *hdr, u_int32_t flags)
{
  if (hdr->extended_hdr.pkt_hash == 0) {
    /* hash only the outer src/dst IP addresses (2-tuple), ignoring
       vlan id, protocol and ports */
    hdr->extended_hdr.pkt_hash =
      hash_pkt(0, 0,
               hdr->extended_hdr.parsed_pkt.ip_src,
               hdr->extended_hdr.parsed_pkt.ip_dst,
               0, 0);
  }
  return hdr->extended_hdr.pkt_hash;
}

While checking the pfring code further, I came across this piece of code,
which it seems will not work correctly for out-of-order packets.
For example: suppose the first packet received is out of order, i.e. it is a
fragment but not the first one (fragment offset != 0). As per the piece of
code below, pf_ring will try to retrieve an element from the cluster hash,
but get_fragment_app_id() will return -1, so skb_hash is set to 0 and the
packet eventually goes to queue 0, whereas a correct calculation based on
ip_src, ip_dst and the IP fragment id could have landed this
fragment_but_not_first on a different queue.

  if (enable_frag_coherence && fragment_not_first) {
    if (skb_hash == -1) { /* read hash once */
      skb_hash = get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src,
                                     hdr.extended_hdr.parsed_pkt.ipv4_dst,
                                     ip_id, more_fragments);
      if (skb_hash < 0)
        skb_hash = 0;
    }

I changed this code so that skb_hash is generated from the packet headers
rather than being set to 0, but again with no success.
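For reference, a minimal sketch of the kind of change I tried (illustrative
only, not the exact diff; when no app id has been cached yet it falls back to
a hash of the outer addresses instead of forcing queue 0):

  if (enable_frag_coherence && fragment_not_first) {
    if (skb_hash == -1) { /* read hash once */
      skb_hash = get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src,
                                     hdr.extended_hdr.parsed_pkt.ipv4_dst,
                                     ip_id, more_fragments);
      if (skb_hash < 0)
        /* out-of-order fragment: nothing cached yet, so hash the outer
           2-tuple instead of queue 0; mask keeps the signed value positive */
        skb_hash = hash_pkt(0, 0,
                            hdr.extended_hdr.parsed_pkt.ip_src,
                            hdr.extended_hdr.parsed_pkt.ip_dst,
                            0, 0) & 0x7fffffff;
    }
  }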

Could you please help to check this? This has really put our project on hold,
since we rely on this clustering mechanism to scale our application.
Let me know if you need any more info.

Regards,
Gautam

On Mon, Nov 14, 2016 at 2:01 PM, Chandrika Gautam <
chandrika.iitd.rock@gmail.com> wrote:

> Hi Alfredo,
>
> There is an observation further on this.
>
> PFA for the new traces having 8 packets from same source and destination.
> On first run. They are getting segregated across pfcount different
> instances. When I send the same file again, It goes to one instance of
> pfcount.
>
>
>
>
> *Output of first run ----------------------------------------*
>
> userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
> 10000Mb/s]
> # Device RX channels: 16
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/6442-ens2f0.2
> 15:25:05.593222239 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 ->* 49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0][caplen
> =64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
> 15:25:05.593439521 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:25:05.593618032 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
>
> userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
> 10000Mb/s]
> # Device RX channels: 16
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/6441-ens2f0.1
> 15:25:05.593070816 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:25:05.593123086 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4]*[116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:25:05.593326381 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0][caplen=128][len=1518
> ][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:25:05.593529674 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:25:05.593776442 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [
> *hash=2780252186*][tos=0][tcp_seq_num=0][caplen=128][len=1518
> ][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
>
>
> *Output of second run ----------------------------------------*
>
>
> 15:28:03.255165805 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:28:03.255217727 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:28:03.255367715 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0][
> caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38]
> [payload_offset=0]
> 15:28:03.255416304 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:28:03.255551827 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:28:03.255616828 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
> 15:28:03.255765232 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=
> 38][payload_offset=0]
> 15:28:03.255917611 [RX][if_index=6][10:F3:11:B3:06:01 ->
> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443]
> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>
>
>
>
> Regards,
> Gautam
>
> On Fri, Nov 11, 2016 at 3:42 PM, Chandrika Gautam <
> chandrika.iitd.rock@gmail.com> wrote:
>
>> My bad !!!
>>
>> I am checking this for longer run and will update.
>>
>> Thanks & Regards,
>> Gautam
>>
>> On Fri, Nov 11, 2016 at 3:33 PM, Alfredo Cardigliano <
>> cardigliano@ntop.org> wrote:
>>
>>> Gautam
>>> they are not all the same, you have 4 flows 199.223.102.6 ->
>>> 49.103.1.132 and 2 flows 220.159.237.103 -> 203.118.242.166
>>>
>>> Alfredo
>>>
>>> On 11 Nov 2016, at 10:51, Chandrika Gautam <
>>> chandrika.iitd.rock@gmail.com> wrote:
>>>
>>>
>>> If you check the outer src and dst IP addresses of all these 6 packets
>>> are same, then shouldn't all these 6 packets go to 1 pfcount instance if we
>>> have chosen cluster_type as cluster_per_2_flow?
>>>
>>> Regards,
>>> Gautam
>>>
>>> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <cardigliano@ntop.
>>> org> wrote:
>>>
>>>> This is what I am receiving, it looks correct as they are distributed
>>>> by 2-tuple:
>>>>
>>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>>> Using PF_RING v.6.5.0
>>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>>> 10000Mb/s]
>>>> # Device RX channels: 1
>>>> # Polling threads: 1
>>>> pfring_set_cluster returned 0
>>>> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
>>>> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>>>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>>>> 6.58.194.110:443 -> 100.83.201.244:43485]
>>>> [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offs
>>>> et=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>>> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>>>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>>>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>>>> =34][payload_offset=0]
>>>> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152]
>>>> [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][21
>>>> 6.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0]
>>>> [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=
>>>> 34][payload_offset=42]
>>>> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 ->
>>>> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0]
>>>> [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0]
>>>> [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset
>>>> =34][payload_offset=0]
>>>> ^CLeaving...
>>>>
>>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>>> Using PF_RING v.6.5.0
>>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>>> 10000Mb/s]
>>>> # Device RX channels: 1
>>>> # Polling threads: 1
>>>> pfring_set_cluster returned 0
>>>> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
>>>> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 ->
>>>> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152]
>>>> [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80
>>>> <http://49.96.0.26/> -> 10.160.153.151:60856]
>>>> [hash=2820071437][tos=104][tcp_seq_num=0]
>>>> [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset
>>>> =34][payload_offset=42]
>>>> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 ->
>>>> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 20
>>>> 3.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0]
>>>> [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38
>>>> ][payload_offset=0]
>>>> ^CLeaving...
>>>>
>>>> Alfredo
>>>>
>>>> On 11 Nov 2016, at 10:41, Chandrika Gautam <
>>>> chandrika.iitd.rock@gmail.com> wrote:
>>>>
>>>> I tried with above. I found the same result one instance of pfcount
>>>> receiving 2 packets and 6 in other instance for the file shared
>>>> multiple_fragments_id35515_wo_vlan.pcap.
>>>>
>>>> Are you receiving all 6 packets in one pfcount instance ?
>>>>
>>>> Regards,
>>>> Chandrika
>>>>
>>>>
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>
>>
>>
>
Re: cluster_2_tuple not working as expected [ In reply to ]
Hi Guys,

Please help to resolve this.

Regards,
Gautam

On Wed, Nov 16, 2016 at 12:04 PM, Chandrika Gautam <
chandrika.iitd.rock@gmail.com> wrote:

> Hi Alfredo,
>
> In my last email, I have highlighted the hash value calculated for each
> packet. Hash value of all the fragments which are not first
> (Eth->IP->Data) is *2780252203* whereas it is *2780252186 *for first
> fragments (Eth->IP->udp->gtp->IP->tcp). Shouldn't this value be same for
> fragments with same source and destination IP addresses if clustering
> mechanism used 2 tuple?
>
>
> I tried changing the pfring code for hash_pkt_cluster to below one ; could
> see all the fragments of same source and dest ip are generating same hash
> but still packet got segregated for first run.
>
> static inline u_int32_t hash_pkt_header(struct pfring_pkthdr *hdr,
> u_int32_t flags)
> {
> if (hdr->extended_hdr.pkt_hash == 0) {
> hdr->extended_hdr.pkt_hash = hash_pkt(0,0,hdr->extended_hdr.parsed_pkt.ip_src,
> hdr->extended_hdr.parsed_pkt.ip_dst, 0,0) ; }
> return hdr->extended_hdr.pkt_hash;
> }
>
> While checking pfring code further, I came across this piece of code which
> seems will not work for out of order packets correctly.
> For ex -
> First packet (fragment but not first having fragment offset !=0) received
> is out of order, As per below piece of code, It will try to retrieve any
> element from cluster hash but get_fragment_app_id () will return -1 and pf
> ring will set skb_hash to 0 and eventually will add to the
> queue 0 whereas doing correct calculation based on ipsrc,ipdst and
> ip_fragment_id could have land this fragment_but_not_first to a different
> queue.
>
> if (enable_frag_coherence && fragment_not_first) {
> if (skb_hash == -1) { /* read hash once */
> skb_hash = get_fragment_app_id(hdr.extend
> ed_hdr.parsed_pkt.ipv4_src,hdr.extended_hdr.parsed_pkt.ipv4_dst,ip_id,
> more_fragments);
> if (skb_hash < 0)
> skb_hash = 0;
> }
>
> I changed this code so that skb_hash is generated based on the packet
> headers rather than setting it to 0 but no success again.
>
> Can you please help to check this. This has really put our project on hold
> since we have used this clustering mechanism to scale our application.
> Let me know if you need any more info.
>
> Regards,
> Gautam
>
>
> On Mon, Nov 14, 2016 at 2:01 PM, Chandrika Gautam <
> chandrika.iitd.rock@gmail.com> wrote:
>
>> Hi Alfredo,
>>
>> There is an observation further on this.
>>
>> PFA for the new traces having 8 packets from same source and destination.
>> On first run. They are getting segregated across pfcount different
>> instances. When I send the same file again, It goes to one instance of
>> pfcount.
>>
>>
>>
>>
>> *Output of first run ----------------------------------------*
>>
>> userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
>> Using PF_RING v.6.5.0
>> Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
>> 10000Mb/s]
>> # Device RX channels: 16
>> # Polling threads: 1
>> pfring_set_cluster returned 0
>> Dumping statistics on /proc/net/pf_ring/stats/6442-ens2f0.2
>> 15:25:05.593222239 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 ->* 49.103.84.212*:0]
>> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0][caplen
>> =64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
>> 15:25:05.593439521 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
>> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
>> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38
>> ][payload_offset=0]
>> 15:25:05.593618032 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
>> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
>> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38
>> ][payload_offset=0]
>>
>> userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
>> Using PF_RING v.6.5.0
>> Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed:
>> 10000Mb/s]
>> # Device RX channels: 16
>> # Polling threads: 1
>> pfring_set_cluster returned 0
>> Dumping statistics on /proc/net/pf_ring/stats/6441-ens2f0.1
>> 15:25:05.593070816 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0]
>> [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0]
>> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38
>> ][payload_offset=0]
>> 15:25:05.593123086 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4]*[116.79.243.70*:2152 ->
>> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [
>> *hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
>> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>> 15:25:05.593326381 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
>> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [
>> *hash=2780252186*][tos=0][tcp_seq_num=0][caplen=128][len=1518
>> ][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
>> 15:25:05.593529674 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
>> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [
>> *hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
>> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>> 15:25:05.593776442 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 ->
>> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [
>> *hash=2780252186*][tos=0][tcp_seq_num=0][caplen=128][len=1518
>> ][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
>>
>>
>> *Output of second run ----------------------------------------*
>>
>>
>> 15:28:03.255165805 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
>> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
>> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38
>> ][payload_offset=0]
>> 15:28:03.255217727 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
>> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443]
>> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
>> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>> 15:28:03.255367715 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
>> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0][caplen
>> =64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
>> 15:28:03.255416304 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
>> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443]
>> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
>> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>> 15:28:03.255551827 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
>> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
>> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38
>> ][payload_offset=0]
>> 15:28:03.255616828 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
>> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443]
>> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
>> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>> 15:28:03.255765232 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0]
>> [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0]
>> [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38
>> ][payload_offset=0]
>> 15:28:03.255917611 [RX][if_index=6][10:F3:11:B3:06:01 ->
>> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 ->
>> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6
>> ][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443]
>> [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_off
>> set=0][l3_offset=18][l4_offset=38][payload_offset=46]
>>
>>
>>
>>
>> Regards,
>> Gautam
>>
>> On Fri, Nov 11, 2016 at 3:42 PM, Chandrika Gautam <
>> chandrika.iitd.rock@gmail.com> wrote:
>>
>>> My bad !!!
>>>
>>> I am checking this for longer run and will update.
>>>
>>> Thanks & Regards,
>>> Gautam
>>>
>>> On Fri, Nov 11, 2016 at 3:33 PM, Alfredo Cardigliano <
>>> cardigliano@ntop.org> wrote:
>>>
>>>> Gautam
>>>> they are not all the same, you have 4 flows 199.223.102.6 ->
>>>> 49.103.1.132 and 2 flows 220.159.237.103 -> 203.118.242.166
>>>>
>>>> Alfredo
>>>>
>>>> On 11 Nov 2016, at 10:51, Chandrika Gautam <
>>>> chandrika.iitd.rock@gmail.com> wrote:
>>>>
>>>>
>>>> If you check the outer src and dst IP addresses of all these 6 packets
>>>> are same, then shouldn't all these 6 packets go to 1 pfcount instance if we
>>>> have chosen cluster_type as cluster_per_2_flow?
>>>>
>>>> Regards,
>>>> Gautam
>>>>
>>>> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <cardigliano@ntop.
>>>> org> wrote:
>>>>
>>>>> This is what I am receiving, it looks correct as they are distributed
>>>>> by 2-tuple:
>>>>>
>>>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>>>> Using PF_RING v.6.5.0
>>>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>>>> 10000Mb/s]
>>>>> # Device RX channels: 1
>>>>> # Polling threads: 1
>>>>> pfring_set_cluster returned 0
>>>>> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
>>>>> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>>>> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
>>>>> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>>>> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
>>>>> ^CLeaving...
>>>>>
>>>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>>>> Using PF_RING v.6.5.0
>>>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed:
>>>>> 10000Mb/s]
>>>>> # Device RX channels: 1
>>>>> # Polling threads: 1
>>>>> pfring_set_cluster returned 0
>>>>> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
>>>>> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152] [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80 -> 10.160.153.151:60856] [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>>>> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 203.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0] [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
>>>>> ^CLeaving...
>>>>>
>>>>> Alfredo
>>>>>
>>>>> On 11 Nov 2016, at 10:41, Chandrika Gautam <
>>>>> chandrika.iitd.rock@gmail.com> wrote:
>>>>>
>>>>> I tried with the above. I found the same result: one instance of pfcount
>>>>> receiving 2 packets and 6 in the other instance for the shared file
>>>>> multiple_fragments_id35515_wo_vlan.pcap.
>>>>>
>>>>> Are you receiving all 6 packets in one pfcount instance ?
>>>>>
>>>>> Regards,
>>>>> Chandrika
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> Ntop-misc mailing list
>>>> Ntop-misc@listgateway.unipi.it
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>
>>>
>>>
>>
>
Re: cluster_2_tuple not working as expected [ In reply to ]
Hi Gautam
I will come back on this asap.

Alfredo

> On 17 Nov 2016, at 07:47, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>
> Hi Guys,
>
> Please help to resolve this.
>
> Regards,
> Gautam
Re: cluster_2_tuple not working as expected [ In reply to ]
Hi Alfredo,

While debugging this issue, I have found two issues -

1. The API below uses s6_addr32, which is an array of four uint32_t values, and
hence the calculated hash looks like a garbage value.
    As I mentioned in my previous email, the hash values being generated are
different and are also exceeding the signed integer limit.
    Is there any specific reason to use s6_addr32 rather than s6_addr?

struct in6_addr {
        union {
                uint8_t   __u6_addr8[16];
                uint16_t  __u6_addr16[8];
                uint32_t  __u6_addr32[4];
        } __u6_addr;            /* 128-bit IP6 address */
};
#define s6_addr         __u6_addr.__u6_addr8
#ifdef _KERNEL  /* XXX nonstandard */
#define s6_addr8        __u6_addr.__u6_addr8
#define s6_addr16       __u6_addr.__u6_addr16
#define s6_addr32       __u6_addr.__u6_addr32
#endif

static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
                                 ip_addr host_peer_a, ip_addr host_peer_b,
                                 u_int16_t port_peer_a, u_int16_t port_peer_b)
{
  return(vlan_id+proto+
         host_peer_a.v6.s6_addr32[0]+host_peer_a.v6.s6_addr32[1]+
         host_peer_a.v6.s6_addr32[2]+host_peer_a.v6.s6_addr32[3]+
         host_peer_b.v6.s6_addr32[0]+host_peer_b.v6.s6_addr32[1]+
         host_peer_b.v6.s6_addr32[2]+host_peer_b.v6.s6_addr32[3]+
         port_peer_a+port_peer_b);
}

So I changed the above code to the version below and it started giving values
that make sense. I verified the hash values generated for a few packets
manually and they match.

static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
                                 ip_addr host_peer_a, ip_addr host_peer_b,
                                 u_int16_t port_peer_a, u_int16_t port_peer_b)
{
  return(vlan_id+proto+
         host_peer_a.v6.s6_addr[0]+host_peer_a.v6.s6_addr[1]+
         host_peer_a.v6.s6_addr[2]+host_peer_a.v6.s6_addr[3]+
         host_peer_b.v6.s6_addr[0]+host_peer_b.v6.s6_addr[1]+
         host_peer_b.v6.s6_addr[2]+host_peer_b.v6.s6_addr[3]+
         port_peer_a+port_peer_b);
}
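
A quick stand-alone way to see the difference in magnitude between the two
variants (this is not pf_ring code; the two sample addresses are taken from the
trace above and are packed big-endian into the first 32-bit word purely for
illustration):

#include <stdio.h>
#include <stdint.h>

int main(void) {
  /* 116.79.243.70 and 49.103.84.212, packed big-endian into the first word
     of an in6_addr-like layout; the remaining words are zero, as they would
     be for an IPv4 address. Byte order is an assumption for illustration. */
  uint32_t a32[4] = { 0x744FF346, 0, 0, 0 };
  uint32_t b32[4] = { 0x316754D4, 0, 0, 0 };
  const uint8_t *a8 = (const uint8_t *)a32, *b8 = (const uint8_t *)b32;

  uint32_t word_sum = 0, byte_sum = 0;
  int i;

  for (i = 0; i < 4; i++)  word_sum += a32[i] + b32[i];
  for (i = 0; i < 16; i++) byte_sum += a8[i] + b8[i];

  /* word_sum is 2780252186, i.e. above the signed 32-bit limit of 2147483647
     (and it can wrap on overflow), while byte_sum stays small (956). Both are
     deterministic per src/dst pair, they just differ in magnitude. */
  printf("word sum: %u  byte sum: %u\n", word_sum, byte_sum);
  return 0;
}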

2. The clustering logic is not working as expected for out-of-order fragments.
    If a non-first fragment is received first, the snippet of code below will
always enqueue that packet to index 0.
    The existing code creates an entry in the fragment hash only when the first
fragment is received. I tried to modify this code to first search for the
fragment in the hash and, if not found, insert it into the hash irrespective of
the order of the fragments, but it did not work for some reason.

Before even working on that piece of code: for our requirement of using
cluster_2_tuple, I feel that we don't even need the cluster fragment hash at
all, since we only need the source and destination IP addresses, which are
present in each and every packet, fragments included.

So I went ahead and commented out the whole piece of code below, using only
"skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);" to calculate the hash, and it
seems to work perfectly. This also removes the overhead of the fragment hash
lookup.

    if (enable_frag_coherence && fragment_not_first) {
      if (skb_hash == -1) { /* read hash once */
        skb_hash = get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src,
                                       hdr.extended_hdr.parsed_pkt.ipv4_dst,
                                       ip_id, more_fragments);
        if (skb_hash < 0)
          skb_hash = 0;
      }
    } else if (!(enable_frag_coherence && first_fragment) || skb_hash == -1) {
      /* compute hash once for all clusters in case of first fragment */
      skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);

      if (skb_hash < 0)
        skb_hash = -skb_hash;

      if (enable_frag_coherence && first_fragment) {
        add_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src,
                            hdr.extended_hdr.parsed_pkt.ipv4_dst,
                            ip_id, skb_hash % num_cluster_elements);
      }
    }
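
To make the out-of-order case concrete, here is a small stand-alone simulation
of the behaviour described in point 2 (this is not the kernel code; the table,
the sample addresses and the stand-in hash are simplified placeholders): the app
id is only recorded when the first fragment is seen, so a non-first fragment
that arrives earlier misses the lookup and falls back to index 0.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 64
#define NUM_APPS   2

/* Simplified stand-in for the kernel's fragment -> app-id table. */
struct frag_entry { uint32_t src, dst; uint16_t ip_id; int app_id; int used; };
static struct frag_entry table[TABLE_SIZE];

static int slot(uint32_t src, uint32_t dst, uint16_t ip_id) {
  return (src + dst + ip_id) % TABLE_SIZE;
}

static void add_app_id(uint32_t src, uint32_t dst, uint16_t ip_id, int app_id) {
  struct frag_entry *e = &table[slot(src, dst, ip_id)];
  e->src = src; e->dst = dst; e->ip_id = ip_id; e->app_id = app_id; e->used = 1;
}

static int get_app_id(uint32_t src, uint32_t dst, uint16_t ip_id) {
  struct frag_entry *e = &table[slot(src, dst, ip_id)];
  if (e->used && e->src == src && e->dst == dst && e->ip_id == ip_id)
    return e->app_id;
  return -1; /* miss */
}

/* Dispatch mimicking the quoted logic: a first fragment computes the 2-tuple
   hash and records it, a non-first fragment relies on the recorded value and
   falls back to index 0 on a miss. */
static int dispatch(uint32_t src, uint32_t dst, uint16_t ip_id, int first_fragment) {
  if (!first_fragment) {
    int app_id = get_app_id(src, dst, ip_id);
    return (app_id < 0) ? 0 : app_id;
  } else {
    int app_id = (int)((src + dst) % NUM_APPS); /* stand-in for hash_pkt_cluster() */
    add_app_id(src, dst, ip_id, app_id);
    return app_id;
  }
}

int main(void) {
  uint32_t src = 0x0A000001, dst = 0x0A000002; /* hypothetical 10.0.0.1 / 10.0.0.2 */
  uint16_t ip_id = 35515;
  int a, b;

  a = dispatch(src, dst, ip_id, 1);   /* first fragment arrives first */
  b = dispatch(src, dst, ip_id, 0);   /* second fragment follows      */
  printf("in order    : app %d, app %d\n", a, b);

  memset(table, 0, sizeof(table));

  a = dispatch(src, dst, ip_id, 0);   /* non-first fragment arrives first */
  b = dispatch(src, dst, ip_id, 1);
  printf("out of order: app %d, app %d\n", a, b);
  return 0;
}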


Do you foresee any issue if we go ahead with the above-mentioned changes?

Regards,
Gautam

On Thu, Nov 17, 2016 at 4:40 PM, Alfredo Cardigliano <cardigliano@ntop.org>
wrote:

> Hi Gautam
> I will come back on this asap.
>
> Alfredo
>
> On 17 Nov 2016, at 07:47, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
> wrote:
>
> Hi Guys,
>
> Please help to resolve this.
>
> Regards,
> Gautam
>
>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
Re: cluster_2_tuple not working as expected [ In reply to ]
Hi Chandrika
1. I reworked the hash to explicitly handle IPv4 vs IPv6 now; however, the result should be the same, as the non-IPv4 portion should be zeroed in the IPv4 case and thus does not affect the hash.
2. There is no need to comment out the code, you just need to pass enable_frag_coherence=0 to pf_ring.ko (at insmod time, or via the configuration file if you are using packages).
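
For reference, point 2 amounts to reloading the module with the flag cleared; a minimal sketch of the commands, assuming the module is loaded by hand (package users would set the same parameter in the pf_ring configuration file instead, and any other module parameters you normally pass stay the same):

rmmod pf_ring
insmod pf_ring.ko enable_frag_coherence=0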

Alfredo

> On 23 Nov 2016, at 06:37, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>
> Hi Alfredo,
>
> While debugging this issue, I have found out two issues -
>
> 1. Below API using s6_addr32 which is an array of type uint32_t of size 4 and hence calculated hash gives a garbage value.
> I mentioned in my previous email that hash value getting generated are different and hash values are exceeding the Integer limit also
> Is there any specific reason to use s6_addr32 rather than using s6_addr.
>
> struct in6_addr {
> union {
> uint8_t __u6_addr8[16];
> uint16_t __u6_addr16[8];
> uint32_t __u6_addr32[4];
> } __u6_addr; /* 128-bit IP6 address */
> };
> #define s6_addr __u6_addr.__u6_addr8
> #ifdef _KERNEL /* XXX nonstandard */
> #define s6_addr8 __u6_addr.__u6_addr8
> #define s6_addr16 __u6_addr.__u6_addr16
> #define s6_addr32 __u6_addr.__u6_addr32
> #endif
>
> static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
> ip_addr host_peer_a, ip_addr host_peer_b,
> u_int16_t port_peer_a, u_int16_t port_peer_b)
> {
> return(vlan_id+proto+
> host_peer_a.v6.s6_addr32[0]+host_peer_a.v6.s6_addr32[1]+
> host_peer_a.v6.s6_addr32[2]+host_peer_a.v6.s6_addr32[3]+
> host_peer_b.v6.s6_addr32[0]+host_peer_b.v6.s6_addr32[1]+
> host_peer_b.v6.s6_addr32[2]+host_peer_b.v6.s6_addr32[3]+
> port_peer_a+port_peer_b);
> }
>
> So I changed the above code to below and it started giving a value which make sense. I matched the hash value generated for few packets manually and it is matching.
>
> static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
> ip_addr host_peer_a, ip_addr host_peer_b,
> u_int16_t port_peer_a, u_int16_t port_peer_b)
> {
> return(vlan_id+proto+
> host_peer_a.v6.s6_addr[0]+host_peer_a.v6.s6_addr[1]+
> host_peer_a.v6.s6_addr[2]+host_peer_a.v6.s6_addr[3]+
> host_peer_b.v6.s6_addr[0]+host_peer_b.v6.s6_addr[1]+
> host_peer_b.v6.s6_addr[2]+host_peer_b.v6.s6_addr[3]+
> port_peer_a+port_peer_b);
> }
>
> 2. Clustering logic is not working as expected for out of order fragments.
> If non first fragment received first, then below snippet of code will enqueue this packet to an index 0 always.
> Existing code creates an entry in hash only when first fragment is received. I tried to modified this code to first search a fragment in hash and if not found, then insert in into the hash irrespective of the order of the fragment. But It did not work for some reason.
>
> Before even working on that piece of code, for our requirement of using cluster_2_tuple, I feel that we don't even require to use the cluster fragment hash at all since we need only source and destination IP address which will be present in each and every packet
> including fragments also.
>
> So I went ahead commenting whole piece of below code and just used "skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);"
> to calculate the hash and it seems to work perfect. Even this will remove the overhead of using hashing.
>
> if (enable_frag_coherence && fragment_not_first) {
> if (skb_hash == -1) { /* read hash once */
> skb_hash = get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src, hdr.extended_hdr.parsed_pkt.ipv4_dst, ip_id, more_fragments);
> if (skb_hash < 0)
> skb_hash = 0;
> }
> }
> else if (!(enable_frag_coherence && first_fragment) || skb_hash == -1) {
> /* compute hash once for all clusters in case of first fragment */
> skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);
>
> if (skb_hash < 0)
> skb_hash = -skb_hash;
>
> if (enable_frag_coherence && first_fragment) {
> add_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src, hdr.extended_hdr.parsed_pkt.ipv4_dst,
> ip_id, skb_hash % num_cluster_elements);
> }
> }
>
>
> Do you foresee any issue if we go ahead with the mentioned above changes?
>
> Regards,
> Gautam
>
> On Thu, Nov 17, 2016 at 4:40 PM, Alfredo Cardigliano <cardigliano@ntop.org <mailto:cardigliano@ntop.org>> wrote:
> Hi Gautam
> I will come back on this asap.
>
> Alfredo
>
>> On 17 Nov 2016, at 07:47, Chandrika Gautam <chandrika.iitd.rock@gmail.com <mailto:chandrika.iitd.rock@gmail.com>> wrote:
>>
>> Hi Guys,
>>
>> Please help to resolve this.
>>
>> Regards,
>> Gautam
>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
Re: cluster_2_tuple not working as expected [ In reply to ]
Hi Alfredo,

Shall I take the code from GitHub?
Have you also checked point #2 mentioned in my last email?

Regards,
Chandrika

Sent from my iPhone

On Nov 28, 2016, at 5:28 PM, Alfredo Cardigliano <cardigliano@ntop.org>
wrote:

Hi Chandrika
1. I reworked the hash to explicitly handle ip v4 vs v6 now, however the
result should be the same as the non v4 portion in case of v4 should be
0’ed, thus not affecting the hash.
2. there is no need to comment the code, you just need to pass
enable_frag_coherence=0 to pf_ring.ko (at insmod time, or using the
configuration file if you are using packages)

Alfredo

On 23 Nov 2016, at 06:37, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
wrote:

Hi Alfredo,

While debugging this issue, I have found out two issues -

1. Below API using s6_addr32 which is an array of type uint32_t of size 4
and hence calculated hash gives a garbage value.
I mentioned in my previous email that hash value getting generated are
different and hash values are exceeding the Integer limit also
Is there any specific reason to use s6_addr32 rather than using
s6_addr.

struct in6_addr {
union {
uint8_t __u6_addr8[16];
uint16_t __u6_addr16[8];
uint32_t __u6_addr32[4];
} __u6_addr; /* 128-bit IP6 address */
};
#define s6_addr __u6_addr.__u6_addr8
#ifdef _KERNEL /* XXX nonstandard */
#define s6_addr8 __u6_addr.__u6_addr8
#define s6_addr16 __u6_addr.__u6_addr16
#define s6_addr32 __u6_addr.__u6_addr32
#endif

static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
ip_addr host_peer_a, ip_addr host_peer_b,
u_int16_t port_peer_a, u_int16_t
port_peer_b)
{
return(vlan_id+proto+
host_peer_a.v6.s6_addr32[0]+host_peer_a.v6.s6_addr32[1]+
host_peer_a.v6.s6_addr32[2]+host_peer_a.v6.s6_addr32[3]+
host_peer_b.v6.s6_addr32[0]+host_peer_b.v6.s6_addr32[1]+
host_peer_b.v6.s6_addr32[2]+host_peer_b.v6.s6_addr32[3]+
port_peer_a+port_peer_b);
}

So I changed the above code to below and it started giving a value which
make sense. I matched the hash value generated for few packets manually and
it is matching.

static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
ip_addr host_peer_a, ip_addr host_peer_b,
u_int16_t port_peer_a, u_int16_t
port_peer_b)
{
return(vlan_id+proto+
host_peer_a.v6.s6_addr[0]+host_peer_a.v6.s6_addr[1]+
host_peer_a.v6.s6_addr[2]+host_peer_a.v6.s6_addr[3]+
host_peer_b.v6.s6_addr[0]+host_peer_b.v6.s6_addr[1]+
host_peer_b.v6.s6_addr[2]+host_peer_b.v6.s6_addr[3]+
port_peer_a+port_peer_b);
}

2. Clustering logic is not working as expected for out of order fragments.
If non first fragment received first, then below snippet of code will
enqueue this packet to an index 0 always.
Existing code creates an entry in hash only when first fragment is
received. I tried to modified this code to first search a fragment in hash
and if not found, then insert in into the hash irrespective of the order of
the fragment. But It did not work for some reason.

Before even working on that piece of code, for our requirement of using
cluster_2_tuple, I feel that we don't even require to use the cluster
fragment hash at all since we need only source and destination IP address
which will be present in each and every packet
including fragments also.

So I went ahead commenting whole piece of below code and just used "skb_hash
= hash_pkt_cluster(cluster_ptr, &hdr);"
to calculate the hash and it seems to work perfect. Even this will remove
the overhead of using hashing.

if (enable_frag_coherence && fragment_not_first) {
if (skb_hash == -1) { /* read hash once */
skb_hash =
get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src,
hdr.extended_hdr.parsed_pkt.ipv4_dst, ip_id, more_fragments);
if (skb_hash < 0)
skb_hash = 0;
}
}
else if (!(enable_frag_coherence && first_fragment) || skb_hash ==
-1) {
/* compute hash once for all clusters in case of first fragment
*/
skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);

if (skb_hash < 0)
skb_hash = -skb_hash;

if (enable_frag_coherence && first_fragment) {
add_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src,
hdr.extended_hdr.parsed_pkt.ipv4_dst,
ip_id, skb_hash % num_cluster_elements);
}
}


Do you foresee any issue if we go ahead with the mentioned above changes?

Regards,
Gautam

On Thu, Nov 17, 2016 at 4:40 PM, Alfredo Cardigliano <cardigliano@ntop.org>
wrote:

> Hi Gautam
> I will come back on this asap.
>
> Alfredo
>
> On 17 Nov 2016, at 07:47, Chandrika Gautam <chandrika.iitd.rock@gmail.com>
> wrote:
>
> Hi Guys,
>
> Please help to resolve this.
>
> Regards,
> Gautam
>
>
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>

_______________________________________________
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


_______________________________________________
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
Re: cluster_2_tuple not working as expected [ In reply to ]
> On 7 Dec 2016, at 06:59, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>
> Hi Alfredo,
>
> Shall I take the code from github ?

Github or dev packages.

> Have you checked the point #2 also mentioned in my last email ?

Out-of-order fragments are not handled atm, we will add that asap; however, you said you are using a 2-tuple hash, thus there is no need for the fragment hash, right?

Alfredo

>
> Regards,
> Chandrika
>
> Sent from my iPhone
>
> On Nov 28, 2016, at 5:28 PM, Alfredo Cardigliano <cardigliano@ntop.org <mailto:cardigliano@ntop.org>> wrote:
>
>> Hi Chandrika
>> 1. I reworked the hash to explicitly handle ip v4 vs v6 now, however the result should be the same as the non v4 portion in case of v4 should be 0’ed, thus not affecting the hash.
>> 2. there is no need to comment the code, you just need to pass enable_frag_coherence=0 to pf_ring.ko (at insmod time, or using the configuration file if you are using packages)
>>
>> Alfredo
>>
>>> On 23 Nov 2016, at 06:37, Chandrika Gautam <chandrika.iitd.rock@gmail.com <mailto:chandrika.iitd.rock@gmail.com>> wrote:
>>>
>>> Hi Alfredo,
>>>
>>> While debugging this issue, I have found out two issues -
>>>
>>> 1. Below API using s6_addr32 which is an array of type uint32_t of size 4 and hence calculated hash gives a garbage value.
>>> I mentioned in my previous email that hash value getting generated are different and hash values are exceeding the Integer limit also
>>> Is there any specific reason to use s6_addr32 rather than using s6_addr.
>>>
>>> struct in6_addr {
>>> union {
>>> uint8_t __u6_addr8[16];
>>> uint16_t __u6_addr16[8];
>>> uint32_t __u6_addr32[4];
>>> } __u6_addr; /* 128-bit IP6 address */
>>> };
>>> #define s6_addr __u6_addr.__u6_addr8
>>> #ifdef _KERNEL /* XXX nonstandard */
>>> #define s6_addr8 __u6_addr.__u6_addr8
>>> #define s6_addr16 __u6_addr.__u6_addr16
>>> #define s6_addr32 __u6_addr.__u6_addr32
>>> #endif
>>>
>>> static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
>>> ip_addr host_peer_a, ip_addr host_peer_b,
>>> u_int16_t port_peer_a, u_int16_t port_peer_b)
>>> {
>>> return(vlan_id+proto+
>>> host_peer_a.v6.s6_addr32[0]+host_peer_a.v6.s6_addr32[1]+
>>> host_peer_a.v6.s6_addr32[2]+host_peer_a.v6.s6_addr32[3]+
>>> host_peer_b.v6.s6_addr32[0]+host_peer_b.v6.s6_addr32[1]+
>>> host_peer_b.v6.s6_addr32[2]+host_peer_b.v6.s6_addr32[3]+
>>> port_peer_a+port_peer_b);
>>> }
>>>
>>> So I changed the above code to below and it started giving a value which make sense. I matched the hash value generated for few packets manually and it is matching.
>>>
>>> static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
>>> ip_addr host_peer_a, ip_addr host_peer_b,
>>> u_int16_t port_peer_a, u_int16_t port_peer_b)
>>> {
>>> return(vlan_id+proto+
>>> host_peer_a.v6.s6_addr[0]+host_peer_a.v6.s6_addr[1]+
>>> host_peer_a.v6.s6_addr[2]+host_peer_a.v6.s6_addr[3]+
>>> host_peer_b.v6.s6_addr[0]+host_peer_b.v6.s6_addr[1]+
>>> host_peer_b.v6.s6_addr[2]+host_peer_b.v6.s6_addr[3]+
>>> port_peer_a+port_peer_b);
>>> }
>>>
>>> 2. Clustering logic is not working as expected for out of order fragments.
>>> If non first fragment received first, then below snippet of code will enqueue this packet to an index 0 always.
>>> Existing code creates an entry in hash only when first fragment is received. I tried to modified this code to first search a fragment in hash and if not found, then insert in into the hash irrespective of the order of the fragment. But It did not work for some reason.
>>>
>>> Before even working on that piece of code, for our requirement of using cluster_2_tuple, I feel that we don't even require to use the cluster fragment hash at all since we need only source and destination IP address which will be present in each and every packet
>>> including fragments also.
>>>
>>> So I went ahead commenting whole piece of below code and just used "skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);"
>>> to calculate the hash and it seems to work perfect. Even this will remove the overhead of using hashing.
>>>
>>> if (enable_frag_coherence && fragment_not_first) {
>>> if (skb_hash == -1) { /* read hash once */
>>> skb_hash = get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src, hdr.extended_hdr.parsed_pkt.ipv4_dst, ip_id, more_fragments);
>>> if (skb_hash < 0)
>>> skb_hash = 0;
>>> }
>>> }
>>> else if (!(enable_frag_coherence && first_fragment) || skb_hash == -1) {
>>> /* compute hash once for all clusters in case of first fragment */
>>> skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);
>>>
>>> if (skb_hash < 0)
>>> skb_hash = -skb_hash;
>>>
>>> if (enable_frag_coherence && first_fragment) {
>>> add_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src, hdr.extended_hdr.parsed_pkt.ipv4_dst,
>>> ip_id, skb_hash % num_cluster_elements);
>>> }
>>> }
>>>
>>>
>>> Do you foresee any issue if we go ahead with the mentioned above changes?
>>>
>>> Regards,
>>> Gautam
>>>
>>> On Thu, Nov 17, 2016 at 4:40 PM, Alfredo Cardigliano <cardigliano@ntop.org <mailto:cardigliano@ntop.org>> wrote:
>>> Hi Gautam
>>> I will come back on this asap.
>>>
>>> Alfredo
>>>
>>>> On 17 Nov 2016, at 07:47, Chandrika Gautam <chandrika.iitd.rock@gmail.com <mailto:chandrika.iitd.rock@gmail.com>> wrote:
>>>>
>>>> Hi Guys,
>>>>
>>>> Please help to resolve this.
>>>>
>>>> Regards,
>>>> Gautam
>>>
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> _______________________________________________
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>_______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
Re: cluster_2_tuple not working as expected [ In reply to ]
Yes, I only need the 2-tuple cluster hashing mechanism. I checked the PF_RING code and it made sense to disable the coherence flag. So I tested with the latest package taken from GitHub and enable_frag_coherence set to 0, and it seems to be working fine. I will be doing some more testing to verify this further.
Thanks for your support !

Regards
Chandrika

Sent from my iPhone
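
As a closing reference, the application-side call that selects this policy is pfring_set_cluster(); below is a minimal sketch assuming the standard PF_RING user-space API (the interface name, snaplen and cluster id 99 are only examples, matching the pfcount runs earlier in the thread):

#include <stdio.h>
#include <pfring.h>

int main(void) {
  /* Open the capture interface; 128 is the snaplen, as in the pfcount output. */
  pfring *ring = pfring_open("eth2", 128, PF_RING_PROMISC);
  if (ring == NULL) { perror("pfring_open"); return 1; }

  /* Ask the kernel to balance packets across the consumers of cluster 99
     using the 2-tuple (src IP, dst IP) policy. */
  if (pfring_set_cluster(ring, 99, cluster_per_flow_2_tuple) != 0) {
    fprintf(stderr, "pfring_set_cluster failed\n");
    pfring_close(ring);
    return 1;
  }

  pfring_enable_ring(ring);
  /* ... pfring_recv() loop would go here ... */
  pfring_close(ring);
  return 0;
}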

> On Dec 7, 2016, at 7:53 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>
>
>> On 7 Dec 2016, at 06:59, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>>
>> Hi Alfredo,
>>
>> Shall I take the code from github ?
>
> Github or dev packages.
>
>> Have you checked the point #2 also mentioned in my last email ?
>
> Out of order fragments are not handled atm, we will add it asap, however you said you are using a 2-tuple hash thus no need for the hash right?
>
> Alfredo
>
>>
>> Regards,
>> Chandrika
>>
>> Sent from my iPhone
>>
>>> On Nov 28, 2016, at 5:28 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>
>>> Hi Chandrika
>>> 1. I reworked the hash to explicitly handle ip v4 vs v6 now, however the result should be the same as the non v4 portion in case of v4 should be 0’ed, thus not affecting the hash.
>>> 2. there is no need to comment the code, you just need to pass enable_frag_coherence=0 to pf_ring.ko (at insmod time, or using the configuration file if you are using packages)
>>>
>>> Alfredo
>>>
>>>> On 23 Nov 2016, at 06:37, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>>>>
>>>> Hi Alfredo,
>>>>
>>>> While debugging this issue, I have found out two issues -
>>>>
>>>> 1. Below API using s6_addr32 which is an array of type uint32_t of size 4 and hence calculated hash gives a garbage value.
>>>> I mentioned in my previous email that hash value getting generated are different and hash values are exceeding the Integer limit also
>>>> Is there any specific reason to use s6_addr32 rather than using s6_addr.
>>>>
>>>> struct in6_addr {
>>>> union {
>>>> uint8_t __u6_addr8[16];
>>>> uint16_t __u6_addr16[8];
>>>> uint32_t __u6_addr32[4];
>>>> } __u6_addr; /* 128-bit IP6 address */
>>>> };
>>>> #define s6_addr __u6_addr.__u6_addr8
>>>> #ifdef _KERNEL /* XXX nonstandard */
>>>> #define s6_addr8 __u6_addr.__u6_addr8
>>>> #define s6_addr16 __u6_addr.__u6_addr16
>>>> #define s6_addr32 __u6_addr.__u6_addr32
>>>> #endif
>>>>
>>>> static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
>>>> ip_addr host_peer_a, ip_addr host_peer_b,
>>>> u_int16_t port_peer_a, u_int16_t port_peer_b)
>>>> {
>>>> return(vlan_id+proto+
>>>> host_peer_a.v6.s6_addr32[0]+host_peer_a.v6.s6_addr32[1]+
>>>> host_peer_a.v6.s6_addr32[2]+host_peer_a.v6.s6_addr32[3]+
>>>> host_peer_b.v6.s6_addr32[0]+host_peer_b.v6.s6_addr32[1]+
>>>> host_peer_b.v6.s6_addr32[2]+host_peer_b.v6.s6_addr32[3]+
>>>> port_peer_a+port_peer_b);
>>>> }
>>>>
>>>> So I changed the above code to below and it started giving a value which make sense. I matched the hash value generated for few packets manually and it is matching.
>>>>
>>>> static inline u_int32_t hash_pkt(u_int16_t vlan_id, u_int8_t proto,
>>>> ip_addr host_peer_a, ip_addr host_peer_b,
>>>> u_int16_t port_peer_a, u_int16_t port_peer_b)
>>>> {
>>>> return(vlan_id+proto+
>>>> host_peer_a.v6.s6_addr[0]+host_peer_a.v6.s6_addr[1]+
>>>> host_peer_a.v6.s6_addr[2]+host_peer_a.v6.s6_addr[3]+
>>>> host_peer_b.v6.s6_addr[0]+host_peer_b.v6.s6_addr[1]+
>>>> host_peer_b.v6.s6_addr[2]+host_peer_b.v6.s6_addr[3]+
>>>> port_peer_a+port_peer_b);
>>>> }
>>>>
>>>> 2. Clustering logic is not working as expected for out of order fragments.
>>>> If non first fragment received first, then below snippet of code will enqueue this packet to an index 0 always.
>>>> Existing code creates an entry in hash only when first fragment is received. I tried to modified this code to first search a fragment in hash and if not found, then insert in into the hash irrespective of the order of the fragment. But It did not work for some reason.
>>>>
>>>> Before even working on that piece of code, for our requirement of using cluster_2_tuple, I feel that we don't even require to use the cluster fragment hash at all since we need only source and destination IP address which will be present in each and every packet
>>>> including fragments also.
>>>>
>>>> So I went ahead commenting whole piece of below code and just used "skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);"
>>>> to calculate the hash and it seems to work perfect. Even this will remove the overhead of using hashing.
>>>>
>>>> if (enable_frag_coherence && fragment_not_first) {
>>>> if (skb_hash == -1) { /* read hash once */
>>>> skb_hash = get_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src, hdr.extended_hdr.parsed_pkt.ipv4_dst, ip_id, more_fragments);
>>>> if (skb_hash < 0)
>>>> skb_hash = 0;
>>>> }
>>>> }
>>>> else if (!(enable_frag_coherence && first_fragment) || skb_hash == -1) {
>>>> /* compute hash once for all clusters in case of first fragment */
>>>> skb_hash = hash_pkt_cluster(cluster_ptr, &hdr);
>>>>
>>>> if (skb_hash < 0)
>>>> skb_hash = -skb_hash;
>>>>
>>>> if (enable_frag_coherence && first_fragment) {
>>>> add_fragment_app_id(hdr.extended_hdr.parsed_pkt.ipv4_src, hdr.extended_hdr.parsed_pkt.ipv4_dst,
>>>> ip_id, skb_hash % num_cluster_elements);
>>>> }
>>>> }
>>>>
>>>>
>>>> Do you foresee any issue if we go ahead with the mentioned above changes?
>>>>
>>>> Regards,
>>>> Gautam
>>>>
>>>>> On Thu, Nov 17, 2016 at 4:40 PM, Alfredo Cardigliano <cardigliano@ntop.org> wrote:
>>>>> Hi Gautam
>>>>> I will come back on this asap.
>>>>>
>>>>> Alfredo
>>>>>
>>>>>> On 17 Nov 2016, at 07:47, Chandrika Gautam <chandrika.iitd.rock@gmail.com> wrote:
>>>>>>
>>>>>> Hi Guys,
>>>>>>
>>>>>> Please help to resolve this.
>>>>>>
>>>>>> Regards,
>>>>>> Gautam
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Ntop-misc mailing list
>>>>> Ntop-misc@listgateway.unipi.it
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>
>>>> _______________________________________________
>>>> Ntop-misc mailing list
>>>> Ntop-misc@listgateway.unipi.it
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>
>>> _______________________________________________
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> _______________________________________________
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>
> _______________________________________________
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
