Mailing List Archive

cento packet processing pipeline
I am looking to implement a packet processing pipeline that first
aggregates a number of interfaces, generates ipfix flows and outputs
to multiple aggregated queues (cento-ids). Afterwards, I am looking
to have (cento?) read from one of the aggregated queues and apply
packet shunting before passing it off to n2disk. The other aggregated
queues would be used to feed other analysis tools via load balanced
output queues.

I am currently using zbalance_ipc for the initial “capture” and
distribution. But it seems cento is the way it should be done. From
what I can tell, I would require two new features to be added:

1) Add the ability to create multiple aggregated queues as output
from cento (https://github.com/ntop/nProbe/issues/86).

2) Add the ability to bypass flow generation in cento and use it just
for aggregation and distribution of packets.
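For reference, the zbalance_ipc stage described above can be sketched roughly as follows; the interface names, cluster id, queue counts, and core binding are illustrative assumptions, not taken from this thread:

```shell
# Capture several ZC interfaces into cluster 99 and fan the traffic out
# to consumer queues (names/counts are assumptions).
# -n 4,1 defines two consumer applications: one with 4 balanced queues
# (for analysis tools) and one with a single queue that effectively sees
# the aggregate; -m 1 selects IP-hash distribution, -g 2 binds the core.
zbalance_ipc -i zc:eth1,zc:eth2,zc:eth3,zc:eth4 -c 99 -n 4,1 -m 1 -g 2
```

Consumers then attach to the cluster queues with the usual zc:cluster@queue notation (e.g. zc:99@0).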

I am not sure if this is the correct approach. I wonder if you might
offer some advice as to the best approach for doing this.

Thanks!
_______________________________________________
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
Re: cento packet processing pipeline
Hi Jeremy
please read below.

> On 26 Jul 2016, at 11:50, Jeremy Ashton <jeremy.ashton@shopify.com> wrote:
>
> I am looking to implement a packet processing pipeline that first
> aggregates a number of interfaces, generates ipfix flows and outputs
> to multiple aggregated queues (cento-ids).

Please provide:
- (rough) number on ingress interfaces
- expected ingress total rate

> Afterwards, I am looking
> to have (cento?) read from one of the aggregated queues and apply
> packet shunting before passing it off to n2disk.

Cento already does shunting on the aggregated queue (for feeding n2disk).
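A minimal sketch of that existing cento-to-n2disk path, assuming ZC queue addressing; the cento option name is paraphrased from the discussion, not verified (check `cento --help` for the real spelling):

```shell
# cento ingests the monitor ports, builds flows, and pushes shunted
# packets to one aggregated ZC queue (flag name is an assumption):
cento -i zc:eth1,zc:eth2 --aggregated-egress-queue

# n2disk attaches to that queue (cluster 99, queue 0 assumed here)
# and dumps the shunted traffic to disk:
n2disk -i zc:99@0 -o /storage/pcap
```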

> The other aggregated
> queues would be used to feed other analysis tools via load balanced
> output queues.

Cento already does load balancing, but not after aggregation (it distributes
the traffic coming from each -i to multiple egress queues). Please note you can
specify an interface pair with -i ethX,ethY if you have two directions from two
ingress interfaces (i.e. from a TAP). Is that ok for your use case?
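To illustrate the per-interface balancing described here (every name, flag spelling, and count below is an assumption, including the placeholder analysis_tool consumer):

```shell
# The two directions of a TAP are given as one -i pair; cento balances
# that traffic across N egress queues (queue-count flag is an assumed
# spelling, not verified):
cento -i zc:eth2,zc:eth3 --balanced-egress-queues 4

# A hypothetical consumer attaches to one queue per instance:
analysis_tool -i zc:99@0   # and zc:99@1, zc:99@2, zc:99@3
```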

Alfredo

Re: cento packet processing pipeline
AC> (rough) number on ingress interfaces, expected ingress total rate
JA> Ingress interfaces will be 8x 10Gbit. With these we form 80Gbps
aggregate link to our edge switches. The switches are then configured
to send any of the monitor sessions down the aggregate link.
Sustained traffic volume ~<7Gbps. Ideal capture capacity for ~20Gbps
before starting to drop packets.

AC> Cento already does shunting on aggregated queue (for feeding n2disk).
JA> I understand it already offers shunting on aggregated queue. The
question would be, how do I go about using cento to aggregate the
interfaces into multiple aggregate egress queues. Of which, we could
then use cento to read from the aggregated egress queue then shunt the
traffic for use by n2disk.

AC> Cento already does load balancing, but not after aggregation (it
distributes to multiple egress queues traffic coming from each -i).
Please note you can specify an interface pair with -i ethX,ethY if you
have two directions from two ingress interfaces (i.e. from a TAP). Is
it ok for your use case?
JA> I think what I am asking is if one can use cento as a consumer of
packets that were output from a separate cento process providing an
aggregated queue.

I am not sure if this is making sense. Let me know if you require
clarification.



Re: cento packet processing pipeline
Hi Jeremy
please read inline:

> On 26 Jul 2016, at 14:24, Jeremy Ashton <jeremy.ashton@shopify.com> wrote:
>
> AC> (rough) number on ingress interfaces, expected ingress total rate
> JA> Ingress interfaces will be 8x 10Gbit. With these we form 80Gbps
> aggregate link to our edge switches. The switches are then configured
> to send any of the monitor sessions down the aggregate link.
> Sustained traffic volume ~<7Gbps. Ideal capture capacity for ~20Gbps
> before starting to drop packets.

Ok got it.

> AC> Cento already does shunting on aggregated queue (for feeding n2disk).
> JA> I understand it already offers shunting on aggregated queue. The
> question would be, how do I go about using cento to aggregate the
> interfaces into multiple aggregate egress queues.

At the moment only one aggregated queue is supported, we will discuss
this internally and probably add it to the roadmap.

> Of which, we could
> then use cento to read from the aggregated egress queue then shunt the
> traffic for use by n2disk.

Why do you want to use 2 cento processes in a 2-stage pipeline instead of
doing everything in a single process?

> AC> Cento already does load balancing, but not after aggregation (it
> distributes to multiple egress queues traffic coming from each -i).
> Please note you can specify an interface pair with -i ethX,ethY if you
> have two directions from two ingress interfaces (i.e. from a TAP). Is
> it ok for your use case?
> JA> I think what I am asking is if one can use cento as a consumer of
> packets that were output from a separate cento process providing an
> aggregated queue.

This is not an expected use case and it’s not optimal (e.g. it requires
one extra packet copy passing from one cento process to the other), even
though it should “work”.

> I am not sure if this is making sense. Let me know if you require
> clarification.

It could make sense, but I am trying to figure out the reason for
pipelining 2 cento processes; probably I am missing something.

Regards
Alfredo

Re: cento packet processing pipeline
JA> Of which, we could then use cento to read from the aggregated
egress queue then shunt the traffic for use by n2disk.
AC> Why do you want to use 2 cento processes in a 2-stage pipeline
instead of doing everything in a single process?
JA> We need to break up the various output queues. One aggregated
output would have packet shunting rules applied to feed n2disk.
Meanwhile another output queue would be created to load-balance across
a cluster of analysis engines. This would imply we require the
ability to choose the shunting rules based on the queue. To
summarize, we need:

1) Aggregate up to 8x 10Gbit links (sustained <~7Gbps).
2) Generate ipfix records for the associated flows.
3) Create an aggregated and packet shunted output queue for n2disk consumption.
4) Create a balanced output queue for consumption by analysis tool 1.
5) Create a balanced output queue for consumption by analysis tool 2.

NB: it would be nice if one could attach an egress queue configuration
to each individual queue.
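To make the request concrete, the single-process setup behind requirements 1-5 might look something like the following. Every flag here is hypothetical (repeating a balanced-set option per tool is exactly the capability being requested, and the IPFIX export flag is assumed):

```shell
# HYPOTHETICAL single-process configuration; these flags do not all
# exist today. Multiple balanced queue sets is the missing feature.
cento -i zc:eth1,zc:eth2,zc:eth3,zc:eth4,zc:eth5,zc:eth6,zc:eth7,zc:eth8 \
      --v10 collector.example.com:2055 \
      --aggregated-egress-queue \
      --shunt-conf shunt.conf \
      --balanced-egress-queues 4 \
      --balanced-egress-queues 4
# Line 1: aggregate the 8x 10Gbit links (requirement 1)
# --v10: export IPFIX records (requirement 2; flag name assumed)
# --aggregated-egress-queue + --shunt-conf: shunted queue for n2disk (3)
# Two balanced sets: one per analysis tool (requirements 4 and 5)
```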

Hope that helps clear it up a bit. Thanks.


Re: cento packet processing pipeline
Hi Jeremy
ok, now I understand. Please note that there is already a separate
configuration for aggregate and balanced queues, and you can run them
simultaneously; what is missing is the ability to create multiple sets
of balanced queues for feeding different analysis tools.
We will discuss this internally.
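So, with today's cento, the closest approximation runs both egress types side by side: one aggregated (shunted) queue for n2disk plus a single balanced set shared by all tools. Flag spellings are assumed, not verified:

```shell
# Aggregated queue and balanced queues at the same time; only ONE
# balanced set is possible today, hence the feature request above
# (both flag names are assumptions):
cento -i zc:eth1,zc:eth2 \
      --aggregated-egress-queue \
      --balanced-egress-queues 4
```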

Regards
Alfredo

Re: cento packet processing pipeline
Alfredo, have you had a chance to discuss internally? What is the
outcome of said discussion?
