numa_cpu_affinity/binding cores for snort
If 'hwloc-ls' tells me my ixgbe device is on node 1:

NUMANode L#0 (P#0 62GB)
[...]
PCIBridge
PCI 14e4:1657
Net L#2 "eno1"
PCI 14e4:1657
Net L#3 "eno2"
PCI 14e4:1657
Net L#4 "eno3"
PCI 14e4:1657
Net L#5 "eno4"
[...]
NUMANode L#1 (P#1 63GB)
[...]
HostBridge L#8
PCIBridge
PCI 8086:10fb
Net L#7 "ens5f0"
PCI 8086:10fb
Net L#8 "ens5f1"

and 'numactl --hardware' tells me my cpu cores are located as follows:

available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
node 0 size: 63470 MB
node 0 free: 26682 MB
node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71

and I want to run 31 Snort instances with zbalance_ipc, should I be doing
this?

modprobe ixgbe RSS=1,1 numa_cpu_affinity=18,19,...,35,54,55,...,N
/usr/local/pf/sbin/zbalance_ipc -i zc:ens5f0 -m 4 -n 31,1 -c 99 -g 70 -S 71

and using

--daq-var bindcpu=18
--daq-var bindcpu=19
[...]
--daq-var bindcpu=35
--daq-var bindcpu=54
--daq-var bindcpu=55
--daq-var bindcpu=N

for my snort processes?
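
(As a sanity check, I'm assuming the sysfs locality info and a quick pfcount
against the first cluster queue should line up with the above once
zbalance_ipc is running, e.g.:

cat /sys/class/net/ens5f0/device/numa_node       # should print 1 if the port really is on node 1
cat /sys/class/net/ens5f0/device/local_cpulist   # should match the node 1 cpu list from numactl
pfcount -i zc:99@0                               # test-consume the first zbalance_ipc output queue

before I start the snort processes.)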

Thanks,

--
Jim Hranicky
Data Security Specialist
UF Information Technology
105 NW 16TH ST Room #104 GAINESVILLE FL 32603-1826
352-273-1341

Re: numa_cpu_affinity/binding cores for snort
Hi Jim
Everything looks good. Just one change to the driver parameters; this is enough:

RSS=1,1 numa_cpu_affinity=18,18

numa_cpu_affinity is per interface (one core per port), so a single node-1 core for each of the two ixgbe ports is all that is needed there; the per-process placement is already handled by the bindcpu values and by zbalance_ipc's -g/-S options.
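
In other words, keeping the rest of your setup (the zbalance_ipc command and the bindcpu values) unchanged, the module load from your message would just become:

modprobe ixgbe RSS=1,1 numa_cpu_affinity=18,18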

Regards
Alfredo

> On 21 Mar 2018, at 22:41, Jim Hranicky <jfh@ufl.edu> wrote:
>
> [...]