threading: support thread autopinning and interface-specific affinity
Using the new configuration format, it is now possible to configure CPU affinity
per interface.

The threading.autopin option has been added to automatically use CPUs from the
same NUMA node as the interface. The autopin option requires
hwloc-devel / hwloc-dev to be installed and the --enable-hwloc flag to be
passed to the configure script.

Ticket: 7036
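
A rough sketch of the resulting suricata.yaml threading section, based on the
documentation changes in this commit (interface name and CPU numbers are
illustrative only):

threading:
  set-cpu-affinity: yes
  autopin: yes               # pin threads to the NUMA node of their capture interface (needs hwloc)
  cpu-affinity:
    worker-cpu-set:
      cpu: [ "all" ]
      mode: "exclusive"
      interface-specific-cpu-set:
        - interface: "eth0"  # kernel name, PCI address or virtual device name
          cpu: [ 1,3,5 ]
          mode: "exclusive"
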
Lukas Sismis authored and lukashino committed Jan 9, 2025
1 parent 59d64d1 commit 2f02890
Showing 13 changed files with 857 additions and 66 deletions.
28 changes: 28 additions & 0 deletions configure.ac
@@ -741,6 +741,33 @@
exit 1
fi

LIBHWLOC=""
AC_ARG_ENABLE(hwloc,
AS_HELP_STRING([--enable-hwloc], [Enable hwloc support [default=no]]),
[enable_hwloc=$enableval],[enable_hwloc=no])
AS_IF([test "x$enable_hwloc" = "xyes"], [
PKG_CHECK_MODULES([HWLOC], [hwloc >= 2.0.0],
[AC_DEFINE([HAVE_HWLOC], [1], [Define if hwloc library is present and meets version requirements])],
LIBHWLOC="no")
if test "$LIBHWLOC" = "no"; then
echo
echo " ERROR! hwloc library version > 2.0.0 not found, go get it"
echo " from https://www.open-mpi.org/projects/hwloc/ "
echo " or your distribution:"
echo
echo " Ubuntu: apt-get install hwloc libhwloc-dev"
echo " Fedora: dnf install hwloc hwloc-devel"
echo " CentOS/RHEL: yum install hwloc hwloc-devel"
echo
exit 1
else
CFLAGS="${CFLAGS} ${HWLOC_CFLAGS}"
LDFLAGS="${LDFLAGS} ${HWLOC_LIBS}"
enable_hwloc="yes"
fi
])

# libpthread
AC_ARG_WITH(libpthread_includes,
[ --with-libpthread-includes=DIR libpthread include directory],
@@ -2561,6 +2588,7 @@ SURICATA_BUILD_CONF="Suricata Configuration:
JA4 support: ${enable_ja4}
Non-bundled htp: ${enable_non_bundled_htp}
Hyperscan support: ${enable_hyperscan}
Hwloc support: ${enable_hwloc}
Libnet support: ${enable_libnet}
liblz4 support: ${enable_liblz4}
Landlock support: ${enable_landlock}
82 changes: 82 additions & 0 deletions doc/userguide/configuration/suricata-yaml.rst
@@ -924,6 +924,7 @@ per available CPU/CPU core.

threading:
  set-cpu-affinity: yes
  autopin: no
  cpu-affinity:
    management-cpu-set:
      cpu: [ 0 ] # include only these cpus in affinity settings
@@ -940,6 +941,13 @@
        medium: [ "1-2" ]
        high: [ 3 ]
        default: "medium"
      interface-specific-cpu-set:
        - interface: "enp4s0f0" # 0000:3b:00.0 # net_bonding0 # ens1f0
          cpu: [ 1,3,5,7,9 ]
          mode: "exclusive"
          prio:
            high: [ "all" ]
            default: "medium"
    verdict-cpu-set:
      cpu: [ 0 ]
      prio:
@@ -976,6 +984,80 @@ Runmode Workers::
worker-cpu-set - used for receive,streamtcp,decode,detect,output(logging),respond/reject, verdict


Interface-specific CPU affinity settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Using the new configuration format introduced in Suricata 8.0, it is possible
to set CPU affinity per interface. This is useful when you have multiple
interfaces and want to dedicate specific CPU cores to specific interfaces,
for example when Suricata runs on multiple NUMA nodes and reads from
interfaces on each NUMA node.

Interface-specific affinity settings can be configured for the worker-cpu-set
and the receive-cpu-set (only used in autofp mode).
This feature is available for capture modes which work with interfaces
(af-packet, dpdk, etc.). The value of the interface key can be the kernel
interface name (e.g. eth0 for af-packet), the PCI address of the interface
(e.g. 0000:3b:00.0 for DPDK capture mode), or the name of the virtual device
interface (e.g. net_bonding0 for DPDK capture mode).
The interface names need to be unique and must be defined under the capture
mode configuration.

The interface-specific settings will override the global settings for the
worker-cpu-set and receive-cpu-set. The CPUs do not need to be contained in
the parent node settings. If the interface-specific settings are not defined,
the global settings will be used.

::

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    worker-cpu-set:
      interface-specific-cpu-set:
        - interface: "eth0" # 0000:3b:00.0 # net_bonding0
          cpu: [ 1,3,5,7,9 ]
          mode: "exclusive"
          prio:
            high: [ "all" ]
            default: "medium"

Automatic NUMA-aware CPU core pinning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Suricata is running on a system with multiple NUMA nodes, it is possible
to automatically use CPUs from the same NUMA node as the network capture
interface.
CPU cores on the same NUMA node as the network capture interface have lower
memory access latency, which increases the performance of Suricata.
This is enabled by setting the `autopin` option to `yes` in the threading
section. The option applies to the worker-cpu-set and receive-cpu-set.

::

threading:
  set-cpu-affinity: yes
  autopin: yes
  cpu-affinity:
    worker-cpu-set:
      cpu: [ "all" ]
      mode: "exclusive"
      prio:
        high: [ "all" ]

Consider two interfaces defined in the capture mode configuration, one on each
NUMA node, with the `autopin` option enabled and the worker-cpu-set set to use
all CPUs. Worker threads of the interface on the first NUMA node will be
pinned to CPUs on the first NUMA node, and worker threads of the interface on
the second NUMA node will be pinned to CPUs on the second NUMA node. If the
CPU cores on a given NUMA node are exhausted, the remaining worker threads
will be pinned to CPUs on the other NUMA node. A sketch of such a setup is
shown below.
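
The following sketch assumes an af-packet setup with two interfaces; the
interface names and their NUMA node placement are illustrative only.

::

  af-packet:
    - interface: enp4s0f0   # attached to NUMA node 0
    - interface: enp94s0f0  # attached to NUMA node 1

  threading:
    set-cpu-affinity: yes
    autopin: yes
    cpu-affinity:
      worker-cpu-set:
        cpu: [ "all" ]
        mode: "exclusive"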

The `threading.autopin` option can be combined with the interface-specific CPU
affinity settings, as sketched below.
To use the `autopin` option, the `hwloc` dependency must be installed and
`--enable-hwloc` must be passed to the configure script.
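
One possible combination is sketched here (interface name and CPU list are
illustrative only): the interface with an interface-specific entry keeps its
explicit CPU list, while interfaces without one fall back to the auto-pinned
global worker-cpu-set.

::

  threading:
    set-cpu-affinity: yes
    autopin: yes
    cpu-affinity:
      worker-cpu-set:
        cpu: [ "all" ]
        mode: "exclusive"
        interface-specific-cpu-set:
          - interface: "enp4s0f0"  # explicit CPU list for this interface
            cpu: [ 1,3,5 ]
            mode: "exclusive"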

IP Defrag
---------
14 changes: 14 additions & 0 deletions doc/userguide/upgrade.rst
@@ -99,6 +99,20 @@ Major changes
+ worker-cpu-set:
+ cpu: [0, 1]
- The `threading.cpu-affinity` configuration has been extended to support
interface-specific CPU affinity settings. This allows CPU affinity to be
specified for each interface separately.
The new configuration format is described in :ref:`suricata-yaml-threading`.
The old configuration format does not support this extension and will be
removed in Suricata 9.0.
- The `threading.cpu-affinity` configuration now supports autopinning
worker or receive threads to the NUMA node the network capture interface
is located on, as sketched below.
This can be enabled by setting `threading.autopin` to `yes`.
See :ref:`suricata-yaml-threading` for more information.
This requires the hwloc dependency to be installed and `--enable-hwloc`
to be passed to the configure script.
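
A minimal sketch of the new format combining both additions; the interface
name and CPU numbers are illustrative only.

::

  threading:
    set-cpu-affinity: yes
    autopin: yes
    cpu-affinity:
      worker-cpu-set:
        cpu: [ "all" ]
        interface-specific-cpu-set:
          - interface: "eth0"
            cpu: [ 2,4 ]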

Removals
~~~~~~~~
- The ssh keywords ``ssh.protoversion`` and ``ssh.softwareversion`` have been removed.
62 changes: 48 additions & 14 deletions src/runmode-dpdk.c
@@ -368,12 +368,17 @@ static int ConfigSetThreads(DPDKIfaceConfig *iconf, const char *entry_str)
SCReturnInt(-EINVAL);
}

ThreadsAffinityType *wtaf = GetAffinityTypeFromName("worker-cpu-set");
bool wtaf_periface = true;
ThreadsAffinityType *wtaf = GetAffinityTypeForNameAndIface("worker-cpu-set", iconf->iface);
if (wtaf == NULL) {
SCLogError("Specify worker-cpu-set list in the threading section");
SCReturnInt(-EINVAL);
wtaf_periface = false;
wtaf = GetAffinityTypeForNameAndIface("worker-cpu-set", NULL); // mandatory
if (wtaf == NULL) {
SCLogError("Specify worker-cpu-set list in the threading section");
SCReturnInt(-EINVAL);
}
}
ThreadsAffinityType *mtaf = GetAffinityTypeFromName("management-cpu-set");
ThreadsAffinityType *mtaf = GetAffinityTypeForNameAndIface("management-cpu-set", NULL);
if (mtaf == NULL) {
SCLogError("Specify management-cpu-set list in the threading section");
SCReturnInt(-EINVAL);
@@ -406,7 +411,12 @@ static int ConfigSetThreads(DPDKIfaceConfig *iconf, const char *entry_str)
}

if (strcmp(entry_str, "auto") == 0) {
iconf->threads = (uint16_t)sched_cpus / LiveGetDeviceCount();
if (wtaf_periface) {
iconf->threads = (uint16_t)sched_cpus;
SCLogConfig("%s: auto-assigned %u threads", iconf->iface, iconf->threads);
SCReturnInt(0);
}
iconf->threads = (uint16_t)sched_cpus / LiveGetDeviceCountWithoutAssignedThreading();
if (iconf->threads == 0) {
SCLogError("Not enough worker CPU cores with affinity were configured");
SCReturnInt(-ERANGE);
@@ -416,7 +426,8 @@ static int ConfigSetThreads(DPDKIfaceConfig *iconf, const char *entry_str)
iconf->threads++;
remaining_auto_cpus--;
} else if (remaining_auto_cpus == -1) {
remaining_auto_cpus = (int32_t)sched_cpus % LiveGetDeviceCount();
remaining_auto_cpus =
(int32_t)sched_cpus % LiveGetDeviceCountWithoutAssignedThreading();
if (remaining_auto_cpus > 0) {
iconf->threads++;
remaining_auto_cpus--;
@@ -844,23 +855,46 @@ static int ConfigLoad(DPDKIfaceConfig *iconf, const char *iface)
SCReturnInt(0);
}

static int32_t ConfigValidateThreads(uint16_t iface_threads)
static bool ConfigThreadsGenericIsValid(uint16_t iface_threads, ThreadsAffinityType *wtaf)
{
static uint32_t total_cpus = 0;
total_cpus += iface_threads;
ThreadsAffinityType *wtaf = GetAffinityTypeFromName("worker-cpu-set");
if (wtaf == NULL) {
SCLogError("Specify worker-cpu-set list in the threading section");
return -1;
return false;
}
if (total_cpus > UtilAffinityGetAffinedCPUNum(wtaf)) {
SCLogError("Interfaces requested more cores than configured in the threading section "
"(requested %d configured %d",
SCLogError("Interfaces requested more cores than configured in the worker-cpu-set "
"threading section (requested %d configured %d",
total_cpus, UtilAffinityGetAffinedCPUNum(wtaf));
return -1;
return false;
}

return 0;
return true;
}

static bool ConfigThreadsInterfaceIsValid(uint16_t iface_threads, ThreadsAffinityType *itaf)
{
if (iface_threads > UtilAffinityGetAffinedCPUNum(itaf)) {
SCLogError("Interface requested more cores than configured in the interface-specific "
"threading section (requested %d configured %d",
iface_threads, UtilAffinityGetAffinedCPUNum(itaf));
return false;
}

return true;
}

static bool ConfigIsThreadingValid(uint16_t iface_threads, const char *iface)
{
ThreadsAffinityType *itaf = GetAffinityTypeForNameAndIface("worker-cpu-set", iface);
ThreadsAffinityType *wtaf = GetAffinityTypeForNameAndIface("worker-cpu-set", NULL);
if (itaf && !ConfigThreadsInterfaceIsValid(iface_threads, itaf)) {
return false;
} else if (itaf == NULL && !ConfigThreadsGenericIsValid(iface_threads, wtaf)) {
return false;
}
return true;
}

static DPDKIfaceConfig *ConfigParse(const char *iface)
@@ -873,7 +907,7 @@ static DPDKIfaceConfig *ConfigParse(const char *iface)

ConfigInit(&iconf);
retval = ConfigLoad(iconf, iface);
if (retval < 0 || ConfigValidateThreads(iconf->threads) != 0) {
if (retval < 0 || !ConfigIsThreadingValid(iconf->threads, iface)) {
iconf->DerefFunc(iconf);
SCReturnPtr(NULL, "void *");
}
4 changes: 4 additions & 0 deletions src/suricata.c
@@ -111,6 +111,7 @@
#include "tmqh-packetpool.h"
#include "tm-queuehandlers.h"

#include "util-affinity.h"
#include "util-byte.h"
#include "util-conf.h"
#include "util-coredump-config.h"
@@ -2297,6 +2298,9 @@ void PostRunDeinit(const int runmode, struct timeval *start_time)
StreamTcpFreeConfig(STREAM_VERBOSE);
DefragDestroy();
HttpRangeContainersDestroy();
#ifdef HAVE_HWLOC
TopologyDestroy();
#endif /* HAVE_HWLOC */

TmqResetQueues();
#ifdef PROFILING
3 changes: 3 additions & 0 deletions src/threadvars.h
@@ -136,6 +136,9 @@ typedef struct ThreadVars_ {
struct FlowQueue_ *flow_queue;
bool break_loop;

/** Interface-specific thread affinity */
char *iface_name;

Storage storage[];
} ThreadVars;

22 changes: 21 additions & 1 deletion src/tm-threads.c
@@ -865,8 +865,24 @@ TmEcode TmThreadSetupOptions(ThreadVars *tv)
TmThreadSetPrio(tv);
if (tv->thread_setup_flags & THREAD_SET_AFFTYPE) {
ThreadsAffinityType *taf = &thread_affinity[tv->cpu_affinity];
bool use_iface_affinity = RunmodeIsAutofp() && tv->cpu_affinity == RECEIVE_CPU_SET &&
FindAffinityByInterface(taf, tv->iface_name) != NULL;
use_iface_affinity |= RunmodeIsWorkers() && tv->cpu_affinity == WORKER_CPU_SET &&
FindAffinityByInterface(taf, tv->iface_name) != NULL;

if (use_iface_affinity) {
taf = FindAffinityByInterface(taf, tv->iface_name);
}

if (UtilAffinityGetAffinedCPUNum(taf) == 0) {
if (!taf->nocpu_warned) {
SCLogWarning("No CPU affinity set for %s", AffinityGetYamlPath(taf));
taf->nocpu_warned = true;
}
}

if (taf->mode_flag == EXCLUSIVE_AFFINITY) {
uint16_t cpu = AffinityGetNextCPU(taf);
uint16_t cpu = AffinityGetNextCPU(tv, taf);
SetCPUAffinity(cpu);
/* If CPU is in a set overwrite the default thread prio */
if (CPU_ISSET(cpu, &taf->lowprio_cpu)) {
@@ -1600,6 +1616,10 @@ static void TmThreadFree(ThreadVars *tv)
SCFree(tv->printable_name);
}

if (tv->iface_name) {
SCFree(tv->iface_name);
}

if (tv->stream_pq_local) {
BUG_ON(tv->stream_pq_local->len);
SCMutexDestroy(&tv->stream_pq_local->mutex_q);