author     Linus Torvalds <torvalds@linux-foundation.org>   2018-08-15 15:04:25 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2018-08-15 15:04:25 -0700
commit     9a76aba02a37718242d7cdc294f0a3901928aa57 (patch)
tree       2040d038f85d2120f21af83b0793efd5af1864e3 /tools
parent     0a957467c5fd46142bc9c52758ffc552d4c5e2f7 (diff)
parent     26a1ccc6c117be8e33e0410fce8c5298b0015b99 (diff)
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
"Highlights:
- Gustavo A. R. Silva keeps working on the implicit switch fallthru
changes.
- Support 802.11ax High-Efficiency wireless in cfg80211 et al., from
Luca Coelho.
- Re-enable ASPM in r8169, from Kai-Heng Feng.
- Add virtual XFRM interfaces, which avoid all of the limitations of
existing IPSEC tunnels. From Steffen Klassert.
- Convert GRO over to use a hash table, so that when we have many
flows active we don't traverse a long list during accumulation.
- Many new self tests for routing, TC, tunnels, etc. Too many
contributors to mention them all, but I'm really happy to keep
seeing this stuff.
- Hardware timestamping support for dpaa_eth/fsl-fman from Yangbo Lu.
- Lots of cleanups and fixes in L2TP code from Guillaume Nault.
- Add IPSEC offload support to netdevsim, from Shannon Nelson.
- Add support for slotting with non-uniform distribution to netem
packet scheduler, from Yousuk Seung.
- Add UDP GSO support to mlx5e, from Boris Pismenny.
- Support offloading of Team LAG in NFP, from John Hurley.
- Allow configuring TX queue selection based upon RX queue, from
Amritha Nambiar.
- Support ethtool ring size configuration in aquantia, from Anton
Mikaev.
- Support DSCP and flowlabel per-transport in SCTP, from Xin Long.
- Support list-based batching and stack traversal of SKBs; this is
very exciting work. From Edward Cree.
- Busyloop optimizations in vhost_net, from Toshiaki Makita.
- Introduce the ETF qdisc, which allows time-based transmissions. IGB
can offload this in hardware. From Vinicius Costa Gomes.
- Add parameter support to devlink, from Moshe Shemesh.
- Several multiplication and division optimizations for BPF JIT in
nfp driver, from Jiong Wang.
- Lots of preparatory work to make more of the packet scheduler layer
lockless, when possible, from Vlad Buslov.
- Add ACK filter and NAT awareness to sch_cake packet scheduler, from
Toke Høiland-Jørgensen.
- Support regions and region snapshots in devlink, from Alex Vesker.
- Allow attaching XDP programs to both HW and SW at the same time on
a given device, with initial support in nfp. From Jakub Kicinski.
- Add TLS RX offload and support in mlx5, from Ilya Lesokhin.
- Use PHYLIB in r8169 driver, from Heiner Kallweit.
- All sorts of changes to support Spectrum 2 in mlxsw driver, from
Ido Schimmel.
- PTP support in mv88e6xxx DSA driver, from Andrew Lunn.
- Make TCP_USER_TIMEOUT socket option more accurate, from Jon
Maxwell.
- Support for templates in packet scheduler classifier, from Jiri
Pirko.
- IPV6 support in RDS, from Ka-Cheong Poon.
- Native tproxy support in nf_tables, from Máté Eckl.
- Maintain IP fragment queue in an rbtree, but optimize properly for
in-order frags. From Peter Oskolkov.
- Improve handling of ACKs on hole repairs, from Yuchung Cheng"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1996 commits)
bpf: test: fix spelling mistake "REUSEEPORT" -> "REUSEPORT"
hv/netvsc: Fix NULL dereference at single queue mode fallback
net: filter: mark expected switch fall-through
xen-netfront: fix warn message as irq device name has '/'
cxgb4: Add new T5 PCI device ids 0x50af and 0x50b0
net: dsa: mv88e6xxx: missing unlock on error path
rds: fix building with IPV6=m
inet/connection_sock: prefer _THIS_IP_ to current_text_addr
net: dsa: mv88e6xxx: bitwise vs logical bug
net: sock_diag: Fix spectre v1 gadget in __sock_diag_cmd()
ieee802154: hwsim: using right kind of iteration
net: hns3: Add vlan filter setting by ethtool command -K
net: hns3: Set tx ring' tc info when netdev is up
net: hns3: Remove tx ring BD len register in hns3_enet
net: hns3: Fix desc num set to default when setting channel
net: hns3: Fix for phy link issue when using marvell phy driver
net: hns3: Fix for information of phydev lost problem when down/up
net: hns3: Fix for command format parsing error in hclge_is_all_function_id_zero
net: hns3: Add support for serdes loopback selftest
bnxt_en: take coredump_record structure off stack
...
Diffstat (limited to 'tools')
110 files changed, 11995 insertions, 486 deletions
diff --git a/tools/bpf/.gitignore b/tools/bpf/.gitignore new file mode 100644 index 000000000000..dfe2bd5a4b95 --- /dev/null +++ b/tools/bpf/.gitignore @@ -0,0 +1,5 @@ +FEATURE-DUMP.bpf +bpf_asm +bpf_dbg +bpf_exp.yacc.* +bpf_jit_disasm diff --git a/tools/bpf/Makefile.helpers b/tools/bpf/Makefile.helpers new file mode 100644 index 000000000000..c34fea77f39f --- /dev/null +++ b/tools/bpf/Makefile.helpers @@ -0,0 +1,59 @@ +ifndef allow-override + include ../scripts/Makefile.include + include ../scripts/utilities.mak +else + # Assume Makefile.helpers is being run from bpftool/Documentation + # subdirectory. Go up two more directories to fetch bpf.h header and + # associated script. + UP2DIR := ../../ +endif + +INSTALL ?= install +RM ?= rm -f +RMDIR ?= rmdir --ignore-fail-on-non-empty + +ifeq ($(V),1) + Q = +else + Q = @ +endif + +prefix ?= /usr/local +mandir ?= $(prefix)/man +man7dir = $(mandir)/man7 + +HELPERS_RST = bpf-helpers.rst +MAN7_RST = $(HELPERS_RST) + +_DOC_MAN7 = $(patsubst %.rst,%.7,$(MAN7_RST)) +DOC_MAN7 = $(addprefix $(OUTPUT),$(_DOC_MAN7)) + +helpers: man7 +man7: $(DOC_MAN7) + +RST2MAN_DEP := $(shell command -v rst2man 2>/dev/null) + +$(OUTPUT)$(HELPERS_RST): $(UP2DIR)../../include/uapi/linux/bpf.h + $(QUIET_GEN)$(UP2DIR)../../scripts/bpf_helpers_doc.py --filename $< > $@ + +$(OUTPUT)%.7: $(OUTPUT)%.rst +ifndef RST2MAN_DEP + $(error "rst2man not found, but required to generate man pages") +endif + $(QUIET_GEN)rst2man $< > $@ + +helpers-clean: + $(call QUIET_CLEAN, eBPF_helpers-manpage) + $(Q)$(RM) $(DOC_MAN7) $(OUTPUT)$(HELPERS_RST) + +helpers-install: helpers + $(call QUIET_INSTALL, eBPF_helpers-manpage) + $(Q)$(INSTALL) -d -m 755 $(DESTDIR)$(man7dir) + $(Q)$(INSTALL) -m 644 $(DOC_MAN7) $(DESTDIR)$(man7dir) + +helpers-uninstall: + $(call QUIET_UNINST, eBPF_helpers-manpage) + $(Q)$(RM) $(addprefix $(DESTDIR)$(man7dir)/,$(_DOC_MAN7)) + $(Q)$(RMDIR) $(DESTDIR)$(man7dir) + +.PHONY: helpers helpers-clean helpers-install helpers-uninstall diff --git a/tools/bpf/bpftool/.gitignore b/tools/bpf/bpftool/.gitignore index d7e678c2d396..67167e44b726 100644 --- a/tools/bpf/bpftool/.gitignore +++ b/tools/bpf/bpftool/.gitignore @@ -1,3 +1,5 @@ *.d bpftool +bpftool*.8 +bpf-helpers.* FEATURE-DUMP.bpftool diff --git a/tools/bpf/bpftool/Documentation/Makefile b/tools/bpf/bpftool/Documentation/Makefile index a9d47c1558bb..f7663a3e60c9 100644 --- a/tools/bpf/bpftool/Documentation/Makefile +++ b/tools/bpf/bpftool/Documentation/Makefile @@ -15,12 +15,15 @@ prefix ?= /usr/local mandir ?= $(prefix)/man man8dir = $(mandir)/man8 -MAN8_RST = $(wildcard *.rst) +# Load targets for building eBPF helpers man page. 
+include ../../Makefile.helpers + +MAN8_RST = $(filter-out $(HELPERS_RST),$(wildcard *.rst)) _DOC_MAN8 = $(patsubst %.rst,%.8,$(MAN8_RST)) DOC_MAN8 = $(addprefix $(OUTPUT),$(_DOC_MAN8)) -man: man8 +man: man8 helpers man8: $(DOC_MAN8) RST2MAN_DEP := $(shell command -v rst2man 2>/dev/null) @@ -31,16 +34,16 @@ ifndef RST2MAN_DEP endif $(QUIET_GEN)rst2man $< > $@ -clean: +clean: helpers-clean $(call QUIET_CLEAN, Documentation) $(Q)$(RM) $(DOC_MAN8) -install: man +install: man helpers-install $(call QUIET_INSTALL, Documentation-man) $(Q)$(INSTALL) -d -m 755 $(DESTDIR)$(man8dir) $(Q)$(INSTALL) -m 644 $(DOC_MAN8) $(DESTDIR)$(man8dir) -uninstall: +uninstall: helpers-uninstall $(call QUIET_UNINST, Documentation-man) $(Q)$(RM) $(addprefix $(DESTDIR)$(man8dir)/,$(_DOC_MAN8)) $(Q)$(RMDIR) $(DESTDIR)$(man8dir) diff --git a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst index 7b0e6d453e92..edbe81534c6d 100644 --- a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst +++ b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst @@ -15,12 +15,13 @@ SYNOPSIS *OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] | { **-f** | **--bpffs** } } *COMMANDS* := - { **show** | **list** | **attach** | **detach** | **help** } + { **show** | **list** | **tree** | **attach** | **detach** | **help** } MAP COMMANDS ============= | **bpftool** **cgroup { show | list }** *CGROUP* +| **bpftool** **cgroup tree** [*CGROUP_ROOT*] | **bpftool** **cgroup attach** *CGROUP* *ATTACH_TYPE* *PROG* [*ATTACH_FLAGS*] | **bpftool** **cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG* | **bpftool** **cgroup help** @@ -39,6 +40,15 @@ DESCRIPTION Output will start with program ID followed by attach type, attach flags and program name. + **bpftool cgroup tree** [*CGROUP_ROOT*] + Iterate over all cgroups in *CGROUP_ROOT* and list all + attached programs. If *CGROUP_ROOT* is not specified, + bpftool uses cgroup v2 mountpoint. + + The output is similar to the output of cgroup show/list + commands: it starts with absolute cgroup path, followed by + program ID, attach type, attach flags and program name. + **bpftool cgroup attach** *CGROUP* *ATTACH_TYPE* *PROG* [*ATTACH_FLAGS*] Attach program *PROG* to the cgroup *CGROUP* with attach type *ATTACH_TYPE* and optional *ATTACH_FLAGS*. 
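The bpftool-cgroup.rst hunk above documents the new "bpftool cgroup tree" command; its implementation appears further down in this diff, in tools/bpf/bpftool/cgroup.c (find_cgroup_root() and do_show_tree_fn()). For orientation, here is a condensed sketch of that traversal, not code from the patch itself: it walks the cgroup v2 hierarchy with nftw() and prints every cgroup that has programs attached, using libbpf's bpf_prog_query(). The hard-coded "/sys/fs/cgroup" mount point and the plain printf() output are simplifying assumptions; bpftool discovers the cgroup2 mount point from /proc/mounts and can emit JSON.

#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>            /* bpf_prog_query(), provided by libbpf */

static int walk_fn(const char *fpath, const struct stat *sb,
		   int typeflag, struct FTW *ftw)
{
	enum bpf_attach_type type;
	int fd;

	(void)sb;
	(void)ftw;

	if (typeflag != FTW_D)  /* only cgroup directories are of interest */
		return 0;

	fd = open(fpath, O_RDONLY);
	if (fd < 0)
		return 0;       /* bpftool reports the error and aborts */

	for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) {
		__u32 cnt = 0;

		/* NULL prog_ids: only ask the kernel for the count. */
		if (!bpf_prog_query(fd, type, 0, NULL, NULL, &cnt) && cnt)
			printf("%s: attach type %d, %u program(s)\n",
			       fpath, type, cnt);
	}

	close(fd);
	return 0;
}

int main(void)
{
	/* Assumed mount point; bpftool parses /proc/mounts for cgroup2. */
	return nftw("/sys/fs/cgroup", walk_fn, 1024, FTW_MOUNT) ? 1 : 0;
}

Unlike this sketch, the patch distinguishes EINVAL (attach type unknown to the kernel) from real query failures and aborts the walk on the latter.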
diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst index 43d34a5c3ec5..64156a16d530 100644 --- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst +++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst @@ -24,10 +24,20 @@ MAP COMMANDS | **bpftool** **prog dump xlated** *PROG* [{**file** *FILE* | **opcodes** | **visual**}] | **bpftool** **prog dump jited** *PROG* [{**file** *FILE* | **opcodes**}] | **bpftool** **prog pin** *PROG* *FILE* -| **bpftool** **prog load** *OBJ* *FILE* +| **bpftool** **prog load** *OBJ* *FILE* [**type** *TYPE*] [**map** {**idx** *IDX* | **name** *NAME*} *MAP*] [**dev** *NAME*] | **bpftool** **prog help** | +| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* } | *PROG* := { **id** *PROG_ID* | **pinned** *FILE* | **tag** *PROG_TAG* } +| *TYPE* := { +| **socket** | **kprobe** | **kretprobe** | **classifier** | **action** | +| **tracepoint** | **raw_tracepoint** | **xdp** | **perf_event** | **cgroup/skb** | +| **cgroup/sock** | **cgroup/dev** | **lwt_in** | **lwt_out** | **lwt_xmit** | +| **lwt_seg6local** | **sockops** | **sk_skb** | **sk_msg** | **lirc_mode2** | +| **cgroup/bind4** | **cgroup/bind6** | **cgroup/post_bind4** | **cgroup/post_bind6** | +| **cgroup/connect4** | **cgroup/connect6** | **cgroup/sendmsg4** | **cgroup/sendmsg6** +| } + DESCRIPTION =========== @@ -64,8 +74,19 @@ DESCRIPTION Note: *FILE* must be located in *bpffs* mount. - **bpftool prog load** *OBJ* *FILE* + **bpftool prog load** *OBJ* *FILE* [**type** *TYPE*] [**map** {**idx** *IDX* | **name** *NAME*} *MAP*] [**dev** *NAME*] Load bpf program from binary *OBJ* and pin as *FILE*. + **type** is optional, if not specified program type will be + inferred from section names. + By default bpftool will create new maps as declared in the ELF + object being loaded. **map** parameter allows for the reuse + of existing maps. It can be specified multiple times, each + time for a different map. *IDX* refers to index of the map + to be replaced in the ELF file counting from 0, while *NAME* + allows to replace a map by name. *MAP* specifies the map to + use, referring to it by **id** or through a **pinned** file. + If **dev** *NAME* is specified program will be loaded onto + given networking device (offload). Note: *FILE* must be located in *bpffs* mount. @@ -159,6 +180,14 @@ EXAMPLES mov %rbx,0x0(%rbp) 48 89 5d 00 +| +| **# bpftool prog load xdp1_kern.o /sys/fs/bpf/xdp1 type xdp map name rxcnt id 7** +| **# bpftool prog show pinned /sys/fs/bpf/xdp1** +| 9: xdp name xdp_prog1 tag 539ec6ce11b52f98 gpl +| loaded_at 2018-06-25T16:17:31-0700 uid 0 +| xlated 488B jited 336B memlock 4096B map_ids 7 +| **# rm /sys/fs/bpf/xdp1** +| SEE ALSO ======== diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile index 892dbf095bff..74288a2197ab 100644 --- a/tools/bpf/bpftool/Makefile +++ b/tools/bpf/bpftool/Makefile @@ -23,10 +23,10 @@ endif LIBBPF = $(BPF_PATH)libbpf.a -BPFTOOL_VERSION=$(shell make --no-print-directory -sC ../../.. kernelversion) +BPFTOOL_VERSION := $(shell make --no-print-directory -sC ../../.. 
kernelversion) $(LIBBPF): FORCE - $(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(OUTPUT) $(OUTPUT)libbpf.a FEATURES_DUMP=$(FEATURE_DUMP_EXPORT) + $(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(OUTPUT) $(OUTPUT)libbpf.a $(LIBBPF)-clean: $(call QUIET_CLEAN, libbpf) @@ -52,7 +52,7 @@ INSTALL ?= install RM ?= rm -f FEATURE_USER = .bpftool -FEATURE_TESTS = libbfd disassembler-four-args +FEATURE_TESTS = libbfd disassembler-four-args reallocarray FEATURE_DISPLAY = libbfd disassembler-four-args check_feat := 1 @@ -75,6 +75,10 @@ ifeq ($(feature-disassembler-four-args), 1) CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE endif +ifeq ($(feature-reallocarray), 0) +CFLAGS += -DCOMPAT_NEED_REALLOCARRAY +endif + include $(wildcard $(OUTPUT)*.d) all: $(OUTPUT)bpftool diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool index 1e1083321643..598066c40191 100644 --- a/tools/bpf/bpftool/bash-completion/bpftool +++ b/tools/bpf/bpftool/bash-completion/bpftool @@ -99,6 +99,35 @@ _bpftool_get_prog_tags() command sed -n 's/.*"tag": "\(.*\)",$/\1/p' )" -- "$cur" ) ) } +_bpftool_get_obj_map_names() +{ + local obj + + obj=$1 + + maps=$(objdump -j maps -t $obj 2>/dev/null | \ + command awk '/g . maps/ {print $NF}') + + COMPREPLY+=( $( compgen -W "$maps" -- "$cur" ) ) +} + +_bpftool_get_obj_map_idxs() +{ + local obj + + obj=$1 + + nmaps=$(objdump -j maps -t $obj 2>/dev/null | grep -c 'g . maps') + + COMPREPLY+=( $( compgen -W "$(seq 0 $((nmaps - 1)))" -- "$cur" ) ) +} + +_sysfs_get_netdevs() +{ + COMPREPLY+=( $( compgen -W "$( ls /sys/class/net 2>/dev/null )" -- \ + "$cur" ) ) +} + # For bpftool map update: retrieve type of the map to update. _bpftool_map_update_map_type() { @@ -153,6 +182,13 @@ _bpftool() local cur prev words objword _init_completion || return + # Deal with options + if [[ ${words[cword]} == -* ]]; then + local c='--version --json --pretty --bpffs' + COMPREPLY=( $( compgen -W "$c" -- "$cur" ) ) + return 0 + fi + # Deal with simplest keywords case $prev in help|hex|opcodes|visual) @@ -172,20 +208,23 @@ _bpftool() ;; esac - # Search for object and command - local object command cmdword - for (( cmdword=1; cmdword < ${#words[@]}-1; cmdword++ )); do - [[ -n $object ]] && command=${words[cmdword]} && break - [[ ${words[cmdword]} != -* ]] && object=${words[cmdword]} + # Remove all options so completions don't have to deal with them. 
+ local i + for (( i=1; i < ${#words[@]}; )); do + if [[ ${words[i]::1} == - ]]; then + words=( "${words[@]:0:i}" "${words[@]:i+1}" ) + [[ $i -le $cword ]] && cword=$(( cword - 1 )) + else + i=$(( ++i )) + fi done + cur=${words[cword]} + prev=${words[cword - 1]} + + local object=${words[1]} command=${words[2]} - if [[ -z $object ]]; then + if [[ -z $object || $cword -eq 1 ]]; then case $cur in - -*) - local c='--version --json --pretty' - COMPREPLY=( $( compgen -W "$c" -- "$cur" ) ) - return 0 - ;; *) COMPREPLY=( $( compgen -W "$( bpftool help 2>&1 | \ command sed \ @@ -204,12 +243,14 @@ _bpftool() # Completion depends on object and command in use case $object in prog) - case $prev in - id) - _bpftool_get_prog_ids - return 0 - ;; - esac + if [[ $command != "load" ]]; then + case $prev in + id) + _bpftool_get_prog_ids + return 0 + ;; + esac + fi local PROG_TYPE='id pinned tag' case $command in @@ -252,8 +293,57 @@ _bpftool() return 0 ;; load) - _filedir - return 0 + local obj + + if [[ ${#words[@]} -lt 6 ]]; then + _filedir + return 0 + fi + + obj=${words[3]} + + if [[ ${words[-4]} == "map" ]]; then + COMPREPLY=( $( compgen -W "id pinned" -- "$cur" ) ) + return 0 + fi + if [[ ${words[-3]} == "map" ]]; then + if [[ ${words[-2]} == "idx" ]]; then + _bpftool_get_obj_map_idxs $obj + elif [[ ${words[-2]} == "name" ]]; then + _bpftool_get_obj_map_names $obj + fi + return 0 + fi + if [[ ${words[-2]} == "map" ]]; then + COMPREPLY=( $( compgen -W "idx name" -- "$cur" ) ) + return 0 + fi + + case $prev in + type) + COMPREPLY=( $( compgen -W "socket kprobe kretprobe classifier action tracepoint raw_tracepoint xdp perf_event cgroup/skb cgroup/sock cgroup/dev lwt_in lwt_out lwt_xmit lwt_seg6local sockops sk_skb sk_msg lirc_mode2 cgroup/bind4 cgroup/bind6 cgroup/connect4 cgroup/connect6 cgroup/sendmsg4 cgroup/sendmsg6 cgroup/post_bind4 cgroup/post_bind6" -- \ + "$cur" ) ) + return 0 + ;; + id) + _bpftool_get_map_ids + return 0 + ;; + pinned) + _filedir + return 0 + ;; + dev) + _sysfs_get_netdevs + return 0 + ;; + *) + COMPREPLY=( $( compgen -W "map" -- "$cur" ) ) + _bpftool_once_attr 'type' + _bpftool_once_attr 'dev' + return 0 + ;; + esac ;; *) [[ $prev == $object ]] && \ @@ -404,6 +494,10 @@ _bpftool() _filedir return 0 ;; + tree) + _filedir + return 0 + ;; attach|detach) local ATTACH_TYPES='ingress egress sock_create sock_ops \ device bind4 bind6 post_bind4 post_bind6 connect4 \ @@ -445,7 +539,7 @@ _bpftool() *) [[ $prev == $object ]] && \ COMPREPLY=( $( compgen -W 'help attach detach \ - show list' -- "$cur" ) ) + show list tree' -- "$cur" ) ) ;; esac ;; diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c new file mode 100644 index 000000000000..55bc512a1831 --- /dev/null +++ b/tools/bpf/bpftool/btf_dumper.c @@ -0,0 +1,251 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2018 Facebook */ + +#include <ctype.h> +#include <stdio.h> /* for (FILE *) used by json_writer */ +#include <string.h> +#include <asm/byteorder.h> +#include <linux/bitops.h> +#include <linux/btf.h> +#include <linux/err.h> + +#include "btf.h" +#include "json_writer.h" +#include "main.h" + +#define BITS_PER_BYTE_MASK (BITS_PER_BYTE - 1) +#define BITS_PER_BYTE_MASKED(bits) ((bits) & BITS_PER_BYTE_MASK) +#define BITS_ROUNDDOWN_BYTES(bits) ((bits) >> 3) +#define BITS_ROUNDUP_BYTES(bits) \ + (BITS_ROUNDDOWN_BYTES(bits) + !!BITS_PER_BYTE_MASKED(bits)) + +static int btf_dumper_do_type(const struct btf_dumper *d, __u32 type_id, + __u8 bit_offset, const void *data); + +static void btf_dumper_ptr(const void 
*data, json_writer_t *jw, + bool is_plain_text) +{ + if (is_plain_text) + jsonw_printf(jw, "%p", *(unsigned long *)data); + else + jsonw_printf(jw, "%u", *(unsigned long *)data); +} + +static int btf_dumper_modifier(const struct btf_dumper *d, __u32 type_id, + const void *data) +{ + int actual_type_id; + + actual_type_id = btf__resolve_type(d->btf, type_id); + if (actual_type_id < 0) + return actual_type_id; + + return btf_dumper_do_type(d, actual_type_id, 0, data); +} + +static void btf_dumper_enum(const void *data, json_writer_t *jw) +{ + jsonw_printf(jw, "%d", *(int *)data); +} + +static int btf_dumper_array(const struct btf_dumper *d, __u32 type_id, + const void *data) +{ + const struct btf_type *t = btf__type_by_id(d->btf, type_id); + struct btf_array *arr = (struct btf_array *)(t + 1); + long long elem_size; + int ret = 0; + __u32 i; + + elem_size = btf__resolve_size(d->btf, arr->type); + if (elem_size < 0) + return elem_size; + + jsonw_start_array(d->jw); + for (i = 0; i < arr->nelems; i++) { + ret = btf_dumper_do_type(d, arr->type, 0, + data + i * elem_size); + if (ret) + break; + } + + jsonw_end_array(d->jw); + return ret; +} + +static void btf_dumper_int_bits(__u32 int_type, __u8 bit_offset, + const void *data, json_writer_t *jw, + bool is_plain_text) +{ + int left_shift_bits, right_shift_bits; + int nr_bits = BTF_INT_BITS(int_type); + int total_bits_offset; + int bytes_to_copy; + int bits_to_copy; + __u64 print_num; + + total_bits_offset = bit_offset + BTF_INT_OFFSET(int_type); + data += BITS_ROUNDDOWN_BYTES(total_bits_offset); + bit_offset = BITS_PER_BYTE_MASKED(total_bits_offset); + bits_to_copy = bit_offset + nr_bits; + bytes_to_copy = BITS_ROUNDUP_BYTES(bits_to_copy); + + print_num = 0; + memcpy(&print_num, data, bytes_to_copy); +#if defined(__BIG_ENDIAN_BITFIELD) + left_shift_bits = bit_offset; +#elif defined(__LITTLE_ENDIAN_BITFIELD) + left_shift_bits = 64 - bits_to_copy; +#else +#error neither big nor little endian +#endif + right_shift_bits = 64 - nr_bits; + + print_num <<= left_shift_bits; + print_num >>= right_shift_bits; + if (is_plain_text) + jsonw_printf(jw, "0x%llx", print_num); + else + jsonw_printf(jw, "%llu", print_num); +} + +static int btf_dumper_int(const struct btf_type *t, __u8 bit_offset, + const void *data, json_writer_t *jw, + bool is_plain_text) +{ + __u32 *int_type; + __u32 nr_bits; + + int_type = (__u32 *)(t + 1); + nr_bits = BTF_INT_BITS(*int_type); + /* if this is bit field */ + if (bit_offset || BTF_INT_OFFSET(*int_type) || + BITS_PER_BYTE_MASKED(nr_bits)) { + btf_dumper_int_bits(*int_type, bit_offset, data, jw, + is_plain_text); + return 0; + } + + switch (BTF_INT_ENCODING(*int_type)) { + case 0: + if (BTF_INT_BITS(*int_type) == 64) + jsonw_printf(jw, "%lu", *(__u64 *)data); + else if (BTF_INT_BITS(*int_type) == 32) + jsonw_printf(jw, "%u", *(__u32 *)data); + else if (BTF_INT_BITS(*int_type) == 16) + jsonw_printf(jw, "%hu", *(__u16 *)data); + else if (BTF_INT_BITS(*int_type) == 8) + jsonw_printf(jw, "%hhu", *(__u8 *)data); + else + btf_dumper_int_bits(*int_type, bit_offset, data, jw, + is_plain_text); + break; + case BTF_INT_SIGNED: + if (BTF_INT_BITS(*int_type) == 64) + jsonw_printf(jw, "%ld", *(long long *)data); + else if (BTF_INT_BITS(*int_type) == 32) + jsonw_printf(jw, "%d", *(int *)data); + else if (BTF_INT_BITS(*int_type) == 16) + jsonw_printf(jw, "%hd", *(short *)data); + else if (BTF_INT_BITS(*int_type) == 8) + jsonw_printf(jw, "%hhd", *(char *)data); + else + btf_dumper_int_bits(*int_type, bit_offset, data, jw, + is_plain_text); + 
break; + case BTF_INT_CHAR: + if (isprint(*(char *)data)) + jsonw_printf(jw, "\"%c\"", *(char *)data); + else + if (is_plain_text) + jsonw_printf(jw, "0x%hhx", *(char *)data); + else + jsonw_printf(jw, "\"\\u00%02hhx\"", + *(char *)data); + break; + case BTF_INT_BOOL: + jsonw_bool(jw, *(int *)data); + break; + default: + /* shouldn't happen */ + return -EINVAL; + } + + return 0; +} + +static int btf_dumper_struct(const struct btf_dumper *d, __u32 type_id, + const void *data) +{ + const struct btf_type *t; + struct btf_member *m; + const void *data_off; + int ret = 0; + int i, vlen; + + t = btf__type_by_id(d->btf, type_id); + if (!t) + return -EINVAL; + + vlen = BTF_INFO_VLEN(t->info); + jsonw_start_object(d->jw); + m = (struct btf_member *)(t + 1); + + for (i = 0; i < vlen; i++) { + data_off = data + BITS_ROUNDDOWN_BYTES(m[i].offset); + jsonw_name(d->jw, btf__name_by_offset(d->btf, m[i].name_off)); + ret = btf_dumper_do_type(d, m[i].type, + BITS_PER_BYTE_MASKED(m[i].offset), + data_off); + if (ret) + break; + } + + jsonw_end_object(d->jw); + + return ret; +} + +static int btf_dumper_do_type(const struct btf_dumper *d, __u32 type_id, + __u8 bit_offset, const void *data) +{ + const struct btf_type *t = btf__type_by_id(d->btf, type_id); + + switch (BTF_INFO_KIND(t->info)) { + case BTF_KIND_INT: + return btf_dumper_int(t, bit_offset, data, d->jw, + d->is_plain_text); + case BTF_KIND_STRUCT: + case BTF_KIND_UNION: + return btf_dumper_struct(d, type_id, data); + case BTF_KIND_ARRAY: + return btf_dumper_array(d, type_id, data); + case BTF_KIND_ENUM: + btf_dumper_enum(data, d->jw); + return 0; + case BTF_KIND_PTR: + btf_dumper_ptr(data, d->jw, d->is_plain_text); + return 0; + case BTF_KIND_UNKN: + jsonw_printf(d->jw, "(unknown)"); + return 0; + case BTF_KIND_FWD: + /* map key or value can't be forward */ + jsonw_printf(d->jw, "(fwd-kind-invalid)"); + return -EINVAL; + case BTF_KIND_TYPEDEF: + case BTF_KIND_VOLATILE: + case BTF_KIND_CONST: + case BTF_KIND_RESTRICT: + return btf_dumper_modifier(d, type_id, data); + default: + jsonw_printf(d->jw, "(unsupported-kind"); + return -EINVAL; + } +} + +int btf_dumper_type(const struct btf_dumper *d, __u32 type_id, + const void *data) +{ + return btf_dumper_do_type(d, type_id, 0, data); +} diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c index 16bee011e16c..ee7a9765c6b3 100644 --- a/tools/bpf/bpftool/cgroup.c +++ b/tools/bpf/bpftool/cgroup.c @@ -2,7 +2,12 @@ // Copyright (C) 2017 Facebook // Author: Roman Gushchin <guro@fb.com> +#define _XOPEN_SOURCE 500 +#include <errno.h> #include <fcntl.h> +#include <ftw.h> +#include <mntent.h> +#include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/stat.h> @@ -53,7 +58,8 @@ static enum bpf_attach_type parse_attach_type(const char *str) } static int show_bpf_prog(int id, const char *attach_type_str, - const char *attach_flags_str) + const char *attach_flags_str, + int level) { struct bpf_prog_info info = {}; __u32 info_len = sizeof(info); @@ -78,7 +84,8 @@ static int show_bpf_prog(int id, const char *attach_type_str, jsonw_string_field(json_wtr, "name", info.name); jsonw_end_object(json_wtr); } else { - printf("%-8u %-15s %-15s %-15s\n", info.id, + printf("%s%-8u %-15s %-15s %-15s\n", level ? 
" " : "", + info.id, attach_type_str, attach_flags_str, info.name); @@ -88,7 +95,20 @@ static int show_bpf_prog(int id, const char *attach_type_str, return 0; } -static int show_attached_bpf_progs(int cgroup_fd, enum bpf_attach_type type) +static int count_attached_bpf_progs(int cgroup_fd, enum bpf_attach_type type) +{ + __u32 prog_cnt = 0; + int ret; + + ret = bpf_prog_query(cgroup_fd, type, 0, NULL, NULL, &prog_cnt); + if (ret) + return -1; + + return prog_cnt; +} + +static int show_attached_bpf_progs(int cgroup_fd, enum bpf_attach_type type, + int level) { __u32 prog_ids[1024] = {0}; char *attach_flags_str; @@ -123,7 +143,7 @@ static int show_attached_bpf_progs(int cgroup_fd, enum bpf_attach_type type) for (iter = 0; iter < prog_cnt; iter++) show_bpf_prog(prog_ids[iter], attach_type_strings[type], - attach_flags_str); + attach_flags_str, level); return 0; } @@ -161,7 +181,7 @@ static int do_show(int argc, char **argv) * If we were able to get the show for at least one * attach type, let's return 0. */ - if (show_attached_bpf_progs(cgroup_fd, type) == 0) + if (show_attached_bpf_progs(cgroup_fd, type, 0) == 0) ret = 0; } @@ -173,6 +193,143 @@ exit: return ret; } +/* + * To distinguish nftw() errors and do_show_tree_fn() errors + * and avoid duplicating error messages, let's return -2 + * from do_show_tree_fn() in case of error. + */ +#define NFTW_ERR -1 +#define SHOW_TREE_FN_ERR -2 +static int do_show_tree_fn(const char *fpath, const struct stat *sb, + int typeflag, struct FTW *ftw) +{ + enum bpf_attach_type type; + bool skip = true; + int cgroup_fd; + + if (typeflag != FTW_D) + return 0; + + cgroup_fd = open(fpath, O_RDONLY); + if (cgroup_fd < 0) { + p_err("can't open cgroup %s: %s", fpath, strerror(errno)); + return SHOW_TREE_FN_ERR; + } + + for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) { + int count = count_attached_bpf_progs(cgroup_fd, type); + + if (count < 0 && errno != EINVAL) { + p_err("can't query bpf programs attached to %s: %s", + fpath, strerror(errno)); + close(cgroup_fd); + return SHOW_TREE_FN_ERR; + } + if (count > 0) { + skip = false; + break; + } + } + + if (skip) { + close(cgroup_fd); + return 0; + } + + if (json_output) { + jsonw_start_object(json_wtr); + jsonw_string_field(json_wtr, "cgroup", fpath); + jsonw_name(json_wtr, "programs"); + jsonw_start_array(json_wtr); + } else { + printf("%s\n", fpath); + } + + for (type = 0; type < __MAX_BPF_ATTACH_TYPE; type++) + show_attached_bpf_progs(cgroup_fd, type, ftw->level); + + if (json_output) { + jsonw_end_array(json_wtr); + jsonw_end_object(json_wtr); + } + + close(cgroup_fd); + + return 0; +} + +static char *find_cgroup_root(void) +{ + struct mntent *mnt; + FILE *f; + + f = fopen("/proc/mounts", "r"); + if (f == NULL) + return NULL; + + while ((mnt = getmntent(f))) { + if (strcmp(mnt->mnt_type, "cgroup2") == 0) { + fclose(f); + return strdup(mnt->mnt_dir); + } + } + + fclose(f); + return NULL; +} + +static int do_show_tree(int argc, char **argv) +{ + char *cgroup_root; + int ret; + + switch (argc) { + case 0: + cgroup_root = find_cgroup_root(); + if (!cgroup_root) { + p_err("cgroup v2 isn't mounted"); + return -1; + } + break; + case 1: + cgroup_root = argv[0]; + break; + default: + p_err("too many parameters for cgroup tree"); + return -1; + } + + + if (json_output) + jsonw_start_array(json_wtr); + else + printf("%s\n" + "%-8s %-15s %-15s %-15s\n", + "CgroupPath", + "ID", "AttachType", "AttachFlags", "Name"); + + switch (nftw(cgroup_root, do_show_tree_fn, 1024, FTW_MOUNT)) { + case NFTW_ERR: + p_err("can't iterate 
over %s: %s", cgroup_root, + strerror(errno)); + ret = -1; + break; + case SHOW_TREE_FN_ERR: + ret = -1; + break; + default: + ret = 0; + } + + if (json_output) + jsonw_end_array(json_wtr); + + if (argc == 0) + free(cgroup_root); + + return ret; +} + static int do_attach(int argc, char **argv) { enum bpf_attach_type attach_type; @@ -289,6 +446,7 @@ static int do_help(int argc, char **argv) fprintf(stderr, "Usage: %s %s { show | list } CGROUP\n" + " %s %s tree [CGROUP_ROOT]\n" " %s %s attach CGROUP ATTACH_TYPE PROG [ATTACH_FLAGS]\n" " %s %s detach CGROUP ATTACH_TYPE PROG\n" " %s %s help\n" @@ -298,6 +456,7 @@ static int do_help(int argc, char **argv) " " HELP_SPEC_PROGRAM "\n" " " HELP_SPEC_OPTIONS "\n" "", + bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2]); @@ -307,6 +466,7 @@ static int do_help(int argc, char **argv) static const struct cmd cmds[] = { { "show", do_show }, { "list", do_show }, + { "tree", do_show_tree }, { "attach", do_attach }, { "detach", do_detach }, { "help", do_help }, diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c index 3f140eff039f..b3a0709ea7ed 100644 --- a/tools/bpf/bpftool/common.c +++ b/tools/bpf/bpftool/common.c @@ -31,8 +31,6 @@ * SOFTWARE. */ -/* Author: Jakub Kicinski <kubakici@wp.pl> */ - #include <ctype.h> #include <errno.h> #include <fcntl.h> diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c index eea7f14355f3..d15a62be6cf0 100644 --- a/tools/bpf/bpftool/main.c +++ b/tools/bpf/bpftool/main.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2017 Netronome Systems, Inc. + * Copyright (C) 2017-2018 Netronome Systems, Inc. * * This software is dual licensed under the GNU General License Version 2, * June 1991 as shown in the file COPYING in the top-level directory of this @@ -31,8 +31,6 @@ * SOFTWARE. */ -/* Author: Jakub Kicinski <kubakici@wp.pl> */ - #include <bfd.h> #include <ctype.h> #include <errno.h> diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h index 63fdb310b9a4..238e734d75b3 100644 --- a/tools/bpf/bpftool/main.h +++ b/tools/bpf/bpftool/main.h @@ -31,8 +31,6 @@ * SOFTWARE. 
*/ -/* Author: Jakub Kicinski <kubakici@wp.pl> */ - #ifndef __BPF_TOOL_H #define __BPF_TOOL_H @@ -44,6 +42,7 @@ #include <linux/compiler.h> #include <linux/kernel.h> #include <linux/hashtable.h> +#include <tools/libc_compat.h> #include "json_writer.h" @@ -52,6 +51,21 @@ #define NEXT_ARG() ({ argc--; argv++; if (argc < 0) usage(); }) #define NEXT_ARGP() ({ (*argc)--; (*argv)++; if (*argc < 0) usage(); }) #define BAD_ARG() ({ p_err("what is '%s'?", *argv); -1; }) +#define GET_ARG() ({ argc--; *argv++; }) +#define REQ_ARGS(cnt) \ + ({ \ + int _cnt = (cnt); \ + bool _res; \ + \ + if (argc < _cnt) { \ + p_err("'%s' needs at least %d arguments, %d found", \ + argv[-1], _cnt, argc); \ + _res = false; \ + } else { \ + _res = true; \ + } \ + _res; \ + }) #define ERR_MAX_LEN 1024 @@ -61,6 +75,8 @@ "PROG := { id PROG_ID | pinned FILE | tag PROG_TAG }" #define HELP_SPEC_OPTIONS \ "OPTIONS := { {-j|--json} [{-p|--pretty}] | {-f|--bpffs} }" +#define HELP_SPEC_MAP \ + "MAP := { id MAP_ID | pinned FILE }" enum bpf_obj_type { BPF_OBJ_UNKNOWN, @@ -122,6 +138,7 @@ int do_cgroup(int argc, char **arg); int do_perf(int argc, char **arg); int prog_parse_fd(int *argc, char ***argv); +int map_parse_fd(int *argc, char ***argv); int map_parse_fd_and_info(int *argc, char ***argv, void *info, __u32 *info_len); void disasm_print_insn(unsigned char *image, ssize_t len, int opcodes, @@ -133,4 +150,19 @@ unsigned int get_page_size(void); unsigned int get_possible_cpus(void); const char *ifindex_to_bfd_name_ns(__u32 ifindex, __u64 ns_dev, __u64 ns_ino); +struct btf_dumper { + const struct btf *btf; + json_writer_t *jw; + bool is_plain_text; +}; + +/* btf_dumper_type - print data along with type information + * @d: an instance containing context for dumping types + * @type_id: index in btf->types array. this points to the type to be dumped + * @data: pointer the actual data, i.e. the values to be printed + * + * Returns zero on success and negative error code otherwise + */ +int btf_dumper_type(const struct btf_dumper *d, __u32 type_id, + const void *data); #endif diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c index f74a8bcbda87..b2ec20e562bd 100644 --- a/tools/bpf/bpftool/map.c +++ b/tools/bpf/bpftool/map.c @@ -31,11 +31,10 @@ * SOFTWARE. 
*/ -/* Author: Jakub Kicinski <kubakici@wp.pl> */ - #include <assert.h> #include <errno.h> #include <fcntl.h> +#include <linux/err.h> #include <linux/kernel.h> #include <stdbool.h> #include <stdio.h> @@ -47,6 +46,8 @@ #include <bpf.h> +#include "btf.h" +#include "json_writer.h" #include "main.h" static const char * const map_type_name[] = { @@ -68,6 +69,7 @@ static const char * const map_type_name[] = { [BPF_MAP_TYPE_SOCKMAP] = "sockmap", [BPF_MAP_TYPE_CPUMAP] = "cpumap", [BPF_MAP_TYPE_SOCKHASH] = "sockhash", + [BPF_MAP_TYPE_CGROUP_STORAGE] = "cgroup_storage", }; static bool map_is_per_cpu(__u32 type) @@ -97,7 +99,7 @@ static void *alloc_value(struct bpf_map_info *info) return malloc(info->value_size); } -static int map_parse_fd(int *argc, char ***argv) +int map_parse_fd(int *argc, char ***argv) { int fd; @@ -152,8 +154,109 @@ int map_parse_fd_and_info(int *argc, char ***argv, void *info, __u32 *info_len) return fd; } +static int do_dump_btf(const struct btf_dumper *d, + struct bpf_map_info *map_info, void *key, + void *value) +{ + int ret; + + /* start of key-value pair */ + jsonw_start_object(d->jw); + + jsonw_name(d->jw, "key"); + + ret = btf_dumper_type(d, map_info->btf_key_type_id, key); + if (ret) + goto err_end_obj; + + jsonw_name(d->jw, "value"); + + ret = btf_dumper_type(d, map_info->btf_value_type_id, value); + +err_end_obj: + /* end of key-value pair */ + jsonw_end_object(d->jw); + + return ret; +} + +static int get_btf(struct bpf_map_info *map_info, struct btf **btf) +{ + struct bpf_btf_info btf_info = { 0 }; + __u32 len = sizeof(btf_info); + __u32 last_size; + int btf_fd; + void *ptr; + int err; + + err = 0; + *btf = NULL; + btf_fd = bpf_btf_get_fd_by_id(map_info->btf_id); + if (btf_fd < 0) + return 0; + + /* we won't know btf_size until we call bpf_obj_get_info_by_fd(). so + * let's start with a sane default - 4KiB here - and resize it only if + * bpf_obj_get_info_by_fd() needs a bigger buffer. 
+ */ + btf_info.btf_size = 4096; + last_size = btf_info.btf_size; + ptr = malloc(last_size); + if (!ptr) { + err = -ENOMEM; + goto exit_free; + } + + bzero(ptr, last_size); + btf_info.btf = ptr_to_u64(ptr); + err = bpf_obj_get_info_by_fd(btf_fd, &btf_info, &len); + + if (!err && btf_info.btf_size > last_size) { + void *temp_ptr; + + last_size = btf_info.btf_size; + temp_ptr = realloc(ptr, last_size); + if (!temp_ptr) { + err = -ENOMEM; + goto exit_free; + } + ptr = temp_ptr; + bzero(ptr, last_size); + btf_info.btf = ptr_to_u64(ptr); + err = bpf_obj_get_info_by_fd(btf_fd, &btf_info, &len); + } + + if (err || btf_info.btf_size > last_size) { + err = errno; + goto exit_free; + } + + *btf = btf__new((__u8 *)btf_info.btf, btf_info.btf_size, NULL); + if (IS_ERR(*btf)) { + err = PTR_ERR(*btf); + *btf = NULL; + } + +exit_free: + close(btf_fd); + free(ptr); + + return err; +} + +static json_writer_t *get_btf_writer(void) +{ + json_writer_t *jw = jsonw_new(stdout); + + if (!jw) + return NULL; + jsonw_pretty(jw, true); + + return jw; +} + static void print_entry_json(struct bpf_map_info *info, unsigned char *key, - unsigned char *value) + unsigned char *value, struct btf *btf) { jsonw_start_object(json_wtr); @@ -162,6 +265,16 @@ static void print_entry_json(struct bpf_map_info *info, unsigned char *key, print_hex_data_json(key, info->key_size); jsonw_name(json_wtr, "value"); print_hex_data_json(value, info->value_size); + if (btf) { + struct btf_dumper d = { + .btf = btf, + .jw = json_wtr, + .is_plain_text = false, + }; + + jsonw_name(json_wtr, "formatted"); + do_dump_btf(&d, info, key, value); + } } else { unsigned int i, n, step; @@ -514,10 +627,12 @@ static int do_show(int argc, char **argv) static int do_dump(int argc, char **argv) { + struct bpf_map_info info = {}; void *key, *value, *prev_key; unsigned int num_elems = 0; - struct bpf_map_info info = {}; __u32 len = sizeof(info); + json_writer_t *btf_wtr; + struct btf *btf = NULL; int err; int fd; @@ -543,8 +658,27 @@ static int do_dump(int argc, char **argv) } prev_key = NULL; + + err = get_btf(&info, &btf); + if (err) { + p_err("failed to get btf"); + goto exit_free; + } + if (json_output) jsonw_start_array(json_wtr); + else + if (btf) { + btf_wtr = get_btf_writer(); + if (!btf_wtr) { + p_info("failed to create json writer for btf. falling back to plain output"); + btf__free(btf); + btf = NULL; + } else { + jsonw_start_array(btf_wtr); + } + } + while (true) { err = bpf_map_get_next_key(fd, prev_key, key); if (err) { @@ -555,9 +689,19 @@ static int do_dump(int argc, char **argv) if (!bpf_map_lookup_elem(fd, key, value)) { if (json_output) - print_entry_json(&info, key, value); + print_entry_json(&info, key, value, btf); else - print_entry_plain(&info, key, value); + if (btf) { + struct btf_dumper d = { + .btf = btf, + .jw = btf_wtr, + .is_plain_text = true, + }; + + do_dump_btf(&d, &info, key, value); + } else { + print_entry_plain(&info, key, value); + } } else { if (json_output) { jsonw_name(json_wtr, "key"); @@ -580,14 +724,19 @@ static int do_dump(int argc, char **argv) if (json_output) jsonw_end_array(json_wtr); - else + else if (btf) { + jsonw_end_array(btf_wtr); + jsonw_destroy(&btf_wtr); + } else { printf("Found %u element%s\n", num_elems, num_elems != 1 ? 
"s" : ""); + } exit_free: free(key); free(value); close(fd); + btf__free(btf); return err; } @@ -643,6 +792,8 @@ static int do_lookup(int argc, char **argv) { struct bpf_map_info info = {}; __u32 len = sizeof(info); + json_writer_t *btf_wtr; + struct btf *btf = NULL; void *key, *value; int err; int fd; @@ -667,27 +818,60 @@ static int do_lookup(int argc, char **argv) goto exit_free; err = bpf_map_lookup_elem(fd, key, value); - if (!err) { - if (json_output) - print_entry_json(&info, key, value); - else + if (err) { + if (errno == ENOENT) { + if (json_output) { + jsonw_null(json_wtr); + } else { + printf("key:\n"); + fprint_hex(stdout, key, info.key_size, " "); + printf("\n\nNot found\n"); + } + } else { + p_err("lookup failed: %s", strerror(errno)); + } + + goto exit_free; + } + + /* here means bpf_map_lookup_elem() succeeded */ + err = get_btf(&info, &btf); + if (err) { + p_err("failed to get btf"); + goto exit_free; + } + + if (json_output) { + print_entry_json(&info, key, value, btf); + } else if (btf) { + /* if here json_wtr wouldn't have been initialised, + * so let's create separate writer for btf + */ + btf_wtr = get_btf_writer(); + if (!btf_wtr) { + p_info("failed to create json writer for btf. falling back to plain output"); + btf__free(btf); + btf = NULL; print_entry_plain(&info, key, value); - } else if (errno == ENOENT) { - if (json_output) { - jsonw_null(json_wtr); } else { - printf("key:\n"); - fprint_hex(stdout, key, info.key_size, " "); - printf("\n\nNot found\n"); + struct btf_dumper d = { + .btf = btf, + .jw = btf_wtr, + .is_plain_text = true, + }; + + do_dump_btf(&d, &info, key, value); + jsonw_destroy(&btf_wtr); } } else { - p_err("lookup failed: %s", strerror(errno)); + print_entry_plain(&info, key, value); } exit_free: free(key); free(value); close(fd); + btf__free(btf); return err; } @@ -830,7 +1014,7 @@ static int do_help(int argc, char **argv) " %s %s event_pipe MAP [cpu N index M]\n" " %s %s help\n" "\n" - " MAP := { id MAP_ID | pinned FILE }\n" + " " HELP_SPEC_MAP "\n" " DATA := { [hex] BYTES }\n" " " HELP_SPEC_PROGRAM "\n" " VALUE := { DATA | MAP | PROG }\n" diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c index 959aa53ab678..dce960d22106 100644 --- a/tools/bpf/bpftool/prog.c +++ b/tools/bpf/bpftool/prog.c @@ -1,5 +1,5 @@ /* - * Copyright (C) 2017 Netronome Systems, Inc. + * Copyright (C) 2017-2018 Netronome Systems, Inc. * * This software is dual licensed under the GNU General License Version 2, * June 1991 as shown in the file COPYING in the top-level directory of this @@ -31,8 +31,7 @@ * SOFTWARE. 
*/ -/* Author: Jakub Kicinski <kubakici@wp.pl> */ - +#define _GNU_SOURCE #include <errno.h> #include <fcntl.h> #include <stdarg.h> @@ -41,9 +40,12 @@ #include <string.h> #include <time.h> #include <unistd.h> +#include <net/if.h> #include <sys/types.h> #include <sys/stat.h> +#include <linux/err.h> + #include <bpf.h> #include <libbpf.h> @@ -681,31 +683,247 @@ static int do_pin(int argc, char **argv) return err; } +struct map_replace { + int idx; + int fd; + char *name; +}; + +int map_replace_compar(const void *p1, const void *p2) +{ + const struct map_replace *a = p1, *b = p2; + + return a->idx - b->idx; +} + static int do_load(int argc, char **argv) { + enum bpf_attach_type expected_attach_type; + struct bpf_object_open_attr attr = { + .prog_type = BPF_PROG_TYPE_UNSPEC, + }; + struct map_replace *map_replace = NULL; + unsigned int old_map_fds = 0; + struct bpf_program *prog; struct bpf_object *obj; - int prog_fd; - - if (argc != 2) - usage(); + struct bpf_map *map; + const char *pinfile; + unsigned int i, j; + __u32 ifindex = 0; + int idx, err; - if (bpf_prog_load(argv[0], BPF_PROG_TYPE_UNSPEC, &obj, &prog_fd)) { - p_err("failed to load program"); + if (!REQ_ARGS(2)) return -1; + attr.file = GET_ARG(); + pinfile = GET_ARG(); + + while (argc) { + if (is_prefix(*argv, "type")) { + char *type; + + NEXT_ARG(); + + if (attr.prog_type != BPF_PROG_TYPE_UNSPEC) { + p_err("program type already specified"); + goto err_free_reuse_maps; + } + if (!REQ_ARGS(1)) + goto err_free_reuse_maps; + + /* Put a '/' at the end of type to appease libbpf */ + type = malloc(strlen(*argv) + 2); + if (!type) { + p_err("mem alloc failed"); + goto err_free_reuse_maps; + } + *type = 0; + strcat(type, *argv); + strcat(type, "/"); + + err = libbpf_prog_type_by_name(type, &attr.prog_type, + &expected_attach_type); + free(type); + if (err < 0) { + p_err("unknown program type '%s'", *argv); + goto err_free_reuse_maps; + } + NEXT_ARG(); + } else if (is_prefix(*argv, "map")) { + char *endptr, *name; + int fd; + + NEXT_ARG(); + + if (!REQ_ARGS(4)) + goto err_free_reuse_maps; + + if (is_prefix(*argv, "idx")) { + NEXT_ARG(); + + idx = strtoul(*argv, &endptr, 0); + if (*endptr) { + p_err("can't parse %s as IDX", *argv); + goto err_free_reuse_maps; + } + name = NULL; + } else if (is_prefix(*argv, "name")) { + NEXT_ARG(); + + name = *argv; + idx = -1; + } else { + p_err("expected 'idx' or 'name', got: '%s'?", + *argv); + goto err_free_reuse_maps; + } + NEXT_ARG(); + + fd = map_parse_fd(&argc, &argv); + if (fd < 0) + goto err_free_reuse_maps; + + map_replace = reallocarray(map_replace, old_map_fds + 1, + sizeof(*map_replace)); + if (!map_replace) { + p_err("mem alloc failed"); + goto err_free_reuse_maps; + } + map_replace[old_map_fds].idx = idx; + map_replace[old_map_fds].name = name; + map_replace[old_map_fds].fd = fd; + old_map_fds++; + } else if (is_prefix(*argv, "dev")) { + NEXT_ARG(); + + if (ifindex) { + p_err("offload device already specified"); + goto err_free_reuse_maps; + } + if (!REQ_ARGS(1)) + goto err_free_reuse_maps; + + ifindex = if_nametoindex(*argv); + if (!ifindex) { + p_err("unrecognized netdevice '%s': %s", + *argv, strerror(errno)); + goto err_free_reuse_maps; + } + NEXT_ARG(); + } else { + p_err("expected no more arguments, 'type', 'map' or 'dev', got: '%s'?", + *argv); + goto err_free_reuse_maps; + } + } + + obj = bpf_object__open_xattr(&attr); + if (IS_ERR_OR_NULL(obj)) { + p_err("failed to open object file"); + goto err_free_reuse_maps; + } + + prog = bpf_program__next(NULL, obj); + if (!prog) { + p_err("object 
file doesn't contain any bpf program"); + goto err_close_obj; + } + + bpf_program__set_ifindex(prog, ifindex); + if (attr.prog_type == BPF_PROG_TYPE_UNSPEC) { + const char *sec_name = bpf_program__title(prog, false); + + err = libbpf_prog_type_by_name(sec_name, &attr.prog_type, + &expected_attach_type); + if (err < 0) { + p_err("failed to guess program type based on section name %s\n", + sec_name); + goto err_close_obj; + } + } + bpf_program__set_type(prog, attr.prog_type); + bpf_program__set_expected_attach_type(prog, expected_attach_type); + + qsort(map_replace, old_map_fds, sizeof(*map_replace), + map_replace_compar); + + /* After the sort maps by name will be first on the list, because they + * have idx == -1. Resolve them. + */ + j = 0; + while (j < old_map_fds && map_replace[j].name) { + i = 0; + bpf_map__for_each(map, obj) { + if (!strcmp(bpf_map__name(map), map_replace[j].name)) { + map_replace[j].idx = i; + break; + } + i++; + } + if (map_replace[j].idx == -1) { + p_err("unable to find map '%s'", map_replace[j].name); + goto err_close_obj; + } + j++; + } + /* Resort if any names were resolved */ + if (j) + qsort(map_replace, old_map_fds, sizeof(*map_replace), + map_replace_compar); + + /* Set ifindex and name reuse */ + j = 0; + idx = 0; + bpf_map__for_each(map, obj) { + if (!bpf_map__is_offload_neutral(map)) + bpf_map__set_ifindex(map, ifindex); + + if (j < old_map_fds && idx == map_replace[j].idx) { + err = bpf_map__reuse_fd(map, map_replace[j++].fd); + if (err) { + p_err("unable to set up map reuse: %d", err); + goto err_close_obj; + } + + /* Next reuse wants to apply to the same map */ + if (j < old_map_fds && map_replace[j].idx == idx) { + p_err("replacement for map idx %d specified more than once", + idx); + goto err_close_obj; + } + } + + idx++; + } + if (j < old_map_fds) { + p_err("map idx '%d' not used", map_replace[j].idx); + goto err_close_obj; + } + + err = bpf_object__load(obj); + if (err) { + p_err("failed to load object file"); + goto err_close_obj; } - if (do_pin_fd(prog_fd, argv[1])) + if (do_pin_fd(bpf_program__fd(prog), pinfile)) goto err_close_obj; if (json_output) jsonw_null(json_wtr); bpf_object__close(obj); + for (i = 0; i < old_map_fds; i++) + close(map_replace[i].fd); + free(map_replace); return 0; err_close_obj: bpf_object__close(obj); +err_free_reuse_maps: + for (i = 0; i < old_map_fds; i++) + close(map_replace[i].fd); + free(map_replace); return -1; } @@ -721,10 +939,19 @@ static int do_help(int argc, char **argv) " %s %s dump xlated PROG [{ file FILE | opcodes | visual }]\n" " %s %s dump jited PROG [{ file FILE | opcodes }]\n" " %s %s pin PROG FILE\n" - " %s %s load OBJ FILE\n" + " %s %s load OBJ FILE [type TYPE] [dev NAME] \\\n" + " [map { idx IDX | name NAME } MAP]\n" " %s %s help\n" "\n" + " " HELP_SPEC_MAP "\n" " " HELP_SPEC_PROGRAM "\n" + " TYPE := { socket | kprobe | kretprobe | classifier | action |\n" + " tracepoint | raw_tracepoint | xdp | perf_event | cgroup/skb |\n" + " cgroup/sock | cgroup/dev | lwt_in | lwt_out | lwt_xmit |\n" + " lwt_seg6local | sockops | sk_skb | sk_msg | lirc_mode2 |\n" + " cgroup/bind4 | cgroup/bind6 | cgroup/post_bind4 |\n" + " cgroup/post_bind6 | cgroup/connect4 | cgroup/connect6 |\n" + " cgroup/sendmsg4 | cgroup/sendmsg6 }\n" " " HELP_SPEC_OPTIONS "\n" "", bin_name, argv[-2], bin_name, argv[-2], bin_name, argv[-2], diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c index b97f1da60dd1..3284759df98a 100644 --- a/tools/bpf/bpftool/xlated_dumper.c +++ b/tools/bpf/bpftool/xlated_dumper.c 
@@ -35,6 +35,7 @@ * POSSIBILITY OF SUCH DAMAGE. */ +#define _GNU_SOURCE #include <stdarg.h> #include <stdio.h> #include <stdlib.h> @@ -66,9 +67,8 @@ void kernel_syms_load(struct dump_data *dd) while (!feof(fp)) { if (!fgets(buff, sizeof(buff), fp)) break; - tmp = realloc(dd->sym_mapping, - (dd->sym_count + 1) * - sizeof(*dd->sym_mapping)); + tmp = reallocarray(dd->sym_mapping, dd->sym_count + 1, + sizeof(*dd->sym_mapping)); if (!tmp) { out: free(dd->sym_mapping); diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature index 5b6dda3b1ca8..f216b2f5c3d7 100644 --- a/tools/build/Makefile.feature +++ b/tools/build/Makefile.feature @@ -57,6 +57,7 @@ FEATURE_TESTS_BASIC := \ libunwind-aarch64 \ pthread-attr-setaffinity-np \ pthread-barrier \ + reallocarray \ stackprotector-all \ timerfd \ libdw-dwarf-unwind \ diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile index dac9563b5470..0516259be70f 100644 --- a/tools/build/feature/Makefile +++ b/tools/build/feature/Makefile @@ -14,6 +14,7 @@ FILES= \ test-libaudit.bin \ test-libbfd.bin \ test-disassembler-four-args.bin \ + test-reallocarray.bin \ test-liberty.bin \ test-liberty-z.bin \ test-cplus-demangle.bin \ @@ -204,6 +205,9 @@ $(OUTPUT)test-libbfd.bin: $(OUTPUT)test-disassembler-four-args.bin: $(BUILD) -DPACKAGE='"perf"' -lbfd -lopcodes +$(OUTPUT)test-reallocarray.bin: + $(BUILD) + $(OUTPUT)test-liberty.bin: $(CC) $(CFLAGS) -Wall -Werror -o $@ test-libbfd.c -DPACKAGE='"perf"' $(LDFLAGS) -lbfd -ldl -liberty diff --git a/tools/build/feature/test-reallocarray.c b/tools/build/feature/test-reallocarray.c new file mode 100644 index 000000000000..8170de35150d --- /dev/null +++ b/tools/build/feature/test-reallocarray.c @@ -0,0 +1,8 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE +#include <stdlib.h> + +int main(void) +{ + return !!reallocarray(NULL, 1, 1); +} diff --git a/tools/include/linux/compiler-gcc.h b/tools/include/linux/compiler-gcc.h index 70fe61295733..0d35f18006a1 100644 --- a/tools/include/linux/compiler-gcc.h +++ b/tools/include/linux/compiler-gcc.h @@ -36,3 +36,7 @@ #endif #define __printf(a, b) __attribute__((format(printf, a, b))) #define __scanf(a, b) __attribute__((format(scanf, a, b))) + +#if GCC_VERSION >= 50100 +#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1 +#endif diff --git a/tools/include/linux/overflow.h b/tools/include/linux/overflow.h new file mode 100644 index 000000000000..8712ff70995f --- /dev/null +++ b/tools/include/linux/overflow.h @@ -0,0 +1,278 @@ +/* SPDX-License-Identifier: GPL-2.0 OR MIT */ +#ifndef __LINUX_OVERFLOW_H +#define __LINUX_OVERFLOW_H + +#include <linux/compiler.h> + +/* + * In the fallback code below, we need to compute the minimum and + * maximum values representable in a given type. These macros may also + * be useful elsewhere, so we provide them outside the + * COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW block. + * + * It would seem more obvious to do something like + * + * #define type_min(T) (T)(is_signed_type(T) ? (T)1 << (8*sizeof(T)-1) : 0) + * #define type_max(T) (T)(is_signed_type(T) ? ((T)1 << (8*sizeof(T)-1)) - 1 : ~(T)0) + * + * Unfortunately, the middle expressions, strictly speaking, have + * undefined behaviour, and at least some versions of gcc warn about + * the type_max expression (but not if -fsanitize=undefined is in + * effect; in that case, the warning is deferred to runtime...). + * + * The slightly excessive casting in type_min is to make sure the + * macros also produce sensible values for the exotic type _Bool. 
[The + * overflow checkers only almost work for _Bool, but that's + * a-feature-not-a-bug, since people shouldn't be doing arithmetic on + * _Bools. Besides, the gcc builtins don't allow _Bool* as third + * argument.] + * + * Idea stolen from + * https://mail-index.netbsd.org/tech-misc/2007/02/05/0000.html - + * credit to Christian Biere. + */ +#define is_signed_type(type) (((type)(-1)) < (type)1) +#define __type_half_max(type) ((type)1 << (8*sizeof(type) - 1 - is_signed_type(type))) +#define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T))) +#define type_min(T) ((T)((T)-type_max(T)-(T)1)) + + +#ifdef COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW +/* + * For simplicity and code hygiene, the fallback code below insists on + * a, b and *d having the same type (similar to the min() and max() + * macros), whereas gcc's type-generic overflow checkers accept + * different types. Hence we don't just make check_add_overflow an + * alias for __builtin_add_overflow, but add type checks similar to + * below. + */ +#define check_add_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + __builtin_add_overflow(__a, __b, __d); \ +}) + +#define check_sub_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + __builtin_sub_overflow(__a, __b, __d); \ +}) + +#define check_mul_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + __builtin_mul_overflow(__a, __b, __d); \ +}) + +#else + + +/* Checking for unsigned overflow is relatively easy without causing UB. */ +#define __unsigned_add_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + *__d = __a + __b; \ + *__d < __a; \ +}) +#define __unsigned_sub_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + *__d = __a - __b; \ + __a < __b; \ +}) +/* + * If one of a or b is a compile-time constant, this avoids a division. + */ +#define __unsigned_mul_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + *__d = __a * __b; \ + __builtin_constant_p(__b) ? \ + __b > 0 && __a > type_max(typeof(__a)) / __b : \ + __a > 0 && __b > type_max(typeof(__b)) / __a; \ +}) + +/* + * For signed types, detecting overflow is much harder, especially if + * we want to avoid UB. But the interface of these macros is such that + * we must provide a result in *d, and in fact we must produce the + * result promised by gcc's builtins, which is simply the possibly + * wrapped-around value. Fortunately, we can just formally do the + * operations in the widest relevant unsigned type (u64) and then + * truncate the result - gcc is smart enough to generate the same code + * with and without the (u64) casts. + */ + +/* + * Adding two signed integers can overflow only if they have the same + * sign, and overflow has happened iff the result has the opposite + * sign. 
+ */ +#define __signed_add_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + *__d = (u64)__a + (u64)__b; \ + (((~(__a ^ __b)) & (*__d ^ __a)) \ + & type_min(typeof(__a))) != 0; \ +}) + +/* + * Subtraction is similar, except that overflow can now happen only + * when the signs are opposite. In this case, overflow has happened if + * the result has the opposite sign of a. + */ +#define __signed_sub_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + *__d = (u64)__a - (u64)__b; \ + ((((__a ^ __b)) & (*__d ^ __a)) \ + & type_min(typeof(__a))) != 0; \ +}) + +/* + * Signed multiplication is rather hard. gcc always follows C99, so + * division is truncated towards 0. This means that we can write the + * overflow check like this: + * + * (a > 0 && (b > MAX/a || b < MIN/a)) || + * (a < -1 && (b > MIN/a || b < MAX/a) || + * (a == -1 && b == MIN) + * + * The redundant casts of -1 are to silence an annoying -Wtype-limits + * (included in -Wextra) warning: When the type is u8 or u16, the + * __b_c_e in check_mul_overflow obviously selects + * __unsigned_mul_overflow, but unfortunately gcc still parses this + * code and warns about the limited range of __b. + */ + +#define __signed_mul_overflow(a, b, d) ({ \ + typeof(a) __a = (a); \ + typeof(b) __b = (b); \ + typeof(d) __d = (d); \ + typeof(a) __tmax = type_max(typeof(a)); \ + typeof(a) __tmin = type_min(typeof(a)); \ + (void) (&__a == &__b); \ + (void) (&__a == __d); \ + *__d = (u64)__a * (u64)__b; \ + (__b > 0 && (__a > __tmax/__b || __a < __tmin/__b)) || \ + (__b < (typeof(__b))-1 && (__a > __tmin/__b || __a < __tmax/__b)) || \ + (__b == (typeof(__b))-1 && __a == __tmin); \ +}) + + +#define check_add_overflow(a, b, d) \ + __builtin_choose_expr(is_signed_type(typeof(a)), \ + __signed_add_overflow(a, b, d), \ + __unsigned_add_overflow(a, b, d)) + +#define check_sub_overflow(a, b, d) \ + __builtin_choose_expr(is_signed_type(typeof(a)), \ + __signed_sub_overflow(a, b, d), \ + __unsigned_sub_overflow(a, b, d)) + +#define check_mul_overflow(a, b, d) \ + __builtin_choose_expr(is_signed_type(typeof(a)), \ + __signed_mul_overflow(a, b, d), \ + __unsigned_mul_overflow(a, b, d)) + + +#endif /* COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */ + +/** + * array_size() - Calculate size of 2-dimensional array. + * + * @a: dimension one + * @b: dimension two + * + * Calculates size of 2-dimensional array: @a * @b. + * + * Returns: number of bytes needed to represent the array or SIZE_MAX on + * overflow. + */ +static inline __must_check size_t array_size(size_t a, size_t b) +{ + size_t bytes; + + if (check_mul_overflow(a, b, &bytes)) + return SIZE_MAX; + + return bytes; +} + +/** + * array3_size() - Calculate size of 3-dimensional array. + * + * @a: dimension one + * @b: dimension two + * @c: dimension three + * + * Calculates size of 3-dimensional array: @a * @b * @c. + * + * Returns: number of bytes needed to represent the array or SIZE_MAX on + * overflow. 
+ */ +static inline __must_check size_t array3_size(size_t a, size_t b, size_t c) +{ + size_t bytes; + + if (check_mul_overflow(a, b, &bytes)) + return SIZE_MAX; + if (check_mul_overflow(bytes, c, &bytes)) + return SIZE_MAX; + + return bytes; +} + +static inline __must_check size_t __ab_c_size(size_t n, size_t size, size_t c) +{ + size_t bytes; + + if (check_mul_overflow(n, size, &bytes)) + return SIZE_MAX; + if (check_add_overflow(bytes, c, &bytes)) + return SIZE_MAX; + + return bytes; +} + +/** + * struct_size() - Calculate size of structure with trailing array. + * @p: Pointer to the structure. + * @member: Name of the array member. + * @n: Number of elements in the array. + * + * Calculates size of memory needed for structure @p followed by an + * array of @n @member elements. + * + * Return: number of bytes needed or SIZE_MAX on overflow. + */ +#define struct_size(p, member, n) \ + __ab_c_size(n, \ + sizeof(*(p)->member) + __must_be_array((p)->member),\ + sizeof(*(p))) + +#endif /* __LINUX_OVERFLOW_H */ diff --git a/tools/include/tools/libc_compat.h b/tools/include/tools/libc_compat.h new file mode 100644 index 000000000000..664ced8cb1b0 --- /dev/null +++ b/tools/include/tools/libc_compat.h @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Copyright (C) 2018 Netronome Systems, Inc. */ + +#ifndef __TOOLS_LIBC_COMPAT_H +#define __TOOLS_LIBC_COMPAT_H + +#include <stdlib.h> +#include <linux/overflow.h> + +#ifdef COMPAT_NEED_REALLOCARRAY +static inline void *reallocarray(void *ptr, size_t nmemb, size_t size) +{ + size_t bytes; + + if (unlikely(check_mul_overflow(nmemb, size, &bytes))) + return NULL; + return realloc(ptr, bytes); +} +#endif +#endif diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index b7db3261c62d..66917a4eba27 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -75,6 +75,11 @@ struct bpf_lpm_trie_key { __u8 data[0]; /* Arbitrary size */ }; +struct bpf_cgroup_storage_key { + __u64 cgroup_inode_id; /* cgroup inode id */ + __u32 attach_type; /* program attach type */ +}; + /* BPF syscall commands, see bpf(2) man-page for details. */ enum bpf_cmd { BPF_MAP_CREATE, @@ -120,6 +125,8 @@ enum bpf_map_type { BPF_MAP_TYPE_CPUMAP, BPF_MAP_TYPE_XSKMAP, BPF_MAP_TYPE_SOCKHASH, + BPF_MAP_TYPE_CGROUP_STORAGE, + BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, }; enum bpf_prog_type { @@ -144,6 +151,7 @@ enum bpf_prog_type { BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_PROG_TYPE_LWT_SEG6LOCAL, BPF_PROG_TYPE_LIRC_MODE2, + BPF_PROG_TYPE_SK_REUSEPORT, }; enum bpf_attach_type { @@ -1371,6 +1379,20 @@ union bpf_attr { * A 8-byte long non-decreasing number on success, or 0 if the * socket field is missing inside *skb*. * + * u64 bpf_get_socket_cookie(struct bpf_sock_addr *ctx) + * Description + * Equivalent to bpf_get_socket_cookie() helper that accepts + * *skb*, but gets socket from **struct bpf_sock_addr** contex. + * Return + * A 8-byte long non-decreasing number. + * + * u64 bpf_get_socket_cookie(struct bpf_sock_ops *ctx) + * Description + * Equivalent to bpf_get_socket_cookie() helper that accepts + * *skb*, but gets socket from **struct bpf_sock_ops** contex. + * Return + * A 8-byte long non-decreasing number. + * * u32 bpf_get_socket_uid(struct sk_buff *skb) * Return * The owner UID of the socket associated to *skb*. If the socket @@ -1826,7 +1848,7 @@ union bpf_attr { * A non-negative value equal to or less than *size* on success, * or a negative error in case of failure. 
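/*
 * Aside (illustrative sketch, not part of bpf.h): struct_size() from the
 * overflow.h hunk above targets the common "header plus trailing flexible
 * array" allocation. The struct and the kmalloc() call here are invented
 * for the example; only struct_size() itself comes from the diff.
 */
struct item_set {
	unsigned int count;
	unsigned int items[];	/* flexible array member */
};

static struct item_set *item_set_alloc(unsigned int n)
{
	struct item_set *s;

	/* sizeof(*s) + n * sizeof(s->items[0]), saturating to SIZE_MAX */
	s = kmalloc(struct_size(s, items, n), GFP_KERNEL);
	if (s)
		s->count = n;
	return s;
}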
* - * int skb_load_bytes_relative(const struct sk_buff *skb, u32 offset, void *to, u32 len, u32 start_header) + * int bpf_skb_load_bytes_relative(const struct sk_buff *skb, u32 offset, void *to, u32 len, u32 start_header) * Description * This helper is similar to **bpf_skb_load_bytes**\ () in that * it provides an easy way to load *len* bytes from *offset* @@ -1877,7 +1899,7 @@ union bpf_attr { * * < 0 if any input argument is invalid * * 0 on success (packet is forwarded, nexthop neighbor exists) * * > 0 one of **BPF_FIB_LKUP_RET_** codes explaining why the - * * packet is not forwarded or needs assist from full stack + * packet is not forwarded or needs assist from full stack * * int bpf_sock_hash_update(struct bpf_sock_ops_kern *skops, struct bpf_map *map, void *key, u64 flags) * Description @@ -2033,7 +2055,6 @@ union bpf_attr { * This helper is only available is the kernel was compiled with * the **CONFIG_BPF_LIRC_MODE2** configuration option set to * "**y**". - * * Return * 0 * @@ -2053,7 +2074,6 @@ union bpf_attr { * This helper is only available is the kernel was compiled with * the **CONFIG_BPF_LIRC_MODE2** configuration option set to * "**y**". - * * Return * 0 * @@ -2073,10 +2093,54 @@ union bpf_attr { * Return * The id is returned or 0 in case the id could not be retrieved. * + * u64 bpf_skb_ancestor_cgroup_id(struct sk_buff *skb, int ancestor_level) + * Description + * Return id of cgroup v2 that is ancestor of cgroup associated + * with the *skb* at the *ancestor_level*. The root cgroup is at + * *ancestor_level* zero and each step down the hierarchy + * increments the level. If *ancestor_level* == level of cgroup + * associated with *skb*, then return value will be same as that + * of **bpf_skb_cgroup_id**\ (). + * + * The helper is useful to implement policies based on cgroups + * that are upper in hierarchy than immediate cgroup associated + * with *skb*. + * + * The format of returned id and helper limitations are same as in + * **bpf_skb_cgroup_id**\ (). + * Return + * The id is returned or 0 in case the id could not be retrieved. + * * u64 bpf_get_current_cgroup_id(void) * Return * A 64-bit integer containing the current cgroup id based * on the cgroup within which the current task is running. + * + * void* get_local_storage(void *map, u64 flags) + * Description + * Get the pointer to the local storage area. + * The type and the size of the local storage is defined + * by the *map* argument. + * The *flags* meaning is specific for each map type, + * and has to be 0 for cgroup local storage. + * + * Depending on the bpf program type, a local storage area + * can be shared between multiple instances of the bpf program, + * running simultaneously. + * + * A user should care about the synchronization by himself. + * For example, by using the BPF_STX_XADD instruction to alter + * the shared data. + * Return + * Pointer to the local storage area. + * + * int bpf_sk_select_reuseport(struct sk_reuseport_md *reuse, struct bpf_map *map, void *key, u64 flags) + * Description + * Select a SO_REUSEPORT sk from a BPF_MAP_TYPE_REUSEPORT_ARRAY map + * It checks the selected sk is matching the incoming + * request in the skb. + * Return + * 0 on success, or a negative error in case of failure. 
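/*
 * Minimal sketch of a program using the cgroup local storage helper
 * documented above (illustration only: the map and section names are made
 * up, and SEC() plus the bpf_get_local_storage() stub are assumed to come
 * from the selftests' bpf_helpers.h).
 */
struct bpf_map_def SEC("maps") pkt_cnt = {
	.type = BPF_MAP_TYPE_CGROUP_STORAGE,
	.key_size = sizeof(struct bpf_cgroup_storage_key),
	.value_size = sizeof(__u64),
	.max_entries = 0,	/* cgroup storage has no user-set capacity */
};

SEC("cgroup/skb")
int count_egress(struct __sk_buff *skb)
{
	__u64 *cnt = bpf_get_local_storage(&pkt_cnt, 0);

	/* per the note above, callers handle synchronization themselves */
	__sync_fetch_and_add(cnt, 1);	/* emitted as BPF_STX_XADD */
	return 1;			/* allow the packet */
}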
*/ #define __BPF_FUNC_MAPPER(FN) \ FN(unspec), \ @@ -2159,7 +2223,10 @@ union bpf_attr { FN(rc_repeat), \ FN(rc_keydown), \ FN(skb_cgroup_id), \ - FN(get_current_cgroup_id), + FN(get_current_cgroup_id), \ + FN(get_local_storage), \ + FN(sk_select_reuseport), \ + FN(skb_ancestor_cgroup_id), /* integer value in 'imm' field of BPF_CALL instruction selects which helper * function eBPF program intends to call @@ -2376,6 +2443,30 @@ struct sk_msg_md { __u32 local_port; /* stored in host byte order */ }; +struct sk_reuseport_md { + /* + * Start of directly accessible data. It begins from + * the tcp/udp header. + */ + void *data; + void *data_end; /* End of directly accessible data */ + /* + * Total length of packet (starting from the tcp/udp header). + * Note that the directly accessible bytes (data_end - data) + * could be less than this "len". Those bytes could be + * indirectly read by a helper "bpf_skb_load_bytes()". + */ + __u32 len; + /* + * Eth protocol in the mac header (network byte order). e.g. + * ETH_P_IP(0x0800) and ETH_P_IPV6(0x86DD) + */ + __u32 eth_protocol; + __u32 ip_protocol; /* IP protocol. e.g. IPPROTO_TCP, IPPROTO_UDP */ + __u32 bind_inany; /* Is sock bound to an INANY address? */ + __u32 hash; /* A hash of the packet 4 tuples */ +}; + #define BPF_TAG_SIZE 8 struct bpf_prog_info { @@ -2557,6 +2648,9 @@ enum { * Arg1: old_state * Arg2: new_state */ + BPF_SOCK_OPS_TCP_LISTEN_CB, /* Called on listen(2), right after + * socket transition to LISTEN state. + */ }; /* List of TCP states. There is a build check in net/ipv4/tcp.c to detect diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build index 6070e655042d..13a861135127 100644 --- a/tools/lib/bpf/Build +++ b/tools/lib/bpf/Build @@ -1 +1 @@ -libbpf-y := libbpf.o bpf.o nlattr.o btf.o +libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile index 5390e7725e43..d49902e818b5 100644 --- a/tools/lib/bpf/Makefile +++ b/tools/lib/bpf/Makefile @@ -66,7 +66,7 @@ ifndef VERBOSE endif FEATURE_USER = .libbpf -FEATURE_TESTS = libelf libelf-getphdrnum libelf-mmap bpf +FEATURE_TESTS = libelf libelf-mmap bpf reallocarray FEATURE_DISPLAY = libelf bpf INCLUDES = -I. 
-I$(srctree)/tools/include -I$(srctree)/tools/arch/$(ARCH)/include/uapi -I$(srctree)/tools/include/uapi -I$(srctree)/tools/perf @@ -116,8 +116,8 @@ ifeq ($(feature-libelf-mmap), 1) override CFLAGS += -DHAVE_LIBELF_MMAP_SUPPORT endif -ifeq ($(feature-libelf-getphdrnum), 1) - override CFLAGS += -DHAVE_ELF_GETPHDRNUM_SUPPORT +ifeq ($(feature-reallocarray), 0) + override CFLAGS += -DCOMPAT_NEED_REALLOCARRAY endif # Append required CFLAGS diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index 9ddc89dae962..60aa4ca8b2c5 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -92,6 +92,7 @@ int bpf_create_map_xattr(const struct bpf_create_map_attr *create_attr) attr.btf_key_type_id = create_attr->btf_key_type_id; attr.btf_value_type_id = create_attr->btf_value_type_id; attr.map_ifindex = create_attr->map_ifindex; + attr.inner_map_fd = create_attr->inner_map_fd; return sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr)); } diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index 0639a30a457d..6f38164b2618 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -39,6 +39,7 @@ struct bpf_create_map_attr { __u32 btf_key_type_id; __u32 btf_value_type_id; __u32 map_ifindex; + __u32 inner_map_fd; }; int bpf_create_map_xattr(const struct bpf_create_map_attr *create_attr); diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c index c36a3a76986a..cf94b0770522 100644 --- a/tools/lib/bpf/btf.c +++ b/tools/lib/bpf/btf.c @@ -16,6 +16,11 @@ #define BTF_MAX_NR_TYPES 65535 +#define IS_MODIFIER(k) (((k) == BTF_KIND_TYPEDEF) || \ + ((k) == BTF_KIND_VOLATILE) || \ + ((k) == BTF_KIND_CONST) || \ + ((k) == BTF_KIND_RESTRICT)) + static struct btf_type btf_void; struct btf { @@ -32,14 +37,6 @@ struct btf { int fd; }; -static const char *btf_name_by_offset(const struct btf *btf, __u32 offset) -{ - if (offset < btf->hdr->str_len) - return &btf->strings[offset]; - else - return NULL; -} - static int btf_add_type(struct btf *btf, struct btf_type *t) { if (btf->types_size - btf->nr_types < 2) { @@ -269,6 +266,26 @@ __s64 btf__resolve_size(const struct btf *btf, __u32 type_id) return nelems * size; } +int btf__resolve_type(const struct btf *btf, __u32 type_id) +{ + const struct btf_type *t; + int depth = 0; + + t = btf__type_by_id(btf, type_id); + while (depth < MAX_RESOLVE_DEPTH && + !btf_type_is_void_or_null(t) && + IS_MODIFIER(BTF_INFO_KIND(t->info))) { + type_id = t->type; + t = btf__type_by_id(btf, type_id); + depth++; + } + + if (depth == MAX_RESOLVE_DEPTH || btf_type_is_void_or_null(t)) + return -EINVAL; + + return type_id; +} + __s32 btf__find_by_name(const struct btf *btf, const char *type_name) { __u32 i; @@ -278,7 +295,7 @@ __s32 btf__find_by_name(const struct btf *btf, const char *type_name) for (i = 1; i <= btf->nr_types; i++) { const struct btf_type *t = btf->types[i]; - const char *name = btf_name_by_offset(btf, t->name_off); + const char *name = btf__name_by_offset(btf, t->name_off); if (name && !strcmp(type_name, name)) return i; @@ -368,3 +385,11 @@ int btf__fd(const struct btf *btf) { return btf->fd; } + +const char *btf__name_by_offset(const struct btf *btf, __u32 offset) +{ + if (offset < btf->hdr->str_len) + return &btf->strings[offset]; + else + return NULL; +} diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h index caac3a404dc5..4897e0724d4e 100644 --- a/tools/lib/bpf/btf.h +++ b/tools/lib/bpf/btf.h @@ -19,6 +19,8 @@ struct btf *btf__new(__u8 *data, __u32 size, btf_print_fn_t err_log); __s32 btf__find_by_name(const struct btf *btf, const char *type_name); const struct btf_type 
*btf__type_by_id(const struct btf *btf, __u32 id); __s64 btf__resolve_size(const struct btf *btf, __u32 type_id); +int btf__resolve_type(const struct btf *btf, __u32 type_id); int btf__fd(const struct btf *btf); +const char *btf__name_by_offset(const struct btf *btf, __u32 offset); #endif diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 1aafdbe827fe..2abd0f112627 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -22,6 +22,7 @@ * License along with this program; if not, see <http://www.gnu.org/licenses> */ +#define _GNU_SOURCE #include <stdlib.h> #include <stdio.h> #include <stdarg.h> @@ -42,6 +43,7 @@ #include <sys/stat.h> #include <sys/types.h> #include <sys/vfs.h> +#include <tools/libc_compat.h> #include <libelf.h> #include <gelf.h> @@ -96,54 +98,6 @@ void libbpf_set_print(libbpf_print_fn_t warn, #define STRERR_BUFSIZE 128 -#define ERRNO_OFFSET(e) ((e) - __LIBBPF_ERRNO__START) -#define ERRCODE_OFFSET(c) ERRNO_OFFSET(LIBBPF_ERRNO__##c) -#define NR_ERRNO (__LIBBPF_ERRNO__END - __LIBBPF_ERRNO__START) - -static const char *libbpf_strerror_table[NR_ERRNO] = { - [ERRCODE_OFFSET(LIBELF)] = "Something wrong in libelf", - [ERRCODE_OFFSET(FORMAT)] = "BPF object format invalid", - [ERRCODE_OFFSET(KVERSION)] = "'version' section incorrect or lost", - [ERRCODE_OFFSET(ENDIAN)] = "Endian mismatch", - [ERRCODE_OFFSET(INTERNAL)] = "Internal error in libbpf", - [ERRCODE_OFFSET(RELOC)] = "Relocation failed", - [ERRCODE_OFFSET(VERIFY)] = "Kernel verifier blocks program loading", - [ERRCODE_OFFSET(PROG2BIG)] = "Program too big", - [ERRCODE_OFFSET(KVER)] = "Incorrect kernel version", - [ERRCODE_OFFSET(PROGTYPE)] = "Kernel doesn't support this program type", - [ERRCODE_OFFSET(WRNGPID)] = "Wrong pid in netlink message", - [ERRCODE_OFFSET(INVSEQ)] = "Invalid netlink sequence", -}; - -int libbpf_strerror(int err, char *buf, size_t size) -{ - if (!buf || !size) - return -1; - - err = err > 0 ? err : -err; - - if (err < __LIBBPF_ERRNO__START) { - int ret; - - ret = strerror_r(err, buf, size); - buf[size - 1] = '\0'; - return ret; - } - - if (err < __LIBBPF_ERRNO__END) { - const char *msg; - - msg = libbpf_strerror_table[ERRNO_OFFSET(err)]; - snprintf(buf, size, "%s", msg); - buf[size - 1] = '\0'; - return 0; - } - - snprintf(buf, size, "Unknown libbpf error %d", err); - buf[size - 1] = '\0'; - return -1; -} - #define CHECK_ERR(action, err, out) do { \ err = action; \ if (err) \ @@ -235,6 +189,7 @@ struct bpf_object { size_t nr_maps; bool loaded; + bool has_pseudo_calls; /* * Information when doing elf related work. 
Only valid if fd @@ -369,7 +324,7 @@ bpf_object__add_program(struct bpf_object *obj, void *data, size_t size, progs = obj->programs; nr_progs = obj->nr_programs; - progs = realloc(progs, sizeof(progs[0]) * (nr_progs + 1)); + progs = reallocarray(progs, nr_progs + 1, sizeof(progs[0])); if (!progs) { /* * In this case the original obj->programs @@ -401,10 +356,6 @@ bpf_object__init_prog_names(struct bpf_object *obj) const char *name = NULL; prog = &obj->programs[pi]; - if (prog->idx == obj->efile.text_shndx) { - name = ".text"; - goto skip_search; - } for (si = 0; si < symbols->d_size / sizeof(GElf_Sym) && !name; si++) { @@ -427,12 +378,15 @@ bpf_object__init_prog_names(struct bpf_object *obj) } } + if (!name && prog->idx == obj->efile.text_shndx) + name = ".text"; + if (!name) { pr_warning("failed to find sym for prog %s\n", prog->section_name); return -EINVAL; } -skip_search: + prog->name = strdup(name); if (!prog->name) { pr_warning("failed to allocate memory for prog sym %s\n", @@ -514,8 +468,10 @@ static int bpf_object__elf_init(struct bpf_object *obj) } else { obj->efile.fd = open(obj->path, O_RDONLY); if (obj->efile.fd < 0) { - pr_warning("failed to open %s: %s\n", obj->path, - strerror(errno)); + char errmsg[STRERR_BUFSIZE]; + char *cp = strerror_r(errno, errmsg, sizeof(errmsg)); + + pr_warning("failed to open %s: %s\n", obj->path, cp); return -errno; } @@ -854,10 +810,11 @@ static int bpf_object__elf_collect(struct bpf_object *obj) data->d_size, name, idx); if (err) { char errmsg[STRERR_BUFSIZE]; + char *cp = strerror_r(-err, errmsg, + sizeof(errmsg)); - strerror_r(-err, errmsg, sizeof(errmsg)); pr_warning("failed to alloc program %s (%s): %s", - name, obj->path, errmsg); + name, obj->path, cp); } } else if (sh.sh_type == SHT_REL) { void *reloc = obj->efile.reloc; @@ -871,8 +828,8 @@ static int bpf_object__elf_collect(struct bpf_object *obj) continue; } - reloc = realloc(reloc, - sizeof(*obj->efile.reloc) * nr_reloc); + reloc = reallocarray(reloc, nr_reloc, + sizeof(*obj->efile.reloc)); if (!reloc) { pr_warning("realloc failed\n"); err = -ENOMEM; @@ -920,6 +877,18 @@ bpf_object__find_prog_by_idx(struct bpf_object *obj, int idx) return NULL; } +struct bpf_program * +bpf_object__find_program_by_title(struct bpf_object *obj, const char *title) +{ + struct bpf_program *pos; + + bpf_object__for_each_program(pos, obj) { + if (pos->section_name && !strcmp(pos->section_name, title)) + return pos; + } + return NULL; +} + static int bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr, Elf_Data *data, struct bpf_object *obj) @@ -982,6 +951,7 @@ bpf_program__collect_reloc(struct bpf_program *prog, GElf_Shdr *shdr, prog->reloc_desc[i].type = RELO_CALL; prog->reloc_desc[i].insn_idx = insn_idx; prog->reloc_desc[i].text_off = sym.st_value; + obj->has_pseudo_calls = true; continue; } @@ -1085,6 +1055,53 @@ static int bpf_map_find_btf_info(struct bpf_map *map, const struct btf *btf) return 0; } +int bpf_map__reuse_fd(struct bpf_map *map, int fd) +{ + struct bpf_map_info info = {}; + __u32 len = sizeof(info); + int new_fd, err; + char *new_name; + + err = bpf_obj_get_info_by_fd(fd, &info, &len); + if (err) + return err; + + new_name = strdup(info.name); + if (!new_name) + return -errno; + + new_fd = open("/", O_RDONLY | O_CLOEXEC); + if (new_fd < 0) + goto err_free_new_name; + + new_fd = dup3(fd, new_fd, O_CLOEXEC); + if (new_fd < 0) + goto err_close_new_fd; + + err = zclose(map->fd); + if (err) + goto err_close_new_fd; + free(map->name); + + map->fd = new_fd; + map->name = new_name; 
+ map->def.type = info.type; + map->def.key_size = info.key_size; + map->def.value_size = info.value_size; + map->def.max_entries = info.max_entries; + map->def.map_flags = info.map_flags; + map->btf_key_type_id = info.btf_key_type_id; + map->btf_value_type_id = info.btf_value_type_id; + + return 0; + +err_close_new_fd: + close(new_fd); +err_free_new_name: + free(new_name); + return -errno; +} + static int bpf_object__create_maps(struct bpf_object *obj) { @@ -1095,8 +1112,15 @@ bpf_object__create_maps(struct bpf_object *obj) for (i = 0; i < obj->nr_maps; i++) { struct bpf_map *map = &obj->maps[i]; struct bpf_map_def *def = &map->def; + char *cp, errmsg[STRERR_BUFSIZE]; int *pfd = &map->fd; + if (map->fd >= 0) { + pr_debug("skip map create (preset) %s: fd=%d\n", + map->name, map->fd); + continue; + } + create_attr.name = map->name; create_attr.map_ifindex = map->map_ifindex; create_attr.map_type = def->type; @@ -1116,8 +1140,9 @@ bpf_object__create_maps(struct bpf_object *obj) *pfd = bpf_create_map_xattr(&create_attr); if (*pfd < 0 && create_attr.btf_key_type_id) { + cp = strerror_r(errno, errmsg, sizeof(errmsg)); pr_warning("Error in bpf_create_map_xattr(%s):%s(%d). Retrying without BTF.\n", - map->name, strerror(errno), errno); + map->name, cp, errno); create_attr.btf_fd = 0; create_attr.btf_key_type_id = 0; create_attr.btf_value_type_id = 0; @@ -1130,9 +1155,9 @@ bpf_object__create_maps(struct bpf_object *obj) size_t j; err = *pfd; + cp = strerror_r(errno, errmsg, sizeof(errmsg)); pr_warning("failed to create map (name: '%s'): %s\n", - map->name, - strerror(errno)); + map->name, cp); for (j = 0; j < i; j++) zclose(obj->maps[j].fd); return err; @@ -1167,7 +1192,7 @@ bpf_program__reloc_text(struct bpf_program *prog, struct bpf_object *obj, return -LIBBPF_ERRNO__RELOC; } new_cnt = prog->insns_cnt + text->insns_cnt; - new_insn = realloc(prog->insns, new_cnt * sizeof(*insn)); + new_insn = reallocarray(prog->insns, new_cnt, sizeof(*insn)); if (!new_insn) { pr_warning("oom in prog realloc\n"); return -ENOMEM; @@ -1284,6 +1309,7 @@ load_program(enum bpf_prog_type type, enum bpf_attach_type expected_attach_type, char *license, u32 kern_version, int *pfd, int prog_ifindex) { struct bpf_load_program_attr load_attr; + char *cp, errmsg[STRERR_BUFSIZE]; char *log_buf; int ret; @@ -1313,7 +1339,8 @@ load_program(enum bpf_prog_type type, enum bpf_attach_type expected_attach_type, } ret = -LIBBPF_ERRNO__LOAD; - pr_warning("load bpf program failed: %s\n", strerror(errno)); + cp = strerror_r(errno, errmsg, sizeof(errmsg)); + pr_warning("load bpf program failed: %s\n", cp); if (log_buf && log_buf[0] != '\0') { ret = -LIBBPF_ERRNO__VERIFY; @@ -1431,6 +1458,12 @@ out: return err; } +static bool bpf_program__is_function_storage(struct bpf_program *prog, + struct bpf_object *obj) +{ + return prog->idx == obj->efile.text_shndx && obj->has_pseudo_calls; +} + static int bpf_object__load_progs(struct bpf_object *obj) { @@ -1438,7 +1471,7 @@ bpf_object__load_progs(struct bpf_object *obj) int err; for (i = 0; i < obj->nr_programs; i++) { - if (obj->programs[i].idx == obj->efile.text_shndx) + if (bpf_program__is_function_storage(&obj->programs[i], obj)) continue; err = bpf_program__load(&obj->programs[i], obj->license, @@ -1468,6 +1501,7 @@ static bool bpf_prog_type__needs_kver(enum bpf_prog_type type) case BPF_PROG_TYPE_SK_MSG: case BPF_PROG_TYPE_CGROUP_SOCK_ADDR: case BPF_PROG_TYPE_LIRC_MODE2: + case BPF_PROG_TYPE_SK_REUSEPORT: return false; case BPF_PROG_TYPE_UNSPEC: case BPF_PROG_TYPE_KPROBE: @@ -1518,15 +1552,26 
@@ out: return ERR_PTR(err); } -struct bpf_object *bpf_object__open(const char *path) +struct bpf_object *bpf_object__open_xattr(struct bpf_object_open_attr *attr) { /* param validation */ - if (!path) + if (!attr->file) return NULL; - pr_debug("loading %s\n", path); + pr_debug("loading %s\n", attr->file); - return __bpf_object__open(path, NULL, 0, true); + return __bpf_object__open(attr->file, NULL, 0, + bpf_prog_type__needs_kver(attr->prog_type)); +} + +struct bpf_object *bpf_object__open(const char *path) +{ + struct bpf_object_open_attr attr = { + .file = path, + .prog_type = BPF_PROG_TYPE_UNSPEC, + }; + + return bpf_object__open_xattr(&attr); } struct bpf_object *bpf_object__open_buffer(void *obj_buf, @@ -1595,6 +1640,7 @@ out: static int check_path(const char *path) { + char *cp, errmsg[STRERR_BUFSIZE]; struct statfs st_fs; char *dname, *dir; int err = 0; @@ -1608,7 +1654,8 @@ static int check_path(const char *path) dir = dirname(dname); if (statfs(dir, &st_fs)) { - pr_warning("failed to statfs %s: %s\n", dir, strerror(errno)); + cp = strerror_r(errno, errmsg, sizeof(errmsg)); + pr_warning("failed to statfs %s: %s\n", dir, cp); err = -errno; } free(dname); @@ -1624,6 +1671,7 @@ static int check_path(const char *path) int bpf_program__pin_instance(struct bpf_program *prog, const char *path, int instance) { + char *cp, errmsg[STRERR_BUFSIZE]; int err; err = check_path(path); @@ -1642,7 +1690,8 @@ int bpf_program__pin_instance(struct bpf_program *prog, const char *path, } if (bpf_obj_pin(prog->instances.fds[instance], path)) { - pr_warning("failed to pin program: %s\n", strerror(errno)); + cp = strerror_r(errno, errmsg, sizeof(errmsg)); + pr_warning("failed to pin program: %s\n", cp); return -errno; } pr_debug("pinned program '%s'\n", path); @@ -1652,13 +1701,16 @@ int bpf_program__pin_instance(struct bpf_program *prog, const char *path, static int make_dir(const char *path) { + char *cp, errmsg[STRERR_BUFSIZE]; int err = 0; if (mkdir(path, 0700) && errno != EEXIST) err = -errno; - if (err) - pr_warning("failed to mkdir %s: %s\n", path, strerror(-err)); + if (err) { + cp = strerror_r(-err, errmsg, sizeof(errmsg)); + pr_warning("failed to mkdir %s: %s\n", path, cp); + } return err; } @@ -1705,6 +1757,7 @@ int bpf_program__pin(struct bpf_program *prog, const char *path) int bpf_map__pin(struct bpf_map *map, const char *path) { + char *cp, errmsg[STRERR_BUFSIZE]; int err; err = check_path(path); @@ -1717,7 +1770,8 @@ int bpf_map__pin(struct bpf_map *map, const char *path) } if (bpf_obj_pin(map->fd, path)) { - pr_warning("failed to pin map: %s\n", strerror(errno)); + cp = strerror_r(errno, errmsg, sizeof(errmsg)); + pr_warning("failed to pin map: %s\n", cp); return -errno; } @@ -1863,8 +1917,8 @@ void *bpf_object__priv(struct bpf_object *obj) return obj ? 
obj->priv : ERR_PTR(-EINVAL); } -struct bpf_program * -bpf_program__next(struct bpf_program *prev, struct bpf_object *obj) +static struct bpf_program * +__bpf_program__next(struct bpf_program *prev, struct bpf_object *obj) { size_t idx; @@ -1885,6 +1939,18 @@ bpf_program__next(struct bpf_program *prev, struct bpf_object *obj) return &obj->programs[idx]; } +struct bpf_program * +bpf_program__next(struct bpf_program *prev, struct bpf_object *obj) +{ + struct bpf_program *prog = prev; + + do { + prog = __bpf_program__next(prog, obj); + } while (prog && bpf_program__is_function_storage(prog, obj)); + + return prog; +} + int bpf_program__set_priv(struct bpf_program *prog, void *priv, bpf_program_clear_priv_t clear_priv) { @@ -1901,6 +1967,11 @@ void *bpf_program__priv(struct bpf_program *prog) return prog ? prog->priv : ERR_PTR(-EINVAL); } +void bpf_program__set_ifindex(struct bpf_program *prog, __u32 ifindex) +{ + prog->prog_ifindex = ifindex; +} + const char *bpf_program__title(struct bpf_program *prog, bool needs_copy) { const char *title; @@ -1954,6 +2025,9 @@ int bpf_program__nth_fd(struct bpf_program *prog, int n) { int fd; + if (!prog) + return -EINVAL; + if (n >= prog->instances.nr || n < 0) { pr_warning("Can't get the %dth fd from program %s: only %d instances\n", n, prog->section_name, prog->instances.nr); @@ -2042,9 +2116,11 @@ static const struct { BPF_PROG_SEC("lwt_in", BPF_PROG_TYPE_LWT_IN), BPF_PROG_SEC("lwt_out", BPF_PROG_TYPE_LWT_OUT), BPF_PROG_SEC("lwt_xmit", BPF_PROG_TYPE_LWT_XMIT), + BPF_PROG_SEC("lwt_seg6local", BPF_PROG_TYPE_LWT_SEG6LOCAL), BPF_PROG_SEC("sockops", BPF_PROG_TYPE_SOCK_OPS), BPF_PROG_SEC("sk_skb", BPF_PROG_TYPE_SK_SKB), BPF_PROG_SEC("sk_msg", BPF_PROG_TYPE_SK_MSG), + BPF_PROG_SEC("lirc_mode2", BPF_PROG_TYPE_LIRC_MODE2), BPF_SA_PROG_SEC("cgroup/bind4", BPF_CGROUP_INET4_BIND), BPF_SA_PROG_SEC("cgroup/bind6", BPF_CGROUP_INET6_BIND), BPF_SA_PROG_SEC("cgroup/connect4", BPF_CGROUP_INET4_CONNECT), @@ -2060,23 +2136,31 @@ static const struct { #undef BPF_S_PROG_SEC #undef BPF_SA_PROG_SEC -static int bpf_program__identify_section(struct bpf_program *prog) +int libbpf_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type, + enum bpf_attach_type *expected_attach_type) { int i; - if (!prog->section_name) - goto err; - - for (i = 0; i < ARRAY_SIZE(section_names); i++) - if (strncmp(prog->section_name, section_names[i].sec, - section_names[i].len) == 0) - return i; + if (!name) + return -EINVAL; -err: - pr_warning("failed to guess program type based on section name %s\n", - prog->section_name); + for (i = 0; i < ARRAY_SIZE(section_names); i++) { + if (strncmp(name, section_names[i].sec, section_names[i].len)) + continue; + *prog_type = section_names[i].prog_type; + *expected_attach_type = section_names[i].expected_attach_type; + return 0; + } + return -EINVAL; +} - return -1; +static int +bpf_program__identify_section(struct bpf_program *prog, + enum bpf_prog_type *prog_type, + enum bpf_attach_type *expected_attach_type) +{ + return libbpf_prog_type_by_name(prog->section_name, prog_type, + expected_attach_type); } int bpf_map__fd(struct bpf_map *map) @@ -2125,6 +2209,16 @@ void *bpf_map__priv(struct bpf_map *map) return map ? 
map->priv : ERR_PTR(-EINVAL); } +bool bpf_map__is_offload_neutral(struct bpf_map *map) +{ + return map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY; +} + +void bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex) +{ + map->map_ifindex = ifindex; +} + struct bpf_map * bpf_map__next(struct bpf_map *prev, struct bpf_object *obj) { @@ -2199,12 +2293,15 @@ int bpf_prog_load(const char *file, enum bpf_prog_type type, int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr, struct bpf_object **pobj, int *prog_fd) { + struct bpf_object_open_attr open_attr = { + .file = attr->file, + .prog_type = attr->prog_type, + }; struct bpf_program *prog, *first_prog = NULL; enum bpf_attach_type expected_attach_type; enum bpf_prog_type prog_type; struct bpf_object *obj; struct bpf_map *map; - int section_idx; int err; if (!attr) @@ -2212,8 +2309,7 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr, if (!attr->file) return -EINVAL; - obj = __bpf_object__open(attr->file, NULL, 0, - bpf_prog_type__needs_kver(attr->prog_type)); + obj = bpf_object__open_xattr(&open_attr); if (IS_ERR_OR_NULL(obj)) return -ENOENT; @@ -2226,26 +2322,27 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr, prog->prog_ifindex = attr->ifindex; expected_attach_type = attr->expected_attach_type; if (prog_type == BPF_PROG_TYPE_UNSPEC) { - section_idx = bpf_program__identify_section(prog); - if (section_idx < 0) { + err = bpf_program__identify_section(prog, &prog_type, + &expected_attach_type); + if (err < 0) { + pr_warning("failed to guess program type based on section name %s\n", + prog->section_name); bpf_object__close(obj); return -EINVAL; } - prog_type = section_names[section_idx].prog_type; - expected_attach_type = - section_names[section_idx].expected_attach_type; } bpf_program__set_type(prog, prog_type); bpf_program__set_expected_attach_type(prog, expected_attach_type); - if (prog->idx != obj->efile.text_shndx && !first_prog) + if (!bpf_program__is_function_storage(prog, obj) && !first_prog) first_prog = prog; } bpf_map__for_each(map, obj) { - map->map_ifindex = attr->ifindex; + if (!bpf_map__is_offload_neutral(map)) + map->map_ifindex = attr->ifindex; } if (!first_prog) { diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index b33ae02f7d0e..96c55fac54c3 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -66,7 +66,13 @@ void libbpf_set_print(libbpf_print_fn_t warn, /* Hide internal to user */ struct bpf_object; +struct bpf_object_open_attr { + const char *file; + enum bpf_prog_type prog_type; +}; + struct bpf_object *bpf_object__open(const char *path); +struct bpf_object *bpf_object__open_xattr(struct bpf_object_open_attr *attr); struct bpf_object *bpf_object__open_buffer(void *obj_buf, size_t obj_buf_sz, const char *name); @@ -80,6 +86,9 @@ const char *bpf_object__name(struct bpf_object *obj); unsigned int bpf_object__kversion(struct bpf_object *obj); int bpf_object__btf_fd(const struct bpf_object *obj); +struct bpf_program * +bpf_object__find_program_by_title(struct bpf_object *obj, const char *title); + struct bpf_object *bpf_object__next(struct bpf_object *prev); #define bpf_object__for_each_safe(pos, tmp) \ for ((pos) = bpf_object__next(NULL), \ @@ -92,6 +101,9 @@ int bpf_object__set_priv(struct bpf_object *obj, void *priv, bpf_object_clear_priv_t clear_priv); void *bpf_object__priv(struct bpf_object *prog); +int libbpf_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type, + enum bpf_attach_type *expected_attach_type); + /* Accessors of bpf_program */ 
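/*
 * Hypothetical consumer of the interfaces declared above (a sketch, not
 * part of libbpf: "prog.o" and the section title are invented). It loads
 * an object and then picks one program out of it by its SEC() name.
 * Between bpf_object__open() and bpf_object__load(), bpf_map__reuse_fd()
 * could likewise be used to substitute an already-created map fd, which
 * bpf_object__create_maps() now skips.
 */
static int load_and_pick(const char *title)
{
	struct bpf_prog_load_attr attr = {
		.file = "prog.o",
		.prog_type = BPF_PROG_TYPE_UNSPEC, /* guess from section name */
	};
	struct bpf_object *obj;
	struct bpf_program *prog;
	int first_fd;

	if (bpf_prog_load_xattr(&attr, &obj, &first_fd))
		return -1;

	prog = bpf_object__find_program_by_title(obj, title);
	return prog ? bpf_program__fd(prog) : first_fd;
}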
struct bpf_program; struct bpf_program *bpf_program__next(struct bpf_program *prog, @@ -109,6 +121,7 @@ int bpf_program__set_priv(struct bpf_program *prog, void *priv, bpf_program_clear_priv_t clear_priv); void *bpf_program__priv(struct bpf_program *prog); +void bpf_program__set_ifindex(struct bpf_program *prog, __u32 ifindex); const char *bpf_program__title(struct bpf_program *prog, bool needs_copy); @@ -251,6 +264,9 @@ typedef void (*bpf_map_clear_priv_t)(struct bpf_map *, void *); int bpf_map__set_priv(struct bpf_map *map, void *priv, bpf_map_clear_priv_t clear_priv); void *bpf_map__priv(struct bpf_map *map); +int bpf_map__reuse_fd(struct bpf_map *map, int fd); +bool bpf_map__is_offload_neutral(struct bpf_map *map); +void bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex); int bpf_map__pin(struct bpf_map *map, const char *path); long libbpf_get_error(const void *ptr); diff --git a/tools/lib/bpf/libbpf_errno.c b/tools/lib/bpf/libbpf_errno.c new file mode 100644 index 000000000000..d9ba851bd7f9 --- /dev/null +++ b/tools/lib/bpf/libbpf_errno.c @@ -0,0 +1,74 @@ +// SPDX-License-Identifier: LGPL-2.1 + +/* + * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org> + * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com> + * Copyright (C) 2015 Huawei Inc. + * Copyright (C) 2017 Nicira, Inc. + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; + * version 2.1 of the License (not later!) + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this program; if not, see <http://www.gnu.org/licenses> + */ + +#include <stdio.h> +#include <string.h> + +#include "libbpf.h" + +#define ERRNO_OFFSET(e) ((e) - __LIBBPF_ERRNO__START) +#define ERRCODE_OFFSET(c) ERRNO_OFFSET(LIBBPF_ERRNO__##c) +#define NR_ERRNO (__LIBBPF_ERRNO__END - __LIBBPF_ERRNO__START) + +static const char *libbpf_strerror_table[NR_ERRNO] = { + [ERRCODE_OFFSET(LIBELF)] = "Something wrong in libelf", + [ERRCODE_OFFSET(FORMAT)] = "BPF object format invalid", + [ERRCODE_OFFSET(KVERSION)] = "'version' section incorrect or lost", + [ERRCODE_OFFSET(ENDIAN)] = "Endian mismatch", + [ERRCODE_OFFSET(INTERNAL)] = "Internal error in libbpf", + [ERRCODE_OFFSET(RELOC)] = "Relocation failed", + [ERRCODE_OFFSET(VERIFY)] = "Kernel verifier blocks program loading", + [ERRCODE_OFFSET(PROG2BIG)] = "Program too big", + [ERRCODE_OFFSET(KVER)] = "Incorrect kernel version", + [ERRCODE_OFFSET(PROGTYPE)] = "Kernel doesn't support this program type", + [ERRCODE_OFFSET(WRNGPID)] = "Wrong pid in netlink message", + [ERRCODE_OFFSET(INVSEQ)] = "Invalid netlink sequence", +}; + +int libbpf_strerror(int err, char *buf, size_t size) +{ + if (!buf || !size) + return -1; + + err = err > 0 ? 
err : -err; + + if (err < __LIBBPF_ERRNO__START) { + int ret; + + ret = strerror_r(err, buf, size); + buf[size - 1] = '\0'; + return ret; + } + + if (err < __LIBBPF_ERRNO__END) { + const char *msg; + + msg = libbpf_strerror_table[ERRNO_OFFSET(err)]; + snprintf(buf, size, "%s", msg); + buf[size - 1] = '\0'; + return 0; + } + + snprintf(buf, size, "Unknown libbpf error %d", err); + buf[size - 1] = '\0'; + return -1; +} diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index a362e3d7abc6..fff7fb1285fc 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -22,7 +22,8 @@ $(TEST_CUSTOM_PROGS): $(OUTPUT)/%: %.c # Order correspond to 'make run_tests' order TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \ test_align test_verifier_log test_dev_cgroup test_tcpbpf_user \ - test_sock test_btf test_sockmap test_lirc_mode2_user get_cgroup_id_user + test_sock test_btf test_sockmap test_lirc_mode2_user get_cgroup_id_user \ + test_socket_cookie test_cgroup_storage test_select_reuseport TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test_obj_id.o \ test_pkt_md_access.o test_xdp_redirect.o test_xdp_meta.o sockmap_parse_prog.o \ @@ -33,7 +34,8 @@ TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test test_btf_haskv.o test_btf_nokv.o test_sockmap_kern.o test_tunnel_kern.o \ test_get_stack_rawtp.o test_sockmap_kern.o test_sockhash_kern.o \ test_lwt_seg6local.o sendmsg4_prog.o sendmsg6_prog.o test_lirc_mode2_kern.o \ - get_cgroup_id_kern.o + get_cgroup_id_kern.o socket_cookie_prog.o test_select_reuseport_kern.o \ + test_skb_cgroup_id_kern.o # Order correspond to 'make run_tests' order TEST_PROGS := test_kmod.sh \ @@ -44,10 +46,11 @@ TEST_PROGS := test_kmod.sh \ test_sock_addr.sh \ test_tunnel.sh \ test_lwt_seg6local.sh \ - test_lirc_mode2.sh + test_lirc_mode2.sh \ + test_skb_cgroup_id.sh # Compile but not part of 'make run_tests' -TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr +TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr test_skb_cgroup_id_user include ../lib.mk @@ -58,11 +61,15 @@ $(TEST_GEN_PROGS): $(BPFOBJ) $(TEST_GEN_PROGS_EXTENDED): $(OUTPUT)/libbpf.a $(OUTPUT)/test_dev_cgroup: cgroup_helpers.c +$(OUTPUT)/test_skb_cgroup_id_user: cgroup_helpers.c $(OUTPUT)/test_sock: cgroup_helpers.c $(OUTPUT)/test_sock_addr: cgroup_helpers.c +$(OUTPUT)/test_socket_cookie: cgroup_helpers.c $(OUTPUT)/test_sockmap: cgroup_helpers.c +$(OUTPUT)/test_tcpbpf_user: cgroup_helpers.c $(OUTPUT)/test_progs: trace_helpers.c $(OUTPUT)/get_cgroup_id_user: cgroup_helpers.c +$(OUTPUT)/test_cgroup_storage: cgroup_helpers.c .PHONY: force diff --git a/tools/testing/selftests/bpf/bpf_helpers.h b/tools/testing/selftests/bpf/bpf_helpers.h index 810de20e8e26..e4be7730222d 100644 --- a/tools/testing/selftests/bpf/bpf_helpers.h +++ b/tools/testing/selftests/bpf/bpf_helpers.h @@ -65,6 +65,8 @@ static int (*bpf_xdp_adjust_head)(void *ctx, int offset) = (void *) BPF_FUNC_xdp_adjust_head; static int (*bpf_xdp_adjust_meta)(void *ctx, int offset) = (void *) BPF_FUNC_xdp_adjust_meta; +static int (*bpf_get_socket_cookie)(void *ctx) = + (void *) BPF_FUNC_get_socket_cookie; static int (*bpf_setsockopt)(void *ctx, int level, int optname, void *optval, int optlen) = (void *) BPF_FUNC_setsockopt; @@ -109,6 +111,8 @@ static int (*bpf_xdp_adjust_tail)(void *ctx, int offset) = static int (*bpf_skb_get_xfrm_state)(void *ctx, int index, void *state, int size, int flags) 
= (void *) BPF_FUNC_skb_get_xfrm_state; +static int (*bpf_sk_select_reuseport)(void *ctx, void *map, void *key, __u32 flags) = + (void *) BPF_FUNC_sk_select_reuseport; static int (*bpf_get_stack)(void *ctx, void *buf, int size, int flags) = (void *) BPF_FUNC_get_stack; static int (*bpf_fib_lookup)(void *ctx, struct bpf_fib_lookup *params, @@ -133,6 +137,12 @@ static int (*bpf_rc_keydown)(void *ctx, unsigned int protocol, (void *) BPF_FUNC_rc_keydown; static unsigned long long (*bpf_get_current_cgroup_id)(void) = (void *) BPF_FUNC_get_current_cgroup_id; +static void *(*bpf_get_local_storage)(void *map, unsigned long long flags) = + (void *) BPF_FUNC_get_local_storage; +static unsigned long long (*bpf_skb_cgroup_id)(void *ctx) = + (void *) BPF_FUNC_skb_cgroup_id; +static unsigned long long (*bpf_skb_ancestor_cgroup_id)(void *ctx, int level) = + (void *) BPF_FUNC_skb_ancestor_cgroup_id; /* llvm builtin functions that eBPF C program may use to * emit BPF_LD_ABS and BPF_LD_IND instructions @@ -169,6 +179,8 @@ struct bpf_map_def { static int (*bpf_skb_load_bytes)(void *ctx, int off, void *to, int len) = (void *) BPF_FUNC_skb_load_bytes; +static int (*bpf_skb_load_bytes_relative)(void *ctx, int off, void *to, int len, __u32 start_header) = + (void *) BPF_FUNC_skb_load_bytes_relative; static int (*bpf_skb_store_bytes)(void *ctx, int off, void *from, int len, int flags) = (void *) BPF_FUNC_skb_store_bytes; static int (*bpf_l3_csum_replace)(void *ctx, int off, int from, int to, int flags) = diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h index d0811b3d6a6f..315a44fa32af 100644 --- a/tools/testing/selftests/bpf/bpf_util.h +++ b/tools/testing/selftests/bpf/bpf_util.h @@ -44,4 +44,8 @@ static inline unsigned int bpf_num_possible_cpus(void) name[bpf_num_possible_cpus()] #define bpf_percpu(name, cpu) name[(cpu)].v +#ifndef ARRAY_SIZE +# define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + #endif /* __BPF_UTIL__ */ diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c index c87b4e052ce9..cf16948aad4a 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.c +++ b/tools/testing/selftests/bpf/cgroup_helpers.c @@ -118,7 +118,7 @@ static int join_cgroup_from_top(char *cgroup_path) * * On success, it returns 0, otherwise on failure it returns 1. */ -int join_cgroup(char *path) +int join_cgroup(const char *path) { char cgroup_path[PATH_MAX + 1]; @@ -158,7 +158,7 @@ void cleanup_cgroup_environment(void) * On success, it returns the file descriptor. On failure it returns 0. * If there is a failure, it prints the error to stderr. */ -int create_and_get_cgroup(char *path) +int create_and_get_cgroup(const char *path) { char cgroup_path[PATH_MAX + 1]; int fd; @@ -186,7 +186,7 @@ int create_and_get_cgroup(char *path) * which is an invalid cgroup id. * If there is a failure, it prints the error to stderr. 
*/ -unsigned long long get_cgroup_id(char *path) +unsigned long long get_cgroup_id(const char *path) { int dirfd, err, flags, mount_id, fhsize; union { diff --git a/tools/testing/selftests/bpf/cgroup_helpers.h b/tools/testing/selftests/bpf/cgroup_helpers.h index 20a4a5dcd469..d64bb8957090 100644 --- a/tools/testing/selftests/bpf/cgroup_helpers.h +++ b/tools/testing/selftests/bpf/cgroup_helpers.h @@ -9,10 +9,10 @@ __FILE__, __LINE__, clean_errno(), ##__VA_ARGS__) -int create_and_get_cgroup(char *path); -int join_cgroup(char *path); +int create_and_get_cgroup(const char *path); +int join_cgroup(const char *path); int setup_cgroup_environment(void); void cleanup_cgroup_environment(void); -unsigned long long get_cgroup_id(char *path); +unsigned long long get_cgroup_id(const char *path); #endif diff --git a/tools/testing/selftests/bpf/socket_cookie_prog.c b/tools/testing/selftests/bpf/socket_cookie_prog.c new file mode 100644 index 000000000000..9ff8ac4b0bf6 --- /dev/null +++ b/tools/testing/selftests/bpf/socket_cookie_prog.c @@ -0,0 +1,60 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Facebook + +#include <linux/bpf.h> +#include <sys/socket.h> + +#include "bpf_helpers.h" +#include "bpf_endian.h" + +struct bpf_map_def SEC("maps") socket_cookies = { + .type = BPF_MAP_TYPE_HASH, + .key_size = sizeof(__u64), + .value_size = sizeof(__u32), + .max_entries = 1 << 8, +}; + +SEC("cgroup/connect6") +int set_cookie(struct bpf_sock_addr *ctx) +{ + __u32 cookie_value = 0xFF; + __u64 cookie_key; + + if (ctx->family != AF_INET6 || ctx->user_family != AF_INET6) + return 1; + + cookie_key = bpf_get_socket_cookie(ctx); + if (bpf_map_update_elem(&socket_cookies, &cookie_key, &cookie_value, 0)) + return 0; + + return 1; +} + +SEC("sockops") +int update_cookie(struct bpf_sock_ops *ctx) +{ + __u32 new_cookie_value; + __u32 *cookie_value; + __u64 cookie_key; + + if (ctx->family != AF_INET6) + return 1; + + if (ctx->op != BPF_SOCK_OPS_TCP_CONNECT_CB) + return 1; + + cookie_key = bpf_get_socket_cookie(ctx); + + cookie_value = bpf_map_lookup_elem(&socket_cookies, &cookie_key); + if (!cookie_value) + return 1; + + new_cookie_value = (ctx->local_port << 8) | *cookie_value; + bpf_map_update_elem(&socket_cookies, &cookie_key, &new_cookie_value, 0); + + return 1; +} + +int _version SEC("version") = 1; + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/tcp_client.py b/tools/testing/selftests/bpf/tcp_client.py index 481dccdf140c..7f8200a8702b 100755 --- a/tools/testing/selftests/bpf/tcp_client.py +++ b/tools/testing/selftests/bpf/tcp_client.py @@ -1,4 +1,4 @@ -#!/usr/bin/env python2 +#!/usr/bin/env python3 # # SPDX-License-Identifier: GPL-2.0 # @@ -9,11 +9,11 @@ import subprocess import select def read(sock, n): - buf = '' + buf = b'' while len(buf) < n: rem = n - len(buf) try: s = sock.recv(rem) - except (socket.error), e: return '' + except (socket.error) as e: return b'' buf += s return buf @@ -22,7 +22,7 @@ def send(sock, s): count = 0 while count < total: try: n = sock.send(s) - except (socket.error), e: n = 0 + except (socket.error) as e: n = 0 if n == 0: return count; count += n @@ -39,10 +39,10 @@ try: except socket.error as e: sys.exit(1) -buf = '' +buf = b'' n = 0 while n < 1000: - buf += '+' + buf += b'+' n += 1 sock.settimeout(1); diff --git a/tools/testing/selftests/bpf/tcp_server.py b/tools/testing/selftests/bpf/tcp_server.py index bc454d7d0be2..b39903fca4c8 100755 --- a/tools/testing/selftests/bpf/tcp_server.py +++ b/tools/testing/selftests/bpf/tcp_server.py @@ 
-1,4 +1,4 @@ -#!/usr/bin/env python2 +#!/usr/bin/env python3 # # SPDX-License-Identifier: GPL-2.0 # @@ -9,11 +9,11 @@ import subprocess import select def read(sock, n): - buf = '' + buf = b'' while len(buf) < n: rem = n - len(buf) try: s = sock.recv(rem) - except (socket.error), e: return '' + except (socket.error) as e: return b'' buf += s return buf @@ -22,7 +22,7 @@ def send(sock, s): count = 0 while count < total: try: n = sock.send(s) - except (socket.error), e: n = 0 + except (socket.error) as e: n = 0 if n == 0: return count; count += n @@ -43,7 +43,7 @@ host = socket.gethostname() try: serverSocket.bind((host, 0)) except socket.error as msg: - print 'bind fails: ', msg + print('bind fails: ' + str(msg)) sn = serverSocket.getsockname() serverPort = sn[1] @@ -51,10 +51,10 @@ serverPort = sn[1] cmdStr = ("./tcp_client.py %d &") % (serverPort) os.system(cmdStr) -buf = '' +buf = b'' n = 0 while n < 500: - buf += '.' + buf += b'.' n += 1 serverSocket.listen(MAX_PORTS) @@ -79,5 +79,5 @@ while True: serverSocket.close() sys.exit(0) else: - print 'Select timeout!' + print('Select timeout!') sys.exit(1) diff --git a/tools/testing/selftests/bpf/test_align.c b/tools/testing/selftests/bpf/test_align.c index 6b1b302310fe..5f377ec53f2f 100644 --- a/tools/testing/selftests/bpf/test_align.c +++ b/tools/testing/selftests/bpf/test_align.c @@ -18,10 +18,7 @@ #include "../../../include/linux/filter.h" #include "bpf_rlimit.h" - -#ifndef ARRAY_SIZE -# define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) -#endif +#include "bpf_util.h" #define MAX_INSNS 512 #define MAX_MATCHES 16 diff --git a/tools/testing/selftests/bpf/test_btf.c b/tools/testing/selftests/bpf/test_btf.c index ffdd27737c9e..6b5cfeb7a9cc 100644 --- a/tools/testing/selftests/bpf/test_btf.c +++ b/tools/testing/selftests/bpf/test_btf.c @@ -19,6 +19,7 @@ #include <bpf/btf.h> #include "bpf_rlimit.h" +#include "bpf_util.h" static uint32_t pass_cnt; static uint32_t error_cnt; @@ -93,10 +94,6 @@ static int __base_pr(const char *format, ...) 
#define MAX_NR_RAW_TYPES 1024 #define BTF_LOG_BUF_SIZE 65535 -#ifndef ARRAY_SIZE -# define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) -#endif - static struct args { unsigned int raw_test_num; unsigned int file_test_num; @@ -131,6 +128,8 @@ struct btf_raw_test { __u32 max_entries; bool btf_load_err; bool map_create_err; + bool ordered_map; + bool lossless_map; int hdr_len_delta; int type_off_delta; int str_off_delta; @@ -2093,8 +2092,7 @@ struct pprint_mapv { } aenum; }; -static struct btf_raw_test pprint_test = { - .descr = "BTF pretty print test #1", +static struct btf_raw_test pprint_test_template = { .raw_types = { /* unsighed char */ /* [1] */ BTF_TYPE_INT_ENC(NAME_TBD, 0, 0, 8, 1), @@ -2146,8 +2144,6 @@ static struct btf_raw_test pprint_test = { }, .str_sec = "\0unsigned char\0unsigned short\0unsigned int\0int\0unsigned long long\0uint8_t\0uint16_t\0uint32_t\0int32_t\0uint64_t\0ui64\0ui8a\0ENUM_ZERO\0ENUM_ONE\0ENUM_TWO\0ENUM_THREE\0pprint_mapv\0ui32\0ui16\0si32\0unused_bits2a\0bits28\0unused_bits2b\0aenum", .str_sec_size = sizeof("\0unsigned char\0unsigned short\0unsigned int\0int\0unsigned long long\0uint8_t\0uint16_t\0uint32_t\0int32_t\0uint64_t\0ui64\0ui8a\0ENUM_ZERO\0ENUM_ONE\0ENUM_TWO\0ENUM_THREE\0pprint_mapv\0ui32\0ui16\0si32\0unused_bits2a\0bits28\0unused_bits2b\0aenum"), - .map_type = BPF_MAP_TYPE_ARRAY, - .map_name = "pprint_test", .key_size = sizeof(unsigned int), .value_size = sizeof(struct pprint_mapv), .key_type_id = 3, /* unsigned int */ @@ -2155,6 +2151,40 @@ static struct btf_raw_test pprint_test = { .max_entries = 128 * 1024, }; +static struct btf_pprint_test_meta { + const char *descr; + enum bpf_map_type map_type; + const char *map_name; + bool ordered_map; + bool lossless_map; +} pprint_tests_meta[] = { +{ + .descr = "BTF pretty print array", + .map_type = BPF_MAP_TYPE_ARRAY, + .map_name = "pprint_test_array", + .ordered_map = true, + .lossless_map = true, +}, + +{ + .descr = "BTF pretty print hash", + .map_type = BPF_MAP_TYPE_HASH, + .map_name = "pprint_test_hash", + .ordered_map = false, + .lossless_map = true, +}, + +{ + .descr = "BTF pretty print lru hash", + .map_type = BPF_MAP_TYPE_LRU_HASH, + .map_name = "pprint_test_lru_hash", + .ordered_map = false, + .lossless_map = false, +}, + +}; + + static void set_pprint_mapv(struct pprint_mapv *v, uint32_t i) { v->ui32 = i; @@ -2166,10 +2196,12 @@ static void set_pprint_mapv(struct pprint_mapv *v, uint32_t i) v->aenum = i & 0x03; } -static int test_pprint(void) +static int do_test_pprint(void) { - const struct btf_raw_test *test = &pprint_test; + const struct btf_raw_test *test = &pprint_test_template; struct bpf_create_map_attr create_attr = {}; + unsigned int key, nr_read_elems; + bool ordered_map, lossless_map; int map_fd = -1, btf_fd = -1; struct pprint_mapv mapv = {}; unsigned int raw_btf_size; @@ -2178,7 +2210,6 @@ static int test_pprint(void) char pin_path[255]; size_t line_len = 0; char *line = NULL; - unsigned int key; uint8_t *raw_btf; ssize_t nread; int err, ret; @@ -2251,14 +2282,18 @@ static int test_pprint(void) goto done; } - key = 0; + nr_read_elems = 0; + ordered_map = test->ordered_map; + lossless_map = test->lossless_map; do { ssize_t nexpected_line; + unsigned int next_key; - set_pprint_mapv(&mapv, key); + next_key = ordered_map ? 
nr_read_elems : atoi(line); + set_pprint_mapv(&mapv, next_key); nexpected_line = snprintf(expected_line, sizeof(expected_line), "%u: {%u,0,%d,0x%x,0x%x,0x%x,{%lu|[%u,%u,%u,%u,%u,%u,%u,%u]},%s}\n", - key, + next_key, mapv.ui32, mapv.si32, mapv.unused_bits2a, mapv.bits28, mapv.unused_bits2b, mapv.ui64, @@ -2281,11 +2316,12 @@ static int test_pprint(void) } nread = getline(&line, &line_len, pin_file); - } while (++key < test->max_entries && nread > 0); + } while (++nr_read_elems < test->max_entries && nread > 0); - if (CHECK(key < test->max_entries, - "Unexpected EOF. key:%u test->max_entries:%u", - key, test->max_entries)) { + if (lossless_map && + CHECK(nr_read_elems < test->max_entries, + "Unexpected EOF. nr_read_elems:%u test->max_entries:%u", + nr_read_elems, test->max_entries)) { err = -1; goto done; } @@ -2314,6 +2350,24 @@ done: return err; } +static int test_pprint(void) +{ + unsigned int i; + int err = 0; + + for (i = 0; i < ARRAY_SIZE(pprint_tests_meta); i++) { + pprint_test_template.descr = pprint_tests_meta[i].descr; + pprint_test_template.map_type = pprint_tests_meta[i].map_type; + pprint_test_template.map_name = pprint_tests_meta[i].map_name; + pprint_test_template.ordered_map = pprint_tests_meta[i].ordered_map; + pprint_test_template.lossless_map = pprint_tests_meta[i].lossless_map; + + err |= count_result(do_test_pprint()); + } + + return err; +} + static void usage(const char *cmd) { fprintf(stderr, "Usage: %s [-l] [[-r test_num (1 - %zu)] | [-g test_num (1 - %zu)] | [-f test_num (1 - %zu)] | [-p]]\n", @@ -2409,7 +2463,7 @@ int main(int argc, char **argv) err |= test_file(); if (args.pprint_test) - err |= count_result(test_pprint()); + err |= test_pprint(); if (args.raw_test || args.get_info_test || args.file_test || args.pprint_test) diff --git a/tools/testing/selftests/bpf/test_cgroup_storage.c b/tools/testing/selftests/bpf/test_cgroup_storage.c new file mode 100644 index 000000000000..dc83fb2d3f27 --- /dev/null +++ b/tools/testing/selftests/bpf/test_cgroup_storage.c @@ -0,0 +1,130 @@ +// SPDX-License-Identifier: GPL-2.0 +#include <assert.h> +#include <bpf/bpf.h> +#include <linux/filter.h> +#include <stdio.h> +#include <stdlib.h> + +#include "cgroup_helpers.h" + +char bpf_log_buf[BPF_LOG_BUF_SIZE]; + +#define TEST_CGROUP "/test-bpf-cgroup-storage-buf/" + +int main(int argc, char **argv) +{ + struct bpf_insn prog[] = { + BPF_LD_MAP_FD(BPF_REG_1, 0), /* map fd */ + BPF_MOV64_IMM(BPF_REG_2, 0), /* flags, not used */ + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_MOV64_IMM(BPF_REG_1, 1), + BPF_STX_XADD(BPF_DW, BPF_REG_0, BPF_REG_1, 0), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), + BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x1), + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), + BPF_EXIT_INSN(), + }; + size_t insns_cnt = sizeof(prog) / sizeof(struct bpf_insn); + int error = EXIT_FAILURE; + int map_fd, prog_fd, cgroup_fd; + struct bpf_cgroup_storage_key key; + unsigned long long value; + + map_fd = bpf_create_map(BPF_MAP_TYPE_CGROUP_STORAGE, sizeof(key), + sizeof(value), 0, 0); + if (map_fd < 0) { + printf("Failed to create map: %s\n", strerror(errno)); + goto out; + } + + prog[0].imm = map_fd; + prog_fd = bpf_load_program(BPF_PROG_TYPE_CGROUP_SKB, + prog, insns_cnt, "GPL", 0, + bpf_log_buf, BPF_LOG_BUF_SIZE); + if (prog_fd < 0) { + printf("Failed to load bpf program: %s\n", bpf_log_buf); + goto out; + } + + if (setup_cgroup_environment()) { + printf("Failed to setup cgroup environment\n"); + goto err; + } + + /* Create a cgroup, get fd, and join it */ + 
cgroup_fd = create_and_get_cgroup(TEST_CGROUP); + if (!cgroup_fd) { + printf("Failed to create test cgroup\n"); + goto err; + } + + if (join_cgroup(TEST_CGROUP)) { + printf("Failed to join cgroup\n"); + goto err; + } + + /* Attach the bpf program */ + if (bpf_prog_attach(prog_fd, cgroup_fd, BPF_CGROUP_INET_EGRESS, 0)) { + printf("Failed to attach bpf program\n"); + goto err; + } + + if (bpf_map_get_next_key(map_fd, NULL, &key)) { + printf("Failed to get the first key in cgroup storage\n"); + goto err; + } + + if (bpf_map_lookup_elem(map_fd, &key, &value)) { + printf("Failed to lookup cgroup storage\n"); + goto err; + } + + /* Every second packet should be dropped */ + assert(system("ping localhost -c 1 -W 1 -q > /dev/null") == 0); + assert(system("ping localhost -c 1 -W 1 -q > /dev/null")); + assert(system("ping localhost -c 1 -W 1 -q > /dev/null") == 0); + + /* Check the counter in the cgroup local storage */ + if (bpf_map_lookup_elem(map_fd, &key, &value)) { + printf("Failed to lookup cgroup storage\n"); + goto err; + } + + if (value != 3) { + printf("Unexpected data in the cgroup storage: %llu\n", value); + goto err; + } + + /* Bump the counter in the cgroup local storage */ + value++; + if (bpf_map_update_elem(map_fd, &key, &value, 0)) { + printf("Failed to update the data in the cgroup storage\n"); + goto err; + } + + /* Every second packet should be dropped */ + assert(system("ping localhost -c 1 -W 1 -q > /dev/null") == 0); + assert(system("ping localhost -c 1 -W 1 -q > /dev/null")); + assert(system("ping localhost -c 1 -W 1 -q > /dev/null") == 0); + + /* Check the final value of the counter in the cgroup local storage */ + if (bpf_map_lookup_elem(map_fd, &key, &value)) { + printf("Failed to lookup the cgroup storage\n"); + goto err; + } + + if (value != 7) { + printf("Unexpected data in the cgroup storage: %llu\n", value); + goto err; + } + + error = 0; + printf("test_cgroup_storage:PASS\n"); + +err: + cleanup_cgroup_environment(); + +out: + return error; +} diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c index 6c253343a6f9..6f54f84144a0 100644 --- a/tools/testing/selftests/bpf/test_maps.c +++ b/tools/testing/selftests/bpf/test_maps.c @@ -17,7 +17,8 @@ #include <stdlib.h> #include <sys/wait.h> - +#include <sys/socket.h> +#include <netinet/in.h> #include <linux/bpf.h> #include <bpf/bpf.h> @@ -26,8 +27,21 @@ #include "bpf_util.h" #include "bpf_rlimit.h" +#ifndef ENOTSUPP +#define ENOTSUPP 524 +#endif + static int map_flags; +#define CHECK(condition, tag, format...) 
({ \ + int __ret = !!(condition); \ + if (__ret) { \ + printf("%s(%d):FAIL:%s ", __func__, __LINE__, tag); \ + printf(format); \ + exit(-1); \ + } \ +}) + static void test_hashmap(int task, void *data) { long long key, next_key, first_key, value; @@ -1150,6 +1164,250 @@ static void test_map_wronly(void) assert(bpf_map_get_next_key(fd, &key, &value) == -1 && errno == EPERM); } +static void prepare_reuseport_grp(int type, int map_fd, + __s64 *fds64, __u64 *sk_cookies, + unsigned int n) +{ + socklen_t optlen, addrlen; + struct sockaddr_in6 s6; + const __u32 index0 = 0; + const int optval = 1; + unsigned int i; + u64 sk_cookie; + __s64 fd64; + int err; + + s6.sin6_family = AF_INET6; + s6.sin6_addr = in6addr_any; + s6.sin6_port = 0; + addrlen = sizeof(s6); + optlen = sizeof(sk_cookie); + + for (i = 0; i < n; i++) { + fd64 = socket(AF_INET6, type, 0); + CHECK(fd64 == -1, "socket()", + "sock_type:%d fd64:%lld errno:%d\n", + type, fd64, errno); + + err = setsockopt(fd64, SOL_SOCKET, SO_REUSEPORT, + &optval, sizeof(optval)); + CHECK(err == -1, "setsockopt(SO_REUSEPORT)", + "err:%d errno:%d\n", err, errno); + + /* reuseport_array does not allow unbound sk */ + err = bpf_map_update_elem(map_fd, &index0, &fd64, + BPF_ANY); + CHECK(err != -1 || errno != EINVAL, + "reuseport array update unbound sk", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + + err = bind(fd64, (struct sockaddr *)&s6, sizeof(s6)); + CHECK(err == -1, "bind()", + "sock_type:%d err:%d errno:%d\n", type, err, errno); + + if (i == 0) { + err = getsockname(fd64, (struct sockaddr *)&s6, + &addrlen); + CHECK(err == -1, "getsockname()", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + } + + err = getsockopt(fd64, SOL_SOCKET, SO_COOKIE, &sk_cookie, + &optlen); + CHECK(err == -1, "getsockopt(SO_COOKIE)", + "sock_type:%d err:%d errno:%d\n", type, err, errno); + + if (type == SOCK_STREAM) { + /* + * reuseport_array does not allow + * non-listening tcp sk. + */ + err = bpf_map_update_elem(map_fd, &index0, &fd64, + BPF_ANY); + CHECK(err != -1 || errno != EINVAL, + "reuseport array update non-listening sk", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + err = listen(fd64, 0); + CHECK(err == -1, "listen()", + "sock_type:%d, err:%d errno:%d\n", + type, err, errno); + } + + fds64[i] = fd64; + sk_cookies[i] = sk_cookie; + } +} + +static void test_reuseport_array(void) +{ +#define REUSEPORT_FD_IDX(err, last) ({ (err) ? 
last : !last; }) + + const __u32 array_size = 4, index0 = 0, index3 = 3; + int types[2] = { SOCK_STREAM, SOCK_DGRAM }, type; + __u64 grpa_cookies[2], sk_cookie, map_cookie; + __s64 grpa_fds64[2] = { -1, -1 }, fd64 = -1; + const __u32 bad_index = array_size; + int map_fd, err, t, f; + __u32 fds_idx = 0; + int fd; + + map_fd = bpf_create_map(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, + sizeof(__u32), sizeof(__u64), array_size, 0); + CHECK(map_fd == -1, "reuseport array create", + "map_fd:%d, errno:%d\n", map_fd, errno); + + /* Test lookup/update/delete with invalid index */ + err = bpf_map_delete_elem(map_fd, &bad_index); + CHECK(err != -1 || errno != E2BIG, "reuseport array del >=max_entries", + "err:%d errno:%d\n", err, errno); + + err = bpf_map_update_elem(map_fd, &bad_index, &fd64, BPF_ANY); + CHECK(err != -1 || errno != E2BIG, + "reuseport array update >=max_entries", + "err:%d errno:%d\n", err, errno); + + err = bpf_map_lookup_elem(map_fd, &bad_index, &map_cookie); + CHECK(err != -1 || errno != ENOENT, + "reuseport array update >=max_entries", + "err:%d errno:%d\n", err, errno); + + /* Test lookup/delete non existence elem */ + err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); + CHECK(err != -1 || errno != ENOENT, + "reuseport array lookup not-exist elem", + "err:%d errno:%d\n", err, errno); + err = bpf_map_delete_elem(map_fd, &index3); + CHECK(err != -1 || errno != ENOENT, + "reuseport array del not-exist elem", + "err:%d errno:%d\n", err, errno); + + for (t = 0; t < ARRAY_SIZE(types); t++) { + type = types[t]; + + prepare_reuseport_grp(type, map_fd, grpa_fds64, + grpa_cookies, ARRAY_SIZE(grpa_fds64)); + + /* Test BPF_* update flags */ + /* BPF_EXIST failure case */ + err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], + BPF_EXIST); + CHECK(err != -1 || errno != ENOENT, + "reuseport array update empty elem BPF_EXIST", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + fds_idx = REUSEPORT_FD_IDX(err, fds_idx); + + /* BPF_NOEXIST success case */ + err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], + BPF_NOEXIST); + CHECK(err == -1, + "reuseport array update empty elem BPF_NOEXIST", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + fds_idx = REUSEPORT_FD_IDX(err, fds_idx); + + /* BPF_EXIST success case. 
*/ + err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], + BPF_EXIST); + CHECK(err == -1, + "reuseport array update same elem BPF_EXIST", + "sock_type:%d err:%d errno:%d\n", type, err, errno); + fds_idx = REUSEPORT_FD_IDX(err, fds_idx); + + /* BPF_NOEXIST failure case */ + err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], + BPF_NOEXIST); + CHECK(err != -1 || errno != EEXIST, + "reuseport array update non-empty elem BPF_NOEXIST", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + fds_idx = REUSEPORT_FD_IDX(err, fds_idx); + + /* BPF_ANY case (always succeed) */ + err = bpf_map_update_elem(map_fd, &index3, &grpa_fds64[fds_idx], + BPF_ANY); + CHECK(err == -1, + "reuseport array update same sk with BPF_ANY", + "sock_type:%d err:%d errno:%d\n", type, err, errno); + + fd64 = grpa_fds64[fds_idx]; + sk_cookie = grpa_cookies[fds_idx]; + + /* The same sk cannot be added to reuseport_array twice */ + err = bpf_map_update_elem(map_fd, &index3, &fd64, BPF_ANY); + CHECK(err != -1 || errno != EBUSY, + "reuseport array update same sk with same index", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + + err = bpf_map_update_elem(map_fd, &index0, &fd64, BPF_ANY); + CHECK(err != -1 || errno != EBUSY, + "reuseport array update same sk with different index", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + + /* Test delete elem */ + err = bpf_map_delete_elem(map_fd, &index3); + CHECK(err == -1, "reuseport array delete sk", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + + /* Add it back with BPF_NOEXIST */ + err = bpf_map_update_elem(map_fd, &index3, &fd64, BPF_NOEXIST); + CHECK(err == -1, + "reuseport array re-add with BPF_NOEXIST after del", + "sock_type:%d err:%d errno:%d\n", type, err, errno); + + /* Test cookie */ + err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); + CHECK(err == -1 || sk_cookie != map_cookie, + "reuseport array lookup re-added sk", + "sock_type:%d err:%d errno:%d sk_cookie:0x%llx map_cookie:0x%llx\n", + type, err, errno, sk_cookie, map_cookie); + + /* Test elem removed by close() */ + for (f = 0; f < ARRAY_SIZE(grpa_fds64); f++) + close(grpa_fds64[f]); + err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); + CHECK(err != -1 || errno != ENOENT, + "reuseport array lookup after close()", + "sock_type:%d err:%d errno:%d\n", + type, err, errno); + } + + /* Test SOCK_RAW */ + fd64 = socket(AF_INET6, SOCK_RAW, IPPROTO_UDP); + CHECK(fd64 == -1, "socket(SOCK_RAW)", "fd64:%lld errno:%d\n", + fd64, errno); + err = bpf_map_update_elem(map_fd, &index3, &fd64, BPF_NOEXIST); + CHECK(err != -1 || errno != ENOTSUPP, "reuseport array update SOCK_RAW", + "err:%d errno:%d\n", err, errno); + close(fd64); + + /* Close the 64 bit value map */ + close(map_fd); + + /* Test 32 bit fd */ + map_fd = bpf_create_map(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, + sizeof(__u32), sizeof(__u32), array_size, 0); + CHECK(map_fd == -1, "reuseport array create", + "map_fd:%d, errno:%d\n", map_fd, errno); + prepare_reuseport_grp(SOCK_STREAM, map_fd, &fd64, &sk_cookie, 1); + fd = fd64; + err = bpf_map_update_elem(map_fd, &index3, &fd, BPF_NOEXIST); + CHECK(err == -1, "reuseport array update 32 bit fd", + "err:%d errno:%d\n", err, errno); + err = bpf_map_lookup_elem(map_fd, &index3, &map_cookie); + CHECK(err != -1 || errno != ENOSPC, + "reuseport array lookup 32 bit fd", + "err:%d errno:%d\n", err, errno); + close(fd); + close(map_fd); +} + static void run_all_tests(void) { test_hashmap(0, NULL); @@ -1170,6 +1428,8 @@ static void run_all_tests(void) test_map_rdonly();
test_map_wronly(); + + test_reuseport_array(); } int main(void) diff --git a/tools/testing/selftests/bpf/test_offload.py b/tools/testing/selftests/bpf/test_offload.py index be800d0e7a84..d59642e70f56 100755 --- a/tools/testing/selftests/bpf/test_offload.py +++ b/tools/testing/selftests/bpf/test_offload.py @@ -158,8 +158,9 @@ def tool(name, args, flags, JSON=True, ns="", fail=True, include_stderr=False): else: return ret, out -def bpftool(args, JSON=True, ns="", fail=True): - return tool("bpftool", args, {"json":"-p"}, JSON=JSON, ns=ns, fail=fail) +def bpftool(args, JSON=True, ns="", fail=True, include_stderr=False): + return tool("bpftool", args, {"json":"-p"}, JSON=JSON, ns=ns, + fail=fail, include_stderr=include_stderr) def bpftool_prog_list(expected=None, ns=""): _, progs = bpftool("prog show", JSON=True, ns=ns, fail=True) @@ -201,6 +202,21 @@ def bpftool_map_list_wait(expected=0, n_retry=20): time.sleep(0.05) raise Exception("Time out waiting for map counts to stabilize want %d, have %d" % (expected, nmaps)) +def bpftool_prog_load(sample, file_name, maps=[], prog_type="xdp", dev=None, + fail=True, include_stderr=False): + args = "prog load %s %s" % (os.path.join(bpf_test_dir, sample), file_name) + if prog_type is not None: + args += " type " + prog_type + if dev is not None: + args += " dev " + dev + if len(maps): + args += " map " + " map ".join(maps) + + res = bpftool(args, fail=fail, include_stderr=include_stderr) + if res[0] == 0: + files.append(file_name) + return res + def ip(args, force=False, JSON=True, ns="", fail=True, include_stderr=False): if force: args = "-force " + args @@ -307,21 +323,25 @@ class NetdevSim: Class for netdevsim netdevice and its attributes. """ - def __init__(self): + def __init__(self, link=None): + self.link = link + self.dev = self._netdevsim_create() devs.append(self) self.ns = "" self.dfs_dir = '/sys/kernel/debug/netdevsim/%s' % (self.dev['ifname']) + self.sdev_dir = self.dfs_dir + '/sdev/' self.dfs_refresh() def __getitem__(self, key): return self.dev[key] def _netdevsim_create(self): + link = "" if self.link is None else "link " + self.link.dev['ifname'] _, old = ip("link show") - ip("link add sim%d type netdevsim") + ip("link add sim%d {link} type netdevsim".format(link=link)) _, new = ip("link show") for dev in new: @@ -339,13 +359,18 @@ class NetdevSim: self.dfs = DebugfsDir(self.dfs_dir) return self.dfs + def dfs_read(self, f): + path = os.path.join(self.dfs_dir, f) + _, data = cmd('cat %s' % (path)) + return data.strip() + def dfs_num_bound_progs(self): - path = os.path.join(self.dfs_dir, "bpf_bound_progs") + path = os.path.join(self.sdev_dir, "bpf_bound_progs") _, progs = cmd('ls %s' % (path)) return len(progs.split()) def dfs_get_bound_progs(self, expected): - progs = DebugfsDir(os.path.join(self.dfs_dir, "bpf_bound_progs")) + progs = DebugfsDir(os.path.join(self.sdev_dir, "bpf_bound_progs")) if expected is not None: if len(progs) != expected: fail(True, "%d BPF programs bound, expected %d" % @@ -547,11 +572,11 @@ def check_extack(output, reference, args): if skip_extack: return lines = output.split("\n") - comp = len(lines) >= 2 and lines[1] == reference + comp = len(lines) >= 2 and lines[1] == 'Error: ' + reference fail(not comp, "Missing or incorrect netlink extack message") def check_extack_nsim(output, reference, args): - check_extack(output, "Error: netdevsim: " + reference, args) + check_extack(output, "netdevsim: " + reference, args) def check_no_extack(res, needle): fail((res[1] + res[2]).count(needle) or (res[1] + 
res[2]).count("Warning:"), @@ -654,7 +679,7 @@ try: ret, _, err = sim.cls_bpf_add_filter(obj, skip_sw=True, fail=False, include_stderr=True) fail(ret == 0, "TC filter loaded without enabling TC offloads") - check_extack(err, "Error: TC offload is disabled on net device.", args) + check_extack(err, "TC offload is disabled on net device.", args) sim.wait_for_flush() sim.set_ethtool_tc_offloads(True) @@ -694,7 +719,7 @@ try: skip_sw=True, fail=False, include_stderr=True) fail(ret == 0, "Offloaded a filter to chain other than 0") - check_extack(err, "Error: Driver supports only offload of chain 0.", args) + check_extack(err, "Driver supports only offload of chain 0.", args) sim.tc_flush_filters() start_test("Test TC replace...") @@ -814,24 +839,20 @@ try: "Device parameters reported for non-offloaded program") start_test("Test XDP prog replace with bad flags...") - ret, _, err = sim.set_xdp(obj, "offload", force=True, + ret, _, err = sim.set_xdp(obj, "generic", force=True, fail=False, include_stderr=True) fail(ret == 0, "Replaced XDP program with a program in different mode") - check_extack_nsim(err, "program loaded with different flags.", args) + fail(err.count("File exists") != 1, "Replaced driver XDP with generic") ret, _, err = sim.set_xdp(obj, "", force=True, fail=False, include_stderr=True) fail(ret == 0, "Replaced XDP program with a program in different mode") - check_extack_nsim(err, "program loaded with different flags.", args) + check_extack(err, "program loaded with different flags.", args) start_test("Test XDP prog remove with bad flags...") - ret, _, err = sim.unset_xdp("offload", force=True, - fail=False, include_stderr=True) - fail(ret == 0, "Removed program with a bad mode mode") - check_extack_nsim(err, "program loaded with different flags.", args) ret, _, err = sim.unset_xdp("", force=True, fail=False, include_stderr=True) - fail(ret == 0, "Removed program with a bad mode mode") - check_extack_nsim(err, "program loaded with different flags.", args) + fail(ret == 0, "Removed program with a bad mode") + check_extack(err, "program loaded with different flags.", args) start_test("Test MTU restrictions...") ret, _ = sim.set_mtu(9000, fail=False) @@ -846,6 +867,25 @@ try: sim.set_mtu(1500) sim.wait_for_flush() + start_test("Test non-offload XDP attaching to HW...") + bpftool_prog_load("sample_ret0.o", "/sys/fs/bpf/nooffload") + nooffload = bpf_pinned("/sys/fs/bpf/nooffload") + ret, _, err = sim.set_xdp(nooffload, "offload", + fail=False, include_stderr=True) + fail(ret == 0, "attached non-offloaded XDP program to HW") + check_extack_nsim(err, "xdpoffload of non-bound program.", args) + rm("/sys/fs/bpf/nooffload") + + start_test("Test offload XDP attaching to drv...") + bpftool_prog_load("sample_ret0.o", "/sys/fs/bpf/offload", + dev=sim['ifname']) + offload = bpf_pinned("/sys/fs/bpf/offload") + ret, _, err = sim.set_xdp(offload, "drv", fail=False, include_stderr=True) + fail(ret == 0, "attached offloaded XDP program to drv") + check_extack(err, "using device-bound program without HW_MODE flag is not supported.", args) + rm("/sys/fs/bpf/offload") + sim.wait_for_flush() + start_test("Test XDP offload...") _, _, err = sim.set_xdp(obj, "offload", verbose=True, include_stderr=True) ipl = sim.ip_link_show(xdp=True) @@ -891,6 +931,60 @@ try: rm(pin_file) bpftool_prog_list_wait(expected=0) + start_test("Test multi-attachment XDP - attach...") + sim.set_xdp(obj, "offload") + xdp = sim.ip_link_show(xdp=True)["xdp"] + offloaded = sim.dfs_read("bpf_offloaded_id") + fail("prog" not in xdp, "Base 
program not reported in single program mode") + fail(len(ipl["xdp"]["attached"]) != 1, + "Wrong attached program count with one program") + + sim.set_xdp(obj, "") + two_xdps = sim.ip_link_show(xdp=True)["xdp"] + offloaded2 = sim.dfs_read("bpf_offloaded_id") + + fail(two_xdps["mode"] != 4, "Bad mode reported with multiple programs") + fail("prog" in two_xdps, "Base program reported in multi program mode") + fail(xdp["attached"][0] not in two_xdps["attached"], + "Offload program not reported after driver activated") + fail(len(two_xdps["attached"]) != 2, + "Wrong attached program count with two programs") + fail(two_xdps["attached"][0]["prog"]["id"] == + two_xdps["attached"][1]["prog"]["id"], + "offloaded and drv programs have the same id") + fail(offloaded != offloaded2, + "offload ID changed after loading driver program") + + start_test("Test multi-attachment XDP - replace...") + ret, _, err = sim.set_xdp(obj, "offload", fail=False, include_stderr=True) + fail(err.count("busy") != 1, "Replaced one of programs without -force") + + start_test("Test multi-attachment XDP - detach...") + ret, _, err = sim.unset_xdp("drv", force=True, + fail=False, include_stderr=True) + fail(ret == 0, "Removed program with a bad mode") + check_extack(err, "program loaded with different flags.", args) + + sim.unset_xdp("offload") + xdp = sim.ip_link_show(xdp=True)["xdp"] + offloaded = sim.dfs_read("bpf_offloaded_id") + + fail(xdp["mode"] != 1, "Bad mode reported after multiple programs") + fail("prog" not in xdp, + "Base program not reported after multi program mode") + fail(xdp["attached"][0] not in two_xdps["attached"], + "Offload program not reported after driver activated") + fail(len(ipl["xdp"]["attached"]) != 1, + "Wrong attached program count with remaining programs") + fail(offloaded != "0", "offload ID reported with only driver program left") + + start_test("Test multi-attachment XDP - device remove...") + sim.set_xdp(obj, "offload") + sim.remove() + + sim = NetdevSim() + sim.set_ethtool_tc_offloads(True) + start_test("Test mixing of TC and XDP...") sim.tc_add_ingress() sim.set_xdp(obj, "offload") @@ -1085,6 +1179,106 @@ try: fail(ret == 0, "netdevsim didn't refuse to create a map with offload disabled") + sim.remove() + + start_test("Test multi-dev ASIC program reuse...") + simA = NetdevSim() + simB1 = NetdevSim() + simB2 = NetdevSim(link=simB1) + simB3 = NetdevSim(link=simB1) + sims = (simA, simB1, simB2, simB3) + simB = (simB1, simB2, simB3) + + bpftool_prog_load("sample_map_ret0.o", "/sys/fs/bpf/nsimA", + dev=simA['ifname']) + progA = bpf_pinned("/sys/fs/bpf/nsimA") + bpftool_prog_load("sample_map_ret0.o", "/sys/fs/bpf/nsimB", + dev=simB1['ifname']) + progB = bpf_pinned("/sys/fs/bpf/nsimB") + + simA.set_xdp(progA, "offload", JSON=False) + for d in simB: + d.set_xdp(progB, "offload", JSON=False) + + start_test("Test multi-dev ASIC cross-dev replace...") + ret, _ = simA.set_xdp(progB, "offload", force=True, JSON=False, fail=False) + fail(ret == 0, "cross-ASIC program allowed") + for d in simB: + ret, _ = d.set_xdp(progA, "offload", force=True, JSON=False, fail=False) + fail(ret == 0, "cross-ASIC program allowed") + + start_test("Test multi-dev ASIC cross-dev install...") + for d in sims: + d.unset_xdp("offload") + + ret, _, err = simA.set_xdp(progB, "offload", force=True, JSON=False, + fail=False, include_stderr=True) + fail(ret == 0, "cross-ASIC program allowed") + check_extack_nsim(err, "program bound to different dev.", args) + for d in simB: + ret, _, err = d.set_xdp(progA, "offload", force=True, 
JSON=False, + fail=False, include_stderr=True) + fail(ret == 0, "cross-ASIC program allowed") + check_extack_nsim(err, "program bound to different dev.", args) + + start_test("Test multi-dev ASIC cross-dev map reuse...") + + mapA = bpftool("prog show %s" % (progA))[1]["map_ids"][0] + mapB = bpftool("prog show %s" % (progB))[1]["map_ids"][0] + + ret, _ = bpftool_prog_load("sample_map_ret0.o", "/sys/fs/bpf/nsimB_", + dev=simB3['ifname'], + maps=["idx 0 id %d" % (mapB)], + fail=False) + fail(ret != 0, "couldn't reuse a map on the same ASIC") + rm("/sys/fs/bpf/nsimB_") + + ret, _, err = bpftool_prog_load("sample_map_ret0.o", "/sys/fs/bpf/nsimA_", + dev=simA['ifname'], + maps=["idx 0 id %d" % (mapB)], + fail=False, include_stderr=True) + fail(ret == 0, "could reuse a map on a different ASIC") + fail(err.count("offload device mismatch between prog and map") == 0, + "error message missing for cross-ASIC map") + + ret, _, err = bpftool_prog_load("sample_map_ret0.o", "/sys/fs/bpf/nsimB_", + dev=simB1['ifname'], + maps=["idx 0 id %d" % (mapA)], + fail=False, include_stderr=True) + fail(ret == 0, "could reuse a map on a different ASIC") + fail(err.count("offload device mismatch between prog and map") == 0, + "error message missing for cross-ASIC map") + + start_test("Test multi-dev ASIC cross-dev destruction...") + bpftool_prog_list_wait(expected=2) + + simA.remove() + bpftool_prog_list_wait(expected=1) + + ifnameB = bpftool("prog show %s" % (progB))[1]["dev"]["ifname"] + fail(ifnameB != simB1['ifname'], "program not bound to original device") + simB1.remove() + bpftool_prog_list_wait(expected=1) + + start_test("Test multi-dev ASIC cross-dev destruction - move...") + ifnameB = bpftool("prog show %s" % (progB))[1]["dev"]["ifname"] + fail(ifnameB not in (simB2['ifname'], simB3['ifname']), + "program not bound to remaining devices") + + simB2.remove() + ifnameB = bpftool("prog show %s" % (progB))[1]["dev"]["ifname"] + fail(ifnameB != simB3['ifname'], "program not bound to remaining device") + + simB3.remove() + bpftool_prog_list_wait(expected=0) + + start_test("Test multi-dev ASIC cross-dev destruction - orphaned...") + ret, out = bpftool("prog show %s" % (progB), fail=False) + fail(ret == 0, "got information about orphaned program") + fail("error" not in out, "no error reported for get info on orphaned") + fail(out["error"] != "can't get prog info: No such device", + "wrong error for get info on orphaned") + print("%s: OK" % (os.path.basename(__file__))) finally: diff --git a/tools/testing/selftests/bpf/test_select_reuseport.c b/tools/testing/selftests/bpf/test_select_reuseport.c new file mode 100644 index 000000000000..75646d9b34aa --- /dev/null +++ b/tools/testing/selftests/bpf/test_select_reuseport.c @@ -0,0 +1,688 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2018 Facebook */ + +#include <stdlib.h> +#include <unistd.h> +#include <stdbool.h> +#include <string.h> +#include <errno.h> +#include <assert.h> +#include <fcntl.h> +#include <linux/bpf.h> +#include <linux/err.h> +#include <linux/types.h> +#include <linux/if_ether.h> +#include <sys/types.h> +#include <sys/epoll.h> +#include <sys/socket.h> +#include <netinet/in.h> +#include <bpf/bpf.h> +#include <bpf/libbpf.h> +#include "bpf_rlimit.h" +#include "bpf_util.h" +#include "test_select_reuseport_common.h" + +#define MIN_TCPHDR_LEN 20 +#define UDPHDR_LEN 8 + +#define TCP_SYNCOOKIE_SYSCTL "/proc/sys/net/ipv4/tcp_syncookies" +#define TCP_FO_SYSCTL "/proc/sys/net/ipv4/tcp_fastopen" +#define REUSEPORT_ARRAY_SIZE 32 + +static int result_map, 
tmp_index_ovr_map, linum_map, data_check_map; +static enum result expected_results[NR_RESULTS]; +static int sk_fds[REUSEPORT_ARRAY_SIZE]; +static int reuseport_array, outer_map; +static int select_by_skb_data_prog; +static int saved_tcp_syncookie; +static struct bpf_object *obj; +static int saved_tcp_fo; +static __u32 index_zero; +static int epfd; + +static union sa46 { + struct sockaddr_in6 v6; + struct sockaddr_in v4; + sa_family_t family; +} srv_sa; + +#define CHECK(condition, tag, format...) ({ \ + int __ret = !!(condition); \ + if (__ret) { \ + printf("%s(%d):FAIL:%s ", __func__, __LINE__, tag); \ + printf(format); \ + exit(-1); \ + } \ +}) + +static void create_maps(void) +{ + struct bpf_create_map_attr attr = {}; + + /* Creating reuseport_array */ + attr.name = "reuseport_array"; + attr.map_type = BPF_MAP_TYPE_REUSEPORT_SOCKARRAY; + attr.key_size = sizeof(__u32); + attr.value_size = sizeof(__u32); + attr.max_entries = REUSEPORT_ARRAY_SIZE; + + reuseport_array = bpf_create_map_xattr(&attr); + CHECK(reuseport_array == -1, "creating reuseport_array", + "reuseport_array:%d errno:%d\n", reuseport_array, errno); + + /* Creating outer_map */ + attr.name = "outer_map"; + attr.map_type = BPF_MAP_TYPE_ARRAY_OF_MAPS; + attr.key_size = sizeof(__u32); + attr.value_size = sizeof(__u32); + attr.max_entries = 1; + attr.inner_map_fd = reuseport_array; + outer_map = bpf_create_map_xattr(&attr); + CHECK(outer_map == -1, "creating outer_map", + "outer_map:%d errno:%d\n", outer_map, errno); +} + +static void prepare_bpf_obj(void) +{ + struct bpf_program *prog; + struct bpf_map *map; + int err; + struct bpf_object_open_attr attr = { + .file = "test_select_reuseport_kern.o", + .prog_type = BPF_PROG_TYPE_SK_REUSEPORT, + }; + + obj = bpf_object__open_xattr(&attr); + CHECK(IS_ERR_OR_NULL(obj), "open test_select_reuseport_kern.o", + "obj:%p PTR_ERR(obj):%ld\n", obj, PTR_ERR(obj)); + + prog = bpf_program__next(NULL, obj); + CHECK(!prog, "get first bpf_program", "!prog\n"); + bpf_program__set_type(prog, attr.prog_type); + + map = bpf_object__find_map_by_name(obj, "outer_map"); + CHECK(!map, "find outer_map", "!map\n"); + err = bpf_map__reuse_fd(map, outer_map); + CHECK(err, "reuse outer_map", "err:%d\n", err); + + err = bpf_object__load(obj); + CHECK(err, "load bpf_object", "err:%d\n", err); + + select_by_skb_data_prog = bpf_program__fd(prog); + CHECK(select_by_skb_data_prog == -1, "get prog fd", + "select_by_skb_data_prog:%d\n", select_by_skb_data_prog); + + map = bpf_object__find_map_by_name(obj, "result_map"); + CHECK(!map, "find result_map", "!map\n"); + result_map = bpf_map__fd(map); + CHECK(result_map == -1, "get result_map fd", + "result_map:%d\n", result_map); + + map = bpf_object__find_map_by_name(obj, "tmp_index_ovr_map"); + CHECK(!map, "find tmp_index_ovr_map", "!map\n"); + tmp_index_ovr_map = bpf_map__fd(map); + CHECK(tmp_index_ovr_map == -1, "get tmp_index_ovr_map fd", + "tmp_index_ovr_map:%d\n", tmp_index_ovr_map); + + map = bpf_object__find_map_by_name(obj, "linum_map"); + CHECK(!map, "find linum_map", "!map\n"); + linum_map = bpf_map__fd(map); + CHECK(linum_map == -1, "get linum_map fd", + "linum_map:%d\n", linum_map); + + map = bpf_object__find_map_by_name(obj, "data_check_map"); + CHECK(!map, "find data_check_map", "!map\n"); + data_check_map = bpf_map__fd(map); + CHECK(data_check_map == -1, "get data_check_map fd", + "data_check_map:%d\n", data_check_map); +} + +static void sa46_init_loopback(union sa46 *sa, sa_family_t family) +{ + memset(sa, 0, sizeof(*sa)); + sa->family = family; + if 
(sa->family == AF_INET6) + sa->v6.sin6_addr = in6addr_loopback; + else + sa->v4.sin_addr.s_addr = htonl(INADDR_LOOPBACK); +} + +static void sa46_init_inany(union sa46 *sa, sa_family_t family) +{ + memset(sa, 0, sizeof(*sa)); + sa->family = family; + if (sa->family == AF_INET6) + sa->v6.sin6_addr = in6addr_any; + else + sa->v4.sin_addr.s_addr = INADDR_ANY; +} + +static int read_int_sysctl(const char *sysctl) +{ + char buf[16]; + int fd, ret; + + fd = open(sysctl, 0); + CHECK(fd == -1, "open(sysctl)", "sysctl:%s fd:%d errno:%d\n", + sysctl, fd, errno); + + ret = read(fd, buf, sizeof(buf)); + CHECK(ret <= 0, "read(sysctl)", "sysctl:%s ret:%d errno:%d\n", + sysctl, ret, errno); + close(fd); + + return atoi(buf); +} + +static void write_int_sysctl(const char *sysctl, int v) +{ + int fd, ret, size; + char buf[16]; + + fd = open(sysctl, O_RDWR); + CHECK(fd == -1, "open(sysctl)", "sysctl:%s fd:%d errno:%d\n", + sysctl, fd, errno); + + size = snprintf(buf, sizeof(buf), "%d", v); + ret = write(fd, buf, size); + CHECK(ret != size, "write(sysctl)", + "sysctl:%s ret:%d size:%d errno:%d\n", sysctl, ret, size, errno); + close(fd); +} + +static void restore_sysctls(void) +{ + write_int_sysctl(TCP_FO_SYSCTL, saved_tcp_fo); + write_int_sysctl(TCP_SYNCOOKIE_SYSCTL, saved_tcp_syncookie); +} + +static void enable_fastopen(void) +{ + int fo; + + fo = read_int_sysctl(TCP_FO_SYSCTL); + write_int_sysctl(TCP_FO_SYSCTL, fo | 7); +} + +static void enable_syncookie(void) +{ + write_int_sysctl(TCP_SYNCOOKIE_SYSCTL, 2); +} + +static void disable_syncookie(void) +{ + write_int_sysctl(TCP_SYNCOOKIE_SYSCTL, 0); +} + +static __u32 get_linum(void) +{ + __u32 linum; + int err; + + err = bpf_map_lookup_elem(linum_map, &index_zero, &linum); + CHECK(err == -1, "lookup_elem(linum_map)", "err:%d errno:%d\n", + err, errno); + + return linum; +} + +static void check_data(int type, sa_family_t family, const struct cmd *cmd, + int cli_fd) +{ + struct data_check expected = {}, result; + union sa46 cli_sa; + socklen_t addrlen; + int err; + + addrlen = sizeof(cli_sa); + err = getsockname(cli_fd, (struct sockaddr *)&cli_sa, + &addrlen); + CHECK(err == -1, "getsockname(cli_fd)", "err:%d errno:%d\n", + err, errno); + + err = bpf_map_lookup_elem(data_check_map, &index_zero, &result); + CHECK(err == -1, "lookup_elem(data_check_map)", "err:%d errno:%d\n", + err, errno); + + if (type == SOCK_STREAM) { + expected.len = MIN_TCPHDR_LEN; + expected.ip_protocol = IPPROTO_TCP; + } else { + expected.len = UDPHDR_LEN; + expected.ip_protocol = IPPROTO_UDP; + } + + if (family == AF_INET6) { + expected.eth_protocol = htons(ETH_P_IPV6); + expected.bind_inany = !srv_sa.v6.sin6_addr.s6_addr32[3] && + !srv_sa.v6.sin6_addr.s6_addr32[2] && + !srv_sa.v6.sin6_addr.s6_addr32[1] && + !srv_sa.v6.sin6_addr.s6_addr32[0]; + + memcpy(&expected.skb_addrs[0], cli_sa.v6.sin6_addr.s6_addr32, + sizeof(cli_sa.v6.sin6_addr)); + memcpy(&expected.skb_addrs[4], &in6addr_loopback, + sizeof(in6addr_loopback)); + expected.skb_ports[0] = cli_sa.v6.sin6_port; + expected.skb_ports[1] = srv_sa.v6.sin6_port; + } else { + expected.eth_protocol = htons(ETH_P_IP); + expected.bind_inany = !srv_sa.v4.sin_addr.s_addr; + + expected.skb_addrs[0] = cli_sa.v4.sin_addr.s_addr; + expected.skb_addrs[1] = htonl(INADDR_LOOPBACK); + expected.skb_ports[0] = cli_sa.v4.sin_port; + expected.skb_ports[1] = srv_sa.v4.sin_port; + } + + if (memcmp(&result, &expected, offsetof(struct data_check, + equal_check_end))) { + printf("unexpected data_check\n"); + printf(" result: (0x%x, %u, %u)\n", + 
result.eth_protocol, result.ip_protocol, + result.bind_inany); + printf("expected: (0x%x, %u, %u)\n", + expected.eth_protocol, expected.ip_protocol, + expected.bind_inany); + CHECK(1, "data_check result != expected", + "bpf_prog_linum:%u\n", get_linum()); + } + + CHECK(!result.hash, "data_check result.hash empty", + "result.hash:%u\n", result.hash); + + expected.len += cmd ? sizeof(*cmd) : 0; + if (type == SOCK_STREAM) + CHECK(expected.len > result.len, "expected.len > result.len", + "expected.len:%u result.len:%u bpf_prog_linum:%u\n", + expected.len, result.len, get_linum()); + else + CHECK(expected.len != result.len, "expected.len != result.len", + "expected.len:%u result.len:%u bpf_prog_linum:%u\n", + expected.len, result.len, get_linum()); +} + +static void check_results(void) +{ + __u32 results[NR_RESULTS]; + __u32 i, broken = 0; + int err; + + for (i = 0; i < NR_RESULTS; i++) { + err = bpf_map_lookup_elem(result_map, &i, &results[i]); + CHECK(err == -1, "lookup_elem(result_map)", + "i:%u err:%d errno:%d\n", i, err, errno); + } + + for (i = 0; i < NR_RESULTS; i++) { + if (results[i] != expected_results[i]) { + broken = i; + break; + } + } + + if (i == NR_RESULTS) + return; + + printf("unexpected result\n"); + printf(" result: ["); + printf("%u", results[0]); + for (i = 1; i < NR_RESULTS; i++) + printf(", %u", results[i]); + printf("]\n"); + + printf("expected: ["); + printf("%u", expected_results[0]); + for (i = 1; i < NR_RESULTS; i++) + printf(", %u", expected_results[i]); + printf("]\n"); + + CHECK(expected_results[broken] != results[broken], + "unexpected result", + "expected_results[%u] != results[%u] bpf_prog_linum:%u\n", + broken, broken, get_linum()); +} + +static int send_data(int type, sa_family_t family, void *data, size_t len, + enum result expected) +{ + union sa46 cli_sa; + int fd, err; + + fd = socket(family, type, 0); + CHECK(fd == -1, "socket()", "fd:%d errno:%d\n", fd, errno); + + sa46_init_loopback(&cli_sa, family); + err = bind(fd, (struct sockaddr *)&cli_sa, sizeof(cli_sa)); + CHECK(err == -1, "bind(cli_sa)", "err:%d errno:%d\n", err, errno); + + err = sendto(fd, data, len, MSG_FASTOPEN, (struct sockaddr *)&srv_sa, + sizeof(srv_sa)); + CHECK(err != len && expected >= PASS, + "sendto()", "family:%u err:%d errno:%d expected:%d\n", + family, err, errno, expected); + + return fd; +} + +static void do_test(int type, sa_family_t family, struct cmd *cmd, + enum result expected) +{ + int nev, srv_fd, cli_fd; + struct epoll_event ev; + struct cmd rcv_cmd; + ssize_t nread; + + cli_fd = send_data(type, family, cmd, cmd ? sizeof(*cmd) : 0, + expected); + nev = epoll_wait(epfd, &ev, 1, expected >= PASS ? 5 : 0); + CHECK((nev <= 0 && expected >= PASS) || + (nev > 0 && expected < PASS), + "nev <> expected", + "nev:%d expected:%d type:%d family:%d data:(%d, %d)\n", + nev, expected, type, family, + cmd ? cmd->reuseport_index : -1, + cmd ? 
cmd->pass_on_failure : -1); + check_results(); + check_data(type, family, cmd, cli_fd); + + if (expected < PASS) + return; + + CHECK(expected != PASS_ERR_SK_SELECT_REUSEPORT && + cmd->reuseport_index != ev.data.u32, + "check cmd->reuseport_index", + "cmd:(%u, %u) ev.data.u32:%u\n", + cmd->pass_on_failure, cmd->reuseport_index, ev.data.u32); + + srv_fd = sk_fds[ev.data.u32]; + if (type == SOCK_STREAM) { + int new_fd = accept(srv_fd, NULL, 0); + + CHECK(new_fd == -1, "accept(srv_fd)", + "ev.data.u32:%u new_fd:%d errno:%d\n", + ev.data.u32, new_fd, errno); + + nread = recv(new_fd, &rcv_cmd, sizeof(rcv_cmd), MSG_DONTWAIT); + CHECK(nread != sizeof(rcv_cmd), + "recv(new_fd)", + "ev.data.u32:%u nread:%zd sizeof(rcv_cmd):%zu errno:%d\n", + ev.data.u32, nread, sizeof(rcv_cmd), errno); + + close(new_fd); + } else { + nread = recv(srv_fd, &rcv_cmd, sizeof(rcv_cmd), MSG_DONTWAIT); + CHECK(nread != sizeof(rcv_cmd), + "recv(sk_fds)", + "ev.data.u32:%u nread:%zd sizeof(rcv_cmd):%zu errno:%d\n", + ev.data.u32, nread, sizeof(rcv_cmd), errno); + } + + close(cli_fd); +} + +static void test_err_inner_map(int type, sa_family_t family) +{ + struct cmd cmd = { + .reuseport_index = 0, + .pass_on_failure = 0, + }; + + printf("%s: ", __func__); + expected_results[DROP_ERR_INNER_MAP]++; + do_test(type, family, &cmd, DROP_ERR_INNER_MAP); + printf("OK\n"); +} + +static void test_err_skb_data(int type, sa_family_t family) +{ + printf("%s: ", __func__); + expected_results[DROP_ERR_SKB_DATA]++; + do_test(type, family, NULL, DROP_ERR_SKB_DATA); + printf("OK\n"); +} + +static void test_err_sk_select_port(int type, sa_family_t family) +{ + struct cmd cmd = { + .reuseport_index = REUSEPORT_ARRAY_SIZE, + .pass_on_failure = 0, + }; + + printf("%s: ", __func__); + expected_results[DROP_ERR_SK_SELECT_REUSEPORT]++; + do_test(type, family, &cmd, DROP_ERR_SK_SELECT_REUSEPORT); + printf("OK\n"); +} + +static void test_pass(int type, sa_family_t family) +{ + struct cmd cmd; + int i; + + printf("%s: ", __func__); + cmd.pass_on_failure = 0; + for (i = 0; i < REUSEPORT_ARRAY_SIZE; i++) { + expected_results[PASS]++; + cmd.reuseport_index = i; + do_test(type, family, &cmd, PASS); + } + printf("OK\n"); +} + +static void test_syncookie(int type, sa_family_t family) +{ + int err, tmp_index = 1; + struct cmd cmd = { + .reuseport_index = 0, + .pass_on_failure = 0, + }; + + if (type != SOCK_STREAM) + return; + + printf("%s: ", __func__); + /* + * +1 for TCP-SYN and + * +1 for the TCP-ACK (ack the syncookie) + */ + expected_results[PASS] += 2; + enable_syncookie(); + /* + * Simulate TCP-SYN and TCP-ACK are handled by two different sk: + * TCP-SYN: select sk_fds[tmp_index = 1] tmp_index is from the + * tmp_index_ovr_map + * TCP-ACK: select sk_fds[reuseport_index = 0] reuseport_index + * is from the cmd.reuseport_index + */ + err = bpf_map_update_elem(tmp_index_ovr_map, &index_zero, + &tmp_index, BPF_ANY); + CHECK(err == -1, "update_elem(tmp_index_ovr_map, 0, 1)", + "err:%d errno:%d\n", err, errno); + do_test(type, family, &cmd, PASS); + err = bpf_map_lookup_elem(tmp_index_ovr_map, &index_zero, + &tmp_index); + CHECK(err == -1 || tmp_index != -1, + "lookup_elem(tmp_index_ovr_map)", + "err:%d errno:%d tmp_index:%d\n", + err, errno, tmp_index); + disable_syncookie(); + printf("OK\n"); +} + +static void test_pass_on_err(int type, sa_family_t family) +{ + struct cmd cmd = { + .reuseport_index = REUSEPORT_ARRAY_SIZE, + .pass_on_failure = 1, + }; + + printf("%s: ", __func__); + expected_results[PASS_ERR_SK_SELECT_REUSEPORT] += 1; + do_test(type, family, 
&cmd, PASS_ERR_SK_SELECT_REUSEPORT); + printf("OK\n"); +} + +static void prepare_sk_fds(int type, sa_family_t family, bool inany) +{ + const int first = REUSEPORT_ARRAY_SIZE - 1; + int i, err, optval = 1; + struct epoll_event ev; + socklen_t addrlen; + + if (inany) + sa46_init_inany(&srv_sa, family); + else + sa46_init_loopback(&srv_sa, family); + addrlen = sizeof(srv_sa); + + /* + * The sk_fds[] is filled from the back such that the order + * is exactly opposite to the (struct sock_reuseport *)reuse->socks[]. + */ + for (i = first; i >= 0; i--) { + sk_fds[i] = socket(family, type, 0); + CHECK(sk_fds[i] == -1, "socket()", "sk_fds[%d]:%d errno:%d\n", + i, sk_fds[i], errno); + err = setsockopt(sk_fds[i], SOL_SOCKET, SO_REUSEPORT, + &optval, sizeof(optval)); + CHECK(err == -1, "setsockopt(SO_REUSEPORT)", + "sk_fds[%d] err:%d errno:%d\n", + i, err, errno); + + if (i == first) { + err = setsockopt(sk_fds[i], SOL_SOCKET, + SO_ATTACH_REUSEPORT_EBPF, + &select_by_skb_data_prog, + sizeof(select_by_skb_data_prog)); + CHECK(err == -1, "setsockopt(SO_ATTACH_REUSEPORT_EBPF)", + "err:%d errno:%d\n", err, errno); + } + + err = bind(sk_fds[i], (struct sockaddr *)&srv_sa, addrlen); + CHECK(err == -1, "bind()", "sk_fds[%d] err:%d errno:%d\n", + i, err, errno); + + if (type == SOCK_STREAM) { + err = listen(sk_fds[i], 10); + CHECK(err == -1, "listen()", + "sk_fds[%d] err:%d errno:%d\n", + i, err, errno); + } + + err = bpf_map_update_elem(reuseport_array, &i, &sk_fds[i], + BPF_NOEXIST); + CHECK(err == -1, "update_elem(reuseport_array)", + "sk_fds[%d] err:%d errno:%d\n", i, err, errno); + + if (i == first) { + socklen_t addrlen = sizeof(srv_sa); + + err = getsockname(sk_fds[i], (struct sockaddr *)&srv_sa, + &addrlen); + CHECK(err == -1, "getsockname()", + "sk_fds[%d] err:%d errno:%d\n", i, err, errno); + } + } + + epfd = epoll_create(1); + CHECK(epfd == -1, "epoll_create(1)", + "epfd:%d errno:%d\n", epfd, errno); + + ev.events = EPOLLIN; + for (i = 0; i < REUSEPORT_ARRAY_SIZE; i++) { + ev.data.u32 = i; + err = epoll_ctl(epfd, EPOLL_CTL_ADD, sk_fds[i], &ev); + CHECK(err, "epoll_ctl(EPOLL_CTL_ADD)", "sk_fds[%d]\n", i); + } +} + +static void setup_per_test(int type, unsigned short family, bool inany) +{ + int ovr = -1, err; + + prepare_sk_fds(type, family, inany); + err = bpf_map_update_elem(tmp_index_ovr_map, &index_zero, &ovr, + BPF_ANY); + CHECK(err == -1, "update_elem(tmp_index_ovr_map, 0, -1)", + "err:%d errno:%d\n", err, errno); +} + +static void cleanup_per_test(void) +{ + int i, err; + + for (i = 0; i < REUSEPORT_ARRAY_SIZE; i++) + close(sk_fds[i]); + close(epfd); + + err = bpf_map_delete_elem(outer_map, &index_zero); + CHECK(err == -1, "delete_elem(outer_map)", + "err:%d errno:%d\n", err, errno); +} + +static void cleanup(void) +{ + close(outer_map); + close(reuseport_array); + bpf_object__close(obj); +} + +static void test_all(void) +{ + /* Extra SOCK_STREAM to test bind_inany==true */ + const int types[] = { SOCK_STREAM, SOCK_DGRAM, SOCK_STREAM }; + const char * const type_strings[] = { "TCP", "UDP", "TCP" }; + const char * const family_strings[] = { "IPv6", "IPv4" }; + const unsigned short families[] = { AF_INET6, AF_INET }; + const bool bind_inany[] = { false, false, true }; + int t, f, err; + + for (f = 0; f < ARRAY_SIZE(families); f++) { + unsigned short family = families[f]; + + for (t = 0; t < ARRAY_SIZE(types); t++) { + bool inany = bind_inany[t]; + int type = types[t]; + + printf("######## %s/%s %s ########\n", + family_strings[f], type_strings[t], + inany ? 
" INANY " : "LOOPBACK"); + + setup_per_test(type, family, inany); + + test_err_inner_map(type, family); + + /* Install reuseport_array to the outer_map */ + err = bpf_map_update_elem(outer_map, &index_zero, + &reuseport_array, BPF_ANY); + CHECK(err == -1, "update_elem(outer_map)", + "err:%d errno:%d\n", err, errno); + + test_err_skb_data(type, family); + test_err_sk_select_port(type, family); + test_pass(type, family); + test_syncookie(type, family); + test_pass_on_err(type, family); + + cleanup_per_test(); + printf("\n"); + } + } +} + +int main(int argc, const char **argv) +{ + create_maps(); + prepare_bpf_obj(); + saved_tcp_fo = read_int_sysctl(TCP_FO_SYSCTL); + saved_tcp_syncookie = read_int_sysctl(TCP_SYNCOOKIE_SYSCTL); + enable_fastopen(); + disable_syncookie(); + atexit(restore_sysctls); + + test_all(); + + cleanup(); + return 0; +} diff --git a/tools/testing/selftests/bpf/test_select_reuseport_common.h b/tools/testing/selftests/bpf/test_select_reuseport_common.h new file mode 100644 index 000000000000..08eb2a9f145f --- /dev/null +++ b/tools/testing/selftests/bpf/test_select_reuseport_common.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright (c) 2018 Facebook */ + +#ifndef __TEST_SELECT_REUSEPORT_COMMON_H +#define __TEST_SELECT_REUSEPORT_COMMON_H + +#include <linux/types.h> + +enum result { + DROP_ERR_INNER_MAP, + DROP_ERR_SKB_DATA, + DROP_ERR_SK_SELECT_REUSEPORT, + DROP_MISC, + PASS, + PASS_ERR_SK_SELECT_REUSEPORT, + NR_RESULTS, +}; + +struct cmd { + __u32 reuseport_index; + __u32 pass_on_failure; +}; + +struct data_check { + __u32 ip_protocol; + __u32 skb_addrs[8]; + __u16 skb_ports[2]; + __u16 eth_protocol; + __u8 bind_inany; + __u8 equal_check_end[0]; + + __u32 len; + __u32 hash; +}; + +#endif diff --git a/tools/testing/selftests/bpf/test_select_reuseport_kern.c b/tools/testing/selftests/bpf/test_select_reuseport_kern.c new file mode 100644 index 000000000000..5b54ec637ada --- /dev/null +++ b/tools/testing/selftests/bpf/test_select_reuseport_kern.c @@ -0,0 +1,180 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2018 Facebook */ + +#include <stdlib.h> +#include <linux/in.h> +#include <linux/ip.h> +#include <linux/ipv6.h> +#include <linux/tcp.h> +#include <linux/udp.h> +#include <linux/bpf.h> +#include <linux/types.h> +#include <linux/if_ether.h> + +#include "bpf_endian.h" +#include "bpf_helpers.h" +#include "test_select_reuseport_common.h" + +int _version SEC("version") = 1; + +#ifndef offsetof +#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) +#endif + +struct bpf_map_def SEC("maps") outer_map = { + .type = BPF_MAP_TYPE_ARRAY_OF_MAPS, + .key_size = sizeof(__u32), + .value_size = sizeof(__u32), + .max_entries = 1, +}; + +struct bpf_map_def SEC("maps") result_map = { + .type = BPF_MAP_TYPE_ARRAY, + .key_size = sizeof(__u32), + .value_size = sizeof(__u32), + .max_entries = NR_RESULTS, +}; + +struct bpf_map_def SEC("maps") tmp_index_ovr_map = { + .type = BPF_MAP_TYPE_ARRAY, + .key_size = sizeof(__u32), + .value_size = sizeof(int), + .max_entries = 1, +}; + +struct bpf_map_def SEC("maps") linum_map = { + .type = BPF_MAP_TYPE_ARRAY, + .key_size = sizeof(__u32), + .value_size = sizeof(__u32), + .max_entries = 1, +}; + +struct bpf_map_def SEC("maps") data_check_map = { + .type = BPF_MAP_TYPE_ARRAY, + .key_size = sizeof(__u32), + .value_size = sizeof(struct data_check), + .max_entries = 1, +}; + +#define GOTO_DONE(_result) ({ \ + result = (_result); \ + linum = __LINE__; \ + goto done; \ +}) + +SEC("select_by_skb_data") +int 
_select_by_skb_data(struct sk_reuseport_md *reuse_md) +{ + __u32 linum, index = 0, flags = 0, index_zero = 0; + __u32 *result_cnt, *linum_value; + struct data_check data_check = {}; + struct cmd *cmd, cmd_copy; + void *data, *data_end; + void *reuseport_array; + enum result result; + int *index_ovr; + int err; + + data = reuse_md->data; + data_end = reuse_md->data_end; + data_check.len = reuse_md->len; + data_check.eth_protocol = reuse_md->eth_protocol; + data_check.ip_protocol = reuse_md->ip_protocol; + data_check.hash = reuse_md->hash; + data_check.bind_inany = reuse_md->bind_inany; + if (data_check.eth_protocol == bpf_htons(ETH_P_IP)) { + if (bpf_skb_load_bytes_relative(reuse_md, + offsetof(struct iphdr, saddr), + data_check.skb_addrs, 8, + BPF_HDR_START_NET)) + GOTO_DONE(DROP_MISC); + } else { + if (bpf_skb_load_bytes_relative(reuse_md, + offsetof(struct ipv6hdr, saddr), + data_check.skb_addrs, 32, + BPF_HDR_START_NET)) + GOTO_DONE(DROP_MISC); + } + + /* + * The ip_protocol could be a compile time decision + * if the bpf_prog.o is dedicated to either TCP or + * UDP. + * + * Otherwise, reuse_md->ip_protocol or + * the protocol field in the iphdr can be used. + */ + if (data_check.ip_protocol == IPPROTO_TCP) { + struct tcphdr *th = data; + + if (th + 1 > data_end) + GOTO_DONE(DROP_MISC); + + data_check.skb_ports[0] = th->source; + data_check.skb_ports[1] = th->dest; + + if ((th->doff << 2) + sizeof(*cmd) > data_check.len) + GOTO_DONE(DROP_ERR_SKB_DATA); + if (bpf_skb_load_bytes(reuse_md, th->doff << 2, &cmd_copy, + sizeof(cmd_copy))) + GOTO_DONE(DROP_MISC); + cmd = &cmd_copy; + } else if (data_check.ip_protocol == IPPROTO_UDP) { + struct udphdr *uh = data; + + if (uh + 1 > data_end) + GOTO_DONE(DROP_MISC); + + data_check.skb_ports[0] = uh->source; + data_check.skb_ports[1] = uh->dest; + + if (sizeof(struct udphdr) + sizeof(*cmd) > data_check.len) + GOTO_DONE(DROP_ERR_SKB_DATA); + if (data + sizeof(struct udphdr) + sizeof(*cmd) > data_end) { + if (bpf_skb_load_bytes(reuse_md, sizeof(struct udphdr), + &cmd_copy, sizeof(cmd_copy))) + GOTO_DONE(DROP_MISC); + cmd = &cmd_copy; + } else { + cmd = data + sizeof(struct udphdr); + } + } else { + GOTO_DONE(DROP_MISC); + } + + reuseport_array = bpf_map_lookup_elem(&outer_map, &index_zero); + if (!reuseport_array) + GOTO_DONE(DROP_ERR_INNER_MAP); + + index = cmd->reuseport_index; + index_ovr = bpf_map_lookup_elem(&tmp_index_ovr_map, &index_zero); + if (!index_ovr) + GOTO_DONE(DROP_MISC); + + if (*index_ovr != -1) { + index = *index_ovr; + *index_ovr = -1; + } + err = bpf_sk_select_reuseport(reuse_md, reuseport_array, &index, + flags); + if (!err) + GOTO_DONE(PASS); + + if (cmd->pass_on_failure) + GOTO_DONE(PASS_ERR_SK_SELECT_REUSEPORT); + else + GOTO_DONE(DROP_ERR_SK_SELECT_REUSEPORT); + +done: + result_cnt = bpf_map_lookup_elem(&result_map, &result); + if (!result_cnt) + return SK_DROP; + + bpf_map_update_elem(&linum_map, &index_zero, &linum, BPF_ANY); + bpf_map_update_elem(&data_check_map, &index_zero, &data_check, BPF_ANY); + + (*result_cnt)++; + return result < PASS ? 
SK_DROP : SK_PASS; +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/test_skb_cgroup_id.sh b/tools/testing/selftests/bpf/test_skb_cgroup_id.sh new file mode 100755 index 000000000000..42544a969abc --- /dev/null +++ b/tools/testing/selftests/bpf/test_skb_cgroup_id.sh @@ -0,0 +1,62 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 +# Copyright (c) 2018 Facebook + +set -eu + +wait_for_ip() +{ + local _i + echo -n "Wait for testing link-local IP to become available " + for _i in $(seq ${MAX_PING_TRIES}); do + echo -n "." + if ping -6 -q -c 1 -W 1 ff02::1%${TEST_IF} >/dev/null 2>&1; then + echo " OK" + return + fi + sleep 1 + done + echo 1>&2 "ERROR: Timeout waiting for test IP to become available." + exit 1 +} + +setup() +{ + # Create testing interfaces not to interfere with current environment. + ip link add dev ${TEST_IF} type veth peer name ${TEST_IF_PEER} + ip link set ${TEST_IF} up + ip link set ${TEST_IF_PEER} up + + wait_for_ip + + tc qdisc add dev ${TEST_IF} clsact + tc filter add dev ${TEST_IF} egress bpf obj ${BPF_PROG_OBJ} \ + sec ${BPF_PROG_SECTION} da + + BPF_PROG_ID=$(tc filter show dev ${TEST_IF} egress | \ + awk '/ id / {sub(/.* id /, "", $0); print($1)}') +} + +cleanup() +{ + ip link del ${TEST_IF} 2>/dev/null || : + ip link del ${TEST_IF_PEER} 2>/dev/null || : +} + +main() +{ + trap cleanup EXIT 2 3 6 15 + setup + ${PROG} ${TEST_IF} ${BPF_PROG_ID} +} + +DIR=$(dirname $0) +TEST_IF="test_cgid_1" +TEST_IF_PEER="test_cgid_2" +MAX_PING_TRIES=5 +BPF_PROG_OBJ="${DIR}/test_skb_cgroup_id_kern.o" +BPF_PROG_SECTION="cgroup_id_logger" +BPF_PROG_ID=0 +PROG="${DIR}/test_skb_cgroup_id_user" + +main diff --git a/tools/testing/selftests/bpf/test_skb_cgroup_id_kern.c b/tools/testing/selftests/bpf/test_skb_cgroup_id_kern.c new file mode 100644 index 000000000000..68cf9829f5a7 --- /dev/null +++ b/tools/testing/selftests/bpf/test_skb_cgroup_id_kern.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Facebook + +#include <linux/bpf.h> +#include <linux/pkt_cls.h> + +#include <string.h> + +#include "bpf_helpers.h" + +#define NUM_CGROUP_LEVELS 4 + +struct bpf_map_def SEC("maps") cgroup_ids = { + .type = BPF_MAP_TYPE_ARRAY, + .key_size = sizeof(__u32), + .value_size = sizeof(__u64), + .max_entries = NUM_CGROUP_LEVELS, +}; + +static __always_inline void log_nth_level(struct __sk_buff *skb, __u32 level) +{ + __u64 id; + + /* [1] &level passed to external function that may change it, it's + * incompatible with loop unroll. + */ + id = bpf_skb_ancestor_cgroup_id(skb, level); + bpf_map_update_elem(&cgroup_ids, &level, &id, 0); +} + +SEC("cgroup_id_logger") +int log_cgroup_id(struct __sk_buff *skb) +{ + /* Loop unroll can't be used here due to [1]. Unrolling manually. + * Number of calls should be in sync with NUM_CGROUP_LEVELS. 
+ */ + log_nth_level(skb, 0); + log_nth_level(skb, 1); + log_nth_level(skb, 2); + log_nth_level(skb, 3); + + return TC_ACT_OK; +} + +int _version SEC("version") = 1; + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/test_skb_cgroup_id_user.c b/tools/testing/selftests/bpf/test_skb_cgroup_id_user.c new file mode 100644 index 000000000000..c121cc59f314 --- /dev/null +++ b/tools/testing/selftests/bpf/test_skb_cgroup_id_user.c @@ -0,0 +1,187 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Facebook + +#include <stdlib.h> +#include <string.h> +#include <unistd.h> + +#include <arpa/inet.h> +#include <net/if.h> +#include <netinet/in.h> +#include <sys/socket.h> +#include <sys/types.h> + + +#include <bpf/bpf.h> +#include <bpf/libbpf.h> + +#include "bpf_rlimit.h" +#include "cgroup_helpers.h" + +#define CGROUP_PATH "/skb_cgroup_test" +#define NUM_CGROUP_LEVELS 4 + +/* RFC 4291, Section 2.7.1 */ +#define LINKLOCAL_MULTICAST "ff02::1" + +static int mk_dst_addr(const char *ip, const char *iface, + struct sockaddr_in6 *dst) +{ + memset(dst, 0, sizeof(*dst)); + + dst->sin6_family = AF_INET6; + dst->sin6_port = htons(1025); + + if (inet_pton(AF_INET6, ip, &dst->sin6_addr) != 1) { + log_err("Invalid IPv6: %s", ip); + return -1; + } + + dst->sin6_scope_id = if_nametoindex(iface); + if (!dst->sin6_scope_id) { + log_err("Failed to get index of iface: %s", iface); + return -1; + } + + return 0; +} + +static int send_packet(const char *iface) +{ + struct sockaddr_in6 dst; + char msg[] = "msg"; + int err = 0; + int fd = -1; + + if (mk_dst_addr(LINKLOCAL_MULTICAST, iface, &dst)) + goto err; + + fd = socket(AF_INET6, SOCK_DGRAM, 0); + if (fd == -1) { + log_err("Failed to create UDP socket"); + goto err; + } + + if (sendto(fd, &msg, sizeof(msg), 0, (const struct sockaddr *)&dst, + sizeof(dst)) == -1) { + log_err("Failed to send datagram"); + goto err; + } + + goto out; +err: + err = -1; +out: + if (fd >= 0) + close(fd); + return err; +} + +int get_map_fd_by_prog_id(int prog_id) +{ + struct bpf_prog_info info = {}; + __u32 info_len = sizeof(info); + __u32 map_ids[1]; + int prog_fd = -1; + int map_fd = -1; + + prog_fd = bpf_prog_get_fd_by_id(prog_id); + if (prog_fd < 0) { + log_err("Failed to get fd by prog id %d", prog_id); + goto err; + } + + info.nr_map_ids = 1; + info.map_ids = (__u64) (unsigned long) map_ids; + + if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len)) { + log_err("Failed to get info by prog fd %d", prog_fd); + goto err; + } + + if (!info.nr_map_ids) { + log_err("No maps found for prog fd %d", prog_fd); + goto err; + } + + map_fd = bpf_map_get_fd_by_id(map_ids[0]); + if (map_fd < 0) + log_err("Failed to get fd by map id %d", map_ids[0]); +err: + if (prog_fd >= 0) + close(prog_fd); + return map_fd; +} + +int check_ancestor_cgroup_ids(int prog_id) +{ + __u64 actual_ids[NUM_CGROUP_LEVELS], expected_ids[NUM_CGROUP_LEVELS]; + __u32 level; + int err = 0; + int map_fd; + + expected_ids[0] = 0x100000001; /* root cgroup */ + expected_ids[1] = get_cgroup_id(""); + expected_ids[2] = get_cgroup_id(CGROUP_PATH); + expected_ids[3] = 0; /* non-existent cgroup */ + + map_fd = get_map_fd_by_prog_id(prog_id); + if (map_fd < 0) + goto err; + + for (level = 0; level < NUM_CGROUP_LEVELS; ++level) { + if (bpf_map_lookup_elem(map_fd, &level, &actual_ids[level])) { + log_err("Failed to lookup key %d", level); + goto err; + } + if (actual_ids[level] != expected_ids[level]) { + log_err("%llx (actual) != %llx (expected), level: %u\n", + actual_ids[level], expected_ids[level], 
level); + goto err; + } + } + + goto out; +err: + err = -1; +out: + if (map_fd >= 0) + close(map_fd); + return err; +} + +int main(int argc, char **argv) +{ + int cgfd = -1; + int err = 0; + + if (argc < 3) { + fprintf(stderr, "Usage: %s iface prog_id\n", argv[0]); + exit(EXIT_FAILURE); + } + + if (setup_cgroup_environment()) + goto err; + + cgfd = create_and_get_cgroup(CGROUP_PATH); + if (!cgfd) + goto err; + + if (join_cgroup(CGROUP_PATH)) + goto err; + + if (send_packet(argv[1])) + goto err; + + if (check_ancestor_cgroup_ids(atoi(argv[2]))) + goto err; + + goto out; +err: + err = -1; +out: + close(cgfd); + cleanup_cgroup_environment(); + printf("[%s]\n", err ? "FAIL" : "PASS"); + return err; +} diff --git a/tools/testing/selftests/bpf/test_sock.c b/tools/testing/selftests/bpf/test_sock.c index f4d99fabc56d..b8ebe2f58074 100644 --- a/tools/testing/selftests/bpf/test_sock.c +++ b/tools/testing/selftests/bpf/test_sock.c @@ -14,10 +14,7 @@ #include "cgroup_helpers.h" #include "bpf_rlimit.h" - -#ifndef ARRAY_SIZE -# define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) -#endif +#include "bpf_util.h" #define CG_PATH "/foo" #define MAX_INSNS 512 diff --git a/tools/testing/selftests/bpf/test_sock_addr.c b/tools/testing/selftests/bpf/test_sock_addr.c index a5e76b9219b9..aeeb76a54d63 100644 --- a/tools/testing/selftests/bpf/test_sock_addr.c +++ b/tools/testing/selftests/bpf/test_sock_addr.c @@ -20,15 +20,12 @@ #include "cgroup_helpers.h" #include "bpf_rlimit.h" +#include "bpf_util.h" #ifndef ENOTSUPP # define ENOTSUPP 524 #endif -#ifndef ARRAY_SIZE -# define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) -#endif - #define CG_PATH "/foo" #define CONNECT4_PROG_PATH "./connect4_prog.o" #define CONNECT6_PROG_PATH "./connect6_prog.o" @@ -998,8 +995,9 @@ int init_pktinfo(int domain, struct cmsghdr *cmsg) return 0; } -static int sendmsg_to_server(const struct sockaddr_storage *addr, - socklen_t addr_len, int set_cmsg, int *syscall_err) +static int sendmsg_to_server(int type, const struct sockaddr_storage *addr, + socklen_t addr_len, int set_cmsg, int flags, + int *syscall_err) { union { char buf[CMSG_SPACE(sizeof(struct in6_pktinfo))]; @@ -1022,7 +1020,7 @@ static int sendmsg_to_server(const struct sockaddr_storage *addr, goto err; } - fd = socket(domain, SOCK_DGRAM, 0); + fd = socket(domain, type, 0); if (fd == -1) { log_err("Failed to create client socket"); goto err; @@ -1052,7 +1050,7 @@ static int sendmsg_to_server(const struct sockaddr_storage *addr, } } - if (sendmsg(fd, &hdr, 0) != sizeof(data)) { + if (sendmsg(fd, &hdr, flags) != sizeof(data)) { log_err("Fail to send message to server"); *syscall_err = errno; goto err; @@ -1066,6 +1064,15 @@ out: return fd; } +static int fastconnect_to_server(const struct sockaddr_storage *addr, + socklen_t addr_len) +{ + int sendmsg_err; + + return sendmsg_to_server(SOCK_STREAM, addr, addr_len, /*set_cmsg*/0, + MSG_FASTOPEN, &sendmsg_err); +} + static int recvmsg_from_client(int sockfd, struct sockaddr_storage *src_addr) { struct timeval tv; @@ -1185,6 +1192,20 @@ static int run_connect_test_case(const struct sock_addr_test *test) if (cmp_local_ip(clientfd, &expected_src_addr)) goto err; + if (test->type == SOCK_STREAM) { + /* Test TCP Fast Open scenario */ + clientfd = fastconnect_to_server(&requested_addr, addr_len); + if (clientfd == -1) + goto err; + + /* Make sure src and dst addrs were overridden properly */ + if (cmp_peer_addr(clientfd, &expected_addr)) + goto err; + + if (cmp_local_ip(clientfd, &expected_src_addr)) + goto err; + } + goto out; err: err = -1; @@ 
-1222,8 +1243,9 @@ static int run_sendmsg_test_case(const struct sock_addr_test *test) if (clientfd >= 0) close(clientfd); - clientfd = sendmsg_to_server(&requested_addr, addr_len, - set_cmsg, &err); + clientfd = sendmsg_to_server(test->type, &requested_addr, + addr_len, set_cmsg, /*flags*/0, + &err); if (err) goto out; else if (clientfd == -1) diff --git a/tools/testing/selftests/bpf/test_socket_cookie.c b/tools/testing/selftests/bpf/test_socket_cookie.c new file mode 100644 index 000000000000..68e108e4687a --- /dev/null +++ b/tools/testing/selftests/bpf/test_socket_cookie.c @@ -0,0 +1,225 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2018 Facebook + +#include <string.h> +#include <unistd.h> + +#include <arpa/inet.h> +#include <netinet/in.h> +#include <sys/types.h> +#include <sys/socket.h> + +#include <bpf/bpf.h> +#include <bpf/libbpf.h> + +#include "bpf_rlimit.h" +#include "cgroup_helpers.h" + +#define CG_PATH "/foo" +#define SOCKET_COOKIE_PROG "./socket_cookie_prog.o" + +static int start_server(void) +{ + struct sockaddr_in6 addr; + int fd; + + fd = socket(AF_INET6, SOCK_STREAM, 0); + if (fd == -1) { + log_err("Failed to create server socket"); + goto out; + } + + memset(&addr, 0, sizeof(addr)); + addr.sin6_family = AF_INET6; + addr.sin6_addr = in6addr_loopback; + addr.sin6_port = 0; + + if (bind(fd, (const struct sockaddr *)&addr, sizeof(addr)) == -1) { + log_err("Failed to bind server socket"); + goto close_out; + } + + if (listen(fd, 128) == -1) { + log_err("Failed to listen on server socket"); + goto close_out; + } + + goto out; + +close_out: + close(fd); + fd = -1; +out: + return fd; +} + +static int connect_to_server(int server_fd) +{ + struct sockaddr_storage addr; + socklen_t len = sizeof(addr); + int fd; + + fd = socket(AF_INET6, SOCK_STREAM, 0); + if (fd == -1) { + log_err("Failed to create client socket"); + goto out; + } + + if (getsockname(server_fd, (struct sockaddr *)&addr, &len)) { + log_err("Failed to get server addr"); + goto close_out; + } + + if (connect(fd, (const struct sockaddr *)&addr, len) == -1) { + log_err("Fail to connect to server"); + goto close_out; + } + + goto out; + +close_out: + close(fd); + fd = -1; +out: + return fd; +} + +static int validate_map(struct bpf_map *map, int client_fd) +{ + __u32 cookie_expected_value; + struct sockaddr_in6 addr; + socklen_t len = sizeof(addr); + __u32 cookie_value; + __u64 cookie_key; + int err = 0; + int map_fd; + + if (!map) { + log_err("Map not found in BPF object"); + goto err; + } + + map_fd = bpf_map__fd(map); + + err = bpf_map_get_next_key(map_fd, NULL, &cookie_key); + if (err) { + log_err("Can't get cookie key from map"); + goto out; + } + + err = bpf_map_lookup_elem(map_fd, &cookie_key, &cookie_value); + if (err) { + log_err("Can't get cookie value from map"); + goto out; + } + + err = getsockname(client_fd, (struct sockaddr *)&addr, &len); + if (err) { + log_err("Can't get client local addr"); + goto out; + } + + cookie_expected_value = (ntohs(addr.sin6_port) << 8) | 0xFF; + if (cookie_value != cookie_expected_value) { + log_err("Unexpected value in map: %x != %x", cookie_value, + cookie_expected_value); + goto err; + } + + goto out; +err: + err = -1; +out: + return err; +} + +static int run_test(int cgfd) +{ + enum bpf_attach_type attach_type; + struct bpf_prog_load_attr attr; + struct bpf_program *prog; + struct bpf_object *pobj; + const char *prog_name; + int server_fd = -1; + int client_fd = -1; + int prog_fd = -1; + int err = 0; + + memset(&attr, 0, sizeof(attr)); + attr.file = 
SOCKET_COOKIE_PROG; + attr.prog_type = BPF_PROG_TYPE_UNSPEC; + + err = bpf_prog_load_xattr(&attr, &pobj, &prog_fd); + if (err) { + log_err("Failed to load %s", attr.file); + goto out; + } + + bpf_object__for_each_program(prog, pobj) { + prog_name = bpf_program__title(prog, /*needs_copy*/ false); + + if (strcmp(prog_name, "cgroup/connect6") == 0) { + attach_type = BPF_CGROUP_INET6_CONNECT; + } else if (strcmp(prog_name, "sockops") == 0) { + attach_type = BPF_CGROUP_SOCK_OPS; + } else { + log_err("Unexpected prog: %s", prog_name); + goto err; + } + + err = bpf_prog_attach(bpf_program__fd(prog), cgfd, attach_type, + BPF_F_ALLOW_OVERRIDE); + if (err) { + log_err("Failed to attach prog %s", prog_name); + goto out; + } + } + + server_fd = start_server(); + if (server_fd == -1) + goto err; + + client_fd = connect_to_server(server_fd); + if (client_fd == -1) + goto err; + + if (validate_map(bpf_map__next(NULL, pobj), client_fd)) + goto err; + + goto out; +err: + err = -1; +out: + close(client_fd); + close(server_fd); + bpf_object__close(pobj); + printf("%s\n", err ? "FAILED" : "PASSED"); + return err; +} + +int main(int argc, char **argv) +{ + int cgfd = -1; + int err = 0; + + if (setup_cgroup_environment()) + goto err; + + cgfd = create_and_get_cgroup(CG_PATH); + if (!cgfd) + goto err; + + if (join_cgroup(CG_PATH)) + goto err; + + if (run_test(cgfd)) + goto err; + + goto out; +err: + err = -1; +out: + close(cgfd); + cleanup_cgroup_environment(); + return err; +} diff --git a/tools/testing/selftests/bpf/test_tcpbpf.h b/tools/testing/selftests/bpf/test_tcpbpf.h index 2fe43289943c..7bcfa6207005 100644 --- a/tools/testing/selftests/bpf/test_tcpbpf.h +++ b/tools/testing/selftests/bpf/test_tcpbpf.h @@ -12,5 +12,6 @@ struct tcpbpf_globals { __u32 good_cb_test_rv; __u64 bytes_received; __u64 bytes_acked; + __u32 num_listen; }; #endif diff --git a/tools/testing/selftests/bpf/test_tcpbpf_kern.c b/tools/testing/selftests/bpf/test_tcpbpf_kern.c index 3e645ee41ed5..4b7fd540cea9 100644 --- a/tools/testing/selftests/bpf/test_tcpbpf_kern.c +++ b/tools/testing/selftests/bpf/test_tcpbpf_kern.c @@ -96,15 +96,22 @@ int bpf_testcb(struct bpf_sock_ops *skops) if (!gp) break; g = *gp; - g.total_retrans = skops->total_retrans; - g.data_segs_in = skops->data_segs_in; - g.data_segs_out = skops->data_segs_out; - g.bytes_received = skops->bytes_received; - g.bytes_acked = skops->bytes_acked; + if (skops->args[0] == BPF_TCP_LISTEN) { + g.num_listen++; + } else { + g.total_retrans = skops->total_retrans; + g.data_segs_in = skops->data_segs_in; + g.data_segs_out = skops->data_segs_out; + g.bytes_received = skops->bytes_received; + g.bytes_acked = skops->bytes_acked; + } bpf_map_update_elem(&global_map, &key, &g, BPF_ANY); } break; + case BPF_SOCK_OPS_TCP_LISTEN_CB: + bpf_sock_ops_cb_flags_set(skops, BPF_SOCK_OPS_STATE_CB_FLAG); + break; default: rv = -1; } diff --git a/tools/testing/selftests/bpf/test_tcpbpf_user.c b/tools/testing/selftests/bpf/test_tcpbpf_user.c index 84ab5163c828..a275c2971376 100644 --- a/tools/testing/selftests/bpf/test_tcpbpf_user.c +++ b/tools/testing/selftests/bpf/test_tcpbpf_user.c @@ -1,27 +1,59 @@ // SPDX-License-Identifier: GPL-2.0 +#include <inttypes.h> #include <stdio.h> #include <stdlib.h> -#include <stdio.h> #include <unistd.h> #include <errno.h> -#include <signal.h> #include <string.h> -#include <assert.h> -#include <linux/perf_event.h> -#include <linux/ptrace.h> #include <linux/bpf.h> -#include <sys/ioctl.h> -#include <sys/time.h> #include <sys/types.h> -#include <sys/stat.h> -#include 
<fcntl.h> #include <bpf/bpf.h> #include <bpf/libbpf.h> -#include "bpf_util.h" + #include "bpf_rlimit.h" -#include <linux/perf_event.h> +#include "bpf_util.h" +#include "cgroup_helpers.h" + #include "test_tcpbpf.h" +#define EXPECT_EQ(expected, actual, fmt) \ + do { \ + if ((expected) != (actual)) { \ + printf(" Value of: " #actual "\n" \ + " Actual: %" fmt "\n" \ + " Expected: %" fmt "\n", \ + (actual), (expected)); \ + goto err; \ + } \ + } while (0) + +int verify_result(const struct tcpbpf_globals *result) +{ + __u32 expected_events; + + expected_events = ((1 << BPF_SOCK_OPS_TIMEOUT_INIT) | + (1 << BPF_SOCK_OPS_RWND_INIT) | + (1 << BPF_SOCK_OPS_TCP_CONNECT_CB) | + (1 << BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB) | + (1 << BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) | + (1 << BPF_SOCK_OPS_NEEDS_ECN) | + (1 << BPF_SOCK_OPS_STATE_CB) | + (1 << BPF_SOCK_OPS_TCP_LISTEN_CB)); + + EXPECT_EQ(expected_events, result->event_map, "#" PRIx32); + EXPECT_EQ(501ULL, result->bytes_received, "llu"); + EXPECT_EQ(1002ULL, result->bytes_acked, "llu"); + EXPECT_EQ(1, result->data_segs_in, PRIu32); + EXPECT_EQ(1, result->data_segs_out, PRIu32); + EXPECT_EQ(0x80, result->bad_cb_test_rv, PRIu32); + EXPECT_EQ(0, result->good_cb_test_rv, PRIu32); + EXPECT_EQ(1, result->num_listen, PRIu32); + + return 0; +err: + return -1; +} + static int bpf_find_map(const char *test, struct bpf_object *obj, const char *name) { @@ -35,42 +67,28 @@ static int bpf_find_map(const char *test, struct bpf_object *obj, return bpf_map__fd(map); } -#define SYSTEM(CMD) \ - do { \ - if (system(CMD)) { \ - printf("system(%s) FAILS!\n", CMD); \ - } \ - } while (0) - int main(int argc, char **argv) { const char *file = "test_tcpbpf_kern.o"; struct tcpbpf_globals g = {0}; - int cg_fd, prog_fd, map_fd; - bool debug_flag = false; + const char *cg_path = "/foo"; int error = EXIT_FAILURE; struct bpf_object *obj; - char cmd[100], *dir; - struct stat buffer; + int prog_fd, map_fd; + int cg_fd = -1; __u32 key = 0; - int pid; int rv; - if (argc > 1 && strcmp(argv[1], "-d") == 0) - debug_flag = true; + if (setup_cgroup_environment()) + goto err; - dir = "/tmp/cgroupv2/foo"; + cg_fd = create_and_get_cgroup(cg_path); + if (!cg_fd) + goto err; - if (stat(dir, &buffer) != 0) { - SYSTEM("mkdir -p /tmp/cgroupv2"); - SYSTEM("mount -t cgroup2 none /tmp/cgroupv2"); - SYSTEM("mkdir -p /tmp/cgroupv2/foo"); - } - pid = (int) getpid(); - sprintf(cmd, "echo %d >> /tmp/cgroupv2/foo/cgroup.procs", pid); - SYSTEM(cmd); + if (join_cgroup(cg_path)) + goto err; - cg_fd = open(dir, O_DIRECTORY, O_RDONLY); if (bpf_prog_load(file, BPF_PROG_TYPE_SOCK_OPS, &obj, &prog_fd)) { printf("FAILED: load_bpf_file failed for: %s\n", file); goto err; @@ -83,7 +101,10 @@ int main(int argc, char **argv) goto err; } - SYSTEM("./tcp_server.py"); + if (system("./tcp_server.py")) { + printf("FAILED: TCP server\n"); + goto err; + } map_fd = bpf_find_map(__func__, obj, "global_map"); if (map_fd < 0) @@ -95,34 +116,16 @@ int main(int argc, char **argv) goto err; } - if (g.bytes_received != 501 || g.bytes_acked != 1002 || - g.data_segs_in != 1 || g.data_segs_out != 1 || - (g.event_map ^ 0x47e) != 0 || g.bad_cb_test_rv != 0x80 || - g.good_cb_test_rv != 0) { + if (verify_result(&g)) { printf("FAILED: Wrong stats\n"); - if (debug_flag) { - printf("\n"); - printf("bytes_received: %d (expecting 501)\n", - (int)g.bytes_received); - printf("bytes_acked: %d (expecting 1002)\n", - (int)g.bytes_acked); - printf("data_segs_in: %d (expecting 1)\n", - g.data_segs_in); - printf("data_segs_out: %d (expecting 1)\n", - 
g.data_segs_out); - printf("event_map: 0x%x (at least 0x47e)\n", - g.event_map); - printf("bad_cb_test_rv: 0x%x (expecting 0x80)\n", - g.bad_cb_test_rv); - printf("good_cb_test_rv:0x%x (expecting 0)\n", - g.good_cb_test_rv); - } goto err; } + printf("PASSED!\n"); error = 0; err: bpf_prog_detach(cg_fd, BPF_CGROUP_SOCK_OPS); + close(cg_fd); + cleanup_cgroup_environment(); return error; - } diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index 41106d9d5cc7..67c412d19c09 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -42,15 +42,12 @@ #endif #include "bpf_rlimit.h" #include "bpf_rand.h" +#include "bpf_util.h" #include "../../../include/linux/filter.h" -#ifndef ARRAY_SIZE -# define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) -#endif - #define MAX_INSNS BPF_MAXINSNS #define MAX_FIXUPS 8 -#define MAX_NR_MAPS 7 +#define MAX_NR_MAPS 8 #define POINTER_VALUE 0xcafe4all #define TEST_DATA_LEN 64 @@ -70,6 +67,7 @@ struct bpf_test { int fixup_prog1[MAX_FIXUPS]; int fixup_prog2[MAX_FIXUPS]; int fixup_map_in_map[MAX_FIXUPS]; + int fixup_cgroup_storage[MAX_FIXUPS]; const char *errstr; const char *errstr_unpriv; uint32_t retval; @@ -4631,6 +4629,121 @@ static struct bpf_test tests[] = { .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, }, { + "valid cgroup storage access", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), + BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .fixup_cgroup_storage = { 1 }, + .result = ACCEPT, + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + }, + { + "invalid cgroup storage access 1", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), + BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .fixup_map1 = { 1 }, + .result = REJECT, + .errstr = "cannot pass map_type 1 into func bpf_get_local_storage", + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + }, + { + "invalid cgroup storage access 2", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_LD_MAP_FD(BPF_REG_1, 1), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .result = REJECT, + .errstr = "fd 1 is not pointing to valid bpf_map", + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + }, + { + "invalid per-cgroup storage access 3", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 256), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .fixup_cgroup_storage = { 1 }, + .result = REJECT, + .errstr = "invalid access to map value, value_size=64 off=256 size=4", + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + }, + { + "invalid cgroup storage access 4", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -2), + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), + BPF_EXIT_INSN(), + }, + .fixup_cgroup_storage = { 1 }, + .result 
= REJECT, + .errstr = "invalid access to map value, value_size=64 off=-2 size=4", + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + }, + { + "invalid cgroup storage access 5", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 7), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), + BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .fixup_cgroup_storage = { 1 }, + .result = REJECT, + .errstr = "get_local_storage() doesn't support non-zero flags", + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + }, + { + "invalid cgroup storage access 6", + .insns = { + BPF_MOV64_REG(BPF_REG_2, BPF_REG_1), + BPF_LD_MAP_FD(BPF_REG_1, 0), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, + BPF_FUNC_get_local_storage), + BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), + BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), + BPF_EXIT_INSN(), + }, + .fixup_cgroup_storage = { 1 }, + .result = REJECT, + .errstr = "get_local_storage() doesn't support non-zero flags", + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + }, + { "multiple registers share map_lookup_elem result", .insns = { BPF_MOV64_IMM(BPF_REG_1, 10), @@ -6997,7 +7110,7 @@ static struct bpf_test tests[] = { BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, .fixup_map_in_map = { 3 }, @@ -7020,7 +7133,7 @@ static struct bpf_test tests[] = { BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, .fixup_map_in_map = { 3 }, @@ -7042,7 +7155,7 @@ static struct bpf_test tests[] = { BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, .fixup_map_in_map = { 3 }, @@ -12372,6 +12485,32 @@ static struct bpf_test tests[] = { .result = REJECT, .errstr = "variable ctx access var_off=(0x0; 0x4)", }, + { + "mov64 src == dst", + .insns = { + BPF_MOV64_IMM(BPF_REG_2, 0), + BPF_MOV64_REG(BPF_REG_2, BPF_REG_2), + // Check bounds are OK + BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_SCHED_CLS, + .result = ACCEPT, + }, + { + "mov64 src != dst", + .insns = { + BPF_MOV64_IMM(BPF_REG_3, 0), + BPF_MOV64_REG(BPF_REG_2, BPF_REG_3), + // Check bounds are OK + BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_SCHED_CLS, + .result = ACCEPT, + }, }; static int probe_filter_length(const struct bpf_insn *fp) @@ -12476,6 +12615,19 @@ static int create_map_in_map(void) return outer_map_fd; } +static int create_cgroup_storage(void) +{ + int fd; + + fd = bpf_create_map(BPF_MAP_TYPE_CGROUP_STORAGE, + sizeof(struct bpf_cgroup_storage_key), + TEST_DATA_LEN, 0, 0); + if (fd < 0) + printf("Failed to create array '%s'!\n", strerror(errno)); + + return fd; +} + static char bpf_vlog[UINT_MAX >> 8]; static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog, @@ -12488,6 +12640,7 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog, int *fixup_prog1 = test->fixup_prog1; int *fixup_prog2 = test->fixup_prog2; int *fixup_map_in_map = test->fixup_map_in_map; + int 
*fixup_cgroup_storage = test->fixup_cgroup_storage; if (test->fill_helper) test->fill_helper(test); @@ -12555,6 +12708,14 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog, fixup_map_in_map++; } while (*fixup_map_in_map); } + + if (*fixup_cgroup_storage) { + map_fds[7] = create_cgroup_storage(); + do { + prog[*fixup_cgroup_storage].imm = map_fds[7]; + fixup_cgroup_storage++; + } while (*fixup_cgroup_storage); + } } static void do_test_single(struct bpf_test *test, bool unpriv, diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c index 3868dcb63420..cabe2a3a3b30 100644 --- a/tools/testing/selftests/bpf/trace_helpers.c +++ b/tools/testing/selftests/bpf/trace_helpers.c @@ -88,7 +88,7 @@ static int page_size; static int page_cnt = 8; static struct perf_event_mmap_page *header; -int perf_event_mmap(int fd) +int perf_event_mmap_header(int fd, struct perf_event_mmap_page **header) { void *base; int mmap_size; @@ -102,10 +102,15 @@ int perf_event_mmap(int fd) return -1; } - header = base; + *header = base; return 0; } +int perf_event_mmap(int fd) +{ + return perf_event_mmap_header(fd, &header); +} + static int perf_event_poll(int fd) { struct pollfd pfd = { .fd = fd, .events = POLLIN }; @@ -163,3 +168,42 @@ int perf_event_poller(int fd, perf_event_print_fn output_fn) return ret; } + +int perf_event_poller_multi(int *fds, struct perf_event_mmap_page **headers, + int num_fds, perf_event_print_fn output_fn) +{ + enum bpf_perf_event_ret ret; + struct pollfd *pfds; + void *buf = NULL; + size_t len = 0; + int i; + + pfds = calloc(num_fds, sizeof(*pfds)); + if (!pfds) + return LIBBPF_PERF_EVENT_ERROR; + + for (i = 0; i < num_fds; i++) { + pfds[i].fd = fds[i]; + pfds[i].events = POLLIN; + } + + for (;;) { + poll(pfds, num_fds, 1000); + for (i = 0; i < num_fds; i++) { + if (!pfds[i].revents) + continue; + + ret = bpf_perf_event_read_simple(headers[i], + page_cnt * page_size, + page_size, &buf, &len, + bpf_perf_event_print, + output_fn); + if (ret != LIBBPF_PERF_EVENT_CONT) + break; + } + } + free(buf); + free(pfds); + + return ret; +} diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h index 3b4bcf7f5084..18924f23db1b 100644 --- a/tools/testing/selftests/bpf/trace_helpers.h +++ b/tools/testing/selftests/bpf/trace_helpers.h @@ -3,6 +3,7 @@ #define __TRACE_HELPER_H #include <libbpf.h> +#include <linux/perf_event.h> struct ksym { long addr; @@ -16,6 +17,9 @@ long ksym_get_addr(const char *name); typedef enum bpf_perf_event_ret (*perf_event_print_fn)(void *data, int size); int perf_event_mmap(int fd); +int perf_event_mmap_header(int fd, struct perf_event_mmap_page **header); /* return LIBBPF_PERF_EVENT_DONE or LIBBPF_PERF_EVENT_ERROR */ int perf_event_poller(int fd, perf_event_print_fn output_fn); +int perf_event_poller_multi(int *fds, struct perf_event_mmap_page **headers, + int num_fds, perf_event_print_fn output_fn); #endif diff --git a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre.sh b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre.sh new file mode 100755 index 000000000000..76f1ab4898d9 --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre.sh @@ -0,0 +1,217 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# This test uses standard topology for testing gretap. See +# ../../../net/forwarding/mirror_gre_topo_lib.sh for more details. +# +# Test offloading various features of offloading gretap mirrors specific to +# mlxsw. 
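# The script relies on the NETIFS array that lib.sh (sourced just below)
# reads from a forwarding.config file; a template ships as
# forwarding.config.sample next to the library. A minimal sketch for this
# six-port topology follows; the interface names are placeholders only,
# with each hN port assumed to be cabled to its swpN counterpart:
#
#	NETIFS[p1]=eth1		# $h1
#	NETIFS[p2]=eth2		# $swp1
#	NETIFS[p3]=eth3		# $swp2
#	NETIFS[p4]=eth4		# $h2
#	NETIFS[p5]=eth5		# $swp3
#	NETIFS[p6]=eth6		# $h3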
+ +lib_dir=$(dirname $0)/../../../net/forwarding + +NUM_NETIFS=6 +source $lib_dir/lib.sh +source $lib_dir/mirror_lib.sh +source $lib_dir/mirror_gre_lib.sh +source $lib_dir/mirror_gre_topo_lib.sh + +setup_keyful() +{ + tunnel_create gt6-key ip6gretap 2001:db8:3::1 2001:db8:3::2 \ + ttl 100 tos inherit allow-localremote \ + key 1234 + + tunnel_create h3-gt6-key ip6gretap 2001:db8:3::2 2001:db8:3::1 \ + key 1234 + ip link set h3-gt6-key vrf v$h3 + matchall_sink_create h3-gt6-key + + ip address add dev $swp3 2001:db8:3::1/64 + ip address add dev $h3 2001:db8:3::2/64 +} + +cleanup_keyful() +{ + ip address del dev $h3 2001:db8:3::2/64 + ip address del dev $swp3 2001:db8:3::1/64 + + tunnel_destroy h3-gt6-key + tunnel_destroy gt6-key +} + +setup_soft() +{ + # Set up a topology for testing underlay routes that point at an + # unsupported soft device. + + tunnel_create gt6-soft ip6gretap 2001:db8:4::1 2001:db8:4::2 \ + ttl 100 tos inherit allow-localremote + + tunnel_create h3-gt6-soft ip6gretap 2001:db8:4::2 2001:db8:4::1 + ip link set h3-gt6-soft vrf v$h3 + matchall_sink_create h3-gt6-soft + + ip link add name v1 type veth peer name v2 + ip link set dev v1 up + ip address add dev v1 2001:db8:4::1/64 + + ip link set dev v2 vrf v$h3 + ip link set dev v2 up + ip address add dev v2 2001:db8:4::2/64 +} + +cleanup_soft() +{ + ip link del dev v1 + + tunnel_destroy h3-gt6-soft + tunnel_destroy gt6-soft +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + swp3=${NETIFS[p5]} + h3=${NETIFS[p6]} + + vrf_prepare + mirror_gre_topo_create + + ip address add dev $swp3 2001:db8:2::1/64 + ip address add dev $h3 2001:db8:2::2/64 + + ip address add dev $swp3 192.0.2.129/28 + ip address add dev $h3 192.0.2.130/28 + + setup_keyful + setup_soft +} + +cleanup() +{ + pre_cleanup + + cleanup_soft + cleanup_keyful + + ip address del dev $h3 2001:db8:2::2/64 + ip address del dev $swp3 2001:db8:2::1/64 + + ip address del dev $h3 192.0.2.130/28 + ip address del dev $swp3 192.0.2.129/28 + + mirror_gre_topo_destroy + vrf_cleanup +} + +test_span_gre_ttl_inherit() +{ + local tundev=$1; shift + local type=$1; shift + local what=$1; shift + + RET=0 + + ip link set dev $tundev type $type ttl inherit + mirror_install $swp1 ingress $tundev "matchall $tcflags" + fail_test_span_gre_dir $tundev ingress + + ip link set dev $tundev type $type ttl 100 + + quick_test_span_gre_dir $tundev ingress + mirror_uninstall $swp1 ingress + + log_test "$what: no offload on TTL of inherit ($tcflags)" +} + +test_span_gre_tos_fixed() +{ + local tundev=$1; shift + local type=$1; shift + local what=$1; shift + + RET=0 + + ip link set dev $tundev type $type tos 0x10 + mirror_install $swp1 ingress $tundev "matchall $tcflags" + fail_test_span_gre_dir $tundev ingress + + ip link set dev $tundev type $type tos inherit + quick_test_span_gre_dir $tundev ingress + mirror_uninstall $swp1 ingress + + log_test "$what: no offload on a fixed TOS ($tcflags)" +} + +test_span_failable() +{ + local should_fail=$1; shift + local tundev=$1; shift + local what=$1; shift + + RET=0 + + mirror_install $swp1 ingress $tundev "matchall $tcflags" + if ((should_fail)); then + fail_test_span_gre_dir $tundev ingress + else + quick_test_span_gre_dir $tundev ingress + fi + mirror_uninstall $swp1 ingress + + log_test "$what: should_fail=$should_fail ($tcflags)" +} + +test_failable() +{ + local should_fail=$1; shift + + test_span_failable $should_fail gt6-key "mirror to keyful gretap" + test_span_failable $should_fail gt6-soft "mirror 
to gretap w/ soft underlay" +} + +test_sw() +{ + slow_path_trap_install $swp1 ingress + slow_path_trap_install $swp1 egress + + test_failable 0 + + slow_path_trap_uninstall $swp1 egress + slow_path_trap_uninstall $swp1 ingress +} + +test_hw() +{ + test_failable 1 + + test_span_gre_tos_fixed gt4 gretap "mirror to gretap" + test_span_gre_tos_fixed gt6 ip6gretap "mirror to ip6gretap" + + test_span_gre_ttl_inherit gt4 gretap "mirror to gretap" + test_span_gre_ttl_inherit gt6 ip6gretap "mirror to ip6gretap" +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +if ! tc_offload_check; then + check_err 1 "Could not test offloaded functionality" + log_test "mlxsw-specific tests for mirror to gretap" + exit +fi + +tcflags="skip_hw" +test_sw + +tcflags="skip_sw" +test_hw + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh new file mode 100644 index 000000000000..6f3a70df63bc --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/mirror_gre_scale.sh @@ -0,0 +1,197 @@ +# SPDX-License-Identifier: GPL-2.0 + +# Test offloading a number of mirrors-to-gretap. The test creates a number of +# tunnels. Then it adds one flower mirror for each of the tunnels, matching a +# given host IP. Then it generates traffic at each of the host IPs and checks +# that the traffic has been mirrored at the appropriate tunnel. +# +# +--------------------------+ +--------------------------+ +# | H1 | | H2 | +# | + $h1 | | $h2 + | +# | | 2001:db8:1:X::1/64 | | 2001:db8:1:X::2/64 | | +# +-----|--------------------+ +--------------------|-----+ +# | | +# +-----|-------------------------------------------------------------|-----+ +# | SW o--> mirrors | | +# | +---|-------------------------------------------------------------|---+ | +# | | + $swp1 BR $swp2 + | | +# | +---------------------------------------------------------------------+ | +# | | +# | + $swp3 + gt6-<X> (ip6gretap) | +# | | 2001:db8:2:X::1/64 : loc=2001:db8:2:X::1 | +# | | : rem=2001:db8:2:X::2 | +# | | : ttl=100 | +# | | : tos=inherit | +# | | : | +# +-----|--------------------------------:----------------------------------+ +# | : +# +-----|--------------------------------:----------------------------------+ +# | H3 + $h3 + h3-gt6-<X> (ip6gretap) | +# | 2001:db8:2:X::2/64 loc=2001:db8:2:X::2 | +# | rem=2001:db8:2:X::1 | +# | ttl=100 | +# | tos=inherit | +# | | +# +-------------------------------------------------------------------------+ + +source ../../../../net/forwarding/mirror_lib.sh + +MIRROR_NUM_NETIFS=6 + +mirror_gre_ipv6_addr() +{ + local net=$1; shift + local num=$1; shift + + printf "2001:db8:%x:%x" $net $num +} + +mirror_gre_tunnels_create() +{ + local count=$1; shift + local should_fail=$1; shift + + MIRROR_GRE_BATCH_FILE="$(mktemp)" + for ((i=0; i < count; ++i)); do + local match_dip=$(mirror_gre_ipv6_addr 1 $i)::2 + local htun=h3-gt6-$i + local tun=gt6-$i + + ((mirror_gre_tunnels++)) + + ip address add dev $h1 $(mirror_gre_ipv6_addr 1 $i)::1/64 + ip address add dev $h2 $(mirror_gre_ipv6_addr 1 $i)::2/64 + + ip address add dev $swp3 $(mirror_gre_ipv6_addr 2 $i)::1/64 + ip address add dev $h3 $(mirror_gre_ipv6_addr 2 $i)::2/64 + + tunnel_create $tun ip6gretap \ + $(mirror_gre_ipv6_addr 2 $i)::1 \ + $(mirror_gre_ipv6_addr 2 $i)::2 \ + ttl 100 tos inherit allow-localremote + + tunnel_create $htun ip6gretap \ + $(mirror_gre_ipv6_addr 2 $i)::2 \ + $(mirror_gre_ipv6_addr 2 $i)::1 + ip link set $htun vrf v$h3 + matchall_sink_create $htun + + cat 
>> $MIRROR_GRE_BATCH_FILE <<-EOF + filter add dev $swp1 ingress pref 1000 \ + protocol ipv6 \ + flower $tcflags dst_ip $match_dip \ + action mirred egress mirror dev $tun + EOF + done + + tc -b $MIRROR_GRE_BATCH_FILE + check_err_fail $should_fail $? "Mirror rule insertion" +} + +mirror_gre_tunnels_destroy() +{ + local count=$1; shift + + for ((i=0; i < count; ++i)); do + local htun=h3-gt6-$i + local tun=gt6-$i + + ip address del dev $h3 $(mirror_gre_ipv6_addr 2 $i)::2/64 + ip address del dev $swp3 $(mirror_gre_ipv6_addr 2 $i)::1/64 + + ip address del dev $h2 $(mirror_gre_ipv6_addr 1 $i)::2/64 + ip address del dev $h1 $(mirror_gre_ipv6_addr 1 $i)::1/64 + + tunnel_destroy $htun + tunnel_destroy $tun + done +} + +__mirror_gre_test() +{ + local count=$1; shift + local should_fail=$1; shift + + mirror_gre_tunnels_create $count $should_fail + if ((should_fail)); then + return + fi + + sleep 5 + + for ((i = 0; i < count; ++i)); do + local dip=$(mirror_gre_ipv6_addr 1 $i)::2 + local htun=h3-gt6-$i + local message + + icmp6_capture_install $htun + mirror_test v$h1 "" $dip $htun 100 10 + icmp6_capture_uninstall $htun + done +} + +mirror_gre_test() +{ + local count=$1; shift + local should_fail=$1; shift + + if ! tc_offload_check $TC_FLOWER_NUM_NETIFS; then + check_err 1 "Could not test offloaded functionality" + return + fi + + tcflags="skip_sw" + __mirror_gre_test $count $should_fail +} + +mirror_gre_setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + swp3=${NETIFS[p5]} + h3=${NETIFS[p6]} + + mirror_gre_tunnels=0 + + vrf_prepare + + simple_if_init $h1 + simple_if_init $h2 + simple_if_init $h3 + + ip link add name br1 type bridge vlan_filtering 1 + ip link set dev br1 up + + ip link set dev $swp1 master br1 + ip link set dev $swp1 up + tc qdisc add dev $swp1 clsact + + ip link set dev $swp2 master br1 + ip link set dev $swp2 up + + ip link set dev $swp3 up +} + +mirror_gre_cleanup() +{ + mirror_gre_tunnels_destroy $mirror_gre_tunnels + + ip link set dev $swp3 down + + ip link set dev $swp2 down + + tc qdisc del dev $swp1 clsact + ip link set dev $swp1 down + + ip link del dev br1 + + simple_if_fini $h3 + simple_if_fini $h2 + simple_if_fini $h1 + + vrf_cleanup +} diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh new file mode 100755 index 000000000000..1ca631d5aaba --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_bridge.sh @@ -0,0 +1,189 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test for DSCP prioritization and rewrite. Packets ingress $swp1 with a DSCP +# tag and are prioritized according to the map at $swp1. They egress $swp2 and +# the DSCP value is updated to match the map at that interface. The updated DSCP +# tag is verified at $h2. +# +# ICMP responses are produced with the same DSCP tag that arrived at $h2. They +# go through prioritization at $swp2 and DSCP retagging at $swp1. The tag is +# verified at $h1--it should match the original tag. +# +# +----------------------+ +----------------------+ +# | H1 | | H2 | +# | + $h1 | | $h2 + | +# | | 192.0.2.1/28 | | 192.0.2.2/28 | | +# +----|-----------------+ +----------------|-----+ +# | | +# +----|----------------------------------------------------------------|-----+ +# | SW | | | +# | +-|----------------------------------------------------------------|-+ | +# | | + $swp1 BR $swp2 + | | +# | | APP=0,5,10 .. 7,5,17 APP=0,5,20 .. 
7,5,27 | | +# | +--------------------------------------------------------------------+ | +# +---------------------------------------------------------------------------+ + +ALL_TESTS=" + ping_ipv4 + test_dscp +" + +lib_dir=$(dirname $0)/../../../net/forwarding + +NUM_NETIFS=4 +source $lib_dir/lib.sh + +h1_create() +{ + local dscp; + + simple_if_init $h1 192.0.2.1/28 + tc qdisc add dev $h1 clsact + dscp_capture_install $h1 10 +} + +h1_destroy() +{ + dscp_capture_uninstall $h1 10 + tc qdisc del dev $h1 clsact + simple_if_fini $h1 192.0.2.1/28 +} + +h2_create() +{ + simple_if_init $h2 192.0.2.2/28 + tc qdisc add dev $h2 clsact + dscp_capture_install $h2 20 +} + +h2_destroy() +{ + dscp_capture_uninstall $h2 20 + tc qdisc del dev $h2 clsact + simple_if_fini $h2 192.0.2.2/28 +} + +dscp_map() +{ + local base=$1; shift + + for prio in {0..7}; do + echo app=$prio,5,$((base + prio)) + done +} + +switch_create() +{ + ip link add name br1 type bridge vlan_filtering 1 + ip link set dev br1 up + ip link set dev $swp1 master br1 + ip link set dev $swp1 up + ip link set dev $swp2 master br1 + ip link set dev $swp2 up + + lldptool -T -i $swp1 -V APP $(dscp_map 10) >/dev/null + lldptool -T -i $swp2 -V APP $(dscp_map 20) >/dev/null + lldpad_app_wait_set $swp1 + lldpad_app_wait_set $swp2 +} + +switch_destroy() +{ + lldptool -T -i $swp2 -V APP -d $(dscp_map 20) >/dev/null + lldptool -T -i $swp1 -V APP -d $(dscp_map 10) >/dev/null + lldpad_app_wait_del + + ip link set dev $swp2 nomaster + ip link set dev $swp1 nomaster + ip link del dev br1 +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + vrf_prepare + + h1_create + h2_create + switch_create +} + +cleanup() +{ + pre_cleanup + + switch_destroy + h2_destroy + h1_destroy + + vrf_cleanup +} + +ping_ipv4() +{ + ping_test $h1 192.0.2.2 +} + +dscp_ping_test() +{ + local vrf_name=$1; shift + local sip=$1; shift + local dip=$1; shift + local prio=$1; shift + local dev_10=$1; shift + local dev_20=$1; shift + + local dscp_10=$(((prio + 10) << 2)) + local dscp_20=$(((prio + 20) << 2)) + + RET=0 + + local -A t0s + eval "t0s=($(dscp_fetch_stats $dev_10 10) + $(dscp_fetch_stats $dev_20 20))" + + ip vrf exec $vrf_name \ + ${PING} -Q $dscp_10 ${sip:+-I $sip} $dip \ + -c 10 -i 0.1 -w 2 &> /dev/null + + local -A t1s + eval "t1s=($(dscp_fetch_stats $dev_10 10) + $(dscp_fetch_stats $dev_20 20))" + + for key in ${!t0s[@]}; do + local expect + if ((key == prio+10 || key == prio+20)); then + expect=10 + else + expect=0 + fi + + local delta=$((t1s[$key] - t0s[$key])) + ((expect == delta)) + check_err $? "DSCP $key: Expected to capture $expect packets, got $delta." + done + + log_test "DSCP rewrite: $dscp_10-(prio $prio)-$dscp_20" +} + +test_dscp() +{ + for prio in {0..7}; do + dscp_ping_test v$h1 192.0.2.1 192.0.2.2 $prio $h1 $h2 + done +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_router.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_router.sh new file mode 100755 index 000000000000..281d90766e12 --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/qos_dscp_router.sh @@ -0,0 +1,233 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test for DSCP prioritization in the router. +# +# With ip_forward_update_priority disabled, the packets are expected to keep +# their DSCP (which in this test uses only values 0..7) intact as they are +# forwarded by the switch. That is verified at $h2. 
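# One detail worth keeping in mind for dscp_ping_test() below: ping's -Q
# option takes the whole TOS / traffic-class byte, while DSCP occupies its
# upper six bits, which is why the DSCP values are shifted left by two
# before being passed to $PING. A quick illustration (plain bash, not taken
# from the original script):
#
#	for dscp in {0..7}; do
#		printf 'DSCP %d is sent as -Q %d\n' $dscp $((dscp << 2))
#	done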
ICMP responses are formed +# with the same DSCP as the requests, and likewise pass through the switch +# intact, which is verified at $h1. +# +# With ip_forward_update_priority enabled, router reprioritizes the packets +# according to the table in reprioritize(). Thus, say, DSCP 7 maps to priority +# 4, which on egress maps back to DSCP 4. The response packet then gets +# reprioritized to 6, getting DSCP 6 on egress. +# +# +----------------------+ +----------------------+ +# | H1 | | H2 | +# | + $h1 | | $h2 + | +# | | 192.0.2.1/28 | | 192.0.2.18/28 | | +# +----|-----------------+ +----------------|-----+ +# | | +# +----|----------------------------------------------------------------|-----+ +# | SW | | | +# | + $swp1 $swp2 + | +# | 192.0.2.2/28 192.0.2.17/28 | +# | APP=0,5,0 .. 7,5,7 APP=0,5,0 .. 7,5,7 | +# +---------------------------------------------------------------------------+ + +ALL_TESTS=" + ping_ipv4 + test_update + test_no_update +" + +lib_dir=$(dirname $0)/../../../net/forwarding + +NUM_NETIFS=4 +source $lib_dir/lib.sh + +reprioritize() +{ + local in=$1; shift + + # This is based on rt_tos2priority in include/net/route.h. Assuming 1:1 + # mapping between priorities and TOS, it yields a new priority for a + # packet with ingress priority of $in. + local -a reprio=(0 0 2 2 6 6 4 4) + + echo ${reprio[$in]} +} + +h1_create() +{ + local dscp; + + simple_if_init $h1 192.0.2.1/28 + tc qdisc add dev $h1 clsact + dscp_capture_install $h1 0 + ip route add vrf v$h1 192.0.2.16/28 via 192.0.2.2 +} + +h1_destroy() +{ + ip route del vrf v$h1 192.0.2.16/28 via 192.0.2.2 + dscp_capture_uninstall $h1 0 + tc qdisc del dev $h1 clsact + simple_if_fini $h1 192.0.2.1/28 +} + +h2_create() +{ + simple_if_init $h2 192.0.2.18/28 + tc qdisc add dev $h2 clsact + dscp_capture_install $h2 0 + ip route add vrf v$h2 192.0.2.0/28 via 192.0.2.17 +} + +h2_destroy() +{ + ip route del vrf v$h2 192.0.2.0/28 via 192.0.2.17 + dscp_capture_uninstall $h2 0 + tc qdisc del dev $h2 clsact + simple_if_fini $h2 192.0.2.18/28 +} + +dscp_map() +{ + local base=$1; shift + + for prio in {0..7}; do + echo app=$prio,5,$((base + prio)) + done +} + +switch_create() +{ + simple_if_init $swp1 192.0.2.2/28 + __simple_if_init $swp2 v$swp1 192.0.2.17/28 + + lldptool -T -i $swp1 -V APP $(dscp_map 0) >/dev/null + lldptool -T -i $swp2 -V APP $(dscp_map 0) >/dev/null + lldpad_app_wait_set $swp1 + lldpad_app_wait_set $swp2 +} + +switch_destroy() +{ + lldptool -T -i $swp2 -V APP -d $(dscp_map 0) >/dev/null + lldptool -T -i $swp1 -V APP -d $(dscp_map 0) >/dev/null + lldpad_app_wait_del + + __simple_if_fini $swp2 192.0.2.17/28 + simple_if_fini $swp1 192.0.2.2/28 +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + vrf_prepare + + sysctl_set net.ipv4.ip_forward_update_priority 1 + h1_create + h2_create + switch_create +} + +cleanup() +{ + pre_cleanup + + switch_destroy + h2_destroy + h1_destroy + sysctl_restore net.ipv4.ip_forward_update_priority + + vrf_cleanup +} + +ping_ipv4() +{ + ping_test $h1 192.0.2.18 +} + +dscp_ping_test() +{ + local vrf_name=$1; shift + local sip=$1; shift + local dip=$1; shift + local prio=$1; shift + local reprio=$1; shift + local dev1=$1; shift + local dev2=$1; shift + + local prio2=$($reprio $prio) # ICMP Request egress prio + local prio3=$($reprio $prio2) # ICMP Response egress prio + + local dscp=$((prio << 2)) # ICMP Request ingress DSCP + local dscp2=$((prio2 << 2)) # ICMP Request egress DSCP + local dscp3=$((prio3 << 2)) # ICMP Response egress DSCP + + 
RET=0 + + eval "local -A dev1_t0s=($(dscp_fetch_stats $dev1 0))" + eval "local -A dev2_t0s=($(dscp_fetch_stats $dev2 0))" + + ip vrf exec $vrf_name \ + ${PING} -Q $dscp ${sip:+-I $sip} $dip \ + -c 10 -i 0.1 -w 2 &> /dev/null + + eval "local -A dev1_t1s=($(dscp_fetch_stats $dev1 0))" + eval "local -A dev2_t1s=($(dscp_fetch_stats $dev2 0))" + + for i in {0..7}; do + local dscpi=$((i << 2)) + local expect2=0 + local expect3=0 + + if ((i == prio2)); then + expect2=10 + fi + if ((i == prio3)); then + expect3=10 + fi + + local delta=$((dev2_t1s[$i] - dev2_t0s[$i])) + ((expect2 == delta)) + check_err $? "DSCP $dscpi@$dev2: Expected to capture $expect2 packets, got $delta." + + delta=$((dev1_t1s[$i] - dev1_t0s[$i])) + ((expect3 == delta)) + check_err $? "DSCP $dscpi@$dev1: Expected to capture $expect3 packets, got $delta." + done + + log_test "DSCP rewrite: $dscp-(prio $prio2)-$dscp2-(prio $prio3)-$dscp3" +} + +__test_update() +{ + local update=$1; shift + local reprio=$1; shift + + sysctl_restore net.ipv4.ip_forward_update_priority + sysctl_set net.ipv4.ip_forward_update_priority $update + + for prio in {0..7}; do + dscp_ping_test v$h1 192.0.2.1 192.0.2.18 $prio $reprio $h1 $h2 + done +} + +test_update() +{ + __test_update 1 reprioritize +} + +test_no_update() +{ + __test_update 0 echo +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/drivers/net/mlxsw/router_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/router_scale.sh new file mode 100644 index 000000000000..d231649b4f01 --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/router_scale.sh @@ -0,0 +1,167 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +ROUTER_NUM_NETIFS=4 + +router_h1_create() +{ + simple_if_init $h1 192.0.1.1/24 + ip route add 193.0.0.0/8 via 192.0.1.2 dev $h1 +} + +router_h1_destroy() +{ + ip route del 193.0.0.0/8 via 192.0.1.2 dev $h1 + simple_if_fini $h1 192.0.1.1/24 +} + +router_h2_create() +{ + simple_if_init $h2 192.0.2.1/24 + tc qdisc add dev $h2 handle ffff: ingress +} + +router_h2_destroy() +{ + tc qdisc del dev $h2 handle ffff: ingress + simple_if_fini $h2 192.0.2.1/24 +} + +router_create() +{ + ip link set dev $rp1 up + ip link set dev $rp2 up + + ip address add 192.0.1.2/24 dev $rp1 + ip address add 192.0.2.2/24 dev $rp2 +} + +router_destroy() +{ + ip address del 192.0.2.2/24 dev $rp2 + ip address del 192.0.1.2/24 dev $rp1 + + ip link set dev $rp2 down + ip link set dev $rp1 down +} + +router_setup_prepare() +{ + h1=${NETIFS[p1]} + rp1=${NETIFS[p2]} + + rp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + h1mac=$(mac_get $h1) + rp1mac=$(mac_get $rp1) + + vrf_prepare + + router_h1_create + router_h2_create + + router_create +} + +router_offload_validate() +{ + local route_count=$1 + local offloaded_count + + offloaded_count=$(ip route | grep -o 'offload' | wc -l) + [[ $offloaded_count -ge $route_count ]] +} + +router_routes_create() +{ + local route_count=$1 + local count=0 + + ROUTE_FILE="$(mktemp)" + + for i in {0..255} + do + for j in {0..255} + do + for k in {0..255} + do + if [[ $count -eq $route_count ]]; then + break 3 + fi + + echo route add 193.${i}.${j}.${k}/32 via \ + 192.0.2.1 dev $rp2 >> $ROUTE_FILE + ((count++)) + done + done + done + + ip -b $ROUTE_FILE &> /dev/null +} + +router_routes_destroy() +{ + if [[ -v ROUTE_FILE ]]; then + rm -f $ROUTE_FILE + fi +} + +router_test() +{ + local route_count=$1 + local should_fail=$2 + local count=0 + + RET=0 + + router_routes_create $route_count + + router_offload_validate 
$route_count + check_err_fail $should_fail $? "Offload of $route_count routes" + if [[ $RET -ne 0 ]] || [[ $should_fail -eq 1 ]]; then + return + fi + + tc filter add dev $h2 ingress protocol ip pref 1 flower \ + skip_sw dst_ip 193.0.0.0/8 action drop + + for i in {0..255} + do + for j in {0..255} + do + for k in {0..255} + do + if [[ $count -eq $route_count ]]; then + break 3 + fi + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $rp1mac \ + -A 192.0.1.1 -B 193.${i}.${j}.${k} \ + -t ip -q + ((count++)) + done + done + done + + tc_check_packets "dev $h2 ingress" 1 $route_count + check_err $? "Offload mismatch" + + tc filter del dev $h2 ingress protocol ip pref 1 flower \ + skip_sw dst_ip 193.0.0.0/8 action drop + + router_routes_destroy +} + +router_cleanup() +{ + pre_cleanup + + router_routes_destroy + router_destroy + + router_h2_destroy + router_h1_destroy + + vrf_cleanup +} diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh new file mode 100755 index 000000000000..3b75180f455d --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum-2/tc_flower.sh @@ -0,0 +1,366 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# This test is for checking the A-TCAM and C-TCAM operation in Spectrum-2. +# It tries to exercise as many code paths in the eRP state machine as +# possible. + +lib_dir=$(dirname $0)/../../../../net/forwarding + +ALL_TESTS="single_mask_test identical_filters_test two_masks_test \ + multiple_masks_test ctcam_edge_cases_test" +NUM_NETIFS=2 +source $lib_dir/tc_common.sh +source $lib_dir/lib.sh + +tcflags="skip_hw" + +h1_create() +{ + simple_if_init $h1 192.0.2.1/24 198.51.100.1/24 +} + +h1_destroy() +{ + simple_if_fini $h1 192.0.2.1/24 198.51.100.1/24 +} + +h2_create() +{ + simple_if_init $h2 192.0.2.2/24 198.51.100.2/24 + tc qdisc add dev $h2 clsact +} + +h2_destroy() +{ + tc qdisc del dev $h2 clsact + simple_if_fini $h2 192.0.2.2/24 198.51.100.2/24 +} + +single_mask_test() +{ + # When only a single mask is required, the device uses the master + # mask and not the eRP table. Verify that under this mode the right + # filter is matched + + RET=0 + + tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \ + $tcflags dst_ip 192.0.2.2 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 101 1 + check_err $? "Single filter - did not match" + + tc filter add dev $h2 ingress protocol ip pref 2 handle 102 flower \ + $tcflags dst_ip 198.51.100.2 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 101 2 + check_err $? "Two filters - did not match highest priority" + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 198.51.100.1 -B 198.51.100.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 102 1 + check_err $? "Two filters - did not match lowest priority" + + tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 198.51.100.1 -B 198.51.100.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 102 2 + check_err $? "Single filter - did not match after delete" + + tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower + + log_test "single mask test ($tcflags)" +} + +identical_filters_test() +{ + # When two filters that only differ in their priority are used, + # one needs to be inserted into the C-TCAM. 
This test verifies + # that filters are correctly spilled to C-TCAM and that the right + # filter is matched + + RET=0 + + tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \ + $tcflags dst_ip 192.0.2.2 action drop + tc filter add dev $h2 ingress protocol ip pref 2 handle 102 flower \ + $tcflags dst_ip 192.0.2.2 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 101 1 + check_err $? "Did not match A-TCAM filter" + + tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 102 1 + check_err $? "Did not match C-TCAM filter after A-TCAM delete" + + tc filter add dev $h2 ingress protocol ip pref 3 handle 103 flower \ + $tcflags dst_ip 192.0.2.2 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 102 2 + check_err $? "Did not match C-TCAM filter after A-TCAM add" + + tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 103 1 + check_err $? "Did not match A-TCAM filter after C-TCAM delete" + + tc filter del dev $h2 ingress protocol ip pref 3 handle 103 flower + + log_test "identical filters test ($tcflags)" +} + +two_masks_test() +{ + # When more than one mask is required, the eRP table is used. This + # test verifies that the eRP table is correctly allocated and used + + RET=0 + + tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \ + $tcflags dst_ip 192.0.2.2 action drop + tc filter add dev $h2 ingress protocol ip pref 3 handle 103 flower \ + $tcflags dst_ip 192.0.0.0/16 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 101 1 + check_err $? "Two filters - did not match highest priority" + + tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 103 1 + check_err $? "Single filter - did not match" + + tc filter add dev $h2 ingress protocol ip pref 2 handle 102 flower \ + $tcflags dst_ip 192.0.2.0/24 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 102 1 + check_err $? "Two filters - did not match highest priority after add" + + tc filter del dev $h2 ingress protocol ip pref 3 handle 103 flower + tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower + + log_test "two masks test ($tcflags)" +} + +multiple_masks_test() +{ + # The number of masks in a region is limited. Once the maximum + # number of masks has been reached filters that require new + # masks are spilled to the C-TCAM. This test verifies that + # spillage is performed correctly and that the right filter is + # matched + + local index + + RET=0 + + NUM_MASKS=32 + BASE_INDEX=100 + + for i in $(eval echo {1..$NUM_MASKS}); do + index=$((BASE_INDEX - i)) + + tc filter add dev $h2 ingress protocol ip pref $index \ + handle $index \ + flower $tcflags dst_ip 192.0.2.2/${i} src_ip 192.0.2.1 \ + action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 \ + -B 192.0.2.2 -t ip -q + + tc_check_packets "dev $h2 ingress" $index 1 + check_err $? 
"$i filters - did not match highest priority (add)" + done + + for i in $(eval echo {$NUM_MASKS..1}); do + index=$((BASE_INDEX - i)) + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 \ + -B 192.0.2.2 -t ip -q + + tc_check_packets "dev $h2 ingress" $index 2 + check_err $? "$i filters - did not match highest priority (del)" + + tc filter del dev $h2 ingress protocol ip pref $index \ + handle $index flower + done + + log_test "multiple masks test ($tcflags)" +} + +ctcam_two_atcam_masks_test() +{ + RET=0 + + # First case: C-TCAM is disabled when there are two A-TCAM masks. + # We push a filter into the C-TCAM by using two identical filters + # as in identical_filters_test() + + # Filter goes into A-TCAM + tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \ + $tcflags dst_ip 192.0.2.2 action drop + # Filter goes into C-TCAM + tc filter add dev $h2 ingress protocol ip pref 2 handle 102 flower \ + $tcflags dst_ip 192.0.2.2 action drop + # Filter goes into A-TCAM + tc filter add dev $h2 ingress protocol ip pref 3 handle 103 flower \ + $tcflags dst_ip 192.0.2.0/24 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 101 1 + check_err $? "Did not match A-TCAM filter" + + # Delete both A-TCAM and C-TCAM filters and make sure the remaining + # A-TCAM filter still works + tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower + tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 103 1 + check_err $? "Did not match A-TCAM filter" + + tc filter del dev $h2 ingress protocol ip pref 3 handle 103 flower + + log_test "ctcam with two atcam masks test ($tcflags)" +} + +ctcam_one_atcam_mask_test() +{ + RET=0 + + # Second case: C-TCAM is disabled when there is one A-TCAM mask. + # The test is similar to identical_filters_test() + + # Filter goes into A-TCAM + tc filter add dev $h2 ingress protocol ip pref 2 handle 102 flower \ + $tcflags dst_ip 192.0.2.2 action drop + # Filter goes into C-TCAM + tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \ + $tcflags dst_ip 192.0.2.2 action drop + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 101 1 + check_err $? "Did not match C-TCAM filter" + + tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower + + $MZ $h1 -c 1 -p 64 -a $h1mac -b $h2mac -A 192.0.2.1 -B 192.0.2.2 \ + -t ip -q + + tc_check_packets "dev $h2 ingress" 102 1 + check_err $? 
"Did not match A-TCAM filter" + + tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower + + log_test "ctcam with one atcam mask test ($tcflags)" +} + +ctcam_no_atcam_masks_test() +{ + RET=0 + + # Third case: C-TCAM is disabled when there are no A-TCAM masks + # This test exercises the code path that transitions the eRP table + # to its initial state after deleting the last C-TCAM mask + + # Filter goes into A-TCAM + tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \ + $tcflags dst_ip 192.0.2.2 action drop + # Filter goes into C-TCAM + tc filter add dev $h2 ingress protocol ip pref 2 handle 102 flower \ + $tcflags dst_ip 192.0.2.2 action drop + + tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower + tc filter del dev $h2 ingress protocol ip pref 2 handle 102 flower + + log_test "ctcam with no atcam masks test ($tcflags)" +} + +ctcam_edge_cases_test() +{ + # When the C-TCAM is disabled after deleting the last C-TCAM + # mask, we want to make sure the eRP state machine is put in + # the correct state + + ctcam_two_atcam_masks_test + ctcam_one_atcam_mask_test + ctcam_no_atcam_masks_test +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + h2=${NETIFS[p2]} + h1mac=$(mac_get $h1) + h2mac=$(mac_get $h2) + + vrf_prepare + + h1_create + h2_create +} + +cleanup() +{ + pre_cleanup + + h2_destroy + h1_destroy + + vrf_cleanup +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +if ! tc_offload_check; then + check_err 1 "Could not test offloaded functionality" + log_test "mlxsw-specific tests for tc flower" + exit +else + tcflags="skip_sw" + tests_run +fi + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/devlink_lib_spectrum.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/devlink_lib_spectrum.sh new file mode 100644 index 000000000000..73035e25085d --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/devlink_lib_spectrum.sh @@ -0,0 +1,119 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +source "../../../../net/forwarding/devlink_lib.sh" + +if [ "$DEVLINK_VIDDID" != "15b3:cb84" ]; then + echo "SKIP: test is tailored for Mellanox Spectrum" + exit 1 +fi + +# Needed for returning to default +declare -A KVD_DEFAULTS + +KVD_CHILDREN="linear hash_single hash_double" +KVDL_CHILDREN="singles chunks large_chunks" + +devlink_sp_resource_minimize() +{ + local size + local i + + for i in $KVD_CHILDREN; do + size=$(devlink_resource_get kvd "$i" | jq '.["size_min"]') + devlink_resource_size_set "$size" kvd "$i" + done + + for i in $KVDL_CHILDREN; do + size=$(devlink_resource_get kvd linear "$i" | \ + jq '.["size_min"]') + devlink_resource_size_set "$size" kvd linear "$i" + done +} + +devlink_sp_size_kvd_to_default() +{ + local need_reload=0 + local i + + for i in $KVD_CHILDREN; do + local size=$(echo "${KVD_DEFAULTS[kvd_$i]}" | jq '.["size"]') + current_size=$(devlink_resource_size_get kvd "$i") + + if [ "$size" -ne "$current_size" ]; then + devlink_resource_size_set "$size" kvd "$i" + need_reload=1 + fi + done + + for i in $KVDL_CHILDREN; do + local size=$(echo "${KVD_DEFAULTS[kvd_linear_$i]}" | \ + jq '.["size"]') + current_size=$(devlink_resource_size_get kvd linear "$i") + + if [ "$size" -ne "$current_size" ]; then + devlink_resource_size_set "$size" kvd linear "$i" + need_reload=1 + fi + done + + if [ "$need_reload" -ne "0" ]; then + devlink_reload + fi +} + +devlink_sp_read_kvd_defaults() +{ + local key + local i + + KVD_DEFAULTS[kvd]=$(devlink_resource_get "kvd") + for i in $KVD_CHILDREN; do 
+ key=kvd_$i + KVD_DEFAULTS[$key]=$(devlink_resource_get kvd "$i") + done + + for i in $KVDL_CHILDREN; do + key=kvd_linear_$i + KVD_DEFAULTS[$key]=$(devlink_resource_get kvd linear "$i") + done +} + +KVD_PROFILES="default scale ipv4_max" + +devlink_sp_resource_kvd_profile_set() +{ + local profile=$1 + + case "$profile" in + scale) + devlink_resource_size_set 64000 kvd linear + devlink_resource_size_set 15616 kvd linear singles + devlink_resource_size_set 32000 kvd linear chunks + devlink_resource_size_set 16384 kvd linear large_chunks + devlink_resource_size_set 128000 kvd hash_single + devlink_resource_size_set 48000 kvd hash_double + devlink_reload + ;; + ipv4_max) + devlink_resource_size_set 64000 kvd linear + devlink_resource_size_set 15616 kvd linear singles + devlink_resource_size_set 32000 kvd linear chunks + devlink_resource_size_set 16384 kvd linear large_chunks + devlink_resource_size_set 144000 kvd hash_single + devlink_resource_size_set 32768 kvd hash_double + devlink_reload + ;; + default) + devlink_resource_size_set 98304 kvd linear + devlink_resource_size_set 16384 kvd linear singles + devlink_resource_size_set 49152 kvd linear chunks + devlink_resource_size_set 32768 kvd linear large_chunks + devlink_resource_size_set 87040 kvd hash_single + devlink_resource_size_set 60416 kvd hash_double + devlink_reload + ;; + *) + check_err 1 "Unknown profile $profile" + esac +} diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/devlink_resources.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/devlink_resources.sh new file mode 100755 index 000000000000..b1fe960e398a --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/devlink_resources.sh @@ -0,0 +1,117 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +NUM_NETIFS=1 +source devlink_lib_spectrum.sh + +setup_prepare() +{ + devlink_sp_read_kvd_defaults +} + +cleanup() +{ + pre_cleanup + devlink_sp_size_kvd_to_default +} + +trap cleanup EXIT + +setup_prepare + +profiles_test() +{ + local i + + log_info "Running profile tests" + + for i in $KVD_PROFILES; do + RET=0 + devlink_sp_resource_kvd_profile_set $i + log_test "'$i' profile" + done + + # Default is explicitly tested at end to ensure it's actually applied + RET=0 + devlink_sp_resource_kvd_profile_set "default" + log_test "'default' profile" +} + +resources_min_test() +{ + local size + local i + local j + + log_info "Running KVD-minimum tests" + + for i in $KVD_CHILDREN; do + RET=0 + size=$(devlink_resource_get kvd "$i" | jq '.["size_min"]') + devlink_resource_size_set "$size" kvd "$i" + + # In case of linear, need to minimize sub-resources as well + if [[ "$i" == "linear" ]]; then + for j in $KVDL_CHILDREN; do + devlink_resource_size_set 0 kvd linear "$j" + done + fi + + devlink_reload + devlink_sp_size_kvd_to_default + log_test "'$i' minimize [$size]" + done +} + +resources_max_test() +{ + local min_size + local size + local i + local j + + log_info "Running KVD-maximum tests" + for i in $KVD_CHILDREN; do + RET=0 + devlink_sp_resource_minimize + + # Calculate the maximum possible size for the given partition + size=$(devlink_resource_size_get kvd) + for j in $KVD_CHILDREN; do + if [ "$i" != "$j" ]; then + min_size=$(devlink_resource_get kvd "$j" | \ + jq '.["size_min"]') + size=$((size - min_size)) + fi + done + + # Test almost maximum size + devlink_resource_size_set "$((size - 128))" kvd "$i" + devlink_reload + log_test "'$i' almost maximize [$((size - 128))]" + + # Test above maximum size + devlink resource set "$DEVLINK_DEV" \ + path 
"kvd/$i" size $((size + 128)) &> /dev/null + check_fail $? "Set kvd/$i to size $((size + 128)) should fail" + log_test "'$i' Overflow rejection [$((size + 128))]" + + # Test maximum size + if [ "$i" == "hash_single" ] || [ "$i" == "hash_double" ]; then + echo "SKIP: Observed problem with exact max $i" + continue + fi + + devlink_resource_size_set "$size" kvd "$i" + devlink_reload + log_test "'$i' maximize [$size]" + + devlink_sp_size_kvd_to_default + done +} + +profiles_test +resources_min_test +resources_max_test + +exit "$RET" diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/mirror_gre_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/mirror_gre_scale.sh new file mode 100644 index 000000000000..8d2186c7c62b --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/mirror_gre_scale.sh @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: GPL-2.0 +source ../mirror_gre_scale.sh + +mirror_gre_get_target() +{ + local should_fail=$1; shift + + if ((! should_fail)); then + echo 3 + else + echo 4 + fi +} diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh new file mode 100755 index 000000000000..a0a80e1a69e8 --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh @@ -0,0 +1,55 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +NUM_NETIFS=6 +source ../../../../net/forwarding/lib.sh +source ../../../../net/forwarding/tc_common.sh +source devlink_lib_spectrum.sh + +current_test="" + +cleanup() +{ + pre_cleanup + if [ ! -z $current_test ]; then + ${current_test}_cleanup + fi + devlink_sp_size_kvd_to_default +} + +devlink_sp_read_kvd_defaults +trap cleanup EXIT + +ALL_TESTS="router tc_flower mirror_gre" +for current_test in ${TESTS:-$ALL_TESTS}; do + source ${current_test}_scale.sh + + num_netifs_var=${current_test^^}_NUM_NETIFS + num_netifs=${!num_netifs_var:-$NUM_NETIFS} + + for profile in $KVD_PROFILES; do + RET=0 + devlink_sp_resource_kvd_profile_set $profile + if [[ $RET -gt 0 ]]; then + log_test "'$current_test' [$profile] setting" + continue + fi + + for should_fail in 0 1; do + RET=0 + target=$(${current_test}_get_target "$should_fail") + ${current_test}_setup_prepare + setup_wait $num_netifs + ${current_test}_test "$target" "$should_fail" + ${current_test}_cleanup + if [[ "$should_fail" -eq 0 ]]; then + log_test "'$current_test' [$profile] $target" + else + log_test "'$current_test' [$profile] overflow $target" + fi + done + done +done +current_test="" + +exit "$RET" diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/router_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/router_scale.sh new file mode 100644 index 000000000000..21c4697d5bab --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/router_scale.sh @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: GPL-2.0 +source ../router_scale.sh + +router_get_target() +{ + local should_fail=$1 + local target + + target=$(devlink_resource_size_get kvd hash_single) + + if [[ $should_fail -eq 0 ]]; then + target=$((target * 85 / 100)) + else + target=$((target + 1)) + fi + + echo $target +} diff --git a/tools/testing/selftests/drivers/net/mlxsw/spectrum/tc_flower_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/spectrum/tc_flower_scale.sh new file mode 100644 index 000000000000..f9bfd8937765 --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/spectrum/tc_flower_scale.sh @@ -0,0 +1,19 @@ +# SPDX-License-Identifier: GPL-2.0 +source 
../tc_flower_scale.sh + +tc_flower_get_target() +{ + local should_fail=$1; shift + + # 6144 (6x1024) is the theoretical maximum. + # One bank of 512 rules is taken by the 18-byte MC router rule. + # One rule is the ACL catch-all. + # 6144 - 512 - 1 = 5631 + local target=5631 + + if ((! should_fail)); then + echo $target + else + echo $((target + 1)) + fi +} diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh new file mode 100644 index 000000000000..a6d733d2a4b4 --- /dev/null +++ b/tools/testing/selftests/drivers/net/mlxsw/tc_flower_scale.sh @@ -0,0 +1,134 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test for resource limit of offloaded flower rules. The test adds a given +# number of flower matches for different IPv6 addresses, then generates traffic, +# and ensures each was hit exactly once. This file contains functions to set up +# a testing topology and run the test, and is meant to be sourced from a test +# script that calls the testing routine with a given number of rules. + +TC_FLOWER_NUM_NETIFS=2 + +tc_flower_h1_create() +{ + simple_if_init $h1 + tc qdisc add dev $h1 clsact +} + +tc_flower_h1_destroy() +{ + tc qdisc del dev $h1 clsact + simple_if_fini $h1 +} + +tc_flower_h2_create() +{ + simple_if_init $h2 + tc qdisc add dev $h2 clsact +} + +tc_flower_h2_destroy() +{ + tc qdisc del dev $h2 clsact + simple_if_fini $h2 +} + +tc_flower_setup_prepare() +{ + h1=${NETIFS[p1]} + h2=${NETIFS[p2]} + + vrf_prepare + + tc_flower_h1_create + tc_flower_h2_create +} + +tc_flower_cleanup() +{ + pre_cleanup + + tc_flower_h2_destroy + tc_flower_h1_destroy + + vrf_cleanup + + if [[ -v TC_FLOWER_BATCH_FILE ]]; then + rm -f $TC_FLOWER_BATCH_FILE + fi +} + +tc_flower_addr() +{ + local num=$1; shift + + printf "2001:db8:1::%x" $num +} + +tc_flower_rules_create() +{ + local count=$1; shift + local should_fail=$1; shift + + TC_FLOWER_BATCH_FILE="$(mktemp)" + + for ((i = 0; i < count; ++i)); do + cat >> $TC_FLOWER_BATCH_FILE <<-EOF + filter add dev $h2 ingress \ + prot ipv6 \ + pref 1000 \ + flower $tcflags dst_ip $(tc_flower_addr $i) \ + action drop + EOF + done + + tc -b $TC_FLOWER_BATCH_FILE + check_err_fail $should_fail $? "Rule insertion" +} + +__tc_flower_test() +{ + local count=$1; shift + local should_fail=$1; shift + local last=$((count - 1)) + + tc_flower_rules_create $count $should_fail + + for ((i = 0; i < count; ++i)); do + $MZ $h1 -q -c 1 -t ip -p 20 -b bc -6 \ + -A 2001:db8:2::1 \ + -B $(tc_flower_addr $i) + done + + MISMATCHES=$( + tc -j -s filter show dev $h2 ingress | + jq -r '[ .[] | select(.kind == "flower") | .options | + values as $rule | .actions[].stats.packets | + select(. != 1) | "\(.) on \($rule.keys.dst_ip)" ] | + join(", ")' + ) + + test -z "$MISMATCHES" + check_err $? "Expected to capture 1 packet for each IP, but got $MISMATCHES" +} + +tc_flower_test() +{ + local count=$1; shift + local should_fail=$1; shift + + # We use lower 16 bits of IPv6 address for match. Also there are only 16 + # bits of rule priority space. + if ((count > 65536)); then + check_err 1 "Invalid count of $count. At most 65536 rules supported" + return + fi + + if ! 
tc_offload_check $TC_FLOWER_NUM_NETIFS; then + check_err 1 "Could not test offloaded functionality" + return + fi + + tcflags="skip_sw" + __tc_flower_test $count $should_fail +} diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore index 1a0ac3a29ec5..78b24cf76f40 100644 --- a/tools/testing/selftests/net/.gitignore +++ b/tools/testing/selftests/net/.gitignore @@ -13,3 +13,4 @@ udpgso udpgso_bench_rx udpgso_bench_tx tcp_inq +tls diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index 663e11e85727..9cca68e440a0 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -13,7 +13,7 @@ TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy TEST_GEN_FILES += tcp_mmap tcp_inq psock_snd TEST_GEN_FILES += udpgso udpgso_bench_tx udpgso_bench_rx TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa -TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict +TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls include ../lib.mk diff --git a/tools/testing/selftests/net/forwarding/README b/tools/testing/selftests/net/forwarding/README index 4a0964c42860..b8a2af8fcfb7 100644 --- a/tools/testing/selftests/net/forwarding/README +++ b/tools/testing/selftests/net/forwarding/README @@ -46,6 +46,8 @@ Guidelines for Writing Tests o Where possible, reuse an existing topology for different tests instead of recreating the same topology. +o Tests that use anything but the most trivial topologies should include + an ASCII art showing the topology. o Where possible, IPv6 and IPv4 addresses shall conform to RFC 3849 and RFC 5737, respectively. o Where possible, tests shall be written so that they can be reused by diff --git a/tools/testing/selftests/net/forwarding/bridge_port_isolation.sh b/tools/testing/selftests/net/forwarding/bridge_port_isolation.sh new file mode 100755 index 000000000000..a43b4645c4de --- /dev/null +++ b/tools/testing/selftests/net/forwarding/bridge_port_isolation.sh @@ -0,0 +1,151 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +ALL_TESTS="ping_ipv4 ping_ipv6 flooding" +NUM_NETIFS=6 +CHECK_TC="yes" +source lib.sh + +h1_create() +{ + simple_if_init $h1 192.0.2.1/24 2001:db8:1::1/64 +} + +h1_destroy() +{ + simple_if_fini $h1 192.0.2.1/24 2001:db8:1::1/64 +} + +h2_create() +{ + simple_if_init $h2 192.0.2.2/24 2001:db8:1::2/64 +} + +h2_destroy() +{ + simple_if_fini $h2 192.0.2.2/24 2001:db8:1::2/64 +} + +h3_create() +{ + simple_if_init $h3 192.0.2.3/24 2001:db8:1::3/64 +} + +h3_destroy() +{ + simple_if_fini $h3 192.0.2.3/24 2001:db8:1::3/64 +} + +switch_create() +{ + ip link add dev br0 type bridge + + ip link set dev $swp1 master br0 + ip link set dev $swp2 master br0 + ip link set dev $swp3 master br0 + + ip link set dev $swp1 type bridge_slave isolated on + check_err $? "Can't set isolation on port $swp1" + ip link set dev $swp2 type bridge_slave isolated on + check_err $? "Can't set isolation on port $swp2" + ip link set dev $swp3 type bridge_slave isolated off + check_err $? 
"Can't disable isolation on port $swp3" + + ip link set dev br0 up + ip link set dev $swp1 up + ip link set dev $swp2 up + ip link set dev $swp3 up +} + +switch_destroy() +{ + ip link set dev $swp3 down + ip link set dev $swp2 down + ip link set dev $swp1 down + + ip link del dev br0 +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + swp3=${NETIFS[p5]} + h3=${NETIFS[p6]} + + vrf_prepare + + h1_create + h2_create + h3_create + + switch_create +} + +cleanup() +{ + pre_cleanup + + switch_destroy + + h3_destroy + h2_destroy + h1_destroy + + vrf_cleanup +} + +ping_ipv4() +{ + RET=0 + ping_do $h1 192.0.2.2 + check_fail $? "Ping worked when it should not have" + + RET=0 + ping_do $h3 192.0.2.2 + check_err $? "Ping didn't work when it should have" + + log_test "Isolated port ping" +} + +ping_ipv6() +{ + RET=0 + ping6_do $h1 2001:db8:1::2 + check_fail $? "Ping6 worked when it should not have" + + RET=0 + ping6_do $h3 2001:db8:1::2 + check_err $? "Ping6 didn't work when it should have" + + log_test "Isolated port ping6" +} + +flooding() +{ + local mac=de:ad:be:ef:13:37 + local ip=192.0.2.100 + + RET=0 + flood_test_do false $mac $ip $h1 $h2 + check_err $? "Packet was flooded when it should not have been" + + RET=0 + flood_test_do true $mac $ip $h3 $h2 + check_err $? "Packet was not flooded when it should have been" + + log_test "Isolated port flooding" +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/devlink_lib.sh b/tools/testing/selftests/net/forwarding/devlink_lib.sh new file mode 100644 index 000000000000..5ab1e5f43022 --- /dev/null +++ b/tools/testing/selftests/net/forwarding/devlink_lib.sh @@ -0,0 +1,108 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +############################################################################## +# Source library + +relative_path="${BASH_SOURCE%/*}" +if [[ "$relative_path" == "${BASH_SOURCE}" ]]; then + relative_path="." +fi + +source "$relative_path/lib.sh" + +############################################################################## +# Defines + +DEVLINK_DEV=$(devlink port show | grep "${NETIFS[p1]}" | \ + grep -v "${NETIFS[p1]}[0-9]" | cut -d" " -f1 | \ + rev | cut -d"/" -f2- | rev) +if [ -z "$DEVLINK_DEV" ]; then + echo "SKIP: ${NETIFS[p1]} has no devlink device registered for it" + exit 1 +fi +if [[ "$(echo $DEVLINK_DEV | grep -c pci)" -eq 0 ]]; then + echo "SKIP: devlink device's bus is not PCI" + exit 1 +fi + +DEVLINK_VIDDID=$(lspci -s $(echo $DEVLINK_DEV | cut -d"/" -f2) \ + -n | cut -d" " -f3) + +############################################################################## +# Sanity checks + +devlink -j resource show "$DEVLINK_DEV" &> /dev/null +if [ $? 
-ne 0 ]; then + echo "SKIP: iproute2 too old, missing devlink resource support" + exit 1 +fi + +############################################################################## +# Devlink helpers + +devlink_resource_names_to_path() +{ + local resource + local path="" + + for resource in "${@}"; do + if [ "$path" == "" ]; then + path="$resource" + else + path="${path}/$resource" + fi + done + + echo "$path" +} + +devlink_resource_get() +{ + local name=$1 + local resource_name=.[][\"$DEVLINK_DEV\"] + + resource_name="$resource_name | .[] | select (.name == \"$name\")" + + shift + for resource in "${@}"; do + resource_name="${resource_name} | .[\"resources\"][] | \ + select (.name == \"$resource\")" + done + + devlink -j resource show "$DEVLINK_DEV" | jq "$resource_name" +} + +devlink_resource_size_get() +{ + local size=$(devlink_resource_get "$@" | jq '.["size_new"]') + + if [ "$size" == "null" ]; then + devlink_resource_get "$@" | jq '.["size"]' + else + echo "$size" + fi +} + +devlink_resource_size_set() +{ + local new_size=$1 + local path + + shift + path=$(devlink_resource_names_to_path "$@") + devlink resource set "$DEVLINK_DEV" path "$path" size "$new_size" + check_err $? "Failed setting path $path to size $size" +} + +devlink_reload() +{ + local still_pending + + devlink dev reload "$DEVLINK_DEV" &> /dev/null + check_err $? "Failed reload" + + still_pending=$(devlink resource show "$DEVLINK_DEV" | \ + grep -c "size_new") + check_err $still_pending "Failed reload - There are still unset sizes" +} diff --git a/tools/testing/selftests/net/forwarding/gre_multipath.sh b/tools/testing/selftests/net/forwarding/gre_multipath.sh new file mode 100755 index 000000000000..cca2baa03fb8 --- /dev/null +++ b/tools/testing/selftests/net/forwarding/gre_multipath.sh @@ -0,0 +1,253 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test traffic distribution when a wECMP route forwards traffic to two GRE +# tunnels. +# +# +-------------------------+ +# | H1 | +# | $h1 + | +# | 192.0.2.1/28 | | +# +-------------------|-----+ +# | +# +-------------------|------------------------+ +# | SW1 | | +# | $ol1 + | +# | 192.0.2.2/28 | +# | | +# | + g1a (gre) + g1b (gre) | +# | loc=192.0.2.65 loc=192.0.2.81 | +# | rem=192.0.2.66 --. rem=192.0.2.82 --. | +# | tos=inherit | tos=inherit | | +# | .------------------' | | +# | | .------------------' | +# | v v | +# | + $ul1.111 (vlan) + $ul1.222 (vlan) | +# | | 192.0.2.129/28 | 192.0.2.145/28 | +# | \ / | +# | \________________/ | +# | | | +# | + $ul1 | +# +------------|-------------------------------+ +# | +# +------------|-------------------------------+ +# | SW2 + $ul2 | +# | _______|________ | +# | / \ | +# | / \ | +# | + $ul2.111 (vlan) + $ul2.222 (vlan) | +# | ^ 192.0.2.130/28 ^ 192.0.2.146/28 | +# | | | | +# | | '------------------. | +# | '------------------. 
| | +# | + g2a (gre) | + g2b (gre) | | +# | loc=192.0.2.66 | loc=192.0.2.82 | | +# | rem=192.0.2.65 --' rem=192.0.2.81 --' | +# | tos=inherit tos=inherit | +# | | +# | $ol2 + | +# | 192.0.2.17/28 | | +# +-------------------|------------------------+ +# | +# +-------------------|-----+ +# | H2 | | +# | $h2 + | +# | 192.0.2.18/28 | +# +-------------------------+ + +ALL_TESTS=" + ping_ipv4 + multipath_ipv4 +" + +NUM_NETIFS=6 +source lib.sh + +h1_create() +{ + simple_if_init $h1 192.0.2.1/28 2001:db8:1::1/64 + ip route add vrf v$h1 192.0.2.16/28 via 192.0.2.2 +} + +h1_destroy() +{ + ip route del vrf v$h1 192.0.2.16/28 via 192.0.2.2 + simple_if_fini $h1 192.0.2.1/28 +} + +sw1_create() +{ + simple_if_init $ol1 192.0.2.2/28 + __simple_if_init $ul1 v$ol1 + vlan_create $ul1 111 v$ol1 192.0.2.129/28 + vlan_create $ul1 222 v$ol1 192.0.2.145/28 + + tunnel_create g1a gre 192.0.2.65 192.0.2.66 tos inherit dev v$ol1 + __simple_if_init g1a v$ol1 192.0.2.65/32 + ip route add vrf v$ol1 192.0.2.66/32 via 192.0.2.130 + + tunnel_create g1b gre 192.0.2.81 192.0.2.82 tos inherit dev v$ol1 + __simple_if_init g1b v$ol1 192.0.2.81/32 + ip route add vrf v$ol1 192.0.2.82/32 via 192.0.2.146 + + ip route add vrf v$ol1 192.0.2.16/28 \ + nexthop dev g1a \ + nexthop dev g1b + + tc qdisc add dev $ul1 clsact + tc filter add dev $ul1 egress pref 111 prot ipv4 \ + flower dst_ip 192.0.2.66 action pass + tc filter add dev $ul1 egress pref 222 prot ipv4 \ + flower dst_ip 192.0.2.82 action pass +} + +sw1_destroy() +{ + tc qdisc del dev $ul1 clsact + + ip route del vrf v$ol1 192.0.2.16/28 + + ip route del vrf v$ol1 192.0.2.82/32 via 192.0.2.146 + __simple_if_fini g1b 192.0.2.81/32 + tunnel_destroy g1b + + ip route del vrf v$ol1 192.0.2.66/32 via 192.0.2.130 + __simple_if_fini g1a 192.0.2.65/32 + tunnel_destroy g1a + + vlan_destroy $ul1 222 + vlan_destroy $ul1 111 + __simple_if_fini $ul1 + simple_if_fini $ol1 192.0.2.2/28 +} + +sw2_create() +{ + simple_if_init $ol2 192.0.2.17/28 + __simple_if_init $ul2 v$ol2 + vlan_create $ul2 111 v$ol2 192.0.2.130/28 + vlan_create $ul2 222 v$ol2 192.0.2.146/28 + + tunnel_create g2a gre 192.0.2.66 192.0.2.65 tos inherit dev v$ol2 + __simple_if_init g2a v$ol2 192.0.2.66/32 + ip route add vrf v$ol2 192.0.2.65/32 via 192.0.2.129 + + tunnel_create g2b gre 192.0.2.82 192.0.2.81 tos inherit dev v$ol2 + __simple_if_init g2b v$ol2 192.0.2.82/32 + ip route add vrf v$ol2 192.0.2.81/32 via 192.0.2.145 + + ip route add vrf v$ol2 192.0.2.0/28 \ + nexthop dev g2a \ + nexthop dev g2b +} + +sw2_destroy() +{ + ip route del vrf v$ol2 192.0.2.0/28 + + ip route del vrf v$ol2 192.0.2.81/32 via 192.0.2.145 + __simple_if_fini g2b 192.0.2.82/32 + tunnel_destroy g2b + + ip route del vrf v$ol2 192.0.2.65/32 via 192.0.2.129 + __simple_if_fini g2a 192.0.2.66/32 + tunnel_destroy g2a + + vlan_destroy $ul2 222 + vlan_destroy $ul2 111 + __simple_if_fini $ul2 + simple_if_fini $ol2 192.0.2.17/28 +} + +h2_create() +{ + simple_if_init $h2 192.0.2.18/28 + ip route add vrf v$h2 192.0.2.0/28 via 192.0.2.17 +} + +h2_destroy() +{ + ip route del vrf v$h2 192.0.2.0/28 via 192.0.2.17 + simple_if_fini $h2 192.0.2.18/28 +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + ol1=${NETIFS[p2]} + + ul1=${NETIFS[p3]} + ul2=${NETIFS[p4]} + + ol2=${NETIFS[p5]} + h2=${NETIFS[p6]} + + vrf_prepare + h1_create + sw1_create + sw2_create + h2_create +} + +cleanup() +{ + pre_cleanup + + h2_destroy + sw2_destroy + sw1_destroy + h1_destroy + vrf_cleanup +} + +multipath4_test() +{ + local what=$1; shift + local weight1=$1; shift + local weight2=$1; shift + + 
sysctl_set net.ipv4.fib_multipath_hash_policy 1 + ip route replace vrf v$ol1 192.0.2.16/28 \ + nexthop dev g1a weight $weight1 \ + nexthop dev g1b weight $weight2 + + local t0_111=$(tc_rule_stats_get $ul1 111 egress) + local t0_222=$(tc_rule_stats_get $ul1 222 egress) + + ip vrf exec v$h1 \ + $MZ $h1 -q -p 64 -A 192.0.2.1 -B 192.0.2.18 \ + -d 1msec -t udp "sp=1024,dp=0-32768" + + local t1_111=$(tc_rule_stats_get $ul1 111 egress) + local t1_222=$(tc_rule_stats_get $ul1 222 egress) + + local d111=$((t1_111 - t0_111)) + local d222=$((t1_222 - t0_222)) + multipath_eval "$what" $weight1 $weight2 $d111 $d222 + + ip route replace vrf v$ol1 192.0.2.16/28 \ + nexthop dev g1a \ + nexthop dev g1b + sysctl_restore net.ipv4.fib_multipath_hash_policy +} + +ping_ipv4() +{ + ping_test $h1 192.0.2.18 +} + +multipath_ipv4() +{ + log_info "Running IPv4 multipath tests" + multipath4_test "ECMP" 1 1 + multipath4_test "Weighted MP 2:1" 2 1 + multipath4_test "Weighted MP 11:45" 11 45 +} + +trap cleanup EXIT + +setup_prepare +setup_wait +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh index 7b18a53aa556..ca53b539aa2d 100644 --- a/tools/testing/selftests/net/forwarding/lib.sh +++ b/tools/testing/selftests/net/forwarding/lib.sh @@ -8,14 +8,21 @@ PING=${PING:=ping} PING6=${PING6:=ping6} MZ=${MZ:=mausezahn} +ARPING=${ARPING:=arping} +TEAMD=${TEAMD:=teamd} WAIT_TIME=${WAIT_TIME:=5} PAUSE_ON_FAIL=${PAUSE_ON_FAIL:=no} PAUSE_ON_CLEANUP=${PAUSE_ON_CLEANUP:=no} NETIF_TYPE=${NETIF_TYPE:=veth} NETIF_CREATE=${NETIF_CREATE:=yes} -if [[ -f forwarding.config ]]; then - source forwarding.config +relative_path="${BASH_SOURCE%/*}" +if [[ "$relative_path" == "${BASH_SOURCE}" ]]; then + relative_path="." +fi + +if [[ -f $relative_path/forwarding.config ]]; then + source "$relative_path/forwarding.config" fi ############################################################################## @@ -28,7 +35,10 @@ check_tc_version() echo "SKIP: iproute2 too old; tc is missing JSON support" exit 1 fi +} +check_tc_shblock_support() +{ tc filter help 2>&1 | grep block &> /dev/null if [[ $? -ne 0 ]]; then echo "SKIP: iproute2 too old; tc is missing shared block support" @@ -36,6 +46,15 @@ check_tc_version() fi } +check_tc_chain_support() +{ + tc help 2>&1|grep chain &> /dev/null + if [[ $? -ne 0 ]]; then + echo "SKIP: iproute2 too old; tc is missing chain support" + exit 1 + fi +} + if [[ "$(id -u)" -ne 0 ]]; then echo "SKIP: need root privileges" exit 0 @@ -45,15 +64,18 @@ if [[ "$CHECK_TC" = "yes" ]]; then check_tc_version fi -if [[ ! -x "$(command -v jq)" ]]; then - echo "SKIP: jq not installed" - exit 1 -fi +require_command() +{ + local cmd=$1; shift -if [[ ! -x "$(command -v $MZ)" ]]; then - echo "SKIP: $MZ not installed" - exit 1 -fi + if [[ ! -x "$(command -v "$cmd")" ]]; then + echo "SKIP: $cmd not installed" + exit 1 + fi +} + +require_command jq +require_command $MZ if [[ ! -v NUM_NETIFS ]]; then echo "SKIP: importer does not define \"NUM_NETIFS\"" @@ -151,6 +173,19 @@ check_fail() fi } +check_err_fail() +{ + local should_fail=$1; shift + local err=$1; shift + local what=$1; shift + + if ((should_fail)); then + check_fail $err "$what succeeded, but should have failed" + else + check_err $err "$what failed" + fi +} + log_test() { local test_name=$1 @@ -185,24 +220,54 @@ log_info() echo "INFO: $msg" } +setup_wait_dev() +{ + local dev=$1; shift + + while true; do + ip link show dev $dev up \ + | grep 'state UP' &> /dev/null + if [[ $? 
-ne 0 ]]; then + sleep 1 + else + break + fi + done +} + setup_wait() { - for i in $(eval echo {1..$NUM_NETIFS}); do - while true; do - ip link show dev ${NETIFS[p$i]} up \ - | grep 'state UP' &> /dev/null - if [[ $? -ne 0 ]]; then - sleep 1 - else - break - fi - done + local num_netifs=${1:-$NUM_NETIFS} + + for ((i = 1; i <= num_netifs; ++i)); do + setup_wait_dev ${NETIFS[p$i]} done # Make sure links are ready. sleep $WAIT_TIME } +lldpad_app_wait_set() +{ + local dev=$1; shift + + while lldptool -t -i $dev -V APP -c app | grep -q pending; do + echo "$dev: waiting for lldpad to push pending APP updates" + sleep 5 + done +} + +lldpad_app_wait_del() +{ + # Give lldpad a chance to push down the changes. If the device is downed + # too soon, the updates will be left pending. However, they will have + # been struck off the lldpad's DB already, so we won't be able to tell + # they are pending. Then on next test iteration this would cause + # weirdness as newly-added APP rules conflict with the old ones, + # sometimes getting stuck in an "unknown" state. + sleep 5 +} + pre_cleanup() { if [ "${PAUSE_ON_CLEANUP}" = "yes" ]; then @@ -287,6 +352,29 @@ __addr_add_del() done } +__simple_if_init() +{ + local if_name=$1; shift + local vrf_name=$1; shift + local addrs=("${@}") + + ip link set dev $if_name master $vrf_name + ip link set dev $if_name up + + __addr_add_del $if_name add "${addrs[@]}" +} + +__simple_if_fini() +{ + local if_name=$1; shift + local addrs=("${@}") + + __addr_add_del $if_name del "${addrs[@]}" + + ip link set dev $if_name down + ip link set dev $if_name nomaster +} + simple_if_init() { local if_name=$1 @@ -298,11 +386,8 @@ simple_if_init() array=("${@}") vrf_create $vrf_name - ip link set dev $if_name master $vrf_name ip link set dev $vrf_name up - ip link set dev $if_name up - - __addr_add_del $if_name add "${array[@]}" + __simple_if_init $if_name $vrf_name "${array[@]}" } simple_if_fini() @@ -315,9 +400,7 @@ simple_if_fini() vrf_name=v$if_name array=("${@}") - __addr_add_del $if_name del "${array[@]}" - - ip link set dev $if_name down + __simple_if_fini $if_name "${array[@]}" vrf_destroy $vrf_name } @@ -365,6 +448,28 @@ vlan_destroy() ip link del dev $name } +team_create() +{ + local if_name=$1; shift + local mode=$1; shift + + require_command $TEAMD + $TEAMD -t $if_name -d -c '{"runner": {"name": "'$mode'"}}' + for slave in "$@"; do + ip link set dev $slave down + ip link set dev $slave master $if_name + ip link set dev $slave up + done + ip link set dev $if_name up +} + +team_destroy() +{ + local if_name=$1; shift + + $TEAMD -t $if_name -k +} + master_name_get() { local if_name=$1 @@ -383,9 +488,10 @@ tc_rule_stats_get() { local dev=$1; shift local pref=$1; shift + local dir=$1; shift - tc -j -s filter show dev $dev ingress pref $pref | - jq '.[1].options.actions[].stats.packets' + tc -j -s filter show dev $dev ${dir:-ingress} pref $pref \ + | jq '.[1].options.actions[].stats.packets' } mac_get() @@ -437,7 +543,9 @@ forwarding_restore() tc_offload_check() { - for i in $(eval echo {1..$NUM_NETIFS}); do + local num_netifs=${1:-$NUM_NETIFS} + + for ((i = 1; i <= num_netifs; ++i)); do ethtool -k ${NETIFS[p$i]} \ | grep "hw-tc-offload: on" &> /dev/null if [[ $? -ne 0 ]]; then @@ -453,9 +561,15 @@ trap_install() local dev=$1; shift local direction=$1; shift - # For slow-path testing, we need to install a trap to get to - # slow path the packets that would otherwise be switched in HW. 
- tc filter add dev $dev $direction pref 1 flower skip_sw action trap + # Some devices may not support or need in-hardware trapping of traffic + # (e.g. the veth pairs that this library creates for non-existent + # loopbacks). Use continue instead, so that there is a filter in there + # (some tests check counters), and so that other filters are still + # processed. + tc filter add dev $dev $direction pref 1 \ + flower skip_sw action trap 2>/dev/null \ + || tc filter add dev $dev $direction pref 1 \ + flower action continue } trap_uninstall() @@ -463,11 +577,13 @@ trap_uninstall() local dev=$1; shift local direction=$1; shift - tc filter del dev $dev $direction pref 1 flower skip_sw + tc filter del dev $dev $direction pref 1 flower } slow_path_trap_install() { + # For slow-path testing, we need to install a trap to get to + # slow path the packets that would otherwise be switched in HW. if [ "${tcflags/skip_hw}" != "$tcflags" ]; then trap_install "$@" fi @@ -537,6 +653,48 @@ vlan_capture_uninstall() __vlan_capture_add_del del 100 "$@" } +__dscp_capture_add_del() +{ + local add_del=$1; shift + local dev=$1; shift + local base=$1; shift + local dscp; + + for prio in {0..7}; do + dscp=$((base + prio)) + __icmp_capture_add_del $add_del $((dscp + 100)) "" $dev \ + "skip_hw ip_tos $((dscp << 2))" + done +} + +dscp_capture_install() +{ + local dev=$1; shift + local base=$1; shift + + __dscp_capture_add_del add $dev $base +} + +dscp_capture_uninstall() +{ + local dev=$1; shift + local base=$1; shift + + __dscp_capture_add_del del $dev $base +} + +dscp_fetch_stats() +{ + local dev=$1; shift + local base=$1; shift + + for prio in {0..7}; do + local dscp=$((base + prio)) + local t=$(tc_rule_stats_get $dev $((dscp + 100))) + echo "[$dscp]=$t " + done +} + matchall_sink_create() { local dev=$1; shift @@ -557,33 +715,86 @@ tests_run() done } +multipath_eval() +{ + local desc="$1" + local weight_rp12=$2 + local weight_rp13=$3 + local packets_rp12=$4 + local packets_rp13=$5 + local weights_ratio packets_ratio diff + + RET=0 + + if [[ "$weight_rp12" -gt "$weight_rp13" ]]; then + weights_ratio=$(echo "scale=2; $weight_rp12 / $weight_rp13" \ + | bc -l) + else + weights_ratio=$(echo "scale=2; $weight_rp13 / $weight_rp12" \ + | bc -l) + fi + + if [[ "$packets_rp12" -eq "0" || "$packets_rp13" -eq "0" ]]; then + check_err 1 "Packet difference is 0" + log_test "Multipath" + log_info "Expected ratio $weights_ratio" + return + fi + + if [[ "$weight_rp12" -gt "$weight_rp13" ]]; then + packets_ratio=$(echo "scale=2; $packets_rp12 / $packets_rp13" \ + | bc -l) + else + packets_ratio=$(echo "scale=2; $packets_rp13 / $packets_rp12" \ + | bc -l) + fi + + diff=$(echo $weights_ratio - $packets_ratio | bc -l) + diff=${diff#-} + + test "$(echo "$diff / $weights_ratio > 0.15" | bc -l)" -eq 0 + check_err $? "Too large discrepancy between expected and measured ratios" + log_test "$desc" + log_info "Expected ratio $weights_ratio Measured ratio $packets_ratio" +} + ############################################################################## # Tests -ping_test() +ping_do() { local if_name=$1 local dip=$2 local vrf_name - RET=0 - vrf_name=$(master_name_get $if_name) ip vrf exec $vrf_name $PING $dip -c 10 -i 0.1 -w 2 &> /dev/null +} + +ping_test() +{ + RET=0 + + ping_do $1 $2 check_err $? 
log_test "ping" } -ping6_test() +ping6_do() { local if_name=$1 local dip=$2 local vrf_name - RET=0 - vrf_name=$(master_name_get $if_name) ip vrf exec $vrf_name $PING6 $dip -c 10 -i 0.1 -w 2 &> /dev/null +} + +ping6_test() +{ + RET=0 + + ping6_do $1 $2 check_err $? log_test "ping6" } diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh new file mode 100755 index 000000000000..c5095da7f6bf --- /dev/null +++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d.sh @@ -0,0 +1,132 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test for "tc action mirred egress mirror" when the underlay route points at a +# bridge device without vlan filtering (802.1d). +# +# This test uses standard topology for testing mirror-to-gretap. See +# mirror_gre_topo_lib.sh for more details. The full topology is as follows: +# +# +---------------------+ +---------------------+ +# | H1 | | H2 | +# | + $h1 | | $h2 + | +# | | 192.0.2.1/28 | | 192.0.2.2/28 | | +# +-----|---------------+ +---------------|-----+ +# | | +# +-----|-------------------------------------------------------------|-----+ +# | SW o---> mirror | | +# | +---|-------------------------------------------------------------|---+ | +# | | + $swp1 + br1 (802.1q bridge) $swp2 + | | +# | +---------------------------------------------------------------------+ | +# | | +# | +---------------------------------------------------------------------+ | +# | | + br2 (802.1d bridge) | | +# | | 192.0.2.129/28 | | +# | | + $swp3 2001:db8:2::1/64 | | +# | +---|-----------------------------------------------------------------+ | +# | | ^ ^ | +# | | + gt6 (ip6gretap) | + gt4 (gretap) | | +# | | : loc=2001:db8:2::1 | : loc=192.0.2.129 | | +# | | : rem=2001:db8:2::2 -+ : rem=192.0.2.130 -+ | +# | | : ttl=100 : ttl=100 | +# | | : tos=inherit : tos=inherit | +# +-----|---------------------:----------------------:----------------------+ +# | : : +# +-----|---------------------:----------------------:----------------------+ +# | H3 + $h3 + h3-gt6(ip6gretap) + h3-gt4 (gretap) | +# | 192.0.2.130/28 loc=2001:db8:2::2 loc=192.0.2.130 | +# | 2001:db8:2::2/64 rem=2001:db8:2::1 rem=192.0.2.129 | +# | ttl=100 ttl=100 | +# | tos=inherit tos=inherit | +# +-------------------------------------------------------------------------+ + +ALL_TESTS=" + test_gretap + test_ip6gretap +" + +NUM_NETIFS=6 +source lib.sh +source mirror_lib.sh +source mirror_gre_lib.sh +source mirror_gre_topo_lib.sh + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + swp3=${NETIFS[p5]} + h3=${NETIFS[p6]} + + vrf_prepare + mirror_gre_topo_create + + ip link add name br2 type bridge vlan_filtering 0 + ip link set dev br2 up + + ip link set dev $swp3 master br2 + ip route add 192.0.2.130/32 dev br2 + ip -6 route add 2001:db8:2::2/128 dev br2 + + ip address add dev br2 192.0.2.129/28 + ip address add dev br2 2001:db8:2::1/64 + + ip address add dev $h3 192.0.2.130/28 + ip address add dev $h3 2001:db8:2::2/64 +} + +cleanup() +{ + pre_cleanup + + ip address del dev $h3 2001:db8:2::2/64 + ip address del dev $h3 192.0.2.130/28 + ip link del dev br2 + + mirror_gre_topo_destroy + vrf_cleanup +} + +test_gretap() +{ + full_test_span_gre_dir gt4 ingress 8 0 "mirror to gretap" + full_test_span_gre_dir gt4 egress 0 8 "mirror to gretap" +} + +test_ip6gretap() +{ + full_test_span_gre_dir gt6 ingress 8 0 "mirror to ip6gretap" + full_test_span_gre_dir gt6 egress 0 8 "mirror to 
ip6gretap" +} + +test_all() +{ + slow_path_trap_install $swp1 ingress + slow_path_trap_install $swp1 egress + + tests_run + + slow_path_trap_uninstall $swp1 egress + slow_path_trap_uninstall $swp1 ingress +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tcflags="skip_hw" +test_all + +if ! tc_offload_check; then + echo "WARN: Could not test offloaded functionality" +else + tcflags="skip_sw" + test_all +fi + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh index 3bb4c2ba7b14..197e769c2ed1 100755 --- a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh +++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1d_vlan.sh @@ -74,12 +74,14 @@ test_vlan_match() test_gretap() { - test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap" + test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \ + "mirror to gretap" } test_ip6gretap() { - test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap" + test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \ + "mirror to ip6gretap" } test_gretap_stp() diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh new file mode 100755 index 000000000000..a3402cd8d5b6 --- /dev/null +++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q.sh @@ -0,0 +1,126 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test for "tc action mirred egress mirror" when the underlay route points at a +# bridge device with vlan filtering (802.1q). +# +# This test uses standard topology for testing mirror-to-gretap. See +# mirror_gre_topo_lib.sh for more details. The full topology is as follows: +# +# +---------------------+ +---------------------+ +# | H1 | | H2 | +# | + $h1 | | $h2 + | +# | | 192.0.2.1/28 | | 192.0.2.2/28 | | +# +-----|---------------+ +---------------|-----+ +# | | +# +-----|---------------------------------------------------------------|-----+ +# | SW o---> mirror | | +# | +---|---------------------------------------------------------------|---+ | +# | | + $swp1 + br1 (802.1q bridge) $swp2 + | | +# | | 192.0.2.129/28 | | +# | | + $swp3 2001:db8:2::1/64 | | +# | | | vid555 vid555[pvid,untagged] | | +# | +---|-------------------------------------------------------------------+ | +# | | ^ ^ | +# | | + gt6 (ip6gretap) | + gt4 (gretap) | | +# | | : loc=2001:db8:2::1 | : loc=192.0.2.129 | | +# | | : rem=2001:db8:2::2 -+ : rem=192.0.2.130 -+ | +# | | : ttl=100 : ttl=100 | +# | | : tos=inherit : tos=inherit | +# +-----|---------------------:------------------------:----------------------+ +# | : : +# +-----|---------------------:------------------------:----------------------+ +# | H3 + $h3 + h3-gt6(ip6gretap) + h3-gt4 (gretap) | +# | | loc=2001:db8:2::2 loc=192.0.2.130 | +# | + $h3.555 rem=2001:db8:2::1 rem=192.0.2.129 | +# | 192.0.2.130/28 ttl=100 ttl=100 | +# | 2001:db8:2::2/64 tos=inherit tos=inherit | +# +---------------------------------------------------------------------------+ + +ALL_TESTS=" + test_gretap + test_ip6gretap +" + +NUM_NETIFS=6 +source lib.sh +source mirror_lib.sh +source mirror_gre_lib.sh +source mirror_gre_topo_lib.sh + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + swp3=${NETIFS[p5]} + h3=${NETIFS[p6]} + + vrf_prepare + mirror_gre_topo_create + + ip link set dev $swp3 master br1 + bridge vlan add dev br1 vid 
555 pvid untagged self + ip address add dev br1 192.0.2.129/28 + ip address add dev br1 2001:db8:2::1/64 + + ip -4 route add 192.0.2.130/32 dev br1 + ip -6 route add 2001:db8:2::2/128 dev br1 + + vlan_create $h3 555 v$h3 192.0.2.130/28 2001:db8:2::2/64 + bridge vlan add dev $swp3 vid 555 +} + +cleanup() +{ + pre_cleanup + + ip link set dev $swp3 nomaster + vlan_destroy $h3 555 + + mirror_gre_topo_destroy + vrf_cleanup +} + +test_gretap() +{ + full_test_span_gre_dir gt4 ingress 8 0 "mirror to gretap" + full_test_span_gre_dir gt4 egress 0 8 "mirror to gretap" +} + +test_ip6gretap() +{ + full_test_span_gre_dir gt6 ingress 8 0 "mirror to ip6gretap" + full_test_span_gre_dir gt6 egress 0 8 "mirror to ip6gretap" +} + +tests() +{ + slow_path_trap_install $swp1 ingress + slow_path_trap_install $swp1 egress + + tests_run + + slow_path_trap_uninstall $swp1 egress + slow_path_trap_uninstall $swp1 ingress +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tcflags="skip_hw" +tests + +if ! tc_offload_check; then + echo "WARN: Could not test offloaded functionality" +else + tcflags="skip_sw" + tests +fi + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh new file mode 100755 index 000000000000..61844caf671e --- /dev/null +++ b/tools/testing/selftests/net/forwarding/mirror_gre_bridge_1q_lag.sh @@ -0,0 +1,283 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test for "tc action mirred egress mirror" when the underlay route points at a +# bridge device with vlan filtering (802.1q), and the egress device is a team +# device. +# +# +----------------------+ +----------------------+ +# | H1 | | H2 | +# | + $h1.333 | | $h1.555 + | +# | | 192.0.2.1/28 | | 192.0.2.18/28 | | +# +-----|----------------+ +----------------|-----+ +# | $h1 | +# +--------------------------------+------------------------------+ +# | +# +--------------------------------------|------------------------------------+ +# | SW o---> mirror | +# | | | +# | +--------------------------------+------------------------------+ | +# | | $swp1 | | +# | + $swp1.333 $swp1.555 + | +# | 192.0.2.2/28 192.0.2.17/28 | +# | | +# | +-----------------------------------------------------------------------+ | +# | | BR1 (802.1q) | | +# | | + lag (team) 192.0.2.129/28 | | +# | | / \ 2001:db8:2::1/64 | | +# | +---/---\---------------------------------------------------------------+ | +# | / \ ^ | +# | | \ + gt4 (gretap) | | +# | | \ loc=192.0.2.129 | | +# | | \ rem=192.0.2.130 -+ | +# | | \ ttl=100 | +# | | \ tos=inherit | +# | | \ | +# | | \_________________________________ | +# | | \ | +# | + $swp3 + $swp4 | +# +---|------------------------------------------------|----------------------+ +# | | +# +---|----------------------+ +---|----------------------+ +# | + $h3 H3 | | + $h4 H4 | +# | 192.0.2.130/28 | | 192.0.2.130/28 | +# | 2001:db8:2::2/64 | | 2001:db8:2::2/64 | +# +--------------------------+ +--------------------------+ + +ALL_TESTS=" + test_mirror_gretap_first + test_mirror_gretap_second +" + +NUM_NETIFS=6 +source lib.sh +source mirror_lib.sh +source mirror_gre_lib.sh + +require_command $ARPING + +vlan_host_create() +{ + local if_name=$1; shift + local vid=$1; shift + local vrf_name=$1; shift + local ips=("${@}") + + vrf_create $vrf_name + ip link set dev $vrf_name up + vlan_create $if_name $vid $vrf_name "${ips[@]}" +} + +vlan_host_destroy() +{ + local if_name=$1; shift + local vid=$1; shift + local vrf_name=$1; shift + + 
vlan_destroy $if_name $vid + ip link set dev $vrf_name down + vrf_destroy $vrf_name +} + +h1_create() +{ + vlan_host_create $h1 333 vrf-h1 192.0.2.1/28 + ip -4 route add 192.0.2.16/28 vrf vrf-h1 nexthop via 192.0.2.2 +} + +h1_destroy() +{ + ip -4 route del 192.0.2.16/28 vrf vrf-h1 + vlan_host_destroy $h1 333 vrf-h1 +} + +h2_create() +{ + vlan_host_create $h1 555 vrf-h2 192.0.2.18/28 + ip -4 route add 192.0.2.0/28 vrf vrf-h2 nexthop via 192.0.2.17 +} + +h2_destroy() +{ + ip -4 route del 192.0.2.0/28 vrf vrf-h2 + vlan_host_destroy $h1 555 vrf-h2 +} + +h3_create() +{ + simple_if_init $h3 192.0.2.130/28 + tc qdisc add dev $h3 clsact +} + +h3_destroy() +{ + tc qdisc del dev $h3 clsact + simple_if_fini $h3 192.0.2.130/28 +} + +h4_create() +{ + simple_if_init $h4 192.0.2.130/28 + tc qdisc add dev $h4 clsact +} + +h4_destroy() +{ + tc qdisc del dev $h4 clsact + simple_if_fini $h4 192.0.2.130/28 +} + +switch_create() +{ + ip link set dev $swp1 up + tc qdisc add dev $swp1 clsact + vlan_create $swp1 333 "" 192.0.2.2/28 + vlan_create $swp1 555 "" 192.0.2.17/28 + + tunnel_create gt4 gretap 192.0.2.129 192.0.2.130 \ + ttl 100 tos inherit + + ip link set dev $swp3 up + ip link set dev $swp4 up + + ip link add name br1 type bridge vlan_filtering 1 + ip link set dev br1 up + __addr_add_del br1 add 192.0.2.129/32 + ip -4 route add 192.0.2.130/32 dev br1 + + team_create lag loadbalance $swp3 $swp4 + ip link set dev lag master br1 +} + +switch_destroy() +{ + ip link set dev lag nomaster + team_destroy lag + + ip -4 route del 192.0.2.130/32 dev br1 + __addr_add_del br1 del 192.0.2.129/32 + ip link set dev br1 down + ip link del dev br1 + + ip link set dev $swp4 down + ip link set dev $swp3 down + + tunnel_destroy gt4 + + vlan_destroy $swp1 555 + vlan_destroy $swp1 333 + tc qdisc del dev $swp1 clsact + ip link set dev $swp1 down +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp3=${NETIFS[p3]} + h3=${NETIFS[p4]} + + swp4=${NETIFS[p5]} + h4=${NETIFS[p6]} + + vrf_prepare + + ip link set dev $h1 up + h1_create + h2_create + h3_create + h4_create + switch_create + + trap_install $h3 ingress + trap_install $h4 ingress +} + +cleanup() +{ + pre_cleanup + + trap_uninstall $h4 ingress + trap_uninstall $h3 ingress + + switch_destroy + h4_destroy + h3_destroy + h2_destroy + h1_destroy + ip link set dev $h1 down + + vrf_cleanup +} + +test_lag_slave() +{ + local host_dev=$1; shift + local up_dev=$1; shift + local down_dev=$1; shift + local what=$1; shift + + RET=0 + + mirror_install $swp1 ingress gt4 \ + "proto 802.1q flower vlan_id 333 $tcflags" + + # Test connectivity through $up_dev when $down_dev is set down. + ip link set dev $down_dev down + setup_wait_dev $up_dev + setup_wait_dev $host_dev + $ARPING -I br1 192.0.2.130 -qfc 1 + sleep 2 + mirror_test vrf-h1 192.0.2.1 192.0.2.18 $host_dev 1 10 + + # Test lack of connectivity when both slaves are down. 
+ ip link set dev $up_dev down + sleep 2 + mirror_test vrf-h1 192.0.2.1 192.0.2.18 $h3 1 0 + mirror_test vrf-h1 192.0.2.1 192.0.2.18 $h4 1 0 + + ip link set dev $up_dev up + ip link set dev $down_dev up + mirror_uninstall $swp1 ingress + + log_test "$what ($tcflags)" +} + +test_mirror_gretap_first() +{ + test_lag_slave $h3 $swp3 $swp4 "mirror to gretap: LAG first slave" +} + +test_mirror_gretap_second() +{ + test_lag_slave $h4 $swp4 $swp3 "mirror to gretap: LAG second slave" +} + +test_all() +{ + slow_path_trap_install $swp1 ingress + slow_path_trap_install $swp1 egress + + tests_run + + slow_path_trap_uninstall $swp1 egress + slow_path_trap_uninstall $swp1 ingress +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tcflags="skip_hw" +test_all + +if ! tc_offload_check; then + echo "WARN: Could not test offloaded functionality" +else + tcflags="skip_sw" + test_all +fi + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh b/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh index aa29d46186a8..135902aa8b11 100755 --- a/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh +++ b/tools/testing/selftests/net/forwarding/mirror_gre_changes.sh @@ -122,15 +122,8 @@ test_span_gre_egress_up() # After setting the device up, wait for neighbor to get resolved so that # we can expect mirroring to work. ip link set dev $swp3 up - while true; do - ip neigh sh dev $swp3 $remote_ip nud reachable | - grep -q ^ - if [[ $? -ne 0 ]]; then - sleep 1 - else - break - fi - done + setup_wait_dev $swp3 + ping -c 1 -I $swp3 $remote_ip &>/dev/null quick_test_span_gre_dir $tundev ingress mirror_uninstall $swp1 ingress diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_lag_lacp.sh b/tools/testing/selftests/net/forwarding/mirror_gre_lag_lacp.sh new file mode 100755 index 000000000000..9edf4cb104a8 --- /dev/null +++ b/tools/testing/selftests/net/forwarding/mirror_gre_lag_lacp.sh @@ -0,0 +1,285 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +# Test for "tc action mirred egress mirror" when the underlay route points at a +# team device. 
+# +# +----------------------+ +----------------------+ +# | H1 | | H2 | +# | + $h1.333 | | $h1.555 + | +# | | 192.0.2.1/28 | | 192.0.2.18/28 | | +# +----|-----------------+ +----------------|-----+ +# | $h1 | +# +---------------------------------+------------------------------+ +# | +# +--------------------------------------|------------------------------------+ +# | SW o---> mirror | +# | | | +# | +----------------------------------+------------------------------+ | +# | | $swp1 | | +# | + $swp1.333 $swp1.555 + | +# | 192.0.2.2/28 192.0.2.17/28 | +# | | +# | | +# | + gt4 (gretap) ,-> + lag1 (team) | +# | loc=192.0.2.129 | | 192.0.2.129/28 | +# | rem=192.0.2.130 --' | | +# | ttl=100 | | +# | tos=inherit | | +# | _____________________|______________________ | +# | / \ | +# | / \ | +# | + $swp3 + $swp4 | +# +---|------------------------------------------------|----------------------+ +# | | +# +---|------------------------------------------------|----------------------+ +# | + $h3 + $h4 H3 | +# | \ / | +# | \____________________________________________/ | +# | | | +# | + lag2 (team) | +# | 192.0.2.130/28 | +# | | +# +---------------------------------------------------------------------------+ + +ALL_TESTS=" + test_mirror_gretap_first + test_mirror_gretap_second +" + +NUM_NETIFS=6 +source lib.sh +source mirror_lib.sh +source mirror_gre_lib.sh + +require_command $ARPING + +vlan_host_create() +{ + local if_name=$1; shift + local vid=$1; shift + local vrf_name=$1; shift + local ips=("${@}") + + vrf_create $vrf_name + ip link set dev $vrf_name up + vlan_create $if_name $vid $vrf_name "${ips[@]}" +} + +vlan_host_destroy() +{ + local if_name=$1; shift + local vid=$1; shift + local vrf_name=$1; shift + + vlan_destroy $if_name $vid + ip link set dev $vrf_name down + vrf_destroy $vrf_name +} + +h1_create() +{ + vlan_host_create $h1 333 vrf-h1 192.0.2.1/28 + ip -4 route add 192.0.2.16/28 vrf vrf-h1 nexthop via 192.0.2.2 +} + +h1_destroy() +{ + ip -4 route del 192.0.2.16/28 vrf vrf-h1 + vlan_host_destroy $h1 333 vrf-h1 +} + +h2_create() +{ + vlan_host_create $h1 555 vrf-h2 192.0.2.18/28 + ip -4 route add 192.0.2.0/28 vrf vrf-h2 nexthop via 192.0.2.17 +} + +h2_destroy() +{ + ip -4 route del 192.0.2.0/28 vrf vrf-h2 + vlan_host_destroy $h1 555 vrf-h2 +} + +h3_create_team() +{ + team_create lag2 lacp $h3 $h4 + __simple_if_init lag2 vrf-h3 192.0.2.130/32 + ip -4 route add vrf vrf-h3 192.0.2.129/32 dev lag2 +} + +h3_destroy_team() +{ + ip -4 route del vrf vrf-h3 192.0.2.129/32 dev lag2 + __simple_if_fini lag2 192.0.2.130/32 + team_destroy lag2 + + ip link set dev $h3 down + ip link set dev $h4 down +} + +h3_create() +{ + vrf_create vrf-h3 + ip link set dev vrf-h3 up + tc qdisc add dev $h3 clsact + tc qdisc add dev $h4 clsact + h3_create_team +} + +h3_destroy() +{ + h3_destroy_team + tc qdisc del dev $h4 clsact + tc qdisc del dev $h3 clsact + ip link set dev vrf-h3 down + vrf_destroy vrf-h3 +} + +switch_create() +{ + ip link set dev $swp1 up + tc qdisc add dev $swp1 clsact + vlan_create $swp1 333 "" 192.0.2.2/28 + vlan_create $swp1 555 "" 192.0.2.17/28 + + tunnel_create gt4 gretap 192.0.2.129 192.0.2.130 \ + ttl 100 tos inherit + + ip link set dev $swp3 up + ip link set dev $swp4 up + team_create lag1 lacp $swp3 $swp4 + __addr_add_del lag1 add 192.0.2.129/32 + ip -4 route add 192.0.2.130/32 dev lag1 +} + +switch_destroy() +{ + ip -4 route del 192.0.2.130/32 dev lag1 + __addr_add_del lag1 del 192.0.2.129/32 + team_destroy lag1 + + ip link set dev $swp4 down + ip link set dev $swp3 down + + tunnel_destroy gt4 + 
+ vlan_destroy $swp1 555 + vlan_destroy $swp1 333 + tc qdisc del dev $swp1 clsact + ip link set dev $swp1 down +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp3=${NETIFS[p3]} + h3=${NETIFS[p4]} + + swp4=${NETIFS[p5]} + h4=${NETIFS[p6]} + + vrf_prepare + + ip link set dev $h1 up + h1_create + h2_create + h3_create + switch_create + + trap_install $h3 ingress + trap_install $h4 ingress +} + +cleanup() +{ + pre_cleanup + + trap_uninstall $h4 ingress + trap_uninstall $h3 ingress + + switch_destroy + h3_destroy + h2_destroy + h1_destroy + ip link set dev $h1 down + + vrf_cleanup +} + +test_lag_slave() +{ + local up_dev=$1; shift + local down_dev=$1; shift + local what=$1; shift + + RET=0 + + mirror_install $swp1 ingress gt4 \ + "proto 802.1q flower vlan_id 333 $tcflags" + + # Move $down_dev away from the team. That will prompt change in + # txability of the connected device, without changing its upness. The + # driver should notice the txability change and move the traffic to the + # other slave. + ip link set dev $down_dev nomaster + sleep 2 + mirror_test vrf-h1 192.0.2.1 192.0.2.18 $up_dev 1 10 + + # Test lack of connectivity when neither slave is txable. + ip link set dev $up_dev nomaster + sleep 2 + mirror_test vrf-h1 192.0.2.1 192.0.2.18 $h3 1 0 + mirror_test vrf-h1 192.0.2.1 192.0.2.18 $h4 1 0 + mirror_uninstall $swp1 ingress + + # Recreate H3's team device, because mlxsw, which this test is + # predominantly mean to test, requires a bottom-up construction and + # doesn't allow enslavement to a device that already has an upper. + h3_destroy_team + h3_create_team + # Wait for ${h,swp}{3,4}. + setup_wait + + log_test "$what ($tcflags)" +} + +test_mirror_gretap_first() +{ + test_lag_slave $h3 $h4 "mirror to gretap: LAG first slave" +} + +test_mirror_gretap_second() +{ + test_lag_slave $h4 $h3 "mirror to gretap: LAG second slave" +} + +test_all() +{ + slow_path_trap_install $swp1 ingress + slow_path_trap_install $swp1 egress + + tests_run + + slow_path_trap_uninstall $swp1 egress + slow_path_trap_uninstall $swp1 ingress +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tcflags="skip_hw" +test_all + +if ! 
tc_offload_check; then + echo "WARN: Could not test offloaded functionality" +else + tcflags="skip_sw" + test_all +fi + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh index 619b469365be..fac486178ef7 100644 --- a/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh +++ b/tools/testing/selftests/net/forwarding/mirror_gre_lib.sh @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0 -source mirror_lib.sh +source "$relative_path/mirror_lib.sh" quick_test_span_gre_dir_ips() { @@ -62,7 +62,7 @@ full_test_span_gre_dir_vlan_ips() "$backward_type" "$ip1" "$ip2" tc filter add dev $h3 ingress pref 77 prot 802.1q \ - flower $vlan_match ip_proto 0x2f \ + flower $vlan_match \ action pass mirror_test v$h1 $ip1 $ip2 $h3 77 10 tc filter del dev $h3 ingress pref 77 diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_nh.sh b/tools/testing/selftests/net/forwarding/mirror_gre_nh.sh index 8fa681eb90e7..6f9ef1820e93 100755 --- a/tools/testing/selftests/net/forwarding/mirror_gre_nh.sh +++ b/tools/testing/selftests/net/forwarding/mirror_gre_nh.sh @@ -35,6 +35,8 @@ setup_prepare() vrf_prepare mirror_gre_topo_create + sysctl_set net.ipv4.conf.v$h3.rp_filter 0 + ip address add dev $swp3 192.0.2.161/28 ip address add dev $h3 192.0.2.162/28 ip address add dev gt4 192.0.2.129/32 @@ -61,6 +63,8 @@ cleanup() ip address del dev $h3 192.0.2.162/28 ip address del dev $swp3 192.0.2.161/28 + sysctl_restore net.ipv4.conf.v$h3.rp_filter 0 + mirror_gre_topo_destroy vrf_cleanup diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_topo_lib.sh b/tools/testing/selftests/net/forwarding/mirror_gre_topo_lib.sh index 253419564708..39c03e2867f4 100644 --- a/tools/testing/selftests/net/forwarding/mirror_gre_topo_lib.sh +++ b/tools/testing/selftests/net/forwarding/mirror_gre_topo_lib.sh @@ -33,7 +33,7 @@ # | | # +-------------------------------------------------------------------------+ -source mirror_topo_lib.sh +source "$relative_path/mirror_topo_lib.sh" mirror_gre_topo_h3_create() { diff --git a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh index 5dbc7a08f4bd..204b25f13934 100755 --- a/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh +++ b/tools/testing/selftests/net/forwarding/mirror_gre_vlan_bridge_1q.sh @@ -28,6 +28,8 @@ source mirror_lib.sh source mirror_gre_lib.sh source mirror_gre_topo_lib.sh +require_command $ARPING + setup_prepare() { h1=${NETIFS[p1]} @@ -39,6 +41,12 @@ setup_prepare() swp3=${NETIFS[p5]} h3=${NETIFS[p6]} + # gt4's remote address is at $h3.555, not $h3. Thus the packets arriving + # directly to $h3 for test_gretap_untagged_egress() are rejected by + # rp_filter and the test spuriously fails. 
+ sysctl_set net.ipv4.conf.all.rp_filter 0 + sysctl_set net.ipv4.conf.$h3.rp_filter 0 + vrf_prepare mirror_gre_topo_create @@ -65,6 +73,9 @@ cleanup() mirror_gre_topo_destroy vrf_cleanup + + sysctl_restore net.ipv4.conf.$h3.rp_filter + sysctl_restore net.ipv4.conf.all.rp_filter } test_vlan_match() @@ -79,12 +90,14 @@ test_vlan_match() test_gretap() { - test_vlan_match gt4 'vlan_id 555 vlan_ethtype ip' "mirror to gretap" + test_vlan_match gt4 'skip_hw vlan_id 555 vlan_ethtype ip' \ + "mirror to gretap" } test_ip6gretap() { - test_vlan_match gt6 'vlan_id 555 vlan_ethtype ipv6' "mirror to ip6gretap" + test_vlan_match gt6 'skip_hw vlan_id 555 vlan_ethtype ip' \ + "mirror to ip6gretap" } test_span_gre_forbidden_cpu() @@ -138,7 +151,7 @@ test_span_gre_forbidden_egress() bridge vlan add dev $swp3 vid 555 # Re-prime FDB - arping -I br1.555 192.0.2.130 -fqc 1 + $ARPING -I br1.555 192.0.2.130 -fqc 1 sleep 1 quick_test_span_gre_dir $tundev ingress @@ -212,7 +225,7 @@ test_span_gre_fdb_roaming() bridge fdb del dev $swp2 $h3mac vlan 555 master # Re-prime FDB - arping -I br1.555 192.0.2.130 -fqc 1 + $ARPING -I br1.555 192.0.2.130 -fqc 1 sleep 1 quick_test_span_gre_dir $tundev ingress diff --git a/tools/testing/selftests/net/forwarding/mirror_lib.sh b/tools/testing/selftests/net/forwarding/mirror_lib.sh index d36dc26c6c51..07991e1025c7 100644 --- a/tools/testing/selftests/net/forwarding/mirror_lib.sh +++ b/tools/testing/selftests/net/forwarding/mirror_lib.sh @@ -105,7 +105,7 @@ do_test_span_vlan_dir_ips() # Install the capture as skip_hw to avoid double-counting of packets. # The traffic is meant for local box anyway, so will be trapped to # kernel. - vlan_capture_install $dev "skip_hw vlan_id $vid" + vlan_capture_install $dev "skip_hw vlan_id $vid vlan_ethtype ip" mirror_test v$h1 $ip1 $ip2 $dev 100 $expect mirror_test v$h2 $ip2 $ip1 $dev 100 $expect vlan_capture_uninstall $dev diff --git a/tools/testing/selftests/net/forwarding/router_bridge.sh b/tools/testing/selftests/net/forwarding/router_bridge.sh new file mode 100755 index 000000000000..ebc596a272f7 --- /dev/null +++ b/tools/testing/selftests/net/forwarding/router_bridge.sh @@ -0,0 +1,113 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +ALL_TESTS=" + ping_ipv4 + ping_ipv6 +" +NUM_NETIFS=4 +source lib.sh + +h1_create() +{ + simple_if_init $h1 192.0.2.1/28 2001:db8:1::1/64 + ip -4 route add 192.0.2.128/28 vrf v$h1 nexthop via 192.0.2.2 + ip -6 route add 2001:db8:2::/64 vrf v$h1 nexthop via 2001:db8:1::2 +} + +h1_destroy() +{ + ip -6 route del 2001:db8:2::/64 vrf v$h1 + ip -4 route del 192.0.2.128/28 vrf v$h1 + simple_if_fini $h1 192.0.2.1/28 2001:db8:1::1/64 +} + +h2_create() +{ + simple_if_init $h2 192.0.2.130/28 2001:db8:2::2/64 + ip -4 route add 192.0.2.0/28 vrf v$h2 nexthop via 192.0.2.129 + ip -6 route add 2001:db8:1::/64 vrf v$h2 nexthop via 2001:db8:2::1 +} + +h2_destroy() +{ + ip -6 route del 2001:db8:1::/64 vrf v$h2 + ip -4 route del 192.0.2.0/28 vrf v$h2 + simple_if_fini $h2 192.0.2.130/28 2001:db8:2::2/64 +} + +router_create() +{ + ip link add name br1 type bridge vlan_filtering 1 + ip link set dev br1 up + + ip link set dev $swp1 master br1 + ip link set dev $swp1 up + __addr_add_del br1 add 192.0.2.2/28 2001:db8:1::2/64 + + ip link set dev $swp2 up + __addr_add_del $swp2 add 192.0.2.129/28 2001:db8:2::1/64 +} + +router_destroy() +{ + __addr_add_del $swp2 del 192.0.2.129/28 2001:db8:2::1/64 + ip link set dev $swp2 down + + __addr_add_del br1 del 192.0.2.2/28 2001:db8:1::2/64 + ip link set dev $swp1 down + ip link set dev $swp1 
nomaster + + ip link del dev br1 +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + vrf_prepare + + h1_create + h2_create + + router_create + + forwarding_enable +} + +cleanup() +{ + pre_cleanup + + forwarding_restore + + router_destroy + + h2_destroy + h1_destroy + + vrf_cleanup +} + +ping_ipv4() +{ + ping_test $h1 192.0.2.130 +} + +ping_ipv6() +{ + ping6_test $h1 2001:db8:2::2 +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/router_bridge_vlan.sh b/tools/testing/selftests/net/forwarding/router_bridge_vlan.sh new file mode 100755 index 000000000000..fef88eb4b873 --- /dev/null +++ b/tools/testing/selftests/net/forwarding/router_bridge_vlan.sh @@ -0,0 +1,132 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +ALL_TESTS=" + ping_ipv4 + ping_ipv6 + vlan +" +NUM_NETIFS=4 +source lib.sh + +h1_create() +{ + simple_if_init $h1 + vlan_create $h1 555 v$h1 192.0.2.1/28 2001:db8:1::1/64 + ip -4 route add 192.0.2.128/28 vrf v$h1 nexthop via 192.0.2.2 + ip -6 route add 2001:db8:2::/64 vrf v$h1 nexthop via 2001:db8:1::2 +} + +h1_destroy() +{ + ip -6 route del 2001:db8:2::/64 vrf v$h1 + ip -4 route del 192.0.2.128/28 vrf v$h1 + vlan_destroy $h1 555 + simple_if_fini $h1 +} + +h2_create() +{ + simple_if_init $h2 192.0.2.130/28 2001:db8:2::2/64 + ip -4 route add 192.0.2.0/28 vrf v$h2 nexthop via 192.0.2.129 + ip -6 route add 2001:db8:1::/64 vrf v$h2 nexthop via 2001:db8:2::1 +} + +h2_destroy() +{ + ip -6 route del 2001:db8:1::/64 vrf v$h2 + ip -4 route del 192.0.2.0/28 vrf v$h2 + simple_if_fini $h2 192.0.2.130/28 +} + +router_create() +{ + ip link add name br1 type bridge vlan_filtering 1 + ip link set dev br1 up + + ip link set dev $swp1 master br1 + ip link set dev $swp1 up + + bridge vlan add dev br1 vid 555 self pvid untagged + bridge vlan add dev $swp1 vid 555 + + __addr_add_del br1 add 192.0.2.2/28 2001:db8:1::2/64 + + ip link set dev $swp2 up + __addr_add_del $swp2 add 192.0.2.129/28 2001:db8:2::1/64 +} + +router_destroy() +{ + __addr_add_del $swp2 del 192.0.2.129/28 2001:db8:2::1/64 + ip link set dev $swp2 down + + __addr_add_del br1 del 192.0.2.2/28 2001:db8:1::2/64 + ip link set dev $swp1 down + ip link set dev $swp1 nomaster + + ip link del dev br1 +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + swp1=${NETIFS[p2]} + + swp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + vrf_prepare + + h1_create + h2_create + + router_create + + forwarding_enable +} + +cleanup() +{ + pre_cleanup + + forwarding_restore + + router_destroy + + h2_destroy + h1_destroy + + vrf_cleanup +} + +vlan() +{ + RET=0 + + bridge vlan add dev br1 vid 333 self + check_err $? "Can't add a non-PVID VLAN" + bridge vlan del dev br1 vid 333 self + check_err $? 
"Can't remove a non-PVID VLAN" + + log_test "vlan" +} + +ping_ipv4() +{ + ping_test $h1 192.0.2.130 +} + +ping_ipv6() +{ + ping6_test $h1 2001:db8:2::2 +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/router_broadcast.sh b/tools/testing/selftests/net/forwarding/router_broadcast.sh new file mode 100755 index 000000000000..7bd2ebb6e9de --- /dev/null +++ b/tools/testing/selftests/net/forwarding/router_broadcast.sh @@ -0,0 +1,233 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +ALL_TESTS="ping_ipv4" +NUM_NETIFS=6 +source lib.sh + +h1_create() +{ + vrf_create "vrf-h1" + ip link set dev $h1 master vrf-h1 + + ip link set dev vrf-h1 up + ip link set dev $h1 up + + ip address add 192.0.2.2/24 dev $h1 + + ip route add 198.51.100.0/24 vrf vrf-h1 nexthop via 192.0.2.1 + ip route add 198.51.200.0/24 vrf vrf-h1 nexthop via 192.0.2.1 +} + +h1_destroy() +{ + ip route del 198.51.200.0/24 vrf vrf-h1 + ip route del 198.51.100.0/24 vrf vrf-h1 + + ip address del 192.0.2.2/24 dev $h1 + + ip link set dev $h1 down + vrf_destroy "vrf-h1" +} + +h2_create() +{ + vrf_create "vrf-h2" + ip link set dev $h2 master vrf-h2 + + ip link set dev vrf-h2 up + ip link set dev $h2 up + + ip address add 198.51.100.2/24 dev $h2 + + ip route add 192.0.2.0/24 vrf vrf-h2 nexthop via 198.51.100.1 + ip route add 198.51.200.0/24 vrf vrf-h2 nexthop via 198.51.100.1 +} + +h2_destroy() +{ + ip route del 198.51.200.0/24 vrf vrf-h2 + ip route del 192.0.2.0/24 vrf vrf-h2 + + ip address del 198.51.100.2/24 dev $h2 + + ip link set dev $h2 down + vrf_destroy "vrf-h2" +} + +h3_create() +{ + vrf_create "vrf-h3" + ip link set dev $h3 master vrf-h3 + + ip link set dev vrf-h3 up + ip link set dev $h3 up + + ip address add 198.51.200.2/24 dev $h3 + + ip route add 192.0.2.0/24 vrf vrf-h3 nexthop via 198.51.200.1 + ip route add 198.51.100.0/24 vrf vrf-h3 nexthop via 198.51.200.1 +} + +h3_destroy() +{ + ip route del 198.51.100.0/24 vrf vrf-h3 + ip route del 192.0.2.0/24 vrf vrf-h3 + + ip address del 198.51.200.2/24 dev $h3 + + ip link set dev $h3 down + vrf_destroy "vrf-h3" +} + +router_create() +{ + ip link set dev $rp1 up + ip link set dev $rp2 up + ip link set dev $rp3 up + + ip address add 192.0.2.1/24 dev $rp1 + + ip address add 198.51.100.1/24 dev $rp2 + ip address add 198.51.200.1/24 dev $rp3 +} + +router_destroy() +{ + ip address del 198.51.200.1/24 dev $rp3 + ip address del 198.51.100.1/24 dev $rp2 + + ip address del 192.0.2.1/24 dev $rp1 + + ip link set dev $rp3 down + ip link set dev $rp2 down + ip link set dev $rp1 down +} + +setup_prepare() +{ + h1=${NETIFS[p1]} + rp1=${NETIFS[p2]} + + rp2=${NETIFS[p3]} + h2=${NETIFS[p4]} + + rp3=${NETIFS[p5]} + h3=${NETIFS[p6]} + + vrf_prepare + + h1_create + h2_create + h3_create + + router_create + + forwarding_enable +} + +cleanup() +{ + pre_cleanup + + forwarding_restore + + router_destroy + + h3_destroy + h2_destroy + h1_destroy + + vrf_cleanup +} + +bc_forwarding_disable() +{ + sysctl_set net.ipv4.conf.all.bc_forwarding 0 + sysctl_set net.ipv4.conf.$rp1.bc_forwarding 0 +} + +bc_forwarding_enable() +{ + sysctl_set net.ipv4.conf.all.bc_forwarding 1 + sysctl_set net.ipv4.conf.$rp1.bc_forwarding 1 +} + +bc_forwarding_restore() +{ + sysctl_restore net.ipv4.conf.$rp1.bc_forwarding + sysctl_restore net.ipv4.conf.all.bc_forwarding +} + +ping_test_from() +{ + local oif=$1 + local dip=$2 + local from=$3 + local fail=${4:-0} + + RET=0 + + log_info "ping $dip, expected reply from $from" + ip vrf exec $(master_name_get 
$oif) \ + $PING -I $oif $dip -c 10 -i 0.1 -w 2 -b 2>&1 | grep $from &> /dev/null + check_err_fail $fail $? +} + +ping_ipv4() +{ + sysctl_set net.ipv4.icmp_echo_ignore_broadcasts 0 + + bc_forwarding_disable + log_info "bc_forwarding disabled on r1 =>" + ping_test_from $h1 198.51.100.255 192.0.2.1 + log_test "h1 -> net2: reply from r1 (not forwarding)" + ping_test_from $h1 198.51.200.255 192.0.2.1 + log_test "h1 -> net3: reply from r1 (not forwarding)" + ping_test_from $h1 192.0.2.255 192.0.2.1 + log_test "h1 -> net1: reply from r1 (not dropping)" + ping_test_from $h1 255.255.255.255 192.0.2.1 + log_test "h1 -> 255.255.255.255: reply from r1 (not forwarding)" + + ping_test_from $h2 192.0.2.255 198.51.100.1 + log_test "h2 -> net1: reply from r1 (not forwarding)" + ping_test_from $h2 198.51.200.255 198.51.100.1 + log_test "h2 -> net3: reply from r1 (not forwarding)" + ping_test_from $h2 198.51.100.255 198.51.100.1 + log_test "h2 -> net2: reply from r1 (not dropping)" + ping_test_from $h2 255.255.255.255 198.51.100.1 + log_test "h2 -> 255.255.255.255: reply from r1 (not forwarding)" + bc_forwarding_restore + + bc_forwarding_enable + log_info "bc_forwarding enabled on r1 =>" + ping_test_from $h1 198.51.100.255 198.51.100.2 + log_test "h1 -> net2: reply from h2 (forwarding)" + ping_test_from $h1 198.51.200.255 198.51.200.2 + log_test "h1 -> net3: reply from h3 (forwarding)" + ping_test_from $h1 192.0.2.255 192.0.2.1 1 + log_test "h1 -> net1: no reply (dropping)" + ping_test_from $h1 255.255.255.255 192.0.2.1 + log_test "h1 -> 255.255.255.255: reply from r1 (not forwarding)" + + ping_test_from $h2 192.0.2.255 192.0.2.2 + log_test "h2 -> net1: reply from h1 (forwarding)" + ping_test_from $h2 198.51.200.255 198.51.200.2 + log_test "h2 -> net3: reply from h3 (forwarding)" + ping_test_from $h2 198.51.100.255 198.51.100.1 1 + log_test "h2 -> net2: no reply (dropping)" + ping_test_from $h2 255.255.255.255 198.51.100.1 + log_test "h2 -> 255.255.255.255: reply from r1 (not forwarding)" + bc_forwarding_restore + + sysctl_restore net.ipv4.icmp_echo_ignore_broadcasts +} + +trap cleanup EXIT + +setup_prepare +setup_wait + +tests_run + +exit $EXIT_STATUS diff --git a/tools/testing/selftests/net/forwarding/router_multipath.sh b/tools/testing/selftests/net/forwarding/router_multipath.sh index 8b6d0fb6d604..79a209927962 100755 --- a/tools/testing/selftests/net/forwarding/router_multipath.sh +++ b/tools/testing/selftests/net/forwarding/router_multipath.sh @@ -159,45 +159,6 @@ router2_destroy() vrf_destroy "vrf-r2" } -multipath_eval() -{ - local desc="$1" - local weight_rp12=$2 - local weight_rp13=$3 - local packets_rp12=$4 - local packets_rp13=$5 - local weights_ratio packets_ratio diff - - RET=0 - - if [[ "$packets_rp12" -eq "0" || "$packets_rp13" -eq "0" ]]; then - check_err 1 "Packet difference is 0" - log_test "Multipath" - log_info "Expected ratio $weights_ratio" - return - fi - - if [[ "$weight_rp12" -gt "$weight_rp13" ]]; then - weights_ratio=$(echo "scale=2; $weight_rp12 / $weight_rp13" \ - | bc -l) - packets_ratio=$(echo "scale=2; $packets_rp12 / $packets_rp13" \ - | bc -l) - else - weights_ratio=$(echo "scale=2; $weight_rp13 / $weight_rp12" | \ - bc -l) - packets_ratio=$(echo "scale=2; $packets_rp13 / $packets_rp12" | \ - bc -l) - fi - - diff=$(echo $weights_ratio - $packets_ratio | bc -l) - diff=${diff#-} - - test "$(echo "$diff / $weights_ratio > 0.15" | bc -l)" -eq 0 - check_err $? 
"Too large discrepancy between expected and measured ratios" - log_test "$desc" - log_info "Expected ratio $weights_ratio Measured ratio $packets_ratio" -} - multipath4_test() { local desc="$1" diff --git a/tools/testing/selftests/net/forwarding/tc_chains.sh b/tools/testing/selftests/net/forwarding/tc_chains.sh index d2c783e94df3..2934fb5ed2a2 100755 --- a/tools/testing/selftests/net/forwarding/tc_chains.sh +++ b/tools/testing/selftests/net/forwarding/tc_chains.sh @@ -1,7 +1,8 @@ #!/bin/bash # SPDX-License-Identifier: GPL-2.0 -ALL_TESTS="unreachable_chain_test gact_goto_chain_test" +ALL_TESTS="unreachable_chain_test gact_goto_chain_test create_destroy_chain \ + template_filter_fits" NUM_NETIFS=2 source tc_common.sh source lib.sh @@ -80,6 +81,87 @@ gact_goto_chain_test() log_test "gact goto chain ($tcflags)" } +create_destroy_chain() +{ + RET=0 + + tc chain add dev $h2 ingress + check_err $? "Failed to create default chain" + + output="$(tc -j chain get dev $h2 ingress)" + check_err $? "Failed to get default chain" + + echo $output | jq -e ".[] | select(.chain == 0)" &> /dev/null + check_err $? "Unexpected output for default chain" + + tc chain add dev $h2 ingress chain 1 + check_err $? "Failed to create chain 1" + + output="$(tc -j chain get dev $h2 ingress chain 1)" + check_err $? "Failed to get chain 1" + + echo $output | jq -e ".[] | select(.chain == 1)" &> /dev/null + check_err $? "Unexpected output for chain 1" + + output="$(tc -j chain show dev $h2 ingress)" + check_err $? "Failed to dump chains" + + echo $output | jq -e ".[] | select(.chain == 0)" &> /dev/null + check_err $? "Can't find default chain in dump" + + echo $output | jq -e ".[] | select(.chain == 1)" &> /dev/null + check_err $? "Can't find chain 1 in dump" + + tc chain del dev $h2 ingress + check_err $? "Failed to destroy default chain" + + tc chain del dev $h2 ingress chain 1 + check_err $? "Failed to destroy chain 1" + + log_test "create destroy chain" +} + +template_filter_fits() +{ + RET=0 + + tc chain add dev $h2 ingress protocol ip \ + flower dst_mac 00:00:00:00:00:00/FF:FF:FF:FF:FF:FF &> /dev/null + tc chain add dev $h2 ingress chain 1 protocol ip \ + flower src_mac 00:00:00:00:00:00/FF:FF:FF:FF:FF:FF &> /dev/null + + tc filter add dev $h2 ingress protocol ip pref 1 handle 1101 \ + flower dst_mac $h2mac action drop + check_err $? "Failed to insert filter which fits template" + + tc filter add dev $h2 ingress protocol ip pref 1 handle 1102 \ + flower src_mac $h2mac action drop &> /dev/null + check_fail $? "Incorrectly succeded to insert filter which does not template" + + tc filter add dev $h2 ingress chain 1 protocol ip pref 1 handle 1101 \ + flower src_mac $h2mac action drop + check_err $? "Failed to insert filter which fits template" + + tc filter add dev $h2 ingress chain 1 protocol ip pref 1 handle 1102 \ + flower dst_mac $h2mac action drop &> /dev/null + check_fail $? 
"Incorrectly succeded to insert filter which does not template" + + tc filter del dev $h2 ingress chain 1 protocol ip pref 1 handle 1102 \ + flower &> /dev/null + tc filter del dev $h2 ingress chain 1 protocol ip pref 1 handle 1101 \ + flower &> /dev/null + + tc filter del dev $h2 ingress protocol ip pref 1 handle 1102 \ + flower &> /dev/null + tc filter del dev $h2 ingress protocol ip pref 1 handle 1101 \ + flower &> /dev/null + + tc chain del dev $h2 ingress chain 1 + tc chain del dev $h2 ingress + + log_test "template filter fits" +} + setup_prepare() { h1=${NETIFS[p1]} @@ -103,6 +185,8 @@ cleanup() vrf_cleanup } +check_tc_chain_support + trap cleanup EXIT setup_prepare diff --git a/tools/testing/selftests/net/forwarding/tc_shblocks.sh b/tools/testing/selftests/net/forwarding/tc_shblocks.sh index b5b917203815..9826a446e2c0 100755 --- a/tools/testing/selftests/net/forwarding/tc_shblocks.sh +++ b/tools/testing/selftests/net/forwarding/tc_shblocks.sh @@ -105,6 +105,8 @@ cleanup() ip link set $swp2 address $swp2origmac } +check_tc_shblock_support + trap cleanup EXIT setup_prepare diff --git a/tools/testing/selftests/net/ip6_gre_headroom.sh b/tools/testing/selftests/net/ip6_gre_headroom.sh new file mode 100755 index 000000000000..5b41e8bb6e2d --- /dev/null +++ b/tools/testing/selftests/net/ip6_gre_headroom.sh @@ -0,0 +1,65 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 +# +# Test that enough headroom is reserved for the first packet passing through an +# IPv6 GRE-like netdevice. + +setup_prepare() +{ + ip link add h1 type veth peer name swp1 + ip link add h3 type veth peer name swp3 + + ip link set dev h1 up + ip address add 192.0.2.1/28 dev h1 + + ip link add dev vh3 type vrf table 20 + ip link set dev h3 master vh3 + ip link set dev vh3 up + ip link set dev h3 up + + ip link set dev swp3 up + ip address add dev swp3 2001:db8:2::1/64 + ip address add dev swp3 2001:db8:2::3/64 + + ip link set dev swp1 up + tc qdisc add dev swp1 clsact + + ip link add name er6 type ip6erspan \ + local 2001:db8:2::1 remote 2001:db8:2::2 oseq okey 123 + ip link set dev er6 up + + ip link add name gt6 type ip6gretap \ + local 2001:db8:2::3 remote 2001:db8:2::4 + ip link set dev gt6 up + + sleep 1 +} + +cleanup() +{ + ip link del dev gt6 + ip link del dev er6 + ip link del dev swp1 + ip link del dev swp3 + ip link del dev vh3 +} + +test_headroom() +{ + local type=$1; shift + local tundev=$1; shift + + tc filter add dev swp1 ingress pref 1000 matchall skip_hw \ + action mirred egress mirror dev $tundev + ping -I h1 192.0.2.2 -c 1 -w 2 &> /dev/null + tc filter del dev swp1 ingress pref 1000 + + # If it doesn't panic, it passes. + printf "TEST: %-60s [PASS]\n" "$type headroom" +} + +trap cleanup EXIT + +setup_prepare +test_headroom ip6gretap gt6 +test_headroom ip6erspan er6 diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh index 0d7a44fa30af..08c341b49760 100755 --- a/tools/testing/selftests/net/rtnetlink.sh +++ b/tools/testing/selftests/net/rtnetlink.sh @@ -525,18 +525,21 @@ kci_test_macsec() #------------------------------------------------------------------- kci_test_ipsec() { - srcip="14.0.0.52" - dstip="14.0.0.70" + ret=0 algo="aead rfc4106(gcm(aes)) 0x3132333435363738393031323334353664636261 128" + srcip=192.168.123.1 + dstip=192.168.123.2 + spi=7 + + ip addr add $srcip dev $devdummy # flush to be sure there's nothing configured ip x s flush ; ip x p flush check_err $? 
# start the monitor in the background - tmpfile=`mktemp ipsectestXXX` - ip x m > $tmpfile & - mpid=$! + tmpfile=`mktemp /var/run/ipsectestXXX` + mpid=`(ip x m > $tmpfile & echo $!) 2>/dev/null` sleep 0.2 ipsecid="proto esp src $srcip dst $dstip spi 0x07" @@ -599,6 +602,7 @@ kci_test_ipsec() check_err $? ip x p flush check_err $? + ip addr del $srcip/32 dev $devdummy if [ $ret -ne 0 ]; then echo "FAIL: ipsec" @@ -607,6 +611,119 @@ kci_test_ipsec() echo "PASS: ipsec" } +#------------------------------------------------------------------- +# Example commands +# ip x s add proto esp src 14.0.0.52 dst 14.0.0.70 \ +# spi 0x07 mode transport reqid 0x07 replay-window 32 \ +# aead 'rfc4106(gcm(aes))' 1234567890123456dcba 128 \ +# sel src 14.0.0.52/24 dst 14.0.0.70/24 +# offload dev sim1 dir out +# ip x p add dir out src 14.0.0.52/24 dst 14.0.0.70/24 \ +# tmpl proto esp src 14.0.0.52 dst 14.0.0.70 \ +# spi 0x07 mode transport reqid 0x07 +# +#------------------------------------------------------------------- +kci_test_ipsec_offload() +{ + ret=0 + algo="aead rfc4106(gcm(aes)) 0x3132333435363738393031323334353664636261 128" + srcip=192.168.123.3 + dstip=192.168.123.4 + dev=simx1 + sysfsd=/sys/kernel/debug/netdevsim/$dev + sysfsf=$sysfsd/ipsec + + # setup netdevsim since dummydev doesn't have offload support + modprobe netdevsim + check_err $? + if [ $ret -ne 0 ]; then + echo "FAIL: ipsec_offload can't load netdevsim" + return 1 + fi + + ip link add $dev type netdevsim + ip addr add $srcip dev $dev + ip link set $dev up + if [ ! -d $sysfsd ] ; then + echo "FAIL: ipsec_offload can't create device $dev" + return 1 + fi + if [ ! -f $sysfsf ] ; then + echo "FAIL: ipsec_offload netdevsim doesn't support IPsec offload" + return 1 + fi + + # flush to be sure there's nothing configured + ip x s flush ; ip x p flush + + # create offloaded SAs, both in and out + ip x p add dir out src $srcip/24 dst $dstip/24 \ + tmpl proto esp src $srcip dst $dstip spi 9 \ + mode transport reqid 42 + check_err $? + ip x p add dir out src $dstip/24 dst $srcip/24 \ + tmpl proto esp src $dstip dst $srcip spi 9 \ + mode transport reqid 42 + check_err $? + + ip x s add proto esp src $srcip dst $dstip spi 9 \ + mode transport reqid 42 $algo sel src $srcip/24 dst $dstip/24 \ + offload dev $dev dir out + check_err $? + ip x s add proto esp src $dstip dst $srcip spi 9 \ + mode transport reqid 42 $algo sel src $dstip/24 dst $srcip/24 \ + offload dev $dev dir in + check_err $? + if [ $ret -ne 0 ]; then + echo "FAIL: ipsec_offload can't create SA" + return 1 + fi + + # does offload show up in ip output + lines=`ip x s list | grep -c "crypto offload parameters: dev $dev dir"` + if [ $lines -ne 2 ] ; then + echo "FAIL: ipsec_offload SA offload missing from list output" + check_err 1 + fi + + # use ping to exercise the Tx path + ping -I $dev -c 3 -W 1 -i 0 $dstip >/dev/null + + # does driver have correct offload info + diff $sysfsf - << EOF +SA count=2 tx=3 +sa[0] tx ipaddr=0x00000000 00000000 00000000 00000000 +sa[0] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1 +sa[0] key=0x34333231 38373635 32313039 36353433 +sa[1] rx ipaddr=0x00000000 00000000 00000000 037ba8c0 +sa[1] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1 +sa[1] key=0x34333231 38373635 32313039 36353433 +EOF + if [ $? 
-ne 0 ] ; then + echo "FAIL: ipsec_offload incorrect driver data" + check_err 1 + fi + + # does offload get removed from driver + ip x s flush + ip x p flush + lines=`grep -c "SA count=0" $sysfsf` + if [ $lines -ne 1 ] ; then + echo "FAIL: ipsec_offload SA not removed from driver" + check_err 1 + fi + + # clean up any leftovers + ip link del $dev + rmmod netdevsim + + if [ $ret -ne 0 ]; then + echo "FAIL: ipsec_offload" + return 1 + fi + echo "PASS: ipsec_offload" +} + kci_test_gretap() { testns="testns" @@ -861,6 +978,7 @@ kci_test_rtnl() kci_test_encap kci_test_macsec kci_test_ipsec + kci_test_ipsec_offload kci_del_dummy } diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c new file mode 100644 index 000000000000..b3ebf2646e52 --- /dev/null +++ b/tools/testing/selftests/net/tls.c @@ -0,0 +1,692 @@ +// SPDX-License-Identifier: GPL-2.0 + +#define _GNU_SOURCE + +#include <arpa/inet.h> +#include <errno.h> +#include <error.h> +#include <fcntl.h> +#include <poll.h> +#include <stdio.h> +#include <stdlib.h> +#include <unistd.h> + +#include <linux/tls.h> +#include <linux/tcp.h> +#include <linux/socket.h> + +#include <sys/types.h> +#include <sys/sendfile.h> +#include <sys/socket.h> +#include <sys/stat.h> + +#include "../kselftest_harness.h" + +#define TLS_PAYLOAD_MAX_LEN 16384 +#define SOL_TLS 282 + +FIXTURE(tls) +{ + int fd, cfd; + bool notls; +}; + +FIXTURE_SETUP(tls) +{ + struct tls12_crypto_info_aes_gcm_128 tls12; + struct sockaddr_in addr; + socklen_t len; + int sfd, ret; + + self->notls = false; + len = sizeof(addr); + + memset(&tls12, 0, sizeof(tls12)); + tls12.info.version = TLS_1_2_VERSION; + tls12.info.cipher_type = TLS_CIPHER_AES_GCM_128; + + addr.sin_family = AF_INET; + addr.sin_addr.s_addr = htonl(INADDR_ANY); + addr.sin_port = 0; + + self->fd = socket(AF_INET, SOCK_STREAM, 0); + sfd = socket(AF_INET, SOCK_STREAM, 0); + + ret = bind(sfd, &addr, sizeof(addr)); + ASSERT_EQ(ret, 0); + ret = listen(sfd, 10); + ASSERT_EQ(ret, 0); + + ret = getsockname(sfd, &addr, &len); + ASSERT_EQ(ret, 0); + + ret = connect(self->fd, &addr, sizeof(addr)); + ASSERT_EQ(ret, 0); + + ret = setsockopt(self->fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")); + if (ret != 0) { + self->notls = true; + printf("Failure setting TCP_ULP, testing without tls\n"); + } + + if (!self->notls) { + ret = setsockopt(self->fd, SOL_TLS, TLS_TX, &tls12, + sizeof(tls12)); + ASSERT_EQ(ret, 0); + } + + self->cfd = accept(sfd, &addr, &len); + ASSERT_GE(self->cfd, 0); + + if (!self->notls) { + ret = setsockopt(self->cfd, IPPROTO_TCP, TCP_ULP, "tls", + sizeof("tls")); + ASSERT_EQ(ret, 0); + + ret = setsockopt(self->cfd, SOL_TLS, TLS_RX, &tls12, + sizeof(tls12)); + ASSERT_EQ(ret, 0); + } + + close(sfd); +} + +FIXTURE_TEARDOWN(tls) +{ + close(self->fd); + close(self->cfd); +} + +TEST_F(tls, sendfile) +{ + int filefd = open("/proc/self/exe", O_RDONLY); + struct stat st; + + EXPECT_GE(filefd, 0); + fstat(filefd, &st); + EXPECT_GE(sendfile(self->fd, filefd, 0, st.st_size), 0); +} + +TEST_F(tls, send_then_sendfile) +{ + int filefd = open("/proc/self/exe", O_RDONLY); + char const *test_str = "test_send"; + int to_send = strlen(test_str) + 1; + char recv_buf[10]; + struct stat st; + char *buf; + + EXPECT_GE(filefd, 0); + fstat(filefd, &st); + buf = (char *)malloc(st.st_size); + + EXPECT_EQ(send(self->fd, test_str, to_send, 0), to_send); + EXPECT_EQ(recv(self->cfd, recv_buf, to_send, 0), to_send); + EXPECT_EQ(memcmp(test_str, recv_buf, to_send), 0); + + EXPECT_GE(sendfile(self->fd, filefd, 0, st.st_size), 0); + 
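+ /* After the short send()/recv() round trip above, the peer is
+  * expected to read back exactly st.st_size bytes, i.e. the whole
+  * file image pushed through sendfile() on the TLS socket.
+  */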
EXPECT_EQ(recv(self->cfd, buf, st.st_size, 0), st.st_size); +} + +TEST_F(tls, recv_max) +{ + unsigned int send_len = TLS_PAYLOAD_MAX_LEN; + char recv_mem[TLS_PAYLOAD_MAX_LEN]; + char buf[TLS_PAYLOAD_MAX_LEN]; + + EXPECT_GE(send(self->fd, buf, send_len, 0), 0); + EXPECT_NE(recv(self->cfd, recv_mem, send_len, 0), -1); + EXPECT_EQ(memcmp(buf, recv_mem, send_len), 0); +} + +TEST_F(tls, recv_small) +{ + char const *test_str = "test_read"; + int send_len = 10; + char buf[10]; + + send_len = strlen(test_str) + 1; + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + EXPECT_NE(recv(self->cfd, buf, send_len, 0), -1); + EXPECT_EQ(memcmp(buf, test_str, send_len), 0); +} + +TEST_F(tls, msg_more) +{ + char const *test_str = "test_read"; + int send_len = 10; + char buf[10 * 2]; + + EXPECT_EQ(send(self->fd, test_str, send_len, MSG_MORE), send_len); + EXPECT_EQ(recv(self->cfd, buf, send_len, MSG_DONTWAIT), -1); + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + EXPECT_EQ(recv(self->cfd, buf, send_len * 2, MSG_DONTWAIT), + send_len * 2); + EXPECT_EQ(memcmp(buf, test_str, send_len), 0); +} + +TEST_F(tls, sendmsg_single) +{ + struct msghdr msg; + + char const *test_str = "test_sendmsg"; + size_t send_len = 13; + struct iovec vec; + char buf[13]; + + vec.iov_base = (char *)test_str; + vec.iov_len = send_len; + memset(&msg, 0, sizeof(struct msghdr)); + msg.msg_iov = &vec; + msg.msg_iovlen = 1; + EXPECT_EQ(sendmsg(self->fd, &msg, 0), send_len); + EXPECT_EQ(recv(self->cfd, buf, send_len, 0), send_len); + EXPECT_EQ(memcmp(buf, test_str, send_len), 0); +} + +TEST_F(tls, sendmsg_large) +{ + void *mem = malloc(16384); + size_t send_len = 16384; + size_t sends = 128; + struct msghdr msg; + size_t recvs = 0; + size_t sent = 0; + + memset(&msg, 0, sizeof(struct msghdr)); + while (sent++ < sends) { + struct iovec vec = { (void *)mem, send_len }; + + msg.msg_iov = &vec; + msg.msg_iovlen = 1; + EXPECT_EQ(sendmsg(self->cfd, &msg, 0), send_len); + } + + while (recvs++ < sends) + EXPECT_NE(recv(self->fd, mem, send_len, 0), -1); + + free(mem); +} + +TEST_F(tls, sendmsg_multiple) +{ + char const *test_str = "test_sendmsg_multiple"; + struct iovec vec[5]; + char *test_strs[5]; + struct msghdr msg; + int total_len = 0; + int len_cmp = 0; + int iov_len = 5; + char *buf; + int i; + + memset(&msg, 0, sizeof(struct msghdr)); + for (i = 0; i < iov_len; i++) { + test_strs[i] = (char *)malloc(strlen(test_str) + 1); + snprintf(test_strs[i], strlen(test_str) + 1, "%s", test_str); + vec[i].iov_base = (void *)test_strs[i]; + vec[i].iov_len = strlen(test_strs[i]) + 1; + total_len += vec[i].iov_len; + } + msg.msg_iov = vec; + msg.msg_iovlen = iov_len; + + EXPECT_EQ(sendmsg(self->cfd, &msg, 0), total_len); + buf = malloc(total_len); + EXPECT_NE(recv(self->fd, buf, total_len, 0), -1); + for (i = 0; i < iov_len; i++) { + EXPECT_EQ(memcmp(test_strs[i], buf + len_cmp, + strlen(test_strs[i])), + 0); + len_cmp += strlen(buf + len_cmp) + 1; + } + for (i = 0; i < iov_len; i++) + free(test_strs[i]); + free(buf); +} + +TEST_F(tls, sendmsg_multiple_stress) +{ + char const *test_str = "abcdefghijklmno"; + struct iovec vec[1024]; + char *test_strs[1024]; + int iov_len = 1024; + int total_len = 0; + char buf[1 << 14]; + struct msghdr msg; + int len_cmp = 0; + int i; + + memset(&msg, 0, sizeof(struct msghdr)); + for (i = 0; i < iov_len; i++) { + test_strs[i] = (char *)malloc(strlen(test_str) + 1); + snprintf(test_strs[i], strlen(test_str) + 1, "%s", test_str); + vec[i].iov_base = (void *)test_strs[i]; + vec[i].iov_len = 
strlen(test_strs[i]) + 1; + total_len += vec[i].iov_len; + } + msg.msg_iov = vec; + msg.msg_iovlen = iov_len; + + EXPECT_EQ(sendmsg(self->fd, &msg, 0), total_len); + EXPECT_NE(recv(self->cfd, buf, total_len, 0), -1); + + for (i = 0; i < iov_len; i++) + len_cmp += strlen(buf + len_cmp) + 1; + + for (i = 0; i < iov_len; i++) + free(test_strs[i]); +} + +TEST_F(tls, splice_from_pipe) +{ + int send_len = TLS_PAYLOAD_MAX_LEN; + char mem_send[TLS_PAYLOAD_MAX_LEN]; + char mem_recv[TLS_PAYLOAD_MAX_LEN]; + int p[2]; + + ASSERT_GE(pipe(p), 0); + EXPECT_GE(write(p[1], mem_send, send_len), 0); + EXPECT_GE(splice(p[0], NULL, self->fd, NULL, send_len, 0), 0); + EXPECT_GE(recv(self->cfd, mem_recv, send_len, 0), 0); + EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0); +} + +TEST_F(tls, splice_from_pipe2) +{ + int send_len = 16000; + char mem_send[16000]; + char mem_recv[16000]; + int p2[2]; + int p[2]; + + ASSERT_GE(pipe(p), 0); + ASSERT_GE(pipe(p2), 0); + EXPECT_GE(write(p[1], mem_send, 8000), 0); + EXPECT_GE(splice(p[0], NULL, self->fd, NULL, 8000, 0), 0); + EXPECT_GE(write(p2[1], mem_send + 8000, 8000), 0); + EXPECT_GE(splice(p2[0], NULL, self->fd, NULL, 8000, 0), 0); + EXPECT_GE(recv(self->cfd, mem_recv, send_len, 0), 0); + EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0); +} + +TEST_F(tls, send_and_splice) +{ + int send_len = TLS_PAYLOAD_MAX_LEN; + char mem_send[TLS_PAYLOAD_MAX_LEN]; + char mem_recv[TLS_PAYLOAD_MAX_LEN]; + char const *test_str = "test_read"; + int send_len2 = 10; + char buf[10]; + int p[2]; + + ASSERT_GE(pipe(p), 0); + EXPECT_EQ(send(self->fd, test_str, send_len2, 0), send_len2); + EXPECT_NE(recv(self->cfd, buf, send_len2, 0), -1); + EXPECT_EQ(memcmp(test_str, buf, send_len2), 0); + + EXPECT_GE(write(p[1], mem_send, send_len), send_len); + EXPECT_GE(splice(p[0], NULL, self->fd, NULL, send_len, 0), send_len); + + EXPECT_GE(recv(self->cfd, mem_recv, send_len, 0), 0); + EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0); +} + +TEST_F(tls, splice_to_pipe) +{ + int send_len = TLS_PAYLOAD_MAX_LEN; + char mem_send[TLS_PAYLOAD_MAX_LEN]; + char mem_recv[TLS_PAYLOAD_MAX_LEN]; + int p[2]; + + ASSERT_GE(pipe(p), 0); + EXPECT_GE(send(self->fd, mem_send, send_len, 0), 0); + EXPECT_GE(splice(self->cfd, NULL, p[1], NULL, send_len, 0), 0); + EXPECT_GE(read(p[0], mem_recv, send_len), 0); + EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0); +} + +TEST_F(tls, recvmsg_single) +{ + char const *test_str = "test_recvmsg_single"; + int send_len = strlen(test_str) + 1; + char buf[20]; + struct msghdr hdr; + struct iovec vec; + + memset(&hdr, 0, sizeof(hdr)); + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + vec.iov_base = (char *)buf; + vec.iov_len = send_len; + hdr.msg_iovlen = 1; + hdr.msg_iov = &vec; + EXPECT_NE(recvmsg(self->cfd, &hdr, 0), -1); + EXPECT_EQ(memcmp(test_str, buf, send_len), 0); +} + +TEST_F(tls, recvmsg_single_max) +{ + int send_len = TLS_PAYLOAD_MAX_LEN; + char send_mem[TLS_PAYLOAD_MAX_LEN]; + char recv_mem[TLS_PAYLOAD_MAX_LEN]; + struct iovec vec; + struct msghdr hdr; + + EXPECT_EQ(send(self->fd, send_mem, send_len, 0), send_len); + vec.iov_base = (char *)recv_mem; + vec.iov_len = TLS_PAYLOAD_MAX_LEN; + + hdr.msg_iovlen = 1; + hdr.msg_iov = &vec; + EXPECT_NE(recvmsg(self->cfd, &hdr, 0), -1); + EXPECT_EQ(memcmp(send_mem, recv_mem, send_len), 0); +} + +TEST_F(tls, recvmsg_multiple) +{ + unsigned int msg_iovlen = 1024; + unsigned int len_compared = 0; + struct iovec vec[1024]; + char *iov_base[1024]; + unsigned int iov_len = 16; + int send_len = 1 << 14; + char buf[1 << 14]; 
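+ /* A single 16 KiB payload (1 << 14 bytes) is sent below and then
+  * scattered across 1024 iovecs of 16 bytes each on the recvmsg()
+  * side (1024 * 16 == 1 << 14).
+  */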
+ struct msghdr hdr; + int i; + + EXPECT_EQ(send(self->fd, buf, send_len, 0), send_len); + for (i = 0; i < msg_iovlen; i++) { + iov_base[i] = (char *)malloc(iov_len); + vec[i].iov_base = iov_base[i]; + vec[i].iov_len = iov_len; + } + + hdr.msg_iovlen = msg_iovlen; + hdr.msg_iov = vec; + EXPECT_NE(recvmsg(self->cfd, &hdr, 0), -1); + for (i = 0; i < msg_iovlen; i++) + len_compared += iov_len; + + for (i = 0; i < msg_iovlen; i++) + free(iov_base[i]); +} + +TEST_F(tls, single_send_multiple_recv) +{ + unsigned int total_len = TLS_PAYLOAD_MAX_LEN * 2; + unsigned int send_len = TLS_PAYLOAD_MAX_LEN; + char send_mem[TLS_PAYLOAD_MAX_LEN * 2]; + char recv_mem[TLS_PAYLOAD_MAX_LEN * 2]; + + EXPECT_GE(send(self->fd, send_mem, total_len, 0), 0); + memset(recv_mem, 0, total_len); + + EXPECT_NE(recv(self->cfd, recv_mem, send_len, 0), -1); + EXPECT_NE(recv(self->cfd, recv_mem + send_len, send_len, 0), -1); + EXPECT_EQ(memcmp(send_mem, recv_mem, total_len), 0); +} + +TEST_F(tls, multiple_send_single_recv) +{ + unsigned int total_len = 2 * 10; + unsigned int send_len = 10; + char recv_mem[2 * 10]; + char send_mem[10]; + + EXPECT_GE(send(self->fd, send_mem, send_len, 0), 0); + EXPECT_GE(send(self->fd, send_mem, send_len, 0), 0); + memset(recv_mem, 0, total_len); + EXPECT_EQ(recv(self->cfd, recv_mem, total_len, 0), total_len); + + EXPECT_EQ(memcmp(send_mem, recv_mem, send_len), 0); + EXPECT_EQ(memcmp(send_mem, recv_mem + send_len, send_len), 0); +} + +TEST_F(tls, recv_partial) +{ + char const *test_str = "test_read_partial"; + char const *test_str_first = "test_read"; + char const *test_str_second = "_partial"; + int send_len = strlen(test_str) + 1; + char recv_mem[18]; + + memset(recv_mem, 0, sizeof(recv_mem)); + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + EXPECT_NE(recv(self->cfd, recv_mem, strlen(test_str_first), 0), -1); + EXPECT_EQ(memcmp(test_str_first, recv_mem, strlen(test_str_first)), 0); + memset(recv_mem, 0, sizeof(recv_mem)); + EXPECT_NE(recv(self->cfd, recv_mem, strlen(test_str_second), 0), -1); + EXPECT_EQ(memcmp(test_str_second, recv_mem, strlen(test_str_second)), + 0); +} + +TEST_F(tls, recv_nonblock) +{ + char buf[4096]; + bool err; + + EXPECT_EQ(recv(self->cfd, buf, sizeof(buf), MSG_DONTWAIT), -1); + err = (errno == EAGAIN || errno == EWOULDBLOCK); + EXPECT_EQ(err, true); +} + +TEST_F(tls, recv_peek) +{ + char const *test_str = "test_read_peek"; + int send_len = strlen(test_str) + 1; + char buf[15]; + + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + EXPECT_NE(recv(self->cfd, buf, send_len, MSG_PEEK), -1); + EXPECT_EQ(memcmp(test_str, buf, send_len), 0); + memset(buf, 0, sizeof(buf)); + EXPECT_NE(recv(self->cfd, buf, send_len, 0), -1); + EXPECT_EQ(memcmp(test_str, buf, send_len), 0); +} + +TEST_F(tls, recv_peek_multiple) +{ + char const *test_str = "test_read_peek"; + int send_len = strlen(test_str) + 1; + unsigned int num_peeks = 100; + char buf[15]; + int i; + + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + for (i = 0; i < num_peeks; i++) { + EXPECT_NE(recv(self->cfd, buf, send_len, MSG_PEEK), -1); + EXPECT_EQ(memcmp(test_str, buf, send_len), 0); + memset(buf, 0, sizeof(buf)); + } + EXPECT_NE(recv(self->cfd, buf, send_len, 0), -1); + EXPECT_EQ(memcmp(test_str, buf, send_len), 0); +} + +TEST_F(tls, pollin) +{ + char const *test_str = "test_poll"; + struct pollfd fd = { 0, 0, 0 }; + char buf[10]; + int send_len = 10; + + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + fd.fd = self->cfd; + fd.events = POLLIN; + + EXPECT_EQ(poll(&fd, 
1, 20), 1); + EXPECT_EQ(fd.revents & POLLIN, 1); + EXPECT_EQ(recv(self->cfd, buf, send_len, 0), send_len); + /* Test timing out */ + EXPECT_EQ(poll(&fd, 1, 20), 0); +} + +TEST_F(tls, poll_wait) +{ + char const *test_str = "test_poll_wait"; + int send_len = strlen(test_str) + 1; + struct pollfd fd = { 0, 0, 0 }; + char recv_mem[15]; + + fd.fd = self->cfd; + fd.events = POLLIN; + EXPECT_EQ(send(self->fd, test_str, send_len, 0), send_len); + /* Set timeout to inf. secs */ + EXPECT_EQ(poll(&fd, 1, -1), 1); + EXPECT_EQ(fd.revents & POLLIN, 1); + EXPECT_EQ(recv(self->cfd, recv_mem, send_len, 0), send_len); +} + +TEST_F(tls, blocking) +{ + size_t data = 100000; + int res = fork(); + + EXPECT_NE(res, -1); + + if (res) { + /* parent */ + size_t left = data; + char buf[16384]; + int status; + int pid2; + + while (left) { + int res = send(self->fd, buf, + left > 16384 ? 16384 : left, 0); + + EXPECT_GE(res, 0); + left -= res; + } + + pid2 = wait(&status); + EXPECT_EQ(status, 0); + EXPECT_EQ(res, pid2); + } else { + /* child */ + size_t left = data; + char buf[16384]; + + while (left) { + int res = recv(self->cfd, buf, + left > 16384 ? 16384 : left, 0); + + EXPECT_GE(res, 0); + left -= res; + } + } +} + +TEST_F(tls, nonblocking) +{ + size_t data = 100000; + int sendbuf = 100; + int flags; + int res; + + flags = fcntl(self->fd, F_GETFL, 0); + fcntl(self->fd, F_SETFL, flags | O_NONBLOCK); + fcntl(self->cfd, F_SETFL, flags | O_NONBLOCK); + + /* Ensure nonblocking behavior by imposing a small send + * buffer. + */ + EXPECT_EQ(setsockopt(self->fd, SOL_SOCKET, SO_SNDBUF, + &sendbuf, sizeof(sendbuf)), 0); + + res = fork(); + EXPECT_NE(res, -1); + + if (res) { + /* parent */ + bool eagain = false; + size_t left = data; + char buf[16384]; + int status; + int pid2; + + while (left) { + int res = send(self->fd, buf, + left > 16384 ? 16384 : left, 0); + + if (res == -1 && errno == EAGAIN) { + eagain = true; + usleep(10000); + continue; + } + EXPECT_GE(res, 0); + left -= res; + } + + EXPECT_TRUE(eagain); + pid2 = wait(&status); + + EXPECT_EQ(status, 0); + EXPECT_EQ(res, pid2); + } else { + /* child */ + bool eagain = false; + size_t left = data; + char buf[16384]; + + while (left) { + int res = recv(self->cfd, buf, + left > 16384 ? 16384 : left, 0); + + if (res == -1 && errno == EAGAIN) { + eagain = true; + usleep(10000); + continue; + } + EXPECT_GE(res, 0); + left -= res; + } + EXPECT_TRUE(eagain); + } +} + +TEST_F(tls, control_msg) +{ + if (self->notls) + return; + + char cbuf[CMSG_SPACE(sizeof(char))]; + char const *test_str = "test_read"; + int cmsg_len = sizeof(char); + char record_type = 100; + struct cmsghdr *cmsg; + struct msghdr msg; + int send_len = 10; + struct iovec vec; + char buf[10]; + + vec.iov_base = (char *)test_str; + vec.iov_len = 10; + memset(&msg, 0, sizeof(struct msghdr)); + msg.msg_iov = &vec; + msg.msg_iovlen = 1; + msg.msg_control = cbuf; + msg.msg_controllen = sizeof(cbuf); + cmsg = CMSG_FIRSTHDR(&msg); + cmsg->cmsg_level = SOL_TLS; + /* test sending non-record types. 
*/ + cmsg->cmsg_type = TLS_SET_RECORD_TYPE; + cmsg->cmsg_len = CMSG_LEN(cmsg_len); + *CMSG_DATA(cmsg) = record_type; + msg.msg_controllen = cmsg->cmsg_len; + + EXPECT_EQ(sendmsg(self->fd, &msg, 0), send_len); + /* Should fail because we didn't provide a control message */ + EXPECT_EQ(recv(self->cfd, buf, send_len, 0), -1); + + vec.iov_base = buf; + EXPECT_EQ(recvmsg(self->cfd, &msg, 0), send_len); + cmsg = CMSG_FIRSTHDR(&msg); + EXPECT_NE(cmsg, NULL); + EXPECT_EQ(cmsg->cmsg_level, SOL_TLS); + EXPECT_EQ(cmsg->cmsg_type, TLS_GET_RECORD_TYPE); + record_type = *((unsigned char *)CMSG_DATA(cmsg)); + EXPECT_EQ(record_type, 100); + EXPECT_EQ(memcmp(buf, test_str, send_len), 0); +} + +TEST_HARNESS_MAIN diff --git a/tools/testing/selftests/tc-testing/README b/tools/testing/selftests/tc-testing/README index 3a0336782d2d..49a6f8c3fdae 100644 --- a/tools/testing/selftests/tc-testing/README +++ b/tools/testing/selftests/tc-testing/README @@ -17,6 +17,10 @@ REQUIREMENTS * The kernel must have veth support available, as a veth pair is created prior to running the tests. +* The kernel must have the appropriate infrastructure enabled to run all tdc + unit tests. See the config file in this directory for minimum required + features. As new tests will be added, config options list will be updated. + * All tc-related features being tested must be built in or available as modules. To check what is required in current setup run: ./tdc.py -c @@ -109,8 +113,8 @@ COMMAND LINE ARGUMENTS Run tdc.py -h to see the full list of available arguments. usage: tdc.py [-h] [-p PATH] [-D DIR [DIR ...]] [-f FILE [FILE ...]] - [-c [CATG [CATG ...]]] [-e ID [ID ...]] [-l] [-s] [-i] [-v] - [-d DEVICE] [-n NS] [-V] + [-c [CATG [CATG ...]]] [-e ID [ID ...]] [-l] [-s] [-i] [-v] [-N] + [-d DEVICE] [-P] [-n] [-V] Linux TC unit tests @@ -118,8 +122,10 @@ optional arguments: -h, --help show this help message and exit -p PATH, --path PATH The full path to the tc executable to use -v, --verbose Show the commands that are being run + -N, --notap Suppress tap results for command under test -d DEVICE, --device DEVICE Execute the test case in flower category + -P, --pause Pause execution just before post-suite stage selection: select which test cases: files plus directories; filtered by categories @@ -146,10 +152,10 @@ action: -i, --id Generate ID numbers for new test cases netns: - options for nsPlugin(run commands in net namespace) + options for nsPlugin (run commands in net namespace) - -n NS, --namespace NS - Run commands in namespace NS + -n, --namespace + Run commands in namespace as specified in tdc_config.py valgrind: options for valgrindPlugin (run command under test under Valgrind) diff --git a/tools/testing/selftests/tc-testing/config b/tools/testing/selftests/tc-testing/config new file mode 100644 index 000000000000..203302065458 --- /dev/null +++ b/tools/testing/selftests/tc-testing/config @@ -0,0 +1,48 @@ +CONFIG_NET_SCHED=y + +# +# Queueing/Scheduling +# +CONFIG_NET_SCH_PRIO=m +CONFIG_NET_SCH_INGRESS=m + +# +# Classification +# +CONFIG_NET_CLS=y +CONFIG_NET_CLS_FW=m +CONFIG_NET_CLS_U32=m +CONFIG_CLS_U32_PERF=y +CONFIG_CLS_U32_MARK=y +CONFIG_NET_EMATCH=y +CONFIG_NET_EMATCH_STACK=32 +CONFIG_NET_EMATCH_CMP=m +CONFIG_NET_EMATCH_NBYTE=m +CONFIG_NET_EMATCH_U32=m +CONFIG_NET_EMATCH_META=m +CONFIG_NET_EMATCH_TEXT=m +CONFIG_NET_EMATCH_IPSET=m +CONFIG_NET_EMATCH_IPT=m +CONFIG_NET_CLS_ACT=y +CONFIG_NET_ACT_POLICE=m +CONFIG_NET_ACT_GACT=m +CONFIG_GACT_PROB=y +CONFIG_NET_ACT_MIRRED=m +CONFIG_NET_ACT_SAMPLE=m +CONFIG_NET_ACT_IPT=m 
+CONFIG_NET_ACT_NAT=m +CONFIG_NET_ACT_PEDIT=m +CONFIG_NET_ACT_SIMP=m +CONFIG_NET_ACT_SKBEDIT=m +CONFIG_NET_ACT_CSUM=m +CONFIG_NET_ACT_VLAN=m +CONFIG_NET_ACT_BPF=m +CONFIG_NET_ACT_CONNMARK=m +CONFIG_NET_ACT_SKBMOD=m +CONFIG_NET_ACT_IFE=m +CONFIG_NET_ACT_TUNNEL_KEY=m +CONFIG_NET_IFE_SKBMARK=m +CONFIG_NET_IFE_SKBPRIO=m +CONFIG_NET_IFE_SKBTCINDEX=m +CONFIG_NET_CLS_IND=y +CONFIG_NET_SCH_FIFO=y diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json index 70952bd98ff9..13147a1f5731 100644 --- a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json +++ b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json @@ -17,7 +17,7 @@ "cmdUnderTest": "$TC actions add action connmark", "expExitCode": "0", "verifyCmd": "$TC actions list action connmark", - "matchPattern": "action order [0-9]+: connmark zone 0 pipe", + "matchPattern": "action order [0-9]+: connmark zone 0 pipe", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -41,7 +41,7 @@ "cmdUnderTest": "$TC actions add action connmark pass index 1", "expExitCode": "0", "verifyCmd": "$TC actions get action connmark index 1", - "matchPattern": "action order [0-9]+: connmark zone 0 pass.*index 1 ref", + "matchPattern": "action order [0-9]+: connmark zone 0 pass.*index 1 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -65,7 +65,7 @@ "cmdUnderTest": "$TC actions add action connmark drop index 100", "expExitCode": "0", "verifyCmd": "$TC actions get action connmark index 100", - "matchPattern": "action order [0-9]+: connmark zone 0 drop.*index 100 ref", + "matchPattern": "action order [0-9]+: connmark zone 0 drop.*index 100 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -89,7 +89,7 @@ "cmdUnderTest": "$TC actions add action connmark pipe index 455", "expExitCode": "0", "verifyCmd": "$TC actions get action connmark index 455", - "matchPattern": "action order [0-9]+: connmark zone 0 pipe.*index 455 ref", + "matchPattern": "action order [0-9]+: connmark zone 0 pipe.*index 455 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -113,7 +113,7 @@ "cmdUnderTest": "$TC actions add action connmark reclassify index 7", "expExitCode": "0", "verifyCmd": "$TC actions list action connmark", - "matchPattern": "action order [0-9]+: connmark zone 0 reclassify.*index 7 ref", + "matchPattern": "action order [0-9]+: connmark zone 0 reclassify.*index 7 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -137,7 +137,7 @@ "cmdUnderTest": "$TC actions add action connmark continue index 17", "expExitCode": "0", "verifyCmd": "$TC actions list action connmark", - "matchPattern": "action order [0-9]+: connmark zone 0 continue.*index 17 ref", + "matchPattern": "action order [0-9]+: connmark zone 0 continue.*index 17 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -161,7 +161,7 @@ "cmdUnderTest": "$TC actions add action connmark jump 10 index 17", "expExitCode": "0", "verifyCmd": "$TC actions list action connmark", - "matchPattern": "action order [0-9]+: connmark zone 0 jump 10.*index 17 ref", + "matchPattern": "action order [0-9]+: connmark zone 0 jump 10.*index 17 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -185,7 +185,7 @@ "cmdUnderTest": "$TC actions add action connmark zone 100 pipe index 1", "expExitCode": "0", "verifyCmd": "$TC actions get action connmark index 1", - 
"matchPattern": "action order [0-9]+: connmark zone 100 pipe.*index 1 ref", + "matchPattern": "action order [0-9]+: connmark zone 100 pipe.*index 1 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -209,7 +209,7 @@ "cmdUnderTest": "$TC actions add action connmark zone 65536 reclassify index 21", "expExitCode": "255", "verifyCmd": "$TC actions get action connmark index 1", - "matchPattern": "action order [0-9]+: connmark zone 65536 reclassify.*index 21 ref", + "matchPattern": "action order [0-9]+: connmark zone 65536 reclassify.*index 21 ref", "matchCount": "0", "teardown": [ "$TC actions flush action connmark" @@ -233,7 +233,7 @@ "cmdUnderTest": "$TC actions add action connmark zone 655 unsupp_arg pass index 2", "expExitCode": "255", "verifyCmd": "$TC actions get action connmark index 2", - "matchPattern": "action order [0-9]+: connmark zone 655 unsupp_arg pass.*index 2 ref", + "matchPattern": "action order [0-9]+: connmark zone 655 unsupp_arg pass.*index 2 ref", "matchCount": "0", "teardown": [ "$TC actions flush action connmark" @@ -258,7 +258,7 @@ "cmdUnderTest": "$TC actions replace action connmark zone 555 reclassify index 555", "expExitCode": "0", "verifyCmd": "$TC actions get action connmark index 555", - "matchPattern": "action order [0-9]+: connmark zone 555 reclassify.*index 555 ref", + "matchPattern": "action order [0-9]+: connmark zone 555 reclassify.*index 555 ref", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" @@ -282,7 +282,7 @@ "cmdUnderTest": "$TC actions add action connmark zone 555 pipe index 5 cookie aabbccddeeff112233445566778800a1", "expExitCode": "0", "verifyCmd": "$TC actions get action connmark index 5", - "matchPattern": "action order [0-9]+: connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1", + "matchPattern": "action order [0-9]+: connmark zone 555 pipe.*index 5 ref.*cookie aabbccddeeff112233445566778800a1", "matchCount": "1", "teardown": [ "$TC actions flush action connmark" diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json b/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json index 3a2f51fc7fd4..a022792d392a 100644 --- a/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json +++ b/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json @@ -336,6 +336,30 @@ ] }, { + "id": "b10b", + "name": "Add all 7 csum actions", + "category": [ + "actions", + "csum" + ], + "setup": [ + [ + "$TC actions flush action csum", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action csum icmp ip4h sctp igmp udplite udp tcp index 7", + "expExitCode": "0", + "verifyCmd": "$TC actions get action csum index 7", + "matchPattern": "action order [0-9]*: csum \\(iph, icmp, igmp, tcp, udp, udplite, sctp\\).*index 7 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action csum" + ] + }, + { "id": "ce92", "name": "Add csum udp action with cookie", "category": [ diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json index 6e4edfae1799..db49fd0f8445 100644 --- a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json +++ b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json @@ -44,7 +44,8 @@ "matchPattern": "action order [0-9]*: mirred \\(Egress Redirect to device lo\\).*index 2 ref", "matchCount": "1", "teardown": [ - "$TC actions flush action mirred" + "$TC actions flush action mirred", + "$TC actions flush action gact" ] }, { diff --git 
a/tools/testing/selftests/tc-testing/tc-tests/actions/nat.json b/tools/testing/selftests/tc-testing/tc-tests/actions/nat.json new file mode 100644 index 000000000000..0080dc2fd41c --- /dev/null +++ b/tools/testing/selftests/tc-testing/tc-tests/actions/nat.json @@ -0,0 +1,593 @@ +[ + { + "id": "7565", + "name": "Add nat action on ingress with default control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 192.168.1.1 200.200.200.1", + "expExitCode": "0", + "verifyCmd": "$TC actions ls action nat", + "matchPattern": "action order [0-9]+: nat ingress 192.168.1.1/32 200.200.200.1 pass", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "fd79", + "name": "Add nat action on ingress with pipe control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 1.1.1.1 2.2.2.1 pipe index 77", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 77", + "matchPattern": "action order [0-9]+: nat ingress 1.1.1.1/32 2.2.2.1 pipe.*index 77 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "eab9", + "name": "Add nat action on ingress with continue control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 192.168.10.10 192.168.20.20 continue index 1000", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 1000", + "matchPattern": "action order [0-9]+: nat ingress 192.168.10.10/32 192.168.20.20 continue.*index 1000 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "c53a", + "name": "Add nat action on ingress with reclassify control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 192.168.10.10 192.168.20.20 reclassify index 1000", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 1000", + "matchPattern": "action order [0-9]+: nat ingress 192.168.10.10/32 192.168.20.20 reclassify.*index 1000 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "76c9", + "name": "Add nat action on ingress with jump control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 12.18.10.10 12.18.20.20 jump 10 index 22", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 22", + "matchPattern": "action order [0-9]+: nat ingress 12.18.10.10/32 12.18.20.20 jump 10.*index 22 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "24c6", + "name": "Add nat action on ingress with drop control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 1.18.1.1 1.18.2.2 drop index 722", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 722", + "matchPattern": "action order [0-9]+: nat ingress 1.18.1.1/32 1.18.2.2 drop.*index 722 ref", + "matchCount": "1", + "teardown": [ 
+ "$TC actions flush action nat" + ] + }, + { + "id": "2120", + "name": "Add nat action on ingress with maximum index value", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 1.18.1.1 1.18.2.2 index 4294967295", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 4294967295", + "matchPattern": "action order [0-9]+: nat ingress 1.18.1.1/32 1.18.2.2 pass.*index 4294967295 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "3e9d", + "name": "Add nat action on ingress with invalid index value", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 1.18.1.1 1.18.2.2 index 4294967295555", + "expExitCode": "255", + "verifyCmd": "$TC actions get action nat index 4294967295555", + "matchPattern": "action order [0-9]+: nat ingress 1.18.1.1/32 1.18.2.2 pass.*index 4294967295555 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ] + }, + { + "id": "f6c9", + "name": "Add nat action on ingress with invalid IP address", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 1.1.1.1 1.1888.2.2 index 7", + "expExitCode": "255", + "verifyCmd": "$TC actions get action nat index 7", + "matchPattern": "action order [0-9]+: nat ingress 1.1.1.1/32 1.1888.2.2 pass.*index 7 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ] + }, + { + "id": "be25", + "name": "Add nat action on ingress with invalid argument", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 1.1.1.1 1.18.2.2 another_arg index 12", + "expExitCode": "255", + "verifyCmd": "$TC actions get action nat index 12", + "matchPattern": "action order [0-9]+: nat ingress 1.1.1.1/32 1.18.2.2 pass.*another_arg.*index 12 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ] + }, + { + "id": "a7bd", + "name": "Add nat action on ingress with DEFAULT IP address", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress default 10.10.10.1 index 12", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 12", + "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/32 10.10.10.1 pass.*index 12 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "ee1e", + "name": "Add nat action on ingress with ANY IP address", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress any 10.10.10.1 index 12", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 12", + "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/32 10.10.10.1 pass.*index 12 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "1de8", + "name": "Add nat action on ingress with ALL IP address", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush 
action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress all 10.10.10.1 index 12", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 12", + "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/32 10.10.10.1 pass.*index 12 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "8dba", + "name": "Add nat action on egress with default control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress 10.10.10.1 20.20.20.1", + "expExitCode": "0", + "verifyCmd": "$TC actions ls action nat", + "matchPattern": "action order [0-9]+: nat egress 10.10.10.1/32 20.20.20.1 pass", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "19a7", + "name": "Add nat action on egress with pipe control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress 10.10.10.1 20.20.20.1 pipe", + "expExitCode": "0", + "verifyCmd": "$TC actions ls action nat", + "matchPattern": "action order [0-9]+: nat egress 10.10.10.1/32 20.20.20.1 pipe", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "f1d9", + "name": "Add nat action on egress with continue control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress 10.10.10.1 20.20.20.1 continue", + "expExitCode": "0", + "verifyCmd": "$TC actions ls action nat", + "matchPattern": "action order [0-9]+: nat egress 10.10.10.1/32 20.20.20.1 continue", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "6d4a", + "name": "Add nat action on egress with reclassify control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress 10.10.10.1 20.20.20.1 reclassify", + "expExitCode": "0", + "verifyCmd": "$TC actions ls action nat", + "matchPattern": "action order [0-9]+: nat egress 10.10.10.1/32 20.20.20.1 reclassify", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "b313", + "name": "Add nat action on egress with jump control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress 10.10.10.1 20.20.20.1 jump 777", + "expExitCode": "0", + "verifyCmd": "$TC actions ls action nat", + "matchPattern": "action order [0-9]+: nat egress 10.10.10.1/32 20.20.20.1 jump 777", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "d9fc", + "name": "Add nat action on egress with drop control action", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress 10.10.10.1 20.20.20.1 drop", + "expExitCode": "0", + "verifyCmd": "$TC actions ls action nat", + "matchPattern": "action order [0-9]+: nat egress 10.10.10.1/32 20.20.20.1 drop", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "a895", + "name": "Add nat action on egress with DEFAULT IP address", + 
"category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress default 20.20.20.1 pipe index 10", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 10", + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "2572", + "name": "Add nat action on egress with ANY IP address", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress any 20.20.20.1 pipe index 10", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 10", + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "37f3", + "name": "Add nat action on egress with ALL IP address", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress all 20.20.20.1 pipe index 10", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 10", + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "6054", + "name": "Add nat action on egress with cookie", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat egress all 20.20.20.1 pipe index 10 cookie aa1bc2d3eeff112233445566778800a1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 10", + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref.*cookie aa1bc2d3eeff112233445566778800a1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + }, + { + "id": "79d6", + "name": "Add nat action on ingress with cookie", + "category": [ + "actions", + "nat" + ], + "setup": [ + [ + "$TC actions flush action nat", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action nat ingress 192.168.1.1 10.10.10.1 reclassify index 1 cookie 112233445566778899aabbccddeeff11", + "expExitCode": "0", + "verifyCmd": "$TC actions get action nat index 1", + "matchPattern": "action order [0-9]+: nat ingress 192.168.1.1/32 10.10.10.1 reclassify.*index 1 ref.*cookie 112233445566778899aabbccddeeff11", + "matchCount": "1", + "teardown": [ + "$TC actions flush action nat" + ] + } +] diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json b/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json index 37ecc2716fee..5aaf593b914a 100644 --- a/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json +++ b/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json @@ -17,7 +17,7 @@ "cmdUnderTest": "$TC actions add action skbedit mark 1", "expExitCode": "0", "verifyCmd": "$TC actions list action skbedit", - "matchPattern": "action order [0-9]*: skbedit mark 1", + "matchPattern": "action order [0-9]*: skbedit mark 1", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -65,7 +65,7 @@ "cmdUnderTest": "$TC actions add action skbedit prio 99", "expExitCode": "0", "verifyCmd": "$TC actions list action skbedit", - "matchPattern": 
"action order [0-9]*: skbedit priority :99", + "matchPattern": "action order [0-9]*: skbedit priority :99", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -113,7 +113,7 @@ "cmdUnderTest": "$TC actions add action skbedit queue_mapping 909", "expExitCode": "0", "verifyCmd": "$TC actions list action skbedit", - "matchPattern": "action order [0-9]*: skbedit queue_mapping 909", + "matchPattern": "action order [0-9]*: skbedit queue_mapping 909", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -161,7 +161,7 @@ "cmdUnderTest": "$TC actions add action skbedit ptype host", "expExitCode": "0", "verifyCmd": "$TC actions list action skbedit", - "matchPattern": "action order [0-9]*: skbedit ptype host", + "matchPattern": "action order [0-9]*: skbedit ptype host", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -185,7 +185,7 @@ "cmdUnderTest": "$TC actions add action skbedit ptype otherhost", "expExitCode": "0", "verifyCmd": "$TC actions list action skbedit", - "matchPattern": "action order [0-9]*: skbedit ptype otherhost", + "matchPattern": "action order [0-9]*: skbedit ptype otherhost", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -233,7 +233,7 @@ "cmdUnderTest": "$TC actions add action skbedit ptype host pipe index 11", "expExitCode": "0", "verifyCmd": "$TC actions get action skbedit index 11", - "matchPattern": "action order [0-9]*: skbedit ptype host pipe.*index 11 ref", + "matchPattern": "action order [0-9]*: skbedit ptype host pipe.*index 11 ref", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -257,7 +257,7 @@ "cmdUnderTest": "$TC actions add action skbedit mark 56789 reclassify index 90", "expExitCode": "0", "verifyCmd": "$TC actions get action skbedit index 90", - "matchPattern": "action order [0-9]*: skbedit mark 56789 reclassify.*index 90 ref", + "matchPattern": "action order [0-9]*: skbedit mark 56789 reclassify.*index 90 ref", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -281,7 +281,7 @@ "cmdUnderTest": "$TC actions add action skbedit queue_mapping 3 pass index 271", "expExitCode": "0", "verifyCmd": "$TC actions get action skbedit index 271", - "matchPattern": "action order [0-9]*: skbedit queue_mapping 3 pass.*index 271 ref", + "matchPattern": "action order [0-9]*: skbedit queue_mapping 3 pass.*index 271 ref", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -305,7 +305,7 @@ "cmdUnderTest": "$TC actions add action skbedit queue_mapping 3 drop index 271", "expExitCode": "0", "verifyCmd": "$TC actions get action skbedit index 271", - "matchPattern": "action order [0-9]*: skbedit queue_mapping 3 drop.*index 271 ref", + "matchPattern": "action order [0-9]*: skbedit queue_mapping 3 drop.*index 271 ref", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -329,7 +329,7 @@ "cmdUnderTest": "$TC actions add action skbedit priority 8 jump 9 index 2", "expExitCode": "0", "verifyCmd": "$TC actions get action skbedit index 2", - "matchPattern": "action order [0-9]*: skbedit priority :8 jump 9.*index 2 ref", + "matchPattern": "action order [0-9]*: skbedit priority :8 jump 9.*index 2 ref", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -353,7 +353,7 @@ "cmdUnderTest": "$TC actions add action skbedit priority 16 continue index 32", "expExitCode": "0", "verifyCmd": "$TC actions get action skbedit index 32", - "matchPattern": "action order [0-9]*: skbedit priority :16 continue.*index 32 ref", + 
"matchPattern": "action order [0-9]*: skbedit priority :16 continue.*index 32 ref", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -377,7 +377,7 @@ "cmdUnderTest": "$TC actions add action skbedit priority 16 continue index 32 cookie deadbeef", "expExitCode": "0", "verifyCmd": "$TC actions get action skbedit index 32", - "matchPattern": "action order [0-9]*: skbedit priority :16 continue.*index 32 ref.*cookie deadbeef", + "matchPattern": "action order [0-9]*: skbedit priority :16 continue.*index 32 ref.*cookie deadbeef", "matchCount": "1", "teardown": [ "$TC actions flush action skbedit" @@ -405,7 +405,7 @@ "cmdUnderTest": "$TC actions list action skbedit", "expExitCode": "0", "verifyCmd": "$TC actions list action skbedit", - "matchPattern": "action order [0-9]*: skbedit", + "matchPattern": "action order [0-9]*: skbedit", "matchCount": "4", "teardown": [ "$TC actions flush action skbedit" diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json b/tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json new file mode 100644 index 000000000000..10b2d894e436 --- /dev/null +++ b/tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json @@ -0,0 +1,917 @@ +[ + { + "id": "2b11", + "name": "Add tunnel_key set action with mandatory parameters", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 id 1", + "expExitCode": "0", + "verifyCmd": "$TC actions list action tunnel_key", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 10.10.10.1.*dst_ip 20.20.20.2.*key_id 1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "dc6b", + "name": "Add tunnel_key set action with missing mandatory src_ip parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set dst_ip 20.20.20.2 id 100", + "expExitCode": "255", + "verifyCmd": "$TC actions list action tunnel_key", + "matchPattern": "action order [0-9]+: tunnel_key set.*dst_ip 20.20.20.2.*key_id 100", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "7f25", + "name": "Add tunnel_key set action with missing mandatory dst_ip parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 id 100", + "expExitCode": "255", + "verifyCmd": "$TC actions list action tunnel_key", + "matchPattern": "action order [0-9]+: tunnel_key set.*src_ip 10.10.10.1.*key_id 100", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "ba4e", + "name": "Add tunnel_key set action with missing mandatory id parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2", + "expExitCode": "255", + "verifyCmd": "$TC actions list action tunnel_key", + "matchPattern": "action order [0-9]+: tunnel_key set.*src_ip 10.10.10.1.*dst_ip 20.20.20.2", + "matchCount": "0", + "teardown": [ + [ + "$TC actions 
flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "a5e0", + "name": "Add tunnel_key set action with invalid src_ip parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 300.168.100.1 dst_ip 192.168.200.1 id 7 index 1", + "expExitCode": "1", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key set.*src_ip 300.168.100.1.*dst_ip 192.168.200.1.*key_id 7.*index 1 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "eaa8", + "name": "Add tunnel_key set action with invalid dst_ip parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 192.168.100.1 dst_ip 192.168.800.1 id 10 index 11", + "expExitCode": "1", + "verifyCmd": "$TC actions get action tunnel_key index 11", + "matchPattern": "action order [0-9]+: tunnel_key set.*src_ip 192.168.100.1.*dst_ip 192.168.800.1.*key_id 10.*index 11 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "3b09", + "name": "Add tunnel_key set action with invalid id parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 112233445566778899 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 112233445566778899.*index 1 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "9625", + "name": "Add tunnel_key set action with invalid dst_port parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 11 dst_port 998877 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 11.*dst_port 998877.*index 1 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "05af", + "name": "Add tunnel_key set action with optional dst_port parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 192.168.100.1 dst_ip 192.168.200.1 id 789 dst_port 4000 index 10", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 10", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 192.168.100.1.*dst_ip 192.168.200.1.*key_id 789.*dst_port 4000.*index 10 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "da80", + "name": "Add tunnel_key set action with index at 32-bit maximum", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 
0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 11 index 4294967295", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 4294967295", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*id 11.*index 4294967295 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "d407", + "name": "Add tunnel_key set action with index exceeding 32-bit maximum", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 11 index 4294967295678", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 4294967295678", + "matchPattern": "action order [0-9]+: tunnel_key set.*index 4294967295678 ref", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "5cba", + "name": "Add tunnel_key set action with id value at 32-bit maximum", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 4294967295 index 1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 4294967295.*index 1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "e84a", + "name": "Add tunnel_key set action with id value exceeding 32-bit maximum", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42949672955 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 4294967295", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42949672955.*index 1", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "9c19", + "name": "Add tunnel_key set action with dst_port value at 16-bit maximum", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 429 dst_port 65535 index 1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 429.*dst_port 65535.*index 1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "3bd9", + "name": "Add tunnel_key set action with dst_port value exceeding 16-bit maximum", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 429 dst_port 65535789 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 429.*dst_port 
65535789.*index 1", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "68e2", + "name": "Add tunnel_key unset action", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key unset index 1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*unset.*index 1 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "6192", + "name": "Add tunnel_key unset continue action", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key unset continue index 1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*unset continue.*index 1 ref", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "061d", + "name": "Add tunnel_key set continue action with cookie", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 192.168.10.1 dst_ip 192.168.20.2 id 123 continue index 1 cookie aa11bb22cc33dd44ee55ff66aa11b1b2", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 192.168.10.1.*dst_ip 192.168.20.2.*key_id 123.*csum continue.*index 1.*cookie aa11bb22cc33dd44ee55ff66aa11b1b2", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "8acb", + "name": "Add tunnel_key set continue action with invalid cookie", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 192.168.10.1 dst_ip 192.168.20.2 id 123 continue index 1 cookie aa11bb22cc33dd44ee55ff66aa11b1b2777888", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 192.168.10.1.*dst_ip 192.168.20.2.*key_id 123.*csum continue.*index 1.*cookie aa11bb22cc33dd44ee55ff66aa11b1b2777888", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "a07e", + "name": "Add tunnel_key action with no set/unset command specified", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key src_ip 10.10.10.1 dst_ip 20.20.20.2 id 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*src_ip 10.10.10.1.*dst_ip 20.20.20.2.*key_id 1", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "b227", + "name": "Add tunnel_key action with csum option", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 
20.20.20.2 id 1 csum index 99", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 99", + "matchPattern": "action order [0-9]+: tunnel_key.*src_ip 10.10.10.1.*dst_ip 20.20.20.2.*key_id 1.*csum pipe.*index 99", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "58a7", + "name": "Add tunnel_key action with nocsum option", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 10.10.10.2 id 7823 nocsum index 234", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 234", + "matchPattern": "action order [0-9]+: tunnel_key.*src_ip 10.10.10.1.*dst_ip 10.10.10.2.*key_id 7823.*nocsum pipe.*index 234", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "2575", + "name": "Add tunnel_key action with not-supported parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 10.10.10.2 id 7 foobar 999 index 4", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 4", + "matchPattern": "action order [0-9]+: tunnel_key.*src_ip 10.10.10.1.*dst_ip 10.10.10.2.*key_id 7.*foobar 999.*index 4", + "matchCount": "0", + "teardown": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ] + }, + { + "id": "7a88", + "name": "Add tunnel_key action with cookie parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 10.10.10.2 id 7 index 4 cookie aa11bb22cc33dd44ee55ff66aa11b1b2", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 4", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 10.10.10.1.*dst_ip 10.10.10.2.*key_id 7.*dst_port 0.*csum pipe.*index 4 ref.*cookie aa11bb22cc33dd44ee55ff66aa11b1b2", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "4f20", + "name": "Add tunnel_key action with a single geneve option parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:00880022 index 1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:00880022.*index 1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "e33d", + "name": "Add tunnel_key action with multiple geneve options parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:00880022,0408:42:0040007611223344,0111:02:1020304011223344 index 1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: 
tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:00880022,0408:42:0040007611223344,0111:02:1020304011223344.*index 1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "0778", + "name": "Add tunnel_key action with invalid class geneve option parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 824212:80:00880022 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 824212:80:00880022.*index 1", + "matchCount": "0", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "4ae8", + "name": "Add tunnel_key action with invalid type geneve option parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:4224:00880022 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:4224:00880022.*index 1", + "matchCount": "0", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "4039", + "name": "Add tunnel_key action with short data length geneve option parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:4288 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:4288.*index 1", + "matchCount": "0", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "26a6", + "name": "Add tunnel_key action with non-multiple of 4 data length geneve option parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:4288428822 index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:4288428822.*index 1", + "matchCount": "0", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "f44d", + "name": "Add tunnel_key action with incomplete geneve options parameter", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ] + ], + "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:00880022,0408:42: index 1", + "expExitCode": "255", + "verifyCmd": "$TC actions get action tunnel_key index 1", + 
"matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:00880022,0408:42:.*index 1", + "matchCount": "0", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "7afc", + "name": "Replace tunnel_key set action with all parameters", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ], + "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 dst_port 3128 csum id 1 index 1" + ], + "cmdUnderTest": "$TC actions replace action tunnel_key set src_ip 11.11.11.1 dst_ip 21.21.21.2 dst_port 3129 nocsum id 11 index 1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 11.11.11.1.*dst_ip 21.21.21.2.*key_id 11.*dst_port 3129.*nocsum pipe.*index 1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "364d", + "name": "Replace tunnel_key set action with all parameters and cookie", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ], + "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 dst_port 3128 nocsum id 1 index 1 cookie aabbccddeeff112233445566778800a" + ], + "cmdUnderTest": "$TC actions replace action tunnel_key set src_ip 11.11.11.1 dst_ip 21.21.21.2 dst_port 3129 id 11 csum reclassify index 1 cookie a1b1c1d1", + "expExitCode": "0", + "verifyCmd": "$TC actions get action tunnel_key index 1", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 11.11.11.1.*dst_ip 21.21.21.2.*key_id 11.*dst_port 3129.*csum reclassify.*index 1.*cookie a1b1c1d1", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "937c", + "name": "Fetch all existing tunnel_key actions", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ], + "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 dst_port 3128 nocsum id 1 pipe index 1", + "$TC actions add action tunnel_key set src_ip 11.10.10.1 dst_ip 21.20.20.2 dst_port 3129 csum id 2 jump 10 index 2", + "$TC actions add action tunnel_key set src_ip 12.10.10.1 dst_ip 22.20.20.2 dst_port 3130 csum id 3 pass index 3", + "$TC actions add action tunnel_key set src_ip 13.10.10.1 dst_ip 23.20.20.2 dst_port 3131 nocsum id 4 continue index 4" + ], + "cmdUnderTest": "$TC actions list action tunnel_key", + "expExitCode": "0", + "verifyCmd": "$TC actions list action tunnel_key", + "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 10.10.10.1.*dst_ip 20.20.20.2.*key_id 1.*dst_port 3128.*nocsum pipe.*index 1.*set.*src_ip 11.10.10.1.*dst_ip 21.20.20.2.*key_id 2.*dst_port 3129.*csum jump 10.*index 2.*set.*src_ip 12.10.10.1.*dst_ip 22.20.20.2.*key_id 3.*dst_port 3130.*csum pass.*index 3.*set.*src_ip 13.10.10.1.*dst_ip 23.20.20.2.*key_id 4.*dst_port 3131.*nocsum continue.*index 4", + "matchCount": "1", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + }, + { + "id": "6783", + "name": "Flush all existing tunnel_key actions", + "category": [ + "actions", + "tunnel_key" + ], + "setup": [ + [ + "$TC actions flush action tunnel_key", + 0, + 1, + 255 + ], + "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 dst_port 3128 nocsum id 1 pipe index 1", + "$TC actions add action tunnel_key 
set src_ip 11.10.10.1 dst_ip 21.20.20.2 dst_port 3129 csum id 2 reclassify index 2", + "$TC actions add action tunnel_key set src_ip 12.10.10.1 dst_ip 22.20.20.2 dst_port 3130 csum id 3 pass index 3", + "$TC actions add action tunnel_key set src_ip 13.10.10.1 dst_ip 23.20.20.2 dst_port 3131 nocsum id 4 continue index 4" + ], + "cmdUnderTest": "$TC actions flush action tunnel_key", + "expExitCode": "0", + "verifyCmd": "$TC actions list action tunnel_key", + "matchPattern": "action order [0-9]+:.*", + "matchCount": "0", + "teardown": [ + "$TC actions flush action tunnel_key" + ] + } +] diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/fw.json b/tools/testing/selftests/tc-testing/tc-tests/filters/fw.json new file mode 100644 index 000000000000..3b97cfd7e0f8 --- /dev/null +++ b/tools/testing/selftests/tc-testing/tc-tests/filters/fw.json @@ -0,0 +1,1049 @@ +[ + { + "id": "901f", + "name": "Add fw filter with prio at 32-bit maxixum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 65535 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 65535 protocol all fw", + "matchPattern": "pref 65535 fw.*handle 0x1.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "51e2", + "name": "Add fw filter with prio exceeding 32-bit maxixum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 65536 fw action ok", + "expExitCode": "255", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 65536 protocol all fw", + "matchPattern": "pref 65536 fw.*handle 0x1.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "d987", + "name": "Add fw filter with action ok", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "handle 0x1.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "affe", + "name": "Add fw filter with action continue", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action continue", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "handle 0x1.*gact action continue", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "28bc", + "name": "Add fw filter with action pipe", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action pipe", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "handle 0x1.*gact action pipe", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "8da2", + "name": "Add fw filter with action drop", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev 
$DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action drop", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 protocol all prio 1 fw", + "matchPattern": "handle 0x1.*gact action drop", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "9436", + "name": "Add fw filter with action reclassify", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action reclassify", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "handle 0x1.*gact action reclassify", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "95bb", + "name": "Add fw filter with action jump 10", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action jump 10", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "handle 0x1.*gact action jump 10", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "3d74", + "name": "Add fw filter with action goto chain 5", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action goto chain 5", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "handle 0x1.*gact action goto chain 5", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "eb8f", + "name": "Add fw filter with invalid action", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action pump", + "expExitCode": "255", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "handle 0x1.*gact action pump", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "6a79", + "name": "Add fw filter with missing mandatory action", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw", + "expExitCode": "2", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "filter protocol all pref [0-9]+ fw.*handle 0x1", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "8298", + "name": "Add fw filter with cookie", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 2 fw action pipe cookie aa11bb22cc33dd44ee55ff66aa11b1b2", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 2 protocol all fw", + "matchPattern": "pref 2 fw.*handle 0x1.*gact action pipe.*cookie aa11bb22cc33dd44ee55ff66aa11b1b2", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "a88c", + "name": "Add fw filter with invalid cookie", + "category": [ + "filter", + 
"fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 2 fw action continue cookie aa11bb22cc33dd44ee55ff66aa11b1b2777888", + "expExitCode": "255", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 2 protocol all fw", + "matchPattern": "pref 2 fw.*handle 0x1.*gact action continue.*cookie aa11bb22cc33dd44ee55ff66aa11b1b2777888", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "10f6", + "name": "Add fw filter with handle in hex", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 0xa1b2ff prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 0xa1b2ff prio 1 protocol all fw", + "matchPattern": "fw.*handle 0xa1b2ff.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "9d51", + "name": "Add fw filter with handle at 32-bit maximum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 4294967295 prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 4294967295 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0xffffffff.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "d939", + "name": "Add fw filter with handle exceeding 32-bit maximum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 4294967296 prio 1 fw action ok", + "expExitCode": "1", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 4294967296 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0x.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "658c", + "name": "Add fw filter with mask in hex", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 10/0xa1b2f prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 10 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0xa/0xa1b2f", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "86be", + "name": "Add fw filter with mask at 32-bit maximum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 10/4294967295 prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 10 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0xa[^/]", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "e635", + "name": "Add fw filter with mask exceeding 32-bit maximum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 10/4294967296 prio 1 fw action ok", + "expExitCode": "1", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 10 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0xa", + "matchCount": "0", + "teardown": [ + "$TC qdisc del 
dev $DEV1 ingress" + ] + }, + { + "id": "6cab", + "name": "Add fw filter with handle/mask in hex", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 0xa1b2cdff/0x1a2bffdc prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 0xa1b2cdff prio 1 protocol all fw", + "matchPattern": "fw.*handle 0xa1b2cdff/0x1a2bffdc", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "8700", + "name": "Add fw filter with handle/mask at 32-bit maximum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 4294967295/4294967295 prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 0xffffffff prio 1 protocol all fw", + "matchPattern": "fw.*handle 0xffffffff[^/]", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "7d62", + "name": "Add fw filter with handle/mask exceeding 32-bit maximum", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 4294967296/4294967296 prio 1 fw action ok", + "expExitCode": "1", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 10 prio 1 protocol all fw", + "matchPattern": "fw.*handle", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "7b69", + "name": "Add fw filter with missing mandatory handle", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: prio 1 fw action ok", + "expExitCode": "2", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "filter protocol all.*fw.*handle.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "d68b", + "name": "Add fw filter with invalid parent", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent aa11b1b2: handle 1 prio 1 fw action ok", + "expExitCode": "255", + "verifyCmd": "$TC filter dev $DEV1 parent aa11b1b2: handle 1 prio 1 protocol all fw", + "matchPattern": "filter protocol all pref 1 fw.*handle 0x1.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "66e0", + "name": "Add fw filter with missing mandatory parent id", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 handle 1 prio 1 fw action ok", + "expExitCode": "2", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "pref [0-9]+ fw.*handle 0x1.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "0ff3", + "name": "Add fw filter with classid", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw classid 3 action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0x1 classid :3.*gact action pass", + 
"matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "9849", + "name": "Add fw filter with classid at root", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw classid ffff:ffff action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "pref 1 fw.*handle 0x1 classid root.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "b7ff", + "name": "Add fw filter with classid - keeps last 8 (hex) digits", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw classid 98765fedcb action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0x1 classid 765f:edcb.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "2b18", + "name": "Add fw filter with invalid classid", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw classid 6789defg action ok", + "expExitCode": "1", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol all fw", + "matchPattern": "fw.*handle 0x1 classid 6789:defg.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "fade", + "name": "Add fw filter with flowid", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 10 prio 1 fw flowid 1:10 action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 10 prio 1 protocol all fw", + "matchPattern": "filter parent ffff: protocol all pref 1 fw.*handle 0xa classid 1:10.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "33af", + "name": "Add fw filter with flowid then classid (same arg, takes second)", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 11 prio 1 fw flowid 10 classid 4 action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 11 prio 1 protocol all fw", + "matchPattern": "filter parent ffff: protocol all pref 1 fw.*handle 0xb classid :4.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "8a8c", + "name": "Add fw filter with classid then flowid (same arg, takes second)", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 11 prio 1 fw classid 4 flowid 10 action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 11 prio 1 protocol all fw", + "matchPattern": "filter parent ffff: protocol all pref 1 fw.*handle 0xb classid :10.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "b50d", + "name": "Add fw filter with handle val/mask and flowid 10:1000", + "category": [ + 
"filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: prio 3 handle 10/0xff fw flowid 10:1000 action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 10 prio 3 protocol all fw", + "matchPattern": "filter parent ffff: protocol all pref 3 fw.*handle 0xa/0xff classid 10:1000.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "7207", + "name": "Add fw filter with protocol ip", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: protocol ip prio 1 handle 3 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 3 prio 1 protocol ip fw", + "matchPattern": "filter parent ffff: protocol ip pref 1 fw.*handle 0x3.*gact action pass.*index [0-9]+ ref [0-9]+ bind [0-9]+", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "306d", + "name": "Add fw filter with protocol ipv6", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: protocol ipv6 prio 2 handle 4 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 4 prio 2 protocol ipv6 fw", + "matchPattern": "filter parent ffff: protocol ipv6 pref 2 fw.*handle 0x4.*gact action pass.*index [0-9]+ ref [0-9]+ bind [0-9]+", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "9a78", + "name": "Add fw filter with protocol arp", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: protocol arp prio 5 handle 7 fw action drop", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 7 prio 5 protocol arp fw", + "matchPattern": "filter parent ffff: protocol arp pref 5 fw.*handle 0x7.*gact action drop.*index [0-9]+ ref [0-9]+ bind [0-9]+", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "1821", + "name": "Add fw filter with protocol 802_3", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: protocol 802_3 handle 1 prio 1 fw action ok", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol 802_3 fw", + "matchPattern": "filter parent ffff: protocol 802_3 pref 1 fw.*handle 0x1.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "2260", + "name": "Add fw filter with invalid protocol", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: protocol igmp handle 1 prio 1 fw action ok", + "expExitCode": "255", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 prio 1 protocol igmp fw", + "matchPattern": "filter parent ffff: protocol igmp pref 1 fw.*handle 0x1.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "09d7", + "name": "Add fw filters protocol 802_3 and ip with conflicting priorities", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter 
add dev $DEV1 parent ffff: protocol 802_3 prio 3 handle 7 fw action ok" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: protocol ip prio 3 handle 8 fw action ok", + "expExitCode": "2", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 8 prio 3 protocol ip fw", + "matchPattern": "filter parent ffff: protocol ip pref 3 fw.*handle 0x8", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "6973", + "name": "Add fw filters with same index, same action", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: prio 6 handle 2 fw action continue index 5" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: prio 8 handle 4 fw action continue index 5", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 4 prio 8 protocol all fw", + "matchPattern": "filter parent ffff: protocol all pref 8 fw.*handle 0x4.*gact action continue.*index 5 ref 2 bind 2", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "fc06", + "name": "Add fw filters with action police", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: prio 3 handle 4 fw action police rate 1kbit burst 10k index 5", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 4 prio 3 protocol all fw", + "matchPattern": "filter parent ffff: protocol all pref 3 fw.*handle 0x4.*police 0x5 rate 1Kbit burst 10Kb mtu 2Kb action reclassify overhead 0b.*ref 1 bind 1", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "aac7", + "name": "Add fw filters with action police linklayer atm", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress" + ], + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: prio 3 handle 4 fw action police rate 2mbit burst 200k linklayer atm index 8", + "expExitCode": "0", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 4 prio 3 protocol all fw", + "matchPattern": "filter parent ffff: protocol all pref 3 fw.*handle 0x4.*police 0x8 rate 2Mbit burst 200Kb mtu 2Kb action reclassify overhead 0b linklayer atm.*ref 1 bind 1", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "5339", + "name": "Del entire fw filter", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 5 prio 7 fw action pass", + "$TC filter add dev $DEV1 parent ffff: handle 3 prio 9 fw action pass" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff:", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "protocol all pref.*handle.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "0e99", + "name": "Del single fw filter x1", + "__comment__": "First of two tests to check that one filter is there and the other isn't", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 5 prio 7 fw action pass", + "$TC filter add dev $DEV1 parent ffff: handle 3 prio 9 fw action pass" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: handle 3 prio 9 fw action pass", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev 
$DEV1 parent ffff:", + "matchPattern": "protocol all pref 7.*handle 0x5.*gact action pass", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "f54c", + "name": "Del single fw filter x2", + "__comment__": "Second of two tests to check that one filter is there and the other isn't", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 5 prio 7 fw action pass", + "$TC filter add dev $DEV1 parent ffff: handle 3 prio 9 fw action pass" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: handle 3 prio 9 fw action pass", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "protocol all pref 9.*handle 0x3.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "ba94", + "name": "Del fw filter by prio", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 1 prio 4 fw action ok", + "$TC filter add dev $DEV1 parent ffff: handle 2 prio 4 fw action ok" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: prio 4", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "pref 4 fw.*gact action pass", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "4acb", + "name": "Del fw filter by chain", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 4 prio 2 chain 13 fw action pipe", + "$TC filter add dev $DEV1 parent ffff: handle 3 prio 5 chain 13 fw action pipe" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: chain 13", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "fw chain 13 handle.*gact action pipe", + "matchCount": "0", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "3424", + "name": "Del fw filter by action (invalid)", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 2 prio 4 fw action drop" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: fw action drop", + "expExitCode": "2", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 2 prio 4 protocol all fw", + "matchPattern": "handle 0x2.*gact action drop", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "da89", + "name": "Del fw filter by handle (invalid)", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 3 prio 4 fw action continue" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: handle 3 fw", + "expExitCode": "2", + "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 3 prio 4 protocol all fw", + "matchPattern": "handle 0x3.*gact action continue", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "4d95", + "name": "Del fw filter by protocol (invalid)", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 4 prio 2 protocol arp fw action pipe" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: protocol arp fw", + "expExitCode": "2", + "verifyCmd": "$TC filter get dev 
$DEV1 parent ffff: handle 4 prio 2 protocol arp fw", + "matchPattern": "filter parent ffff: protocol arp.*handle 0x4.*gact action pipe", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "4736", + "name": "Del fw filter by flowid (invalid)", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 4 prio 2 fw action pipe flowid 45" + ], + "cmdUnderTest": "$TC filter del dev $DEV1 parent ffff: fw flowid 45", + "expExitCode": "2", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "handle 0x4.*gact action pipe", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "3dcb", + "name": "Replace fw filter action", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 1 prio 2 fw action ok" + ], + "cmdUnderTest": "$TC filter replace dev $DEV1 parent ffff: handle 1 prio 2 fw action pipe", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "pref 2 fw.*handle 0x1.*gact action pipe", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "eb4d", + "name": "Replace fw filter classid", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 1 prio 2 fw action ok" + ], + "cmdUnderTest": "$TC filter replace dev $DEV1 parent ffff: handle 1 prio 2 fw action pipe classid 2", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "pref 2 fw.*handle 0x1 classid :2.*gact action pipe", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + }, + { + "id": "67ec", + "name": "Replace fw filter index", + "category": [ + "filter", + "fw" + ], + "setup": [ + "$TC qdisc add dev $DEV1 ingress", + "$TC filter add dev $DEV1 parent ffff: handle 1 prio 2 fw action ok index 3" + ], + "cmdUnderTest": "$TC filter replace dev $DEV1 parent ffff: handle 1 prio 2 fw action ok index 16", + "expExitCode": "0", + "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", + "matchPattern": "pref 2 fw.*handle 0x1.*gact action pass.*index 16", + "matchCount": "1", + "teardown": [ + "$TC qdisc del dev $DEV1 ingress" + ] + } +] diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json b/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json index 5fa02d86b35f..99a5ffca1088 100644 --- a/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json +++ b/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json @@ -12,8 +12,8 @@ "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: protocol ip prio 1 u32 match ip src 127.0.0.1/32 flowid 1:1 action ok", "expExitCode": "0", "verifyCmd": "$TC filter show dev $DEV1 parent ffff:", - "matchPattern": "match 7f000002/ffffffff at 12", - "matchCount": "0", + "matchPattern": "match 7f000001/ffffffff at 12", + "matchCount": "1", "teardown": [ "$TC qdisc del dev $DEV1 ingress" ] |
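All of the test files touched above follow the same tdc (tc-testing) test-case schema: each entry carries a four-character hex "id", a descriptive "name", a "category" list used for test selection, optional "setup"/"teardown" commands (a bare command string is expected to exit 0, while the list form names the command followed by the exit codes it may return — hence the trailing 0, 1, 255 on the flush commands above), the "cmdUnderTest" with its "expExitCode", and a "verifyCmd" whose output must match "matchPattern" exactly "matchCount" times. As a minimal sketch of that schema — the entry below is hypothetical and not part of this diff; its id, action, and pattern are illustrative only, patterned after the entries above:

    [
        {
            "id": "abcd",
            "name": "Example: add a gact pass action and verify it by index",
            "category": ["actions", "gact"],
            "setup": [
                ["$TC actions flush action gact", 0, 1, 255]
            ],
            "cmdUnderTest": "$TC actions add action pass index 8",
            "expExitCode": "0",
            "verifyCmd": "$TC actions get action gact index 8",
            "matchPattern": "action order [0-9]*: gact action pass.*index 8 ref",
            "matchCount": "1",
            "teardown": [
                "$TC actions flush action gact"
            ]
        }
    ]

Such a file would normally be exercised through the tdc.py runner in tools/testing/selftests/tc-testing, selecting tests by category (for example "tunnel_key" or "fw"); per the tc-testing README, the fw filter tests additionally need a test device supplied to the runner to substitute for $DEV1.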