author     Jakub Kicinski <kuba@kernel.org>   2023-04-13 16:11:22 -0700
committer  Jakub Kicinski <kuba@kernel.org>   2023-04-13 16:43:38 -0700
commit     c2865b1122595e69e7df52d01f7f02338b8babca
tree       e1bfb26b2d31ce0bf590cabafc8e3a0ff1bece4c
parent     800e68c44ffe71f9715f745b38fd1af6910b3773
parent     8c5c2a4898e3d6bad86e29d471e023c8a19ba799
Daniel Borkmann says:
====================
pull-request: bpf-next 2023-04-13
We've added 260 non-merge commits during the last 36 day(s) which contain
a total of 356 files changed, 21786 insertions(+), 11275 deletions(-).
The main changes are:
1) Rework BPF verifier log behavior and implement it as a rotating log
by default with the option to retain old-style fixed log behavior,
from Andrii Nakryiko.
2) Add support for using {FOU,GUE} encap with an ipip device operating
in collect_md mode and add a set of BPF kfuncs for controlling encap
params, from Christian Ehrig.
3) Allow BPF programs to detect at load time whether a particular kfunc
exists or not, and also add support for this in light skeleton,
from Alexei Starovoitov (example sketch after this list).
4) Optimize hashmap lookups when key size is multiple of 4,
from Anton Protopopov.
5) Enable RCU semantics for task BPF kptrs and allow referenced kptr
tasks to be stored in BPF maps, from David Vernet.
6) Add support for stashing local BPF kptr into a map value via
bpf_kptr_xchg(). This is useful e.g. for rbtree node creation
for new cgroups, from Dave Marchevsky (example sketch after this list).
7) Fix BTF handling of is_int_ptr to skip modifiers, in order to address
tracing issues where a program cannot be attached, from Feng Zhou.
8) Migrate a big portion of test_verifier unit tests over to
test_progs -a verifier_* via inline asm to ease {read,debug}ability,
from Eduard Zingerman (style sketch after this list).
9) Several updates to the instruction-set.rst documentation
which is subject to future IETF standardization
(https://lwn.net/Articles/926882/), from Dave Thaler.
10) Fix BPF verifier in the __reg_bound_offset's 64->32 tnum sub-register
known bits information propagation, from Daniel Borkmann.
11) Add skb bitfield compaction work related to BPF with the overall goal
to make more of the sk_buff bits optional, from Jakub Kicinski.
13) BPF selftest cleanups for build id extraction which stand on their
own, separate from the upcoming work integrating build ids into the
struct file object, from Jiri Olsa.
13) Add fixes and optimizations for xsk descriptor validation and several
selftest improvements for xsk sockets, from Kal Conley.
14) Add BPF links for struct_ops and enable switching implementations
of BPF TCP cong-ctls under a given name by replacing the backing
struct_ops map, from Kui-Feng Lee (user-space sketch after this list).
15) Remove a misleading BPF verifier env->bypass_spec_v1 check on variable
offset stack read as earlier Spectre checks cover this,
from Luis Gerhorst.
16) Fix issues in copy_from_user_nofault() for BPF and other tracers
to resemble copy_from_user_nmi() from a safety PoV, from Florian Lehner
and Alexei Starovoitov.
17) Add --json-summary option to test_progs in order for CI tooling to
ease parsing of test results, from Manu Bretelle.
18) Batch of improvements and refactoring to prep for upcoming
bpf_local_storage conversion to bpf_mem_cache_{alloc,free} allocator,
from Martin KaFai Lau.
19) Improve bpftool's visual program dump which produces the control
flow graph in a DOT format by adding C source inline annotations,
from Quentin Monnet.
20) Fix attaching fentry/fexit/fmod_ret/lsm to modules by extracting
the module name from BTF of the target and searching kallsyms of
the correct module, from Viktor Malik.
21) Improve BPF verifier handling of '<const> <cond> <non_const>'
to better detect whether branches, in particular jmp32 branches, are
taken, from Yonghong Song.
22) Allow BPF TCP cong-ctls to write to app_limited of struct tcp_sock.
A built-in cc or one from a kernel module is already able to write
to app_limited, from Yixin Shen.
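
The following sketches illustrate a few of the items above. They are
hedged examples written for this summary, not excerpts from the series.

For 3), a kfunc can be declared as a weak ksym and the program can branch
on the bpf_ksym_exists() macro that libbpf provides, so the presence check
is resolved at load time and the dead branch is pruned. The hook point and
the task kfuncs below are only illustrative:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  /* Weak declarations: the object still loads when the kfuncs are absent. */
  extern struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym __weak;
  extern void bpf_task_release(struct task_struct *p) __ksym __weak;

  char LICENSE[] SEC("license") = "GPL";

  SEC("tp_btf/task_newtask")
  int BPF_PROG(detect_kfunc, struct task_struct *task, u64 clone_flags)
  {
          struct task_struct *acquired;

          /* Resolved to a constant at load time, so the calls in the
           * unreachable branch are never verified on kernels lacking them.
           */
          if (!bpf_ksym_exists(bpf_task_acquire) ||
              !bpf_ksym_exists(bpf_task_release))
                  return 0;

          acquired = bpf_task_acquire(task);
          if (acquired)
                  bpf_task_release(acquired);
          return 0;
  }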
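
For 6), a minimal sketch of stashing a program-allocated object into a map
value with bpf_kptr_xchg(), assuming the bpf_obj_new()/bpf_obj_drop()
wrappers from the selftests' bpf_experimental.h; struct, map, and hook
names are made up for illustration:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>
  #include "bpf_experimental.h"

  struct node_data {
          long key;
          long data;
  };

  struct map_value {
          struct node_data __kptr *node;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __type(key, int);
          __type(value, struct map_value);
          __uint(max_entries, 1);
  } stash SEC(".maps");

  char LICENSE[] SEC("license") = "GPL";

  SEC("tp_btf/cgroup_mkdir")
  int BPF_PROG(stash_local_kptr, struct cgroup *cgrp, const char *path)
  {
          struct node_data *n, *old;
          struct map_value *v;
          int idx = 0;

          n = bpf_obj_new(typeof(*n));
          if (!n)
                  return 0;

          v = bpf_map_lookup_elem(&stash, &idx);
          if (!v) {
                  bpf_obj_drop(n);
                  return 0;
          }

          /* Ownership of n moves into the map; whatever was stashed
           * before comes back and must be released.
           */
          old = bpf_kptr_xchg(&v->node, n);
          if (old)
                  bpf_obj_drop(old);
          return 0;
  }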
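
For 8), the migrated tests are plain C objects whose bodies are BPF inline
assembly plus expectation annotations; a rough sketch of the style,
assuming the __description/__success/__retval/__naked/__clobber_all
annotations provided by the selftests' bpf_misc.h:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include "bpf_misc.h"

  SEC("socket")
  __description("jmp32: branch on known value is not taken")
  __success __retval(0)
  __naked void jmp32_known_value(void)
  {
          asm volatile ("                         \
          r0 = 0;                                 \
          w1 = 1;                                 \
          if w1 != 1 goto l0_%=;                  \
          exit;                                   \
  l0_%=:  r0 = 1;                                 \
          exit;                                   \
  "       ::: __clobber_all);
  }

  char _license[] SEC("license") = "GPL";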
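
For 14), a user-space sketch of swapping TCP congestion-control
implementations by replacing the struct_ops map behind a BPF link. It
assumes the struct_ops maps were created with BPF_F_LINK, define the same
tcp_congestion_ops name, and that libbpf exposes the new
bpf_link__update_map() API from this series; the map handles would
normally come from a generated skeleton:

  #include <bpf/libbpf.h>

  static struct bpf_link *switch_cong_ctl(struct bpf_map *ca_v1,
                                          struct bpf_map *ca_v2)
  {
          struct bpf_link *link;

          /* Register and attach the first implementation. */
          link = bpf_map__attach_struct_ops(ca_v1);
          if (!link)
                  return NULL;

          /* Switch to the second implementation under the same cc name,
           * without detaching the link.
           */
          if (bpf_link__update_map(link, ca_v2)) {
                  bpf_link__destroy(link);
                  return NULL;
          }
          return link;
  }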
Conflicts:
Documentation/bpf/bpf_devel_QA.rst
b7abcd9c656b ("bpf, doc: Link to submitting-patches.rst for general patch submission info")
0f10f647f455 ("bpf, docs: Use internal linking for link to netdev subsystem doc")
https://lore.kernel.org/all/20230307095812.236eb1be@canb.auug.org.au/
include/net/ip_tunnels.h
bc9d003dc48c3 ("ip_tunnel: Preserve pointer const in ip_tunnel_info_opts")
ac931d4cdec3d ("ipip,ip_tunnel,sit: Add FOU support for externally controlled ipip devices")
https://lore.kernel.org/all/20230413161235.4093777-1-broonie@kernel.org/
net/bpf/test_run.c
e5995bc7e2ba ("bpf, test_run: fix crashes due to XDP frame overwriting/corruption")
294635a8165a ("bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES")
https://lore.kernel.org/all/20230320102619.05b80a98@canb.auug.org.au/
====================
Link: https://lore.kernel.org/r/20230413191525.7295-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
356 files changed, 21786 insertions, 11274 deletions
diff --git a/Documentation/bpf/bpf_devel_QA.rst b/Documentation/bpf/bpf_devel_QA.rst index 7403f81c995c..609b71f5747d 100644 --- a/Documentation/bpf/bpf_devel_QA.rst +++ b/Documentation/bpf/bpf_devel_QA.rst @@ -128,7 +128,8 @@ into the bpf-next tree will make their way into net-next tree. net and net-next are both run by David S. Miller. From there, they will go into the kernel mainline tree run by Linus Torvalds. To read up on the process of net and net-next being merged into the mainline tree, see -the `netdev-FAQ`_. +the documentation on netdev subsystem at +Documentation/process/maintainer-netdev.rst. @@ -147,7 +148,8 @@ request):: Q: How do I indicate which tree (bpf vs. bpf-next) my patch should be applied to? --------------------------------------------------------------------------------- -A: The process is the very same as described in the `netdev-FAQ`_, +A: The process is the very same as described in the netdev subsystem +documentation at Documentation/process/maintainer-netdev.rst, so please read up on it. The subject line must indicate whether the patch is a fix or rather "next-like" content in order to let the maintainers know whether it is targeted at bpf or bpf-next. @@ -206,8 +208,9 @@ ii) run extensive BPF test suite and Once the BPF pull request was accepted by David S. Miller, then the patches end up in net or net-next tree, respectively, and make their way from there further into mainline. Again, see the -`netdev-FAQ`_ for additional information e.g. on how often they are -merged to mainline. +documentation for netdev subsystem at +Documentation/process/maintainer-netdev.rst for additional information +e.g. on how often they are merged to mainline. Q: How long do I need to wait for feedback on my BPF patches? ------------------------------------------------------------- @@ -230,7 +233,8 @@ Q: Are patches applied to bpf-next when the merge window is open? ----------------------------------------------------------------- A: For the time when the merge window is open, bpf-next will not be processed. This is roughly analogous to net-next patch processing, -so feel free to read up on the `netdev-FAQ`_ about further details. +so feel free to read up on the netdev docs at +Documentation/process/maintainer-netdev.rst about further details. During those two weeks of merge window, we might ask you to resend your patch series once bpf-next is open again. Once Linus released @@ -394,7 +398,8 @@ netdev kernel mailing list in Cc and ask for the fix to be queued up: netdev@vger.kernel.org The process in general is the same as on netdev itself, see also the -`netdev-FAQ`_. +the documentation on networking subsystem at +Documentation/process/maintainer-netdev.rst. Q: Do you also backport to kernels not currently maintained as stable? ---------------------------------------------------------------------- @@ -410,7 +415,7 @@ Q: The BPF patch I am about to submit needs to go to stable as well What should I do? A: The same rules apply as with netdev patch submissions in general, see -the `netdev-FAQ`_. +the netdev docs at Documentation/process/maintainer-netdev.rst. Never add "``Cc: stable@vger.kernel.org``" to the patch description, but ask the BPF maintainers to queue the patches instead. This can be done @@ -684,7 +689,6 @@ when: .. Links -.. _netdev-FAQ: https://www.kernel.org/doc/html/latest/process/maintainer-netdev.html .. 
_selftests: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/testing/selftests/bpf/ diff --git a/Documentation/bpf/clang-notes.rst b/Documentation/bpf/clang-notes.rst index 528feddf2db9..2c872a1ee08e 100644 --- a/Documentation/bpf/clang-notes.rst +++ b/Documentation/bpf/clang-notes.rst @@ -20,6 +20,12 @@ Arithmetic instructions For CPU versions prior to 3, Clang v7.0 and later can enable ``BPF_ALU`` support with ``-Xclang -target-feature -Xclang +alu32``. In CPU version 3, support is automatically included. +Jump instructions +================= + +If ``-O0`` is used, Clang will generate the ``BPF_CALL | BPF_X | BPF_JMP`` (0x8d) +instruction, which is not supported by the Linux kernel verifier. + Atomic operations ================= diff --git a/Documentation/bpf/cpumasks.rst b/Documentation/bpf/cpumasks.rst index 75344cd230e5..41efd8874eeb 100644 --- a/Documentation/bpf/cpumasks.rst +++ b/Documentation/bpf/cpumasks.rst @@ -117,12 +117,7 @@ For example: As mentioned and illustrated above, these ``struct bpf_cpumask *`` objects can also be stored in a map and used as kptrs. If a ``struct bpf_cpumask *`` is in a map, the reference can be removed from the map with bpf_kptr_xchg(), or -opportunistically acquired with bpf_cpumask_kptr_get(): - -.. kernel-doc:: kernel/bpf/cpumask.c - :identifiers: bpf_cpumask_kptr_get - -Here is an example of a ``struct bpf_cpumask *`` being retrieved from a map: +opportunistically acquired using RCU: .. code-block:: c @@ -144,7 +139,7 @@ Here is an example of a ``struct bpf_cpumask *`` being retrieved from a map: /** * A simple example tracepoint program showing how a * struct bpf_cpumask * kptr that is stored in a map can - * be acquired using the bpf_cpumask_kptr_get() kfunc. + * be passed to kfuncs using RCU protection. */ SEC("tp_btf/cgroup_mkdir") int BPF_PROG(cgrp_ancestor_example, struct cgroup *cgrp, const char *path) @@ -158,26 +153,21 @@ Here is an example of a ``struct bpf_cpumask *`` being retrieved from a map: if (!v) return -ENOENT; + bpf_rcu_read_lock(); /* Acquire a reference to the bpf_cpumask * kptr that's already stored in the map. */ - kptr = bpf_cpumask_kptr_get(&v->cpumask); - if (!kptr) + kptr = v->cpumask; + if (!kptr) { /* If no bpf_cpumask was present in the map, it's because * we're racing with another CPU that removed it with * bpf_kptr_xchg() between the bpf_map_lookup_elem() - * above, and our call to bpf_cpumask_kptr_get(). - * bpf_cpumask_kptr_get() internally safely handles this - * race, and will return NULL if the cpumask is no longer - * present in the map by the time we invoke the kfunc. + * above, and our load of the pointer from the map. */ + bpf_rcu_read_unlock(); return -EBUSY; + } - /* Free the reference we just took above. Note that the - * original struct bpf_cpumask * kptr is still in the map. It will - * be freed either at a later time if another context deletes - * it from the map, or automatically by the BPF subsystem if - * it's still present when the map is destroyed. - */ - bpf_cpumask_release(kptr); + bpf_cpumask_setall(kptr); + bpf_rcu_read_unlock(); return 0; } diff --git a/Documentation/bpf/instruction-set.rst b/Documentation/bpf/instruction-set.rst index db8789e6969e..492980ece1ab 100644 --- a/Documentation/bpf/instruction-set.rst +++ b/Documentation/bpf/instruction-set.rst @@ -11,7 +11,8 @@ Documentation conventions ========================= For brevity, this document uses the type notion "u64", "u32", etc. 
-to mean an unsigned integer whose width is the specified number of bits. +to mean an unsigned integer whose width is the specified number of bits, +and "s32", etc. to mean a signed integer of the specified number of bits. Registers and calling convention ================================ @@ -242,28 +243,58 @@ Jump instructions otherwise identical operations. The 'code' field encodes the operation as below: -======== ===== ========================= ============ -code value description notes -======== ===== ========================= ============ -BPF_JA 0x00 PC += off BPF_JMP only -BPF_JEQ 0x10 PC += off if dst == src -BPF_JGT 0x20 PC += off if dst > src unsigned -BPF_JGE 0x30 PC += off if dst >= src unsigned -BPF_JSET 0x40 PC += off if dst & src -BPF_JNE 0x50 PC += off if dst != src -BPF_JSGT 0x60 PC += off if dst > src signed -BPF_JSGE 0x70 PC += off if dst >= src signed -BPF_CALL 0x80 function call -BPF_EXIT 0x90 function / program return BPF_JMP only -BPF_JLT 0xa0 PC += off if dst < src unsigned -BPF_JLE 0xb0 PC += off if dst <= src unsigned -BPF_JSLT 0xc0 PC += off if dst < src signed -BPF_JSLE 0xd0 PC += off if dst <= src signed -======== ===== ========================= ============ +======== ===== === =========================================== ========================================= +code value src description notes +======== ===== === =========================================== ========================================= +BPF_JA 0x0 0x0 PC += offset BPF_JMP only +BPF_JEQ 0x1 any PC += offset if dst == src +BPF_JGT 0x2 any PC += offset if dst > src unsigned +BPF_JGE 0x3 any PC += offset if dst >= src unsigned +BPF_JSET 0x4 any PC += offset if dst & src +BPF_JNE 0x5 any PC += offset if dst != src +BPF_JSGT 0x6 any PC += offset if dst > src signed +BPF_JSGE 0x7 any PC += offset if dst >= src signed +BPF_CALL 0x8 0x0 call helper function by address see `Helper functions`_ +BPF_CALL 0x8 0x1 call PC += offset see `Program-local functions`_ +BPF_CALL 0x8 0x2 call helper function by BTF ID see `Helper functions`_ +BPF_EXIT 0x9 0x0 return BPF_JMP only +BPF_JLT 0xa any PC += offset if dst < src unsigned +BPF_JLE 0xb any PC += offset if dst <= src unsigned +BPF_JSLT 0xc any PC += offset if dst < src signed +BPF_JSLE 0xd any PC += offset if dst <= src signed +======== ===== === =========================================== ========================================= The eBPF program needs to store the return value into register R0 before doing a -BPF_EXIT. +``BPF_EXIT``. +Example: + +``BPF_JSGE | BPF_X | BPF_JMP32`` (0x7e) means:: + + if (s32)dst s>= (s32)src goto +offset + +where 's>=' indicates a signed '>=' comparison. + +Helper functions +~~~~~~~~~~~~~~~~ + +Helper functions are a concept whereby BPF programs can call into a +set of function calls exposed by the underlying platform. + +Historically, each helper function was identified by an address +encoded in the imm field. The available helper functions may differ +for each program type, but address values are unique across all program types. + +Platforms that support the BPF Type Format (BTF) support identifying +a helper function by a BTF ID encoded in the imm field, where the BTF ID +identifies the helper name and type. + +Program-local functions +~~~~~~~~~~~~~~~~~~~~~~~ +Program-local functions are functions exposed by the same BPF program as the +caller, and are referenced by offset from the call instruction, similar to +``BPF_JA``. A ``BPF_EXIT`` within the program-local function will return to +the caller. 
Load and store instructions =========================== @@ -385,14 +416,56 @@ and loaded back to ``R0``. ----------------------------- Instructions with the ``BPF_IMM`` 'mode' modifier use the wide instruction -encoding for an extra imm64 value. - -There is currently only one such instruction. - -``BPF_LD | BPF_DW | BPF_IMM`` means:: - - dst = imm64 - +encoding defined in `Instruction encoding`_, and use the 'src' field of the +basic instruction to hold an opcode subtype. + +The following table defines a set of ``BPF_IMM | BPF_DW | BPF_LD`` instructions +with opcode subtypes in the 'src' field, using new terms such as "map" +defined further below: + +========================= ====== === ========================================= =========== ============== +opcode construction opcode src pseudocode imm type dst type +========================= ====== === ========================================= =========== ============== +BPF_IMM | BPF_DW | BPF_LD 0x18 0x0 dst = imm64 integer integer +BPF_IMM | BPF_DW | BPF_LD 0x18 0x1 dst = map_by_fd(imm) map fd map +BPF_IMM | BPF_DW | BPF_LD 0x18 0x2 dst = map_val(map_by_fd(imm)) + next_imm map fd data pointer +BPF_IMM | BPF_DW | BPF_LD 0x18 0x3 dst = var_addr(imm) variable id data pointer +BPF_IMM | BPF_DW | BPF_LD 0x18 0x4 dst = code_addr(imm) integer code pointer +BPF_IMM | BPF_DW | BPF_LD 0x18 0x5 dst = map_by_idx(imm) map index map +BPF_IMM | BPF_DW | BPF_LD 0x18 0x6 dst = map_val(map_by_idx(imm)) + next_imm map index data pointer +========================= ====== === ========================================= =========== ============== + +where + +* map_by_fd(imm) means to convert a 32-bit file descriptor into an address of a map (see `Maps`_) +* map_by_idx(imm) means to convert a 32-bit index into an address of a map +* map_val(map) gets the address of the first value in a given map +* var_addr(imm) gets the address of a platform variable (see `Platform Variables`_) with a given id +* code_addr(imm) gets the address of the instruction at a specified relative offset in number of (64-bit) instructions +* the 'imm type' can be used by disassemblers for display +* the 'dst type' can be used for verification and JIT compilation purposes + +Maps +~~~~ + +Maps are shared memory regions accessible by eBPF programs on some platforms. +A map can have various semantics as defined in a separate document, and may or +may not have a single contiguous memory region, but the 'map_val(map)' is +currently only defined for maps that do have a single contiguous memory region. + +Each map can have a file descriptor (fd) if supported by the platform, where +'map_by_fd(imm)' means to get the map with the specified file descriptor. Each +BPF program can also be defined to use a set of maps associated with the +program at load time, and 'map_by_idx(imm)' means to get the map with the given +index in the set associated with the BPF program containing the instruction. + +Platform Variables +~~~~~~~~~~~~~~~~~~ + +Platform variables are memory regions, identified by integer ids, exposed by +the runtime and accessible by BPF programs on some platforms. The +'var_addr(imm)' operation means to get the address of the memory region +identified by the given id. Legacy BPF Packet access instructions ------------------------------------- diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst index 69eccf6f98ef..3b42cfe12437 100644 --- a/Documentation/bpf/kfuncs.rst +++ b/Documentation/bpf/kfuncs.rst @@ -179,9 +179,10 @@ both are orthogonal to each other. 
--------------------- The KF_RELEASE flag is used to indicate that the kfunc releases the pointer -passed in to it. There can be only one referenced pointer that can be passed in. -All copies of the pointer being released are invalidated as a result of invoking -kfunc with this flag. +passed in to it. There can be only one referenced pointer that can be passed +in. All copies of the pointer being released are invalidated as a result of +invoking kfunc with this flag. KF_RELEASE kfuncs automatically receive the +protection afforded by the KF_TRUSTED_ARGS flag described below. 2.4.4 KF_KPTR_GET flag ---------------------- @@ -470,13 +471,50 @@ struct_ops callback arg. For example: struct task_struct *acquired; acquired = bpf_task_acquire(task); + if (acquired) + /* + * In a typical program you'd do something like store + * the task in a map, and the map will automatically + * release it later. Here, we release it manually. + */ + bpf_task_release(acquired); + return 0; + } + + +References acquired on ``struct task_struct *`` objects are RCU protected. +Therefore, when in an RCU read region, you can obtain a pointer to a task +embedded in a map value without having to acquire a reference: + +.. code-block:: c + + #define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8))) + private(TASK) static struct task_struct *global; + + /** + * A trivial example showing how to access a task stored + * in a map using RCU. + */ + SEC("tp_btf/task_newtask") + int BPF_PROG(task_rcu_read_example, struct task_struct *task, u64 clone_flags) + { + struct task_struct *local_copy; + + bpf_rcu_read_lock(); + local_copy = global; + if (local_copy) + /* + * We could also pass local_copy to kfuncs or helper functions here, + * as we're guaranteed that local_copy will be valid until we exit + * the RCU read region below. + */ + bpf_printk("Global task %s is valid", local_copy->comm); + else + bpf_printk("No global task found"); + bpf_rcu_read_unlock(); + + /* At this point we can no longer reference local_copy. */ - /* - * In a typical program you'd do something like store - * the task in a map, and the map will automatically - * release it later. Here, we release it manually. - */ - bpf_task_release(acquired); return 0; } @@ -534,74 +572,6 @@ bpf_task_release() respectively, so we won't provide examples for them. ---- -You may also acquire a reference to a ``struct cgroup`` kptr that's already -stored in a map using bpf_cgroup_kptr_get(): - -.. kernel-doc:: kernel/bpf/helpers.c - :identifiers: bpf_cgroup_kptr_get - -Here's an example of how it can be used: - -.. code-block:: c - - /* struct containing the struct task_struct kptr which is actually stored in the map. */ - struct __cgroups_kfunc_map_value { - struct cgroup __kptr * cgroup; - }; - - /* The map containing struct __cgroups_kfunc_map_value entries. */ - struct { - __uint(type, BPF_MAP_TYPE_HASH); - __type(key, int); - __type(value, struct __cgroups_kfunc_map_value); - __uint(max_entries, 1); - } __cgroups_kfunc_map SEC(".maps"); - - /* ... */ - - /** - * A simple example tracepoint program showing how a - * struct cgroup kptr that is stored in a map can - * be acquired using the bpf_cgroup_kptr_get() kfunc. - */ - SEC("tp_btf/cgroup_mkdir") - int BPF_PROG(cgroup_kptr_get_example, struct cgroup *cgrp, const char *path) - { - struct cgroup *kptr; - struct __cgroups_kfunc_map_value *v; - s32 id = cgrp->self.id; - - /* Assume a cgroup kptr was previously stored in the map. 
*/ - v = bpf_map_lookup_elem(&__cgroups_kfunc_map, &id); - if (!v) - return -ENOENT; - - /* Acquire a reference to the cgroup kptr that's already stored in the map. */ - kptr = bpf_cgroup_kptr_get(&v->cgroup); - if (!kptr) - /* If no cgroup was present in the map, it's because - * we're racing with another CPU that removed it with - * bpf_kptr_xchg() between the bpf_map_lookup_elem() - * above, and our call to bpf_cgroup_kptr_get(). - * bpf_cgroup_kptr_get() internally safely handles this - * race, and will return NULL if the task is no longer - * present in the map by the time we invoke the kfunc. - */ - return -EBUSY; - - /* Free the reference we just took above. Note that the - * original struct cgroup kptr is still in the map. It will - * be freed either at a later time if another context deletes - * it from the map, or automatically by the BPF subsystem if - * it's still present when the map is destroyed. - */ - bpf_cgroup_release(kptr); - - return 0; - } - ----- - Other kfuncs available for interacting with ``struct cgroup *`` objects are bpf_cgroup_ancestor() and bpf_cgroup_from_id(), allowing callers to access the ancestor of a cgroup and find a cgroup by its ID, respectively. Both diff --git a/Documentation/bpf/libbpf/index.rst b/Documentation/bpf/libbpf/index.rst index f9b3b252e28f..7545a2049692 100644 --- a/Documentation/bpf/libbpf/index.rst +++ b/Documentation/bpf/libbpf/index.rst @@ -2,23 +2,32 @@ .. _libbpf: +====== libbpf ====== +If you are looking to develop BPF applications using the libbpf library, this +directory contains important documentation that you should read. + +To get started, it is recommended to begin with the :doc:`libbpf Overview +<libbpf_overview>` document, which provides a high-level understanding of the +libbpf APIs and their usage. This will give you a solid foundation to start +exploring and utilizing the various features of libbpf to develop your BPF +applications. + .. toctree:: :maxdepth: 1 + libbpf_overview API Documentation <https://libbpf.readthedocs.io/en/latest/api.html> program_types libbpf_naming_convention libbpf_build -This is documentation for libbpf, a userspace library for loading and -interacting with bpf programs. -All general BPF questions, including kernel functionality, libbpf APIs and -their application, should be sent to bpf@vger.kernel.org mailing list. -You can `subscribe <http://vger.kernel.org/vger-lists.html#bpf>`_ to the -mailing list search its `archive <https://lore.kernel.org/bpf/>`_. -Please search the archive before asking new questions. It very well might -be that this was already addressed or answered before. +All general BPF questions, including kernel functionality, libbpf APIs and their +application, should be sent to bpf@vger.kernel.org mailing list. You can +`subscribe <http://vger.kernel.org/vger-lists.html#bpf>`_ to the mailing list +search its `archive <https://lore.kernel.org/bpf/>`_. Please search the archive +before asking new questions. It may be that this was already addressed or +answered before. diff --git a/Documentation/bpf/libbpf/libbpf_overview.rst b/Documentation/bpf/libbpf/libbpf_overview.rst new file mode 100644 index 000000000000..f36a2d4ffea2 --- /dev/null +++ b/Documentation/bpf/libbpf/libbpf_overview.rst @@ -0,0 +1,228 @@ +.. SPDX-License-Identifier: GPL-2.0 + +=============== +libbpf Overview +=============== + +libbpf is a C-based library containing a BPF loader that takes compiled BPF +object files and prepares and loads them into the Linux kernel. 
libbpf takes the +heavy lifting of loading, verifying, and attaching BPF programs to various +kernel hooks, allowing BPF application developers to focus only on BPF program +correctness and performance. + +The following are the high-level features supported by libbpf: + +* Provides high-level and low-level APIs for user space programs to interact + with BPF programs. The low-level APIs wrap all the bpf system call + functionality, which is useful when users need more fine-grained control + over the interactions between user space and BPF programs. +* Provides overall support for the BPF object skeleton generated by bpftool. + The skeleton file simplifies the process for the user space programs to access + global variables and work with BPF programs. +* Provides BPF-side APIS, including BPF helper definitions, BPF maps support, + and tracing helpers, allowing developers to simplify BPF code writing. +* Supports BPF CO-RE mechanism, enabling BPF developers to write portable + BPF programs that can be compiled once and run across different kernel + versions. + +This document will delve into the above concepts in detail, providing a deeper +understanding of the capabilities and advantages of libbpf and how it can help +you develop BPF applications efficiently. + +BPF App Lifecycle and libbpf APIs +================================== + +A BPF application consists of one or more BPF programs (either cooperating or +completely independent), BPF maps, and global variables. The global +variables are shared between all BPF programs, which allows them to cooperate on +a common set of data. libbpf provides APIs that user space programs can use to +manipulate the BPF programs by triggering different phases of a BPF application +lifecycle. + +The following section provides a brief overview of each phase in the BPF life +cycle: + +* **Open phase**: In this phase, libbpf parses the BPF + object file and discovers BPF maps, BPF programs, and global variables. After + a BPF app is opened, user space apps can make additional adjustments + (setting BPF program types, if necessary; pre-setting initial values for + global variables, etc.) before all the entities are created and loaded. + +* **Load phase**: In the load phase, libbpf creates BPF + maps, resolves various relocations, and verifies and loads BPF programs into + the kernel. At this point, libbpf validates all the parts of a BPF application + and loads the BPF program into the kernel, but no BPF program has yet been + executed. After the load phase, it’s possible to set up the initial BPF map + state without racing with the BPF program code execution. + +* **Attachment phase**: In this phase, libbpf + attaches BPF programs to various BPF hook points (e.g., tracepoints, kprobes, + cgroup hooks, network packet processing pipeline, etc.). During this + phase, BPF programs perform useful work such as processing + packets, or updating BPF maps and global variables that can be read from user + space. + +* **Tear down phase**: In the tear down phase, + libbpf detaches BPF programs and unloads them from the kernel. BPF maps are + destroyed, and all the resources used by the BPF app are freed. + +BPF Object Skeleton File +======================== + +BPF skeleton is an alternative interface to libbpf APIs for working with BPF +objects. Skeleton code abstract away generic libbpf APIs to significantly +simplify code for manipulating BPF programs from user space. 
Skeleton code +includes a bytecode representation of the BPF object file, simplifying the +process of distributing your BPF code. With BPF bytecode embedded, there are no +extra files to deploy along with your application binary. + +You can generate the skeleton header file ``(.skel.h)`` for a specific object +file by passing the BPF object to the bpftool. The generated BPF skeleton +provides the following custom functions that correspond to the BPF lifecycle, +each of them prefixed with the specific object name: + +* ``<name>__open()`` – creates and opens BPF application (``<name>`` stands for + the specific bpf object name) +* ``<name>__load()`` – instantiates, loads,and verifies BPF application parts +* ``<name>__attach()`` – attaches all auto-attachable BPF programs (it’s + optional, you can have more control by using libbpf APIs directly) +* ``<name>__destroy()`` – detaches all BPF programs and + frees up all used resources + +Using the skeleton code is the recommended way to work with bpf programs. Keep +in mind, BPF skeleton provides access to the underlying BPF object, so whatever +was possible to do with generic libbpf APIs is still possible even when the BPF +skeleton is used. It's an additive convenience feature, with no syscalls, and no +cumbersome code. + +Other Advantages of Using Skeleton File +--------------------------------------- + +* BPF skeleton provides an interface for user space programs to work with BPF + global variables. The skeleton code memory maps global variables as a struct + into user space. The struct interface allows user space programs to initialize + BPF programs before the BPF load phase and fetch and update data from user + space afterward. + +* The ``skel.h`` file reflects the object file structure by listing out the + available maps, programs, etc. BPF skeleton provides direct access to all the + BPF maps and BPF programs as struct fields. This eliminates the need for + string-based lookups with ``bpf_object_find_map_by_name()`` and + ``bpf_object_find_program_by_name()`` APIs, reducing errors due to BPF source + code and user-space code getting out of sync. + +* The embedded bytecode representation of the object file ensures that the + skeleton and the BPF object file are always in sync. + +BPF Helpers +=========== + +libbpf provides BPF-side APIs that BPF programs can use to interact with the +system. The BPF helpers definition allows developers to use them in BPF code as +any other plain C function. For example, there are helper functions to print +debugging messages, get the time since the system was booted, interact with BPF +maps, manipulate network packets, etc. + +For a complete description of what the helpers do, the arguments they take, and +the return value, see the `bpf-helpers +<https://man7.org/linux/man-pages/man7/bpf-helpers.7.html>`_ man page. + +BPF CO-RE (Compile Once – Run Everywhere) +========================================= + +BPF programs work in the kernel space and have access to kernel memory and data +structures. One limitation that BPF applications come across is the lack of +portability across different kernel versions and configurations. `BCC +<https://github.com/iovisor/bcc/>`_ is one of the solutions for BPF +portability. However, it comes with runtime overhead and a large binary size +from embedding the compiler with the application. + +libbpf steps up the BPF program portability by supporting the BPF CO-RE concept. 
+BPF CO-RE brings together BTF type information, libbpf, and the compiler to +produce a single executable binary that you can run on multiple kernel versions +and configurations. + +To make BPF programs portable libbpf relies on the BTF type information of the +running kernel. Kernel also exposes this self-describing authoritative BTF +information through ``sysfs`` at ``/sys/kernel/btf/vmlinux``. + +You can generate the BTF information for the running kernel with the following +command: + +:: + + $ bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h + +The command generates a ``vmlinux.h`` header file with all kernel types +(:doc:`BTF types <../btf>`) that the running kernel uses. Including +``vmlinux.h`` in your BPF program eliminates dependency on system-wide kernel +headers. + +libbpf enables portability of BPF programs by looking at the BPF program’s +recorded BTF type and relocation information and matching them to BTF +information (vmlinux) provided by the running kernel. libbpf then resolves and +matches all the types and fields, and updates necessary offsets and other +relocatable data to ensure that BPF program’s logic functions correctly for a +specific kernel on the host. BPF CO-RE concept thus eliminates overhead +associated with BPF development and allows developers to write portable BPF +applications without modifications and runtime source code compilation on the +target machine. + +The following code snippet shows how to read the parent field of a kernel +``task_struct`` using BPF CO-RE and libbf. The basic helper to read a field in a +CO-RE relocatable manner is ``bpf_core_read(dst, sz, src)``, which will read +``sz`` bytes from the field referenced by ``src`` into the memory pointed to by +``dst``. + +.. code-block:: C + :emphasize-lines: 6 + + //... + struct task_struct *task = (void *)bpf_get_current_task(); + struct task_struct *parent_task; + int err; + + err = bpf_core_read(&parent_task, sizeof(void *), &task->parent); + if (err) { + /* handle error */ + } + + /* parent_task contains the value of task->parent pointer */ + +In the code snippet, we first get a pointer to the current ``task_struct`` using +``bpf_get_current_task()``. We then use ``bpf_core_read()`` to read the parent +field of task struct into the ``parent_task`` variable. ``bpf_core_read()`` is +just like ``bpf_probe_read_kernel()`` BPF helper, except it records information +about the field that should be relocated on the target kernel. i.e, if the +``parent`` field gets shifted to a different offset within +``struct task_struct`` due to some new field added in front of it, libbpf will +automatically adjust the actual offset to the proper value. + +Getting Started with libbpf +=========================== + +Check out the `libbpf-bootstrap <https://github.com/libbpf/libbpf-bootstrap>`_ +repository with simple examples of using libbpf to build various BPF +applications. + +See also `libbpf API documentation +<https://libbpf.readthedocs.io/en/latest/api.html>`_. + +libbpf and Rust +=============== + +If you are building BPF applications in Rust, it is recommended to use the +`Libbpf-rs <https://github.com/libbpf/libbpf-rs>`_ library instead of bindgen +bindings directly to libbpf. Libbpf-rs wraps libbpf functionality in +Rust-idiomatic interfaces and provides libbpf-cargo plugin to handle BPF code +compilation and skeleton generation. Using Libbpf-rs will make building user +space part of the BPF application easier. Note that the BPF program themselves +must still be written in plain C. 
+ +Additional Documentation +======================== + +* `Program types and ELF Sections <https://libbpf.readthedocs.io/en/latest/program_types.html>`_ +* `API naming convention <https://libbpf.readthedocs.io/en/latest/libbpf_naming_convention.html>`_ +* `Building libbpf <https://libbpf.readthedocs.io/en/latest/libbpf_build.html>`_ +* `API documentation Convention <https://libbpf.readthedocs.io/en/latest/libbpf_naming_convention.html#api-documentation-convention>`_ diff --git a/Documentation/bpf/linux-notes.rst b/Documentation/bpf/linux-notes.rst index 956b0c86699d..508d009d3bed 100644 --- a/Documentation/bpf/linux-notes.rst +++ b/Documentation/bpf/linux-notes.rst @@ -12,6 +12,36 @@ Byte swap instructions ``BPF_FROM_LE`` and ``BPF_FROM_BE`` exist as aliases for ``BPF_TO_LE`` and ``BPF_TO_BE`` respectively. +Jump instructions +================= + +``BPF_CALL | BPF_X | BPF_JMP`` (0x8d), where the helper function +integer would be read from a specified register, is not currently supported +by the verifier. Any programs with this instruction will fail to load +until such support is added. + +Maps +==== + +Linux only supports the 'map_val(map)' operation on array maps with a single element. + +Linux uses an fd_array to store maps associated with a BPF program. Thus, +map_by_idx(imm) uses the fd at that index in the array. + +Variables +========= + +The following 64-bit immediate instruction specifies that a variable address, +which corresponds to some integer stored in the 'imm' field, should be loaded: + +========================= ====== === ========================================= =========== ============== +opcode construction opcode src pseudocode imm type dst type +========================= ====== === ========================================= =========== ============== +BPF_IMM | BPF_DW | BPF_LD 0x18 0x3 dst = var_addr(imm) variable id data pointer +========================= ====== === ========================================= =========== ============== + +On Linux, this integer is a BTF ID. 
+ Legacy BPF Packet access instructions ===================================== diff --git a/drivers/hid/bpf/hid_bpf_dispatch.c b/drivers/hid/bpf/hid_bpf_dispatch.c index 8a034a555d4c..d9ef45fcaeab 100644 --- a/drivers/hid/bpf/hid_bpf_dispatch.c +++ b/drivers/hid/bpf/hid_bpf_dispatch.c @@ -342,9 +342,6 @@ hid_bpf_release_context(struct hid_bpf_ctx *ctx) { struct hid_bpf_ctx_kern *ctx_kern; - if (!ctx) - return; - ctx_kern = container_of(ctx, struct hid_bpf_ctx_kern, ctx); kfree(ctx_kern); diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 6792a7940e1e..2c6095bd7d69 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -96,11 +96,11 @@ struct bpf_map_ops { /* funcs callable from userspace and from eBPF programs */ void *(*map_lookup_elem)(struct bpf_map *map, void *key); - int (*map_update_elem)(struct bpf_map *map, void *key, void *value, u64 flags); - int (*map_delete_elem)(struct bpf_map *map, void *key); - int (*map_push_elem)(struct bpf_map *map, void *value, u64 flags); - int (*map_pop_elem)(struct bpf_map *map, void *value); - int (*map_peek_elem)(struct bpf_map *map, void *value); + long (*map_update_elem)(struct bpf_map *map, void *key, void *value, u64 flags); + long (*map_delete_elem)(struct bpf_map *map, void *key); + long (*map_push_elem)(struct bpf_map *map, void *value, u64 flags); + long (*map_pop_elem)(struct bpf_map *map, void *value); + long (*map_peek_elem)(struct bpf_map *map, void *value); void *(*map_lookup_percpu_elem)(struct bpf_map *map, void *key, u32 cpu); /* funcs called by prog_array and perf_event_array map */ @@ -139,7 +139,7 @@ struct bpf_map_ops { struct bpf_local_storage __rcu ** (*map_owner_storage_ptr)(void *owner); /* Misc helpers.*/ - int (*map_redirect)(struct bpf_map *map, u64 key, u64 flags); + long (*map_redirect)(struct bpf_map *map, u64 key, u64 flags); /* map_meta_equal must be implemented for maps that can be * used as an inner map. 
It is a runtime check to ensure @@ -157,7 +157,7 @@ struct bpf_map_ops { int (*map_set_for_each_callback_args)(struct bpf_verifier_env *env, struct bpf_func_state *caller, struct bpf_func_state *callee); - int (*map_for_each_callback)(struct bpf_map *map, + long (*map_for_each_callback)(struct bpf_map *map, bpf_callback_t callback_fn, void *callback_ctx, u64 flags); @@ -189,9 +189,14 @@ enum btf_field_type { BPF_RB_NODE | BPF_RB_ROOT, }; +typedef void (*btf_dtor_kfunc_t)(void *); + struct btf_field_kptr { struct btf *btf; struct module *module; + /* dtor used if btf_is_kernel(btf), otherwise the type is + * program-allocated, dtor is NULL, and __bpf_obj_drop_impl is used + */ btf_dtor_kfunc_t dtor; u32 btf_id; }; @@ -888,8 +893,7 @@ struct bpf_verifier_ops { struct bpf_prog *prog, u32 *target_size); int (*btf_struct_access)(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag); + int off, int size); }; struct bpf_prog_offload_ops { @@ -1098,6 +1102,7 @@ struct bpf_trampoline { struct bpf_attach_target_info { struct btf_func_model fmodel; long tgt_addr; + struct module *tgt_mod; const char *tgt_name; const struct btf_type *tgt_type; }; @@ -1401,6 +1406,7 @@ struct bpf_prog_aux { * main prog always has linfo_idx == 0 */ u32 linfo_idx; + struct module *mod; u32 num_exentries; struct exception_table_entry *extable; union { @@ -1469,6 +1475,8 @@ struct bpf_link_ops { void (*show_fdinfo)(const struct bpf_link *link, struct seq_file *seq); int (*fill_link_info)(const struct bpf_link *link, struct bpf_link_info *info); + int (*update_map)(struct bpf_link *link, struct bpf_map *new_map, + struct bpf_map *old_map); }; struct bpf_tramp_link { @@ -1511,6 +1519,8 @@ struct bpf_struct_ops { void *kdata, const void *udata); int (*reg)(void *kdata); void (*unreg)(void *kdata); + int (*update)(void *kdata, void *old_kdata); + int (*validate)(void *kdata); const struct btf_type *type; const struct btf_type *value_type; const char *name; @@ -1545,6 +1555,7 @@ static inline void bpf_module_put(const void *data, struct module *owner) else module_put(owner); } +int bpf_struct_ops_link_create(union bpf_attr *attr); #ifdef CONFIG_NET /* Define it here to avoid the use of forward declaration */ @@ -1585,6 +1596,11 @@ static inline int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, { return -EINVAL; } +static inline int bpf_struct_ops_link_create(union bpf_attr *attr) +{ + return -EOPNOTSUPP; +} + #endif #if defined(CONFIG_CGROUP_BPF) && defined(CONFIG_BPF_LSM) @@ -1617,8 +1633,12 @@ struct bpf_array { #define BPF_COMPLEXITY_LIMIT_INSNS 1000000 /* yes. 1M insns */ #define MAX_TAIL_CALL_CNT 33 -/* Maximum number of loops for bpf_loop */ -#define BPF_MAX_LOOPS BIT(23) +/* Maximum number of loops for bpf_loop and bpf_iter_num. + * It's enum to expose it (and thus make it discoverable) through BTF. 
+ */ +enum { + BPF_MAX_LOOPS = 8 * 1024 * 1024, +}; #define BPF_F_ACCESS_MASK (BPF_F_RDONLY | \ BPF_F_RDONLY_PROG | \ @@ -1921,7 +1941,7 @@ void bpf_prog_free_id(struct bpf_prog *prog); void bpf_map_free_id(struct bpf_map *map); struct btf_field *btf_record_find(const struct btf_record *rec, - u32 offset, enum btf_field_type type); + u32 offset, u32 field_mask); void btf_record_free(struct btf_record *rec); void bpf_map_free_record(struct bpf_map *map); struct btf_record *btf_record_dup(const struct btf_record *rec); @@ -1934,6 +1954,7 @@ struct bpf_map *bpf_map_get_with_uref(u32 ufd); struct bpf_map *__bpf_map_get(struct fd f); void bpf_map_inc(struct bpf_map *map); void bpf_map_inc_with_uref(struct bpf_map *map); +struct bpf_map *__bpf_map_inc_not_zero(struct bpf_map *map, bool uref); struct bpf_map * __must_check bpf_map_inc_not_zero(struct bpf_map *map); void bpf_map_put_with_uref(struct bpf_map *map); void bpf_map_put(struct bpf_map *map); @@ -2154,7 +2175,7 @@ int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size, size_t actual_size); /* verify correctness of eBPF program */ -int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr); +int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size); #ifndef CONFIG_BPF_JIT_ALWAYS_ON void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth); @@ -2242,7 +2263,7 @@ static inline bool bpf_tracing_btf_ctx_access(int off, int size, int btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag); + u32 *next_btf_id, enum bpf_type_flag *flag, const char **field_name); bool btf_struct_ids_match(struct bpf_verifier_log *log, const struct btf *btf, u32 id, int off, const struct btf *need_btf, u32 need_type_id, @@ -2281,7 +2302,7 @@ struct bpf_core_ctx { bool btf_nested_type_is_trusted(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, const char *suffix); + const char *field_name, u32 btf_id, const char *suffix); bool btf_type_ids_nocast_alias(struct bpf_verifier_log *log, const struct btf *reg_btf, u32 reg_id, @@ -2496,7 +2517,8 @@ static inline struct bpf_prog *bpf_prog_by_id(u32 id) static inline int btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag) + u32 *next_btf_id, enum bpf_type_flag *flag, + const char **field_name) { return -EACCES; } diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h index d934248b8e81..173ec7f43ed1 100644 --- a/include/linux/bpf_local_storage.h +++ b/include/linux/bpf_local_storage.h @@ -13,6 +13,7 @@ #include <linux/list.h> #include <linux/hash.h> #include <linux/types.h> +#include <linux/bpf_mem_alloc.h> #include <uapi/linux/btf.h> #define BPF_LOCAL_STORAGE_CACHE_SIZE 16 @@ -55,6 +56,9 @@ struct bpf_local_storage_map { u32 bucket_log; u16 elem_size; u16 cache_idx; + struct bpf_mem_alloc selem_ma; + struct bpf_mem_alloc storage_ma; + bool bpf_ma; }; struct bpf_local_storage_data { @@ -83,6 +87,7 @@ struct bpf_local_storage_elem { struct bpf_local_storage { struct bpf_local_storage_data __rcu *cache[BPF_LOCAL_STORAGE_CACHE_SIZE]; + struct bpf_local_storage_map __rcu *smap; struct hlist_head list; /* List of bpf_local_storage_elem */ void *owner; /* The object that owns the above "list" of * bpf_local_storage_elem. 
@@ -121,14 +126,15 @@ int bpf_local_storage_map_alloc_check(union bpf_attr *attr); struct bpf_map * bpf_local_storage_map_alloc(union bpf_attr *attr, - struct bpf_local_storage_cache *cache); + struct bpf_local_storage_cache *cache, + bool bpf_ma); struct bpf_local_storage_data * bpf_local_storage_lookup(struct bpf_local_storage *local_storage, struct bpf_local_storage_map *smap, bool cacheit_lockit); -bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage); +void bpf_local_storage_destroy(struct bpf_local_storage *local_storage); void bpf_local_storage_map_free(struct bpf_map *map, struct bpf_local_storage_cache *cache, @@ -142,17 +148,19 @@ int bpf_local_storage_map_check_btf(const struct bpf_map *map, void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage, struct bpf_local_storage_elem *selem); -void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool use_trace_rcu); +void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now); void bpf_selem_link_map(struct bpf_local_storage_map *smap, struct bpf_local_storage_elem *selem); -void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem); - struct bpf_local_storage_elem * bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner, void *value, bool charge_mem, gfp_t gfp_flags); +void bpf_selem_free(struct bpf_local_storage_elem *selem, + struct bpf_local_storage_map *smap, + bool reuse_now); + int bpf_local_storage_alloc(void *owner, struct bpf_local_storage_map *smap, @@ -163,7 +171,6 @@ struct bpf_local_storage_data * bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap, void *value, u64 map_flags, gfp_t gfp_flags); -void bpf_local_storage_free_rcu(struct rcu_head *rcu); u64 bpf_local_storage_map_mem_usage(const struct bpf_map *map); #endif /* _BPF_LOCAL_STORAGE_H */ diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h index a7104af61ab4..3929be5743f4 100644 --- a/include/linux/bpf_mem_alloc.h +++ b/include/linux/bpf_mem_alloc.h @@ -31,5 +31,7 @@ void bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr); /* kmem_cache_alloc/free equivalent: */ void *bpf_mem_cache_alloc(struct bpf_mem_alloc *ma); void bpf_mem_cache_free(struct bpf_mem_alloc *ma, void *ptr); +void bpf_mem_cache_raw_free(void *ptr); +void *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags); #endif /* _BPF_MEM_ALLOC_H */ diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 18538bad2b8c..f03852b89d28 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -59,6 +59,14 @@ struct bpf_active_lock { u32 id; }; +#define ITER_PREFIX "bpf_iter_" + +enum bpf_iter_state { + BPF_ITER_STATE_INVALID, /* for non-first slot */ + BPF_ITER_STATE_ACTIVE, + BPF_ITER_STATE_DRAINED, +}; + struct bpf_reg_state { /* Ordering of fields matters. See states_equal() */ enum bpf_reg_type type; @@ -103,6 +111,18 @@ struct bpf_reg_state { bool first_slot; } dynptr; + /* For bpf_iter stack slots */ + struct { + /* BTF container and BTF type ID describing + * struct bpf_iter_<type> of an iterator state + */ + struct btf *btf; + u32 btf_id; + /* packing following two fields to fit iter state into 16 bytes */ + enum bpf_iter_state state:2; + int depth:30; + } iter; + /* Max size from any of the above. */ struct { unsigned long raw1; @@ -141,6 +161,8 @@ struct bpf_reg_state { * same reference to the socket, to determine proper reference freeing. 
* For stack slots that are dynptrs, this is used to track references to * the dynptr to determine proper reference freeing. + * Similarly to dynptrs, we use ID to track "belonging" of a reference + * to a specific instance of bpf_iter. */ u32 id; /* PTR_TO_SOCKET and PTR_TO_TCP_SOCK could be a ptr returned @@ -211,9 +233,11 @@ enum bpf_stack_slot_type { * is stored in bpf_stack_state->spilled_ptr.dynptr.type */ STACK_DYNPTR, + STACK_ITER, }; #define BPF_REG_SIZE 8 /* size of eBPF register in bytes */ + #define BPF_DYNPTR_SIZE sizeof(struct bpf_dynptr_kern) #define BPF_DYNPTR_NR_SLOTS (BPF_DYNPTR_SIZE / BPF_REG_SIZE) @@ -448,12 +472,17 @@ struct bpf_insn_aux_data { bool sanitize_stack_spill; /* subject to Spectre v4 sanitation */ bool zext_dst; /* this insn zero extends dst reg */ bool storage_get_func_atomic; /* bpf_*_storage_get() with atomic memory alloc */ + bool is_iter_next; /* bpf_iter_<type>_next() kfunc call */ u8 alu_state; /* used in combination with alu_limit */ /* below fields are initialized once */ unsigned int orig_idx; /* original instruction index */ - bool prune_point; bool jmp_point; + bool prune_point; + /* ensure we check state equivalence and save state checkpoint and + * this instruction, regardless of any heuristics + */ + bool force_checkpoint; }; #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */ @@ -462,39 +491,36 @@ struct bpf_insn_aux_data { #define BPF_VERIFIER_TMP_LOG_SIZE 1024 struct bpf_verifier_log { - u32 level; - char kbuf[BPF_VERIFIER_TMP_LOG_SIZE]; + /* Logical start and end positions of a "log window" of the verifier log. + * start_pos == 0 means we haven't truncated anything. + * Once truncation starts to happen, start_pos + len_total == end_pos, + * except during log reset situations, in which (end_pos - start_pos) + * might get smaller than len_total (see bpf_vlog_reset()). + * Generally, (end_pos - start_pos) gives number of useful data in + * user log buffer. 
+ */ + u64 start_pos; + u64 end_pos; char __user *ubuf; - u32 len_used; + u32 level; u32 len_total; + u32 len_max; + char kbuf[BPF_VERIFIER_TMP_LOG_SIZE]; }; -static inline bool bpf_verifier_log_full(const struct bpf_verifier_log *log) -{ - return log->len_used >= log->len_total - 1; -} - #define BPF_LOG_LEVEL1 1 #define BPF_LOG_LEVEL2 2 #define BPF_LOG_STATS 4 +#define BPF_LOG_FIXED 8 #define BPF_LOG_LEVEL (BPF_LOG_LEVEL1 | BPF_LOG_LEVEL2) -#define BPF_LOG_MASK (BPF_LOG_LEVEL | BPF_LOG_STATS) +#define BPF_LOG_MASK (BPF_LOG_LEVEL | BPF_LOG_STATS | BPF_LOG_FIXED) #define BPF_LOG_KERNEL (BPF_LOG_MASK + 1) /* kernel internal flag */ #define BPF_LOG_MIN_ALIGNMENT 8U #define BPF_LOG_ALIGNMENT 40U static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log) { - return log && - ((log->level && log->ubuf && !bpf_verifier_log_full(log)) || - log->level == BPF_LOG_KERNEL); -} - -static inline bool -bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log) -{ - return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 && - log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK); + return log && log->level; } #define BPF_MAX_SUBPROGS 256 @@ -574,7 +600,7 @@ struct bpf_verifier_env { u32 scratched_regs; /* Same as scratched_regs but for stack slots */ u64 scratched_stack_slots; - u32 prev_log_len, prev_insn_print_len; + u64 prev_log_pos, prev_insn_print_pos; /* buffer used in reg_type_str() to generate reg_type string */ char type_str_buf[TYPE_STR_BUF_LEN]; }; @@ -585,6 +611,10 @@ __printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env, const char *fmt, ...); __printf(2, 3) void bpf_log(struct bpf_verifier_log *log, const char *fmt, ...); +int bpf_vlog_init(struct bpf_verifier_log *log, u32 log_level, + char __user *log_buf, u32 log_size); +void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos); +int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual); static inline struct bpf_func_state *cur_func(struct bpf_verifier_env *env) { diff --git a/include/linux/btf.h b/include/linux/btf.h index 556b3e2e7471..495250162422 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -71,6 +71,10 @@ #define KF_SLEEPABLE (1 << 5) /* kfunc may sleep */ #define KF_DESTRUCTIVE (1 << 6) /* kfunc performs destructive actions */ #define KF_RCU (1 << 7) /* kfunc takes either rcu or trusted pointer arguments */ +/* only one of KF_ITER_{NEW,NEXT,DESTROY} could be specified per kfunc */ +#define KF_ITER_NEW (1 << 8) /* kfunc implements BPF iter constructor */ +#define KF_ITER_NEXT (1 << 9) /* kfunc implements BPF iter next method */ +#define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */ /* * Tag marking a kernel function as a kfunc. 
This is meant to minimize the @@ -117,13 +121,11 @@ struct btf_struct_metas { struct btf_struct_meta types[]; }; -typedef void (*btf_dtor_kfunc_t)(void *); - extern const struct file_operations btf_fops; void btf_get(struct btf *btf); void btf_put(struct btf *btf); -int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr); +int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_sz); struct btf *btf_get_by_fd(int fd); int btf_get_info_by_fd(const struct btf *btf, const union bpf_attr *attr, diff --git a/include/linux/filter.h b/include/linux/filter.h index efa5d4a1677e..5364b0c52c1d 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -571,8 +571,7 @@ DECLARE_STATIC_KEY_FALSE(bpf_stats_enabled_key); extern struct mutex nf_conn_btf_access_lock; extern int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag); + int off, int size); typedef unsigned int (*bpf_dispatcher_fn)(const void *ctx, const struct bpf_insn *insnsi, @@ -1504,9 +1503,9 @@ static inline bool bpf_sk_lookup_run_v6(struct net *net, int protocol, } #endif /* IS_ENABLED(CONFIG_IPV6) */ -static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u64 index, - u64 flags, const u64 flag_mask, - void *lookup_elem(struct bpf_map *map, u32 key)) +static __always_inline long __bpf_xdp_redirect_map(struct bpf_map *map, u64 index, + u64 flags, const u64 flag_mask, + void *lookup_elem(struct bpf_map *map, u32 key)) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); const u64 action_mask = XDP_ABORTED | XDP_DROP | XDP_PASS | XDP_TX; diff --git a/include/linux/module.h b/include/linux/module.h index 4435ad9439ab..886d24877c7c 100644 --- a/include/linux/module.h +++ b/include/linux/module.h @@ -608,14 +608,6 @@ static inline bool within_module(unsigned long addr, const struct module *mod) /* Search for module by name: must be in a RCU-sched critical section. */ struct module *find_module(const char *name); -/* Returns 0 and fills in value, defined and namebuf, or -ERANGE if - symnum out of range. */ -int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type, - char *name, char *module_name, int *exported); - -/* Look for this name: can be of form module:name. */ -unsigned long module_kallsyms_lookup_name(const char *name); - extern void __noreturn __module_put_and_kthread_exit(struct module *mod, long code); #define module_put_and_kthread_exit(code) __module_put_and_kthread_exit(THIS_MODULE, code) @@ -662,17 +654,6 @@ static inline void __module_get(struct module *module) /* Dereference module function descriptor */ void *dereference_module_function_descriptor(struct module *mod, void *ptr); -/* For kallsyms to ask for address resolution. namebuf should be at - * least KSYM_NAME_LEN long: a pointer to namebuf is returned if - * found, otherwise NULL. 
*/ -const char *module_address_lookup(unsigned long addr, - unsigned long *symbolsize, - unsigned long *offset, - char **modname, const unsigned char **modbuildid, - char *namebuf); -int lookup_module_symbol_name(unsigned long addr, char *symname); -int lookup_module_symbol_attrs(unsigned long addr, unsigned long *size, unsigned long *offset, char *modname, char *name); - int register_module_notifier(struct notifier_block *nb); int unregister_module_notifier(struct notifier_block *nb); @@ -763,39 +744,6 @@ static inline void module_put(struct module *module) #define module_name(mod) "kernel" -/* For kallsyms to ask for address resolution. NULL means not found. */ -static inline const char *module_address_lookup(unsigned long addr, - unsigned long *symbolsize, - unsigned long *offset, - char **modname, - const unsigned char **modbuildid, - char *namebuf) -{ - return NULL; -} - -static inline int lookup_module_symbol_name(unsigned long addr, char *symname) -{ - return -ERANGE; -} - -static inline int lookup_module_symbol_attrs(unsigned long addr, unsigned long *size, unsigned long *offset, char *modname, char *name) -{ - return -ERANGE; -} - -static inline int module_get_kallsym(unsigned int symnum, unsigned long *value, - char *type, char *name, - char *module_name, int *exported) -{ - return -ERANGE; -} - -static inline unsigned long module_kallsyms_lookup_name(const char *name) -{ - return 0; -} - static inline int register_module_notifier(struct notifier_block *nb) { /* no events will happen anyway, so this can always succeed */ @@ -891,7 +839,36 @@ int module_kallsyms_on_each_symbol(const char *modname, int (*fn)(void *, const char *, struct module *, unsigned long), void *data); -#else + +/* For kallsyms to ask for address resolution. namebuf should be at + * least KSYM_NAME_LEN long: a pointer to namebuf is returned if + * found, otherwise NULL. + */ +const char *module_address_lookup(unsigned long addr, + unsigned long *symbolsize, + unsigned long *offset, + char **modname, const unsigned char **modbuildid, + char *namebuf); +int lookup_module_symbol_name(unsigned long addr, char *symname); +int lookup_module_symbol_attrs(unsigned long addr, + unsigned long *size, + unsigned long *offset, + char *modname, + char *name); + +/* Returns 0 and fills in value, defined and namebuf, or -ERANGE if + * symnum out of range. + */ +int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type, + char *name, char *module_name, int *exported); + +/* Look for this name: can be of form module:name. */ +unsigned long module_kallsyms_lookup_name(const char *name); + +unsigned long find_kallsyms_symbol_value(struct module *mod, const char *name); + +#else /* CONFIG_MODULES && CONFIG_KALLSYMS */ + static inline int module_kallsyms_on_each_symbol(const char *modname, int (*fn)(void *, const char *, struct module *, unsigned long), @@ -899,6 +876,50 @@ static inline int module_kallsyms_on_each_symbol(const char *modname, { return -EOPNOTSUPP; } + +/* For kallsyms to ask for address resolution. NULL means not found. 
*/ +static inline const char *module_address_lookup(unsigned long addr, + unsigned long *symbolsize, + unsigned long *offset, + char **modname, + const unsigned char **modbuildid, + char *namebuf) +{ + return NULL; +} + +static inline int lookup_module_symbol_name(unsigned long addr, char *symname) +{ + return -ERANGE; +} + +static inline int lookup_module_symbol_attrs(unsigned long addr, + unsigned long *size, + unsigned long *offset, + char *modname, + char *name) +{ + return -ERANGE; +} + +static inline int module_get_kallsym(unsigned int symnum, unsigned long *value, + char *type, char *name, + char *module_name, int *exported) +{ + return -ERANGE; +} + +static inline unsigned long module_kallsyms_lookup_name(const char *name) +{ + return 0; +} + +static inline unsigned long find_kallsyms_symbol_value(struct module *mod, + const char *name) +{ + return 0; +} + #endif /* CONFIG_MODULES && CONFIG_KALLSYMS */ #endif /* _LINUX_MODULE_H */ diff --git a/include/linux/sched.h b/include/linux/sched.h index 63d242164b1a..b11b4517760f 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1318,11 +1318,6 @@ struct task_struct { struct tlbflush_unmap_batch tlb_ubc; - union { - refcount_t rcu_users; - struct rcu_head rcu; - }; - /* Cache last used pipe for splice(): */ struct pipe_inode_info *splice_pipe; @@ -1459,6 +1454,8 @@ struct task_struct { unsigned long saved_state_change; # endif #endif + struct rcu_head rcu; + refcount_t rcu_users; int pagefault_disabled; #ifdef CONFIG_MMU struct task_struct *oom_reaper_list; diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 82511b2f61ea..494a23a976b0 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -938,6 +938,19 @@ struct sk_buff { __u8 ip_summed:2; __u8 ooo_okay:1; + /* private: */ + __u8 __mono_tc_offset[0]; + /* public: */ + __u8 mono_delivery_time:1; /* See SKB_MONO_DELIVERY_TIME_MASK */ +#ifdef CONFIG_NET_CLS_ACT + __u8 tc_at_ingress:1; /* See TC_AT_INGRESS_MASK */ + __u8 tc_skip_classify:1; +#endif + __u8 remcsum_offload:1; + __u8 csum_complete_sw:1; + __u8 csum_level:2; + __u8 dst_pending_confirm:1; + __u8 l4_hash:1; __u8 sw_hash:1; __u8 wifi_acked_valid:1; @@ -947,19 +960,6 @@ struct sk_buff { __u8 encapsulation:1; __u8 encap_hdr_csum:1; __u8 csum_valid:1; - - /* private: */ - __u8 __pkt_vlan_present_offset[0]; - /* public: */ - __u8 remcsum_offload:1; - __u8 csum_complete_sw:1; - __u8 csum_level:2; - __u8 dst_pending_confirm:1; - __u8 mono_delivery_time:1; /* See SKB_MONO_DELIVERY_TIME_MASK */ -#ifdef CONFIG_NET_CLS_ACT - __u8 tc_skip_classify:1; - __u8 tc_at_ingress:1; /* See TC_AT_INGRESS_MASK */ -#endif #ifdef CONFIG_IPV6_NDISC_NODETYPE __u8 ndisc_nodetype:2; #endif @@ -1066,13 +1066,13 @@ struct sk_buff { * around, you also must adapt these constants. 
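The module.h churn above moves the kallsyms helpers, together with the newly exported find_kallsyms_symbol_value(), under a combined CONFIG_MODULES && CONFIG_KALLSYMS section, with stubs returning 0 or -ERANGE otherwise. A minimal sketch of a caller of these declarations, assuming the usual find_module()/try_module_get() pattern; the wrapper name and error handling are illustrative:

#include <linux/module.h>
#include <linux/rcupdate.h>

/* Resolve a symbol inside a named module. Returns 0 when the module or
 * the symbol is absent, or when kallsyms support is compiled out (the
 * stub above returns 0 in that case as well).
 */
static unsigned long resolve_module_symbol(const char *modname, const char *sym)
{
        unsigned long addr = 0;
        struct module *mod;

        rcu_read_lock_sched();
        mod = find_module(modname);     /* must run in an RCU-sched section */
        if (mod && try_module_get(mod)) {
                rcu_read_unlock_sched();
                addr = find_kallsyms_symbol_value(mod, sym);
                module_put(mod);
                return addr;
        }
        rcu_read_unlock_sched();
        return addr;
}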
*/ #ifdef __BIG_ENDIAN_BITFIELD -#define TC_AT_INGRESS_MASK (1 << 0) -#define SKB_MONO_DELIVERY_TIME_MASK (1 << 2) +#define SKB_MONO_DELIVERY_TIME_MASK (1 << 7) +#define TC_AT_INGRESS_MASK (1 << 6) #else -#define TC_AT_INGRESS_MASK (1 << 7) -#define SKB_MONO_DELIVERY_TIME_MASK (1 << 5) +#define SKB_MONO_DELIVERY_TIME_MASK (1 << 0) +#define TC_AT_INGRESS_MASK (1 << 1) #endif -#define PKT_VLAN_PRESENT_OFFSET offsetof(struct sk_buff, __pkt_vlan_present_offset) +#define SKB_BF_MONO_TC_OFFSET offsetof(struct sk_buff, __mono_tc_offset) #ifdef __KERNEL__ /* @@ -5063,12 +5063,12 @@ static inline u64 skb_get_kcov_handle(struct sk_buff *skb) #endif } -#ifdef CONFIG_PAGE_POOL static inline void skb_mark_for_recycle(struct sk_buff *skb) { +#ifdef CONFIG_PAGE_POOL skb->pp_recycle = 1; -} #endif +} #endif /* __KERNEL__ */ #endif /* _LINUX_SKBUFF_H */ diff --git a/include/net/fou.h b/include/net/fou.h index 80f56e275b08..824eb4b231fd 100644 --- a/include/net/fou.h +++ b/include/net/fou.h @@ -17,4 +17,6 @@ int __fou_build_header(struct sk_buff *skb, struct ip_tunnel_encap *e, int __gue_build_header(struct sk_buff *skb, struct ip_tunnel_encap *e, u8 *protocol, __be16 *sport, int type); +int register_fou_bpf(void); + #endif diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h index 255b32a90850..ed4b6ad3fcac 100644 --- a/include/net/ip_tunnels.h +++ b/include/net/ip_tunnels.h @@ -57,6 +57,13 @@ struct ip_tunnel_key { __u8 flow_flags; }; +struct ip_tunnel_encap { + u16 type; + u16 flags; + __be16 sport; + __be16 dport; +}; + /* Flags for ip_tunnel_info mode. */ #define IP_TUNNEL_INFO_TX 0x01 /* represents tx tunnel parameters */ #define IP_TUNNEL_INFO_IPV6 0x02 /* key contains IPv6 addresses */ @@ -75,6 +82,7 @@ struct ip_tunnel_key { struct ip_tunnel_info { struct ip_tunnel_key key; + struct ip_tunnel_encap encap; #ifdef CONFIG_DST_CACHE struct dst_cache dst_cache; #endif @@ -92,13 +100,6 @@ struct ip_tunnel_6rd_parm { }; #endif -struct ip_tunnel_encap { - u16 type; - u16 flags; - __be16 sport; - __be16 dport; -}; - struct ip_tunnel_prl_entry { struct ip_tunnel_prl_entry __rcu *next; __be32 addr; @@ -299,6 +300,7 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn, __be32 remote, __be32 local, __be32 key); +void ip_tunnel_md_udp_encap(struct sk_buff *skb, struct ip_tunnel_info *info); int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, const struct tnl_ptk_info *tpi, struct metadata_dst *tun_dst, bool log_ecn_error); @@ -377,22 +379,23 @@ static inline int ip_encap_hlen(struct ip_tunnel_encap *e) return hlen; } -static inline int ip_tunnel_encap(struct sk_buff *skb, struct ip_tunnel *t, +static inline int ip_tunnel_encap(struct sk_buff *skb, + struct ip_tunnel_encap *e, u8 *protocol, struct flowi4 *fl4) { const struct ip_tunnel_encap_ops *ops; int ret = -EINVAL; - if (t->encap.type == TUNNEL_ENCAP_NONE) + if (e->type == TUNNEL_ENCAP_NONE) return 0; - if (t->encap.type >= MAX_IPTUN_ENCAP_OPS) + if (e->type >= MAX_IPTUN_ENCAP_OPS) return -EINVAL; rcu_read_lock(); - ops = rcu_dereference(iptun_encaps[t->encap.type]); + ops = rcu_dereference(iptun_encaps[e->type]); if (likely(ops && ops->build_header)) - ret = ops->build_header(skb, &t->encap, protocol, fl4); + ret = ops->build_header(skb, e, protocol, fl4); rcu_read_unlock(); return ret; diff --git a/include/net/tcp.h b/include/net/tcp.h index a0a91a988272..04a31643cda3 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1117,6 +1117,9 @@ struct tcp_congestion_ops { int tcp_register_congestion_control(struct 
tcp_congestion_ops *type); void tcp_unregister_congestion_control(struct tcp_congestion_ops *type); +int tcp_update_congestion_control(struct tcp_congestion_ops *type, + struct tcp_congestion_ops *old_type); +int tcp_validate_congestion_control(struct tcp_congestion_ops *ca); void tcp_assign_congestion_control(struct sock *sk); void tcp_init_congestion_control(struct sock *sk); diff --git a/include/net/xdp.h b/include/net/xdp.h index 76aa748e7923..d1c5381fc95f 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -318,35 +318,6 @@ void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq); void xdp_return_frame_bulk(struct xdp_frame *xdpf, struct xdp_frame_bulk *bq); -/* When sending xdp_frame into the network stack, then there is no - * return point callback, which is needed to release e.g. DMA-mapping - * resources with page_pool. Thus, have explicit function to release - * frame resources. - */ -void __xdp_release_frame(void *data, struct xdp_mem_info *mem); -static inline void xdp_release_frame(struct xdp_frame *xdpf) -{ - struct xdp_mem_info *mem = &xdpf->mem; - struct skb_shared_info *sinfo; - int i; - - /* Curr only page_pool needs this */ - if (mem->type != MEM_TYPE_PAGE_POOL) - return; - - if (likely(!xdp_frame_has_frags(xdpf))) - goto out; - - sinfo = xdp_get_shared_info_from_frame(xdpf); - for (i = 0; i < sinfo->nr_frags; i++) { - struct page *page = skb_frag_page(&sinfo->frags[i]); - - __xdp_release_frame(page_address(page), mem); - } -out: - __xdp_release_frame(xdpf->data, mem); -} - static __always_inline unsigned int xdp_get_frame_len(struct xdp_frame *xdpf) { struct skb_shared_info *sinfo; diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h index 3e952e569418..d318c769b445 100644 --- a/include/net/xsk_buff_pool.h +++ b/include/net/xsk_buff_pool.h @@ -180,13 +180,8 @@ static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool, if (likely(!cross_pg)) return false; - if (pool->dma_pages_cnt) { - return !(pool->dma_pages[addr >> PAGE_SHIFT] & - XSK_NEXT_PG_CONTIG_MASK); - } - - /* skb path */ - return addr + len > pool->addrs_cnt; + return pool->dma_pages_cnt && + !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK); } static inline u64 xp_aligned_extract_addr(struct xsk_buff_pool *pool, u64 addr) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 976b194eb775..3823100b7934 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -1033,6 +1033,7 @@ enum bpf_attach_type { BPF_PERF_EVENT, BPF_TRACE_KPROBE_MULTI, BPF_LSM_CGROUP, + BPF_STRUCT_OPS, __MAX_BPF_ATTACH_TYPE }; @@ -1108,7 +1109,7 @@ enum bpf_link_type { */ #define BPF_F_STRICT_ALIGNMENT (1U << 0) -/* If BPF_F_ANY_ALIGNMENT is used in BPF_PROF_LOAD command, the +/* If BPF_F_ANY_ALIGNMENT is used in BPF_PROG_LOAD command, the * verifier will allow any alignment whatsoever. On platforms * with strict alignment requirements for loads ands stores (such * as sparc and mips) the verifier validates that all loads and @@ -1266,6 +1267,9 @@ enum { /* Create a map that is suitable to be an inner map with dynamic max entries */ BPF_F_INNER_MAP = (1U << 12), + +/* Create a map that will be registered/unregesitered by the backed bpf_link */ + BPF_F_LINK = (1U << 13), }; /* Flags for BPF_PROG_QUERY. */ @@ -1403,6 +1407,11 @@ union bpf_attr { __aligned_u64 fd_array; /* array of FDs */ __aligned_u64 core_relos; __u32 core_relo_rec_size; /* sizeof(struct bpf_core_relo) */ + /* output: actual total log contents size (including termintaing zero). 
+ * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + */ + __u32 log_true_size; }; struct { /* anonymous struct used by BPF_OBJ_* commands */ @@ -1488,6 +1497,11 @@ union bpf_attr { __u32 btf_size; __u32 btf_log_size; __u32 btf_log_level; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + */ + __u32 btf_log_true_size; }; struct { @@ -1507,7 +1521,10 @@ union bpf_attr { } task_fd_query; struct { /* struct used by BPF_LINK_CREATE command */ - __u32 prog_fd; /* eBPF program to attach */ + union { + __u32 prog_fd; /* eBPF program to attach */ + __u32 map_fd; /* struct_ops to attach */ + }; union { __u32 target_fd; /* object to attach to */ __u32 target_ifindex; /* target ifindex */ @@ -1548,12 +1565,23 @@ union bpf_attr { struct { /* struct used by BPF_LINK_UPDATE command */ __u32 link_fd; /* link fd */ - /* new program fd to update link with */ - __u32 new_prog_fd; + union { + /* new program fd to update link with */ + __u32 new_prog_fd; + /* new struct_ops map fd to update link with */ + __u32 new_map_fd; + }; __u32 flags; /* extra flags */ - /* expected link's program fd; is specified only if - * BPF_F_REPLACE flag is set in flags */ - __u32 old_prog_fd; + union { + /* expected link's program fd; is specified only if + * BPF_F_REPLACE flag is set in flags. + */ + __u32 old_prog_fd; + /* expected link's map fd; is specified only + * if BPF_F_REPLACE flag is set. + */ + __u32 old_map_fd; + }; } link_update; struct { @@ -1647,17 +1675,17 @@ union bpf_attr { * Description * This helper is a "printk()-like" facility for debugging. It * prints a message defined by format *fmt* (of size *fmt_size*) - * to file *\/sys/kernel/debug/tracing/trace* from DebugFS, if + * to file *\/sys/kernel/tracing/trace* from TraceFS, if * available. It can take up to three additional **u64** * arguments (as an eBPF helpers, the total number of arguments is * limited to five). * * Each time the helper is called, it appends a line to the trace. - * Lines are discarded while *\/sys/kernel/debug/tracing/trace* is - * open, use *\/sys/kernel/debug/tracing/trace_pipe* to avoid this. + * Lines are discarded while *\/sys/kernel/tracing/trace* is + * open, use *\/sys/kernel/tracing/trace_pipe* to avoid this. * The format of the trace is customizable, and the exact output * one will get depends on the options set in - * *\/sys/kernel/debug/tracing/trace_options* (see also the + * *\/sys/kernel/tracing/trace_options* (see also the * *README* file under the same directory). 
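The log_true_size and btf_log_true_size output fields above enable an exact-size retry from user space: a first load with even a tiny buffer reports how many bytes the complete log needs, including the terminating zero. A sketch with a raw bpf(2) wrapper; the two-pass helper is illustrative, and libbpf builds the same pattern on top of these fields:

#include <linux/bpf.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
{
        return syscall(__NR_bpf, cmd, attr, size);
}

/* Load a program twice: once with a throwaway buffer just to learn the
 * required log size, then with a buffer of exactly log_true_size bytes.
 */
static int prog_load_full_log(union bpf_attr *attr, char **log_out)
{
        char probe[256];
        char *buf;
        int fd;

        attr->log_level = 1;
        attr->log_buf = (unsigned long)probe;   /* ptr_to_u64() in real code */
        attr->log_size = sizeof(probe);
        fd = sys_bpf(BPF_PROG_LOAD, attr, sizeof(*attr));
        if (fd >= 0) {
                *log_out = NULL;                /* loaded fine, small log was enough */
                return fd;
        }

        buf = malloc(attr->log_true_size);      /* includes the terminating zero */
        if (!buf)
                return -1;
        attr->log_buf = (unsigned long)buf;
        attr->log_size = attr->log_true_size;
        fd = sys_bpf(BPF_PROG_LOAD, attr, sizeof(*attr));
        *log_out = buf;
        return fd;
}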
However, it usually * defaults to something like: * @@ -6379,6 +6407,9 @@ struct bpf_link_info { struct { __u32 ifindex; } xdp; + struct { + __u32 map_id; + } struct_ops; }; } __attribute__((aligned(8))); @@ -7112,4 +7143,12 @@ enum { BPF_F_TIMER_ABS = (1ULL << 0), }; +/* BPF numbers iterator state */ +struct bpf_iter_num { + /* opaque iterator state; having __u64 here allows to preserve correct + * alignment requirements in vmlinux.h, generated from BTF + */ + __u64 __opaque[1]; +} __attribute__((aligned(8))); + #endif /* _UAPI__LINUX_BPF_H__ */ diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile index 02242614dcc7..1d3892168d32 100644 --- a/kernel/bpf/Makefile +++ b/kernel/bpf/Makefile @@ -6,7 +6,8 @@ cflags-nogcse-$(CONFIG_X86)$(CONFIG_CC_IS_GCC) := -fno-gcse endif CFLAGS_core.o += $(call cc-disable-warning, override-init) $(cflags-nogcse-yy) -obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o +obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o log.o +obj-$(CONFIG_BPF_SYSCALL) += bpf_iter.o map_iter.o task_iter.o prog_iter.o link_iter.o obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o bloom_filter.o obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o obj-$(CONFIG_BPF_SYSCALL) += bpf_local_storage.o bpf_task_storage.o diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c index 1588c793a715..2058e89b5ddd 100644 --- a/kernel/bpf/arraymap.c +++ b/kernel/bpf/arraymap.c @@ -307,8 +307,8 @@ static int array_map_get_next_key(struct bpf_map *map, void *key, void *next_key } /* Called from syscall or from eBPF program */ -static int array_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long array_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { struct bpf_array *array = container_of(map, struct bpf_array, map); u32 index = *(u32 *)key; @@ -386,7 +386,7 @@ int bpf_percpu_array_update(struct bpf_map *map, void *key, void *value, } /* Called from syscall or from eBPF program */ -static int array_map_delete_elem(struct bpf_map *map, void *key) +static long array_map_delete_elem(struct bpf_map *map, void *key) { return -EINVAL; } @@ -686,8 +686,8 @@ static const struct bpf_iter_seq_info iter_seq_info = { .seq_priv_size = sizeof(struct bpf_iter_seq_array_map_info), }; -static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn, - void *callback_ctx, u64 flags) +static long bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn, + void *callback_ctx, u64 flags) { u32 i, key, num_elems = 0; struct bpf_array *array; @@ -871,7 +871,7 @@ int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file, return 0; } -static int fd_array_map_delete_elem(struct bpf_map *map, void *key) +static long fd_array_map_delete_elem(struct bpf_map *map, void *key) { struct bpf_array *array = container_of(map, struct bpf_array, map); void *old_ptr; diff --git a/kernel/bpf/bloom_filter.c b/kernel/bpf/bloom_filter.c index 6350c5d35a9b..540331b610a9 100644 --- a/kernel/bpf/bloom_filter.c +++ b/kernel/bpf/bloom_filter.c @@ -16,13 +16,6 @@ struct bpf_bloom_filter { struct bpf_map map; u32 bitset_mask; u32 hash_seed; - /* If the size of the values in the bloom filter is u32 aligned, - * then it is more performant to use jhash2 as the underlying hash - * function, else we use jhash. 
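struct bpf_iter_num added to the UAPI just above is the program-side state backing the numbers iterator whose kfuncs (bpf_iter_num_new/next/destroy) appear further down in kernel/bpf/bpf_iter.c. A sketch of the open-coded use from a BPF program; the raw extern declarations are spelled out here, while selftests normally reach them through the bpf_for() convenience macro:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym;
extern int *bpf_iter_num_next(struct bpf_iter_num *it) __ksym;
extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __ksym;

char _license[] SEC("license") = "GPL";

SEC("raw_tp/sys_enter")
int iter_sum(const void *ctx)
{
        struct bpf_iter_num it;
        int *v, sum = 0;

        /* half-open range [0, 1000); _next returns NULL once exhausted */
        bpf_iter_num_new(&it, 0, 1000);
        while ((v = bpf_iter_num_next(&it)))
                sum += *v;
        /* the destructor must run on every path once _new was called */
        bpf_iter_num_destroy(&it);

        bpf_printk("sum=%d", sum);
        return 0;
}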
This tracks the number of u32s - * in an u32-aligned value size. If the value size is not u32 aligned, - * this will be 0. - */ - u32 aligned_u32_count; u32 nr_hash_funcs; unsigned long bitset[]; }; @@ -32,16 +25,15 @@ static u32 hash(struct bpf_bloom_filter *bloom, void *value, { u32 h; - if (bloom->aligned_u32_count) - h = jhash2(value, bloom->aligned_u32_count, - bloom->hash_seed + index); + if (likely(value_size % 4 == 0)) + h = jhash2(value, value_size / 4, bloom->hash_seed + index); else h = jhash(value, value_size, bloom->hash_seed + index); return h & bloom->bitset_mask; } -static int bloom_map_peek_elem(struct bpf_map *map, void *value) +static long bloom_map_peek_elem(struct bpf_map *map, void *value) { struct bpf_bloom_filter *bloom = container_of(map, struct bpf_bloom_filter, map); @@ -56,7 +48,7 @@ static int bloom_map_peek_elem(struct bpf_map *map, void *value) return 0; } -static int bloom_map_push_elem(struct bpf_map *map, void *value, u64 flags) +static long bloom_map_push_elem(struct bpf_map *map, void *value, u64 flags) { struct bpf_bloom_filter *bloom = container_of(map, struct bpf_bloom_filter, map); @@ -73,12 +65,12 @@ static int bloom_map_push_elem(struct bpf_map *map, void *value, u64 flags) return 0; } -static int bloom_map_pop_elem(struct bpf_map *map, void *value) +static long bloom_map_pop_elem(struct bpf_map *map, void *value) { return -EOPNOTSUPP; } -static int bloom_map_delete_elem(struct bpf_map *map, void *value) +static long bloom_map_delete_elem(struct bpf_map *map, void *value) { return -EOPNOTSUPP; } @@ -152,11 +144,6 @@ static struct bpf_map *bloom_map_alloc(union bpf_attr *attr) bloom->nr_hash_funcs = nr_hash_funcs; bloom->bitset_mask = bitset_mask; - /* Check whether the value size is u32-aligned */ - if ((attr->value_size & (sizeof(u32) - 1)) == 0) - bloom->aligned_u32_count = - attr->value_size / sizeof(u32); - if (!(attr->map_flags & BPF_F_ZERO_SEED)) bloom->hash_seed = get_random_u32(); @@ -177,8 +164,8 @@ static void *bloom_map_lookup_elem(struct bpf_map *map, void *key) return ERR_PTR(-EINVAL); } -static int bloom_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 flags) +static long bloom_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 flags) { /* The eBPF program should use map_push_elem instead */ return -EINVAL; diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c index 9ae07aedaf23..d44fe8dd9732 100644 --- a/kernel/bpf/bpf_cgrp_storage.c +++ b/kernel/bpf/bpf_cgrp_storage.c @@ -46,8 +46,6 @@ static struct bpf_local_storage __rcu **cgroup_storage_ptr(void *owner) void bpf_cgrp_storage_free(struct cgroup *cgroup) { struct bpf_local_storage *local_storage; - bool free_cgroup_storage = false; - unsigned long flags; rcu_read_lock(); local_storage = rcu_dereference(cgroup->bpf_cgrp_storage); @@ -57,14 +55,9 @@ void bpf_cgrp_storage_free(struct cgroup *cgroup) } bpf_cgrp_storage_lock(); - raw_spin_lock_irqsave(&local_storage->lock, flags); - free_cgroup_storage = bpf_local_storage_unlink_nolock(local_storage); - raw_spin_unlock_irqrestore(&local_storage->lock, flags); + bpf_local_storage_destroy(local_storage); bpf_cgrp_storage_unlock(); rcu_read_unlock(); - - if (free_cgroup_storage) - kfree_rcu(local_storage, rcu); } static struct bpf_local_storage_data * @@ -100,8 +93,8 @@ static void *bpf_cgrp_storage_lookup_elem(struct bpf_map *map, void *key) return sdata ? 
sdata->data : NULL; } -static int bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags) +static long bpf_cgrp_storage_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags) { struct bpf_local_storage_data *sdata; struct cgroup *cgroup; @@ -128,11 +121,11 @@ static int cgroup_storage_delete(struct cgroup *cgroup, struct bpf_map *map) if (!sdata) return -ENOENT; - bpf_selem_unlink(SELEM(sdata), true); + bpf_selem_unlink(SELEM(sdata), false); return 0; } -static int bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key) +static long bpf_cgrp_storage_delete_elem(struct bpf_map *map, void *key) { struct cgroup *cgroup; int err, fd; @@ -156,7 +149,7 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key) static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr) { - return bpf_local_storage_map_alloc(attr, &cgroup_cache); + return bpf_local_storage_map_alloc(attr, &cgroup_cache, true); } static void cgroup_storage_map_free(struct bpf_map *map) @@ -231,7 +224,7 @@ const struct bpf_func_proto bpf_cgrp_storage_get_proto = { .gpl_only = false, .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &bpf_cgroup_btf_id[0], .arg3_type = ARG_PTR_TO_MAP_VALUE_OR_NULL, .arg4_type = ARG_ANYTHING, @@ -242,6 +235,6 @@ const struct bpf_func_proto bpf_cgrp_storage_delete_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &bpf_cgroup_btf_id[0], }; diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c index 43e2619c8167..a4d93df78c75 100644 --- a/kernel/bpf/bpf_inode_storage.c +++ b/kernel/bpf/bpf_inode_storage.c @@ -57,7 +57,6 @@ static struct bpf_local_storage_data *inode_storage_lookup(struct inode *inode, void bpf_inode_storage_free(struct inode *inode) { struct bpf_local_storage *local_storage; - bool free_inode_storage = false; struct bpf_storage_blob *bsb; bsb = bpf_inode(inode); @@ -72,13 +71,8 @@ void bpf_inode_storage_free(struct inode *inode) return; } - raw_spin_lock_bh(&local_storage->lock); - free_inode_storage = bpf_local_storage_unlink_nolock(local_storage); - raw_spin_unlock_bh(&local_storage->lock); + bpf_local_storage_destroy(local_storage); rcu_read_unlock(); - - if (free_inode_storage) - kfree_rcu(local_storage, rcu); } static void *bpf_fd_inode_storage_lookup_elem(struct bpf_map *map, void *key) @@ -97,8 +91,8 @@ static void *bpf_fd_inode_storage_lookup_elem(struct bpf_map *map, void *key) return sdata ? 
sdata->data : NULL; } -static int bpf_fd_inode_storage_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags) +static long bpf_fd_inode_storage_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags) { struct bpf_local_storage_data *sdata; struct file *f; @@ -128,12 +122,12 @@ static int inode_storage_delete(struct inode *inode, struct bpf_map *map) if (!sdata) return -ENOENT; - bpf_selem_unlink(SELEM(sdata), true); + bpf_selem_unlink(SELEM(sdata), false); return 0; } -static int bpf_fd_inode_storage_delete_elem(struct bpf_map *map, void *key) +static long bpf_fd_inode_storage_delete_elem(struct bpf_map *map, void *key) { struct file *f; int fd, err; @@ -205,7 +199,7 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, static struct bpf_map *inode_storage_map_alloc(union bpf_attr *attr) { - return bpf_local_storage_map_alloc(attr, &inode_cache); + return bpf_local_storage_map_alloc(attr, &inode_cache, false); } static void inode_storage_map_free(struct bpf_map *map) @@ -235,7 +229,7 @@ const struct bpf_func_proto bpf_inode_storage_get_proto = { .gpl_only = false, .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &bpf_inode_storage_btf_ids[0], .arg3_type = ARG_PTR_TO_MAP_VALUE_OR_NULL, .arg4_type = ARG_ANYTHING, @@ -246,6 +240,6 @@ const struct bpf_func_proto bpf_inode_storage_delete_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &bpf_inode_storage_btf_ids[0], }; diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c index 5dc307bdeaeb..96856f130cbf 100644 --- a/kernel/bpf/bpf_iter.c +++ b/kernel/bpf/bpf_iter.c @@ -776,3 +776,73 @@ const struct bpf_func_proto bpf_loop_proto = { .arg3_type = ARG_PTR_TO_STACK_OR_NULL, .arg4_type = ARG_ANYTHING, }; + +struct bpf_iter_num_kern { + int cur; /* current value, inclusive */ + int end; /* final value, exclusive */ +} __aligned(8); + +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes", + "Global functions as their definitions will be in vmlinux BTF"); + +__bpf_kfunc int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) +{ + struct bpf_iter_num_kern *s = (void *)it; + + BUILD_BUG_ON(sizeof(struct bpf_iter_num_kern) != sizeof(struct bpf_iter_num)); + BUILD_BUG_ON(__alignof__(struct bpf_iter_num_kern) != __alignof__(struct bpf_iter_num)); + + BTF_TYPE_EMIT(struct btf_iter_num); + + /* start == end is legit, it's an empty range and we'll just get NULL + * on first (and any subsequent) bpf_iter_num_next() call + */ + if (start > end) { + s->cur = s->end = 0; + return -EINVAL; + } + + /* avoid overflows, e.g., if start == INT_MIN and end == INT_MAX */ + if ((s64)end - (s64)start > BPF_MAX_LOOPS) { + s->cur = s->end = 0; + return -E2BIG; + } + + /* user will call bpf_iter_num_next() first, + * which will set s->cur to exactly start value; + * underflow shouldn't matter + */ + s->cur = start - 1; + s->end = end; + + return 0; +} + +__bpf_kfunc int *bpf_iter_num_next(struct bpf_iter_num* it) +{ + struct bpf_iter_num_kern *s = (void *)it; + + /* check failed initialization or if we are done (same behavior); + * need to be careful about overflow, so convert to s64 for checks, + * e.g., if s->cur == s->end == INT_MAX, we can't just do + * s->cur + 1 >= s->end + */ + if ((s64)(s->cur + 1) >= s->end) { + s->cur = s->end = 0; + return NULL; + } + + 
s->cur++; + + return &s->cur; +} + +__bpf_kfunc void bpf_iter_num_destroy(struct bpf_iter_num *it) +{ + struct bpf_iter_num_kern *s = (void *)it; + + s->cur = s->end = 0; +} + +__diag_pop(); diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c index d3ba3f2db640..47d9948d768f 100644 --- a/kernel/bpf/bpf_local_storage.c +++ b/kernel/bpf/bpf_local_storage.c @@ -80,8 +80,24 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner, if (charge_mem && mem_charge(smap, owner, smap->elem_size)) return NULL; - selem = bpf_map_kzalloc(&smap->map, smap->elem_size, - gfp_flags | __GFP_NOWARN); + if (smap->bpf_ma) { + migrate_disable(); + selem = bpf_mem_cache_alloc_flags(&smap->selem_ma, gfp_flags); + migrate_enable(); + if (selem) + /* Keep the original bpf_map_kzalloc behavior + * before started using the bpf_mem_cache_alloc. + * + * No need to use zero_map_value. The bpf_selem_free() + * only does bpf_mem_cache_free when there is + * no other bpf prog is using the selem. + */ + memset(SDATA(selem)->data, 0, smap->map.value_size); + } else { + selem = bpf_map_kzalloc(&smap->map, smap->elem_size, + gfp_flags | __GFP_NOWARN); + } + if (selem) { if (value) copy_map_value(&smap->map, SDATA(selem)->data, value); @@ -95,7 +111,8 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner, return NULL; } -void bpf_local_storage_free_rcu(struct rcu_head *rcu) +/* rcu tasks trace callback for bpf_ma == false */ +static void __bpf_local_storage_free_trace_rcu(struct rcu_head *rcu) { struct bpf_local_storage *local_storage; @@ -109,28 +126,66 @@ void bpf_local_storage_free_rcu(struct rcu_head *rcu) kfree_rcu(local_storage, rcu); } -static void bpf_selem_free_fields_rcu(struct rcu_head *rcu) +static void bpf_local_storage_free_rcu(struct rcu_head *rcu) { - struct bpf_local_storage_elem *selem; - struct bpf_local_storage_map *smap; + struct bpf_local_storage *local_storage; - selem = container_of(rcu, struct bpf_local_storage_elem, rcu); - /* protected by the rcu_barrier*() */ - smap = rcu_dereference_protected(SDATA(selem)->smap, true); - bpf_obj_free_fields(smap->map.record, SDATA(selem)->data); - kfree(selem); + local_storage = container_of(rcu, struct bpf_local_storage, rcu); + bpf_mem_cache_raw_free(local_storage); } -static void bpf_selem_free_fields_trace_rcu(struct rcu_head *rcu) +static void bpf_local_storage_free_trace_rcu(struct rcu_head *rcu) { - /* Free directly if Tasks Trace RCU GP also implies RCU GP */ if (rcu_trace_implies_rcu_gp()) - bpf_selem_free_fields_rcu(rcu); + bpf_local_storage_free_rcu(rcu); else - call_rcu(rcu, bpf_selem_free_fields_rcu); + call_rcu(rcu, bpf_local_storage_free_rcu); } -static void bpf_selem_free_trace_rcu(struct rcu_head *rcu) +/* Handle bpf_ma == false */ +static void __bpf_local_storage_free(struct bpf_local_storage *local_storage, + bool vanilla_rcu) +{ + if (vanilla_rcu) + kfree_rcu(local_storage, rcu); + else + call_rcu_tasks_trace(&local_storage->rcu, + __bpf_local_storage_free_trace_rcu); +} + +static void bpf_local_storage_free(struct bpf_local_storage *local_storage, + struct bpf_local_storage_map *smap, + bool bpf_ma, bool reuse_now) +{ + if (!local_storage) + return; + + if (!bpf_ma) { + __bpf_local_storage_free(local_storage, reuse_now); + return; + } + + if (!reuse_now) { + call_rcu_tasks_trace(&local_storage->rcu, + bpf_local_storage_free_trace_rcu); + return; + } + + if (smap) { + migrate_disable(); + bpf_mem_cache_free(&smap->storage_ma, local_storage); + migrate_enable(); + } else { + /* smap could be NULL if 
the selem that triggered + * this 'local_storage' creation had been long gone. + * In this case, directly do call_rcu(). + */ + call_rcu(&local_storage->rcu, bpf_local_storage_free_rcu); + } +} + +/* rcu tasks trace callback for bpf_ma == false */ +static void __bpf_selem_free_trace_rcu(struct rcu_head *rcu) { struct bpf_local_storage_elem *selem; @@ -141,17 +196,66 @@ static void bpf_selem_free_trace_rcu(struct rcu_head *rcu) kfree_rcu(selem, rcu); } +/* Handle bpf_ma == false */ +static void __bpf_selem_free(struct bpf_local_storage_elem *selem, + bool vanilla_rcu) +{ + if (vanilla_rcu) + kfree_rcu(selem, rcu); + else + call_rcu_tasks_trace(&selem->rcu, __bpf_selem_free_trace_rcu); +} + +static void bpf_selem_free_rcu(struct rcu_head *rcu) +{ + struct bpf_local_storage_elem *selem; + + selem = container_of(rcu, struct bpf_local_storage_elem, rcu); + bpf_mem_cache_raw_free(selem); +} + +static void bpf_selem_free_trace_rcu(struct rcu_head *rcu) +{ + if (rcu_trace_implies_rcu_gp()) + bpf_selem_free_rcu(rcu); + else + call_rcu(rcu, bpf_selem_free_rcu); +} + +void bpf_selem_free(struct bpf_local_storage_elem *selem, + struct bpf_local_storage_map *smap, + bool reuse_now) +{ + bpf_obj_free_fields(smap->map.record, SDATA(selem)->data); + + if (!smap->bpf_ma) { + __bpf_selem_free(selem, reuse_now); + return; + } + + if (!reuse_now) { + call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_trace_rcu); + } else { + /* Instead of using the vanilla call_rcu(), + * bpf_mem_cache_free will be able to reuse selem + * immediately. + */ + migrate_disable(); + bpf_mem_cache_free(&smap->selem_ma, selem); + migrate_enable(); + } +} + /* local_storage->lock must be held and selem->local_storage == local_storage. * The caller must ensure selem->smap is still valid to be * dereferenced for its smap->elem_size and smap->cache_idx. */ static bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage, struct bpf_local_storage_elem *selem, - bool uncharge_mem, bool use_trace_rcu) + bool uncharge_mem, bool reuse_now) { struct bpf_local_storage_map *smap; bool free_local_storage; - struct btf_record *rec; void *owner; smap = rcu_dereference_check(SDATA(selem)->smap, bpf_rcu_lock_held()); @@ -192,35 +296,55 @@ static bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_stor SDATA(selem)) RCU_INIT_POINTER(local_storage->cache[smap->cache_idx], NULL); - /* A different RCU callback is chosen whenever we need to free - * additional fields in selem data before freeing selem. - * bpf_local_storage_map_free only executes rcu_barrier to wait for RCU - * callbacks when it has special fields, hence we can only conditionally - * dereference smap, as by this time the map might have already been - * freed without waiting for our call_rcu callback if it did not have - * any special fields. + bpf_selem_free(selem, smap, reuse_now); + + if (rcu_access_pointer(local_storage->smap) == smap) + RCU_INIT_POINTER(local_storage->smap, NULL); + + return free_local_storage; +} + +static bool check_storage_bpf_ma(struct bpf_local_storage *local_storage, + struct bpf_local_storage_map *storage_smap, + struct bpf_local_storage_elem *selem) +{ + + struct bpf_local_storage_map *selem_smap; + + /* local_storage->smap may be NULL. If it is, get the bpf_ma + * from any selem in the local_storage->list. The bpf_ma of all + * local_storage and selem should have the same value + * for the same map type. 
+ * + * If the local_storage->list is already empty, the caller will not + * care about the bpf_ma value also because the caller is not + * responsibile to free the local_storage. */ - rec = smap->map.record; - if (use_trace_rcu) { - if (!IS_ERR_OR_NULL(rec)) - call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_fields_trace_rcu); - else - call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_trace_rcu); - } else { - if (!IS_ERR_OR_NULL(rec)) - call_rcu(&selem->rcu, bpf_selem_free_fields_rcu); - else - kfree_rcu(selem, rcu); + + if (storage_smap) + return storage_smap->bpf_ma; + + if (!selem) { + struct hlist_node *n; + + n = rcu_dereference_check(hlist_first_rcu(&local_storage->list), + bpf_rcu_lock_held()); + if (!n) + return false; + + selem = hlist_entry(n, struct bpf_local_storage_elem, snode); } + selem_smap = rcu_dereference_check(SDATA(selem)->smap, bpf_rcu_lock_held()); - return free_local_storage; + return selem_smap->bpf_ma; } -static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem, - bool use_trace_rcu) +static void bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem, + bool reuse_now) { + struct bpf_local_storage_map *storage_smap; struct bpf_local_storage *local_storage; - bool free_local_storage = false; + bool bpf_ma, free_local_storage = false; unsigned long flags; if (unlikely(!selem_linked_to_storage_lockless(selem))) @@ -229,19 +353,18 @@ static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem, local_storage = rcu_dereference_check(selem->local_storage, bpf_rcu_lock_held()); + storage_smap = rcu_dereference_check(local_storage->smap, + bpf_rcu_lock_held()); + bpf_ma = check_storage_bpf_ma(local_storage, storage_smap, selem); + raw_spin_lock_irqsave(&local_storage->lock, flags); if (likely(selem_linked_to_storage(selem))) free_local_storage = bpf_selem_unlink_storage_nolock( - local_storage, selem, true, use_trace_rcu); + local_storage, selem, true, reuse_now); raw_spin_unlock_irqrestore(&local_storage->lock, flags); - if (free_local_storage) { - if (use_trace_rcu) - call_rcu_tasks_trace(&local_storage->rcu, - bpf_local_storage_free_rcu); - else - kfree_rcu(local_storage, rcu); - } + if (free_local_storage) + bpf_local_storage_free(local_storage, storage_smap, bpf_ma, reuse_now); } void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage, @@ -251,7 +374,7 @@ void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage, hlist_add_head_rcu(&selem->snode, &local_storage->list); } -void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem) +static void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem) { struct bpf_local_storage_map *smap; struct bpf_local_storage_map_bucket *b; @@ -281,14 +404,14 @@ void bpf_selem_link_map(struct bpf_local_storage_map *smap, raw_spin_unlock_irqrestore(&b->lock, flags); } -void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool use_trace_rcu) +void bpf_selem_unlink(struct bpf_local_storage_elem *selem, bool reuse_now) { /* Always unlink from map before unlinking from local_storage * because selem will be freed after successfully unlinked from * the local_storage. 
*/ bpf_selem_unlink_map(selem); - __bpf_selem_unlink_storage(selem, use_trace_rcu); + bpf_selem_unlink_storage(selem, reuse_now); } /* If cacheit_lockit is false, this lookup function is lockless */ @@ -361,13 +484,21 @@ int bpf_local_storage_alloc(void *owner, if (err) return err; - storage = bpf_map_kzalloc(&smap->map, sizeof(*storage), - gfp_flags | __GFP_NOWARN); + if (smap->bpf_ma) { + migrate_disable(); + storage = bpf_mem_cache_alloc_flags(&smap->storage_ma, gfp_flags); + migrate_enable(); + } else { + storage = bpf_map_kzalloc(&smap->map, sizeof(*storage), + gfp_flags | __GFP_NOWARN); + } + if (!storage) { err = -ENOMEM; goto uncharge; } + RCU_INIT_POINTER(storage->smap, smap); INIT_HLIST_HEAD(&storage->list); raw_spin_lock_init(&storage->lock); storage->owner = owner; @@ -407,7 +538,7 @@ int bpf_local_storage_alloc(void *owner, return 0; uncharge: - kfree(storage); + bpf_local_storage_free(storage, smap, smap->bpf_ma, true); mem_uncharge(smap, owner, sizeof(*storage)); return err; } @@ -451,7 +582,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap, err = bpf_local_storage_alloc(owner, smap, selem, gfp_flags); if (err) { - kfree(selem); + bpf_selem_free(selem, smap, true); mem_uncharge(smap, owner, smap->elem_size); return ERR_PTR(err); } @@ -534,7 +665,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap, if (old_sdata) { bpf_selem_unlink_map(SELEM(old_sdata)); bpf_selem_unlink_storage_nolock(local_storage, SELEM(old_sdata), - false, true); + false, false); } unlock: @@ -545,7 +676,7 @@ unlock_err: raw_spin_unlock_irqrestore(&local_storage->lock, flags); if (selem) { mem_uncharge(smap, owner, smap->elem_size); - kfree(selem); + bpf_selem_free(selem, smap, true); } return ERR_PTR(err); } @@ -601,40 +732,6 @@ int bpf_local_storage_map_alloc_check(union bpf_attr *attr) return 0; } -static struct bpf_local_storage_map *__bpf_local_storage_map_alloc(union bpf_attr *attr) -{ - struct bpf_local_storage_map *smap; - unsigned int i; - u32 nbuckets; - - smap = bpf_map_area_alloc(sizeof(*smap), NUMA_NO_NODE); - if (!smap) - return ERR_PTR(-ENOMEM); - bpf_map_init_from_attr(&smap->map, attr); - - nbuckets = roundup_pow_of_two(num_possible_cpus()); - /* Use at least 2 buckets, select_bucket() is undefined behavior with 1 bucket */ - nbuckets = max_t(u32, 2, nbuckets); - smap->bucket_log = ilog2(nbuckets); - - smap->buckets = bpf_map_kvcalloc(&smap->map, sizeof(*smap->buckets), - nbuckets, GFP_USER | __GFP_NOWARN); - if (!smap->buckets) { - bpf_map_area_free(smap); - return ERR_PTR(-ENOMEM); - } - - for (i = 0; i < nbuckets; i++) { - INIT_HLIST_HEAD(&smap->buckets[i].list); - raw_spin_lock_init(&smap->buckets[i].lock); - } - - smap->elem_size = offsetof(struct bpf_local_storage_elem, - sdata.data[attr->value_size]); - - return smap; -} - int bpf_local_storage_map_check_btf(const struct bpf_map *map, const struct btf *btf, const struct btf_type *key_type, @@ -652,11 +749,16 @@ int bpf_local_storage_map_check_btf(const struct bpf_map *map, return 0; } -bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage) +void bpf_local_storage_destroy(struct bpf_local_storage *local_storage) { + struct bpf_local_storage_map *storage_smap; struct bpf_local_storage_elem *selem; - bool free_storage = false; + bool bpf_ma, free_storage = false; struct hlist_node *n; + unsigned long flags; + + storage_smap = rcu_dereference_check(local_storage->smap, bpf_rcu_lock_held()); + bpf_ma = check_storage_bpf_ma(local_storage, storage_smap, NULL); /* 
Neither the bpf_prog nor the bpf_map's syscall * could be modifying the local_storage->list now. @@ -667,6 +769,7 @@ bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage) * when unlinking elem from the local_storage->list and * the map's bucket->list. */ + raw_spin_lock_irqsave(&local_storage->lock, flags); hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) { /* Always unlink from map before unlinking from * local_storage. @@ -679,10 +782,12 @@ bool bpf_local_storage_unlink_nolock(struct bpf_local_storage *local_storage) * of the loop will set the free_cgroup_storage to true. */ free_storage = bpf_selem_unlink_storage_nolock( - local_storage, selem, false, false); + local_storage, selem, false, true); } + raw_spin_unlock_irqrestore(&local_storage->lock, flags); - return free_storage; + if (free_storage) + bpf_local_storage_free(local_storage, storage_smap, bpf_ma, true); } u64 bpf_local_storage_map_mem_usage(const struct bpf_map *map) @@ -695,18 +800,71 @@ u64 bpf_local_storage_map_mem_usage(const struct bpf_map *map) return usage; } +/* When bpf_ma == true, the bpf_mem_alloc is used to allocate and free memory. + * A deadlock free allocator is useful for storage that the bpf prog can easily + * get a hold of the owner PTR_TO_BTF_ID in any context. eg. bpf_get_current_task_btf. + * The task and cgroup storage fall into this case. The bpf_mem_alloc reuses + * memory immediately. To be reuse-immediate safe, the owner destruction + * code path needs to go through a rcu grace period before calling + * bpf_local_storage_destroy(). + * + * When bpf_ma == false, the kmalloc and kfree are used. + */ struct bpf_map * bpf_local_storage_map_alloc(union bpf_attr *attr, - struct bpf_local_storage_cache *cache) + struct bpf_local_storage_cache *cache, + bool bpf_ma) { struct bpf_local_storage_map *smap; + unsigned int i; + u32 nbuckets; + int err; + + smap = bpf_map_area_alloc(sizeof(*smap), NUMA_NO_NODE); + if (!smap) + return ERR_PTR(-ENOMEM); + bpf_map_init_from_attr(&smap->map, attr); + + nbuckets = roundup_pow_of_two(num_possible_cpus()); + /* Use at least 2 buckets, select_bucket() is undefined behavior with 1 bucket */ + nbuckets = max_t(u32, 2, nbuckets); + smap->bucket_log = ilog2(nbuckets); - smap = __bpf_local_storage_map_alloc(attr); - if (IS_ERR(smap)) - return ERR_CAST(smap); + smap->buckets = bpf_map_kvcalloc(&smap->map, sizeof(*smap->buckets), + nbuckets, GFP_USER | __GFP_NOWARN); + if (!smap->buckets) { + err = -ENOMEM; + goto free_smap; + } + + for (i = 0; i < nbuckets; i++) { + INIT_HLIST_HEAD(&smap->buckets[i].list); + raw_spin_lock_init(&smap->buckets[i].lock); + } + + smap->elem_size = offsetof(struct bpf_local_storage_elem, + sdata.data[attr->value_size]); + + smap->bpf_ma = bpf_ma; + if (bpf_ma) { + err = bpf_mem_alloc_init(&smap->selem_ma, smap->elem_size, false); + if (err) + goto free_smap; + + err = bpf_mem_alloc_init(&smap->storage_ma, sizeof(struct bpf_local_storage), false); + if (err) { + bpf_mem_alloc_destroy(&smap->selem_ma); + goto free_smap; + } + } smap->cache_idx = bpf_local_storage_cache_idx_get(cache); return &smap->map; + +free_smap: + kvfree(smap->buckets); + bpf_map_area_free(smap); + return ERR_PTR(err); } void bpf_local_storage_map_free(struct bpf_map *map, @@ -748,7 +906,7 @@ void bpf_local_storage_map_free(struct bpf_map *map, migrate_disable(); this_cpu_inc(*busy_counter); } - bpf_selem_unlink(selem, false); + bpf_selem_unlink(selem, true); if (busy_counter) { this_cpu_dec(*busy_counter); migrate_enable(); @@ -772,26 
+930,10 @@ void bpf_local_storage_map_free(struct bpf_map *map, */ synchronize_rcu(); - /* Only delay freeing of smap, buckets are not needed anymore */ - kvfree(smap->buckets); - - /* When local storage has special fields, callbacks for - * bpf_selem_free_fields_rcu and bpf_selem_free_fields_trace_rcu will - * keep using the map BTF record, we need to execute an RCU barrier to - * wait for them as the record will be freed right after our map_free - * callback. - */ - if (!IS_ERR_OR_NULL(smap->map.record)) { - rcu_barrier_tasks_trace(); - /* We cannot skip rcu_barrier() when rcu_trace_implies_rcu_gp() - * is true, because while call_rcu invocation is skipped in that - * case in bpf_selem_free_fields_trace_rcu (and all local - * storage maps pass use_trace_rcu = true), there can be - * call_rcu callbacks based on use_trace_rcu = false in the - * while ((selem = ...)) loop above or when owner's free path - * calls bpf_local_storage_unlink_nolock. - */ - rcu_barrier(); + if (smap->bpf_ma) { + bpf_mem_alloc_destroy(&smap->selem_ma); + bpf_mem_alloc_destroy(&smap->storage_ma); } + kvfree(smap->buckets); bpf_map_area_free(smap); } diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c index 38903fb52f98..d3f0a4825fa6 100644 --- a/kernel/bpf/bpf_struct_ops.c +++ b/kernel/bpf/bpf_struct_ops.c @@ -11,11 +11,13 @@ #include <linux/refcount.h> #include <linux/mutex.h> #include <linux/btf_ids.h> +#include <linux/rcupdate_wait.h> enum bpf_struct_ops_state { BPF_STRUCT_OPS_STATE_INIT, BPF_STRUCT_OPS_STATE_INUSE, BPF_STRUCT_OPS_STATE_TOBEFREE, + BPF_STRUCT_OPS_STATE_READY, }; #define BPF_STRUCT_OPS_COMMON_VALUE \ @@ -58,6 +60,13 @@ struct bpf_struct_ops_map { struct bpf_struct_ops_value kvalue; }; +struct bpf_struct_ops_link { + struct bpf_link link; + struct bpf_map __rcu *map; +}; + +static DEFINE_MUTEX(update_mutex); + #define VALUE_PREFIX "bpf_struct_ops_" #define VALUE_PREFIX_LEN (sizeof(VALUE_PREFIX) - 1) @@ -249,6 +258,7 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key, struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; struct bpf_struct_ops_value *uvalue, *kvalue; enum bpf_struct_ops_state state; + s64 refcnt; if (unlikely(*(u32 *)key != 0)) return -ENOENT; @@ -267,7 +277,14 @@ int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key, uvalue = value; memcpy(uvalue, st_map->uvalue, map->value_size); uvalue->state = state; - refcount_set(&uvalue->refcnt, refcount_read(&kvalue->refcnt)); + + /* This value offers the user space a general estimate of how + * many sockets are still utilizing this struct_ops for TCP + * congestion control. The number might not be exact, but it + * should sufficiently meet our present goals. 
+ */ + refcnt = atomic64_read(&map->refcnt) - atomic64_read(&map->usercnt); + refcount_set(&uvalue->refcnt, max_t(s64, refcnt, 0)); return 0; } @@ -349,8 +366,8 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks, model, flags, tlinks, NULL); } -static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 flags) +static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 flags) { struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; const struct bpf_struct_ops *st_ops = st_map->st_ops; @@ -491,12 +508,29 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, *(unsigned long *)(udata + moff) = prog->aux->id; } - refcount_set(&kvalue->refcnt, 1); - bpf_map_inc(map); + if (st_map->map.map_flags & BPF_F_LINK) { + err = st_ops->validate(kdata); + if (err) + goto reset_unlock; + set_memory_rox((long)st_map->image, 1); + /* Let bpf_link handle registration & unregistration. + * + * Pair with smp_load_acquire() during lookup_elem(). + */ + smp_store_release(&kvalue->state, BPF_STRUCT_OPS_STATE_READY); + goto unlock; + } set_memory_rox((long)st_map->image, 1); err = st_ops->reg(kdata); if (likely(!err)) { + /* This refcnt increment on the map here after + * 'st_ops->reg()' is secure since the state of the + * map must be set to INIT at this moment, and thus + * bpf_struct_ops_map_delete_elem() can't unregister + * or transition it to TOBEFREE concurrently. + */ + bpf_map_inc(map); /* Pair with smp_load_acquire() during lookup_elem(). * It ensures the above udata updates (e.g. prog->aux->id) * can be seen once BPF_STRUCT_OPS_STATE_INUSE is set. @@ -512,7 +546,6 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key, */ set_memory_nx((long)st_map->image, 1); set_memory_rw((long)st_map->image, 1); - bpf_map_put(map); reset_unlock: bpf_struct_ops_map_put_progs(st_map); @@ -524,20 +557,22 @@ unlock: return err; } -static int bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key) +static long bpf_struct_ops_map_delete_elem(struct bpf_map *map, void *key) { enum bpf_struct_ops_state prev_state; struct bpf_struct_ops_map *st_map; st_map = (struct bpf_struct_ops_map *)map; + if (st_map->map.map_flags & BPF_F_LINK) + return -EOPNOTSUPP; + prev_state = cmpxchg(&st_map->kvalue.state, BPF_STRUCT_OPS_STATE_INUSE, BPF_STRUCT_OPS_STATE_TOBEFREE); switch (prev_state) { case BPF_STRUCT_OPS_STATE_INUSE: st_map->st_ops->unreg(&st_map->kvalue.data); - if (refcount_dec_and_test(&st_map->kvalue.refcnt)) - bpf_map_put(map); + bpf_map_put(map); return 0; case BPF_STRUCT_OPS_STATE_TOBEFREE: return -EINPROGRESS; @@ -570,7 +605,7 @@ static void bpf_struct_ops_map_seq_show_elem(struct bpf_map *map, void *key, kfree(value); } -static void bpf_struct_ops_map_free(struct bpf_map *map) +static void __bpf_struct_ops_map_free(struct bpf_map *map) { struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; @@ -582,10 +617,32 @@ static void bpf_struct_ops_map_free(struct bpf_map *map) bpf_map_area_free(st_map); } +static void bpf_struct_ops_map_free(struct bpf_map *map) +{ + /* The struct_ops's function may switch to another struct_ops. + * + * For example, bpf_tcp_cc_x->init() may switch to + * another tcp_cc_y by calling + * setsockopt(TCP_CONGESTION, "tcp_cc_y"). + * During the switch, bpf_struct_ops_put(tcp_cc_x) is called + * and its refcount may reach 0 which then free its + * trampoline image while tcp_cc_x is still running. 
+ * + * A vanilla rcu gp is to wait for all bpf-tcp-cc prog + * to finish. bpf-tcp-cc prog is non sleepable. + * A rcu_tasks gp is to wait for the last few insn + * in the tramopline image to finish before releasing + * the trampoline image. + */ + synchronize_rcu_mult(call_rcu, call_rcu_tasks); + + __bpf_struct_ops_map_free(map); +} + static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr) { if (attr->key_size != sizeof(unsigned int) || attr->max_entries != 1 || - attr->map_flags || !attr->btf_vmlinux_value_type_id) + (attr->map_flags & ~BPF_F_LINK) || !attr->btf_vmlinux_value_type_id) return -EINVAL; return 0; } @@ -609,6 +666,9 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr) if (attr->value_size != vt->size) return ERR_PTR(-EINVAL); + if (attr->map_flags & BPF_F_LINK && (!st_ops->validate || !st_ops->update)) + return ERR_PTR(-EOPNOTSUPP); + t = st_ops->type; st_map_size = sizeof(*st_map) + @@ -630,7 +690,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr) NUMA_NO_NODE); st_map->image = bpf_jit_alloc_exec(PAGE_SIZE); if (!st_map->uvalue || !st_map->links || !st_map->image) { - bpf_struct_ops_map_free(map); + __bpf_struct_ops_map_free(map); return ERR_PTR(-ENOMEM); } @@ -676,41 +736,175 @@ const struct bpf_map_ops bpf_struct_ops_map_ops = { bool bpf_struct_ops_get(const void *kdata) { struct bpf_struct_ops_value *kvalue; + struct bpf_struct_ops_map *st_map; + struct bpf_map *map; kvalue = container_of(kdata, struct bpf_struct_ops_value, data); + st_map = container_of(kvalue, struct bpf_struct_ops_map, kvalue); - return refcount_inc_not_zero(&kvalue->refcnt); + map = __bpf_map_inc_not_zero(&st_map->map, false); + return !IS_ERR(map); } -static void bpf_struct_ops_put_rcu(struct rcu_head *head) +void bpf_struct_ops_put(const void *kdata) { + struct bpf_struct_ops_value *kvalue; struct bpf_struct_ops_map *st_map; - st_map = container_of(head, struct bpf_struct_ops_map, rcu); + kvalue = container_of(kdata, struct bpf_struct_ops_value, data); + st_map = container_of(kvalue, struct bpf_struct_ops_map, kvalue); + bpf_map_put(&st_map->map); } -void bpf_struct_ops_put(const void *kdata) +static bool bpf_struct_ops_valid_to_reg(struct bpf_map *map) { - struct bpf_struct_ops_value *kvalue; + struct bpf_struct_ops_map *st_map = (struct bpf_struct_ops_map *)map; - kvalue = container_of(kdata, struct bpf_struct_ops_value, data); - if (refcount_dec_and_test(&kvalue->refcnt)) { - struct bpf_struct_ops_map *st_map; + return map->map_type == BPF_MAP_TYPE_STRUCT_OPS && + map->map_flags & BPF_F_LINK && + /* Pair with smp_store_release() during map_update */ + smp_load_acquire(&st_map->kvalue.state) == BPF_STRUCT_OPS_STATE_READY; +} - st_map = container_of(kvalue, struct bpf_struct_ops_map, - kvalue); - /* The struct_ops's function may switch to another struct_ops. - * - * For example, bpf_tcp_cc_x->init() may switch to - * another tcp_cc_y by calling - * setsockopt(TCP_CONGESTION, "tcp_cc_y"). - * During the switch, bpf_struct_ops_put(tcp_cc_x) is called - * and its map->refcnt may reach 0 which then free its - * trampoline image while tcp_cc_x is still running. - * - * Thus, a rcu grace period is needed here. 
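bpf_struct_ops_map_alloc() above rejects BPF_F_LINK maps unless the struct_ops kind provides both validate() and update() callbacks, and the link update path just below funnels a map switch through update(). For tcp_congestion_ops this presumably pairs with the tcp_validate_congestion_control()/tcp_update_congestion_control() declarations added to include/net/tcp.h earlier in this diff; net/ipv4/bpf_tcp_ca.c is not part of the hunks shown here, so the wiring below is a sketch in the existing bpf_tcp_ca_* naming style, with the remaining ops members elided:

/* A struct_ops kind becomes BPF_F_LINK-capable by supplying validate()
 * (run at update_elem time for link-backed maps) and update() (run when
 * a link is re-pointed at a replacement map).
 */
static int bpf_tcp_ca_reg(void *kdata)
{
        return tcp_register_congestion_control(kdata);
}

static void bpf_tcp_ca_unreg(void *kdata)
{
        tcp_unregister_congestion_control(kdata);
}

static int bpf_tcp_ca_update(void *kdata, void *old_kdata)
{
        return tcp_update_congestion_control(kdata, old_kdata);
}

static int bpf_tcp_ca_validate(void *kdata)
{
        return tcp_validate_congestion_control(kdata);
}

struct bpf_struct_ops bpf_tcp_congestion_ops = {
        /* .verifier_ops, .init, .check_member, .init_member elided */
        .reg = bpf_tcp_ca_reg,
        .unreg = bpf_tcp_ca_unreg,
        .update = bpf_tcp_ca_update,
        .validate = bpf_tcp_ca_validate,
        .name = "tcp_congestion_ops",
};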
+static void bpf_struct_ops_map_link_dealloc(struct bpf_link *link) +{ + struct bpf_struct_ops_link *st_link; + struct bpf_struct_ops_map *st_map; + + st_link = container_of(link, struct bpf_struct_ops_link, link); + st_map = (struct bpf_struct_ops_map *) + rcu_dereference_protected(st_link->map, true); + if (st_map) { + /* st_link->map can be NULL if + * bpf_struct_ops_link_create() fails to register. */ - call_rcu(&st_map->rcu, bpf_struct_ops_put_rcu); + st_map->st_ops->unreg(&st_map->kvalue.data); + bpf_map_put(&st_map->map); + } + kfree(st_link); +} + +static void bpf_struct_ops_map_link_show_fdinfo(const struct bpf_link *link, + struct seq_file *seq) +{ + struct bpf_struct_ops_link *st_link; + struct bpf_map *map; + + st_link = container_of(link, struct bpf_struct_ops_link, link); + rcu_read_lock(); + map = rcu_dereference(st_link->map); + seq_printf(seq, "map_id:\t%d\n", map->id); + rcu_read_unlock(); +} + +static int bpf_struct_ops_map_link_fill_link_info(const struct bpf_link *link, + struct bpf_link_info *info) +{ + struct bpf_struct_ops_link *st_link; + struct bpf_map *map; + + st_link = container_of(link, struct bpf_struct_ops_link, link); + rcu_read_lock(); + map = rcu_dereference(st_link->map); + info->struct_ops.map_id = map->id; + rcu_read_unlock(); + return 0; +} + +static int bpf_struct_ops_map_link_update(struct bpf_link *link, struct bpf_map *new_map, + struct bpf_map *expected_old_map) +{ + struct bpf_struct_ops_map *st_map, *old_st_map; + struct bpf_map *old_map; + struct bpf_struct_ops_link *st_link; + int err = 0; + + st_link = container_of(link, struct bpf_struct_ops_link, link); + st_map = container_of(new_map, struct bpf_struct_ops_map, map); + + if (!bpf_struct_ops_valid_to_reg(new_map)) + return -EINVAL; + + mutex_lock(&update_mutex); + + old_map = rcu_dereference_protected(st_link->map, lockdep_is_held(&update_mutex)); + if (expected_old_map && old_map != expected_old_map) { + err = -EPERM; + goto err_out; + } + + old_st_map = container_of(old_map, struct bpf_struct_ops_map, map); + /* The new and old struct_ops must be the same type. 
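From user space the struct_ops-link lifecycle rides on the link_create.map_fd and link_update.new_map_fd union members added to the UAPI earlier, plus the new struct_ops.map_id field of bpf_link_info. A sketch with a raw bpf(2) wrapper (libbpf's bpf_link_create()/bpf_link_update() wrap the same commands); both maps are assumed to have been created with BPF_F_LINK in map_flags and fully populated via update_elem beforehand, and the kernel-side update handler continues right below:

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
{
        return syscall(__NR_bpf, cmd, attr, size);
}

/* Attach a BPF_F_LINK struct_ops map; the returned link fd keeps it registered. */
static int struct_ops_attach(int map_fd)
{
        union bpf_attr attr = {};

        attr.link_create.map_fd = map_fd;
        attr.link_create.attach_type = BPF_STRUCT_OPS;
        return sys_bpf(BPF_LINK_CREATE, &attr, sizeof(attr));
}

/* Atomically re-point the link at a replacement map of the same struct_ops type. */
static int struct_ops_switch(int link_fd, int new_map_fd)
{
        union bpf_attr attr = {};

        attr.link_update.link_fd = link_fd;
        attr.link_update.new_map_fd = new_map_fd;
        return sys_bpf(BPF_LINK_UPDATE, &attr, sizeof(attr));
}

/* Which struct_ops map currently backs the link? */
static unsigned int struct_ops_link_map_id(int link_fd)
{
        struct bpf_link_info info = {};
        union bpf_attr attr = {};

        attr.info.bpf_fd = link_fd;
        attr.info.info = (unsigned long)&info;  /* ptr_to_u64() in real code */
        attr.info.info_len = sizeof(info);
        if (sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr)))
                return 0;
        return info.struct_ops.map_id;
}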
*/ + if (st_map->st_ops != old_st_map->st_ops) { + err = -EINVAL; + goto err_out; } + + err = st_map->st_ops->update(st_map->kvalue.data, old_st_map->kvalue.data); + if (err) + goto err_out; + + bpf_map_inc(new_map); + rcu_assign_pointer(st_link->map, new_map); + bpf_map_put(old_map); + +err_out: + mutex_unlock(&update_mutex); + + return err; } + +static const struct bpf_link_ops bpf_struct_ops_map_lops = { + .dealloc = bpf_struct_ops_map_link_dealloc, + .show_fdinfo = bpf_struct_ops_map_link_show_fdinfo, + .fill_link_info = bpf_struct_ops_map_link_fill_link_info, + .update_map = bpf_struct_ops_map_link_update, +}; + +int bpf_struct_ops_link_create(union bpf_attr *attr) +{ + struct bpf_struct_ops_link *link = NULL; + struct bpf_link_primer link_primer; + struct bpf_struct_ops_map *st_map; + struct bpf_map *map; + int err; + + map = bpf_map_get(attr->link_create.map_fd); + if (IS_ERR(map)) + return PTR_ERR(map); + + st_map = (struct bpf_struct_ops_map *)map; + + if (!bpf_struct_ops_valid_to_reg(map)) { + err = -EINVAL; + goto err_out; + } + + link = kzalloc(sizeof(*link), GFP_USER); + if (!link) { + err = -ENOMEM; + goto err_out; + } + bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS, &bpf_struct_ops_map_lops, NULL); + + err = bpf_link_prime(&link->link, &link_primer); + if (err) + goto err_out; + + err = st_map->st_ops->reg(st_map->kvalue.data); + if (err) { + bpf_link_cleanup(&link_primer); + link = NULL; + goto err_out; + } + RCU_INIT_POINTER(link->map, map); + + return bpf_link_settle(&link_primer); + +err_out: + bpf_map_put(map); + kfree(link); + return err; +} + diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c index 20f942229f3c..adf6dfe0ba68 100644 --- a/kernel/bpf/bpf_task_storage.c +++ b/kernel/bpf/bpf_task_storage.c @@ -72,8 +72,6 @@ task_storage_lookup(struct task_struct *task, struct bpf_map *map, void bpf_task_storage_free(struct task_struct *task) { struct bpf_local_storage *local_storage; - bool free_task_storage = false; - unsigned long flags; rcu_read_lock(); @@ -84,14 +82,9 @@ void bpf_task_storage_free(struct task_struct *task) } bpf_task_storage_lock(); - raw_spin_lock_irqsave(&local_storage->lock, flags); - free_task_storage = bpf_local_storage_unlink_nolock(local_storage); - raw_spin_unlock_irqrestore(&local_storage->lock, flags); + bpf_local_storage_destroy(local_storage); bpf_task_storage_unlock(); rcu_read_unlock(); - - if (free_task_storage) - kfree_rcu(local_storage, rcu); } static void *bpf_pid_task_storage_lookup_elem(struct bpf_map *map, void *key) @@ -127,8 +120,8 @@ out: return ERR_PTR(err); } -static int bpf_pid_task_storage_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags) +static long bpf_pid_task_storage_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags) { struct bpf_local_storage_data *sdata; struct task_struct *task; @@ -175,12 +168,12 @@ static int task_storage_delete(struct task_struct *task, struct bpf_map *map, if (!nobusy) return -EBUSY; - bpf_selem_unlink(SELEM(sdata), true); + bpf_selem_unlink(SELEM(sdata), false); return 0; } -static int bpf_pid_task_storage_delete_elem(struct bpf_map *map, void *key) +static long bpf_pid_task_storage_delete_elem(struct bpf_map *map, void *key) { struct task_struct *task; unsigned int f_flags; @@ -316,7 +309,7 @@ static int notsupp_get_next_key(struct bpf_map *map, void *key, void *next_key) static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr) { - return bpf_local_storage_map_alloc(attr, &task_cache); + return 
bpf_local_storage_map_alloc(attr, &task_cache, true); } static void task_storage_map_free(struct bpf_map *map) @@ -345,7 +338,7 @@ const struct bpf_func_proto bpf_task_storage_get_recur_proto = { .gpl_only = false, .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &btf_tracing_ids[BTF_TRACING_TYPE_TASK], .arg3_type = ARG_PTR_TO_MAP_VALUE_OR_NULL, .arg4_type = ARG_ANYTHING, @@ -356,7 +349,7 @@ const struct bpf_func_proto bpf_task_storage_get_proto = { .gpl_only = false, .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &btf_tracing_ids[BTF_TRACING_TYPE_TASK], .arg3_type = ARG_PTR_TO_MAP_VALUE_OR_NULL, .arg4_type = ARG_ANYTHING, @@ -367,7 +360,7 @@ const struct bpf_func_proto bpf_task_storage_delete_recur_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &btf_tracing_ids[BTF_TRACING_TYPE_TASK], }; @@ -376,6 +369,6 @@ const struct bpf_func_proto bpf_task_storage_delete_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &btf_tracing_ids[BTF_TRACING_TYPE_TASK], }; diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 1853beaed4be..2c2d1fb9f410 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -3231,12 +3231,6 @@ static void btf_struct_log(struct btf_verifier_env *env, btf_verifier_log(env, "size=%u vlen=%u", t->size, btf_type_vlen(t)); } -enum btf_field_info_type { - BTF_FIELD_SPIN_LOCK, - BTF_FIELD_TIMER, - BTF_FIELD_KPTR, -}; - enum { BTF_FIELD_IGNORE = 0, BTF_FIELD_FOUND = 1, @@ -3562,7 +3556,10 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, { struct module *mod = NULL; const struct btf_type *t; - struct btf *kernel_btf; + /* If a matching btf type is found in kernel or module BTFs, kptr_ref + * is that BTF, otherwise it's program BTF + */ + struct btf *kptr_btf; int ret; s32 id; @@ -3571,7 +3568,20 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, */ t = btf_type_by_id(btf, info->kptr.type_id); id = bpf_find_btf_id(__btf_name_by_offset(btf, t->name_off), BTF_INFO_KIND(t->info), - &kernel_btf); + &kptr_btf); + if (id == -ENOENT) { + /* btf_parse_kptr should only be called w/ btf = program BTF */ + WARN_ON_ONCE(btf_is_kernel(btf)); + + /* Type exists only in program BTF. Assume that it's a MEM_ALLOC + * kptr allocated via bpf_obj_new + */ + field->kptr.dtor = NULL; + id = info->kptr.type_id; + kptr_btf = (struct btf *)btf; + btf_get(kptr_btf); + goto found_dtor; + } if (id < 0) return id; @@ -3588,20 +3598,20 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, * can be used as a referenced pointer and be stored in a map at * the same time. 
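The -ENOENT branch added to btf_parse_kptr() above is what lets a __kptr field reference a type that exists only in program BTF, i.e. an object allocated with bpf_obj_new() rather than a kernel object (such kptrs get no dtor; they are dropped through __bpf_obj_drop_impl() on map teardown, see the bpf_obj_free_fields() change later in this patch). A BPF-side sketch of stashing a local kptr in a map value; type and map names are made up, and the bpf_obj_new()/bpf_obj_drop() wrappers are assumed to come from the selftests' bpf_experimental.h:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include "bpf_experimental.h"   /* bpf_obj_new()/bpf_obj_drop() wrappers */

    struct node_data {
        long key;
        long data;
    };

    /* the __kptr-tagged type below lives only in this program's BTF */
    struct map_value {
        struct node_data __kptr *node;
    };

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, int);
        __type(value, struct map_value);
    } stash SEC(".maps");

    SEC("tp_btf/sys_enter")
    int stash_local_kptr(void *ctx)
    {
        struct map_value *v;
        struct node_data *n;
        int idx = 0;

        v = bpf_map_lookup_elem(&stash, &idx);
        if (!v)
            return 0;

        n = bpf_obj_new(typeof(*n));
        if (!n)
            return 0;

        /* stash the freshly allocated object; drop whatever was there */
        n = bpf_kptr_xchg(&v->node, n);
        if (n)
            bpf_obj_drop(n);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";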
*/ - dtor_btf_id = btf_find_dtor_kfunc(kernel_btf, id); + dtor_btf_id = btf_find_dtor_kfunc(kptr_btf, id); if (dtor_btf_id < 0) { ret = dtor_btf_id; goto end_btf; } - dtor_func = btf_type_by_id(kernel_btf, dtor_btf_id); + dtor_func = btf_type_by_id(kptr_btf, dtor_btf_id); if (!dtor_func) { ret = -ENOENT; goto end_btf; } - if (btf_is_module(kernel_btf)) { - mod = btf_try_get_module(kernel_btf); + if (btf_is_module(kptr_btf)) { + mod = btf_try_get_module(kptr_btf); if (!mod) { ret = -ENXIO; goto end_btf; @@ -3611,7 +3621,7 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, /* We already verified dtor_func to be btf_type_is_func * in register_btf_id_dtor_kfuncs. */ - dtor_func_name = __btf_name_by_offset(kernel_btf, dtor_func->name_off); + dtor_func_name = __btf_name_by_offset(kptr_btf, dtor_func->name_off); addr = kallsyms_lookup_name(dtor_func_name); if (!addr) { ret = -EINVAL; @@ -3620,14 +3630,15 @@ static int btf_parse_kptr(const struct btf *btf, struct btf_field *field, field->kptr.dtor = (void *)addr; } +found_dtor: field->kptr.btf_id = id; - field->kptr.btf = kernel_btf; + field->kptr.btf = kptr_btf; field->kptr.module = mod; return 0; end_mod: module_put(mod); end_btf: - btf_put(kernel_btf); + btf_put(kptr_btf); return ret; } @@ -5494,38 +5505,45 @@ static int btf_check_type_tags(struct btf_verifier_env *env, return 0; } -static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, - u32 log_level, char __user *log_ubuf, u32 log_size) +static int finalize_log(struct bpf_verifier_log *log, bpfptr_t uattr, u32 uattr_size) { + u32 log_true_size; + int err; + + err = bpf_vlog_finalize(log, &log_true_size); + + if (uattr_size >= offsetofend(union bpf_attr, btf_log_true_size) && + copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, btf_log_true_size), + &log_true_size, sizeof(log_true_size))) + err = -EFAULT; + + return err; +} + +static struct btf *btf_parse(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) +{ + bpfptr_t btf_data = make_bpfptr(attr->btf, uattr.is_kernel); + char __user *log_ubuf = u64_to_user_ptr(attr->btf_log_buf); struct btf_struct_metas *struct_meta_tab; struct btf_verifier_env *env = NULL; - struct bpf_verifier_log *log; struct btf *btf = NULL; u8 *data; - int err; + int err, ret; - if (btf_data_size > BTF_MAX_SIZE) + if (attr->btf_size > BTF_MAX_SIZE) return ERR_PTR(-E2BIG); env = kzalloc(sizeof(*env), GFP_KERNEL | __GFP_NOWARN); if (!env) return ERR_PTR(-ENOMEM); - log = &env->log; - if (log_level || log_ubuf || log_size) { - /* user requested verbose verifier output - * and supplied buffer to store the verification trace - */ - log->level = log_level; - log->ubuf = log_ubuf; - log->len_total = log_size; - - /* log attributes have to be sane */ - if (!bpf_verifier_log_attr_valid(log)) { - err = -EINVAL; - goto errout; - } - } + /* user could have requested verbose verifier output + * and supplied buffer to store the verification trace + */ + err = bpf_vlog_init(&env->log, attr->btf_log_level, + log_ubuf, attr->btf_log_size); + if (err) + goto errout_free; btf = kzalloc(sizeof(*btf), GFP_KERNEL | __GFP_NOWARN); if (!btf) { @@ -5534,16 +5552,16 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, } env->btf = btf; - data = kvmalloc(btf_data_size, GFP_KERNEL | __GFP_NOWARN); + data = kvmalloc(attr->btf_size, GFP_KERNEL | __GFP_NOWARN); if (!data) { err = -ENOMEM; goto errout; } btf->data = data; - btf->data_size = btf_data_size; + btf->data_size = attr->btf_size; - if (copy_from_bpfptr(data, btf_data, 
btf_data_size)) { + if (copy_from_bpfptr(data, btf_data, attr->btf_size)) { err = -EFAULT; goto errout; } @@ -5566,7 +5584,7 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, if (err) goto errout; - struct_meta_tab = btf_parse_struct_metas(log, btf); + struct_meta_tab = btf_parse_struct_metas(&env->log, btf); if (IS_ERR(struct_meta_tab)) { err = PTR_ERR(struct_meta_tab); goto errout; @@ -5583,10 +5601,9 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, } } - if (log->level && bpf_verifier_log_full(log)) { - err = -ENOSPC; - goto errout_meta; - } + err = finalize_log(&env->log, uattr, uattr_size); + if (err) + goto errout_free; btf_verifier_env_free(env); refcount_set(&btf->refcnt, 1); @@ -5595,6 +5612,11 @@ static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size, errout_meta: btf_free_struct_meta_tab(btf); errout: + /* overwrite err with -ENOSPC or -EFAULT */ + ret = finalize_log(&env->log, uattr, uattr_size); + if (ret) + err = ret; +errout_free: btf_verifier_env_free(env); if (btf) btf_free(btf); @@ -5900,12 +5922,8 @@ struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog) static bool is_int_ptr(struct btf *btf, const struct btf_type *t) { - /* t comes in already as a pointer */ - t = btf_type_by_id(btf, t->type); - - /* allow const */ - if (BTF_INFO_KIND(t->info) == BTF_KIND_CONST) - t = btf_type_by_id(btf, t->type); + /* skip modifiers */ + t = btf_type_skip_modifiers(btf, t->type, NULL); return btf_type_is_int(t); } @@ -6156,7 +6174,8 @@ enum bpf_struct_walk_result { static int btf_struct_walk(struct bpf_verifier_log *log, const struct btf *btf, const struct btf_type *t, int off, int size, - u32 *next_btf_id, enum bpf_type_flag *flag) + u32 *next_btf_id, enum bpf_type_flag *flag, + const char **field_name) { u32 i, moff, mtrue_end, msize = 0, total_nelems = 0; const struct btf_type *mtype, *elem_type = NULL; @@ -6385,6 +6404,8 @@ error: if (btf_type_is_struct(stype)) { *next_btf_id = id; *flag |= tmp_flag; + if (field_name) + *field_name = mname; return WALK_PTR; } } @@ -6411,7 +6432,8 @@ error: int btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, int off, int size, enum bpf_access_type atype __maybe_unused, - u32 *next_btf_id, enum bpf_type_flag *flag) + u32 *next_btf_id, enum bpf_type_flag *flag, + const char **field_name) { const struct btf *btf = reg->btf; enum bpf_type_flag tmp_flag = 0; @@ -6443,7 +6465,7 @@ int btf_struct_access(struct bpf_verifier_log *log, t = btf_type_by_id(btf, id); do { - err = btf_struct_walk(log, btf, t, off, size, &id, &tmp_flag); + err = btf_struct_walk(log, btf, t, off, size, &id, &tmp_flag, field_name); switch (err) { case WALK_PTR: @@ -6518,7 +6540,7 @@ again: type = btf_type_by_id(btf, id); if (!type) return false; - err = btf_struct_walk(log, btf, type, off, 1, &id, &flag); + err = btf_struct_walk(log, btf, type, off, 1, &id, &flag, NULL); if (err != WALK_STRUCT) return false; @@ -7199,15 +7221,12 @@ static int __btf_new_fd(struct btf *btf) return anon_inode_getfd("btf", &btf_fops, btf, O_RDONLY | O_CLOEXEC); } -int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr) +int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) { struct btf *btf; int ret; - btf = btf_parse(make_bpfptr(attr->btf, uattr.is_kernel), - attr->btf_size, attr->btf_log_level, - u64_to_user_ptr(attr->btf_log_buf), - attr->btf_log_size); + btf = btf_parse(attr, uattr, uattr_size); if (IS_ERR(btf)) return PTR_ERR(btf); @@ -7597,6 +7616,108 @@ 
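For the log rework above, the practical effect of finalize_log() plus the new btf_log_true_size field is a two-step flow: if the (by default now rotating) log did not fit, BPF_BTF_LOAD still fails with -ENOSPC, but the exact size needed is reported back so the caller can retry once with a big enough buffer. A raw-syscall sketch (ptr_to_u64() is a local helper; uapi headers carrying btf_log_true_size are assumed):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    static inline uint64_t ptr_to_u64(const void *p)
    {
        return (uint64_t)(unsigned long)p;
    }

    static int btf_load_with_log(const void *btf_data, uint32_t btf_size,
                                 char *log_buf, uint32_t log_size)
    {
        union bpf_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.btf = ptr_to_u64(btf_data);
        attr.btf_size = btf_size;
        attr.btf_log_buf = ptr_to_u64(log_buf);
        attr.btf_log_size = log_size;
        attr.btf_log_level = 1;

        fd = syscall(__NR_bpf, BPF_BTF_LOAD, &attr, sizeof(attr));
        if (fd < 0 && errno == ENOSPC)
            /* the kernel wrote back the size a buffer needs to hold the
             * complete log; the caller can realloc and retry */
            fprintf(stderr, "log truncated, need %u bytes\n",
                    attr.btf_log_true_size);
        return fd;
    }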
BTF_ID_LIST_GLOBAL(btf_tracing_ids, MAX_BTF_TRACING_TYPE) BTF_TRACING_TYPE_xxx #undef BTF_TRACING_TYPE +static int btf_check_iter_kfuncs(struct btf *btf, const char *func_name, + const struct btf_type *func, u32 func_flags) +{ + u32 flags = func_flags & (KF_ITER_NEW | KF_ITER_NEXT | KF_ITER_DESTROY); + const char *name, *sfx, *iter_name; + const struct btf_param *arg; + const struct btf_type *t; + char exp_name[128]; + u32 nr_args; + + /* exactly one of KF_ITER_{NEW,NEXT,DESTROY} can be set */ + if (!flags || (flags & (flags - 1))) + return -EINVAL; + + /* any BPF iter kfunc should have `struct bpf_iter_<type> *` first arg */ + nr_args = btf_type_vlen(func); + if (nr_args < 1) + return -EINVAL; + + arg = &btf_params(func)[0]; + t = btf_type_skip_modifiers(btf, arg->type, NULL); + if (!t || !btf_type_is_ptr(t)) + return -EINVAL; + t = btf_type_skip_modifiers(btf, t->type, NULL); + if (!t || !__btf_type_is_struct(t)) + return -EINVAL; + + name = btf_name_by_offset(btf, t->name_off); + if (!name || strncmp(name, ITER_PREFIX, sizeof(ITER_PREFIX) - 1)) + return -EINVAL; + + /* sizeof(struct bpf_iter_<type>) should be a multiple of 8 to + * fit nicely in stack slots + */ + if (t->size == 0 || (t->size % 8)) + return -EINVAL; + + /* validate bpf_iter_<type>_{new,next,destroy}(struct bpf_iter_<type> *) + * naming pattern + */ + iter_name = name + sizeof(ITER_PREFIX) - 1; + if (flags & KF_ITER_NEW) + sfx = "new"; + else if (flags & KF_ITER_NEXT) + sfx = "next"; + else /* (flags & KF_ITER_DESTROY) */ + sfx = "destroy"; + + snprintf(exp_name, sizeof(exp_name), "bpf_iter_%s_%s", iter_name, sfx); + if (strcmp(func_name, exp_name)) + return -EINVAL; + + /* only iter constructor should have extra arguments */ + if (!(flags & KF_ITER_NEW) && nr_args != 1) + return -EINVAL; + + if (flags & KF_ITER_NEXT) { + /* bpf_iter_<type>_next() should return pointer */ + t = btf_type_skip_modifiers(btf, func->type, NULL); + if (!t || !btf_type_is_ptr(t)) + return -EINVAL; + } + + if (flags & KF_ITER_DESTROY) { + /* bpf_iter_<type>_destroy() should return void */ + t = btf_type_by_id(btf, func->type); + if (!t || !btf_type_is_void(t)) + return -EINVAL; + } + + return 0; +} + +static int btf_check_kfunc_protos(struct btf *btf, u32 func_id, u32 func_flags) +{ + const struct btf_type *func; + const char *func_name; + int err; + + /* any kfunc should be FUNC -> FUNC_PROTO */ + func = btf_type_by_id(btf, func_id); + if (!func || !btf_type_is_func(func)) + return -EINVAL; + + /* sanity check kfunc name */ + func_name = btf_name_by_offset(btf, func->name_off); + if (!func_name || !func_name[0]) + return -EINVAL; + + func = btf_type_by_id(btf, func->type); + if (!func || !btf_type_is_func_proto(func)) + return -EINVAL; + + if (func_flags & (KF_ITER_NEW | KF_ITER_NEXT | KF_ITER_DESTROY)) { + err = btf_check_iter_kfuncs(btf, func_name, func, func_flags); + if (err) + return err; + } + + return 0; +} + /* Kernel Function (kfunc) BTF ID set registration API */ static int btf_populate_kfunc_set(struct btf *btf, enum btf_kfunc_hook hook, @@ -7773,7 +7894,7 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook, const struct btf_kfunc_id_set *kset) { struct btf *btf; - int ret; + int ret, i; btf = btf_get_module_btf(kset->owner); if (!btf) { @@ -7790,7 +7911,15 @@ static int __register_btf_kfunc_id_set(enum btf_kfunc_hook hook, if (IS_ERR(btf)) return PTR_ERR(btf); + for (i = 0; i < kset->set->cnt; i++) { + ret = btf_check_kfunc_protos(btf, kset->set->pairs[i].id, + kset->set->pairs[i].flags); + if (ret) + goto err_out; + 
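btf_check_iter_kfuncs() above pins down the contract every open-coded iterator must follow: a bpf_iter_<type> struct sized in 8-byte units, and bpf_iter_<type>_{new,next,destroy}() kfuncs where only the constructor may take extra arguments, next returns a possibly-NULL pointer and destroy returns void. The bpf_iter_num_* trio registered further down in this patch is the first such iterator; consumed from a BPF program it looks roughly like this (the __ksym extern declarations follow the usual selftest convention and are an assumption here):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    extern int  bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym;
    extern int *bpf_iter_num_next(struct bpf_iter_num *it) __ksym;
    extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __ksym;

    __u64 total;

    SEC("raw_tp/sys_enter")
    int sum_first_ten(void *ctx)
    {
        struct bpf_iter_num it;                /* lives on the program stack */
        int *v;

        bpf_iter_num_new(&it, 0, 10);          /* constructor */
        while ((v = bpf_iter_num_next(&it)))   /* KF_RET_NULL ends the loop */
            total += *v;
        bpf_iter_num_destroy(&it);             /* destructor is mandatory */

        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";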
} + ret = btf_populate_kfunc_set(btf, hook, kset->set); +err_out: btf_put(btf); return ret; } @@ -8368,16 +8497,15 @@ out: bool btf_nested_type_is_trusted(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, const char *suffix) + const char *field_name, u32 btf_id, const char *suffix) { struct btf *btf = reg->btf; const struct btf_type *walk_type, *safe_type; const char *tname; char safe_tname[64]; long ret, safe_id; - const struct btf_member *member, *m_walk = NULL; + const struct btf_member *member; u32 i; - const char *walk_name; walk_type = btf_type_by_id(btf, reg->btf_id); if (!walk_type) @@ -8397,30 +8525,17 @@ bool btf_nested_type_is_trusted(struct bpf_verifier_log *log, if (!safe_type) return false; - for_each_member(i, walk_type, member) { - u32 moff; - - /* We're looking for the PTR_TO_BTF_ID member in the struct - * type we're walking which matches the specified offset. - * Below, we'll iterate over the fields in the safe variant of - * the struct and see if any of them has a matching type / - * name. - */ - moff = __btf_member_bit_offset(walk_type, member) / 8; - if (off == moff) { - m_walk = member; - break; - } - } - if (m_walk == NULL) - return false; - - walk_name = __btf_name_by_offset(btf, m_walk->name_off); for_each_member(i, safe_type, member) { const char *m_name = __btf_name_by_offset(btf, member->name_off); + const struct btf_type *mtype = btf_type_by_id(btf, member->type); + u32 id; + + if (!btf_type_is_ptr(mtype)) + continue; + btf_type_skip_modifiers(btf, mtype->type, &id); /* If we match on both type and name, the field is considered trusted. */ - if (m_walk->type == member->type && !strcmp(walk_name, m_name)) + if (btf_id == id && !strcmp(field_name, m_name)) return true; } diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 871809e71b4e..8ec18faa74ac 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -540,7 +540,7 @@ static void __cpu_map_entry_replace(struct bpf_cpu_map *cmap, } } -static int cpu_map_delete_elem(struct bpf_map *map, void *key) +static long cpu_map_delete_elem(struct bpf_map *map, void *key) { struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map); u32 key_cpu = *(u32 *)key; @@ -553,8 +553,8 @@ static int cpu_map_delete_elem(struct bpf_map *map, void *key) return 0; } -static int cpu_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long cpu_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { struct bpf_cpu_map *cmap = container_of(map, struct bpf_cpu_map, map); struct bpf_cpumap_val cpumap_value = {}; @@ -667,7 +667,7 @@ static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key) return 0; } -static int cpu_map_redirect(struct bpf_map *map, u64 index, u64 flags) +static long cpu_map_redirect(struct bpf_map *map, u64 index, u64 flags) { return __bpf_xdp_redirect_map(map, index, flags, 0, __cpu_map_lookup_elem); diff --git a/kernel/bpf/cpumask.c b/kernel/bpf/cpumask.c index b6587ec40f1b..7efdf5d770ca 100644 --- a/kernel/bpf/cpumask.c +++ b/kernel/bpf/cpumask.c @@ -9,6 +9,7 @@ /** * struct bpf_cpumask - refcounted BPF cpumask wrapper structure * @cpumask: The actual cpumask embedded in the struct. + * @rcu: The RCU head used to free the cpumask with RCU safety. * @usage: Object reference counter. When the refcount goes to 0, the * memory is released back to the BPF allocator, which provides * RCU safety. 
@@ -24,6 +25,7 @@ */ struct bpf_cpumask { cpumask_t cpumask; + struct rcu_head rcu; refcount_t usage; }; @@ -80,32 +82,14 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) return cpumask; } -/** - * bpf_cpumask_kptr_get() - Attempt to acquire a reference to a BPF cpumask - * stored in a map. - * @cpumaskp: A pointer to a BPF cpumask map value. - * - * Attempts to acquire a reference to a BPF cpumask stored in a map value. The - * cpumask returned by this function must either be embedded in a map as a - * kptr, or freed with bpf_cpumask_release(). This function may return NULL if - * no BPF cpumask was found in the specified map value. - */ -__bpf_kfunc struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumaskp) +static void cpumask_free_cb(struct rcu_head *head) { struct bpf_cpumask *cpumask; - /* The BPF memory allocator frees memory backing its caches in an RCU - * callback. Thus, we can safely use RCU to ensure that the cpumask is - * safe to read. - */ - rcu_read_lock(); - - cpumask = READ_ONCE(*cpumaskp); - if (cpumask && !refcount_inc_not_zero(&cpumask->usage)) - cpumask = NULL; - - rcu_read_unlock(); - return cpumask; + cpumask = container_of(head, struct bpf_cpumask, rcu); + migrate_disable(); + bpf_mem_cache_free(&bpf_cpumask_ma, cpumask); + migrate_enable(); } /** @@ -118,14 +102,8 @@ __bpf_kfunc struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumas */ __bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask) { - if (!cpumask) - return; - - if (refcount_dec_and_test(&cpumask->usage)) { - migrate_disable(); - bpf_mem_cache_free(&bpf_cpumask_ma, cpumask); - migrate_enable(); - } + if (refcount_dec_and_test(&cpumask->usage)) + call_rcu(&cpumask->rcu, cpumask_free_cb); } /** @@ -424,9 +402,8 @@ __diag_pop(); BTF_SET8_START(cpumask_kfunc_btf_ids) BTF_ID_FLAGS(func, bpf_cpumask_create, KF_ACQUIRE | KF_RET_NULL) -BTF_ID_FLAGS(func, bpf_cpumask_release, KF_RELEASE | KF_TRUSTED_ARGS) +BTF_ID_FLAGS(func, bpf_cpumask_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_cpumask_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS) -BTF_ID_FLAGS(func, bpf_cpumask_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_cpumask_first, KF_RCU) BTF_ID_FLAGS(func, bpf_cpumask_first_zero, KF_RCU) BTF_ID_FLAGS(func, bpf_cpumask_set_cpu, KF_RCU) diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index 19b036a228f7..802692fa3905 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -809,7 +809,7 @@ static void __dev_map_entry_free(struct rcu_head *rcu) kfree(dev); } -static int dev_map_delete_elem(struct bpf_map *map, void *key) +static long dev_map_delete_elem(struct bpf_map *map, void *key) { struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); struct bpf_dtab_netdev *old_dev; @@ -826,7 +826,7 @@ static int dev_map_delete_elem(struct bpf_map *map, void *key) return 0; } -static int dev_map_hash_delete_elem(struct bpf_map *map, void *key) +static long dev_map_hash_delete_elem(struct bpf_map *map, void *key) { struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); struct bpf_dtab_netdev *old_dev; @@ -897,8 +897,8 @@ err_out: return ERR_PTR(-EINVAL); } -static int __dev_map_update_elem(struct net *net, struct bpf_map *map, - void *key, void *value, u64 map_flags) +static long __dev_map_update_elem(struct net *net, struct bpf_map *map, + void *key, void *value, u64 map_flags) { struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); struct bpf_dtab_netdev *dev, *old_dev; @@ -939,15 +939,15 @@ static 
int __dev_map_update_elem(struct net *net, struct bpf_map *map, return 0; } -static int dev_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long dev_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { return __dev_map_update_elem(current->nsproxy->net_ns, map, key, value, map_flags); } -static int __dev_map_hash_update_elem(struct net *net, struct bpf_map *map, - void *key, void *value, u64 map_flags) +static long __dev_map_hash_update_elem(struct net *net, struct bpf_map *map, + void *key, void *value, u64 map_flags) { struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); struct bpf_dtab_netdev *dev, *old_dev; @@ -999,21 +999,21 @@ out_err: return err; } -static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { return __dev_map_hash_update_elem(current->nsproxy->net_ns, map, key, value, map_flags); } -static int dev_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags) +static long dev_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags) { return __bpf_xdp_redirect_map(map, ifindex, flags, BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS, __dev_map_lookup_elem); } -static int dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags) +static long dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags) { return __bpf_xdp_redirect_map(map, ifindex, flags, BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS, diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c index 0df4b0c10f59..00c253b84bf5 100644 --- a/kernel/bpf/hashtab.c +++ b/kernel/bpf/hashtab.c @@ -607,6 +607,8 @@ free_htab: static inline u32 htab_map_hash(const void *key, u32 key_len, u32 hashrnd) { + if (likely(key_len % 4 == 0)) + return jhash2(key, key_len / 4, hashrnd); return jhash(key, key_len, hashrnd); } @@ -1073,8 +1075,8 @@ static int check_flags(struct bpf_htab *htab, struct htab_elem *l_old, } /* Called from syscall or from eBPF program */ -static int htab_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long htab_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { struct bpf_htab *htab = container_of(map, struct bpf_htab, map); struct htab_elem *l_new = NULL, *l_old; @@ -1175,8 +1177,8 @@ static void htab_lru_push_free(struct bpf_htab *htab, struct htab_elem *elem) bpf_lru_push_free(&htab->lru, &elem->lru_node); } -static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { struct bpf_htab *htab = container_of(map, struct bpf_htab, map); struct htab_elem *l_new, *l_old = NULL; @@ -1242,9 +1244,9 @@ err: return ret; } -static int __htab_percpu_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags, - bool onallcpus) +static long __htab_percpu_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags, + bool onallcpus) { struct bpf_htab *htab = container_of(map, struct bpf_htab, map); struct htab_elem *l_new = NULL, *l_old; @@ -1297,9 +1299,9 @@ err: return ret; } -static int __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags, - bool onallcpus) +static long __htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags, + bool onallcpus) { struct bpf_htab *htab = 
container_of(map, struct bpf_htab, map); struct htab_elem *l_new = NULL, *l_old; @@ -1364,21 +1366,21 @@ err: return ret; } -static int htab_percpu_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags) +static long htab_percpu_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags) { return __htab_percpu_map_update_elem(map, key, value, map_flags, false); } -static int htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags) +static long htab_lru_percpu_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags) { return __htab_lru_percpu_map_update_elem(map, key, value, map_flags, false); } /* Called from syscall or from eBPF program */ -static int htab_map_delete_elem(struct bpf_map *map, void *key) +static long htab_map_delete_elem(struct bpf_map *map, void *key) { struct bpf_htab *htab = container_of(map, struct bpf_htab, map); struct hlist_nulls_head *head; @@ -1414,7 +1416,7 @@ static int htab_map_delete_elem(struct bpf_map *map, void *key) return ret; } -static int htab_lru_map_delete_elem(struct bpf_map *map, void *key) +static long htab_lru_map_delete_elem(struct bpf_map *map, void *key) { struct bpf_htab *htab = container_of(map, struct bpf_htab, map); struct hlist_nulls_head *head; @@ -2134,8 +2136,8 @@ static const struct bpf_iter_seq_info iter_seq_info = { .seq_priv_size = sizeof(struct bpf_iter_seq_hash_map_info), }; -static int bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_fn, - void *callback_ctx, u64 flags) +static long bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_fn, + void *callback_ctx, u64 flags) { struct bpf_htab *htab = container_of(map, struct bpf_htab, map); struct hlist_nulls_head *head; diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 637ac4e92e75..f04e60a4847f 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -18,6 +18,7 @@ #include <linux/pid_namespace.h> #include <linux/poison.h> #include <linux/proc_ns.h> +#include <linux/sched/task.h> #include <linux/security.h> #include <linux/btf_ids.h> #include <linux/bpf_mem_alloc.h> @@ -257,7 +258,7 @@ BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size) goto err_clear; /* Verifier guarantees that size > 0 */ - strscpy(buf, task->comm, size); + strscpy_pad(buf, task->comm, size); return 0; err_clear: memset(buf, 0, size); @@ -571,7 +572,7 @@ static const struct bpf_func_proto bpf_strncmp_proto = { .func = bpf_strncmp, .gpl_only = false, .ret_type = RET_INTEGER, - .arg1_type = ARG_PTR_TO_MEM, + .arg1_type = ARG_PTR_TO_MEM | MEM_RDONLY, .arg2_type = ARG_CONST_SIZE, .arg3_type = ARG_PTR_TO_CONST_STR, }; @@ -1896,14 +1897,19 @@ __bpf_kfunc void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign) return p; } +void __bpf_obj_drop_impl(void *p, const struct btf_record *rec) +{ + if (rec) + bpf_obj_free_fields(rec, p); + bpf_mem_free(&bpf_global_ma, p); +} + __bpf_kfunc void bpf_obj_drop_impl(void *p__alloc, void *meta__ign) { struct btf_struct_meta *meta = meta__ign; void *p = p__alloc; - if (meta) - bpf_obj_free_fields(meta->record, p); - bpf_mem_free(&bpf_global_ma, p); + __bpf_obj_drop_impl(p, meta ? 
meta->record : NULL); } static void __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *head, bool tail) @@ -2008,73 +2014,8 @@ __bpf_kfunc struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root) */ __bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p) { - return get_task_struct(p); -} - -/** - * bpf_task_acquire_not_zero - Acquire a reference to a rcu task object. A task - * acquired by this kfunc which is not stored in a map as a kptr, must be - * released by calling bpf_task_release(). - * @p: The task on which a reference is being acquired. - */ -__bpf_kfunc struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p) -{ - /* For the time being this function returns NULL, as it's not currently - * possible to safely acquire a reference to a task with RCU protection - * using get_task_struct() and put_task_struct(). This is due to the - * slightly odd mechanics of p->rcu_users, and how task RCU protection - * works. - * - * A struct task_struct is refcounted by two different refcount_t - * fields: - * - * 1. p->usage: The "true" refcount field which tracks a task's - * lifetime. The task is freed as soon as this - * refcount drops to 0. - * - * 2. p->rcu_users: An "RCU users" refcount field which is statically - * initialized to 2, and is co-located in a union with - * a struct rcu_head field (p->rcu). p->rcu_users - * essentially encapsulates a single p->usage - * refcount, and when p->rcu_users goes to 0, an RCU - * callback is scheduled on the struct rcu_head which - * decrements the p->usage refcount. - * - * There are two important implications to this task refcounting logic - * described above. The first is that - * refcount_inc_not_zero(&p->rcu_users) cannot be used anywhere, as - * after the refcount goes to 0, the RCU callback being scheduled will - * cause the memory backing the refcount to again be nonzero due to the - * fields sharing a union. The other is that we can't rely on RCU to - * guarantee that a task is valid in a BPF program. This is because a - * task could have already transitioned to being in the TASK_DEAD - * state, had its rcu_users refcount go to 0, and its rcu callback - * invoked in which it drops its single p->usage reference. At this - * point the task will be freed as soon as the last p->usage reference - * goes to 0, without waiting for another RCU gp to elapse. The only - * way that a BPF program can guarantee that a task is valid is in this - * scenario is to hold a p->usage refcount itself. - * - * Until we're able to resolve this issue, either by pulling - * p->rcu_users and p->rcu out of the union, or by getting rid of - * p->usage and just using p->rcu_users for refcounting, we'll just - * return NULL here. - */ - return NULL; -} - -/** - * bpf_task_kptr_get - Acquire a reference on a struct task_struct kptr. A task - * kptr acquired by this kfunc which is not subsequently stored in a map, must - * be released by calling bpf_task_release(). - * @pp: A pointer to a task kptr on which a reference is being acquired. - */ -__bpf_kfunc struct task_struct *bpf_task_kptr_get(struct task_struct **pp) -{ - /* We must return NULL here until we have clarity on how to properly - * leverage RCU for ensuring a task's lifetime. See the comment above - * in bpf_task_acquire_not_zero() for more details. 
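The removed comment above documented why task references could not previously be taken under RCU; with the rework completed just below, bpf_task_acquire() pins the task through p->rcu_users (and may now fail), while bpf_task_release() pairs with it via put_task_struct_rcu_user(). A BPF-side sketch of the resulting usage; the __ksym declarations mirror the selftest convention and are assumptions here:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    extern struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
    extern void bpf_task_release(struct task_struct *p) __ksym;

    SEC("tp_btf/task_newtask")
    int BPF_PROG(on_newtask, struct task_struct *task, u64 clone_flags)
    {
        struct task_struct *t;

        /* may return NULL now: the refcount_inc_not_zero() on
         * p->rcu_users can observe a task already on its way out */
        t = bpf_task_acquire(task);
        if (!t)
            return 0;

        /* ... use t as a referenced task pointer, possibly stash it in a
         * map value declared as 'struct task_struct __kptr *stored;' ... */

        bpf_task_release(t);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";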
- */ + if (refcount_inc_not_zero(&p->rcu_users)) + return p; return NULL; } @@ -2084,10 +2025,7 @@ __bpf_kfunc struct task_struct *bpf_task_kptr_get(struct task_struct **pp) */ __bpf_kfunc void bpf_task_release(struct task_struct *p) { - if (!p) - return; - - put_task_struct(p); + put_task_struct_rcu_user(p); } #ifdef CONFIG_CGROUPS @@ -2099,39 +2037,7 @@ __bpf_kfunc void bpf_task_release(struct task_struct *p) */ __bpf_kfunc struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp) { - cgroup_get(cgrp); - return cgrp; -} - -/** - * bpf_cgroup_kptr_get - Acquire a reference on a struct cgroup kptr. A cgroup - * kptr acquired by this kfunc which is not subsequently stored in a map, must - * be released by calling bpf_cgroup_release(). - * @cgrpp: A pointer to a cgroup kptr on which a reference is being acquired. - */ -__bpf_kfunc struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp) -{ - struct cgroup *cgrp; - - rcu_read_lock(); - /* Another context could remove the cgroup from the map and release it - * at any time, including after we've done the lookup above. This is - * safe because we're in an RCU read region, so the cgroup is - * guaranteed to remain valid until at least the rcu_read_unlock() - * below. - */ - cgrp = READ_ONCE(*cgrpp); - - if (cgrp && !cgroup_tryget(cgrp)) - /* If the cgroup had been removed from the map and freed as - * described above, cgroup_tryget() will return false. The - * cgroup will be freed at some point after the current RCU gp - * has ended, so just return NULL to the user. - */ - cgrp = NULL; - rcu_read_unlock(); - - return cgrp; + return cgroup_tryget(cgrp) ? cgrp : NULL; } /** @@ -2143,9 +2049,6 @@ __bpf_kfunc struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp) */ __bpf_kfunc void bpf_cgroup_release(struct cgroup *cgrp) { - if (!cgrp) - return; - cgroup_put(cgrp); } @@ -2200,7 +2103,7 @@ __bpf_kfunc struct task_struct *bpf_task_from_pid(s32 pid) rcu_read_lock(); p = find_task_by_pid_ns(pid, &init_pid_ns); if (p) - bpf_task_acquire(p); + p = bpf_task_acquire(p); rcu_read_unlock(); return p; @@ -2372,17 +2275,14 @@ BTF_ID_FLAGS(func, bpf_list_push_front) BTF_ID_FLAGS(func, bpf_list_push_back) BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL) -BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS) -BTF_ID_FLAGS(func, bpf_task_acquire_not_zero, KF_ACQUIRE | KF_RCU | KF_RET_NULL) -BTF_ID_FLAGS(func, bpf_task_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE) BTF_ID_FLAGS(func, bpf_rbtree_add) BTF_ID_FLAGS(func, bpf_rbtree_first, KF_RET_NULL) #ifdef CONFIG_CGROUPS -BTF_ID_FLAGS(func, bpf_cgroup_acquire, KF_ACQUIRE | KF_TRUSTED_ARGS) -BTF_ID_FLAGS(func, bpf_cgroup_kptr_get, KF_ACQUIRE | KF_KPTR_GET | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_cgroup_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_cgroup_release, KF_RELEASE) BTF_ID_FLAGS(func, bpf_cgroup_ancestor, KF_ACQUIRE | KF_RCU | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL) @@ -2411,6 +2311,9 @@ BTF_ID_FLAGS(func, bpf_rcu_read_lock) BTF_ID_FLAGS(func, bpf_rcu_read_unlock) BTF_ID_FLAGS(func, bpf_dynptr_slice, KF_RET_NULL) BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW) +BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL) 
+BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY) BTF_SET8_END(common_btf_ids) static const struct btf_kfunc_id_set common_kfunc_set = { diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c index a993560f200a..4c7bbec4a9e4 100644 --- a/kernel/bpf/local_storage.c +++ b/kernel/bpf/local_storage.c @@ -141,8 +141,8 @@ static void *cgroup_storage_lookup_elem(struct bpf_map *_map, void *key) return &READ_ONCE(storage->buf)->data[0]; } -static int cgroup_storage_update_elem(struct bpf_map *map, void *key, - void *value, u64 flags) +static long cgroup_storage_update_elem(struct bpf_map *map, void *key, + void *value, u64 flags) { struct bpf_cgroup_storage *storage; struct bpf_storage_buffer *new; @@ -348,7 +348,7 @@ static void cgroup_storage_map_free(struct bpf_map *_map) bpf_map_area_free(map); } -static int cgroup_storage_delete_elem(struct bpf_map *map, void *key) +static long cgroup_storage_delete_elem(struct bpf_map *map, void *key) { return -EINVAL; } diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c new file mode 100644 index 000000000000..046ddff37a76 --- /dev/null +++ b/kernel/bpf/log.c @@ -0,0 +1,330 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (c) 2011-2014 PLUMgrid, http://plumgrid.com + * Copyright (c) 2016 Facebook + * Copyright (c) 2018 Covalent IO, Inc. http://covalent.io + */ +#include <uapi/linux/btf.h> +#include <linux/kernel.h> +#include <linux/types.h> +#include <linux/bpf.h> +#include <linux/bpf_verifier.h> +#include <linux/math64.h> + +static bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log) +{ + /* ubuf and len_total should both be specified (or not) together */ + if (!!log->ubuf != !!log->len_total) + return false; + /* log buf without log_level is meaningless */ + if (log->ubuf && log->level == 0) + return false; + if (log->level & ~BPF_LOG_MASK) + return false; + if (log->len_total > UINT_MAX >> 2) + return false; + return true; +} + +int bpf_vlog_init(struct bpf_verifier_log *log, u32 log_level, + char __user *log_buf, u32 log_size) +{ + log->level = log_level; + log->ubuf = log_buf; + log->len_total = log_size; + + /* log attributes have to be sane */ + if (!bpf_verifier_log_attr_valid(log)) + return -EINVAL; + + return 0; +} + +static void bpf_vlog_update_len_max(struct bpf_verifier_log *log, u32 add_len) +{ + /* add_len includes terminal \0, so no need for +1. */ + u64 len = log->end_pos + add_len; + + /* log->len_max could be larger than our current len due to + * bpf_vlog_reset() calls, so we maintain the max of any length at any + * previous point + */ + if (len > UINT_MAX) + log->len_max = UINT_MAX; + else if (len > log->len_max) + log->len_max = len; +} + +void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt, + va_list args) +{ + u64 cur_pos; + u32 new_n, n; + + n = vscnprintf(log->kbuf, BPF_VERIFIER_TMP_LOG_SIZE, fmt, args); + + WARN_ONCE(n >= BPF_VERIFIER_TMP_LOG_SIZE - 1, + "verifier log line truncated - local buffer too short\n"); + + if (log->level == BPF_LOG_KERNEL) { + bool newline = n > 0 && log->kbuf[n - 1] == '\n'; + + pr_err("BPF: %s%s", log->kbuf, newline ? 
"" : "\n"); + return; + } + + n += 1; /* include terminating zero */ + bpf_vlog_update_len_max(log, n); + + if (log->level & BPF_LOG_FIXED) { + /* check if we have at least something to put into user buf */ + new_n = 0; + if (log->end_pos < log->len_total) { + new_n = min_t(u32, log->len_total - log->end_pos, n); + log->kbuf[new_n - 1] = '\0'; + } + + cur_pos = log->end_pos; + log->end_pos += n - 1; /* don't count terminating '\0' */ + + if (log->ubuf && new_n && + copy_to_user(log->ubuf + cur_pos, log->kbuf, new_n)) + goto fail; + } else { + u64 new_end, new_start; + u32 buf_start, buf_end, new_n; + + new_end = log->end_pos + n; + if (new_end - log->start_pos >= log->len_total) + new_start = new_end - log->len_total; + else + new_start = log->start_pos; + + log->start_pos = new_start; + log->end_pos = new_end - 1; /* don't count terminating '\0' */ + + if (!log->ubuf) + return; + + new_n = min(n, log->len_total); + cur_pos = new_end - new_n; + div_u64_rem(cur_pos, log->len_total, &buf_start); + div_u64_rem(new_end, log->len_total, &buf_end); + /* new_end and buf_end are exclusive indices, so if buf_end is + * exactly zero, then it actually points right to the end of + * ubuf and there is no wrap around + */ + if (buf_end == 0) + buf_end = log->len_total; + + /* if buf_start > buf_end, we wrapped around; + * if buf_start == buf_end, then we fill ubuf completely; we + * can't have buf_start == buf_end to mean that there is + * nothing to write, because we always write at least + * something, even if terminal '\0' + */ + if (buf_start < buf_end) { + /* message fits within contiguous chunk of ubuf */ + if (copy_to_user(log->ubuf + buf_start, + log->kbuf + n - new_n, + buf_end - buf_start)) + goto fail; + } else { + /* message wraps around the end of ubuf, copy in two chunks */ + if (copy_to_user(log->ubuf + buf_start, + log->kbuf + n - new_n, + log->len_total - buf_start)) + goto fail; + if (copy_to_user(log->ubuf, + log->kbuf + n - buf_end, + buf_end)) + goto fail; + } + } + + return; +fail: + log->ubuf = NULL; +} + +void bpf_vlog_reset(struct bpf_verifier_log *log, u64 new_pos) +{ + char zero = 0; + u32 pos; + + if (WARN_ON_ONCE(new_pos > log->end_pos)) + return; + + if (!bpf_verifier_log_needed(log) || log->level == BPF_LOG_KERNEL) + return; + + /* if position to which we reset is beyond current log window, + * then we didn't preserve any useful content and should adjust + * start_pos to end up with an empty log (start_pos == end_pos) + */ + log->end_pos = new_pos; + if (log->end_pos < log->start_pos) + log->start_pos = log->end_pos; + + if (!log->ubuf) + return; + + if (log->level & BPF_LOG_FIXED) + pos = log->end_pos + 1; + else + div_u64_rem(new_pos, log->len_total, &pos); + + if (pos < log->len_total && put_user(zero, log->ubuf + pos)) + log->ubuf = NULL; +} + +static void bpf_vlog_reverse_kbuf(char *buf, int len) +{ + int i, j; + + for (i = 0, j = len - 1; i < j; i++, j--) + swap(buf[i], buf[j]); +} + +static int bpf_vlog_reverse_ubuf(struct bpf_verifier_log *log, int start, int end) +{ + /* we split log->kbuf into two equal parts for both ends of array */ + int n = sizeof(log->kbuf) / 2, nn; + char *lbuf = log->kbuf, *rbuf = log->kbuf + n; + + /* Read ubuf's section [start, end) two chunks at a time, from left + * and right side; within each chunk, swap all the bytes; after that + * reverse the order of lbuf and rbuf and write result back to ubuf. + * This way we'll end up with swapped contents of specified + * [start, end) ubuf segment. 
+ */ + while (end - start > 1) { + nn = min(n, (end - start ) / 2); + + if (copy_from_user(lbuf, log->ubuf + start, nn)) + return -EFAULT; + if (copy_from_user(rbuf, log->ubuf + end - nn, nn)) + return -EFAULT; + + bpf_vlog_reverse_kbuf(lbuf, nn); + bpf_vlog_reverse_kbuf(rbuf, nn); + + /* we write lbuf to the right end of ubuf, while rbuf to the + * left one to end up with properly reversed overall ubuf + */ + if (copy_to_user(log->ubuf + start, rbuf, nn)) + return -EFAULT; + if (copy_to_user(log->ubuf + end - nn, lbuf, nn)) + return -EFAULT; + + start += nn; + end -= nn; + } + + return 0; +} + +int bpf_vlog_finalize(struct bpf_verifier_log *log, u32 *log_size_actual) +{ + u32 sublen; + int err; + + *log_size_actual = 0; + if (!log || log->level == 0 || log->level == BPF_LOG_KERNEL) + return 0; + + if (!log->ubuf) + goto skip_log_rotate; + /* If we never truncated log, there is nothing to move around. */ + if (log->start_pos == 0) + goto skip_log_rotate; + + /* Otherwise we need to rotate log contents to make it start from the + * buffer beginning and be a continuous zero-terminated string. Note + * that if log->start_pos != 0 then we definitely filled up entire log + * buffer with no gaps, and we just need to shift buffer contents to + * the left by (log->start_pos % log->len_total) bytes. + * + * Unfortunately, user buffer could be huge and we don't want to + * allocate temporary kernel memory of the same size just to shift + * contents in a straightforward fashion. Instead, we'll be clever and + * do in-place array rotation. This is a leetcode-style problem, which + * could be solved by three rotations. + * + * Let's say we have log buffer that has to be shifted left by 7 bytes + * (spaces and vertical bar is just for demonstrative purposes): + * E F G H I J K | A B C D + * + * First, we reverse entire array: + * D C B A | K J I H G F E + * + * Then we rotate first 4 bytes (DCBA) and separately last 7 bytes + * (KJIHGFE), resulting in a properly rotated array: + * A B C D | E F G H I J K + * + * We'll utilize log->kbuf to read user memory chunk by chunk, swap + * bytes, and write them back. Doing it byte-by-byte would be + * unnecessarily inefficient. Altogether we are going to read and + * write each byte twice, for total 4 memory copies between kernel and + * user space. + */ + + /* length of the chopped off part that will be the beginning; + * len(ABCD) in the example above + */ + div_u64_rem(log->start_pos, log->len_total, &sublen); + sublen = log->len_total - sublen; + + err = bpf_vlog_reverse_ubuf(log, 0, log->len_total); + err = err ?: bpf_vlog_reverse_ubuf(log, 0, sublen); + err = err ?: bpf_vlog_reverse_ubuf(log, sublen, log->len_total); + if (err) + log->ubuf = NULL; + +skip_log_rotate: + *log_size_actual = log->len_max; + + /* properly initialized log has either both ubuf!=NULL and len_total>0 + * or ubuf==NULL and len_total==0, so if this condition doesn't hold, + * we got a fault somewhere along the way, so report it back + */ + if (!!log->ubuf != !!log->len_total) + return -EFAULT; + + /* did truncation actually happen? */ + if (log->ubuf && log->len_max > log->len_total) + return -ENOSPC; + + return 0; +} + +/* log_level controls verbosity level of eBPF verifier. + * bpf_verifier_log_write() is used to dump the verification trace to the log, + * so the user can figure out what's wrong with the program + */ +__printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env, + const char *fmt, ...) 
+{ + va_list args; + + if (!bpf_verifier_log_needed(&env->log)) + return; + + va_start(args, fmt); + bpf_verifier_vlog(&env->log, fmt, args); + va_end(args); +} +EXPORT_SYMBOL_GPL(bpf_verifier_log_write); + +__printf(2, 3) void bpf_log(struct bpf_verifier_log *log, + const char *fmt, ...) +{ + va_list args; + + if (!bpf_verifier_log_needed(log)) + return; + + va_start(args, fmt); + bpf_verifier_vlog(log, fmt, args); + va_end(args); +} +EXPORT_SYMBOL_GPL(bpf_log); diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c index dc23f2ac9cde..e0d3ddf2037a 100644 --- a/kernel/bpf/lpm_trie.c +++ b/kernel/bpf/lpm_trie.c @@ -300,8 +300,8 @@ static struct lpm_trie_node *lpm_trie_node_alloc(const struct lpm_trie *trie, } /* Called from syscall or from eBPF program */ -static int trie_update_elem(struct bpf_map *map, - void *_key, void *value, u64 flags) +static long trie_update_elem(struct bpf_map *map, + void *_key, void *value, u64 flags) { struct lpm_trie *trie = container_of(map, struct lpm_trie, map); struct lpm_trie_node *node, *im_node = NULL, *new_node = NULL; @@ -431,7 +431,7 @@ out: } /* Called from syscall or from eBPF program */ -static int trie_delete_elem(struct bpf_map *map, void *_key) +static long trie_delete_elem(struct bpf_map *map, void *_key) { struct lpm_trie *trie = container_of(map, struct lpm_trie, map); struct bpf_lpm_trie_key *key = _key; diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c index 5fcdacbb8439..410637c225fb 100644 --- a/kernel/bpf/memalloc.c +++ b/kernel/bpf/memalloc.c @@ -121,15 +121,8 @@ static struct llist_node notrace *__llist_del_first(struct llist_head *head) return entry; } -static void *__alloc(struct bpf_mem_cache *c, int node) +static void *__alloc(struct bpf_mem_cache *c, int node, gfp_t flags) { - /* Allocate, but don't deplete atomic reserves that typical - * GFP_ATOMIC would do. irq_work runs on this cpu and kmalloc - * will allocate from the current numa node which is what we - * want here. - */ - gfp_t flags = GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT; - if (c->percpu_size) { void **obj = kmalloc_node(c->percpu_size, flags, node); void *pptr = __alloc_percpu_gfp(c->unit_size, 8, flags); @@ -185,7 +178,12 @@ static void alloc_bulk(struct bpf_mem_cache *c, int cnt, int node) */ obj = __llist_del_first(&c->free_by_rcu); if (!obj) { - obj = __alloc(c, node); + /* Allocate, but don't deplete atomic reserves that typical + * GFP_ATOMIC would do. irq_work runs on this cpu and kmalloc + * will allocate from the current numa node which is what we + * want here. + */ + obj = __alloc(c, node, GFP_NOWAIT | __GFP_NOWARN | __GFP_ACCOUNT); if (!obj) break; } @@ -676,3 +674,46 @@ void notrace bpf_mem_cache_free(struct bpf_mem_alloc *ma, void *ptr) unit_free(this_cpu_ptr(ma->cache), ptr); } + +/* Directly does a kfree() without putting 'ptr' back to the free_llist + * for reuse and without waiting for a rcu_tasks_trace gp. + * The caller must first go through the rcu_tasks_trace gp for 'ptr' + * before calling bpf_mem_cache_raw_free(). + * It could be used when the rcu_tasks_trace callback does not have + * a hold on the original bpf_mem_alloc object that allocated the + * 'ptr'. This should only be used in the uncommon code path. + * Otherwise, the bpf_mem_alloc's free_llist cannot be refilled + * and may affect performance. + */ +void bpf_mem_cache_raw_free(void *ptr) +{ + if (!ptr) + return; + + kfree(ptr - LLIST_NODE_SZ); +} + +/* When flags == GFP_KERNEL, it signals that the caller will not cause + * deadlock when using kmalloc. 
bpf_mem_cache_alloc_flags() will use + * kmalloc if the free_llist is empty. + */ +void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags) +{ + struct bpf_mem_cache *c; + void *ret; + + c = this_cpu_ptr(ma->cache); + + ret = unit_alloc(c); + if (!ret && flags == GFP_KERNEL) { + struct mem_cgroup *memcg, *old_memcg; + + memcg = get_memcg(c); + old_memcg = set_active_memcg(memcg); + ret = __alloc(c, NUMA_NO_NODE, GFP_KERNEL | __GFP_NOWARN | __GFP_ACCOUNT); + set_active_memcg(old_memcg); + mem_cgroup_put(memcg); + } + + return !ret ? NULL : ret + LLIST_NODE_SZ; +} diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c index 63ecbbcb349d..601609164ef3 100644 --- a/kernel/bpf/queue_stack_maps.c +++ b/kernel/bpf/queue_stack_maps.c @@ -95,7 +95,7 @@ static void queue_stack_map_free(struct bpf_map *map) bpf_map_area_free(qs); } -static int __queue_map_get(struct bpf_map *map, void *value, bool delete) +static long __queue_map_get(struct bpf_map *map, void *value, bool delete) { struct bpf_queue_stack *qs = bpf_queue_stack(map); unsigned long flags; @@ -124,7 +124,7 @@ out: } -static int __stack_map_get(struct bpf_map *map, void *value, bool delete) +static long __stack_map_get(struct bpf_map *map, void *value, bool delete) { struct bpf_queue_stack *qs = bpf_queue_stack(map); unsigned long flags; @@ -156,32 +156,32 @@ out: } /* Called from syscall or from eBPF program */ -static int queue_map_peek_elem(struct bpf_map *map, void *value) +static long queue_map_peek_elem(struct bpf_map *map, void *value) { return __queue_map_get(map, value, false); } /* Called from syscall or from eBPF program */ -static int stack_map_peek_elem(struct bpf_map *map, void *value) +static long stack_map_peek_elem(struct bpf_map *map, void *value) { return __stack_map_get(map, value, false); } /* Called from syscall or from eBPF program */ -static int queue_map_pop_elem(struct bpf_map *map, void *value) +static long queue_map_pop_elem(struct bpf_map *map, void *value) { return __queue_map_get(map, value, true); } /* Called from syscall or from eBPF program */ -static int stack_map_pop_elem(struct bpf_map *map, void *value) +static long stack_map_pop_elem(struct bpf_map *map, void *value) { return __stack_map_get(map, value, true); } /* Called from syscall or from eBPF program */ -static int queue_stack_map_push_elem(struct bpf_map *map, void *value, - u64 flags) +static long queue_stack_map_push_elem(struct bpf_map *map, void *value, + u64 flags) { struct bpf_queue_stack *qs = bpf_queue_stack(map); unsigned long irq_flags; @@ -227,14 +227,14 @@ static void *queue_stack_map_lookup_elem(struct bpf_map *map, void *key) } /* Called from syscall or from eBPF program */ -static int queue_stack_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 flags) +static long queue_stack_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 flags) { return -EINVAL; } /* Called from syscall or from eBPF program */ -static int queue_stack_map_delete_elem(struct bpf_map *map, void *key) +static long queue_stack_map_delete_elem(struct bpf_map *map, void *key) { return -EINVAL; } diff --git a/kernel/bpf/reuseport_array.c b/kernel/bpf/reuseport_array.c index 71cb72f5b733..cbf2d8d784b8 100644 --- a/kernel/bpf/reuseport_array.c +++ b/kernel/bpf/reuseport_array.c @@ -59,7 +59,7 @@ static void *reuseport_array_lookup_elem(struct bpf_map *map, void *key) } /* Called from syscall only */ -static int reuseport_array_delete_elem(struct bpf_map *map, void *key) +static long 
reuseport_array_delete_elem(struct bpf_map *map, void *key) { struct reuseport_array *array = reuseport_array(map); u32 index = *(u32 *)key; diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c index 0d2a45ff83f1..875ac9b698d9 100644 --- a/kernel/bpf/ringbuf.c +++ b/kernel/bpf/ringbuf.c @@ -242,13 +242,13 @@ static void *ringbuf_map_lookup_elem(struct bpf_map *map, void *key) return ERR_PTR(-ENOTSUPP); } -static int ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 flags) +static long ringbuf_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 flags) { return -ENOTSUPP; } -static int ringbuf_map_delete_elem(struct bpf_map *map, void *key) +static long ringbuf_map_delete_elem(struct bpf_map *map, void *key) { return -ENOTSUPP; } diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index 0f1d8dced933..b25fce425b2c 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -618,14 +618,14 @@ static int stack_map_get_next_key(struct bpf_map *map, void *key, return 0; } -static int stack_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long stack_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { return -EINVAL; } /* Called from syscall or from eBPF program */ -static int stack_map_delete_elem(struct bpf_map *map, void *key) +static long stack_map_delete_elem(struct bpf_map *map, void *key) { struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map); struct stack_map_bucket *old_bucket; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index f406dfa13792..6d575505f89c 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -520,14 +520,14 @@ static int btf_field_cmp(const void *a, const void *b) } struct btf_field *btf_record_find(const struct btf_record *rec, u32 offset, - enum btf_field_type type) + u32 field_mask) { struct btf_field *field; - if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & type)) + if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & field_mask)) return NULL; field = bsearch(&offset, rec->fields, rec->cnt, sizeof(rec->fields[0]), btf_field_cmp); - if (!field || !(field->type & type)) + if (!field || !(field->type & field_mask)) return NULL; return field; } @@ -650,6 +650,8 @@ void bpf_obj_free_timer(const struct btf_record *rec, void *obj) bpf_timer_cancel_and_free(obj + rec->timer_off); } +extern void __bpf_obj_drop_impl(void *p, const struct btf_record *rec); + void bpf_obj_free_fields(const struct btf_record *rec, void *obj) { const struct btf_field *fields; @@ -659,8 +661,10 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj) return; fields = rec->fields; for (i = 0; i < rec->cnt; i++) { + struct btf_struct_meta *pointee_struct_meta; const struct btf_field *field = &fields[i]; void *field_ptr = obj + field->offset; + void *xchgd_field; switch (fields[i].type) { case BPF_SPIN_LOCK: @@ -672,7 +676,22 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj) WRITE_ONCE(*(u64 *)field_ptr, 0); break; case BPF_KPTR_REF: - field->kptr.dtor((void *)xchg((unsigned long *)field_ptr, 0)); + xchgd_field = (void *)xchg((unsigned long *)field_ptr, 0); + if (!xchgd_field) + break; + + if (!btf_is_kernel(field->kptr.btf)) { + pointee_struct_meta = btf_find_struct_meta(field->kptr.btf, + field->kptr.btf_id); + WARN_ON_ONCE(!pointee_struct_meta); + migrate_disable(); + __bpf_obj_drop_impl(xchgd_field, pointee_struct_meta ? 
+ pointee_struct_meta->record : + NULL); + migrate_enable(); + } else { + field->kptr.dtor(xchgd_field); + } break; case BPF_LIST_HEAD: if (WARN_ON_ONCE(rec->spin_lock_off < 0)) @@ -1287,8 +1306,10 @@ struct bpf_map *bpf_map_get_with_uref(u32 ufd) return map; } -/* map_idr_lock should have been held */ -static struct bpf_map *__bpf_map_inc_not_zero(struct bpf_map *map, bool uref) +/* map_idr_lock should have been held or the map should have been + * protected by rcu read lock. + */ +struct bpf_map *__bpf_map_inc_not_zero(struct bpf_map *map, bool uref) { int refold; @@ -2051,6 +2072,7 @@ static void __bpf_prog_put_noref(struct bpf_prog *prog, bool deferred) { bpf_prog_kallsyms_del_all(prog); btf_put(prog->aux->btf); + module_put(prog->aux->mod); kvfree(prog->aux->jited_linfo); kvfree(prog->aux->linfo); kfree(prog->aux->kfunc_tab); @@ -2479,9 +2501,9 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type) } /* last field in 'union bpf_attr' used by this command */ -#define BPF_PROG_LOAD_LAST_FIELD core_relo_rec_size +#define BPF_PROG_LOAD_LAST_FIELD log_true_size -static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) +static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr, u32 uattr_size) { enum bpf_prog_type type = attr->prog_type; struct bpf_prog *prog, *dst_prog = NULL; @@ -2631,7 +2653,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr) goto free_prog_sec; /* run eBPF verifier */ - err = bpf_check(&prog, attr, uattr); + err = bpf_check(&prog, attr, uattr, uattr_size); if (err < 0) goto free_used_maps; @@ -2806,16 +2828,19 @@ static void bpf_link_show_fdinfo(struct seq_file *m, struct file *filp) const struct bpf_prog *prog = link->prog; char prog_tag[sizeof(prog->tag) * 2 + 1] = { }; - bin2hex(prog_tag, prog->tag, sizeof(prog->tag)); seq_printf(m, "link_type:\t%s\n" - "link_id:\t%u\n" - "prog_tag:\t%s\n" - "prog_id:\t%u\n", + "link_id:\t%u\n", bpf_link_type_strs[link->type], - link->id, - prog_tag, - prog->aux->id); + link->id); + if (prog) { + bin2hex(prog_tag, prog->tag, sizeof(prog->tag)); + seq_printf(m, + "prog_tag:\t%s\n" + "prog_id:\t%u\n", + prog_tag, + prog->aux->id); + } if (link->ops->show_fdinfo) link->ops->show_fdinfo(link, m); } @@ -3097,6 +3122,11 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog, if (err) goto out_unlock; + if (tgt_info.tgt_mod) { + module_put(prog->aux->mod); + prog->aux->mod = tgt_info.tgt_mod; + } + tr = bpf_trampoline_get(key, &tgt_info); if (!tr) { err = -ENOMEM; @@ -4290,7 +4320,8 @@ static int bpf_link_get_info_by_fd(struct file *file, info.type = link->type; info.id = link->id; - info.prog_id = link->prog->aux->id; + if (link->prog) + info.prog_id = link->prog->aux->id; if (link->ops->fill_link_info) { err = link->ops->fill_link_info(link, &info); @@ -4340,9 +4371,9 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr, return err; } -#define BPF_BTF_LOAD_LAST_FIELD btf_log_level +#define BPF_BTF_LOAD_LAST_FIELD btf_log_true_size -static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr) +static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size) { if (CHECK_ATTR(BPF_BTF_LOAD)) return -EINVAL; @@ -4350,7 +4381,7 @@ static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr) if (!bpf_capable()) return -EPERM; - return btf_new_fd(attr, uattr); + return btf_new_fd(attr, uattr, uattr_size); } #define BPF_BTF_GET_FD_BY_ID_LAST_FIELD btf_id @@ -4553,6 +4584,9 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr) if 
(CHECK_ATTR(BPF_LINK_CREATE)) return -EINVAL; + if (attr->link_create.attach_type == BPF_STRUCT_OPS) + return bpf_struct_ops_link_create(attr); + prog = bpf_prog_get(attr->link_create.prog_fd); if (IS_ERR(prog)) return PTR_ERR(prog); @@ -4651,6 +4685,35 @@ out: return ret; } +static int link_update_map(struct bpf_link *link, union bpf_attr *attr) +{ + struct bpf_map *new_map, *old_map = NULL; + int ret; + + new_map = bpf_map_get(attr->link_update.new_map_fd); + if (IS_ERR(new_map)) + return PTR_ERR(new_map); + + if (attr->link_update.flags & BPF_F_REPLACE) { + old_map = bpf_map_get(attr->link_update.old_map_fd); + if (IS_ERR(old_map)) { + ret = PTR_ERR(old_map); + goto out_put; + } + } else if (attr->link_update.old_map_fd) { + ret = -EINVAL; + goto out_put; + } + + ret = link->ops->update_map(link, new_map, old_map); + + if (old_map) + bpf_map_put(old_map); +out_put: + bpf_map_put(new_map); + return ret; +} + #define BPF_LINK_UPDATE_LAST_FIELD link_update.old_prog_fd static int link_update(union bpf_attr *attr) @@ -4671,6 +4734,11 @@ static int link_update(union bpf_attr *attr) if (IS_ERR(link)) return PTR_ERR(link); + if (link->ops->update_map) { + ret = link_update_map(link, attr); + goto out_put_link; + } + new_prog = bpf_prog_get(attr->link_update.new_prog_fd); if (IS_ERR(new_prog)) { ret = PTR_ERR(new_prog); @@ -4991,7 +5059,7 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) err = map_freeze(&attr); break; case BPF_PROG_LOAD: - err = bpf_prog_load(&attr, uattr); + err = bpf_prog_load(&attr, uattr, size); break; case BPF_OBJ_PIN: err = bpf_obj_pin(&attr); @@ -5036,7 +5104,7 @@ static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size) err = bpf_raw_tracepoint_open(&attr); break; case BPF_BTF_LOAD: - err = bpf_btf_load(&attr, uattr); + err = bpf_btf_load(&attr, uattr, size); break; case BPF_BTF_GET_FD_BY_ID: err = bpf_btf_get_fd_by_id(&attr); diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index d0ed7d6f5eec..f61d5138b12b 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -9,7 +9,6 @@ #include <linux/btf.h> #include <linux/rcupdate_trace.h> #include <linux/rcupdate_wait.h> -#include <linux/module.h> #include <linux/static_call.h> #include <linux/bpf_verifier.h> #include <linux/bpf_lsm.h> @@ -172,26 +171,6 @@ out: return tr; } -static int bpf_trampoline_module_get(struct bpf_trampoline *tr) -{ - struct module *mod; - int err = 0; - - preempt_disable(); - mod = __module_text_address((unsigned long) tr->func.addr); - if (mod && !try_module_get(mod)) - err = -ENOENT; - preempt_enable(); - tr->mod = mod; - return err; -} - -static void bpf_trampoline_module_put(struct bpf_trampoline *tr) -{ - module_put(tr->mod); - tr->mod = NULL; -} - static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr) { void *ip = tr->func.addr; @@ -202,8 +181,6 @@ static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr) else ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, old_addr, NULL); - if (!ret) - bpf_trampoline_module_put(tr); return ret; } @@ -238,9 +215,6 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr) tr->func.ftrace_managed = true; } - if (bpf_trampoline_module_get(tr)) - return -ENOENT; - if (tr->func.ftrace_managed) { ftrace_set_filter_ip(tr->fops, (unsigned long)ip, 0, 1); ret = register_ftrace_direct_multi(tr->fops, (long)new_addr); @@ -248,8 +222,6 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr) ret = bpf_arch_text_poke(ip, BPF_MOD_CALL, NULL, new_addr); } - if (ret) - 
bpf_trampoline_module_put(tr); return ret; } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index b2116ca78d9a..d6db6de3e9ea 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -24,6 +24,7 @@ #include <linux/bpf_lsm.h> #include <linux/btf_ids.h> #include <linux/poison.h> +#include <linux/module.h> #include "disasm.h" @@ -302,6 +303,10 @@ struct bpf_kfunc_call_arg_meta { enum bpf_dynptr_type type; u32 id; } initialized_dynptr; + struct { + u8 spi; + u8 frameno; + } iter; u64 mem_size; }; @@ -330,61 +335,6 @@ find_linfo(const struct bpf_verifier_env *env, u32 insn_off) return &linfo[i - 1]; } -void bpf_verifier_vlog(struct bpf_verifier_log *log, const char *fmt, - va_list args) -{ - unsigned int n; - - n = vscnprintf(log->kbuf, BPF_VERIFIER_TMP_LOG_SIZE, fmt, args); - - WARN_ONCE(n >= BPF_VERIFIER_TMP_LOG_SIZE - 1, - "verifier log line truncated - local buffer too short\n"); - - if (log->level == BPF_LOG_KERNEL) { - bool newline = n > 0 && log->kbuf[n - 1] == '\n'; - - pr_err("BPF: %s%s", log->kbuf, newline ? "" : "\n"); - return; - } - - n = min(log->len_total - log->len_used - 1, n); - log->kbuf[n] = '\0'; - if (!copy_to_user(log->ubuf + log->len_used, log->kbuf, n + 1)) - log->len_used += n; - else - log->ubuf = NULL; -} - -static void bpf_vlog_reset(struct bpf_verifier_log *log, u32 new_pos) -{ - char zero = 0; - - if (!bpf_verifier_log_needed(log)) - return; - - log->len_used = new_pos; - if (put_user(zero, log->ubuf + new_pos)) - log->ubuf = NULL; -} - -/* log_level controls verbosity level of eBPF verifier. - * bpf_verifier_log_write() is used to dump the verification trace to the log, - * so the user can figure out what's wrong with the program - */ -__printf(2, 3) void bpf_verifier_log_write(struct bpf_verifier_env *env, - const char *fmt, ...) -{ - va_list args; - - if (!bpf_verifier_log_needed(&env->log)) - return; - - va_start(args, fmt); - bpf_verifier_vlog(&env->log, fmt, args); - va_end(args); -} -EXPORT_SYMBOL_GPL(bpf_verifier_log_write); - __printf(2, 3) static void verbose(void *private_data, const char *fmt, ...) { struct bpf_verifier_env *env = private_data; @@ -398,20 +348,6 @@ __printf(2, 3) static void verbose(void *private_data, const char *fmt, ...) va_end(args); } -__printf(2, 3) void bpf_log(struct bpf_verifier_log *log, - const char *fmt, ...) 
-{ - va_list args; - - if (!bpf_verifier_log_needed(log)) - return; - - va_start(args, fmt); - bpf_verifier_vlog(log, fmt, args); - va_end(args); -} -EXPORT_SYMBOL_GPL(bpf_log); - static const char *ltrim(const char *s) { while (isspace(*s)) @@ -481,8 +417,17 @@ static bool type_is_sk_pointer(enum bpf_reg_type type) type == PTR_TO_XDP_SOCK; } +static bool type_may_be_null(u32 type) +{ + return type & PTR_MAYBE_NULL; +} + static bool reg_type_not_null(enum bpf_reg_type type) { + if (type_may_be_null(type)) + return false; + + type = base_type(type); return type == PTR_TO_SOCKET || type == PTR_TO_TCP_SOCK || type == PTR_TO_MAP_VALUE || @@ -526,11 +471,6 @@ static bool type_is_rdonly_mem(u32 type) return type & MEM_RDONLY; } -static bool type_may_be_null(u32 type) -{ - return type & PTR_MAYBE_NULL; -} - static bool is_acquire_function(enum bpf_func_id func_id, const struct bpf_map *map) { @@ -668,6 +608,7 @@ static char slot_type_char[] = { [STACK_MISC] = 'm', [STACK_ZERO] = '0', [STACK_DYNPTR] = 'd', + [STACK_ITER] = 'i', }; static void print_liveness(struct bpf_verifier_env *env, @@ -742,7 +683,12 @@ static int dynptr_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *re return stack_slot_obj_get_spi(env, reg, "dynptr", BPF_DYNPTR_NR_SLOTS); } -static const char *kernel_type_name(const struct btf* btf, u32 id) +static int iter_get_spi(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int nr_slots) +{ + return stack_slot_obj_get_spi(env, reg, "iter", nr_slots); +} + +static const char *btf_type_name(const struct btf *btf, u32 id) { return btf_name_by_offset(btf, btf_type_by_id(btf, id)->name_off); } @@ -766,6 +712,30 @@ static const char *dynptr_type_str(enum bpf_dynptr_type type) } } +static const char *iter_type_str(const struct btf *btf, u32 btf_id) +{ + if (!btf || btf_id == 0) + return "<invalid>"; + + /* we already validated that type is valid and has conforming name */ + return btf_type_name(btf, btf_id) + sizeof(ITER_PREFIX) - 1; +} + +static const char *iter_state_str(enum bpf_iter_state state) +{ + switch (state) { + case BPF_ITER_STATE_ACTIVE: + return "active"; + case BPF_ITER_STATE_DRAINED: + return "drained"; + case BPF_ITER_STATE_INVALID: + return "<invalid>"; + default: + WARN_ONCE(1, "unknown iter state %d\n", state); + return "<unknown>"; + } +} + static void mark_reg_scratched(struct bpf_verifier_env *env, u32 regno) { env->scratched_regs |= 1U << regno; @@ -1118,6 +1088,157 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg } } +static void __mark_reg_known_zero(struct bpf_reg_state *reg); + +static int mark_stack_slots_iter(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, int insn_idx, + struct btf *btf, u32 btf_id, int nr_slots) +{ + struct bpf_func_state *state = func(env, reg); + int spi, i, j, id; + + spi = iter_get_spi(env, reg, nr_slots); + if (spi < 0) + return spi; + + id = acquire_reference_state(env, insn_idx); + if (id < 0) + return id; + + for (i = 0; i < nr_slots; i++) { + struct bpf_stack_state *slot = &state->stack[spi - i]; + struct bpf_reg_state *st = &slot->spilled_ptr; + + __mark_reg_known_zero(st); + st->type = PTR_TO_STACK; /* we don't have dedicated reg type */ + st->live |= REG_LIVE_WRITTEN; + st->ref_obj_id = i == 0 ? 
id : 0; + st->iter.btf = btf; + st->iter.btf_id = btf_id; + st->iter.state = BPF_ITER_STATE_ACTIVE; + st->iter.depth = 0; + + for (j = 0; j < BPF_REG_SIZE; j++) + slot->slot_type[j] = STACK_ITER; + + mark_stack_slot_scratched(env, spi - i); + } + + return 0; +} + +static int unmark_stack_slots_iter(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, int nr_slots) +{ + struct bpf_func_state *state = func(env, reg); + int spi, i, j; + + spi = iter_get_spi(env, reg, nr_slots); + if (spi < 0) + return spi; + + for (i = 0; i < nr_slots; i++) { + struct bpf_stack_state *slot = &state->stack[spi - i]; + struct bpf_reg_state *st = &slot->spilled_ptr; + + if (i == 0) + WARN_ON_ONCE(release_reference(env, st->ref_obj_id)); + + __mark_reg_not_init(env, st); + + /* see unmark_stack_slots_dynptr() for why we need to set REG_LIVE_WRITTEN */ + st->live |= REG_LIVE_WRITTEN; + + for (j = 0; j < BPF_REG_SIZE; j++) + slot->slot_type[j] = STACK_INVALID; + + mark_stack_slot_scratched(env, spi - i); + } + + return 0; +} + +static bool is_iter_reg_valid_uninit(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, int nr_slots) +{ + struct bpf_func_state *state = func(env, reg); + int spi, i, j; + + /* For -ERANGE (i.e. spi not falling into allocated stack slots), we + * will do check_mem_access to check and update stack bounds later, so + * return true for that case. + */ + spi = iter_get_spi(env, reg, nr_slots); + if (spi == -ERANGE) + return true; + if (spi < 0) + return false; + + for (i = 0; i < nr_slots; i++) { + struct bpf_stack_state *slot = &state->stack[spi - i]; + + for (j = 0; j < BPF_REG_SIZE; j++) + if (slot->slot_type[j] == STACK_ITER) + return false; + } + + return true; +} + +static bool is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg, + struct btf *btf, u32 btf_id, int nr_slots) +{ + struct bpf_func_state *state = func(env, reg); + int spi, i, j; + + spi = iter_get_spi(env, reg, nr_slots); + if (spi < 0) + return false; + + for (i = 0; i < nr_slots; i++) { + struct bpf_stack_state *slot = &state->stack[spi - i]; + struct bpf_reg_state *st = &slot->spilled_ptr; + + /* only main (first) slot has ref_obj_id set */ + if (i == 0 && !st->ref_obj_id) + return false; + if (i != 0 && st->ref_obj_id) + return false; + if (st->iter.btf != btf || st->iter.btf_id != btf_id) + return false; + + for (j = 0; j < BPF_REG_SIZE; j++) + if (slot->slot_type[j] != STACK_ITER) + return false; + } + + return true; +} + +/* Check if given stack slot is "special": + * - spilled register state (STACK_SPILL); + * - dynptr state (STACK_DYNPTR); + * - iter state (STACK_ITER). + */ +static bool is_stack_slot_special(const struct bpf_stack_state *stack) +{ + enum bpf_stack_slot_type type = stack->slot_type[BPF_REG_SIZE - 1]; + + switch (type) { + case STACK_SPILL: + case STACK_DYNPTR: + case STACK_ITER: + return true; + case STACK_INVALID: + case STACK_MISC: + case STACK_ZERO: + return false; + default: + WARN_ONCE(1, "unknown stack slot type %d\n", type); + return true; + } +} + /* The reg state of a pointer or a bounded scalar was saved when * it was spilled to the stack. */ @@ -1164,7 +1285,7 @@ static void print_verifier_state(struct bpf_verifier_env *env, verbose(env, "%s", reg_type_str(env, t)); if (base_type(t) == PTR_TO_BTF_ID) - verbose(env, "%s", kernel_type_name(reg->btf, reg->btf_id)); + verbose(env, "%s", btf_type_name(reg->btf, reg->btf_id)); verbose(env, "("); /* * _a stands for append, was shortened to avoid multiline statements below. 
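(Illustrative sketch, not part of the patch: the STACK_ITER slot tracking added in the hunk above is what lets the verifier accept open-coded iterator loops. Using the bpf_iter_num_* kfuncs that the verifier comments later in this diff refer to, a program the new checks are meant to validate would look roughly as follows; the SEC name and context argument are assumptions made only for the example.)

    SEC("raw_tp")
    int sum_first_ten(void *ctx)
    {
            struct bpf_iter_num it;
            int *v, sum = 0;

            bpf_iter_num_new(&it, 0, 10);          /* iter stack slots become ACTIVE          */
            while ((v = bpf_iter_num_next(&it)))   /* verifier forks NULL and non-NULL states */
                    sum += *v;
            bpf_iter_num_destroy(&it);             /* reference released, slots invalidated   */
            return sum;
    }

In terms of the code above, the new/next/destroy calls correspond to mark_stack_slots_iter(), process_iter_next_call() plus mark_iter_read(), and unmark_stack_slots_iter() respectively; calling next or destroy on uninitialized iterator memory trips the is_iter_reg_valid_init() check, and the reference acquired by new must be released by destroy before the program exits.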
@@ -1267,6 +1388,19 @@ static void print_verifier_state(struct bpf_verifier_env *env, if (reg->ref_obj_id) verbose(env, "(ref_id=%d)", reg->ref_obj_id); break; + case STACK_ITER: + /* only main slot has ref_obj_id set; skip others */ + reg = &state->stack[i].spilled_ptr; + if (!reg->ref_obj_id) + continue; + + verbose(env, " fp%d", (-i - 1) * BPF_REG_SIZE); + print_liveness(env, reg->live); + verbose(env, "=iter_%s(ref_id=%d,state=%s,depth=%u)", + iter_type_str(reg->iter.btf, reg->iter.btf_id), + reg->ref_obj_id, iter_state_str(reg->iter.state), + reg->iter.depth); + break; case STACK_MISC: case STACK_ZERO: default: @@ -1305,10 +1439,10 @@ static inline u32 vlog_alignment(u32 pos) static void print_insn_state(struct bpf_verifier_env *env, const struct bpf_func_state *state) { - if (env->prev_log_len && env->prev_log_len == env->log.len_used) { + if (env->prev_log_pos && env->prev_log_pos == env->log.end_pos) { /* remove new line character */ - bpf_vlog_reset(&env->log, env->prev_log_len - 1); - verbose(env, "%*c;", vlog_alignment(env->prev_insn_print_len), ' '); + bpf_vlog_reset(&env->log, env->prev_log_pos - 1); + verbose(env, "%*c;", vlog_alignment(env->prev_insn_print_pos), ' '); } else { verbose(env, "%d:", env->insn_idx); } @@ -1616,7 +1750,7 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env, elem->insn_idx = insn_idx; elem->prev_insn_idx = prev_insn_idx; elem->next = env->head; - elem->log_pos = env->log.len_used; + elem->log_pos = env->log.end_pos; env->head = elem; env->stack_size++; err = copy_verifier_state(&elem->st, cur); @@ -1946,9 +2080,9 @@ static void __reg_bound_offset(struct bpf_reg_state *reg) struct tnum var64_off = tnum_intersect(reg->var_off, tnum_range(reg->umin_value, reg->umax_value)); - struct tnum var32_off = tnum_intersect(tnum_subreg(reg->var_off), - tnum_range(reg->u32_min_value, - reg->u32_max_value)); + struct tnum var32_off = tnum_intersect(tnum_subreg(var64_off), + tnum_range(reg->u32_min_value, + reg->u32_max_value)); reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off); } @@ -2152,7 +2286,7 @@ static struct bpf_verifier_state *push_async_cb(struct bpf_verifier_env *env, elem->insn_idx = insn_idx; elem->prev_insn_idx = prev_insn_idx; elem->next = env->head; - elem->log_pos = env->log.len_used; + elem->log_pos = env->log.end_pos; env->head = elem; env->stack_size++; if (env->stack_size > BPF_COMPLEXITY_LIMIT_JMP_SEQ) { @@ -2710,6 +2844,25 @@ static int mark_dynptr_read(struct bpf_verifier_env *env, struct bpf_reg_state * state->stack[spi - 1].spilled_ptr.parent, REG_LIVE_READ64); } +static int mark_iter_read(struct bpf_verifier_env *env, struct bpf_reg_state *reg, + int spi, int nr_slots) +{ + struct bpf_func_state *state = func(env, reg); + int err, i; + + for (i = 0; i < nr_slots; i++) { + struct bpf_reg_state *st = &state->stack[spi - i].spilled_ptr; + + err = mark_reg_read(env, st, st->parent, REG_LIVE_READ64); + if (err) + return err; + + mark_stack_slot_scratched(env, spi - i); + } + + return 0; +} + /* This function is supposed to be used by the following 32-bit optimization * code only. It returns TRUE if the source or destination register operates * on 64-bit, otherwise return FALSE. @@ -3691,8 +3844,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env, /* regular write of data into stack destroys any spilled ptr */ state->stack[spi].spilled_ptr.type = NOT_INIT; - /* Mark slots as STACK_MISC if they belonged to spilled ptr. 
*/ - if (is_spilled_reg(&state->stack[spi])) + /* Mark slots as STACK_MISC if they belonged to spilled ptr/dynptr/iter. */ + if (is_stack_slot_special(&state->stack[spi])) for (i = 0; i < BPF_REG_SIZE; i++) scrub_spilled_slot(&state->stack[spi].slot_type[i]); @@ -4085,17 +4238,13 @@ static int check_stack_read(struct bpf_verifier_env *env, } /* Variable offset is prohibited for unprivileged mode for simplicity * since it requires corresponding support in Spectre masking for stack - * ALU. See also retrieve_ptr_limit(). + * ALU. See also retrieve_ptr_limit(). The check in + * check_stack_access_for_ptr_arithmetic() called by + * adjust_ptr_min_max_vals() prevents users from creating stack pointers + * with variable offsets, therefore no check is required here. Further, + * just checking it here would be insufficient as speculative stack + * writes could still lead to unsafe speculative behaviour. */ - if (!env->bypass_spec_v1 && var_off) { - char tn_buf[48]; - - tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n", - ptr_regno, tn_buf); - return -EACCES; - } - if (!var_off) { off += reg->var_off.value; err = check_stack_read_fixed_off(env, state, off, size, @@ -4301,7 +4450,7 @@ static int map_kptr_match_type(struct bpf_verifier_env *env, struct btf_field *kptr_field, struct bpf_reg_state *reg, u32 regno) { - const char *targ_name = kernel_type_name(kptr_field->kptr.btf, kptr_field->kptr.btf_id); + const char *targ_name = btf_type_name(kptr_field->kptr.btf, kptr_field->kptr.btf_id); int perm_flags = PTR_MAYBE_NULL | PTR_TRUSTED | MEM_RCU; const char *reg_name = ""; @@ -4317,7 +4466,7 @@ static int map_kptr_match_type(struct bpf_verifier_env *env, return -EINVAL; } /* We need to verify reg->type and reg->btf, before accessing reg->btf */ - reg_name = kernel_type_name(reg->btf, reg->btf_id); + reg_name = btf_type_name(reg->btf, reg->btf_id); /* For ref_ptr case, release function check should ensure we get one * referenced PTR_TO_BTF_ID, and that its fixed offset is 0. 
For the @@ -4381,6 +4530,8 @@ static bool in_rcu_cs(struct bpf_verifier_env *env) BTF_SET_START(rcu_protected_types) BTF_ID(struct, prog_test_ref_kfunc) BTF_ID(struct, cgroup) +BTF_ID(struct, bpf_cpumask) +BTF_ID(struct, task_struct) BTF_SET_END(rcu_protected_types) static bool rcu_protected_object(const struct btf *btf, u32 btf_id) @@ -4754,6 +4905,11 @@ static bool is_rcu_reg(const struct bpf_reg_state *reg) return reg->type & MEM_RCU; } +static void clear_trusted_flags(enum bpf_type_flag *flag) +{ + *flag &= ~(BPF_REG_TRUSTED_MODIFIERS | MEM_RCU); +} + static int check_pkt_ptr_alignment(struct bpf_verifier_env *env, const struct bpf_reg_state *reg, int off, int size, bool strict) @@ -5158,6 +5314,7 @@ static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val) } #define BTF_TYPE_SAFE_RCU(__type) __PASTE(__type, __safe_rcu) +#define BTF_TYPE_SAFE_RCU_OR_NULL(__type) __PASTE(__type, __safe_rcu_or_null) #define BTF_TYPE_SAFE_TRUSTED(__type) __PASTE(__type, __safe_trusted) /* @@ -5174,18 +5331,39 @@ BTF_TYPE_SAFE_RCU(struct task_struct) { struct task_struct *group_leader; }; +BTF_TYPE_SAFE_RCU(struct cgroup) { + /* cgrp->kn is always accessible as documented in kernel/cgroup/cgroup.c */ + struct kernfs_node *kn; +}; + BTF_TYPE_SAFE_RCU(struct css_set) { struct cgroup *dfl_cgrp; }; +/* RCU trusted: these fields are trusted in RCU CS and can be NULL */ +BTF_TYPE_SAFE_RCU_OR_NULL(struct mm_struct) { + struct file __rcu *exe_file; +}; + +/* skb->sk, req->sk are not RCU protected, but we mark them as such + * because bpf prog accessible sockets are SOCK_RCU_FREE. + */ +BTF_TYPE_SAFE_RCU_OR_NULL(struct sk_buff) { + struct sock *sk; +}; + +BTF_TYPE_SAFE_RCU_OR_NULL(struct request_sock) { + struct sock *sk; +}; + /* full trusted: these fields are trusted even outside of RCU CS and never NULL */ BTF_TYPE_SAFE_TRUSTED(struct bpf_iter_meta) { - __bpf_md_ptr(struct seq_file *, seq); + struct seq_file *seq; }; BTF_TYPE_SAFE_TRUSTED(struct bpf_iter__task) { - __bpf_md_ptr(struct bpf_iter_meta *, meta); - __bpf_md_ptr(struct task_struct *, task); + struct bpf_iter_meta *meta; + struct task_struct *task; }; BTF_TYPE_SAFE_TRUSTED(struct linux_binprm) { @@ -5207,17 +5385,29 @@ BTF_TYPE_SAFE_TRUSTED(struct socket) { static bool type_is_rcu(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - int off) + const char *field_name, u32 btf_id) { BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU(struct task_struct)); + BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU(struct cgroup)); BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU(struct css_set)); - return btf_nested_type_is_trusted(&env->log, reg, off, "__safe_rcu"); + return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, "__safe_rcu"); +} + +static bool type_is_rcu_or_null(struct bpf_verifier_env *env, + struct bpf_reg_state *reg, + const char *field_name, u32 btf_id) +{ + BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU_OR_NULL(struct mm_struct)); + BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU_OR_NULL(struct sk_buff)); + BTF_TYPE_EMIT(BTF_TYPE_SAFE_RCU_OR_NULL(struct request_sock)); + + return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, "__safe_rcu_or_null"); } static bool type_is_trusted(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - int off) + const char *field_name, u32 btf_id) { BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct bpf_iter_meta)); BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct bpf_iter__task)); @@ -5226,7 +5416,7 @@ static bool type_is_trusted(struct bpf_verifier_env *env, BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct dentry)); 
BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct socket)); - return btf_nested_type_is_trusted(&env->log, reg, off, "__safe_trusted"); + return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, "__safe_trusted"); } static int check_ptr_to_btf_access(struct bpf_verifier_env *env, @@ -5238,8 +5428,9 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg = regs + regno; const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id); const char *tname = btf_name_by_offset(reg->btf, t->name_off); + const char *field_name = NULL; enum bpf_type_flag flag = 0; - u32 btf_id; + u32 btf_id = 0; int ret; if (!env->allow_ptr_leaks) { @@ -5284,12 +5475,12 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, return -EACCES; } - if (env->ops->btf_struct_access && !type_is_alloc(reg->type)) { + if (env->ops->btf_struct_access && !type_is_alloc(reg->type) && atype == BPF_WRITE) { if (!btf_is_kernel(reg->btf)) { verbose(env, "verifier internal error: reg->btf must be kernel btf\n"); return -EFAULT; } - ret = env->ops->btf_struct_access(&env->log, reg, off, size, atype, &btf_id, &flag); + ret = env->ops->btf_struct_access(&env->log, reg, off, size); } else { /* Writes are permitted with default btf_struct_access for * program allocated objects (which always have ref_obj_id > 0), @@ -5306,7 +5497,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, return -EFAULT; } - ret = btf_struct_access(&env->log, reg, off, size, atype, &btf_id, &flag); + ret = btf_struct_access(&env->log, reg, off, size, atype, &btf_id, &flag, &field_name); } if (ret < 0) @@ -5334,20 +5525,21 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, * A regular RCU-protected pointer with __rcu tag can also be deemed * trusted if we are in an RCU CS. Such pointer can be NULL. */ - if (type_is_trusted(env, reg, off)) { + if (type_is_trusted(env, reg, field_name, btf_id)) { flag |= PTR_TRUSTED; } else if (in_rcu_cs(env) && !type_may_be_null(reg->type)) { - if (type_is_rcu(env, reg, off)) { + if (type_is_rcu(env, reg, field_name, btf_id)) { /* ignore __rcu tag and mark it MEM_RCU */ flag |= MEM_RCU; - } else if (flag & MEM_RCU) { + } else if (flag & MEM_RCU || + type_is_rcu_or_null(env, reg, field_name, btf_id)) { /* __rcu tagged pointers can be NULL */ - flag |= PTR_MAYBE_NULL; + flag |= MEM_RCU | PTR_MAYBE_NULL; } else if (flag & (MEM_PERCPU | MEM_USER)) { /* keep as-is */ } else { - /* walking unknown pointers yields untrusted pointer */ - flag = PTR_UNTRUSTED; + /* walking unknown pointers yields old deprecated PTR_TO_BTF_ID */ + clear_trusted_flags(&flag); } } else { /* @@ -5361,7 +5553,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, } } else { /* Old compat. 
Deprecated */ - flag &= ~PTR_TRUSTED; + clear_trusted_flags(&flag); } if (atype == BPF_READ && value_regno >= 0) @@ -5420,7 +5612,7 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env, /* Simulate access to a PTR_TO_BTF_ID */ memset(&map_reg, 0, sizeof(map_reg)); mark_btf_ld_reg(env, &map_reg, 0, PTR_TO_BTF_ID, btf_vmlinux, *map->ops->map_btf_id, 0); - ret = btf_struct_access(&env->log, &map_reg, off, size, atype, &btf_id, &flag); + ret = btf_struct_access(&env->log, &map_reg, off, size, atype, &btf_id, &flag, NULL); if (ret < 0) return ret; @@ -6086,6 +6278,9 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, env, regno, reg->off, access_size, zero_size_allowed, ACCESS_HELPER, meta); + case PTR_TO_BTF_ID: + return check_ptr_to_btf_access(env, regs, regno, reg->off, + access_size, BPF_READ, -1); case PTR_TO_CTX: /* in case the function doesn't know how to access the context, * (because we are in a program of type SYSCALL for example), we @@ -6506,6 +6701,203 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn return err; } +static u32 iter_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int spi) +{ + struct bpf_func_state *state = func(env, reg); + + return state->stack[spi].spilled_ptr.ref_obj_id; +} + +static bool is_iter_kfunc(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & (KF_ITER_NEW | KF_ITER_NEXT | KF_ITER_DESTROY); +} + +static bool is_iter_new_kfunc(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_ITER_NEW; +} + +static bool is_iter_next_kfunc(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_ITER_NEXT; +} + +static bool is_iter_destroy_kfunc(struct bpf_kfunc_call_arg_meta *meta) +{ + return meta->kfunc_flags & KF_ITER_DESTROY; +} + +static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg) +{ + /* btf_check_iter_kfuncs() guarantees that first argument of any iter + * kfunc is iter state pointer + */ + return arg == 0 && is_iter_kfunc(meta); +} + +static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_idx, + struct bpf_kfunc_call_arg_meta *meta) +{ + struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[regno]; + const struct btf_type *t; + const struct btf_param *arg; + int spi, err, i, nr_slots; + u32 btf_id; + + /* btf_check_iter_kfuncs() ensures we don't need to validate anything here */ + arg = &btf_params(meta->func_proto)[0]; + t = btf_type_skip_modifiers(meta->btf, arg->type, NULL); /* PTR */ + t = btf_type_skip_modifiers(meta->btf, t->type, &btf_id); /* STRUCT */ + nr_slots = t->size / BPF_REG_SIZE; + + if (is_iter_new_kfunc(meta)) { + /* bpf_iter_<type>_new() expects pointer to uninit iter state */ + if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) { + verbose(env, "expected uninitialized iter_%s as arg #%d\n", + iter_type_str(meta->btf, btf_id), regno); + return -EINVAL; + } + + for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) { + err = check_mem_access(env, insn_idx, regno, + i, BPF_DW, BPF_WRITE, -1, false); + if (err) + return err; + } + + err = mark_stack_slots_iter(env, reg, insn_idx, meta->btf, btf_id, nr_slots); + if (err) + return err; + } else { + /* iter_next() or iter_destroy() expect initialized iter state*/ + if (!is_iter_reg_valid_init(env, reg, meta->btf, btf_id, nr_slots)) { + verbose(env, "expected an initialized iter_%s as arg #%d\n", + iter_type_str(meta->btf, btf_id), regno); + return -EINVAL; + } + + spi = iter_get_spi(env, reg, nr_slots); + if 
(spi < 0) + return spi; + + err = mark_iter_read(env, reg, spi, nr_slots); + if (err) + return err; + + /* remember meta->iter info for process_iter_next_call() */ + meta->iter.spi = spi; + meta->iter.frameno = reg->frameno; + meta->ref_obj_id = iter_ref_obj_id(env, reg, spi); + + if (is_iter_destroy_kfunc(meta)) { + err = unmark_stack_slots_iter(env, reg, nr_slots); + if (err) + return err; + } + } + + return 0; +} + +/* process_iter_next_call() is called when verifier gets to iterator's next + * "method" (e.g., bpf_iter_num_next() for numbers iterator) call. We'll refer + * to it as just "iter_next()" in comments below. + * + * BPF verifier relies on a crucial contract for any iter_next() + * implementation: it should *eventually* return NULL, and once that happens + * it should keep returning NULL. That is, once iterator exhausts elements to + * iterate, it should never reset or spuriously return new elements. + * + * With the assumption of such contract, process_iter_next_call() simulates + * a fork in the verifier state to validate loop logic correctness and safety + * without having to simulate infinite amount of iterations. + * + * In current state, we first assume that iter_next() returned NULL and + * iterator state is set to DRAINED (BPF_ITER_STATE_DRAINED). In such + * conditions we should not form an infinite loop and should eventually reach + * exit. + * + * Besides that, we also fork current state and enqueue it for later + * verification. In a forked state we keep iterator state as ACTIVE + * (BPF_ITER_STATE_ACTIVE) and assume non-NULL return from iter_next(). We + * also bump iteration depth to prevent erroneous infinite loop detection + * later on (see iter_active_depths_differ() comment for details). In this + * state we assume that we'll eventually loop back to another iter_next() + * calls (it could be in exactly same location or in some other instruction, + * it doesn't matter, we don't make any unnecessary assumptions about this, + * everything revolves around iterator state in a stack slot, not which + * instruction is calling iter_next()). When that happens, we either will come + * to iter_next() with equivalent state and can conclude that next iteration + * will proceed in exactly the same way as we just verified, so it's safe to + * assume that loop converges. If not, we'll go on another iteration + * simulation with a different input state, until all possible starting states + * are validated or we reach maximum number of instructions limit. + * + * This way, we will either exhaustively discover all possible input states + * that iterator loop can start with and eventually will converge, or we'll + * effectively regress into bounded loop simulation logic and either reach + * maximum number of instructions if loop is not provably convergent, or there + * is some statically known limit on number of iterations (e.g., if there is + * an explicit `if n > 100 then break;` statement somewhere in the loop). + * + * One very subtle but very important aspect is that we *always* simulate NULL + * condition first (as the current state) before we simulate non-NULL case. + * This has to do with intricacies of scalar precision tracking. By simulating + * "exit condition" of iter_next() returning NULL first, we make sure all the + * relevant precision marks *that will be set **after** we exit iterator loop* + * are propagated backwards to common parent state of NULL and non-NULL + * branches. 
Thanks to that, state equivalence checks done later in forked + * state, when reaching iter_next() for ACTIVE iterator, can assume that + * precision marks are finalized and won't change. Because simulating another + * ACTIVE iterator iteration won't change them (because given same input + * states we'll end up with exactly same output states which we are currently + * comparing; and verification after the loop already propagated back what + * needs to be **additionally** tracked as precise). It's subtle, grok + * precision tracking for more intuitive understanding. + */ +static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx, + struct bpf_kfunc_call_arg_meta *meta) +{ + struct bpf_verifier_state *cur_st = env->cur_state, *queued_st; + struct bpf_func_state *cur_fr = cur_st->frame[cur_st->curframe], *queued_fr; + struct bpf_reg_state *cur_iter, *queued_iter; + int iter_frameno = meta->iter.frameno; + int iter_spi = meta->iter.spi; + + BTF_TYPE_EMIT(struct bpf_iter); + + cur_iter = &env->cur_state->frame[iter_frameno]->stack[iter_spi].spilled_ptr; + + if (cur_iter->iter.state != BPF_ITER_STATE_ACTIVE && + cur_iter->iter.state != BPF_ITER_STATE_DRAINED) { + verbose(env, "verifier internal error: unexpected iterator state %d (%s)\n", + cur_iter->iter.state, iter_state_str(cur_iter->iter.state)); + return -EFAULT; + } + + if (cur_iter->iter.state == BPF_ITER_STATE_ACTIVE) { + /* branch out active iter state */ + queued_st = push_stack(env, insn_idx + 1, insn_idx, false); + if (!queued_st) + return -ENOMEM; + + queued_iter = &queued_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr; + queued_iter->iter.state = BPF_ITER_STATE_ACTIVE; + queued_iter->iter.depth++; + + queued_fr = queued_st->frame[queued_st->curframe]; + mark_ptr_not_null_reg(&queued_fr->regs[BPF_REG_0]); + } + + /* switch to DRAINED state, but keep the depth unchanged */ + /* mark current iter state as drained and assume returned NULL */ + cur_iter->iter.state = BPF_ITER_STATE_DRAINED; + __mark_reg_const_zero(&cur_fr->regs[BPF_REG_0]); + + return 0; +} + static bool arg_type_is_mem_size(enum bpf_arg_type type) { return type == ARG_CONST_SIZE || @@ -6600,6 +6992,7 @@ static const struct bpf_reg_types mem_types = { PTR_TO_MEM, PTR_TO_MEM | MEM_RINGBUF, PTR_TO_BUF, + PTR_TO_BTF_ID | PTR_TRUSTED, }, }; @@ -6709,6 +7102,9 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno, if (arg_type & PTR_MAYBE_NULL) type &= ~PTR_MAYBE_NULL; + if (meta->func_id == BPF_FUNC_kptr_xchg && type & MEM_ALLOC) + type &= ~MEM_ALLOC; + for (i = 0; i < ARRAY_SIZE(compatible->types); i++) { expected = compatible->types[i]; if (expected == NOT_INIT) @@ -6728,10 +7124,23 @@ found: if (base_type(reg->type) != PTR_TO_BTF_ID) return 0; + if (compatible == &mem_types) { + if (!(arg_type & MEM_RDONLY)) { + verbose(env, + "%s() may write into memory pointed by R%d type=%s\n", + func_id_name(meta->func_id), + regno, reg_type_str(env, reg->type)); + return -EACCES; + } + return 0; + } + switch ((int)reg->type) { case PTR_TO_BTF_ID: case PTR_TO_BTF_ID | PTR_TRUSTED: case PTR_TO_BTF_ID | MEM_RCU: + case PTR_TO_BTF_ID | PTR_MAYBE_NULL: + case PTR_TO_BTF_ID | PTR_MAYBE_NULL | MEM_RCU: { /* For bpf_sk_release, it needs to match against first member * 'struct sock_common', hence make an exception for it. 
This @@ -6740,6 +7149,12 @@ found: bool strict_type_match = arg_type_is_release(arg_type) && meta->func_id != BPF_FUNC_sk_release; + if (type_may_be_null(reg->type) && + (!type_may_be_null(arg_type) || arg_type_is_release(arg_type))) { + verbose(env, "Possibly NULL pointer passed to helper arg%d\n", regno); + return -EACCES; + } + if (!arg_btf_id) { if (!compatible->btf_id) { verbose(env, "verifier internal error: missing arg compatible BTF ID\n"); @@ -6763,15 +7178,16 @@ found: btf_vmlinux, *arg_btf_id, strict_type_match)) { verbose(env, "R%d is of type %s but %s is expected\n", - regno, kernel_type_name(reg->btf, reg->btf_id), - kernel_type_name(btf_vmlinux, *arg_btf_id)); + regno, btf_type_name(reg->btf, reg->btf_id), + btf_type_name(btf_vmlinux, *arg_btf_id)); return -EACCES; } } break; } case PTR_TO_BTF_ID | MEM_ALLOC: - if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock) { + if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock && + meta->func_id != BPF_FUNC_kptr_xchg) { verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n"); return -EFAULT; } @@ -6834,7 +7250,7 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env, verbose(env, "R%d must have zero offset when passed to release func\n", regno); verbose(env, "No graph node or root found at R%d type:%s off:%d\n", regno, - kernel_type_name(reg->btf, reg->btf_id), reg->off); + btf_type_name(reg->btf, reg->btf_id), reg->off); return -EINVAL; } @@ -8737,6 +9153,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn if (func_id == BPF_FUNC_kptr_xchg) { ret_btf = meta.kptr_field->kptr.btf; ret_btf_id = meta.kptr_field->kptr.btf_id; + if (!btf_is_kernel(ret_btf)) + regs[BPF_REG_0].type |= MEM_ALLOC; } else { if (fn->ret_btf_id == BPF_PTR_POISON) { verbose(env, "verifier internal error:"); @@ -8870,7 +9288,7 @@ static bool is_kfunc_release(struct bpf_kfunc_call_arg_meta *meta) static bool is_kfunc_trusted_args(struct bpf_kfunc_call_arg_meta *meta) { - return meta->kfunc_flags & KF_TRUSTED_ARGS; + return (meta->kfunc_flags & KF_TRUSTED_ARGS) || is_kfunc_release(meta); } static bool is_kfunc_sleepable(struct bpf_kfunc_call_arg_meta *meta) @@ -9099,6 +9517,7 @@ enum kfunc_ptr_arg_type { KF_ARG_PTR_TO_ALLOC_BTF_ID, /* Allocated object */ KF_ARG_PTR_TO_KPTR, /* PTR_TO_KPTR but type specific */ KF_ARG_PTR_TO_DYNPTR, + KF_ARG_PTR_TO_ITER, KF_ARG_PTR_TO_LIST_HEAD, KF_ARG_PTR_TO_LIST_NODE, KF_ARG_PTR_TO_BTF_ID, /* Also covers reg2btf_ids conversions */ @@ -9220,6 +9639,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, if (is_kfunc_arg_dynptr(meta->btf, &args[argno])) return KF_ARG_PTR_TO_DYNPTR; + if (is_kfunc_arg_iter(meta, argno)) + return KF_ARG_PTR_TO_ITER; + if (is_kfunc_arg_list_head(meta->btf, &args[argno])) return KF_ARG_PTR_TO_LIST_HEAD; @@ -9848,6 +10270,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ break; case KF_ARG_PTR_TO_KPTR: case KF_ARG_PTR_TO_DYNPTR: + case KF_ARG_PTR_TO_ITER: case KF_ARG_PTR_TO_LIST_HEAD: case KF_ARG_PTR_TO_LIST_NODE: case KF_ARG_PTR_TO_RB_ROOT: @@ -9944,6 +10367,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ break; } + case KF_ARG_PTR_TO_ITER: + ret = process_iter_arg(env, regno, insn_idx, meta); + if (ret < 0) + return ret; + break; case KF_ARG_PTR_TO_LIST_HEAD: if (reg->type != PTR_TO_MAP_VALUE && reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) { @@ -10079,24 +10507,21 @@ static int check_kfunc_args(struct 
bpf_verifier_env *env, struct bpf_kfunc_call_ return 0; } -static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, - int *insn_idx_p) +static int fetch_kfunc_meta(struct bpf_verifier_env *env, + struct bpf_insn *insn, + struct bpf_kfunc_call_arg_meta *meta, + const char **kfunc_name) { - const struct btf_type *t, *func, *func_proto, *ptr_type; - u32 i, nargs, func_id, ptr_type_id, release_ref_obj_id; - struct bpf_reg_state *regs = cur_regs(env); - const char *func_name, *ptr_type_name; - bool sleepable, rcu_lock, rcu_unlock; - struct bpf_kfunc_call_arg_meta meta; - int err, insn_idx = *insn_idx_p; - const struct btf_param *args; - const struct btf_type *ret_t; + const struct btf_type *func, *func_proto; + u32 func_id, *kfunc_flags; + const char *func_name; struct btf *desc_btf; - u32 *kfunc_flags; - /* skip for now, but return error when we find this in fixup_kfunc_call */ + if (kfunc_name) + *kfunc_name = NULL; + if (!insn->imm) - return 0; + return -EINVAL; desc_btf = find_kfunc_desc_btf(env, insn->off); if (IS_ERR(desc_btf)) @@ -10105,22 +10530,53 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, func_id = insn->imm; func = btf_type_by_id(desc_btf, func_id); func_name = btf_name_by_offset(desc_btf, func->name_off); + if (kfunc_name) + *kfunc_name = func_name; func_proto = btf_type_by_id(desc_btf, func->type); kfunc_flags = btf_kfunc_id_set_contains(desc_btf, resolve_prog_type(env->prog), func_id); if (!kfunc_flags) { - verbose(env, "calling kernel function %s is not allowed\n", - func_name); return -EACCES; } - /* Prepare kfunc call metadata */ - memset(&meta, 0, sizeof(meta)); - meta.btf = desc_btf; - meta.func_id = func_id; - meta.kfunc_flags = *kfunc_flags; - meta.func_proto = func_proto; - meta.func_name = func_name; + memset(meta, 0, sizeof(*meta)); + meta->btf = desc_btf; + meta->func_id = func_id; + meta->kfunc_flags = *kfunc_flags; + meta->func_proto = func_proto; + meta->func_name = func_name; + + return 0; +} + +static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, + int *insn_idx_p) +{ + const struct btf_type *t, *ptr_type; + u32 i, nargs, ptr_type_id, release_ref_obj_id; + struct bpf_reg_state *regs = cur_regs(env); + const char *func_name, *ptr_type_name; + bool sleepable, rcu_lock, rcu_unlock; + struct bpf_kfunc_call_arg_meta meta; + struct bpf_insn_aux_data *insn_aux; + int err, insn_idx = *insn_idx_p; + const struct btf_param *args; + const struct btf_type *ret_t; + struct btf *desc_btf; + + /* skip for now, but return error when we find this in fixup_kfunc_call */ + if (!insn->imm) + return 0; + + err = fetch_kfunc_meta(env, insn, &meta, &func_name); + if (err == -EACCES && func_name) + verbose(env, "calling kernel function %s is not allowed\n", func_name); + if (err) + return err; + desc_btf = meta.btf; + insn_aux = &env->insn_aux_data[insn_idx]; + + insn_aux->is_iter_next = is_iter_next_kfunc(&meta); if (is_kfunc_destructive(&meta) && !capable(CAP_SYS_BOOT)) { verbose(env, "destructive kfunc calls require CAP_SYS_BOOT capability\n"); @@ -10173,7 +10629,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, err = release_reference(env, regs[meta.release_regno].ref_obj_id); if (err) { verbose(env, "kfunc %s#%d reference has not been acquired before\n", - func_name, func_id); + func_name, meta.func_id); return err; } } @@ -10185,14 +10641,14 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, err = 
ref_convert_owning_non_owning(env, release_ref_obj_id); if (err) { verbose(env, "kfunc %s#%d conversion of owning ref to non-owning failed\n", - func_name, func_id); + func_name, meta.func_id); return err; } err = release_reference(env, release_ref_obj_id); if (err) { verbose(env, "kfunc %s#%d reference has not been acquired before\n", - func_name, func_id); + func_name, meta.func_id); return err; } } @@ -10202,7 +10658,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, set_rbtree_add_callback_state); if (err) { verbose(env, "kfunc %s#%d failed callback verification\n", - func_name, func_id); + func_name, meta.func_id); return err; } } @@ -10211,7 +10667,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, mark_reg_not_init(env, regs, caller_saved[i]); /* Check return type */ - t = btf_type_skip_modifiers(desc_btf, func_proto->type, NULL); + t = btf_type_skip_modifiers(desc_btf, meta.func_proto->type, NULL); if (is_kfunc_acquire(&meta) && !btf_type_is_struct_ptr(meta.btf, t)) { /* Only exception is bpf_obj_new_impl */ @@ -10260,13 +10716,9 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, regs[BPF_REG_0].btf = ret_btf; regs[BPF_REG_0].btf_id = ret_btf_id; - env->insn_aux_data[insn_idx].obj_new_size = ret_t->size; - env->insn_aux_data[insn_idx].kptr_struct_meta = + insn_aux->obj_new_size = ret_t->size; + insn_aux->kptr_struct_meta = btf_find_struct_meta(ret_btf, ret_btf_id); - } else if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) { - env->insn_aux_data[insn_idx].kptr_struct_meta = - btf_find_struct_meta(meta.arg_obj_drop.btf, - meta.arg_obj_drop.btf_id); } else if (meta.func_id == special_kfunc_list[KF_bpf_list_pop_front] || meta.func_id == special_kfunc_list[KF_bpf_list_pop_back]) { struct btf_field *field = meta.arg_list_head.field; @@ -10395,10 +10847,18 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, if (reg_may_point_to_spin_lock(®s[BPF_REG_0]) && !regs[BPF_REG_0].id) regs[BPF_REG_0].id = ++env->id_gen; - } /* else { add_kfunc_call() ensures it is btf_type_is_void(t) } */ + } else if (btf_type_is_void(t)) { + if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) { + if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) { + insn_aux->kptr_struct_meta = + btf_find_struct_meta(meta.arg_obj_drop.btf, + meta.arg_obj_drop.btf_id); + } + } + } - nargs = btf_type_vlen(func_proto); - args = (const struct btf_param *)(func_proto + 1); + nargs = btf_type_vlen(meta.func_proto); + args = (const struct btf_param *)(meta.func_proto + 1); for (i = 0; i < nargs; i++) { u32 regno = i + 1; @@ -10410,6 +10870,12 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, mark_btf_func_reg_size(env, regno, t->size); } + if (is_iter_next_kfunc(&meta)) { + err = process_iter_next_call(env, insn_idx, &meta); + if (err) + return err; + } + return 0; } @@ -12116,10 +12582,14 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode) case BPF_JEQ: if (tnum_is_const(subreg)) return !!tnum_equals_const(subreg, val); + else if (val < reg->u32_min_value || val > reg->u32_max_value) + return 0; break; case BPF_JNE: if (tnum_is_const(subreg)) return !tnum_equals_const(subreg, val); + else if (val < reg->u32_min_value || val > reg->u32_max_value) + return 1; break; case BPF_JSET: if ((~subreg.mask & subreg.value) & val) @@ -12189,10 +12659,14 @@ static int is_branch64_taken(struct 
bpf_reg_state *reg, u64 val, u8 opcode) case BPF_JEQ: if (tnum_is_const(reg->var_off)) return !!tnum_equals_const(reg->var_off, val); + else if (val < reg->umin_value || val > reg->umax_value) + return 0; break; case BPF_JNE: if (tnum_is_const(reg->var_off)) return !tnum_equals_const(reg->var_off, val); + else if (val < reg->umin_value || val > reg->umax_value) + return 1; break; case BPF_JSET: if ((~reg->var_off.mask & reg->var_off.value) & val) @@ -12813,6 +13287,18 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, src_reg->var_off.value, opcode, is_jmp32); + } else if (dst_reg->type == SCALAR_VALUE && + is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) { + pred = is_branch_taken(src_reg, + tnum_subreg(dst_reg->var_off).value, + flip_opcode(opcode), + is_jmp32); + } else if (dst_reg->type == SCALAR_VALUE && + !is_jmp32 && tnum_is_const(dst_reg->var_off)) { + pred = is_branch_taken(src_reg, + dst_reg->var_off.value, + flip_opcode(opcode), + is_jmp32); } else if (reg_is_pkt_pointer_any(dst_reg) && reg_is_pkt_pointer_any(src_reg) && !is_jmp32) { @@ -13407,6 +13893,17 @@ static bool is_prune_point(struct bpf_verifier_env *env, int insn_idx) return env->insn_aux_data[insn_idx].prune_point; } +static void mark_force_checkpoint(struct bpf_verifier_env *env, int idx) +{ + env->insn_aux_data[idx].force_checkpoint = true; +} + +static bool is_force_checkpoint(struct bpf_verifier_env *env, int insn_idx) +{ + return env->insn_aux_data[insn_idx].force_checkpoint; +} + + enum { DONE_EXPLORING = 0, KEEP_EXPLORING = 1, @@ -13522,6 +14019,26 @@ static int visit_insn(int t, struct bpf_verifier_env *env) * async state will be pushed for further exploration. */ mark_prune_point(env, t); + if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { + struct bpf_kfunc_call_arg_meta meta; + + ret = fetch_kfunc_meta(env, insn, &meta, NULL); + if (ret == 0 && is_iter_next_kfunc(&meta)) { + mark_prune_point(env, t); + /* Checking and saving state checkpoints at iter_next() call + * is crucial for fast convergence of open-coded iterator loop + * logic, so we need to force it. If we don't do that, + * is_state_visited() might skip saving a checkpoint, causing + * unnecessarily long sequence of not checkpointed + * instructions and jumps, leading to exhaustion of jump + * history buffer, and potentially other undesired outcomes. + * It is expected that with correct open-coded iterators + * convergence will happen quickly, so we don't run a risk of + * exhausting memory. 
+ */ + mark_force_checkpoint(env, t); + } + } return visit_func_call_insn(t, insns, env, insn->src_reg == BPF_PSEUDO_CALL); case BPF_JA: @@ -14275,6 +14792,8 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, * didn't use them */ for (i = 0; i < old->allocated_stack; i++) { + struct bpf_reg_state *old_reg, *cur_reg; + spi = i / BPF_REG_SIZE; if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ)) { @@ -14331,9 +14850,6 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, return false; break; case STACK_DYNPTR: - { - const struct bpf_reg_state *old_reg, *cur_reg; - old_reg = &old->stack[spi].spilled_ptr; cur_reg = &cur->stack[spi].spilled_ptr; if (old_reg->dynptr.type != cur_reg->dynptr.type || @@ -14341,7 +14857,22 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap)) return false; break; - } + case STACK_ITER: + old_reg = &old->stack[spi].spilled_ptr; + cur_reg = &cur->stack[spi].spilled_ptr; + /* iter.depth is not compared between states as it + * doesn't matter for correctness and would otherwise + * prevent convergence; we maintain it only to prevent + * infinite loop check triggering, see + * iter_active_depths_differ() + */ + if (old_reg->iter.btf != cur_reg->iter.btf || + old_reg->iter.btf_id != cur_reg->iter.btf_id || + old_reg->iter.state != cur_reg->iter.state || + /* ignore {old_reg,cur_reg}->iter.depth, see above */ + !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap)) + return false; + break; case STACK_MISC: case STACK_ZERO: case STACK_INVALID: @@ -14555,10 +15086,11 @@ static int propagate_precision(struct bpf_verifier_env *env, state_reg = state->regs; for (i = 0; i < BPF_REG_FP; i++, state_reg++) { if (state_reg->type != SCALAR_VALUE || - !state_reg->precise) + !state_reg->precise || + !(state_reg->live & REG_LIVE_READ)) continue; if (env->log.level & BPF_LOG_LEVEL2) - verbose(env, "frame %d: propagating r%d\n", i, fr); + verbose(env, "frame %d: propagating r%d\n", fr, i); err = mark_chain_precision_frame(env, fr, i); if (err < 0) return err; @@ -14569,11 +15101,12 @@ static int propagate_precision(struct bpf_verifier_env *env, continue; state_reg = &state->stack[i].spilled_ptr; if (state_reg->type != SCALAR_VALUE || - !state_reg->precise) + !state_reg->precise || + !(state_reg->live & REG_LIVE_READ)) continue; if (env->log.level & BPF_LOG_LEVEL2) verbose(env, "frame %d: propagating fp%d\n", - (-i - 1) * BPF_REG_SIZE, fr); + fr, (-i - 1) * BPF_REG_SIZE); err = mark_chain_precision_stack_frame(env, fr, i); if (err < 0) return err; @@ -14600,6 +15133,92 @@ static bool states_maybe_looping(struct bpf_verifier_state *old, return true; } +static bool is_iter_next_insn(struct bpf_verifier_env *env, int insn_idx) +{ + return env->insn_aux_data[insn_idx].is_iter_next; +} + +/* is_state_visited() handles iter_next() (see process_iter_next_call() for + * terminology) calls specially: as opposed to bounded BPF loops, it *expects* + * states to match, which otherwise would look like an infinite loop. So while + * iter_next() calls are taken care of, we still need to be careful and + * prevent erroneous and too eager declaration of "ininite loop", when + * iterators are involved. + * + * Here's a situation in pseudo-BPF assembly form: + * + * 0: again: ; set up iter_next() call args + * 1: r1 = &it ; <CHECKPOINT HERE> + * 2: call bpf_iter_num_next ; this is iter_next() call + * 3: if r0 == 0 goto done + * 4: ... 
something useful here ... + * 5: goto again ; another iteration + * 6: done: + * 7: r1 = &it + * 8: call bpf_iter_num_destroy ; clean up iter state + * 9: exit + * + * This is a typical loop. Let's assume that we have a prune point at 1:, + * before we get to `call bpf_iter_num_next` (e.g., because of that `goto + * again`, assuming other heuristics don't get in a way). + * + * When we first time come to 1:, let's say we have some state X. We proceed + * to 2:, fork states, enqueue ACTIVE, validate NULL case successfully, exit. + * Now we come back to validate that forked ACTIVE state. We proceed through + * 3-5, come to goto, jump to 1:. Let's assume our state didn't change, so we + * are converging. But the problem is that we don't know that yet, as this + * convergence has to happen at iter_next() call site only. So if nothing is + * done, at 1: verifier will use bounded loop logic and declare infinite + * looping (and would be *technically* correct, if not for iterator's + * "eventual sticky NULL" contract, see process_iter_next_call()). But we + * don't want that. So what we do in process_iter_next_call() when we go on + * another ACTIVE iteration, we bump slot->iter.depth, to mark that it's + * a different iteration. So when we suspect an infinite loop, we additionally + * check if any of the *ACTIVE* iterator states depths differ. If yes, we + * pretend we are not looping and wait for next iter_next() call. + * + * This only applies to ACTIVE state. In DRAINED state we don't expect to + * loop, because that would actually mean infinite loop, as DRAINED state is + * "sticky", and so we'll keep returning into the same instruction with the + * same state (at least in one of possible code paths). + * + * This approach allows to keep infinite loop heuristic even in the face of + * active iterator. E.g., C snippet below is and will be detected as + * inifintely looping: + * + * struct bpf_iter_num it; + * int *p, x; + * + * bpf_iter_num_new(&it, 0, 10); + * while ((p = bpf_iter_num_next(&t))) { + * x = p; + * while (x--) {} // <<-- infinite loop here + * } + * + */ +static bool iter_active_depths_differ(struct bpf_verifier_state *old, struct bpf_verifier_state *cur) +{ + struct bpf_reg_state *slot, *cur_slot; + struct bpf_func_state *state; + int i, fr; + + for (fr = old->curframe; fr >= 0; fr--) { + state = old->frame[fr]; + for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) { + if (state->stack[i].slot_type[0] != STACK_ITER) + continue; + + slot = &state->stack[i].spilled_ptr; + if (slot->iter.state != BPF_ITER_STATE_ACTIVE) + continue; + + cur_slot = &cur->frame[fr]->stack[i].spilled_ptr; + if (cur_slot->iter.depth != slot->iter.depth) + return true; + } + } + return false; +} static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) { @@ -14607,7 +15226,8 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) struct bpf_verifier_state_list *sl, **pprev; struct bpf_verifier_state *cur = env->cur_state, *new; int i, j, err, states_cnt = 0; - bool add_new_state = env->test_state_freq ? true : false; + bool force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx); + bool add_new_state = force_new_state; /* bpf progs typically have pruning point every 4 instructions * http://vger.kernel.org/bpfconf2019.html#session-1 @@ -14647,8 +15267,46 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) * Since the verifier still needs to catch infinite loops * inside async callbacks. 
*/ - } else if (states_maybe_looping(&sl->state, cur) && - states_equal(env, &sl->state, cur)) { + goto skip_inf_loop_check; + } + /* BPF open-coded iterators loop detection is special. + * states_maybe_looping() logic is too simplistic in detecting + * states that *might* be equivalent, because it doesn't know + * about ID remapping, so don't even perform it. + * See process_iter_next_call() and iter_active_depths_differ() + * for overview of the logic. When current and one of parent + * states are detected as equivalent, it's a good thing: we prove + * convergence and can stop simulating further iterations. + * It's safe to assume that iterator loop will finish, taking into + * account iter_next() contract of eventually returning + * sticky NULL result. + */ + if (is_iter_next_insn(env, insn_idx)) { + if (states_equal(env, &sl->state, cur)) { + struct bpf_func_state *cur_frame; + struct bpf_reg_state *iter_state, *iter_reg; + int spi; + + cur_frame = cur->frame[cur->curframe]; + /* btf_check_iter_kfuncs() enforces that + * iter state pointer is always the first arg + */ + iter_reg = &cur_frame->regs[BPF_REG_1]; + /* current state is valid due to states_equal(), + * so we can assume valid iter and reg state, + * no need for extra (re-)validations + */ + spi = __get_spi(iter_reg->off + iter_reg->var_off.value); + iter_state = &func(env, iter_reg)->stack[spi].spilled_ptr; + if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE) + goto hit; + } + goto skip_inf_loop_check; + } + /* attempt to detect infinite loop to avoid unnecessary doomed work */ + if (states_maybe_looping(&sl->state, cur) && + states_equal(env, &sl->state, cur) && + !iter_active_depths_differ(&sl->state, cur)) { verbose_linfo(env, insn_idx, "; "); verbose(env, "infinite loop detected at insn %d\n", insn_idx); return -EINVAL; @@ -14665,13 +15323,15 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) * This threshold shouldn't be too high either, since states * at the end of the loop are likely to be useful in pruning. */ - if (!env->test_state_freq && +skip_inf_loop_check: + if (!force_new_state && env->jmps_processed - env->prev_jmps_processed < 20 && env->insn_processed - env->prev_insn_processed < 100) add_new_state = false; goto miss; } if (states_equal(env, &sl->state, cur)) { +hit: sl->hit_cnt++; /* reached equivalent register/stack state, * prune the search. 
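A converging counterpart to the infinite-loop snippet in the comment above may help: the loop below has exactly the shape the new convergence logic is meant to accept. This is a sketch in BPF C; the kfunc prototypes are restated by hand (the selftests pull them from a shared header), and the section and function names are arbitrary.

    #include "vmlinux.h"               /* provides struct bpf_iter_num */
    #include <bpf/bpf_helpers.h>

    extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym;
    extern int *bpf_iter_num_next(struct bpf_iter_num *it) __ksym;
    extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __ksym;

    SEC("tc")
    int sum_first_ten(struct __sk_buff *skb)
    {
            struct bpf_iter_num it;
            int *v, sum = 0;

            bpf_iter_num_new(&it, 0, 10);          /* iterator becomes ACTIVE */
            while ((v = bpf_iter_num_next(&it)))
                    sum += *v;                     /* NULL is sticky once the range ends */
            bpf_iter_num_destroy(&it);             /* must be destroyed before exit */

            return sum;
    }

    char LICENSE[] SEC("license") = "GPL";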
@@ -14978,11 +15638,11 @@ static int do_check(struct bpf_verifier_env *env) print_insn_state(env, state->frame[state->curframe]); verbose_linfo(env, env->insn_idx, "; "); - env->prev_log_len = env->log.len_used; + env->prev_log_pos = env->log.end_pos; verbose(env, "%d: ", env->insn_idx); print_bpf_insn(&cbs, insn, env->allow_ptr_leaks); - env->prev_insn_print_len = env->log.len_used - env->prev_log_len; - env->prev_log_len = env->log.len_used; + env->prev_insn_print_pos = env->log.end_pos - env->prev_log_pos; + env->prev_log_pos = env->log.end_pos; } if (bpf_prog_is_offloaded(env->prog->aux)) { @@ -15301,8 +15961,8 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env, goto err_put; } - if (!btf_type_is_var(t)) { - verbose(env, "pseudo btf_id %d in ldimm64 isn't KIND_VAR.\n", id); + if (!btf_type_is_var(t) && !btf_type_is_func(t)) { + verbose(env, "pseudo btf_id %d in ldimm64 isn't KIND_VAR or KIND_FUNC\n", id); err = -EINVAL; goto err_put; } @@ -15315,6 +15975,14 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env, err = -ENOENT; goto err_put; } + insn[0].imm = (u32)addr; + insn[1].imm = addr >> 32; + + if (btf_type_is_func(t)) { + aux->btf_var.reg_type = PTR_TO_MEM | MEM_RDONLY; + aux->btf_var.mem_size = 0; + goto check_btf; + } datasec_id = find_btf_percpu_datasec(btf); if (datasec_id > 0) { @@ -15327,9 +15995,6 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env, } } - insn[0].imm = (u32)addr; - insn[1].imm = addr >> 32; - type = t->type; t = btf_type_skip_modifiers(btf, type, NULL); if (percpu) { @@ -15357,7 +16022,7 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env, aux->btf_var.btf = btf; aux->btf_var.btf_id = type; } - +check_btf: /* check whether we recorded this BTF (and maybe module) already */ for (i = 0; i < env->used_btf_cnt; i++) { if (env->used_btfs[i].btf == btf) { @@ -17032,21 +17697,21 @@ static int do_misc_fixups(struct bpf_verifier_env *env) BUILD_BUG_ON(!__same_type(ops->map_lookup_elem, (void *(*)(struct bpf_map *map, void *key))NULL)); BUILD_BUG_ON(!__same_type(ops->map_delete_elem, - (int (*)(struct bpf_map *map, void *key))NULL)); + (long (*)(struct bpf_map *map, void *key))NULL)); BUILD_BUG_ON(!__same_type(ops->map_update_elem, - (int (*)(struct bpf_map *map, void *key, void *value, + (long (*)(struct bpf_map *map, void *key, void *value, u64 flags))NULL)); BUILD_BUG_ON(!__same_type(ops->map_push_elem, - (int (*)(struct bpf_map *map, void *value, + (long (*)(struct bpf_map *map, void *value, u64 flags))NULL)); BUILD_BUG_ON(!__same_type(ops->map_pop_elem, - (int (*)(struct bpf_map *map, void *value))NULL)); + (long (*)(struct bpf_map *map, void *value))NULL)); BUILD_BUG_ON(!__same_type(ops->map_peek_elem, - (int (*)(struct bpf_map *map, void *value))NULL)); + (long (*)(struct bpf_map *map, void *value))NULL)); BUILD_BUG_ON(!__same_type(ops->map_redirect, - (int (*)(struct bpf_map *map, u64 index, u64 flags))NULL)); + (long (*)(struct bpf_map *map, u64 index, u64 flags))NULL)); BUILD_BUG_ON(!__same_type(ops->map_for_each_callback, - (int (*)(struct bpf_map *map, + (long (*)(struct bpf_map *map, bpf_callback_t callback_fn, void *callback_ctx, u64 flags))NULL)); @@ -17654,6 +18319,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, const char *tname; struct btf *btf; long addr = 0; + struct module *mod = NULL; if (!btf_id) { bpf_log(log, "Tracing programs must provide btf_id\n"); @@ -17827,8 +18493,17 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, else addr = (long) 
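Allowing KIND_FUNC targets in ldimm64 is what makes load-time kfunc detection possible from the program side: a kfunc declared as a weak ksym resolves to its kernel address, or to zero when the running kernel does not have it, and the verifier eliminates the branch that cannot be taken. A hedged BPF C sketch (bpf_iter_num_new is only an example target; libbpf from this cycle also offers a bpf_ksym_exists() wrapper for the same test):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    /* Weak ksym: if the kernel's BTF has no entry for this kfunc, the
     * address resolves to zero instead of failing the load.
     */
    extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym __weak;

    SEC("tc")
    int probe(struct __sk_buff *skb)
    {
            if (!bpf_iter_num_new)      /* address is a load-time constant */
                    return 0;           /* kfunc absent: this branch survives,
                                         * the other is dead-code eliminated */
            /* safe to rely on the kfunc from here on */
            return 1;
    }

    char LICENSE[] SEC("license") = "GPL";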
tgt_prog->aux->func[subprog]->bpf_func; } else { - addr = kallsyms_lookup_name(tname); + if (btf_is_module(btf)) { + mod = btf_try_get_module(btf); + if (mod) + addr = find_kallsyms_symbol_value(mod, tname); + else + addr = 0; + } else { + addr = kallsyms_lookup_name(tname); + } if (!addr) { + module_put(mod); bpf_log(log, "The address of function %s cannot be found\n", tname); @@ -17868,11 +18543,13 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, break; } if (ret) { + module_put(mod); bpf_log(log, "%s is not sleepable\n", tname); return ret; } } else if (prog->expected_attach_type == BPF_MODIFY_RETURN) { if (tgt_prog) { + module_put(mod); bpf_log(log, "can't modify return codes of BPF programs\n"); return -EINVAL; } @@ -17881,6 +18558,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, !check_attach_modify_return(addr, tname)) ret = 0; if (ret) { + module_put(mod); bpf_log(log, "%s() is not modifiable\n", tname); return ret; } @@ -17891,6 +18569,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log, tgt_info->tgt_addr = addr; tgt_info->tgt_name = tname; tgt_info->tgt_type = t; + tgt_info->tgt_mod = mod; return 0; } @@ -17970,6 +18649,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env) /* store info about the attachment target that will be used later */ prog->aux->attach_func_proto = tgt_info.tgt_type; prog->aux->attach_func_name = tgt_info.tgt_name; + prog->aux->mod = tgt_info.tgt_mod; if (tgt_prog) { prog->aux->saved_dst_prog_type = tgt_prog->type; @@ -18014,12 +18694,12 @@ struct btf *bpf_get_btf_vmlinux(void) return btf_vmlinux; } -int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) +int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u32 uattr_size) { u64 start_time = ktime_get_ns(); struct bpf_verifier_env *env; - struct bpf_verifier_log *log; - int i, len, ret = -EINVAL; + int i, len, ret = -EINVAL, err; + u32 log_true_size; bool is_priv; /* no program is valid */ @@ -18032,7 +18712,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) env = kzalloc(sizeof(struct bpf_verifier_env), GFP_KERNEL); if (!env) return -ENOMEM; - log = &env->log; len = (*prog)->len; env->insn_aux_data = @@ -18053,20 +18732,14 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr) if (!is_priv) mutex_lock(&bpf_verifier_lock); - if (attr->log_level || attr->log_buf || attr->log_size) { - /* user requested verbose verifier output - * and supplied buffer to store the verification trace - */ - log->level = attr->log_level; - log->ubuf = (char __user *) (unsigned long) attr->log_buf; - log->len_total = attr->log_size; - - /* log attributes have to be sane */ - if (!bpf_verifier_log_attr_valid(log)) { - ret = -EINVAL; - goto err_unlock; - } - } + /* user could have requested verbose verifier output + * and supplied buffer to store the verification trace + */ + ret = bpf_vlog_init(&env->log, attr->log_level, + (char __user *) (unsigned long) attr->log_buf, + attr->log_size); + if (ret) + goto err_unlock; mark_verifier_state_clean(env); @@ -18180,9 +18853,14 @@ skip_full_check: print_verification_stats(env); env->prog->aux->verified_insns = env->insn_processed; - if (log->level && bpf_verifier_log_full(log)) - ret = -ENOSPC; - if (log->level && !log->ubuf) { + /* preserve original error even if log finalization is successful */ + err = bpf_vlog_finalize(&env->log, &log_true_size); + if (err) + ret = err; + + if (uattr_size >= offsetofend(union bpf_attr, log_true_size) && 
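With the module lookup above in place, nothing extra is needed on the program side to trace a module function; the attach name only has to be a function described by the module's BTF. A sketch, where my_module_func is a hypothetical symbol standing in for any function of a loaded module:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    /* "my_module_func" is a placeholder for a function found in a loaded
     * module's BTF; libbpf locates the owning module, and the verifier now
     * resolves the address via the module's kallsyms and pins the module.
     */
    SEC("fentry/my_module_func")
    int BPF_PROG(on_module_func)
    {
            bpf_printk("module function called");
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";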
+ copy_to_bpfptr_offset(uattr, offsetof(union bpf_attr, log_true_size), + &log_true_size, sizeof(log_true_size))) { ret = -EFAULT; goto err_release_maps; } diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 935e8121b21e..8a5294f4ce72 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -1465,8 +1465,18 @@ static struct cgroup *current_cgns_cgroup_dfl(void) { struct css_set *cset; - cset = current->nsproxy->cgroup_ns->root_cset; - return __cset_cgroup_from_root(cset, &cgrp_dfl_root); + if (current->nsproxy) { + cset = current->nsproxy->cgroup_ns->root_cset; + return __cset_cgroup_from_root(cset, &cgrp_dfl_root); + } else { + /* + * NOTE: This function may be called from bpf_cgroup_from_id() + * on a task which has already passed exit_task_namespaces() and + * nsproxy == NULL. Fall back to cgrp_dfl_root which will make all + * cgroups visible for lookups. + */ + return &cgrp_dfl_root.cgrp; + } } /* look up cgroup associated with given css_set on the specified hierarchy */ diff --git a/kernel/module/internal.h b/kernel/module/internal.h index 2e2bf236f558..1c877561a7d2 100644 --- a/kernel/module/internal.h +++ b/kernel/module/internal.h @@ -246,7 +246,6 @@ static inline void kmemleak_load_module(const struct module *mod, void init_build_id(struct module *mod, const struct load_info *info); void layout_symtab(struct module *mod, struct load_info *info); void add_kallsyms(struct module *mod, const struct load_info *info); -unsigned long find_kallsyms_symbol_value(struct module *mod, const char *name); static inline bool sect_empty(const Elf_Shdr *sect) { diff --git a/kernel/module/kallsyms.c b/kernel/module/kallsyms.c index ab2376a1be88..bdc911dbcde5 100644 --- a/kernel/module/kallsyms.c +++ b/kernel/module/kallsyms.c @@ -442,7 +442,7 @@ int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type, } /* Given a module and name of symbol, find and return the symbol's value */ -unsigned long find_kallsyms_symbol_value(struct module *mod, const char *name) +static unsigned long __find_kallsyms_symbol_value(struct module *mod, const char *name) { unsigned int i; struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms); @@ -466,7 +466,7 @@ static unsigned long __module_kallsyms_lookup_name(const char *name) if (colon) { mod = find_module_all(name, colon - name, false); if (mod) - return find_kallsyms_symbol_value(mod, colon + 1); + return __find_kallsyms_symbol_value(mod, colon + 1); return 0; } @@ -475,7 +475,7 @@ static unsigned long __module_kallsyms_lookup_name(const char *name) if (mod->state == MODULE_STATE_UNFORMED) continue; - ret = find_kallsyms_symbol_value(mod, name); + ret = __find_kallsyms_symbol_value(mod, name); if (ret) return ret; } @@ -494,6 +494,16 @@ unsigned long module_kallsyms_lookup_name(const char *name) return ret; } +unsigned long find_kallsyms_symbol_value(struct module *mod, const char *name) +{ + unsigned long ret; + + preempt_disable(); + ret = __find_kallsyms_symbol_value(mod, name); + preempt_enable(); + return ret; +} + int module_kallsyms_on_each_symbol(const char *modname, int (*fn)(void *, const char *, struct module *, unsigned long), diff --git a/mm/maccess.c b/mm/maccess.c index 074f6b086671..518a25667323 100644 --- a/mm/maccess.c +++ b/mm/maccess.c @@ -5,6 +5,7 @@ #include <linux/export.h> #include <linux/mm.h> #include <linux/uaccess.h> +#include <asm/tlb.h> bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size) @@ -113,11 +114,16 @@ Efault: long copy_from_user_nofault(void 
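From userspace, the log_true_size write-back enables two-pass log sizing: when verifier output is truncated, the kernel has already reported how large the buffer needs to be, so the caller can grow it and retry. A rough sketch against the raw syscall, assuming headers new enough to carry the log_true_size field; error handling is elided and the truncation semantics are simplified:

    #include <errno.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    static int prog_load(union bpf_attr *attr)
    {
            return syscall(__NR_bpf, BPF_PROG_LOAD, attr, sizeof(*attr));
    }

    /* attr is pre-filled with insns, prog type, etc. by the caller */
    int load_with_full_log(union bpf_attr *attr, char **log_out)
    {
            char *buf = malloc(4096);
            int fd;

            attr->log_level = 1;
            attr->log_buf = (unsigned long)buf;
            attr->log_size = 4096;

            fd = prog_load(attr);
            if (fd < 0 && errno == ENOSPC) {
                    /* kernel wrote back how many bytes the full log needs */
                    buf = realloc(buf, attr->log_true_size);
                    attr->log_buf = (unsigned long)buf;
                    attr->log_size = attr->log_true_size;
                    fd = prog_load(attr);
            }
            *log_out = buf;
            return fd;
    }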
*dst, const void __user *src, size_t size) { long ret = -EFAULT; - if (access_ok(src, size)) { - pagefault_disable(); - ret = __copy_from_user_inatomic(dst, src, size); - pagefault_enable(); - } + + if (!__access_ok(src, size)) + return ret; + + if (!nmi_uaccess_okay()) + return ret; + + pagefault_disable(); + ret = __copy_from_user_inatomic(dst, src, size); + pagefault_enable(); if (ret) return -EFAULT; diff --git a/mm/usercopy.c b/mm/usercopy.c index 4c3164beacec..83c164aba6e0 100644 --- a/mm/usercopy.c +++ b/mm/usercopy.c @@ -173,7 +173,7 @@ static inline void check_heap_object(const void *ptr, unsigned long n, return; } - if (is_vmalloc_addr(ptr)) { + if (is_vmalloc_addr(ptr) && !pagefault_disabled()) { struct vmap_area *area = find_vmap_area(addr); if (!area) diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c index ff4f89a2b02a..5918d1b32e19 100644 --- a/net/bpf/bpf_dummy_struct_ops.c +++ b/net/bpf/bpf_dummy_struct_ops.c @@ -173,14 +173,11 @@ static int bpf_dummy_ops_check_member(const struct btf_type *t, static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, - enum bpf_type_flag *flag) + int off, int size) { const struct btf_type *state; const struct btf_type *t; s32 type_id; - int err; type_id = btf_find_by_name_kind(reg->btf, "bpf_dummy_ops_state", BTF_KIND_STRUCT); @@ -194,11 +191,12 @@ static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log, return -EACCES; } - err = btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); - if (err < 0) - return err; + if (off + size > sizeof(struct bpf_dummy_ops_state)) { + bpf_log(log, "write access at off %d with size %d\n", off, size); + return -EACCES; + } - return atype == BPF_READ ? err : NOT_INIT; + return NOT_INIT; } static const struct bpf_verifier_ops bpf_dummy_verifier_ops = { diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index d350f31c7a3d..0b9bd9b39990 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -215,6 +215,16 @@ static void xdp_test_run_teardown(struct xdp_test_data *xdp) kfree(xdp->skbs); } +static bool frame_was_changed(const struct xdp_page_head *head) +{ + /* xdp_scrub_frame() zeroes the data pointer, flags is the last field, + * i.e. has the highest chances to be overwritten. If those two are + * untouched, it's most likely safe to skip the context reset. 
+ */ + return head->frame->data != head->orig_ctx.data || + head->frame->flags != head->orig_ctx.flags; +} + static bool ctx_was_changed(struct xdp_page_head *head) { return head->orig_ctx.data != head->ctx.data || @@ -224,7 +234,7 @@ static bool ctx_was_changed(struct xdp_page_head *head) static void reset_ctx(struct xdp_page_head *head) { - if (likely(!ctx_was_changed(head))) + if (likely(!frame_was_changed(head) && !ctx_was_changed(head))) return; head->ctx.data = head->orig_ctx.data; @@ -538,6 +548,11 @@ int noinline bpf_fentry_test8(struct bpf_fentry_test_t *arg) return (long)arg->a; } +__bpf_kfunc u32 bpf_fentry_test9(u32 *a) +{ + return *a; +} + __bpf_kfunc int bpf_modify_return_test(int a, int *b) { *b += 1; @@ -567,6 +582,11 @@ long noinline bpf_kfunc_call_test4(signed char a, short b, int c, long d) return (long)a + (long)b + (long)c + d; } +int noinline bpf_fentry_shadow_test(int a) +{ + return a + 1; +} + struct prog_test_member1 { int a; }; @@ -598,6 +618,11 @@ bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr) return &prog_test_struct; } +__bpf_kfunc void bpf_kfunc_call_test_offset(struct prog_test_ref_kfunc *p) +{ + WARN_ON_ONCE(1); +} + __bpf_kfunc struct prog_test_member * bpf_kfunc_call_memb_acquire(void) { @@ -607,9 +632,6 @@ bpf_kfunc_call_memb_acquire(void) __bpf_kfunc void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) { - if (!p) - return; - refcount_dec(&p->cnt); } @@ -795,6 +817,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2) BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS | KF_RCU) BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE) BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_offset) BTF_SET8_END(test_sk_check_kfunc_ids) static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size, @@ -844,7 +867,8 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog, bpf_fentry_test5(11, (void *)12, 13, 14, 15) != 65 || bpf_fentry_test6(16, (void *)17, 18, 19, (void *)20, 21) != 111 || bpf_fentry_test7((struct bpf_fentry_test_t *)0) != 0 || - bpf_fentry_test8(&arg) != 0) + bpf_fentry_test8(&arg) != 0 || + bpf_fentry_test9(&retval) != 0) goto out; break; case BPF_MODIFY_RETURN: diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c index 7a36353dbc22..d4172534dfa8 100644 --- a/net/core/bpf_sk_storage.c +++ b/net/core/bpf_sk_storage.c @@ -40,7 +40,7 @@ static int bpf_sk_storage_del(struct sock *sk, struct bpf_map *map) if (!sdata) return -ENOENT; - bpf_selem_unlink(SELEM(sdata), true); + bpf_selem_unlink(SELEM(sdata), false); return 0; } @@ -49,7 +49,6 @@ static int bpf_sk_storage_del(struct sock *sk, struct bpf_map *map) void bpf_sk_storage_free(struct sock *sk) { struct bpf_local_storage *sk_storage; - bool free_sk_storage = false; rcu_read_lock(); sk_storage = rcu_dereference(sk->sk_bpf_storage); @@ -58,13 +57,8 @@ void bpf_sk_storage_free(struct sock *sk) return; } - raw_spin_lock_bh(&sk_storage->lock); - free_sk_storage = bpf_local_storage_unlink_nolock(sk_storage); - raw_spin_unlock_bh(&sk_storage->lock); + bpf_local_storage_destroy(sk_storage); rcu_read_unlock(); - - if (free_sk_storage) - kfree_rcu(sk_storage, rcu); } static void bpf_sk_storage_map_free(struct bpf_map *map) @@ -74,7 +68,7 @@ static void bpf_sk_storage_map_free(struct bpf_map *map) static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr) { - return bpf_local_storage_map_alloc(attr, &sk_cache); + return bpf_local_storage_map_alloc(attr, &sk_cache, false); } static 
int notsupp_get_next_key(struct bpf_map *map, void *key, @@ -100,8 +94,8 @@ static void *bpf_fd_sk_storage_lookup_elem(struct bpf_map *map, void *key) return ERR_PTR(err); } -static int bpf_fd_sk_storage_update_elem(struct bpf_map *map, void *key, - void *value, u64 map_flags) +static long bpf_fd_sk_storage_update_elem(struct bpf_map *map, void *key, + void *value, u64 map_flags) { struct bpf_local_storage_data *sdata; struct socket *sock; @@ -120,7 +114,7 @@ static int bpf_fd_sk_storage_update_elem(struct bpf_map *map, void *key, return err; } -static int bpf_fd_sk_storage_delete_elem(struct bpf_map *map, void *key) +static long bpf_fd_sk_storage_delete_elem(struct bpf_map *map, void *key) { struct socket *sock; int fd, err; @@ -203,7 +197,7 @@ int bpf_sk_storage_clone(const struct sock *sk, struct sock *newsk) } else { ret = bpf_local_storage_alloc(newsk, smap, copy_selem, GFP_ATOMIC); if (ret) { - kfree(copy_selem); + bpf_selem_free(copy_selem, smap, true); atomic_sub(smap->elem_size, &newsk->sk_omem_alloc); bpf_map_put(map); @@ -418,7 +412,7 @@ const struct bpf_func_proto bpf_sk_storage_get_tracing_proto = { .gpl_only = false, .ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON], .arg3_type = ARG_PTR_TO_MAP_VALUE_OR_NULL, .arg4_type = ARG_ANYTHING, @@ -430,7 +424,7 @@ const struct bpf_func_proto bpf_sk_storage_delete_tracing_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_CONST_MAP_PTR, - .arg2_type = ARG_PTR_TO_BTF_ID, + .arg2_type = ARG_PTR_TO_BTF_ID_OR_NULL, .arg2_btf_id = &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON], .allowed = bpf_sk_storage_tracing_allowed, }; diff --git a/net/core/filter.c b/net/core/filter.c index a8c8fd96c822..df0df59814ae 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -5002,7 +5002,7 @@ const struct bpf_func_proto bpf_get_socket_ptr_cookie_proto = { .func = bpf_get_socket_ptr_cookie, .gpl_only = false, .ret_type = RET_INTEGER, - .arg1_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON, + .arg1_type = ARG_PTR_TO_BTF_ID_SOCK_COMMON | PTR_MAYBE_NULL, }; BPF_CALL_1(bpf_get_socket_cookie_sock_ops, struct bpf_sock_ops_kern *, ctx) @@ -8746,23 +8746,18 @@ EXPORT_SYMBOL_GPL(nf_conn_btf_access_lock); int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag); + int off, int size); EXPORT_SYMBOL_GPL(nfct_btf_struct_access); static int tc_cls_act_btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag) + int off, int size) { int ret = -EACCES; - if (atype == BPF_READ) - return btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); - mutex_lock(&nf_conn_btf_access_lock); if (nfct_btf_struct_access) - ret = nfct_btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); + ret = nfct_btf_struct_access(log, reg, off, size); mutex_unlock(&nf_conn_btf_access_lock); return ret; @@ -8829,17 +8824,13 @@ EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action); static int xdp_btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag) + int off, int size) { int ret = -EACCES; - if (atype == BPF_READ) - return btf_struct_access(log, reg, off, size, 
atype, next_btf_id, flag); - mutex_lock(&nf_conn_btf_access_lock); if (nfct_btf_struct_access) - ret = nfct_btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); + ret = nfct_btf_struct_access(log, reg, off, size); mutex_unlock(&nf_conn_btf_access_lock); return ret; @@ -9189,7 +9180,7 @@ static struct bpf_insn *bpf_convert_tstamp_type_read(const struct bpf_insn *si, __u8 tmp_reg = BPF_REG_AX; *insn++ = BPF_LDX_MEM(BPF_B, tmp_reg, skb_reg, - PKT_VLAN_PRESENT_OFFSET); + SKB_BF_MONO_TC_OFFSET); *insn++ = BPF_JMP32_IMM(BPF_JSET, tmp_reg, SKB_MONO_DELIVERY_TIME_MASK, 2); *insn++ = BPF_MOV32_IMM(value_reg, BPF_SKB_TSTAMP_UNSPEC); @@ -9236,7 +9227,7 @@ static struct bpf_insn *bpf_convert_tstamp_read(const struct bpf_prog *prog, /* AX is needed because src_reg and dst_reg could be the same */ __u8 tmp_reg = BPF_REG_AX; - *insn++ = BPF_LDX_MEM(BPF_B, tmp_reg, skb_reg, PKT_VLAN_PRESENT_OFFSET); + *insn++ = BPF_LDX_MEM(BPF_B, tmp_reg, skb_reg, SKB_BF_MONO_TC_OFFSET); *insn++ = BPF_ALU32_IMM(BPF_AND, tmp_reg, TC_AT_INGRESS_MASK | SKB_MONO_DELIVERY_TIME_MASK); *insn++ = BPF_JMP32_IMM(BPF_JNE, tmp_reg, @@ -9271,14 +9262,14 @@ static struct bpf_insn *bpf_convert_tstamp_write(const struct bpf_prog *prog, if (!prog->tstamp_type_access) { __u8 tmp_reg = BPF_REG_AX; - *insn++ = BPF_LDX_MEM(BPF_B, tmp_reg, skb_reg, PKT_VLAN_PRESENT_OFFSET); + *insn++ = BPF_LDX_MEM(BPF_B, tmp_reg, skb_reg, SKB_BF_MONO_TC_OFFSET); /* Writing __sk_buff->tstamp as ingress, goto <clear> */ *insn++ = BPF_JMP32_IMM(BPF_JSET, tmp_reg, TC_AT_INGRESS_MASK, 1); /* goto <store> */ *insn++ = BPF_JMP_A(2); /* <clear>: mono_delivery_time */ *insn++ = BPF_ALU32_IMM(BPF_AND, tmp_reg, ~SKB_MONO_DELIVERY_TIME_MASK); - *insn++ = BPF_STX_MEM(BPF_B, skb_reg, tmp_reg, PKT_VLAN_PRESENT_OFFSET); + *insn++ = BPF_STX_MEM(BPF_B, skb_reg, tmp_reg, SKB_BF_MONO_TC_OFFSET); } #endif diff --git a/net/core/sock_map.c b/net/core/sock_map.c index 9b854e236d23..7c189c2e2fbf 100644 --- a/net/core/sock_map.c +++ b/net/core/sock_map.c @@ -437,7 +437,7 @@ static void sock_map_delete_from_link(struct bpf_map *map, struct sock *sk, __sock_map_delete(stab, sk, link_raw); } -static int sock_map_delete_elem(struct bpf_map *map, void *key) +static long sock_map_delete_elem(struct bpf_map *map, void *key) { struct bpf_stab *stab = container_of(map, struct bpf_stab, map); u32 i = *(u32 *)key; @@ -587,8 +587,8 @@ out: return ret; } -static int sock_map_update_elem(struct bpf_map *map, void *key, - void *value, u64 flags) +static long sock_map_update_elem(struct bpf_map *map, void *key, + void *value, u64 flags) { struct sock *sk = (struct sock *)value; int ret; @@ -925,7 +925,7 @@ static void sock_hash_delete_from_link(struct bpf_map *map, struct sock *sk, raw_spin_unlock_bh(&bucket->lock); } -static int sock_hash_delete_elem(struct bpf_map *map, void *key) +static long sock_hash_delete_elem(struct bpf_map *map, void *key) { struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map); u32 hash, key_size = map->key_size; diff --git a/net/core/xdp.c b/net/core/xdp.c index fb85aca81961..41e5ca8643ec 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -531,21 +531,6 @@ out: } EXPORT_SYMBOL_GPL(xdp_return_buff); -/* Only called for MEM_TYPE_PAGE_POOL see xdp.h */ -void __xdp_release_frame(void *data, struct xdp_mem_info *mem) -{ - struct xdp_mem_allocator *xa; - struct page *page; - - rcu_read_lock(); - xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params); - page = virt_to_head_page(data); - if (xa) - page_pool_release_page(xa->page_pool, page); - 
rcu_read_unlock(); -} -EXPORT_SYMBOL_GPL(__xdp_release_frame); - void xdp_attachment_setup(struct xdp_attachment_info *info, struct netdev_bpf *bpf) { @@ -658,8 +643,8 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, * - RX ring dev queue index (skb_record_rx_queue) */ - /* Until page_pool get SKB return path, release DMA here */ - xdp_release_frame(xdpf); + if (xdpf->mem.type == MEM_TYPE_PAGE_POOL) + skb_mark_for_recycle(skb); /* Allow SKB to reuse area used by xdp_frame */ xdp_scrub_frame(xdpf); diff --git a/net/ipv4/Makefile b/net/ipv4/Makefile index 880277c9fd07..b18ba8ef93ad 100644 --- a/net/ipv4/Makefile +++ b/net/ipv4/Makefile @@ -26,7 +26,7 @@ obj-$(CONFIG_IP_MROUTE) += ipmr.o obj-$(CONFIG_IP_MROUTE_COMMON) += ipmr_base.o obj-$(CONFIG_NET_IPIP) += ipip.o gre-y := gre_demux.o -fou-y := fou_core.o fou_nl.o +fou-y := fou_core.o fou_nl.o fou_bpf.o obj-$(CONFIG_NET_FOU) += fou.o obj-$(CONFIG_NET_IPGRE_DEMUX) += gre.o obj-$(CONFIG_NET_IPGRE) += ip_gre.o diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c index 13fc0c185cd9..4406d796cc2f 100644 --- a/net/ipv4/bpf_tcp_ca.c +++ b/net/ipv4/bpf_tcp_ca.c @@ -72,15 +72,11 @@ static bool bpf_tcp_ca_is_valid_access(int off, int size, static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag) + int off, int size) { const struct btf_type *t; size_t end; - if (atype == BPF_READ) - return btf_struct_access(log, reg, off, size, atype, next_btf_id, flag); - t = btf_type_by_id(reg->btf, reg->btf_id); if (t != tcp_sock_type) { bpf_log(log, "only read is supported\n"); @@ -113,6 +109,9 @@ static int bpf_tcp_ca_btf_struct_access(struct bpf_verifier_log *log, case offsetof(struct tcp_sock, ecn_flags): end = offsetofend(struct tcp_sock, ecn_flags); break; + case offsetof(struct tcp_sock, app_limited): + end = offsetofend(struct tcp_sock, app_limited); + break; default: bpf_log(log, "no write support to tcp_sock at off %d\n", off); return -EACCES; @@ -239,8 +238,6 @@ static int bpf_tcp_ca_init_member(const struct btf_type *t, if (bpf_obj_name_cpy(tcp_ca->name, utcp_ca->name, sizeof(tcp_ca->name)) <= 0) return -EINVAL; - if (tcp_ca_find(utcp_ca->name)) - return -EEXIST; return 1; } @@ -266,13 +263,25 @@ static void bpf_tcp_ca_unreg(void *kdata) tcp_unregister_congestion_control(kdata); } +static int bpf_tcp_ca_update(void *kdata, void *old_kdata) +{ + return tcp_update_congestion_control(kdata, old_kdata); +} + +static int bpf_tcp_ca_validate(void *kdata) +{ + return tcp_validate_congestion_control(kdata); +} + struct bpf_struct_ops bpf_tcp_congestion_ops = { .verifier_ops = &bpf_tcp_ca_verifier_ops, .reg = bpf_tcp_ca_reg, .unreg = bpf_tcp_ca_unreg, + .update = bpf_tcp_ca_update, .check_member = bpf_tcp_ca_check_member, .init_member = bpf_tcp_ca_init_member, .init = bpf_tcp_ca_init, + .validate = bpf_tcp_ca_validate, .name = "tcp_congestion_ops", }; diff --git a/net/ipv4/fou_bpf.c b/net/ipv4/fou_bpf.c new file mode 100644 index 000000000000..3760a14b6b57 --- /dev/null +++ b/net/ipv4/fou_bpf.c @@ -0,0 +1,119 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Unstable Fou Helpers for TC-BPF hook + * + * These are called from SCHED_CLS BPF programs. Note that it is + * allowed to break compatibility for these functions since the interface they + * are exposed through to BPF programs is explicitly unstable. 
+ */ + +#include <linux/bpf.h> +#include <linux/btf_ids.h> + +#include <net/dst_metadata.h> +#include <net/fou.h> + +struct bpf_fou_encap { + __be16 sport; + __be16 dport; +}; + +enum bpf_fou_encap_type { + FOU_BPF_ENCAP_FOU, + FOU_BPF_ENCAP_GUE, +}; + +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes", + "Global functions as their definitions will be in BTF"); + +/* bpf_skb_set_fou_encap - Set FOU encap parameters + * + * This function allows for using GUE or FOU encapsulation together with an + * ipip device in collect-metadata mode. + * + * It is meant to be used in BPF tc-hooks and after a call to the + * bpf_skb_set_tunnel_key helper, responsible for setting IP addresses. + * + * Parameters: + * @skb_ctx Pointer to ctx (__sk_buff) in TC program. Cannot be NULL + * @encap Pointer to a `struct bpf_fou_encap` storing UDP src and + * dst ports. If sport is set to 0 the kernel will auto-assign a + * port. This is similar to using `encap-sport auto`. + * Cannot be NULL + * @type Encapsulation type for the packet. Their definitions are + * specified in `enum bpf_fou_encap_type` + */ +__bpf_kfunc int bpf_skb_set_fou_encap(struct __sk_buff *skb_ctx, + struct bpf_fou_encap *encap, int type) +{ + struct sk_buff *skb = (struct sk_buff *)skb_ctx; + struct ip_tunnel_info *info = skb_tunnel_info(skb); + + if (unlikely(!encap)) + return -EINVAL; + + if (unlikely(!info || !(info->mode & IP_TUNNEL_INFO_TX))) + return -EINVAL; + + switch (type) { + case FOU_BPF_ENCAP_FOU: + info->encap.type = TUNNEL_ENCAP_FOU; + break; + case FOU_BPF_ENCAP_GUE: + info->encap.type = TUNNEL_ENCAP_GUE; + break; + default: + info->encap.type = TUNNEL_ENCAP_NONE; + } + + if (info->key.tun_flags & TUNNEL_CSUM) + info->encap.flags |= TUNNEL_ENCAP_FLAG_CSUM; + + info->encap.sport = encap->sport; + info->encap.dport = encap->dport; + + return 0; +} + +/* bpf_skb_get_fou_encap - Get FOU encap parameters + * + * This function allows for reading encap metadata from a packet received + * on an ipip device in collect-metadata mode. + * + * Parameters: + * @skb_ctx Pointer to ctx (__sk_buff) in TC program. Cannot be NULL + * @encap Pointer to a struct bpf_fou_encap storing UDP source and + * destination port. 
Cannot be NULL + */ +__bpf_kfunc int bpf_skb_get_fou_encap(struct __sk_buff *skb_ctx, + struct bpf_fou_encap *encap) +{ + struct sk_buff *skb = (struct sk_buff *)skb_ctx; + struct ip_tunnel_info *info = skb_tunnel_info(skb); + + if (unlikely(!info)) + return -EINVAL; + + encap->sport = info->encap.sport; + encap->dport = info->encap.dport; + + return 0; +} + +__diag_pop() + +BTF_SET8_START(fou_kfunc_set) +BTF_ID_FLAGS(func, bpf_skb_set_fou_encap) +BTF_ID_FLAGS(func, bpf_skb_get_fou_encap) +BTF_SET8_END(fou_kfunc_set) + +static const struct btf_kfunc_id_set fou_bpf_kfunc_set = { + .owner = THIS_MODULE, + .set = &fou_kfunc_set, +}; + +int register_fou_bpf(void) +{ + return register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, + &fou_bpf_kfunc_set); +} diff --git a/net/ipv4/fou_core.c b/net/ipv4/fou_core.c index cafec9b4eee0..0c41076e31ed 100644 --- a/net/ipv4/fou_core.c +++ b/net/ipv4/fou_core.c @@ -1236,10 +1236,15 @@ static int __init fou_init(void) if (ret < 0) goto unregister; + ret = register_fou_bpf(); + if (ret < 0) + goto kfunc_failed; + ret = ip_tunnel_encap_add_fou_ops(); if (ret == 0) return 0; +kfunc_failed: genl_unregister_family(&fou_nl_family); unregister: unregister_pernet_device(&fou_net_ops); diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c index 2541083d49ad..beeae624c412 100644 --- a/net/ipv4/ip_tunnel.c +++ b/net/ipv4/ip_tunnel.c @@ -359,6 +359,20 @@ err_dev_set_mtu: return ERR_PTR(err); } +void ip_tunnel_md_udp_encap(struct sk_buff *skb, struct ip_tunnel_info *info) +{ + const struct iphdr *iph = ip_hdr(skb); + const struct udphdr *udph; + + if (iph->protocol != IPPROTO_UDP) + return; + + udph = (struct udphdr *)((__u8 *)iph + (iph->ihl << 2)); + info->encap.sport = udph->source; + info->encap.dport = udph->dest; +} +EXPORT_SYMBOL(ip_tunnel_md_udp_encap); + int ip_tunnel_rcv(struct ip_tunnel *tunnel, struct sk_buff *skb, const struct tnl_ptk_info *tpi, struct metadata_dst *tun_dst, bool log_ecn_error) @@ -572,7 +586,11 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, tunnel_id_to_key32(key->tun_id), RT_TOS(tos), dev_net(dev), 0, skb->mark, skb_get_hash(skb), key->flow_flags); - if (tunnel->encap.type != TUNNEL_ENCAP_NONE) + + if (!tunnel_hlen) + tunnel_hlen = ip_encap_hlen(&tun_info->encap); + + if (ip_tunnel_encap(skb, &tun_info->encap, &proto, &fl4) < 0) goto tx_error; use_cache = ip_tunnel_dst_cache_usable(skb, tun_info); @@ -732,7 +750,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, dev_net(dev), tunnel->parms.link, tunnel->fwmark, skb_get_hash(skb), 0); - if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0) + if (ip_tunnel_encap(skb, &tunnel->encap, &protocol, &fl4) < 0) goto tx_error; if (connected && md) { diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c index abea77759b7e..27b8f83c6ea2 100644 --- a/net/ipv4/ipip.c +++ b/net/ipv4/ipip.c @@ -241,6 +241,7 @@ static int ipip_tunnel_rcv(struct sk_buff *skb, u8 ipproto) tun_dst = ip_tun_rx_dst(skb, 0, 0, 0); if (!tun_dst) return 0; + ip_tunnel_md_udp_encap(skb, &tun_dst->u.tun_info); } skb_reset_mac_header(skb); diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c index db8b4b488c31..1b34050a7538 100644 --- a/net/ipv4/tcp_cong.c +++ b/net/ipv4/tcp_cong.c @@ -75,14 +75,8 @@ struct tcp_congestion_ops *tcp_ca_find_key(u32 key) return NULL; } -/* - * Attach new congestion control algorithm to the list - * of available options. 
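Taken together, the two kfuncs are meant to be called from a tc program on an ipip device in collect-metadata mode, after the outer addresses have been set. A BPF C sketch; the struct and enum are restated locally because they are not part of UAPI, and the addresses and ports are arbitrary:

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    struct bpf_fou_encap {
            __be16 sport;
            __be16 dport;
    };

    enum bpf_fou_encap_type {
            FOU_BPF_ENCAP_FOU,
            FOU_BPF_ENCAP_GUE,
    };

    extern int bpf_skb_set_fou_encap(struct __sk_buff *skb_ctx,
                                     struct bpf_fou_encap *encap, int type) __ksym;

    SEC("tc")
    int encap_gue(struct __sk_buff *skb)
    {
            struct bpf_tunnel_key key = {
                    .remote_ipv4 = 0x0a000002,      /* 10.0.0.2, arbitrary */
            };
            struct bpf_fou_encap encap = {
                    .sport = 0,                     /* 0 = let the kernel pick */
                    .dport = bpf_htons(5555),
            };

            /* set outer IPs first, then the UDP encap parameters */
            if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key), BPF_F_ZERO_CSUM_TX))
                    return TC_ACT_SHOT;
            if (bpf_skb_set_fou_encap(skb, &encap, FOU_BPF_ENCAP_GUE))
                    return TC_ACT_SHOT;

            return TC_ACT_OK;
    }

    char LICENSE[] SEC("license") = "GPL";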
- */ -int tcp_register_congestion_control(struct tcp_congestion_ops *ca) +int tcp_validate_congestion_control(struct tcp_congestion_ops *ca) { - int ret = 0; - /* all algorithms must implement these */ if (!ca->ssthresh || !ca->undo_cwnd || !(ca->cong_avoid || ca->cong_control)) { @@ -90,6 +84,20 @@ int tcp_register_congestion_control(struct tcp_congestion_ops *ca) return -EINVAL; } + return 0; +} + +/* Attach new congestion control algorithm to the list + * of available options. + */ +int tcp_register_congestion_control(struct tcp_congestion_ops *ca) +{ + int ret; + + ret = tcp_validate_congestion_control(ca); + if (ret) + return ret; + ca->key = jhash(ca->name, sizeof(ca->name), strlen(ca->name)); spin_lock(&tcp_cong_list_lock); @@ -130,6 +138,50 @@ void tcp_unregister_congestion_control(struct tcp_congestion_ops *ca) } EXPORT_SYMBOL_GPL(tcp_unregister_congestion_control); +/* Replace a registered old ca with a new one. + * + * The new ca must have the same name as the old one, that has been + * registered. + */ +int tcp_update_congestion_control(struct tcp_congestion_ops *ca, struct tcp_congestion_ops *old_ca) +{ + struct tcp_congestion_ops *existing; + int ret; + + ret = tcp_validate_congestion_control(ca); + if (ret) + return ret; + + ca->key = jhash(ca->name, sizeof(ca->name), strlen(ca->name)); + + spin_lock(&tcp_cong_list_lock); + existing = tcp_ca_find_key(old_ca->key); + if (ca->key == TCP_CA_UNSPEC || !existing || strcmp(existing->name, ca->name)) { + pr_notice("%s not registered or non-unique key\n", + ca->name); + ret = -EINVAL; + } else if (existing != old_ca) { + pr_notice("invalid old congestion control algorithm to replace\n"); + ret = -EINVAL; + } else { + /* Add the new one before removing the old one to keep + * one implementation available all the time. + */ + list_add_tail_rcu(&ca->list, &tcp_cong_list); + list_del_rcu(&existing->list); + pr_debug("%s updated\n", ca->name); + } + spin_unlock(&tcp_cong_list_lock); + + /* Wait for outstanding readers to complete before the + * module or struct_ops gets removed entirely. 
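The practical effect of the validate/update split is that a BPF TCP congestion control attached as a struct_ops link can be replaced by another implementation carrying the same name, with the kernel adding the new ops before removing the old so one implementation stays available throughout. A userspace sketch using libbpf; bpf_link__update_map() is the API introduced for this in the same series, and the object paths and map name are made up:

    #include <bpf/libbpf.h>

    int switch_cc(const char *old_obj_path, const char *new_obj_path)
    {
            struct bpf_object *old_obj, *new_obj;
            struct bpf_map *old_map, *new_map;
            struct bpf_link *link;

            old_obj = bpf_object__open_file(old_obj_path, NULL);
            if (!old_obj || bpf_object__load(old_obj))
                    return -1;
            /* "ca_ops" is the (hypothetical) SEC(".struct_ops.link") map name */
            old_map = bpf_object__find_map_by_name(old_obj, "ca_ops");
            link = bpf_map__attach_struct_ops(old_map);     /* registers the CC */
            if (!link)
                    return -1;

            new_obj = bpf_object__open_file(new_obj_path, NULL);
            if (!new_obj || bpf_object__load(new_obj))
                    return -1;
            new_map = bpf_object__find_map_by_name(new_obj, "ca_ops");

            /* atomically swap the backing struct_ops map; the new
             * tcp_congestion_ops (same .name) is added before the old
             * one is removed
             */
            return bpf_link__update_map(link, new_map);
    }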
+ */ + if (!ret) + synchronize_rcu(); + + return ret; +} + u32 tcp_ca_get_key_by_name(struct net *net, const char *name, bool *ecn_ca) { const struct tcp_congestion_ops *ca; diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c index 70d81bba5093..063560e2cb1a 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -1024,7 +1024,7 @@ static netdev_tx_t ipip6_tunnel_xmit(struct sk_buff *skb, ttl = iph6->hop_limit; tos = INET_ECN_encapsulate(tos, ipv6_get_dsfield(iph6)); - if (ip_tunnel_encap(skb, tunnel, &protocol, &fl4) < 0) { + if (ip_tunnel_encap(skb, &tunnel->encap, &protocol, &fl4) < 0) { ip_rt_put(rt); goto tx_error; } diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c index cd99e6dc1f35..3f821b7ba646 100644 --- a/net/netfilter/nf_conntrack_bpf.c +++ b/net/netfilter/nf_conntrack_bpf.c @@ -192,8 +192,7 @@ BTF_ID(struct, nf_conn___init) /* Check writes into `struct nf_conn` */ static int _nf_conntrack_btf_struct_access(struct bpf_verifier_log *log, const struct bpf_reg_state *reg, - int off, int size, enum bpf_access_type atype, - u32 *next_btf_id, enum bpf_type_flag *flag) + int off, int size) { const struct btf_type *ncit, *nct, *t; size_t end; @@ -401,8 +400,6 @@ __bpf_kfunc struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i) */ __bpf_kfunc void bpf_ct_release(struct nf_conn *nfct) { - if (!nfct) - return; nf_ct_put(nfct); } diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c index 2ac58b282b5e..cc1e7f15fa73 100644 --- a/net/xdp/xsk.c +++ b/net/xdp/xsk.c @@ -1301,9 +1301,10 @@ static int xsk_mmap(struct file *file, struct socket *sock, loff_t offset = (loff_t)vma->vm_pgoff << PAGE_SHIFT; unsigned long size = vma->vm_end - vma->vm_start; struct xdp_sock *xs = xdp_sk(sock->sk); + int state = READ_ONCE(xs->state); struct xsk_queue *q = NULL; - if (READ_ONCE(xs->state) != XSK_READY) + if (state != XSK_READY && state != XSK_BOUND) return -EBUSY; if (offset == XDP_PGOFF_RX_RING) { @@ -1314,9 +1315,11 @@ static int xsk_mmap(struct file *file, struct socket *sock, /* Matches the smp_wmb() in XDP_UMEM_REG */ smp_rmb(); if (offset == XDP_UMEM_PGOFF_FILL_RING) - q = READ_ONCE(xs->fq_tmp); + q = state == XSK_READY ? READ_ONCE(xs->fq_tmp) : + READ_ONCE(xs->pool->fq); else if (offset == XDP_UMEM_PGOFF_COMPLETION_RING) - q = READ_ONCE(xs->cq_tmp); + q = state == XSK_READY ? 
READ_ONCE(xs->cq_tmp) : + READ_ONCE(xs->pool->cq); } if (!q) diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h index bfb2a7e50c26..6d40a77fccbe 100644 --- a/net/xdp/xsk_queue.h +++ b/net/xdp/xsk_queue.h @@ -133,16 +133,12 @@ static inline bool xskq_cons_read_addr_unchecked(struct xsk_queue *q, u64 *addr) static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc) { - u64 chunk, chunk_end; + u64 offset = desc->addr & (pool->chunk_size - 1); - chunk = xp_aligned_extract_addr(pool, desc->addr); - if (likely(desc->len)) { - chunk_end = xp_aligned_extract_addr(pool, desc->addr + desc->len - 1); - if (chunk != chunk_end) - return false; - } + if (offset + desc->len > pool->chunk_size) + return false; - if (chunk >= pool->addrs_cnt) + if (desc->addr >= pool->addrs_cnt) return false; if (desc->options) @@ -153,15 +149,12 @@ static inline bool xp_aligned_validate_desc(struct xsk_buff_pool *pool, static inline bool xp_unaligned_validate_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc) { - u64 addr, base_addr; - - base_addr = xp_unaligned_extract_addr(desc->addr); - addr = xp_unaligned_add_offset_to_addr(desc->addr); + u64 addr = xp_unaligned_add_offset_to_addr(desc->addr); if (desc->len > pool->chunk_size) return false; - if (base_addr >= pool->addrs_cnt || addr >= pool->addrs_cnt || + if (addr >= pool->addrs_cnt || addr + desc->len > pool->addrs_cnt || xp_desc_crosses_non_contig_pg(pool, addr, desc->len)) return false; diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c index 0c38d7175922..2c1427074a3b 100644 --- a/net/xdp/xskmap.c +++ b/net/xdp/xskmap.c @@ -162,8 +162,8 @@ static void *xsk_map_lookup_elem_sys_only(struct bpf_map *map, void *key) return ERR_PTR(-EOPNOTSUPP); } -static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value, - u64 map_flags) +static long xsk_map_update_elem(struct bpf_map *map, void *key, void *value, + u64 map_flags) { struct xsk_map *m = container_of(map, struct xsk_map, map); struct xdp_sock __rcu **map_entry; @@ -223,7 +223,7 @@ out: return err; } -static int xsk_map_delete_elem(struct bpf_map *map, void *key) +static long xsk_map_delete_elem(struct bpf_map *map, void *key) { struct xsk_map *m = container_of(map, struct xsk_map, map); struct xdp_sock __rcu **map_entry; @@ -243,7 +243,7 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key) return 0; } -static int xsk_map_redirect(struct bpf_map *map, u64 index, u64 flags) +static long xsk_map_redirect(struct bpf_map *map, u64 index, u64 flags) { return __bpf_xdp_redirect_map(map, index, flags, 0, __xsk_map_lookup_elem); diff --git a/samples/bpf/cpustat_kern.c b/samples/bpf/cpustat_kern.c index 5aefd19cdfa1..944f13fe164a 100644 --- a/samples/bpf/cpustat_kern.c +++ b/samples/bpf/cpustat_kern.c @@ -76,8 +76,8 @@ struct { /* * The trace events for cpu_idle and cpu_frequency are taken from: - * /sys/kernel/debug/tracing/events/power/cpu_idle/format - * /sys/kernel/debug/tracing/events/power/cpu_frequency/format + * /sys/kernel/tracing/events/power/cpu_idle/format + * /sys/kernel/tracing/events/power/cpu_frequency/format * * These two events have same format, so define one common structure. 
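The rewritten aligned-mode check is easier to follow with numbers: chunk_size is a power of two, so masking the address yields the offset inside its chunk, and a descriptor is rejected exactly when that offset plus the length runs past the chunk end, which is what the old two-chunk comparison caught. A small standalone illustration in plain C (not kernel code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* chunk_size must be a power of two in aligned mode */
    static bool aligned_desc_ok(uint64_t addr, uint32_t len,
                                uint64_t chunk_size, uint64_t addrs_cnt)
    {
            uint64_t offset = addr & (chunk_size - 1);

            if (offset + len > chunk_size)
                    return false;           /* would cross into the next chunk */
            if (addr >= addrs_cnt)
                    return false;           /* outside the registered umem */
            return true;
    }

    int main(void)
    {
            /* chunk_size = 2048: addr 4196 sits 100 bytes into its chunk
             * (4096..6143), so a 2000-byte frame would cross the boundary
             * at 6144 -> invalid, while a 1500-byte frame fits -> valid.
             */
            printf("%d\n", aligned_desc_ok(4196, 2000, 2048, 65536)); /* 0 */
            printf("%d\n", aligned_desc_ok(4196, 1500, 2048, 65536)); /* 1 */
            return 0;
    }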
*/ diff --git a/samples/bpf/hbm.c b/samples/bpf/hbm.c index 516fbac28b71..6448b7826107 100644 --- a/samples/bpf/hbm.c +++ b/samples/bpf/hbm.c @@ -65,7 +65,7 @@ static void Usage(void); static void read_trace_pipe2(void); static void do_error(char *msg, bool errno_flag); -#define DEBUGFS "/sys/kernel/debug/tracing/" +#define TRACEFS "/sys/kernel/tracing/" static struct bpf_program *bpf_prog; static struct bpf_object *obj; @@ -77,7 +77,7 @@ static void read_trace_pipe2(void) FILE *outf; char *outFname = "hbm_out.log"; - trace_fd = open(DEBUGFS "trace_pipe", O_RDONLY, 0); + trace_fd = open(TRACEFS "trace_pipe", O_RDONLY, 0); if (trace_fd < 0) { printf("Error opening trace_pipe\n"); return; @@ -315,6 +315,7 @@ static int run_bpf_prog(char *prog, int cg_id) fout = fopen(fname, "w"); fprintf(fout, "id:%d\n", cg_id); fprintf(fout, "ERROR: Could not lookup queue_stats\n"); + fclose(fout); } else if (stats_flag && qstats.lastPacketTime > qstats.firstPacketTime) { long long delta_us = (qstats.lastPacketTime - diff --git a/samples/bpf/ibumad_kern.c b/samples/bpf/ibumad_kern.c index 9b193231024a..f07474c72525 100644 --- a/samples/bpf/ibumad_kern.c +++ b/samples/bpf/ibumad_kern.c @@ -39,8 +39,8 @@ struct { /* Taken from the current format defined in * include/trace/events/ib_umad.h * and - * /sys/kernel/debug/tracing/events/ib_umad/ib_umad_read/format - * /sys/kernel/debug/tracing/events/ib_umad/ib_umad_write/format + * /sys/kernel/tracing/events/ib_umad/ib_umad_read/format + * /sys/kernel/tracing/events/ib_umad/ib_umad_write/format */ struct ib_umad_rw_args { u64 pad; diff --git a/samples/bpf/lwt_len_hist.sh b/samples/bpf/lwt_len_hist.sh index 7078bfcc4f4d..381b2c634784 100755 --- a/samples/bpf/lwt_len_hist.sh +++ b/samples/bpf/lwt_len_hist.sh @@ -5,7 +5,7 @@ NS1=lwt_ns1 VETH0=tst_lwt1a VETH1=tst_lwt1b BPF_PROG=lwt_len_hist.bpf.o -TRACE_ROOT=/sys/kernel/debug/tracing +TRACE_ROOT=/sys/kernel/tracing function cleanup { # To reset saved histogram, remove pinned map diff --git a/samples/bpf/offwaketime_kern.c b/samples/bpf/offwaketime_kern.c index eb4d94742e6b..23f12b47e9e5 100644 --- a/samples/bpf/offwaketime_kern.c +++ b/samples/bpf/offwaketime_kern.c @@ -110,7 +110,7 @@ static inline int update_counts(void *ctx, u32 pid, u64 delta) } #if 1 -/* taken from /sys/kernel/debug/tracing/events/sched/sched_switch/format */ +/* taken from /sys/kernel/tracing/events/sched/sched_switch/format */ struct sched_switch_args { unsigned long long pad; char prev_comm[TASK_COMM_LEN]; diff --git a/samples/bpf/task_fd_query_user.c b/samples/bpf/task_fd_query_user.c index a33d74bd3a4b..1e61f2180470 100644 --- a/samples/bpf/task_fd_query_user.c +++ b/samples/bpf/task_fd_query_user.c @@ -235,7 +235,7 @@ static int test_debug_fs_uprobe(char *binary_path, long offset, bool is_return) struct bpf_link *link; ssize_t bytes; - snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/%s_events", + snprintf(buf, sizeof(buf), "/sys/kernel/tracing/%s_events", event_type); kfd = open(buf, O_WRONLY | O_TRUNC, 0); CHECK_PERROR_RET(kfd < 0); @@ -252,7 +252,7 @@ static int test_debug_fs_uprobe(char *binary_path, long offset, bool is_return) close(kfd); kfd = -1; - snprintf(buf, sizeof(buf), "/sys/kernel/debug/tracing/events/%ss/%s/id", + snprintf(buf, sizeof(buf), "/sys/kernel/tracing/events/%ss/%s/id", event_type, event_alias); efd = open(buf, O_RDONLY, 0); CHECK_PERROR_RET(efd < 0); diff --git a/samples/bpf/test_lwt_bpf.sh b/samples/bpf/test_lwt_bpf.sh index 2e9f5126963b..0bf2d0f6bf4b 100755 --- a/samples/bpf/test_lwt_bpf.sh +++ 
b/samples/bpf/test_lwt_bpf.sh @@ -21,7 +21,7 @@ IP_LOCAL="192.168.99.1" PROG_SRC="test_lwt_bpf.c" BPF_PROG="test_lwt_bpf.o" -TRACE_ROOT=/sys/kernel/debug/tracing +TRACE_ROOT=/sys/kernel/tracing CONTEXT_INFO=$(cat ${TRACE_ROOT}/trace_options | grep context) function lookup_mac() diff --git a/samples/bpf/test_overhead_tp.bpf.c b/samples/bpf/test_overhead_tp.bpf.c index 67cab3881969..8b498328e961 100644 --- a/samples/bpf/test_overhead_tp.bpf.c +++ b/samples/bpf/test_overhead_tp.bpf.c @@ -7,7 +7,7 @@ #include "vmlinux.h" #include <bpf/bpf_helpers.h> -/* from /sys/kernel/debug/tracing/events/task/task_rename/format */ +/* from /sys/kernel/tracing/events/task/task_rename/format */ struct task_rename { __u64 pad; __u32 pid; @@ -21,7 +21,7 @@ int prog(struct task_rename *ctx) return 0; } -/* from /sys/kernel/debug/tracing/events/fib/fib_table_lookup/format */ +/* from /sys/kernel/tracing/events/fib/fib_table_lookup/format */ struct fib_table_lookup { __u64 pad; __u32 tb_id; diff --git a/scripts/bpf_doc.py b/scripts/bpf_doc.py index 38d51e05c7a2..eaae2ce78381 100755 --- a/scripts/bpf_doc.py +++ b/scripts/bpf_doc.py @@ -383,7 +383,7 @@ class PrinterRST(Printer): .. Copyright (C) All BPF authors and contributors from 2014 to present. .. See git log include/uapi/linux/bpf.h in kernel tree for details. .. -.. SPDX-License-Identifier: Linux-man-pages-copyleft +.. SPDX-License-Identifier: Linux-man-pages-copyleft .. .. Please do not edit this file. It was generated from the documentation .. located in file include/uapi/linux/bpf.h of the Linux kernel sources diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst index 14de72544995..9443c524bb76 100644 --- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst +++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst @@ -28,8 +28,8 @@ PROG COMMANDS ============= | **bpftool** **prog** { **show** | **list** } [*PROG*] -| **bpftool** **prog dump xlated** *PROG* [{**file** *FILE* | **opcodes** | **visual** | **linum**}] -| **bpftool** **prog dump jited** *PROG* [{**file** *FILE* | **opcodes** | **linum**}] +| **bpftool** **prog dump xlated** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] [**visual**] }] +| **bpftool** **prog dump jited** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] }] | **bpftool** **prog pin** *PROG* *FILE* | **bpftool** **prog** { **load** | **loadall** } *OBJ* *PATH* [**type** *TYPE*] [**map** {**idx** *IDX* | **name** *NAME*} *MAP*] [**dev** *NAME*] [**pinmaps** *MAP_DIR*] [**autoattach**] | **bpftool** **prog attach** *PROG* *ATTACH_TYPE* [*MAP*] @@ -88,7 +88,7 @@ DESCRIPTION programs. On such kernels bpftool will automatically emit this information as well. - **bpftool prog dump xlated** *PROG* [{ **file** *FILE* | **opcodes** | **visual** | **linum** }] + **bpftool prog dump xlated** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] [**visual**] }] Dump eBPF instructions of the programs from the kernel. By default, eBPF will be disassembled and printed to standard output in human-readable format. In this case, **opcodes** @@ -106,11 +106,10 @@ DESCRIPTION CFG in DOT format, on standard output. If the programs have line_info available, the source line will - be displayed by default. If **linum** is specified, - the filename, line number and line column will also be - displayed on top of the source line. + be displayed. If **linum** is specified, the filename, line + number and line column will also be displayed. 
- **bpftool prog dump jited** *PROG* [{ **file** *FILE* | **opcodes** | **linum** }] + **bpftool prog dump jited** *PROG* [{ **file** *FILE* | [**opcodes**] [**linum**] }] Dump jited image (host machine code) of the program. If *FILE* is specified image will be written to a file, @@ -120,9 +119,8 @@ DESCRIPTION **opcodes** controls if raw opcodes will be printed. If the prog has line_info available, the source line will - be displayed by default. If **linum** is specified, - the filename, line number and line column will also be - displayed on top of the source line. + be displayed. If **linum** is specified, the filename, line + number and line column will also be displayed. **bpftool prog pin** *PROG* *FILE* Pin program *PROG* as *FILE*. diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool index 35f26f7c1124..e7234d1a5306 100644 --- a/tools/bpf/bpftool/bash-completion/bpftool +++ b/tools/bpf/bpftool/bash-completion/bpftool @@ -255,20 +255,23 @@ _bpftool_map_update_get_name() _bpftool() { - local cur prev words objword + local cur prev words objword json=0 _init_completion || return # Deal with options if [[ ${words[cword]} == -* ]]; then local c='--version --json --pretty --bpffs --mapcompat --debug \ - --use-loader --base-btf' + --use-loader --base-btf' COMPREPLY=( $( compgen -W "$c" -- "$cur" ) ) return 0 fi + if _bpftool_search_list -j --json -p --pretty; then + json=1 + fi # Deal with simplest keywords case $prev in - help|hex|opcodes|visual|linum) + help|hex) return 0 ;; tag) @@ -366,13 +369,16 @@ _bpftool() return 0 ;; *) - _bpftool_once_attr 'file' - if _bpftool_search_list 'xlated'; then - COMPREPLY+=( $( compgen -W 'opcodes visual linum' -- \ - "$cur" ) ) - else - COMPREPLY+=( $( compgen -W 'opcodes linum' -- \ - "$cur" ) ) + # "file" is not compatible with other keywords here + if _bpftool_search_list 'file'; then + return 0 + fi + if ! 
_bpftool_search_list 'linum opcodes visual'; then + _bpftool_once_attr 'file' + fi + _bpftool_once_attr 'linum opcodes' + if _bpftool_search_list 'xlated' && [[ "$json" == 0 ]]; then + _bpftool_once_attr 'visual' fi return 0 ;; @@ -502,10 +508,7 @@ _bpftool() ;; *) COMPREPLY=( $( compgen -W "map" -- "$cur" ) ) - _bpftool_once_attr 'type' - _bpftool_once_attr 'dev' - _bpftool_once_attr 'pinmaps' - _bpftool_once_attr 'autoattach' + _bpftool_once_attr 'type dev pinmaps autoattach' return 0 ;; esac @@ -730,16 +733,10 @@ _bpftool() esac ;; *) - _bpftool_once_attr 'type' - _bpftool_once_attr 'key' - _bpftool_once_attr 'value' - _bpftool_once_attr 'entries' - _bpftool_once_attr 'name' - _bpftool_once_attr 'flags' + _bpftool_once_attr 'type key value entries name flags dev' if _bpftool_search_list 'array_of_maps' 'hash_of_maps'; then _bpftool_once_attr 'inner_map' fi - _bpftool_once_attr 'dev' return 0 ;; esac @@ -880,8 +877,7 @@ _bpftool() return 0 ;; *) - _bpftool_once_attr 'cpu' - _bpftool_once_attr 'index' + _bpftool_once_attr 'cpu index' return 0 ;; esac diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c index e7f6ec3a8f35..6c5e0e82da22 100644 --- a/tools/bpf/bpftool/btf_dumper.c +++ b/tools/bpf/bpftool/btf_dumper.c @@ -821,3 +821,86 @@ void btf_dump_linfo_json(const struct btf *btf, BPF_LINE_INFO_LINE_COL(linfo->line_col)); } } + +static void dotlabel_puts(const char *s) +{ + for (; *s; ++s) { + switch (*s) { + case '\\': + case '"': + case '{': + case '}': + case '<': + case '>': + case '|': + case ' ': + putchar('\\'); + __fallthrough; + default: + putchar(*s); + } + } +} + +static const char *shorten_path(const char *path) +{ + const unsigned int MAX_PATH_LEN = 32; + size_t len = strlen(path); + const char *shortpath; + + if (len <= MAX_PATH_LEN) + return path; + + /* Search for last '/' under the MAX_PATH_LEN limit */ + shortpath = strchr(path + len - MAX_PATH_LEN, '/'); + if (shortpath) { + if (shortpath < path + strlen("...")) + /* We removed a very short prefix, e.g. "/w", and we'll + * make the path longer by prefixing with the ellipsis. + * Not worth it, keep initial path. + */ + return path; + return shortpath; + } + + /* File base name length is > MAX_PATH_LEN, search for last '/' */ + shortpath = strrchr(path, '/'); + if (shortpath) + return shortpath; + + return path; +} + +void btf_dump_linfo_dotlabel(const struct btf *btf, + const struct bpf_line_info *linfo, bool linum) +{ + const char *line = btf__name_by_offset(btf, linfo->line_off); + + if (!line || !strlen(line)) + return; + line = ltrim(line); + + if (linum) { + const char *file = btf__name_by_offset(btf, linfo->file_name_off); + const char *shortfile; + + /* More forgiving on file because linum option is + * expected to provide more info than the already + * available src line. + */ + if (!file) + shortfile = ""; + else + shortfile = shorten_path(file); + + printf("; [%s", shortfile > file ? "..." 
: ""); + dotlabel_puts(shortfile); + printf(" line:%u col:%u]\\l\\\n", + BPF_LINE_INFO_LINE_NUM(linfo->line_col), + BPF_LINE_INFO_LINE_COL(linfo->line_col)); + } + + printf("; "); + dotlabel_puts(line); + printf("\\l\\\n"); +} diff --git a/tools/bpf/bpftool/cfg.c b/tools/bpf/bpftool/cfg.c index 1951219a9af7..eec437cca2ea 100644 --- a/tools/bpf/bpftool/cfg.c +++ b/tools/bpf/bpftool/cfg.c @@ -380,7 +380,9 @@ static void cfg_destroy(struct cfg *cfg) } } -static void draw_bb_node(struct func_node *func, struct bb_node *bb) +static void +draw_bb_node(struct func_node *func, struct bb_node *bb, struct dump_data *dd, + bool opcodes, bool linum) { const char *shape; @@ -398,13 +400,10 @@ static void draw_bb_node(struct func_node *func, struct bb_node *bb) printf("EXIT"); } else { unsigned int start_idx; - struct dump_data dd = {}; - - printf("{"); - kernel_syms_load(&dd); + printf("{\\\n"); start_idx = bb->head - func->start; - dump_xlated_for_graph(&dd, bb->head, bb->tail, start_idx); - kernel_syms_destroy(&dd); + dump_xlated_for_graph(dd, bb->head, bb->tail, start_idx, + opcodes, linum); printf("}"); } @@ -430,12 +429,14 @@ static void draw_bb_succ_edges(struct func_node *func, struct bb_node *bb) } } -static void func_output_bb_def(struct func_node *func) +static void +func_output_bb_def(struct func_node *func, struct dump_data *dd, + bool opcodes, bool linum) { struct bb_node *bb; list_for_each_entry(bb, &func->bbs, l) { - draw_bb_node(func, bb); + draw_bb_node(func, bb, dd, opcodes, linum); } } @@ -455,7 +456,8 @@ static void func_output_edges(struct func_node *func) func_idx, ENTRY_BLOCK_INDEX, func_idx, EXIT_BLOCK_INDEX); } -static void cfg_dump(struct cfg *cfg) +static void +cfg_dump(struct cfg *cfg, struct dump_data *dd, bool opcodes, bool linum) { struct func_node *func; @@ -463,14 +465,15 @@ static void cfg_dump(struct cfg *cfg) list_for_each_entry(func, &cfg->funcs, l) { printf("subgraph \"cluster_%d\" {\n\tstyle=\"dashed\";\n\tcolor=\"black\";\n\tlabel=\"func_%d ()\";\n", func->idx, func->idx); - func_output_bb_def(func); + func_output_bb_def(func, dd, opcodes, linum); func_output_edges(func); printf("}\n"); } printf("}\n"); } -void dump_xlated_cfg(void *buf, unsigned int len) +void dump_xlated_cfg(struct dump_data *dd, void *buf, unsigned int len, + bool opcodes, bool linum) { struct bpf_insn *insn = buf; struct cfg cfg; @@ -479,7 +482,7 @@ void dump_xlated_cfg(void *buf, unsigned int len) if (cfg_build(&cfg, insn, len)) return; - cfg_dump(&cfg); + cfg_dump(&cfg, dd, opcodes, linum); cfg_destroy(&cfg); } diff --git a/tools/bpf/bpftool/cfg.h b/tools/bpf/bpftool/cfg.h index e144257ea6d2..b3793f4e1783 100644 --- a/tools/bpf/bpftool/cfg.h +++ b/tools/bpf/bpftool/cfg.h @@ -4,6 +4,9 @@ #ifndef __BPF_TOOL_CFG_H #define __BPF_TOOL_CFG_H -void dump_xlated_cfg(void *buf, unsigned int len); +#include "xlated_dumper.h" + +void dump_xlated_cfg(struct dump_data *dd, void *buf, unsigned int len, + bool opcodes, bool linum); #endif /* __BPF_TOOL_CFG_H */ diff --git a/tools/bpf/bpftool/json_writer.c b/tools/bpf/bpftool/json_writer.c index bca5dd0a59e3..be379613d118 100644 --- a/tools/bpf/bpftool/json_writer.c +++ b/tools/bpf/bpftool/json_writer.c @@ -75,7 +75,7 @@ static void jsonw_puts(json_writer_t *self, const char *str) fputs("\\b", self->out); break; case '\\': - fputs("\\n", self->out); + fputs("\\\\", self->out); break; case '"': fputs("\\\"", self->out); diff --git a/tools/bpf/bpftool/json_writer.h b/tools/bpf/bpftool/json_writer.h index 8ace65cdb92f..5aaffd3b837b 100644 --- 
a/tools/bpf/bpftool/json_writer.h +++ b/tools/bpf/bpftool/json_writer.h @@ -14,6 +14,7 @@ #include <stdbool.h> #include <stdint.h> #include <stdarg.h> +#include <stdio.h> #include <linux/compiler.h> /* Opaque class structure */ diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h index 0ef373cef4c7..00d11ca6d3f2 100644 --- a/tools/bpf/bpftool/main.h +++ b/tools/bpf/bpftool/main.h @@ -229,6 +229,8 @@ void btf_dump_linfo_plain(const struct btf *btf, const char *prefix, bool linum); void btf_dump_linfo_json(const struct btf *btf, const struct bpf_line_info *linfo, bool linum); +void btf_dump_linfo_dotlabel(const struct btf *btf, + const struct bpf_line_info *linfo, bool linum); struct nlattr; struct ifinfomsg; diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c index afbe3ec342c8..e5b613a7974c 100644 --- a/tools/bpf/bpftool/prog.c +++ b/tools/bpf/bpftool/prog.c @@ -840,11 +840,6 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode, false)) goto exit_free; } - } else if (visual) { - if (json_output) - jsonw_null(json_wtr); - else - dump_xlated_cfg(buf, member_len); } else { kernel_syms_load(&dd); dd.nr_jited_ksyms = info->nr_jited_ksyms; @@ -855,11 +850,11 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode, dd.prog_linfo = prog_linfo; if (json_output) - dump_xlated_json(&dd, buf, member_len, opcodes, - linum); + dump_xlated_json(&dd, buf, member_len, opcodes, linum); + else if (visual) + dump_xlated_cfg(&dd, buf, member_len, opcodes, linum); else - dump_xlated_plain(&dd, buf, member_len, opcodes, - linum); + dump_xlated_plain(&dd, buf, member_len, opcodes, linum); kernel_syms_destroy(&dd); } @@ -910,37 +905,46 @@ static int do_dump(int argc, char **argv) if (nb_fds < 1) goto exit_free; - if (is_prefix(*argv, "file")) { - NEXT_ARG(); - if (!argc) { - p_err("expected file path"); - goto exit_close; - } - if (nb_fds > 1) { - p_err("several programs matched"); - goto exit_close; - } + while (argc) { + if (is_prefix(*argv, "file")) { + NEXT_ARG(); + if (!argc) { + p_err("expected file path"); + goto exit_close; + } + if (nb_fds > 1) { + p_err("several programs matched"); + goto exit_close; + } - filepath = *argv; - NEXT_ARG(); - } else if (is_prefix(*argv, "opcodes")) { - opcodes = true; - NEXT_ARG(); - } else if (is_prefix(*argv, "visual")) { - if (nb_fds > 1) { - p_err("several programs matched"); + filepath = *argv; + NEXT_ARG(); + } else if (is_prefix(*argv, "opcodes")) { + opcodes = true; + NEXT_ARG(); + } else if (is_prefix(*argv, "visual")) { + if (nb_fds > 1) { + p_err("several programs matched"); + goto exit_close; + } + + visual = true; + NEXT_ARG(); + } else if (is_prefix(*argv, "linum")) { + linum = true; + NEXT_ARG(); + } else { + usage(); goto exit_close; } - - visual = true; - NEXT_ARG(); - } else if (is_prefix(*argv, "linum")) { - linum = true; - NEXT_ARG(); } - if (argc) { - usage(); + if (filepath && (opcodes || visual || linum)) { + p_err("'file' is not compatible with 'opcodes', 'visual', or 'linum'"); + goto exit_close; + } + if (json_output && visual) { + p_err("'visual' is not compatible with JSON output"); goto exit_close; } @@ -1681,7 +1685,8 @@ static int load_with_options(int argc, char **argv, bool first_prog_only) } bpf_program__set_ifindex(pos, ifindex); - bpf_program__set_type(pos, prog_type); + if (bpf_program__type(pos) != prog_type) + bpf_program__set_type(pos, prog_type); bpf_program__set_expected_attach_type(pos, expected_attach_type); } @@ -2420,8 +2425,8 @@ static int do_help(int argc, char **argv) fprintf(stderr, 
"Usage: %1$s %2$s { show | list } [PROG]\n" - " %1$s %2$s dump xlated PROG [{ file FILE | opcodes | visual | linum }]\n" - " %1$s %2$s dump jited PROG [{ file FILE | opcodes | linum }]\n" + " %1$s %2$s dump xlated PROG [{ file FILE | [opcodes] [linum] [visual] }]\n" + " %1$s %2$s dump jited PROG [{ file FILE | [opcodes] [linum] }]\n" " %1$s %2$s pin PROG FILE\n" " %1$s %2$s { load | loadall } OBJ PATH \\\n" " [type TYPE] [dev NAME] \\\n" diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c index 6fe3134ae45d..da608e10c843 100644 --- a/tools/bpf/bpftool/xlated_dumper.c +++ b/tools/bpf/bpftool/xlated_dumper.c @@ -361,7 +361,8 @@ void dump_xlated_plain(struct dump_data *dd, void *buf, unsigned int len, } void dump_xlated_for_graph(struct dump_data *dd, void *buf_start, void *buf_end, - unsigned int start_idx) + unsigned int start_idx, + bool opcodes, bool linum) { const struct bpf_insn_cbs cbs = { .cb_print = print_insn_for_graph, @@ -369,14 +370,61 @@ void dump_xlated_for_graph(struct dump_data *dd, void *buf_start, void *buf_end, .cb_imm = print_imm, .private_data = dd, }; + const struct bpf_prog_linfo *prog_linfo = dd->prog_linfo; + const struct bpf_line_info *last_linfo = NULL; + struct bpf_func_info *record = dd->func_info; struct bpf_insn *insn_start = buf_start; struct bpf_insn *insn_end = buf_end; struct bpf_insn *cur = insn_start; + struct btf *btf = dd->btf; + bool double_insn = false; + char func_sig[1024]; for (; cur <= insn_end; cur++) { - printf("% 4d: ", (int)(cur - insn_start + start_idx)); + unsigned int insn_off; + + if (double_insn) { + double_insn = false; + continue; + } + double_insn = cur->code == (BPF_LD | BPF_IMM | BPF_DW); + + insn_off = (unsigned int)(cur - insn_start + start_idx); + if (btf && record) { + if (record->insn_off == insn_off) { + btf_dumper_type_only(btf, record->type_id, + func_sig, + sizeof(func_sig)); + if (func_sig[0] != '\0') + printf("; %s:\\l\\\n", func_sig); + record = (void *)record + dd->finfo_rec_size; + } + } + + if (prog_linfo) { + const struct bpf_line_info *linfo; + + linfo = bpf_prog_linfo__lfind(prog_linfo, insn_off, 0); + if (linfo && linfo != last_linfo) { + btf_dump_linfo_dotlabel(btf, linfo, linum); + last_linfo = linfo; + } + } + + printf("%d: ", insn_off); print_bpf_insn(&cbs, cur, true); + + if (opcodes) { + printf("\\ \\ \\ \\ "); + fprint_hex(stdout, cur, 8, " "); + if (double_insn && cur <= insn_end - 1) { + printf(" "); + fprint_hex(stdout, cur + 1, 8, " "); + } + printf("\\l\\\n"); + } + if (cur != insn_end) - printf(" | "); + printf("| "); } } diff --git a/tools/bpf/bpftool/xlated_dumper.h b/tools/bpf/bpftool/xlated_dumper.h index 54847e174273..9a946377b0e6 100644 --- a/tools/bpf/bpftool/xlated_dumper.h +++ b/tools/bpf/bpftool/xlated_dumper.h @@ -34,6 +34,7 @@ void dump_xlated_json(struct dump_data *dd, void *buf, unsigned int len, void dump_xlated_plain(struct dump_data *dd, void *buf, unsigned int len, bool opcodes, bool linum); void dump_xlated_for_graph(struct dump_data *dd, void *buf, void *buf_end, - unsigned int start_index); + unsigned int start_index, + bool opcodes, bool linum); #endif diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 976b194eb775..3823100b7934 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -1033,6 +1033,7 @@ enum bpf_attach_type { BPF_PERF_EVENT, BPF_TRACE_KPROBE_MULTI, BPF_LSM_CGROUP, + BPF_STRUCT_OPS, __MAX_BPF_ATTACH_TYPE }; @@ -1108,7 +1109,7 @@ enum bpf_link_type { */ #define 
BPF_F_STRICT_ALIGNMENT (1U << 0) -/* If BPF_F_ANY_ALIGNMENT is used in BPF_PROF_LOAD command, the +/* If BPF_F_ANY_ALIGNMENT is used in BPF_PROG_LOAD command, the * verifier will allow any alignment whatsoever. On platforms * with strict alignment requirements for loads ands stores (such * as sparc and mips) the verifier validates that all loads and @@ -1266,6 +1267,9 @@ enum { /* Create a map that is suitable to be an inner map with dynamic max entries */ BPF_F_INNER_MAP = (1U << 12), + +/* Create a map that will be registered/unregesitered by the backed bpf_link */ + BPF_F_LINK = (1U << 13), }; /* Flags for BPF_PROG_QUERY. */ @@ -1403,6 +1407,11 @@ union bpf_attr { __aligned_u64 fd_array; /* array of FDs */ __aligned_u64 core_relos; __u32 core_relo_rec_size; /* sizeof(struct bpf_core_relo) */ + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + */ + __u32 log_true_size; }; struct { /* anonymous struct used by BPF_OBJ_* commands */ @@ -1488,6 +1497,11 @@ union bpf_attr { __u32 btf_size; __u32 btf_log_size; __u32 btf_log_level; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + */ + __u32 btf_log_true_size; }; struct { @@ -1507,7 +1521,10 @@ union bpf_attr { } task_fd_query; struct { /* struct used by BPF_LINK_CREATE command */ - __u32 prog_fd; /* eBPF program to attach */ + union { + __u32 prog_fd; /* eBPF program to attach */ + __u32 map_fd; /* struct_ops to attach */ + }; union { __u32 target_fd; /* object to attach to */ __u32 target_ifindex; /* target ifindex */ @@ -1548,12 +1565,23 @@ union bpf_attr { struct { /* struct used by BPF_LINK_UPDATE command */ __u32 link_fd; /* link fd */ - /* new program fd to update link with */ - __u32 new_prog_fd; + union { + /* new program fd to update link with */ + __u32 new_prog_fd; + /* new struct_ops map fd to update link with */ + __u32 new_map_fd; + }; __u32 flags; /* extra flags */ - /* expected link's program fd; is specified only if - * BPF_F_REPLACE flag is set in flags */ - __u32 old_prog_fd; + union { + /* expected link's program fd; is specified only if + * BPF_F_REPLACE flag is set in flags. + */ + __u32 old_prog_fd; + /* expected link's map fd; is specified only + * if BPF_F_REPLACE flag is set. + */ + __u32 old_map_fd; + }; } link_update; struct { @@ -1647,17 +1675,17 @@ union bpf_attr { * Description * This helper is a "printk()-like" facility for debugging. It * prints a message defined by format *fmt* (of size *fmt_size*) - * to file *\/sys/kernel/debug/tracing/trace* from DebugFS, if + * to file *\/sys/kernel/tracing/trace* from TraceFS, if * available. It can take up to three additional **u64** * arguments (as an eBPF helpers, the total number of arguments is * limited to five). * * Each time the helper is called, it appends a line to the trace. - * Lines are discarded while *\/sys/kernel/debug/tracing/trace* is - * open, use *\/sys/kernel/debug/tracing/trace_pipe* to avoid this. + * Lines are discarded while *\/sys/kernel/tracing/trace* is + * open, use *\/sys/kernel/tracing/trace_pipe* to avoid this. 
* The format of the trace is customizable, and the exact output * one will get depends on the options set in - * *\/sys/kernel/debug/tracing/trace_options* (see also the + * *\/sys/kernel/tracing/trace_options* (see also the * *README* file under the same directory). However, it usually * defaults to something like: * @@ -6379,6 +6407,9 @@ struct bpf_link_info { struct { __u32 ifindex; } xdp; + struct { + __u32 map_id; + } struct_ops; }; } __attribute__((aligned(8))); @@ -7112,4 +7143,12 @@ enum { BPF_F_TIMER_ABS = (1ULL << 0), }; +/* BPF numbers iterator state */ +struct bpf_iter_num { + /* opaque iterator state; having __u64 here allows to preserve correct + * alignment requirements in vmlinux.h, generated from BTF + */ + __u64 __opaque[1]; +} __attribute__((aligned(8))); + #endif /* _UAPI__LINUX_BPF_H__ */ diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c index e750b6f5fcc3..128ac723c4ea 100644 --- a/tools/lib/bpf/bpf.c +++ b/tools/lib/bpf/bpf.c @@ -230,9 +230,9 @@ alloc_zero_tailing_info(const void *orecord, __u32 cnt, int bpf_prog_load(enum bpf_prog_type prog_type, const char *prog_name, const char *license, const struct bpf_insn *insns, size_t insn_cnt, - const struct bpf_prog_load_opts *opts) + struct bpf_prog_load_opts *opts) { - const size_t attr_sz = offsetofend(union bpf_attr, fd_array); + const size_t attr_sz = offsetofend(union bpf_attr, log_true_size); void *finfo = NULL, *linfo = NULL; const char *func_info, *line_info; __u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd; @@ -290,10 +290,6 @@ int bpf_prog_load(enum bpf_prog_type prog_type, if (!!log_buf != !!log_size) return libbpf_err(-EINVAL); - if (log_level > (4 | 2 | 1)) - return libbpf_err(-EINVAL); - if (log_level && !log_buf) - return libbpf_err(-EINVAL); func_info_rec_size = OPTS_GET(opts, func_info_rec_size, 0); func_info = OPTS_GET(opts, func_info, NULL); @@ -316,6 +312,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, } fd = sys_bpf_prog_load(&attr, attr_sz, attempts); + OPTS_SET(opts, log_true_size, attr.log_true_size); if (fd >= 0) return fd; @@ -356,6 +353,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, } fd = sys_bpf_prog_load(&attr, attr_sz, attempts); + OPTS_SET(opts, log_true_size, attr.log_true_size); if (fd >= 0) goto done; } @@ -370,6 +368,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type, attr.log_level = 1; fd = sys_bpf_prog_load(&attr, attr_sz, attempts); + OPTS_SET(opts, log_true_size, attr.log_true_size); } done: /* free() doesn't affect errno, so we don't need to restore it */ @@ -794,11 +793,17 @@ int bpf_link_update(int link_fd, int new_prog_fd, if (!OPTS_VALID(opts, bpf_link_update_opts)) return libbpf_err(-EINVAL); + if (OPTS_GET(opts, old_prog_fd, 0) && OPTS_GET(opts, old_map_fd, 0)) + return libbpf_err(-EINVAL); + memset(&attr, 0, attr_sz); attr.link_update.link_fd = link_fd; attr.link_update.new_prog_fd = new_prog_fd; attr.link_update.flags = OPTS_GET(opts, flags, 0); - attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0); + if (OPTS_GET(opts, old_prog_fd, 0)) + attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0); + else if (OPTS_GET(opts, old_map_fd, 0)) + attr.link_update.old_map_fd = OPTS_GET(opts, old_map_fd, 0); ret = sys_bpf(BPF_LINK_UPDATE, &attr, attr_sz); return libbpf_err_errno(ret); @@ -1078,9 +1083,9 @@ int bpf_raw_tracepoint_open(const char *name, int prog_fd) return libbpf_err_errno(fd); } -int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_load_opts *opts) +int bpf_btf_load(const void *btf_data, size_t 
btf_size, struct bpf_btf_load_opts *opts) { - const size_t attr_sz = offsetofend(union bpf_attr, btf_log_level); + const size_t attr_sz = offsetofend(union bpf_attr, btf_log_true_size); union bpf_attr attr; char *log_buf; size_t log_size; @@ -1123,6 +1128,8 @@ int bpf_btf_load(const void *btf_data, size_t btf_size, const struct bpf_btf_loa attr.btf_log_level = 1; fd = sys_bpf_fd(BPF_BTF_LOAD, &attr, attr_sz); } + + OPTS_SET(opts, log_true_size, attr.btf_log_true_size); return libbpf_err_errno(fd); } diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h index f0f786373238..a2c091389b18 100644 --- a/tools/lib/bpf/bpf.h +++ b/tools/lib/bpf/bpf.h @@ -96,13 +96,20 @@ struct bpf_prog_load_opts { __u32 log_level; __u32 log_size; char *log_buf; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + * If kernel doesn't support this feature, log_size is left unchanged. + */ + __u32 log_true_size; + size_t :0; }; -#define bpf_prog_load_opts__last_field log_buf +#define bpf_prog_load_opts__last_field log_true_size LIBBPF_API int bpf_prog_load(enum bpf_prog_type prog_type, const char *prog_name, const char *license, const struct bpf_insn *insns, size_t insn_cnt, - const struct bpf_prog_load_opts *opts); + struct bpf_prog_load_opts *opts); /* Flags to direct loading requirements */ #define MAPS_RELAX_COMPAT 0x01 @@ -117,11 +124,18 @@ struct bpf_btf_load_opts { char *log_buf; __u32 log_level; __u32 log_size; + /* output: actual total log contents size (including termintaing zero). + * It could be both larger than original log_size (if log was + * truncated), or smaller (if log buffer wasn't filled completely). + * If kernel doesn't support this feature, log_size is left unchanged. 
+ */ + __u32 log_true_size; + size_t :0; }; -#define bpf_btf_load_opts__last_field log_size +#define bpf_btf_load_opts__last_field log_true_size LIBBPF_API int bpf_btf_load(const void *btf_data, size_t btf_size, - const struct bpf_btf_load_opts *opts); + struct bpf_btf_load_opts *opts); LIBBPF_API int bpf_map_update_elem(int fd, const void *key, const void *value, __u64 flags); @@ -336,8 +350,9 @@ struct bpf_link_update_opts { size_t sz; /* size of this struct for forward/backward compatibility */ __u32 flags; /* extra flags */ __u32 old_prog_fd; /* expected old program FD */ + __u32 old_map_fd; /* expected old map FD */ }; -#define bpf_link_update_opts__last_field old_prog_fd +#define bpf_link_update_opts__last_field old_map_fd LIBBPF_API int bpf_link_update(int link_fd, int new_prog_fd, const struct bpf_link_update_opts *opts); diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h index 223308931d55..fdf44403ff36 100644 --- a/tools/lib/bpf/bpf_gen_internal.h +++ b/tools/lib/bpf/bpf_gen_internal.h @@ -11,6 +11,7 @@ struct ksym_relo_desc { int insn_idx; bool is_weak; bool is_typeless; + bool is_ld64; }; struct ksym_desc { @@ -24,6 +25,7 @@ struct ksym_desc { bool typeless; }; int insn; + bool is_ld64; }; struct bpf_gen { @@ -65,7 +67,7 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *value, __u void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx); void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *name, enum bpf_attach_type type); void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak, - bool is_typeless, int kind, int insn_idx); + bool is_typeless, bool is_ld64, int kind, int insn_idx); void bpf_gen__record_relo_core(struct bpf_gen *gen, const struct bpf_core_relo *core_relo); void bpf_gen__populate_outer_map(struct bpf_gen *gen, int outer_map_idx, int key, int inner_map_idx); diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h index 7d12d3e620cc..e7e1a8acc299 100644 --- a/tools/lib/bpf/bpf_helpers.h +++ b/tools/lib/bpf/bpf_helpers.h @@ -177,6 +177,11 @@ enum libbpf_tristate { #define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted"))) #define __kptr __attribute__((btf_type_tag("kptr"))) +#define bpf_ksym_exists(sym) ({ \ + _Static_assert(!__builtin_constant_p(!!sym), #sym " should be marked as __weak"); \ + !!sym; \ +}) + #ifndef ___bpf_concat #define ___bpf_concat(a, b) a ## b #endif diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c index 23f5c46708f8..83e8e3bfd8ff 100644 --- a/tools/lib/bpf/gen_loader.c +++ b/tools/lib/bpf/gen_loader.c @@ -560,7 +560,7 @@ static void emit_find_attach_target(struct bpf_gen *gen) } void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak, - bool is_typeless, int kind, int insn_idx) + bool is_typeless, bool is_ld64, int kind, int insn_idx) { struct ksym_relo_desc *relo; @@ -574,6 +574,7 @@ void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak, relo->name = name; relo->is_weak = is_weak; relo->is_typeless = is_typeless; + relo->is_ld64 = is_ld64; relo->kind = kind; relo->insn_idx = insn_idx; gen->relo_cnt++; @@ -586,9 +587,11 @@ static struct ksym_desc *get_ksym_desc(struct bpf_gen *gen, struct ksym_relo_des int i; for (i = 0; i < gen->nr_ksyms; i++) { - if (!strcmp(gen->ksyms[i].name, relo->name)) { - gen->ksyms[i].ref++; - return &gen->ksyms[i]; + kdesc = &gen->ksyms[i]; + if (kdesc->kind == relo->kind && kdesc->is_ld64 == relo->is_ld64 && + 
!strcmp(kdesc->name, relo->name)) { + kdesc->ref++; + return kdesc; } } kdesc = libbpf_reallocarray(gen->ksyms, gen->nr_ksyms + 1, sizeof(*kdesc)); @@ -603,6 +606,7 @@ static struct ksym_desc *get_ksym_desc(struct bpf_gen *gen, struct ksym_relo_des kdesc->ref = 1; kdesc->off = 0; kdesc->insn = 0; + kdesc->is_ld64 = relo->is_ld64; return kdesc; } @@ -804,11 +808,13 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo, return; /* try to copy from existing ldimm64 insn */ if (kdesc->ref > 1) { - move_blob2blob(gen, insn + offsetof(struct bpf_insn, imm), 4, - kdesc->insn + offsetof(struct bpf_insn, imm)); move_blob2blob(gen, insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm), 4, kdesc->insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm)); - /* jump over src_reg adjustment if imm is not 0, reuse BPF_REG_0 from move_blob2blob */ + move_blob2blob(gen, insn + offsetof(struct bpf_insn, imm), 4, + kdesc->insn + offsetof(struct bpf_insn, imm)); + /* jump over src_reg adjustment if imm (btf_id) is not 0, reuse BPF_REG_0 from move_blob2blob + * If btf_id is zero, clear BPF_PSEUDO_BTF_ID flag in src_reg of ld_imm64 insn + */ emit(gen, BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3)); goto clear_src_reg; } @@ -831,7 +837,7 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo, emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_7, sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm))); /* skip src_reg adjustment */ - emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 3)); + emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 3)); clear_src_reg: /* clear bpf_object__relocate_data's src_reg assignment, otherwise we get a verifier failure */ reg_mask = src_reg_mask(); @@ -862,23 +868,17 @@ static void emit_relo(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insn { int insn; - pr_debug("gen: emit_relo (%d): %s at %d\n", relo->kind, relo->name, relo->insn_idx); + pr_debug("gen: emit_relo (%d): %s at %d %s\n", + relo->kind, relo->name, relo->insn_idx, relo->is_ld64 ? 
"ld64" : "call"); insn = insns + sizeof(struct bpf_insn) * relo->insn_idx; emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_8, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, insn)); - switch (relo->kind) { - case BTF_KIND_VAR: + if (relo->is_ld64) { if (relo->is_typeless) emit_relo_ksym_typeless(gen, relo, insn); else emit_relo_ksym_btf(gen, relo, insn); - break; - case BTF_KIND_FUNC: + } else { emit_relo_kfunc_btf(gen, relo, insn); - break; - default: - pr_warn("Unknown relocation kind '%d'\n", relo->kind); - gen->error = -EDOM; - return; } } @@ -901,18 +901,20 @@ static void cleanup_core_relo(struct bpf_gen *gen) static void cleanup_relos(struct bpf_gen *gen, int insns) { + struct ksym_desc *kdesc; int i, insn; for (i = 0; i < gen->nr_ksyms; i++) { + kdesc = &gen->ksyms[i]; /* only close fds for typed ksyms and kfuncs */ - if (gen->ksyms[i].kind == BTF_KIND_VAR && !gen->ksyms[i].typeless) { + if (kdesc->is_ld64 && !kdesc->typeless) { /* close fd recorded in insn[insn_idx + 1].imm */ - insn = gen->ksyms[i].insn; + insn = kdesc->insn; insn += sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm); emit_sys_close_blob(gen, insn); - } else if (gen->ksyms[i].kind == BTF_KIND_FUNC) { - emit_sys_close_blob(gen, blob_fd_array_off(gen, gen->ksyms[i].off)); - if (gen->ksyms[i].off < MAX_FD_ARRAY_SZ) + } else if (!kdesc->is_ld64) { + emit_sys_close_blob(gen, blob_fd_array_off(gen, kdesc->off)); + if (kdesc->off < MAX_FD_ARRAY_SZ) gen->nr_fd_array--; } } diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index a557718401e4..49cd304ae3bc 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -116,6 +116,7 @@ static const char * const attach_type_name[] = { [BPF_SK_REUSEPORT_SELECT_OR_MIGRATE] = "sk_reuseport_select_or_migrate", [BPF_PERF_EVENT] = "perf_event", [BPF_TRACE_KPROBE_MULTI] = "trace_kprobe_multi", + [BPF_STRUCT_OPS] = "struct_ops", }; static const char * const link_type_name[] = { @@ -215,9 +216,10 @@ static libbpf_print_fn_t __libbpf_pr = __base_pr; libbpf_print_fn_t libbpf_set_print(libbpf_print_fn_t fn) { - libbpf_print_fn_t old_print_fn = __libbpf_pr; + libbpf_print_fn_t old_print_fn; + + old_print_fn = __atomic_exchange_n(&__libbpf_pr, fn, __ATOMIC_RELAXED); - __libbpf_pr = fn; return old_print_fn; } @@ -226,8 +228,10 @@ void libbpf_print(enum libbpf_print_level level, const char *format, ...) 
{ va_list args; int old_errno; + libbpf_print_fn_t print_fn; - if (!__libbpf_pr) + print_fn = __atomic_load_n(&__libbpf_pr, __ATOMIC_RELAXED); + if (!print_fn) return; old_errno = errno; @@ -315,8 +319,8 @@ enum reloc_type { RELO_LD64, RELO_CALL, RELO_DATA, - RELO_EXTERN_VAR, - RELO_EXTERN_FUNC, + RELO_EXTERN_LD64, + RELO_EXTERN_CALL, RELO_SUBPROG_ADDR, RELO_CORE, }; @@ -467,6 +471,7 @@ struct bpf_struct_ops { #define KCONFIG_SEC ".kconfig" #define KSYMS_SEC ".ksyms" #define STRUCT_OPS_SEC ".struct_ops" +#define STRUCT_OPS_LINK_SEC ".struct_ops.link" enum libbpf_map_type { LIBBPF_MAP_UNSPEC, @@ -596,6 +601,7 @@ struct elf_state { Elf64_Ehdr *ehdr; Elf_Data *symbols; Elf_Data *st_ops_data; + Elf_Data *st_ops_link_data; size_t shstrndx; /* section index for section name strings */ size_t strtabidx; struct elf_sec_desc *secs; @@ -605,6 +611,7 @@ struct elf_state { int text_shndx; int symbols_shndx; int st_ops_shndx; + int st_ops_link_shndx; }; struct usdt_manager; @@ -1118,7 +1125,8 @@ static int bpf_object__init_kern_struct_ops_maps(struct bpf_object *obj) return 0; } -static int bpf_object__init_struct_ops_maps(struct bpf_object *obj) +static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name, + int shndx, Elf_Data *data, __u32 map_flags) { const struct btf_type *type, *datasec; const struct btf_var_secinfo *vsi; @@ -1129,15 +1137,15 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj) struct bpf_map *map; __u32 i; - if (obj->efile.st_ops_shndx == -1) + if (shndx == -1) return 0; btf = obj->btf; - datasec_id = btf__find_by_name_kind(btf, STRUCT_OPS_SEC, + datasec_id = btf__find_by_name_kind(btf, sec_name, BTF_KIND_DATASEC); if (datasec_id < 0) { pr_warn("struct_ops init: DATASEC %s not found\n", - STRUCT_OPS_SEC); + sec_name); return -EINVAL; } @@ -1150,7 +1158,7 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj) type_id = btf__resolve_type(obj->btf, vsi->type); if (type_id < 0) { pr_warn("struct_ops init: Cannot resolve var type_id %u in DATASEC %s\n", - vsi->type, STRUCT_OPS_SEC); + vsi->type, sec_name); return -EINVAL; } @@ -1169,7 +1177,7 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj) if (IS_ERR(map)) return PTR_ERR(map); - map->sec_idx = obj->efile.st_ops_shndx; + map->sec_idx = shndx; map->sec_offset = vsi->offset; map->name = strdup(var_name); if (!map->name) @@ -1179,6 +1187,7 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj) map->def.key_size = sizeof(int); map->def.value_size = type->size; map->def.max_entries = 1; + map->def.map_flags = map_flags; map->st_ops = calloc(1, sizeof(*map->st_ops)); if (!map->st_ops) @@ -1191,14 +1200,14 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj) if (!st_ops->data || !st_ops->progs || !st_ops->kern_func_off) return -ENOMEM; - if (vsi->offset + type->size > obj->efile.st_ops_data->d_size) { + if (vsi->offset + type->size > data->d_size) { pr_warn("struct_ops init: var %s is beyond the end of DATASEC %s\n", - var_name, STRUCT_OPS_SEC); + var_name, sec_name); return -EINVAL; } memcpy(st_ops->data, - obj->efile.st_ops_data->d_buf + vsi->offset, + data->d_buf + vsi->offset, type->size); st_ops->tname = tname; st_ops->type = type; @@ -1211,6 +1220,19 @@ static int bpf_object__init_struct_ops_maps(struct bpf_object *obj) return 0; } +static int bpf_object_init_struct_ops(struct bpf_object *obj) +{ + int err; + + err = init_struct_ops_maps(obj, STRUCT_OPS_SEC, obj->efile.st_ops_shndx, + obj->efile.st_ops_data, 0); + err = err ?: 
init_struct_ops_maps(obj, STRUCT_OPS_LINK_SEC, + obj->efile.st_ops_link_shndx, + obj->efile.st_ops_link_data, + BPF_F_LINK); + return err; +} + static struct bpf_object *bpf_object__new(const char *path, const void *obj_buf, size_t obj_buf_sz, @@ -1247,6 +1269,7 @@ static struct bpf_object *bpf_object__new(const char *path, obj->efile.obj_buf_sz = obj_buf_sz; obj->efile.btf_maps_shndx = -1; obj->efile.st_ops_shndx = -1; + obj->efile.st_ops_link_shndx = -1; obj->kconfig_map_idx = -1; obj->kern_version = get_kernel_version(); @@ -1264,6 +1287,7 @@ static void bpf_object__elf_finish(struct bpf_object *obj) obj->efile.elf = NULL; obj->efile.symbols = NULL; obj->efile.st_ops_data = NULL; + obj->efile.st_ops_link_data = NULL; zfree(&obj->efile.secs); obj->efile.sec_cnt = 0; @@ -2618,7 +2642,7 @@ static int bpf_object__init_maps(struct bpf_object *obj, err = bpf_object__init_user_btf_maps(obj, strict, pin_root_path); err = err ?: bpf_object__init_global_data_maps(obj); err = err ?: bpf_object__init_kconfig_map(obj); - err = err ?: bpf_object__init_struct_ops_maps(obj); + err = err ?: bpf_object_init_struct_ops(obj); return err; } @@ -2752,12 +2776,13 @@ static bool libbpf_needs_btf(const struct bpf_object *obj) { return obj->efile.btf_maps_shndx >= 0 || obj->efile.st_ops_shndx >= 0 || + obj->efile.st_ops_link_shndx >= 0 || obj->nr_extern > 0; } static bool kernel_needs_btf(const struct bpf_object *obj) { - return obj->efile.st_ops_shndx >= 0; + return obj->efile.st_ops_shndx >= 0 || obj->efile.st_ops_link_shndx >= 0; } static int bpf_object__init_btf(struct bpf_object *obj, @@ -3450,6 +3475,9 @@ static int bpf_object__elf_collect(struct bpf_object *obj) } else if (strcmp(name, STRUCT_OPS_SEC) == 0) { obj->efile.st_ops_data = data; obj->efile.st_ops_shndx = idx; + } else if (strcmp(name, STRUCT_OPS_LINK_SEC) == 0) { + obj->efile.st_ops_link_data = data; + obj->efile.st_ops_link_shndx = idx; } else { pr_info("elf: skipping unrecognized data section(%d) %s\n", idx, name); @@ -3464,6 +3492,7 @@ static int bpf_object__elf_collect(struct bpf_object *obj) /* Only do relo for section with exec instructions */ if (!section_have_execinstr(obj, targ_sec_idx) && strcmp(name, ".rel" STRUCT_OPS_SEC) && + strcmp(name, ".rel" STRUCT_OPS_LINK_SEC) && strcmp(name, ".rel" MAPS_ELF_SEC)) { pr_info("elf: skipping relo section(%d) %s for section(%d) %s\n", idx, name, targ_sec_idx, @@ -4009,9 +4038,9 @@ static int bpf_program__record_reloc(struct bpf_program *prog, pr_debug("prog '%s': found extern #%d '%s' (sym %d) for insn #%u\n", prog->name, i, ext->name, ext->sym_idx, insn_idx); if (insn->code == (BPF_JMP | BPF_CALL)) - reloc_desc->type = RELO_EXTERN_FUNC; + reloc_desc->type = RELO_EXTERN_CALL; else - reloc_desc->type = RELO_EXTERN_VAR; + reloc_desc->type = RELO_EXTERN_LD64; reloc_desc->insn_idx = insn_idx; reloc_desc->sym_off = i; /* sym_off stores extern index */ return 0; @@ -5855,7 +5884,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) relo->map_idx, map); } break; - case RELO_EXTERN_VAR: + case RELO_EXTERN_LD64: ext = &obj->externs[relo->sym_off]; if (ext->type == EXT_KCFG) { if (obj->gen_loader) { @@ -5877,7 +5906,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog) } } break; - case RELO_EXTERN_FUNC: + case RELO_EXTERN_CALL: ext = &obj->externs[relo->sym_off]; insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL; if (ext->is_set) { @@ -6115,7 +6144,7 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog, continue; relo = 
find_prog_insn_relo(prog, insn_idx); - if (relo && relo->type == RELO_EXTERN_FUNC) + if (relo && relo->type == RELO_EXTERN_CALL) /* kfunc relocations will be handled later * in bpf_object__relocate_data() */ @@ -6610,7 +6639,7 @@ static int bpf_object__collect_relos(struct bpf_object *obj) return -LIBBPF_ERRNO__INTERNAL; } - if (idx == obj->efile.st_ops_shndx) + if (idx == obj->efile.st_ops_shndx || idx == obj->efile.st_ops_link_shndx) err = bpf_object__collect_st_ops_relos(obj, shdr, data); else if (idx == obj->efile.btf_maps_shndx) err = bpf_object__collect_map_relos(obj, shdr, data); @@ -7070,18 +7099,21 @@ static int bpf_program_record_relos(struct bpf_program *prog) for (i = 0; i < prog->nr_reloc; i++) { struct reloc_desc *relo = &prog->reloc_desc[i]; struct extern_desc *ext = &obj->externs[relo->sym_off]; + int kind; switch (relo->type) { - case RELO_EXTERN_VAR: + case RELO_EXTERN_LD64: if (ext->type != EXT_KSYM) continue; + kind = btf_is_var(btf__type_by_id(obj->btf, ext->btf_id)) ? + BTF_KIND_VAR : BTF_KIND_FUNC; bpf_gen__record_extern(obj->gen_loader, ext->name, ext->is_weak, !ext->ksym.type_id, - BTF_KIND_VAR, relo->insn_idx); + true, kind, relo->insn_idx); break; - case RELO_EXTERN_FUNC: + case RELO_EXTERN_CALL: bpf_gen__record_extern(obj->gen_loader, ext->name, - ext->is_weak, false, BTF_KIND_FUNC, + ext->is_weak, false, false, BTF_KIND_FUNC, relo->insn_idx); break; case RELO_CORE: { @@ -7533,6 +7565,12 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj, ext->is_set = true; ext->ksym.kernel_btf_id = kfunc_id; ext->ksym.btf_fd_idx = mod_btf ? mod_btf->fd_array_idx : 0; + /* Also set kernel_btf_obj_fd to make sure that bpf_object__relocate_data() + * populates FD into ld_imm64 insn when it's used to point to kfunc. + * {kernel_btf_id, btf_fd_idx} -> fixup bpf_call. + * {kernel_btf_id, kernel_btf_obj_fd} -> fixup ld_imm64. + */ + ext->ksym.kernel_btf_obj_fd = mod_btf ? mod_btf->fd : 0; pr_debug("extern (func ksym) '%s': resolved to kernel [%d]\n", ext->name, kfunc_id); @@ -7677,6 +7715,37 @@ static int bpf_object__resolve_externs(struct bpf_object *obj, return 0; } +static void bpf_map_prepare_vdata(const struct bpf_map *map) +{ + struct bpf_struct_ops *st_ops; + __u32 i; + + st_ops = map->st_ops; + for (i = 0; i < btf_vlen(st_ops->type); i++) { + struct bpf_program *prog = st_ops->progs[i]; + void *kern_data; + int prog_fd; + + if (!prog) + continue; + + prog_fd = bpf_program__fd(prog); + kern_data = st_ops->kern_vdata + st_ops->kern_func_off[i]; + *(unsigned long *)kern_data = prog_fd; + } +} + +static int bpf_object_prepare_struct_ops(struct bpf_object *obj) +{ + int i; + + for (i = 0; i < obj->nr_maps; i++) + if (bpf_map__is_struct_ops(&obj->maps[i])) + bpf_map_prepare_vdata(&obj->maps[i]); + + return 0; +} + static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const char *target_btf_path) { int err, i; @@ -7702,6 +7771,7 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path); err = err ? : bpf_object__load_progs(obj, extra_log_level); err = err ? : bpf_object_init_prog_arrays(obj); + err = err ? 
: bpf_object_prepare_struct_ops(obj); if (obj->gen_loader) { /* reset FDs */ @@ -8398,6 +8468,7 @@ int bpf_program__set_type(struct bpf_program *prog, enum bpf_prog_type type) return libbpf_err(-EBUSY); prog->type = type; + prog->sec_def = NULL; return 0; } @@ -8811,6 +8882,7 @@ const char *libbpf_bpf_prog_type_str(enum bpf_prog_type t) } static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj, + int sec_idx, size_t offset) { struct bpf_map *map; @@ -8820,7 +8892,8 @@ static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj, map = &obj->maps[i]; if (!bpf_map__is_struct_ops(map)) continue; - if (map->sec_offset <= offset && + if (map->sec_idx == sec_idx && + map->sec_offset <= offset && offset - map->sec_offset < map->def.value_size) return map; } @@ -8862,7 +8935,7 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj, } name = elf_sym_str(obj, sym->st_name) ?: "<?>"; - map = find_struct_ops_map_by_offset(obj, rel->r_offset); + map = find_struct_ops_map_by_offset(obj, shdr->sh_info, rel->r_offset); if (!map) { pr_warn("struct_ops reloc: cannot find map at rel->r_offset %zu\n", (size_t)rel->r_offset); @@ -8929,8 +9002,9 @@ static int bpf_object__collect_st_ops_relos(struct bpf_object *obj, } /* struct_ops BPF prog can be re-used between multiple - * .struct_ops as long as it's the same struct_ops struct - * definition and the same function pointer field + * .struct_ops & .struct_ops.link as long as it's the + * same struct_ops struct definition and the same + * function pointer field */ if (prog->attach_btf_id != st_ops->type_id || prog->expected_attach_type != member_idx) { @@ -9912,16 +9986,20 @@ static int append_to_file(const char *file, const char *fmt, ...) { int fd, n, err = 0; va_list ap; + char buf[1024]; + + va_start(ap, fmt); + n = vsnprintf(buf, sizeof(buf), fmt, ap); + va_end(ap); + + if (n < 0 || n >= sizeof(buf)) + return -EINVAL; fd = open(file, O_WRONLY | O_APPEND | O_CLOEXEC, 0); if (fd < 0) return -errno; - va_start(ap, fmt); - n = vdprintf(fd, fmt, ap); - va_end(ap); - - if (n < 0) + if (write(fd, buf, n) < 0) err = -errno; close(fd); @@ -11566,22 +11644,30 @@ struct bpf_link *bpf_program__attach(const struct bpf_program *prog) return link; } +struct bpf_link_struct_ops { + struct bpf_link link; + int map_fd; +}; + static int bpf_link__detach_struct_ops(struct bpf_link *link) { + struct bpf_link_struct_ops *st_link; __u32 zero = 0; - if (bpf_map_delete_elem(link->fd, &zero)) - return -errno; + st_link = container_of(link, struct bpf_link_struct_ops, link); - return 0; + if (st_link->map_fd < 0) + /* w/o a real link */ + return bpf_map_delete_elem(link->fd, &zero); + + return close(link->fd); } struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map) { - struct bpf_struct_ops *st_ops; - struct bpf_link *link; - __u32 i, zero = 0; - int err; + struct bpf_link_struct_ops *link; + __u32 zero = 0; + int err, fd; if (!bpf_map__is_struct_ops(map) || map->fd == -1) return libbpf_err_ptr(-EINVAL); @@ -11590,31 +11676,72 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map) if (!link) return libbpf_err_ptr(-EINVAL); - st_ops = map->st_ops; - for (i = 0; i < btf_vlen(st_ops->type); i++) { - struct bpf_program *prog = st_ops->progs[i]; - void *kern_data; - int prog_fd; + /* kern_vdata should be prepared during the loading phase. */ + err = bpf_map_update_elem(map->fd, &zero, map->st_ops->kern_vdata, 0); + /* It can be EBUSY if the map has been used to create or + * update a link before. 
We don't allow updating the value of + * a struct_ops once it is set. That ensures that the value + * never changed. So, it is safe to skip EBUSY. + */ + if (err && (!(map->def.map_flags & BPF_F_LINK) || err != -EBUSY)) { + free(link); + return libbpf_err_ptr(err); + } - if (!prog) - continue; + link->link.detach = bpf_link__detach_struct_ops; - prog_fd = bpf_program__fd(prog); - kern_data = st_ops->kern_vdata + st_ops->kern_func_off[i]; - *(unsigned long *)kern_data = prog_fd; + if (!(map->def.map_flags & BPF_F_LINK)) { + /* w/o a real link */ + link->link.fd = map->fd; + link->map_fd = -1; + return &link->link; } - err = bpf_map_update_elem(map->fd, &zero, st_ops->kern_vdata, 0); - if (err) { - err = -errno; + fd = bpf_link_create(map->fd, 0, BPF_STRUCT_OPS, NULL); + if (fd < 0) { free(link); - return libbpf_err_ptr(err); + return libbpf_err_ptr(fd); } - link->detach = bpf_link__detach_struct_ops; - link->fd = map->fd; + link->link.fd = fd; + link->map_fd = map->fd; - return link; + return &link->link; +} + +/* + * Swap the back struct_ops of a link with a new struct_ops map. + */ +int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map) +{ + struct bpf_link_struct_ops *st_ops_link; + __u32 zero = 0; + int err; + + if (!bpf_map__is_struct_ops(map) || map->fd < 0) + return -EINVAL; + + st_ops_link = container_of(link, struct bpf_link_struct_ops, link); + /* Ensure the type of a link is correct */ + if (st_ops_link->map_fd < 0) + return -EINVAL; + + err = bpf_map_update_elem(map->fd, &zero, map->st_ops->kern_vdata, 0); + /* It can be EBUSY if the map has been used to create or + * update a link before. We don't allow updating the value of + * a struct_ops once it is set. That ensures that the value + * never changed. So, it is safe to skip EBUSY. + */ + if (err && err != -EBUSY) + return err; + + err = bpf_link_update(link->fd, map->fd, NULL); + if (err < 0) + return err; + + st_ops_link->map_fd = map->fd; + + return 0; } typedef enum bpf_perf_event_ret (*bpf_perf_event_print_t)(struct perf_event_header *hdr, diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index db4992a036f8..0b7362397ea3 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -101,6 +101,8 @@ typedef int (*libbpf_print_fn_t)(enum libbpf_print_level level, * be used for libbpf warnings and informational messages. * @param fn The log print function. If NULL, libbpf won't print anything. * @return Pointer to old print function. + * + * This function is thread-safe. 
*/ LIBBPF_API libbpf_print_fn_t libbpf_set_print(libbpf_print_fn_t fn); @@ -719,6 +721,7 @@ bpf_program__attach_freplace(const struct bpf_program *prog, struct bpf_map; LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map); +LIBBPF_API int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map); struct bpf_iter_attach_opts { size_t sz; /* size of this struct for forward/backward compatibility */ diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map index 50dde1f6521e..a5aa3a383d69 100644 --- a/tools/lib/bpf/libbpf.map +++ b/tools/lib/bpf/libbpf.map @@ -386,6 +386,7 @@ LIBBPF_1.1.0 { LIBBPF_1.2.0 { global: bpf_btf_get_info_by_fd; + bpf_link__update_map; bpf_link_get_info_by_fd; bpf_map_get_info_by_fd; bpf_prog_get_info_by_fd; diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c index d7069780984a..5ced96d99f8c 100644 --- a/tools/lib/bpf/linker.c +++ b/tools/lib/bpf/linker.c @@ -1115,7 +1115,19 @@ static int extend_sec(struct bpf_linker *linker, struct dst_sec *dst, struct src if (src->shdr->sh_type != SHT_NOBITS) { tmp = realloc(dst->raw_data, dst_final_sz); - if (!tmp) + /* If dst_align_sz == 0, realloc() behaves in a special way: + * 1. When dst->raw_data is NULL it returns: + * "either NULL or a pointer suitable to be passed to free()" [1]. + * 2. When dst->raw_data is not-NULL it frees dst->raw_data and returns NULL, + * thus invalidating any "pointer suitable to be passed to free()" obtained + * at step (1). + * + * The dst_align_sz > 0 check avoids error exit after (2), otherwise + * dst->raw_data would be freed again in bpf_linker__free(). + * + * [1] man 3 realloc + */ + if (!tmp && dst_align_sz > 0) return -ENOMEM; dst->raw_data = tmp; diff --git a/tools/lib/bpf/zip.c b/tools/lib/bpf/zip.c index f561aa07438f..3f26d629b2b4 100644 --- a/tools/lib/bpf/zip.c +++ b/tools/lib/bpf/zip.c @@ -16,6 +16,10 @@ #include "libbpf_internal.h" #include "zip.h" +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wpacked" +#pragma GCC diagnostic ignored "-Wattributes" + /* Specification of ZIP file format can be found here: * https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT * For a high level overview of the structure of a ZIP file see @@ -119,6 +123,8 @@ struct local_file_header { __u16 extra_field_length; } __attribute__((packed)); +#pragma GCC diagnostic pop + struct zip_archive { void *data; __u32 size; diff --git a/tools/testing/selftests/bpf/DENYLIST.aarch64 b/tools/testing/selftests/bpf/DENYLIST.aarch64 index 99cc33c51eaa..0a6837f97c32 100644 --- a/tools/testing/selftests/bpf/DENYLIST.aarch64 +++ b/tools/testing/selftests/bpf/DENYLIST.aarch64 @@ -44,6 +44,7 @@ lookup_key # test_lookup_key__attach unexp lru_bug # lru_bug__attach unexpected error: -524 (errno 524) modify_return # modify_return__attach failed unexpected error: -524 (errno 524) module_attach # skel_attach skeleton attach failed: -524 +module_fentry_shadow # bpf_link_create unexpected bpf_link_create: actual -524 < expected 0 mptcp/base # run_test mptcp unexpected error: -524 (errno 524) netcnt # packets unexpected packets: actual 10001 != expected 10000 rcu_read_lock # failed to attach: ERROR: strerror_r(-524)=22 diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index a02a085e7f32..c7463f3ec3c0 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -8,8 +8,10 @@ dynptr/test_dynptr_skb_data dynptr/test_skb_readonly fexit_sleep # fexit_skel_load 
fexit skeleton failed (trampoline) get_stack_raw_tp # user_stack corrupted user stack (no backchain userspace) +iters/testmod_seq* # s390x doesn't support kfuncs in modules yet kprobe_multi_bench_attach # bpf_program__attach_kprobe_multi_opts unexpected error: -95 kprobe_multi_test # relies on fentry +ksyms_btf/weak_ksyms* # test_ksyms_weak__open_and_load unexpected error: -22 (kfunc) ksyms_module # test_ksyms_module__open_and_load unexpected error: -9 (?) ksyms_module_libbpf # JIT does not support calling kernel function (kfunc) ksyms_module_lskel # test_ksyms_module_lskel__open_and_load unexpected error: -9 (?) diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 16f404aa1b23..c49e5403ad0e 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -36,7 +36,7 @@ endif # Order correspond to 'make run_tests' order TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \ - test_verifier_log test_dev_cgroup \ + test_dev_cgroup \ test_sock test_sockmap get_cgroup_id_user \ test_cgroup_storage \ test_tcpnotify_user test_sysctl \ @@ -201,7 +201,7 @@ $(OUTPUT)/sign-file: ../../../../scripts/sign-file.c $< -o $@ \ $(shell $(HOSTPKG_CONFIG) --libs libcrypto 2> /dev/null || echo -lcrypto) -$(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_testmod/*.[ch]) +$(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(RESOLVE_BTFIDS) $(wildcard bpf_testmod/Makefile bpf_testmod/*.[ch]) $(call msg,MOD,,$@) $(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation $(Q)$(MAKE) $(submake_extras) RESOLVE_BTFIDS=$(RESOLVE_BTFIDS) -C bpf_testmod @@ -231,9 +231,11 @@ TEST_GEN_PROGS_EXTENDED += $(TRUNNER_BPFTOOL) $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED): $(BPFOBJ) -CGROUP_HELPERS := $(OUTPUT)/cgroup_helpers.o TESTING_HELPERS := $(OUTPUT)/testing_helpers.o +CGROUP_HELPERS := $(OUTPUT)/cgroup_helpers.o +UNPRIV_HELPERS := $(OUTPUT)/unpriv_helpers.o TRACE_HELPERS := $(OUTPUT)/trace_helpers.o +JSON_WRITER := $(OUTPUT)/json_writer.o CAP_HELPERS := $(OUTPUT)/cap_helpers.o $(OUTPUT)/test_dev_cgroup: $(CGROUP_HELPERS) $(TESTING_HELPERS) @@ -251,7 +253,7 @@ $(OUTPUT)/test_lirc_mode2_user: $(TESTING_HELPERS) $(OUTPUT)/xdping: $(TESTING_HELPERS) $(OUTPUT)/flow_dissector_load: $(TESTING_HELPERS) $(OUTPUT)/test_maps: $(TESTING_HELPERS) -$(OUTPUT)/test_verifier: $(TESTING_HELPERS) $(CAP_HELPERS) +$(OUTPUT)/test_verifier: $(TESTING_HELPERS) $(CAP_HELPERS) $(UNPRIV_HELPERS) $(OUTPUT)/xsk.o: $(BPFOBJ) BPFTOOL ?= $(DEFAULT_BPFTOOL) @@ -352,12 +354,12 @@ CLANG_TARGET_ARCH = --target=$(notdir $(CROSS_COMPILE:%-=%)) endif CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH)) -BPF_CFLAGS = -g -Werror -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \ +BPF_CFLAGS = -g -Wall -Werror -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \ -I$(INCLUDE_DIR) -I$(CURDIR) -I$(APIDIR) \ -I$(abspath $(OUTPUT)/../usr/include) CLANG_CFLAGS = $(CLANG_SYS_INCLUDES) \ - -Wno-compare-distinct-pointer-types -Wuninitialized + -Wno-compare-distinct-pointer-types $(OUTPUT)/test_l4lb_noinline.o: BPF_CFLAGS += -fno-inline $(OUTPUT)/test_xdp_noinline.o: BPF_CFLAGS += -fno-inline @@ -559,7 +561,9 @@ TRUNNER_BPF_PROGS_DIR := progs TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \ network_helpers.c testing_helpers.c \ btf_helpers.c flow_dissector_load.h \ - cap_helpers.c test_loader.c xsk.c disasm.c + cap_helpers.c test_loader.c xsk.c disasm.c \ + json_writer.c unpriv_helpers.c + TRUNNER_EXTRA_FILES := 
$(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \ $(OUTPUT)/liburandom_read.so \ $(OUTPUT)/xdp_synproxy \ @@ -608,7 +612,7 @@ $(OUTPUT)/test_verifier: test_verifier.c verifier/tests.h $(BPFOBJ) | $(OUTPUT) $(call msg,BINARY,,$@) $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@ -$(OUTPUT)/xskxceiver: xskxceiver.c $(OUTPUT)/xsk.o $(OUTPUT)/xsk_xdp_progs.skel.h $(BPFOBJ) | $(OUTPUT) +$(OUTPUT)/xskxceiver: xskxceiver.c xskxceiver.h $(OUTPUT)/xsk.o $(OUTPUT)/xsk_xdp_progs.skel.h $(BPFOBJ) | $(OUTPUT) $(call msg,BINARY,,$@) $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@ @@ -639,6 +643,7 @@ $(OUTPUT)/bench_strncmp.o: $(OUTPUT)/strncmp_bench.skel.h $(OUTPUT)/bench_bpf_hashmap_full_update.o: $(OUTPUT)/bpf_hashmap_full_update_bench.skel.h $(OUTPUT)/bench_local_storage.o: $(OUTPUT)/local_storage_bench.skel.h $(OUTPUT)/bench_local_storage_rcu_tasks_trace.o: $(OUTPUT)/local_storage_rcu_tasks_trace_bench.skel.h +$(OUTPUT)/bench_local_storage_create.o: $(OUTPUT)/bench_local_storage_create.skel.h $(OUTPUT)/bench_bpf_hashmap_lookup.o: $(OUTPUT)/bpf_hashmap_lookup.skel.h $(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ) $(OUTPUT)/bench: LDLIBS += -lm @@ -656,6 +661,7 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \ $(OUTPUT)/bench_local_storage.o \ $(OUTPUT)/bench_local_storage_rcu_tasks_trace.o \ $(OUTPUT)/bench_bpf_hashmap_lookup.o \ + $(OUTPUT)/bench_local_storage_create.o \ # $(call msg,BINARY,,$@) $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@ diff --git a/tools/testing/selftests/bpf/autoconf_helper.h b/tools/testing/selftests/bpf/autoconf_helper.h new file mode 100644 index 000000000000..5b243b9cdf8c --- /dev/null +++ b/tools/testing/selftests/bpf/autoconf_helper.h @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#ifdef HAVE_GENHDR +# include "autoconf.h" +#else +# if defined(__i386) || defined(__x86_64) || defined(__s390x__) || defined(__aarch64__) +# define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1 +# endif +#endif diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c index 0b2a53bb8460..d9c080ac1796 100644 --- a/tools/testing/selftests/bpf/bench.c +++ b/tools/testing/selftests/bpf/bench.c @@ -278,6 +278,7 @@ extern struct argp bench_local_storage_argp; extern struct argp bench_local_storage_rcu_tasks_trace_argp; extern struct argp bench_strncmp_argp; extern struct argp bench_hashmap_lookup_argp; +extern struct argp bench_local_storage_create_argp; static const struct argp_child bench_parsers[] = { { &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 }, @@ -288,6 +289,7 @@ static const struct argp_child bench_parsers[] = { { &bench_local_storage_rcu_tasks_trace_argp, 0, "local_storage RCU Tasks Trace slowdown benchmark", 0 }, { &bench_hashmap_lookup_argp, 0, "Hashmap lookup benchmark", 0 }, + { &bench_local_storage_create_argp, 0, "local-storage-create benchmark", 0 }, {}, }; @@ -515,6 +517,7 @@ extern const struct bench bench_local_storage_cache_interleaved_get; extern const struct bench bench_local_storage_cache_hashmap_control; extern const struct bench bench_local_storage_tasks_trace; extern const struct bench bench_bpf_hashmap_lookup; +extern const struct bench bench_local_storage_create; static const struct bench *benchs[] = { &bench_count_global, @@ -555,6 +558,7 @@ static const struct bench *benchs[] = { &bench_local_storage_cache_hashmap_control, &bench_local_storage_tasks_trace, &bench_bpf_hashmap_lookup, + &bench_local_storage_create, }; static void find_benchmark(void) diff --git 
a/tools/testing/selftests/bpf/benchs/bench_local_storage_create.c b/tools/testing/selftests/bpf/benchs/bench_local_storage_create.c new file mode 100644 index 000000000000..cff703f90e95 --- /dev/null +++ b/tools/testing/selftests/bpf/benchs/bench_local_storage_create.c @@ -0,0 +1,264 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ + +#include <sys/types.h> +#include <sys/socket.h> +#include <pthread.h> +#include <argp.h> + +#include "bench.h" +#include "bench_local_storage_create.skel.h" + +struct thread { + int *fds; + pthread_t *pthds; + int *pthd_results; +}; + +static struct bench_local_storage_create *skel; +static struct thread *threads; +static long create_owner_errs; +static int storage_type = BPF_MAP_TYPE_SK_STORAGE; +static int batch_sz = 32; + +enum { + ARG_BATCH_SZ = 9000, + ARG_STORAGE_TYPE = 9001, +}; + +static const struct argp_option opts[] = { + { "batch-size", ARG_BATCH_SZ, "BATCH_SIZE", 0, + "The number of storage creations in each batch" }, + { "storage-type", ARG_STORAGE_TYPE, "STORAGE_TYPE", 0, + "The type of local storage to test (socket or task)" }, + {}, +}; + +static error_t parse_arg(int key, char *arg, struct argp_state *state) +{ + int ret; + + switch (key) { + case ARG_BATCH_SZ: + ret = atoi(arg); + if (ret < 1) { + fprintf(stderr, "invalid batch-size\n"); + argp_usage(state); + } + batch_sz = ret; + break; + case ARG_STORAGE_TYPE: + if (!strcmp(arg, "task")) { + storage_type = BPF_MAP_TYPE_TASK_STORAGE; + } else if (!strcmp(arg, "socket")) { + storage_type = BPF_MAP_TYPE_SK_STORAGE; + } else { + fprintf(stderr, "invalid storage-type (socket or task)\n"); + argp_usage(state); + } + break; + default: + return ARGP_ERR_UNKNOWN; + } + + return 0; +} + +const struct argp bench_local_storage_create_argp = { + .options = opts, + .parser = parse_arg, +}; + +static void validate(void) +{ + if (env.consumer_cnt > 1) { + fprintf(stderr, + "local-storage-create benchmark does not need consumer\n"); + exit(1); + } +} + +static void setup(void) +{ + int i; + + skel = bench_local_storage_create__open_and_load(); + if (!skel) { + fprintf(stderr, "error loading skel\n"); + exit(1); + } + + skel->bss->bench_pid = getpid(); + if (storage_type == BPF_MAP_TYPE_SK_STORAGE) { + if (!bpf_program__attach(skel->progs.socket_post_create)) { + fprintf(stderr, "Error attaching bpf program\n"); + exit(1); + } + } else { + if (!bpf_program__attach(skel->progs.sched_process_fork)) { + fprintf(stderr, "Error attaching bpf program\n"); + exit(1); + } + } + + if (!bpf_program__attach(skel->progs.kmalloc)) { + fprintf(stderr, "Error attaching bpf program\n"); + exit(1); + } + + threads = calloc(env.producer_cnt, sizeof(*threads)); + + if (!threads) { + fprintf(stderr, "cannot alloc thread_res\n"); + exit(1); + } + + for (i = 0; i < env.producer_cnt; i++) { + struct thread *t = &threads[i]; + + if (storage_type == BPF_MAP_TYPE_SK_STORAGE) { + t->fds = malloc(batch_sz * sizeof(*t->fds)); + if (!t->fds) { + fprintf(stderr, "cannot alloc t->fds\n"); + exit(1); + } + } else { + t->pthds = malloc(batch_sz * sizeof(*t->pthds)); + if (!t->pthds) { + fprintf(stderr, "cannot alloc t->pthds\n"); + exit(1); + } + t->pthd_results = malloc(batch_sz * sizeof(*t->pthd_results)); + if (!t->pthd_results) { + fprintf(stderr, "cannot alloc t->pthd_results\n"); + exit(1); + } + } + } +} + +static void measure(struct bench_res *res) +{ + res->hits = atomic_swap(&skel->bss->create_cnts, 0); + res->drops = atomic_swap(&skel->bss->kmalloc_cnts, 0); +} + +static 
void *consumer(void *input) +{ + return NULL; +} + +static void *sk_producer(void *input) +{ + struct thread *t = &threads[(long)(input)]; + int *fds = t->fds; + int i; + + while (true) { + for (i = 0; i < batch_sz; i++) { + fds[i] = socket(AF_INET6, SOCK_DGRAM, 0); + if (fds[i] == -1) + atomic_inc(&create_owner_errs); + } + + for (i = 0; i < batch_sz; i++) { + if (fds[i] != -1) + close(fds[i]); + } + } + + return NULL; +} + +static void *thread_func(void *arg) +{ + return NULL; +} + +static void *task_producer(void *input) +{ + struct thread *t = &threads[(long)(input)]; + pthread_t *pthds = t->pthds; + int *pthd_results = t->pthd_results; + int i; + + while (true) { + for (i = 0; i < batch_sz; i++) { + pthd_results[i] = pthread_create(&pthds[i], NULL, thread_func, NULL); + if (pthd_results[i]) + atomic_inc(&create_owner_errs); + } + + for (i = 0; i < batch_sz; i++) { + if (!pthd_results[i]) + pthread_join(pthds[i], NULL);; + } + } + + return NULL; +} + +static void *producer(void *input) +{ + if (storage_type == BPF_MAP_TYPE_SK_STORAGE) + return sk_producer(input); + else + return task_producer(input); +} + +static void report_progress(int iter, struct bench_res *res, long delta_ns) +{ + double creates_per_sec, kmallocs_per_create; + + creates_per_sec = res->hits / 1000.0 / (delta_ns / 1000000000.0); + kmallocs_per_create = (double)res->drops / res->hits; + + printf("Iter %3d (%7.3lfus): ", + iter, (delta_ns - 1000000000) / 1000.0); + printf("creates %8.3lfk/s (%7.3lfk/prod), ", + creates_per_sec, creates_per_sec / env.producer_cnt); + printf("%3.2lf kmallocs/create\n", kmallocs_per_create); +} + +static void report_final(struct bench_res res[], int res_cnt) +{ + double creates_mean = 0.0, creates_stddev = 0.0; + long total_creates = 0, total_kmallocs = 0; + int i; + + for (i = 0; i < res_cnt; i++) { + creates_mean += res[i].hits / 1000.0 / (0.0 + res_cnt); + total_creates += res[i].hits; + total_kmallocs += res[i].drops; + } + + if (res_cnt > 1) { + for (i = 0; i < res_cnt; i++) + creates_stddev += (creates_mean - res[i].hits / 1000.0) * + (creates_mean - res[i].hits / 1000.0) / + (res_cnt - 1.0); + creates_stddev = sqrt(creates_stddev); + } + printf("Summary: creates %8.3lf \u00B1 %5.3lfk/s (%7.3lfk/prod), ", + creates_mean, creates_stddev, creates_mean / env.producer_cnt); + printf("%4.2lf kmallocs/create\n", (double)total_kmallocs / total_creates); + if (create_owner_errs || skel->bss->create_errs) + printf("%s() errors %ld create_errs %ld\n", + storage_type == BPF_MAP_TYPE_SK_STORAGE ? 
+ "socket" : "pthread_create", + create_owner_errs, + skel->bss->create_errs); +} + +/* Benchmark performance of creating bpf local storage */ +const struct bench bench_local_storage_create = { + .name = "local-storage-create", + .argp = &bench_local_storage_create_argp, + .validate = validate, + .setup = setup, + .producer_thread = producer, + .consumer_thread = consumer, + .measure = measure, + .report_progress = report_progress, + .report_final = report_final, +}; diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c index 46500636d8cd..fe847ebfb731 100644 --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c @@ -28,6 +28,10 @@ struct bpf_testmod_struct_arg_2 { long b; }; +__diag_push(); +__diag_ignore_all("-Wmissing-prototypes", + "Global functions as their definitions will be in bpf_testmod.ko BTF"); + noinline int bpf_testmod_test_struct_arg_1(struct bpf_testmod_struct_arg_2 a, int b, int c) { bpf_testmod_test_struct_arg_result = a.a + a.b + b + c; @@ -65,6 +69,34 @@ bpf_testmod_test_mod_kfunc(int i) *(int *)this_cpu_ptr(&bpf_testmod_ksym_percpu) = i; } +__bpf_kfunc int bpf_iter_testmod_seq_new(struct bpf_iter_testmod_seq *it, s64 value, int cnt) +{ + if (cnt < 0) { + it->cnt = 0; + return -EINVAL; + } + + it->value = value; + it->cnt = cnt; + + return 0; +} + +__bpf_kfunc s64 *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq* it) +{ + if (it->cnt <= 0) + return NULL; + + it->cnt--; + + return &it->value; +} + +__bpf_kfunc void bpf_iter_testmod_seq_destroy(struct bpf_iter_testmod_seq *it) +{ + it->cnt = 0; +} + struct bpf_testmod_btf_type_tag_1 { int a; }; @@ -102,7 +134,11 @@ bpf_testmod_test_btf_type_tag_percpu_2(struct bpf_testmod_btf_type_tag_3 *arg) { noinline int bpf_testmod_loop_test(int n) { - int i, sum = 0; + /* Make sum volatile, so smart compilers, such as clang, will not + * optimize the code by removing the loop. + */ + volatile int sum = 0; + int i; /* the primary goal of this test is to test LBR. Create a lot of * branches in the function, so we can catch it easily. 
@@ -143,6 +179,8 @@ noinline int bpf_testmod_fentry_test3(char a, int b, u64 c) return a + b + c; } +__diag_pop(); + int bpf_testmod_fentry_ok; noinline ssize_t @@ -220,6 +258,17 @@ static struct bin_attribute bin_attr_bpf_testmod_file __ro_after_init = { .write = bpf_testmod_test_write, }; +BTF_SET8_START(bpf_testmod_common_kfunc_ids) +BTF_ID_FLAGS(func, bpf_iter_testmod_seq_new, KF_ITER_NEW) +BTF_ID_FLAGS(func, bpf_iter_testmod_seq_next, KF_ITER_NEXT | KF_RET_NULL) +BTF_ID_FLAGS(func, bpf_iter_testmod_seq_destroy, KF_ITER_DESTROY) +BTF_SET8_END(bpf_testmod_common_kfunc_ids) + +static const struct btf_kfunc_id_set bpf_testmod_common_kfunc_set = { + .owner = THIS_MODULE, + .set = &bpf_testmod_common_kfunc_ids, +}; + BTF_SET8_START(bpf_testmod_check_kfunc_ids) BTF_ID_FLAGS(func, bpf_testmod_test_mod_kfunc) BTF_SET8_END(bpf_testmod_check_kfunc_ids) @@ -229,13 +278,20 @@ static const struct btf_kfunc_id_set bpf_testmod_kfunc_set = { .set = &bpf_testmod_check_kfunc_ids, }; +noinline int bpf_fentry_shadow_test(int a) +{ + return a + 2; +} +EXPORT_SYMBOL_GPL(bpf_fentry_shadow_test); + extern int bpf_fentry_test1(int a); static int bpf_testmod_init(void) { int ret; - ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_testmod_kfunc_set); + ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, &bpf_testmod_common_kfunc_set); + ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &bpf_testmod_kfunc_set); if (ret < 0) return ret; if (bpf_fentry_test1(0) < 0) diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h index 0d71e2607832..f32793efe095 100644 --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.h @@ -22,4 +22,10 @@ struct bpf_testmod_test_writable_ctx { int val; }; +/* BPF iter that returns *value* *n* times in a row */ +struct bpf_iter_testmod_seq { + s64 value; + int cnt; +}; + #endif /* _BPF_TESTMOD_H */ diff --git a/tools/testing/selftests/bpf/get_cgroup_id_user.c b/tools/testing/selftests/bpf/get_cgroup_id_user.c index 156743cf5870..aefd83ebdcd7 100644 --- a/tools/testing/selftests/bpf/get_cgroup_id_user.c +++ b/tools/testing/selftests/bpf/get_cgroup_id_user.c @@ -86,8 +86,13 @@ int main(int argc, char **argv) pid = getpid(); bpf_map_update_elem(pidmap_fd, &key, &pid, 0); - snprintf(buf, sizeof(buf), - "/sys/kernel/debug/tracing/events/%s/id", probe_name); + if (access("/sys/kernel/tracing/trace", F_OK) == 0) { + snprintf(buf, sizeof(buf), + "/sys/kernel/tracing/events/%s/id", probe_name); + } else { + snprintf(buf, sizeof(buf), + "/sys/kernel/debug/tracing/events/%s/id", probe_name); + } efd = open(buf, O_RDONLY, 0); if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno)) goto close_prog; diff --git a/tools/testing/selftests/bpf/json_writer.c b/tools/testing/selftests/bpf/json_writer.c new file mode 120000 index 000000000000..5effa31e2f39 --- /dev/null +++ b/tools/testing/selftests/bpf/json_writer.c @@ -0,0 +1 @@ +../../../bpf/bpftool/json_writer.c
\ No newline at end of file diff --git a/tools/testing/selftests/bpf/json_writer.h b/tools/testing/selftests/bpf/json_writer.h new file mode 120000 index 000000000000..e0a264c26752 --- /dev/null +++ b/tools/testing/selftests/bpf/json_writer.h @@ -0,0 +1 @@ +../../../bpf/bpftool/json_writer.h
\ No newline at end of file diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c index 01de33191226..596caa176582 100644 --- a/tools/testing/selftests/bpf/network_helpers.c +++ b/tools/testing/selftests/bpf/network_helpers.c @@ -95,7 +95,7 @@ static int __start_server(int type, int protocol, const struct sockaddr *addr, if (reuseport && setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on))) { log_err("Failed to set SO_REUSEPORT"); - return -1; + goto error_close; } if (bind(fd, addr, addrlen) < 0) { diff --git a/tools/testing/selftests/bpf/prog_tests/align.c b/tools/testing/selftests/bpf/prog_tests/align.c index c94fa8d6c4f6..b92770592563 100644 --- a/tools/testing/selftests/bpf/prog_tests/align.c +++ b/tools/testing/selftests/bpf/prog_tests/align.c @@ -575,14 +575,14 @@ static struct bpf_align_test tests[] = { /* New unknown value in R7 is (4n), >= 76 */ {14, "R7_w=scalar(umin=76,umax=1096,var_off=(0x0; 0x7fc))"}, /* Adding it to packet pointer gives nice bounds again */ - {16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"}, + {16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0x7fc)"}, /* At the time the word size load is performed from R5, * its total fixed offset is NET_IP_ALIGN + reg->off (0) * which is 2. Then the variable offset is (4n+2), so * the total offset is 4-byte aligned and meets the * load's requirements. */ - {20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0xfffffffc)"}, + {20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0x7fc)"}, }, }, }; diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c b/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c index e980188d4124..a53c254c6058 100644 --- a/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c +++ b/tools/testing/selftests/bpf/prog_tests/bpf_tcp_ca.c @@ -8,6 +8,7 @@ #include "bpf_dctcp.skel.h" #include "bpf_cubic.skel.h" #include "bpf_tcp_nogpl.skel.h" +#include "tcp_ca_update.skel.h" #include "bpf_dctcp_release.skel.h" #include "tcp_ca_write_sk_pacing.skel.h" #include "tcp_ca_incompl_cong_ops.skel.h" @@ -381,6 +382,155 @@ static void test_unsupp_cong_op(void) libbpf_set_print(old_print_fn); } +static void test_update_ca(void) +{ + struct tcp_ca_update *skel; + struct bpf_link *link; + int saved_ca1_cnt; + int err; + + skel = tcp_ca_update__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + link = bpf_map__attach_struct_ops(skel->maps.ca_update_1); + ASSERT_OK_PTR(link, "attach_struct_ops"); + + do_test("tcp_ca_update", NULL); + saved_ca1_cnt = skel->bss->ca1_cnt; + ASSERT_GT(saved_ca1_cnt, 0, "ca1_ca1_cnt"); + + err = bpf_link__update_map(link, skel->maps.ca_update_2); + ASSERT_OK(err, "update_map"); + + do_test("tcp_ca_update", NULL); + ASSERT_EQ(skel->bss->ca1_cnt, saved_ca1_cnt, "ca2_ca1_cnt"); + ASSERT_GT(skel->bss->ca2_cnt, 0, "ca2_ca2_cnt"); + + bpf_link__destroy(link); + tcp_ca_update__destroy(skel); +} + +static void test_update_wrong(void) +{ + struct tcp_ca_update *skel; + struct bpf_link *link; + int saved_ca1_cnt; + int err; + + skel = tcp_ca_update__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + link = bpf_map__attach_struct_ops(skel->maps.ca_update_1); + ASSERT_OK_PTR(link, "attach_struct_ops"); + + do_test("tcp_ca_update", NULL); + saved_ca1_cnt = skel->bss->ca1_cnt; + ASSERT_GT(saved_ca1_cnt, 0, "ca1_ca1_cnt"); + + err = bpf_link__update_map(link, skel->maps.ca_wrong); + ASSERT_ERR(err, "update_map"); + + do_test("tcp_ca_update", NULL); + 
ASSERT_GT(skel->bss->ca1_cnt, saved_ca1_cnt, "ca2_ca1_cnt"); + + bpf_link__destroy(link); + tcp_ca_update__destroy(skel); +} + +static void test_mixed_links(void) +{ + struct tcp_ca_update *skel; + struct bpf_link *link, *link_nl; + int err; + + skel = tcp_ca_update__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + link_nl = bpf_map__attach_struct_ops(skel->maps.ca_no_link); + ASSERT_OK_PTR(link_nl, "attach_struct_ops_nl"); + + link = bpf_map__attach_struct_ops(skel->maps.ca_update_1); + ASSERT_OK_PTR(link, "attach_struct_ops"); + + do_test("tcp_ca_update", NULL); + ASSERT_GT(skel->bss->ca1_cnt, 0, "ca1_ca1_cnt"); + + err = bpf_link__update_map(link, skel->maps.ca_no_link); + ASSERT_ERR(err, "update_map"); + + bpf_link__destroy(link); + bpf_link__destroy(link_nl); + tcp_ca_update__destroy(skel); +} + +static void test_multi_links(void) +{ + struct tcp_ca_update *skel; + struct bpf_link *link; + + skel = tcp_ca_update__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + link = bpf_map__attach_struct_ops(skel->maps.ca_update_1); + ASSERT_OK_PTR(link, "attach_struct_ops_1st"); + bpf_link__destroy(link); + + /* A map should be able to be used to create links multiple + * times. + */ + link = bpf_map__attach_struct_ops(skel->maps.ca_update_1); + ASSERT_OK_PTR(link, "attach_struct_ops_2nd"); + bpf_link__destroy(link); + + tcp_ca_update__destroy(skel); +} + +static void test_link_replace(void) +{ + DECLARE_LIBBPF_OPTS(bpf_link_update_opts, opts); + struct tcp_ca_update *skel; + struct bpf_link *link; + int err; + + skel = tcp_ca_update__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + link = bpf_map__attach_struct_ops(skel->maps.ca_update_1); + ASSERT_OK_PTR(link, "attach_struct_ops_1st"); + bpf_link__destroy(link); + + link = bpf_map__attach_struct_ops(skel->maps.ca_update_2); + ASSERT_OK_PTR(link, "attach_struct_ops_2nd"); + + /* BPF_F_REPLACE with a wrong old map Fd. It should fail! + * + * With BPF_F_REPLACE, the link should be updated only if the + * old map fd given here matches the map backing the link. + */ + opts.old_map_fd = bpf_map__fd(skel->maps.ca_update_1); + opts.flags = BPF_F_REPLACE; + err = bpf_link_update(bpf_link__fd(link), + bpf_map__fd(skel->maps.ca_update_1), + &opts); + ASSERT_ERR(err, "bpf_link_update_fail"); + + /* BPF_F_REPLACE with a correct old map Fd. It should success! 
*/ + opts.old_map_fd = bpf_map__fd(skel->maps.ca_update_2); + err = bpf_link_update(bpf_link__fd(link), + bpf_map__fd(skel->maps.ca_update_1), + &opts); + ASSERT_OK(err, "bpf_link_update_success"); + + bpf_link__destroy(link); + + tcp_ca_update__destroy(skel); +} + void test_bpf_tcp_ca(void) { if (test__start_subtest("dctcp")) @@ -399,4 +549,14 @@ void test_bpf_tcp_ca(void) test_incompl_cong_ops(); if (test__start_subtest("unsupp_cong_op")) test_unsupp_cong_op(); + if (test__start_subtest("update_ca")) + test_update_ca(); + if (test__start_subtest("update_wrong")) + test_update_wrong(); + if (test__start_subtest("mixed_links")) + test_mixed_links(); + if (test__start_subtest("multi_links")) + test_multi_links(); + if (test__start_subtest("link_replace")) + test_link_replace(); } diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c b/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c index 5ca252823294..731c343897d8 100644 --- a/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c +++ b/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c @@ -144,6 +144,12 @@ void test_verif_scale_pyperf600_nounroll() scale_test("pyperf600_nounroll.bpf.o", BPF_PROG_TYPE_RAW_TRACEPOINT, false); } +void test_verif_scale_pyperf600_iter() +{ + /* open-coded BPF iterator version */ + scale_test("pyperf600_iter.bpf.o", BPF_PROG_TYPE_RAW_TRACEPOINT, false); +} + void test_verif_scale_loop1() { scale_test("loop1.bpf.o", BPF_PROG_TYPE_RAW_TRACEPOINT, false); diff --git a/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c b/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c index 621c57222191..63ee892bc757 100644 --- a/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c +++ b/tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c @@ -56,8 +56,9 @@ static bool assert_storage_noexist(struct bpf_map *map, const void *key) static bool connect_send(const char *cgroup_path) { - bool res = true; int server_fd = -1, client_fd = -1; + char message[] = "message"; + bool res = true; if (join_cgroup(cgroup_path)) goto out_clean; @@ -70,7 +71,10 @@ static bool connect_send(const char *cgroup_path) if (client_fd < 0) goto out_clean; - if (send(client_fd, "message", strlen("message"), 0) < 0) + if (send(client_fd, &message, sizeof(message), 0) < 0) + goto out_clean; + + if (read(server_fd, &message, sizeof(message)) < 0) goto out_clean; res = false; diff --git a/tools/testing/selftests/bpf/prog_tests/cpumask.c b/tools/testing/selftests/bpf/prog_tests/cpumask.c index 5fbe457c4ebe..cdf4acc18e4c 100644 --- a/tools/testing/selftests/bpf/prog_tests/cpumask.c +++ b/tools/testing/selftests/bpf/prog_tests/cpumask.c @@ -16,7 +16,7 @@ static const char * const cpumask_success_testcases[] = { "test_copy_any_anyand", "test_insert_leave", "test_insert_remove_release", - "test_insert_kptr_get_release", + "test_global_mask_rcu", }; static void verify_success(const char *prog_name) diff --git a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c index d5fe3d4b936c..4951aa978f33 100644 --- a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c +++ b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c @@ -68,17 +68,17 @@ static struct test_case test_cases[] = { #if defined(__x86_64__) || defined(__aarch64__) { N(SCHED_CLS, struct __sk_buff, tstamp), - .read = "r11 = *(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset);" - "w11 &= 160;" - "if w11 != 0xa0 goto pc+2;" + .read = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);" + "w11 &= 3;" + 
"if w11 != 0x3 goto pc+2;" "$dst = 0;" "goto pc+1;" "$dst = *(u64 *)($ctx + sk_buff::tstamp);", - .write = "r11 = *(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset);" - "if w11 & 0x80 goto pc+1;" + .write = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);" + "if w11 & 0x2 goto pc+1;" "goto pc+2;" - "w11 &= -33;" - "*(u8 *)($ctx + sk_buff::__pkt_vlan_present_offset) = r11;" + "w11 &= -2;" + "*(u8 *)($ctx + sk_buff::__mono_tc_offset) = r11;" "*(u64 *)($ctx + sk_buff::tstamp) = $src;", }, #endif diff --git a/tools/testing/selftests/bpf/prog_tests/fib_lookup.c b/tools/testing/selftests/bpf/prog_tests/fib_lookup.c index 429393caf612..a1e712105811 100644 --- a/tools/testing/selftests/bpf/prog_tests/fib_lookup.c +++ b/tools/testing/selftests/bpf/prog_tests/fib_lookup.c @@ -54,11 +54,19 @@ static int setup_netns(void) SYS(fail, "ip link add veth1 type veth peer name veth2"); SYS(fail, "ip link set dev veth1 up"); + err = write_sysctl("/proc/sys/net/ipv4/neigh/veth1/gc_stale_time", "900"); + if (!ASSERT_OK(err, "write_sysctl(net.ipv4.neigh.veth1.gc_stale_time)")) + goto fail; + + err = write_sysctl("/proc/sys/net/ipv6/neigh/veth1/gc_stale_time", "900"); + if (!ASSERT_OK(err, "write_sysctl(net.ipv6.neigh.veth1.gc_stale_time)")) + goto fail; + SYS(fail, "ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR); SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV6_NUD_FAILED_ADDR); SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV6_NUD_STALE_ADDR, DMAC); - SYS(fail, "ip addr add %s/24 dev veth1 nodad", IPV4_IFACE_ADDR); + SYS(fail, "ip addr add %s/24 dev veth1", IPV4_IFACE_ADDR); SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR); SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC); @@ -158,7 +166,7 @@ void test_fib_lookup(void) if (!ASSERT_OK(err, "bpf_prog_test_run_opts")) continue; - ASSERT_EQ(tests[i].expected_ret, skel->bss->fib_lookup_ret, + ASSERT_EQ(skel->bss->fib_lookup_ret, tests[i].expected_ret, "fib_lookup_ret"); ret = memcmp(tests[i].dmac, fib_params->dmac, sizeof(tests[i].dmac)); diff --git a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c index 3948da12a528..0394a1156d99 100644 --- a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c +++ b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c @@ -37,8 +37,8 @@ static int create_perf_events(void) /* create perf event */ attr.size = sizeof(attr); - attr.type = PERF_TYPE_RAW; - attr.config = 0x1b00; + attr.type = PERF_TYPE_HARDWARE; + attr.config = PERF_COUNT_HW_CPU_CYCLES; attr.sample_type = PERF_SAMPLE_BRANCH_STACK; attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL | PERF_SAMPLE_BRANCH_USER | PERF_SAMPLE_BRANCH_ANY; diff --git a/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c index 5308de1ed478..2715c68301f5 100644 --- a/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c +++ b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c @@ -65,6 +65,7 @@ void test_get_stackid_cannot_attach(void) skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu, pmu_fd); ASSERT_OK_PTR(skel->links.oncpu, "attach_perf_event_callchain"); + bpf_link__destroy(skel->links.oncpu); close(pmu_fd); /* add exclude_callchain_kernel, attach should fail */ diff --git a/tools/testing/selftests/bpf/prog_tests/iters.c b/tools/testing/selftests/bpf/prog_tests/iters.c new file mode 
100644 index 000000000000..10804ae5ae97 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/iters.c @@ -0,0 +1,106 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ + +#include <test_progs.h> + +#include "iters.skel.h" +#include "iters_state_safety.skel.h" +#include "iters_looping.skel.h" +#include "iters_num.skel.h" +#include "iters_testmod_seq.skel.h" + +static void subtest_num_iters(void) +{ + struct iters_num *skel; + int err; + + skel = iters_num__open_and_load(); + if (!ASSERT_OK_PTR(skel, "skel_open_and_load")) + return; + + err = iters_num__attach(skel); + if (!ASSERT_OK(err, "skel_attach")) + goto cleanup; + + usleep(1); + iters_num__detach(skel); + +#define VALIDATE_CASE(case_name) \ + ASSERT_EQ(skel->bss->res_##case_name, \ + skel->rodata->exp_##case_name, \ + #case_name) + + VALIDATE_CASE(empty_zero); + VALIDATE_CASE(empty_int_min); + VALIDATE_CASE(empty_int_max); + VALIDATE_CASE(empty_minus_one); + + VALIDATE_CASE(simple_sum); + VALIDATE_CASE(neg_sum); + VALIDATE_CASE(very_neg_sum); + VALIDATE_CASE(neg_pos_sum); + + VALIDATE_CASE(invalid_range); + VALIDATE_CASE(max_range); + VALIDATE_CASE(e2big_range); + + VALIDATE_CASE(succ_elem_cnt); + VALIDATE_CASE(overfetched_elem_cnt); + VALIDATE_CASE(fail_elem_cnt); + +#undef VALIDATE_CASE + +cleanup: + iters_num__destroy(skel); +} + +static void subtest_testmod_seq_iters(void) +{ + struct iters_testmod_seq *skel; + int err; + + if (!env.has_testmod) { + test__skip(); + return; + } + + skel = iters_testmod_seq__open_and_load(); + if (!ASSERT_OK_PTR(skel, "skel_open_and_load")) + return; + + err = iters_testmod_seq__attach(skel); + if (!ASSERT_OK(err, "skel_attach")) + goto cleanup; + + usleep(1); + iters_testmod_seq__detach(skel); + +#define VALIDATE_CASE(case_name) \ + ASSERT_EQ(skel->bss->res_##case_name, \ + skel->rodata->exp_##case_name, \ + #case_name) + + VALIDATE_CASE(empty); + VALIDATE_CASE(full); + VALIDATE_CASE(truncated); + +#undef VALIDATE_CASE + +cleanup: + iters_testmod_seq__destroy(skel); +} + +void test_iters(void) +{ + RUN_TESTS(iters_state_safety); + RUN_TESTS(iters_looping); + RUN_TESTS(iters); + + if (env.has_testmod) + RUN_TESTS(iters_testmod_seq); + + if (test__start_subtest("num")) + subtest_num_iters(); + if (test__start_subtest("testmod_seq")) + subtest_testmod_seq_iters(); +} diff --git a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c index 113dba349a57..2173c4bb555e 100644 --- a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c +++ b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c @@ -338,7 +338,12 @@ static int get_syms(char ***symsp, size_t *cntp, bool kernel) * Filtering out duplicates by using hashmap__add, which won't * add existing entry. 
*/ - f = fopen("/sys/kernel/debug/tracing/available_filter_functions", "r"); + + if (access("/sys/kernel/tracing/trace", F_OK) == 0) + f = fopen("/sys/kernel/tracing/available_filter_functions", "r"); + else + f = fopen("/sys/kernel/debug/tracing/available_filter_functions", "r"); + if (!f) return -EINVAL; @@ -376,8 +381,10 @@ static int get_syms(char ***symsp, size_t *cntp, bool kernel) continue; err = hashmap__add(map, name, 0); - if (err == -EEXIST) + if (err == -EEXIST) { + err = 0; continue; + } if (err) goto error; diff --git a/tools/testing/selftests/bpf/prog_tests/local_kptr_stash.c b/tools/testing/selftests/bpf/prog_tests/local_kptr_stash.c new file mode 100644 index 000000000000..76f1da877f81 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/local_kptr_stash.c @@ -0,0 +1,60 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ + +#include <test_progs.h> +#include <network_helpers.h> + +#include "local_kptr_stash.skel.h" +static void test_local_kptr_stash_simple(void) +{ + LIBBPF_OPTS(bpf_test_run_opts, opts, + .data_in = &pkt_v4, + .data_size_in = sizeof(pkt_v4), + .repeat = 1, + ); + struct local_kptr_stash *skel; + int ret; + + skel = local_kptr_stash__open_and_load(); + if (!ASSERT_OK_PTR(skel, "local_kptr_stash__open_and_load")) + return; + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.stash_rb_nodes), &opts); + ASSERT_OK(ret, "local_kptr_stash_add_nodes run"); + ASSERT_OK(opts.retval, "local_kptr_stash_add_nodes retval"); + + local_kptr_stash__destroy(skel); +} + +static void test_local_kptr_stash_unstash(void) +{ + LIBBPF_OPTS(bpf_test_run_opts, opts, + .data_in = &pkt_v4, + .data_size_in = sizeof(pkt_v4), + .repeat = 1, + ); + struct local_kptr_stash *skel; + int ret; + + skel = local_kptr_stash__open_and_load(); + if (!ASSERT_OK_PTR(skel, "local_kptr_stash__open_and_load")) + return; + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.stash_rb_nodes), &opts); + ASSERT_OK(ret, "local_kptr_stash_add_nodes run"); + ASSERT_OK(opts.retval, "local_kptr_stash_add_nodes retval"); + + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.unstash_rb_node), &opts); + ASSERT_OK(ret, "local_kptr_stash_add_nodes run"); + ASSERT_EQ(opts.retval, 42, "local_kptr_stash_add_nodes retval"); + + local_kptr_stash__destroy(skel); +} + +void test_local_kptr_stash_success(void) +{ + if (test__start_subtest("local_kptr_stash_simple")) + test_local_kptr_stash_simple(); + if (test__start_subtest("local_kptr_stash_unstash")) + test_local_kptr_stash_unstash(); +} diff --git a/tools/testing/selftests/bpf/prog_tests/log_fixup.c b/tools/testing/selftests/bpf/prog_tests/log_fixup.c index 239e1c5753b0..bc27170bdeb0 100644 --- a/tools/testing/selftests/bpf/prog_tests/log_fixup.c +++ b/tools/testing/selftests/bpf/prog_tests/log_fixup.c @@ -24,6 +24,7 @@ static void bad_core_relo(size_t log_buf_size, enum trunc_type trunc_type) bpf_program__set_autoload(skel->progs.bad_relo, true); memset(log_buf, 0, sizeof(log_buf)); bpf_program__set_log_buf(skel->progs.bad_relo, log_buf, log_buf_size ?: sizeof(log_buf)); + bpf_program__set_log_level(skel->progs.bad_relo, 1 | 8); /* BPF_LOG_FIXED to force truncation */ err = test_log_fixup__load(skel); if (!ASSERT_ERR(err, "load_fail")) diff --git a/tools/testing/selftests/bpf/prog_tests/map_ops.c b/tools/testing/selftests/bpf/prog_tests/map_ops.c new file mode 100644 index 000000000000..be5e42a413b4 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/map_ops.c @@ -0,0 +1,162 @@ +// 
SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ + +#include <errno.h> +#include <sys/syscall.h> +#include <unistd.h> + +#include "test_map_ops.skel.h" +#include "test_progs.h" + +static void map_update(void) +{ + (void)syscall(__NR_getpid); +} + +static void map_delete(void) +{ + (void)syscall(__NR_getppid); +} + +static void map_push(void) +{ + (void)syscall(__NR_getuid); +} + +static void map_pop(void) +{ + (void)syscall(__NR_geteuid); +} + +static void map_peek(void) +{ + (void)syscall(__NR_getgid); +} + +static void map_for_each_pass(void) +{ + (void)syscall(__NR_gettid); +} + +static void map_for_each_fail(void) +{ + (void)syscall(__NR_getpgid); +} + +static int setup(struct test_map_ops **skel) +{ + int err = 0; + + if (!skel) + return -1; + + *skel = test_map_ops__open(); + if (!ASSERT_OK_PTR(*skel, "test_map_ops__open")) + return -1; + + (*skel)->rodata->pid = getpid(); + + err = test_map_ops__load(*skel); + if (!ASSERT_OK(err, "test_map_ops__load")) + return err; + + err = test_map_ops__attach(*skel); + if (!ASSERT_OK(err, "test_map_ops__attach")) + return err; + + return err; +} + +static void teardown(struct test_map_ops **skel) +{ + if (skel && *skel) + test_map_ops__destroy(*skel); +} + +static void map_ops_update_delete_subtest(void) +{ + struct test_map_ops *skel; + + if (setup(&skel)) + goto teardown; + + map_update(); + ASSERT_OK(skel->bss->err, "map_update_initial"); + + map_update(); + ASSERT_LT(skel->bss->err, 0, "map_update_existing"); + ASSERT_EQ(skel->bss->err, -EEXIST, "map_update_existing"); + + map_delete(); + ASSERT_OK(skel->bss->err, "map_delete_existing"); + + map_delete(); + ASSERT_LT(skel->bss->err, 0, "map_delete_non_existing"); + ASSERT_EQ(skel->bss->err, -ENOENT, "map_delete_non_existing"); + +teardown: + teardown(&skel); +} + +static void map_ops_push_peek_pop_subtest(void) +{ + struct test_map_ops *skel; + + if (setup(&skel)) + goto teardown; + + map_push(); + ASSERT_OK(skel->bss->err, "map_push_initial"); + + map_push(); + ASSERT_LT(skel->bss->err, 0, "map_push_when_full"); + ASSERT_EQ(skel->bss->err, -E2BIG, "map_push_when_full"); + + map_peek(); + ASSERT_OK(skel->bss->err, "map_peek"); + + map_pop(); + ASSERT_OK(skel->bss->err, "map_pop"); + + map_peek(); + ASSERT_LT(skel->bss->err, 0, "map_peek_when_empty"); + ASSERT_EQ(skel->bss->err, -ENOENT, "map_peek_when_empty"); + + map_pop(); + ASSERT_LT(skel->bss->err, 0, "map_pop_when_empty"); + ASSERT_EQ(skel->bss->err, -ENOENT, "map_pop_when_empty"); + +teardown: + teardown(&skel); +} + +static void map_ops_for_each_subtest(void) +{ + struct test_map_ops *skel; + + if (setup(&skel)) + goto teardown; + + map_for_each_pass(); + /* expect to iterate over 1 element */ + ASSERT_EQ(skel->bss->err, 1, "map_for_each_no_flags"); + + map_for_each_fail(); + ASSERT_LT(skel->bss->err, 0, "map_for_each_with_flags"); + ASSERT_EQ(skel->bss->err, -EINVAL, "map_for_each_with_flags"); + +teardown: + teardown(&skel); +} + +void test_map_ops(void) +{ + if (test__start_subtest("map_ops_update_delete")) + map_ops_update_delete_subtest(); + + if (test__start_subtest("map_ops_push_peek_pop")) + map_ops_push_peek_pop_subtest(); + + if (test__start_subtest("map_ops_for_each")) + map_ops_for_each_subtest(); +} diff --git a/tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c b/tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c new file mode 100644 index 000000000000..c7636e18b1eb --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c @@ 
-0,0 +1,128 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022 Red Hat */ +#include <test_progs.h> +#include <bpf/btf.h> +#include "bpf/libbpf_internal.h" +#include "cgroup_helpers.h" + +static const char *module_name = "bpf_testmod"; +static const char *symbol_name = "bpf_fentry_shadow_test"; + +static int get_bpf_testmod_btf_fd(void) +{ + struct bpf_btf_info info; + char name[64]; + __u32 id = 0, len; + int err, fd; + + while (true) { + err = bpf_btf_get_next_id(id, &id); + if (err) { + log_err("failed to iterate BTF objects"); + return err; + } + + fd = bpf_btf_get_fd_by_id(id); + if (fd < 0) { + if (errno == ENOENT) + continue; /* expected race: BTF was unloaded */ + err = -errno; + log_err("failed to get FD for BTF object #%d", id); + return err; + } + + len = sizeof(info); + memset(&info, 0, sizeof(info)); + info.name = ptr_to_u64(name); + info.name_len = sizeof(name); + + err = bpf_obj_get_info_by_fd(fd, &info, &len); + if (err) { + err = -errno; + log_err("failed to get info for BTF object #%d", id); + close(fd); + return err; + } + + if (strcmp(name, module_name) == 0) + return fd; + + close(fd); + } + return -ENOENT; +} + +void test_module_fentry_shadow(void) +{ + struct btf *vmlinux_btf = NULL, *mod_btf = NULL; + int err, i; + int btf_fd[2] = {}; + int prog_fd[2] = {}; + int link_fd[2] = {}; + __s32 btf_id[2] = {}; + + LIBBPF_OPTS(bpf_prog_load_opts, load_opts, + .expected_attach_type = BPF_TRACE_FENTRY, + ); + + const struct bpf_insn trace_program[] = { + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }; + + vmlinux_btf = btf__load_vmlinux_btf(); + if (!ASSERT_OK_PTR(vmlinux_btf, "load_vmlinux_btf")) + return; + + btf_fd[1] = get_bpf_testmod_btf_fd(); + if (!ASSERT_GE(btf_fd[1], 0, "get_bpf_testmod_btf_fd")) + goto out; + + mod_btf = btf_get_from_fd(btf_fd[1], vmlinux_btf); + if (!ASSERT_OK_PTR(mod_btf, "btf_get_from_fd")) + goto out; + + btf_id[0] = btf__find_by_name_kind(vmlinux_btf, symbol_name, BTF_KIND_FUNC); + if (!ASSERT_GT(btf_id[0], 0, "btf_find_by_name")) + goto out; + + btf_id[1] = btf__find_by_name_kind(mod_btf, symbol_name, BTF_KIND_FUNC); + if (!ASSERT_GT(btf_id[1], 0, "btf_find_by_name")) + goto out; + + for (i = 0; i < 2; i++) { + load_opts.attach_btf_id = btf_id[i]; + load_opts.attach_btf_obj_fd = btf_fd[i]; + prog_fd[i] = bpf_prog_load(BPF_PROG_TYPE_TRACING, NULL, "GPL", + trace_program, + sizeof(trace_program) / sizeof(struct bpf_insn), + &load_opts); + if (!ASSERT_GE(prog_fd[i], 0, "bpf_prog_load")) + goto out; + + /* If the verifier incorrectly resolves addresses of the + * shadowed functions and uses the same address for both the + * vmlinux and the bpf_testmod functions, this will fail on + * attempting to create two trampolines for the same address, + * which is forbidden. 
+ */ + link_fd[i] = bpf_link_create(prog_fd[i], 0, BPF_TRACE_FENTRY, NULL); + if (!ASSERT_GE(link_fd[i], 0, "bpf_link_create")) + goto out; + } + + err = bpf_prog_test_run_opts(prog_fd[0], NULL); + ASSERT_OK(err, "running test"); + +out: + btf__free(vmlinux_btf); + btf__free(mod_btf); + for (i = 0; i < 2; i++) { + if (btf_fd[i]) + close(btf_fd[i]); + if (prog_fd[i] > 0) + close(prog_fd[i]); + if (link_fd[i] > 0) + close(link_fd[i]); + } +} diff --git a/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c b/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c index 33144c9432ae..f4aad35afae1 100644 --- a/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c +++ b/tools/testing/selftests/bpf/prog_tests/perf_event_stackmap.c @@ -63,7 +63,8 @@ void test_perf_event_stackmap(void) PERF_SAMPLE_BRANCH_NO_FLAGS | PERF_SAMPLE_BRANCH_NO_CYCLES | PERF_SAMPLE_BRANCH_CALL_STACK, - .sample_period = 5000, + .freq = 1, + .sample_freq = read_perf_max_sample_freq(), .size = sizeof(struct perf_event_attr), }; struct perf_event_stackmap *skel; diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c index d63a20fbed33..b15b343ebb6b 100644 --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c @@ -64,8 +64,12 @@ static void test_send_signal_common(struct perf_event_attr *attr, ASSERT_EQ(read(pipe_p2c[0], buf, 1), 1, "pipe_read"); /* wait a little for signal handler */ - for (int i = 0; i < 1000000000 && !sigusr1_received; i++) + for (int i = 0; i < 1000000000 && !sigusr1_received; i++) { j /= i + j + 1; + if (!attr) + /* trigger the nanosleep tracepoint program. */ + usleep(1); + } buf[0] = sigusr1_received ? '2' : '0'; ASSERT_EQ(sigusr1_received, 1, "sigusr1_received"); diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c index 9ad09a6c538a..b7ba5cd47d96 100644 --- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c +++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id.c @@ -7,13 +7,12 @@ void test_stacktrace_build_id(void) int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd; struct test_stacktrace_build_id *skel; - int err, stack_trace_len; + int err, stack_trace_len, build_id_size; __u32 key, prev_key, val, duration = 0; - char buf[256]; - int i, j; + char buf[BPF_BUILD_ID_SIZE]; struct bpf_stack_build_id id_offs[PERF_MAX_STACK_DEPTH]; int build_id_matches = 0; - int retry = 1; + int i, retry = 1; retry: skel = test_stacktrace_build_id__open_and_load(); @@ -52,9 +51,10 @@ retry: "err %d errno %d\n", err, errno)) goto cleanup; - err = extract_build_id(buf, 256); + build_id_size = read_build_id("urandom_read", buf, sizeof(buf)); + err = build_id_size < 0 ? 
build_id_size : 0; - if (CHECK(err, "get build_id with readelf", + if (CHECK(err, "read_build_id", "err %d errno %d\n", err, errno)) goto cleanup; @@ -64,8 +64,6 @@ retry: goto cleanup; do { - char build_id[64]; - err = bpf_map_lookup_elem(stackmap_fd, &key, id_offs); if (CHECK(err, "lookup_elem from stackmap", "err %d, errno %d\n", err, errno)) @@ -73,10 +71,7 @@ retry: for (i = 0; i < PERF_MAX_STACK_DEPTH; ++i) if (id_offs[i].status == BPF_STACK_BUILD_ID_VALID && id_offs[i].offset != 0) { - for (j = 0; j < 20; ++j) - sprintf(build_id + 2 * j, "%02x", - id_offs[i].build_id[j] & 0xff); - if (strstr(buf, build_id) != NULL) + if (memcmp(buf, id_offs[i].build_id, build_id_size) == 0) build_id_matches = 1; } prev_key = key; diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c index f4ea1a215ce4..5db9eec24b5b 100644 --- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c +++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c @@ -2,21 +2,6 @@ #include <test_progs.h> #include "test_stacktrace_build_id.skel.h" -static __u64 read_perf_max_sample_freq(void) -{ - __u64 sample_freq = 5000; /* fallback to 5000 on error */ - FILE *f; - __u32 duration = 0; - - f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r"); - if (f == NULL) - return sample_freq; - CHECK(fscanf(f, "%llu", &sample_freq) != 1, "Get max sample rate", - "return default value: 5000,err %d\n", -errno); - fclose(f); - return sample_freq; -} - void test_stacktrace_build_id_nmi(void) { int control_map_fd, stackid_hmap_fd, stackmap_fd; @@ -28,11 +13,10 @@ void test_stacktrace_build_id_nmi(void) .config = PERF_COUNT_HW_CPU_CYCLES, }; __u32 key, prev_key, val, duration = 0; - char buf[256]; - int i, j; + char buf[BPF_BUILD_ID_SIZE]; struct bpf_stack_build_id id_offs[PERF_MAX_STACK_DEPTH]; - int build_id_matches = 0; - int retry = 1; + int build_id_matches = 0, build_id_size; + int i, retry = 1; attr.sample_freq = read_perf_max_sample_freq(); @@ -94,7 +78,8 @@ retry: "err %d errno %d\n", err, errno)) goto cleanup; - err = extract_build_id(buf, 256); + build_id_size = read_build_id("urandom_read", buf, sizeof(buf)); + err = build_id_size < 0 ? 
build_id_size : 0; if (CHECK(err, "get build_id with readelf", "err %d errno %d\n", err, errno)) @@ -106,8 +91,6 @@ retry: goto cleanup; do { - char build_id[64]; - err = bpf_map__lookup_elem(skel->maps.stackmap, &key, sizeof(key), id_offs, sizeof(id_offs), 0); if (CHECK(err, "lookup_elem from stackmap", @@ -116,10 +99,7 @@ retry: for (i = 0; i < PERF_MAX_STACK_DEPTH; ++i) if (id_offs[i].status == BPF_STACK_BUILD_ID_VALID && id_offs[i].offset != 0) { - for (j = 0; j < 20; ++j) - sprintf(build_id + 2 * j, "%02x", - id_offs[i].build_id[j] & 0xff); - if (strstr(buf, build_id) != NULL) + if (memcmp(buf, id_offs[i].build_id, build_id_size) == 0) build_id_matches = 1; } prev_key = key; diff --git a/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c b/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c index c717741bf8b6..c91eda624657 100644 --- a/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c +++ b/tools/testing/selftests/bpf/prog_tests/task_fd_query_tp.c @@ -17,8 +17,13 @@ static void test_task_fd_query_tp_core(const char *probe_name, if (CHECK(err, "bpf_prog_test_load", "err %d errno %d\n", err, errno)) goto close_prog; - snprintf(buf, sizeof(buf), - "/sys/kernel/debug/tracing/events/%s/id", probe_name); + if (access("/sys/kernel/tracing/trace", F_OK) == 0) { + snprintf(buf, sizeof(buf), + "/sys/kernel/tracing/events/%s/id", probe_name); + } else { + snprintf(buf, sizeof(buf), + "/sys/kernel/debug/tracing/events/%s/id", probe_name); + } efd = open(buf, O_RDONLY, 0); if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno)) goto close_prog; diff --git a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c index f79fa5bc9a8d..740d5f644b40 100644 --- a/tools/testing/selftests/bpf/prog_tests/task_kfunc.c +++ b/tools/testing/selftests/bpf/prog_tests/task_kfunc.c @@ -73,11 +73,12 @@ static const char * const success_tests[] = { "test_task_acquire_release_current", "test_task_acquire_leave_in_map", "test_task_xchg_release", - "test_task_get_release", + "test_task_map_acquire_release", "test_task_current_acquire_release", "test_task_from_pid_arg", "test_task_from_pid_current", "test_task_from_pid_invalid", + "task_kfunc_acquire_trusted_walked", }; void test_task_kfunc(void) diff --git a/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c b/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c index 5cf85d0f9827..13bcaeb028b8 100644 --- a/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c +++ b/tools/testing/selftests/bpf/prog_tests/tcp_hdr_options.c @@ -151,7 +151,7 @@ static int check_hdr_opt(const struct bpf_test_option *exp, const struct bpf_test_option *act, const char *hdr_desc) { - if (!ASSERT_OK(memcmp(exp, act, sizeof(*exp)), hdr_desc)) { + if (!ASSERT_EQ(memcmp(exp, act, sizeof(*exp)), 0, hdr_desc)) { print_option(exp, "expected: "); print_option(act, " actual: "); return -1; @@ -169,7 +169,7 @@ static int check_hdr_stg(const struct hdr_stg *exp, int fd, "map_lookup(hdr_stg_map_fd)")) return -1; - if (!ASSERT_OK(memcmp(exp, &act, sizeof(*exp)), stg_desc)) { + if (!ASSERT_EQ(memcmp(exp, &act, sizeof(*exp)), 0, stg_desc)) { print_hdr_stg(exp, "expected: "); print_hdr_stg(&act, " actual: "); return -1; diff --git a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c index 9c77cd6b1eaf..bcf2e1905ed7 100644 --- a/tools/testing/selftests/bpf/prog_tests/test_local_storage.c +++ b/tools/testing/selftests/bpf/prog_tests/test_local_storage.c 
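For context, several hunks in this series (get_cgroup_id_user.c, kprobe_multi_test.c, task_fd_query_tp.c, tp_attach_query.c, trace_printk.c, trace_vprintk.c) repeat the same probe: prefer the tracefs mount under /sys/kernel/tracing and fall back to the legacy debugfs path only when it is absent. A minimal sketch of that pattern follows; the helper name tracefs_event_id_path is illustrative only and not part of the patch, which open-codes the check at each call site.

#include <stdio.h>
#include <unistd.h>

/* Illustrative only: build the "events/<probe>/id" path, preferring tracefs
 * and falling back to the debugfs mirror if tracefs is not mounted.
 */
static void tracefs_event_id_path(char *buf, size_t len, const char *probe_name)
{
	if (access("/sys/kernel/tracing/trace", F_OK) == 0)
		snprintf(buf, len, "/sys/kernel/tracing/events/%s/id", probe_name);
	else
		snprintf(buf, len, "/sys/kernel/debug/tracing/events/%s/id", probe_name);
}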
@@ -13,8 +13,6 @@ #include "network_helpers.h" #include "task_local_storage_helpers.h" -static unsigned int duration; - #define TEST_STORAGE_VALUE 0xbeefdead struct storage { @@ -25,7 +23,7 @@ struct storage { /* Fork and exec the provided rm binary and return the exit code of the * forked process and its pid. */ -static int run_self_unlink(int *monitored_pid, const char *rm_path) +static int run_self_unlink(struct local_storage *skel, const char *rm_path) { int child_pid, child_status, ret; int null_fd; @@ -37,7 +35,7 @@ static int run_self_unlink(int *monitored_pid, const char *rm_path) dup2(null_fd, STDERR_FILENO); close(null_fd); - *monitored_pid = getpid(); + skel->bss->monitored_pid = getpid(); /* Use the copied /usr/bin/rm to delete itself * /tmp/copy_of_rm /tmp/copy_of_rm. */ @@ -46,6 +44,7 @@ static int run_self_unlink(int *monitored_pid, const char *rm_path) exit(errno); } else if (child_pid > 0) { waitpid(child_pid, &child_status, 0); + ASSERT_EQ(skel->data->task_storage_result, 0, "task_storage_result"); return WEXITSTATUS(child_status); } @@ -60,36 +59,30 @@ static bool check_syscall_operations(int map_fd, int obj_fd) /* Looking up an existing element should fail initially */ err = bpf_map_lookup_elem_flags(map_fd, &obj_fd, &lookup_val, 0); - if (CHECK(!err || errno != ENOENT, "bpf_map_lookup_elem", - "err:%d errno:%d\n", err, errno)) + if (!ASSERT_EQ(err, -ENOENT, "bpf_map_lookup_elem")) return false; /* Create a new element */ err = bpf_map_update_elem(map_fd, &obj_fd, &val, BPF_NOEXIST); - if (CHECK(err < 0, "bpf_map_update_elem", "err:%d errno:%d\n", err, - errno)) + if (!ASSERT_OK(err, "bpf_map_update_elem")) return false; /* Lookup the newly created element */ err = bpf_map_lookup_elem_flags(map_fd, &obj_fd, &lookup_val, 0); - if (CHECK(err < 0, "bpf_map_lookup_elem", "err:%d errno:%d", err, - errno)) + if (!ASSERT_OK(err, "bpf_map_lookup_elem")) return false; /* Check the value of the newly created element */ - if (CHECK(lookup_val.value != val.value, "bpf_map_lookup_elem", - "value got = %x errno:%d", lookup_val.value, val.value)) + if (!ASSERT_EQ(lookup_val.value, val.value, "bpf_map_lookup_elem")) return false; err = bpf_map_delete_elem(map_fd, &obj_fd); - if (CHECK(err, "bpf_map_delete_elem()", "err:%d errno:%d\n", err, - errno)) + if (!ASSERT_OK(err, "bpf_map_delete_elem()")) return false; /* The lookup should fail, now that the element has been deleted */ err = bpf_map_lookup_elem_flags(map_fd, &obj_fd, &lookup_val, 0); - if (CHECK(!err || errno != ENOENT, "bpf_map_lookup_elem", - "err:%d errno:%d\n", err, errno)) + if (!ASSERT_EQ(err, -ENOENT, "bpf_map_lookup_elem")) return false; return true; @@ -104,35 +97,32 @@ void test_test_local_storage(void) char cmd[256]; skel = local_storage__open_and_load(); - if (CHECK(!skel, "skel_load", "lsm skeleton failed\n")) + if (!ASSERT_OK_PTR(skel, "skel_load")) goto close_prog; err = local_storage__attach(skel); - if (CHECK(err, "attach", "lsm attach failed: %d\n", err)) + if (!ASSERT_OK(err, "attach")) goto close_prog; task_fd = sys_pidfd_open(getpid(), 0); - if (CHECK(task_fd < 0, "pidfd_open", - "failed to get pidfd err:%d, errno:%d", task_fd, errno)) + if (!ASSERT_GE(task_fd, 0, "pidfd_open")) goto close_prog; if (!check_syscall_operations(bpf_map__fd(skel->maps.task_storage_map), task_fd)) goto close_prog; - if (CHECK(!mkdtemp(tmp_dir_path), "mkdtemp", - "unable to create tmpdir: %d\n", errno)) + if (!ASSERT_OK_PTR(mkdtemp(tmp_dir_path), "mkdtemp")) goto close_prog; snprintf(tmp_exec_path, sizeof(tmp_exec_path), 
"%s/copy_of_rm", tmp_dir_path); snprintf(cmd, sizeof(cmd), "cp /bin/rm %s", tmp_exec_path); - if (CHECK_FAIL(system(cmd))) + if (!ASSERT_OK(system(cmd), "system(cp)")) goto close_prog_rmdir; rm_fd = open(tmp_exec_path, O_RDONLY); - if (CHECK(rm_fd < 0, "open", "failed to open %s err:%d, errno:%d", - tmp_exec_path, rm_fd, errno)) + if (!ASSERT_GE(rm_fd, 0, "open(tmp_exec_path)")) goto close_prog_rmdir; if (!check_syscall_operations(bpf_map__fd(skel->maps.inode_storage_map), @@ -144,8 +134,8 @@ void test_test_local_storage(void) * unlink its executable. This operation should be denied by the loaded * LSM program. */ - err = run_self_unlink(&skel->bss->monitored_pid, tmp_exec_path); - if (CHECK(err != EPERM, "run_self_unlink", "err %d want EPERM\n", err)) + err = run_self_unlink(skel, tmp_exec_path); + if (!ASSERT_EQ(err, EPERM, "run_self_unlink")) goto close_prog_rmdir; /* Set the process being monitored to be the current process */ @@ -156,18 +146,16 @@ void test_test_local_storage(void) */ snprintf(cmd, sizeof(cmd), "mv %s/copy_of_rm %s/check_null_ptr", tmp_dir_path, tmp_dir_path); - if (CHECK_FAIL(system(cmd))) + if (!ASSERT_OK(system(cmd), "system(mv)")) goto close_prog_rmdir; - CHECK(skel->data->inode_storage_result != 0, "inode_storage_result", - "inode_local_storage not set\n"); + ASSERT_EQ(skel->data->inode_storage_result, 0, "inode_storage_result"); serv_sk = start_server(AF_INET6, SOCK_STREAM, NULL, 0, 0); - if (CHECK(serv_sk < 0, "start_server", "failed to start server\n")) + if (!ASSERT_GE(serv_sk, 0, "start_server")) goto close_prog_rmdir; - CHECK(skel->data->sk_storage_result != 0, "sk_storage_result", - "sk_local_storage not set\n"); + ASSERT_EQ(skel->data->sk_storage_result, 0, "sk_storage_result"); if (!check_syscall_operations(bpf_map__fd(skel->maps.sk_storage_map), serv_sk)) diff --git a/tools/testing/selftests/bpf/prog_tests/test_tunnel.c b/tools/testing/selftests/bpf/prog_tests/test_tunnel.c index 47f1d482fe39..d149ab98798d 100644 --- a/tools/testing/selftests/bpf/prog_tests/test_tunnel.c +++ b/tools/testing/selftests/bpf/prog_tests/test_tunnel.c @@ -89,6 +89,9 @@ #define IP6VXLAN_TUNL_DEV0 "ip6vxlan00" #define IP6VXLAN_TUNL_DEV1 "ip6vxlan11" +#define IPIP_TUNL_DEV0 "ipip00" +#define IPIP_TUNL_DEV1 "ipip11" + #define PING_ARGS "-i 0.01 -c 3 -w 10 -q" static int config_device(void) @@ -188,6 +191,79 @@ static void delete_ip6vxlan_tunnel(void) SYS_NOFAIL("ip link delete dev %s", IP6VXLAN_TUNL_DEV1); } +enum ipip_encap { + NONE = 0, + FOU = 1, + GUE = 2, +}; + +static int set_ipip_encap(const char *ipproto, const char *type) +{ + SYS(fail, "ip -n at_ns0 fou add port 5555 %s", ipproto); + SYS(fail, "ip -n at_ns0 link set dev %s type ipip encap %s", + IPIP_TUNL_DEV0, type); + SYS(fail, "ip -n at_ns0 link set dev %s type ipip encap-dport 5555", + IPIP_TUNL_DEV0); + + return 0; +fail: + return -1; +} + +static int add_ipip_tunnel(enum ipip_encap encap) +{ + int err; + const char *ipproto, *type; + + switch (encap) { + case FOU: + ipproto = "ipproto 4"; + type = "fou"; + break; + case GUE: + ipproto = "gue"; + type = ipproto; + break; + default: + ipproto = NULL; + type = ipproto; + } + + /* at_ns0 namespace */ + SYS(fail, "ip -n at_ns0 link add dev %s type ipip local %s remote %s", + IPIP_TUNL_DEV0, IP4_ADDR_VETH0, IP4_ADDR1_VETH1); + + if (type && ipproto) { + err = set_ipip_encap(ipproto, type); + if (!ASSERT_OK(err, "set_ipip_encap")) + goto fail; + } + + SYS(fail, "ip -n at_ns0 link set dev %s up", IPIP_TUNL_DEV0); + SYS(fail, "ip -n at_ns0 addr add dev %s %s/24", + 
IPIP_TUNL_DEV0, IP4_ADDR_TUNL_DEV0); + + /* root namespace */ + if (type && ipproto) + SYS(fail, "ip fou add port 5555 %s", ipproto); + SYS(fail, "ip link add dev %s type ipip external", IPIP_TUNL_DEV1); + SYS(fail, "ip link set dev %s up", IPIP_TUNL_DEV1); + SYS(fail, "ip addr add dev %s %s/24", IPIP_TUNL_DEV1, + IP4_ADDR_TUNL_DEV1); + + return 0; +fail: + return -1; +} + +static void delete_ipip_tunnel(void) +{ + SYS_NOFAIL("ip -n at_ns0 link delete dev %s", IPIP_TUNL_DEV0); + SYS_NOFAIL("ip -n at_ns0 fou del port 5555 2> /dev/null"); + SYS_NOFAIL("ip link delete dev %s", IPIP_TUNL_DEV1); + SYS_NOFAIL("ip fou del port 5555 2> /dev/null"); +} + static int test_ping(int family, const char *addr) { SYS(fail, "%s %s %s > /dev/null", ping_command(family), PING_ARGS, addr); @@ -386,10 +462,80 @@ done: test_tunnel_kern__destroy(skel); } -#define RUN_TEST(name) \ +static void test_ipip_tunnel(enum ipip_encap encap) +{ + struct test_tunnel_kern *skel = NULL; + struct nstoken *nstoken; + int set_src_prog_fd, get_src_prog_fd; + int ifindex = -1; + int err; + DECLARE_LIBBPF_OPTS(bpf_tc_hook, tc_hook, + .attach_point = BPF_TC_INGRESS); + + /* add ipip tunnel */ + err = add_ipip_tunnel(encap); + if (!ASSERT_OK(err, "add_ipip_tunnel")) + goto done; + + /* load and attach bpf prog to tunnel dev tc hook point */ + skel = test_tunnel_kern__open_and_load(); + if (!ASSERT_OK_PTR(skel, "test_tunnel_kern__open_and_load")) + goto done; + ifindex = if_nametoindex(IPIP_TUNL_DEV1); + if (!ASSERT_NEQ(ifindex, 0, "ipip11 ifindex")) + goto done; + tc_hook.ifindex = ifindex; + + switch (encap) { + case FOU: + get_src_prog_fd = bpf_program__fd( + skel->progs.ipip_encap_get_tunnel); + set_src_prog_fd = bpf_program__fd( + skel->progs.ipip_fou_set_tunnel); + break; + case GUE: + get_src_prog_fd = bpf_program__fd( + skel->progs.ipip_encap_get_tunnel); + set_src_prog_fd = bpf_program__fd( + skel->progs.ipip_gue_set_tunnel); + break; + default: + get_src_prog_fd = bpf_program__fd( + skel->progs.ipip_get_tunnel); + set_src_prog_fd = bpf_program__fd( + skel->progs.ipip_set_tunnel); + } + + if (!ASSERT_GE(set_src_prog_fd, 0, "bpf_program__fd")) + goto done; + if (!ASSERT_GE(get_src_prog_fd, 0, "bpf_program__fd")) + goto done; + if (attach_tc_prog(&tc_hook, get_src_prog_fd, set_src_prog_fd)) + goto done; + + /* ping from root namespace test */ + err = test_ping(AF_INET, IP4_ADDR_TUNL_DEV0); + if (!ASSERT_OK(err, "test_ping")) + goto done; + + /* ping from at_ns0 namespace test */ + nstoken = open_netns("at_ns0"); + err = test_ping(AF_INET, IP4_ADDR_TUNL_DEV1); + if (!ASSERT_OK(err, "test_ping")) + goto done; + close_netns(nstoken); + +done: + /* delete ipip tunnel */ + delete_ipip_tunnel(); + if (skel) + test_tunnel_kern__destroy(skel); +} + +#define RUN_TEST(name, ...) 
\ ({ \ if (test__start_subtest(#name)) { \ - test_ ## name(); \ + test_ ## name(__VA_ARGS__); \ } \ }) @@ -400,6 +546,9 @@ static void *test_tunnel_run_tests(void *arg) RUN_TEST(vxlan_tunnel); RUN_TEST(ip6vxlan_tunnel); + RUN_TEST(ipip_tunnel, NONE); + RUN_TEST(ipip_tunnel, FOU); + RUN_TEST(ipip_tunnel, GUE); cleanup(); diff --git a/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c b/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c index 770fcc3bb1ba..655d69f0ff0b 100644 --- a/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c +++ b/tools/testing/selftests/bpf/prog_tests/tp_attach_query.c @@ -16,8 +16,13 @@ void serial_test_tp_attach_query(void) for (i = 0; i < num_progs; i++) obj[i] = NULL; - snprintf(buf, sizeof(buf), - "/sys/kernel/debug/tracing/events/sched/sched_switch/id"); + if (access("/sys/kernel/tracing/trace", F_OK) == 0) { + snprintf(buf, sizeof(buf), + "/sys/kernel/tracing/events/sched/sched_switch/id"); + } else { + snprintf(buf, sizeof(buf), + "/sys/kernel/debug/tracing/events/sched/sched_switch/id"); + } efd = open(buf, O_RDONLY, 0); if (CHECK(efd < 0, "open", "err %d errno %d\n", efd, errno)) return; diff --git a/tools/testing/selftests/bpf/prog_tests/trace_printk.c b/tools/testing/selftests/bpf/prog_tests/trace_printk.c index cade7f12315f..7b9124d506a5 100644 --- a/tools/testing/selftests/bpf/prog_tests/trace_printk.c +++ b/tools/testing/selftests/bpf/prog_tests/trace_printk.c @@ -5,7 +5,8 @@ #include "trace_printk.lskel.h" -#define TRACEBUF "/sys/kernel/debug/tracing/trace_pipe" +#define TRACEFS_PIPE "/sys/kernel/tracing/trace_pipe" +#define DEBUGFS_PIPE "/sys/kernel/debug/tracing/trace_pipe" #define SEARCHMSG "testing,testing" void serial_test_trace_printk(void) @@ -34,8 +35,11 @@ void serial_test_trace_printk(void) if (!ASSERT_OK(err, "trace_printk__attach")) goto cleanup; - fp = fopen(TRACEBUF, "r"); - if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)")) + if (access(TRACEFS_PIPE, F_OK) == 0) + fp = fopen(TRACEFS_PIPE, "r"); + else + fp = fopen(DEBUGFS_PIPE, "r"); + if (!ASSERT_OK_PTR(fp, "fopen(TRACE_PIPE)")) goto cleanup; /* We do not want to wait forever if this test fails... */ diff --git a/tools/testing/selftests/bpf/prog_tests/trace_vprintk.c b/tools/testing/selftests/bpf/prog_tests/trace_vprintk.c index 7a4e313e8558..44ea2fd88f4c 100644 --- a/tools/testing/selftests/bpf/prog_tests/trace_vprintk.c +++ b/tools/testing/selftests/bpf/prog_tests/trace_vprintk.c @@ -5,7 +5,8 @@ #include "trace_vprintk.lskel.h" -#define TRACEBUF "/sys/kernel/debug/tracing/trace_pipe" +#define TRACEFS_PIPE "/sys/kernel/tracing/trace_pipe" +#define DEBUGFS_PIPE "/sys/kernel/debug/tracing/trace_pipe" #define SEARCHMSG "1,2,3,4,5,6,7,8,9,10" void serial_test_trace_vprintk(void) @@ -27,8 +28,11 @@ void serial_test_trace_vprintk(void) if (!ASSERT_OK(err, "trace_vprintk__attach")) goto cleanup; - fp = fopen(TRACEBUF, "r"); - if (!ASSERT_OK_PTR(fp, "fopen(TRACEBUF)")) + if (access(TRACEFS_PIPE, F_OK) == 0) + fp = fopen(TRACEFS_PIPE, "r"); + else + fp = fopen(DEBUGFS_PIPE, "r"); + if (!ASSERT_OK_PTR(fp, "fopen(TRACE_PIPE)")) goto cleanup; /* We do not want to wait forever if this test fails... 
*/ diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c b/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c index 6558c857e620..d5b3377aa33c 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c @@ -3,7 +3,6 @@ #include <test_progs.h> #include "test_uprobe_autoattach.skel.h" -#include "progs/bpf_misc.h" /* uprobe attach point */ static noinline int autoattach_trigger_func(int arg1, int arg2, int arg3, diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c new file mode 100644 index 000000000000..73dff693d411 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <test_progs.h> + +#include "cap_helpers.h" +#include "verifier_and.skel.h" +#include "verifier_array_access.skel.h" +#include "verifier_basic_stack.skel.h" +#include "verifier_bounds_deduction.skel.h" +#include "verifier_bounds_deduction_non_const.skel.h" +#include "verifier_bounds_mix_sign_unsign.skel.h" +#include "verifier_cfg.skel.h" +#include "verifier_cgroup_inv_retcode.skel.h" +#include "verifier_cgroup_skb.skel.h" +#include "verifier_cgroup_storage.skel.h" +#include "verifier_const_or.skel.h" +#include "verifier_ctx_sk_msg.skel.h" +#include "verifier_direct_stack_access_wraparound.skel.h" +#include "verifier_div0.skel.h" +#include "verifier_div_overflow.skel.h" +#include "verifier_helper_access_var_len.skel.h" +#include "verifier_helper_packet_access.skel.h" +#include "verifier_helper_restricted.skel.h" +#include "verifier_helper_value_access.skel.h" +#include "verifier_int_ptr.skel.h" +#include "verifier_ld_ind.skel.h" +#include "verifier_leak_ptr.skel.h" +#include "verifier_map_ptr.skel.h" +#include "verifier_map_ret_val.skel.h" +#include "verifier_masking.skel.h" +#include "verifier_meta_access.skel.h" +#include "verifier_raw_stack.skel.h" +#include "verifier_raw_tp_writable.skel.h" +#include "verifier_ringbuf.skel.h" +#include "verifier_spill_fill.skel.h" +#include "verifier_stack_ptr.skel.h" +#include "verifier_uninit.skel.h" +#include "verifier_value_adj_spill.skel.h" +#include "verifier_value.skel.h" +#include "verifier_value_or_null.skel.h" +#include "verifier_var_off.skel.h" +#include "verifier_xadd.skel.h" +#include "verifier_xdp.skel.h" +#include "verifier_xdp_direct_packet_access.skel.h" + +__maybe_unused +static void run_tests_aux(const char *skel_name, skel_elf_bytes_fn elf_bytes_factory) +{ + struct test_loader tester = {}; + __u64 old_caps; + int err; + + /* test_verifier tests are executed w/o CAP_SYS_ADMIN, do the same here */ + err = cap_disable_effective(1ULL << CAP_SYS_ADMIN, &old_caps); + if (err) { + PRINT_FAIL("failed to drop CAP_SYS_ADMIN: %i, %s\n", err, strerror(err)); + return; + } + + test_loader__run_subtests(&tester, skel_name, elf_bytes_factory); + test_loader_fini(&tester); + + err = cap_enable_effective(old_caps, NULL); + if (err) + PRINT_FAIL("failed to restore CAP_SYS_ADMIN: %i, %s\n", err, strerror(err)); +} + +#define RUN(skel) run_tests_aux(#skel, skel##__elf_bytes) + +void test_verifier_and(void) { RUN(verifier_and); } +void test_verifier_array_access(void) { RUN(verifier_array_access); } +void test_verifier_basic_stack(void) { RUN(verifier_basic_stack); } +void test_verifier_bounds_deduction(void) { RUN(verifier_bounds_deduction); } +void test_verifier_bounds_deduction_non_const(void) { RUN(verifier_bounds_deduction_non_const); } 
+void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); } +void test_verifier_cfg(void) { RUN(verifier_cfg); } +void test_verifier_cgroup_inv_retcode(void) { RUN(verifier_cgroup_inv_retcode); } +void test_verifier_cgroup_skb(void) { RUN(verifier_cgroup_skb); } +void test_verifier_cgroup_storage(void) { RUN(verifier_cgroup_storage); } +void test_verifier_const_or(void) { RUN(verifier_const_or); } +void test_verifier_ctx_sk_msg(void) { RUN(verifier_ctx_sk_msg); } +void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); } +void test_verifier_div0(void) { RUN(verifier_div0); } +void test_verifier_div_overflow(void) { RUN(verifier_div_overflow); } +void test_verifier_helper_access_var_len(void) { RUN(verifier_helper_access_var_len); } +void test_verifier_helper_packet_access(void) { RUN(verifier_helper_packet_access); } +void test_verifier_helper_restricted(void) { RUN(verifier_helper_restricted); } +void test_verifier_helper_value_access(void) { RUN(verifier_helper_value_access); } +void test_verifier_int_ptr(void) { RUN(verifier_int_ptr); } +void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } +void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } +void test_verifier_map_ptr(void) { RUN(verifier_map_ptr); } +void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); } +void test_verifier_masking(void) { RUN(verifier_masking); } +void test_verifier_meta_access(void) { RUN(verifier_meta_access); } +void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); } +void test_verifier_raw_tp_writable(void) { RUN(verifier_raw_tp_writable); } +void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); } +void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } +void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } +void test_verifier_uninit(void) { RUN(verifier_uninit); } +void test_verifier_value_adj_spill(void) { RUN(verifier_value_adj_spill); } +void test_verifier_value(void) { RUN(verifier_value); } +void test_verifier_value_or_null(void) { RUN(verifier_value_or_null); } +void test_verifier_var_off(void) { RUN(verifier_var_off); } +void test_verifier_xadd(void) { RUN(verifier_xadd); } +void test_verifier_xdp(void) { RUN(verifier_xdp); } +void test_verifier_xdp_direct_packet_access(void) { RUN(verifier_xdp_direct_packet_access); } diff --git a/tools/testing/selftests/bpf/prog_tests/verifier_log.c b/tools/testing/selftests/bpf/prog_tests/verifier_log.c new file mode 100644 index 000000000000..8337c6bc5b95 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/verifier_log.c @@ -0,0 +1,450 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include <test_progs.h> +#include <bpf/btf.h> + +#include "test_log_buf.skel.h" + + +static bool check_prog_load(int prog_fd, bool expect_err, const char *tag) +{ + if (expect_err) { + if (!ASSERT_LT(prog_fd, 0, tag)) { + close(prog_fd); + return false; + } + } else /* !expect_err */ { + if (!ASSERT_GT(prog_fd, 0, tag)) + return false; + } + if (prog_fd >= 0) + close(prog_fd); + return true; +} + +static struct { + /* strategically placed before others to avoid accidental modification by kernel */ + char filler[1024]; + char buf[1024]; + /* strategically placed after buf[] to catch more accidental corruptions */ + char reference[1024]; +} logs; +static const struct bpf_insn *insns; +static size_t insn_cnt; + +static int load_prog(struct bpf_prog_load_opts *opts, bool expect_load_error) +{ + int prog_fd; + + prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_prog", + "GPL", insns, insn_cnt, opts); + check_prog_load(prog_fd, expect_load_error, "prog_load"); + + return prog_fd; +} + +static void verif_log_subtest(const char *name, bool expect_load_error, int log_level) +{ + LIBBPF_OPTS(bpf_prog_load_opts, opts); + char *exp_log, prog_name[16], op_name[32]; + struct test_log_buf *skel; + struct bpf_program *prog; + size_t fixed_log_sz; + __u32 log_true_sz_fixed, log_true_sz_rolling; + int i, mode, err, prog_fd, res; + + skel = test_log_buf__open(); + if (!ASSERT_OK_PTR(skel, "skel_open")) + return; + + bpf_object__for_each_program(prog, skel->obj) { + if (strcmp(bpf_program__name(prog), name) == 0) + bpf_program__set_autoload(prog, true); + else + bpf_program__set_autoload(prog, false); + } + + err = test_log_buf__load(skel); + if (!expect_load_error && !ASSERT_OK(err, "unexpected_load_failure")) + goto cleanup; + if (expect_load_error && !ASSERT_ERR(err, "unexpected_load_success")) + goto cleanup; + + insns = bpf_program__insns(skel->progs.good_prog); + insn_cnt = bpf_program__insn_cnt(skel->progs.good_prog); + + opts.log_buf = logs.reference; + opts.log_size = sizeof(logs.reference); + opts.log_level = log_level | 8 /* BPF_LOG_FIXED */; + load_prog(&opts, expect_load_error); + + fixed_log_sz = strlen(logs.reference) + 1; + if (!ASSERT_GT(fixed_log_sz, 50, "fixed_log_sz")) + goto cleanup; + memset(logs.reference + fixed_log_sz, 0, sizeof(logs.reference) - fixed_log_sz); + + /* validate BPF_LOG_FIXED works as verifier log used to work, that is: + * we get -ENOSPC and beginning of the full verifier log. This only + * works for log_level 2 and log_level 1 + failed program. For log + * level 2 we don't reset log at all. For log_level 1 + failed program + * we don't get to verification stats output. With log level 1 + * for successful program final result will be just verifier stats. + * But if provided too short log buf, kernel will NULL-out log->ubuf + * and will stop emitting further log. This means we'll never see + * predictable verifier stats. + * Long story short, we do the following -ENOSPC test only for + * predictable combinations. 
+ */ + if (log_level >= 2 || expect_load_error) { + opts.log_buf = logs.buf; + opts.log_level = log_level | 8; /* fixed-length log */ + opts.log_size = 25; + + prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "log_fixed25", + "GPL", insns, insn_cnt, &opts); + if (!ASSERT_EQ(prog_fd, -ENOSPC, "unexpected_log_fixed_prog_load_result")) { + if (prog_fd >= 0) + close(prog_fd); + goto cleanup; + } + if (!ASSERT_EQ(strlen(logs.buf), 24, "log_fixed_25")) + goto cleanup; + if (!ASSERT_STRNEQ(logs.buf, logs.reference, 24, "log_fixed_contents_25")) + goto cleanup; + } + + /* validate rolling verifier log logic: try all variations of log buf + * length to force various truncation scenarios + */ + opts.log_buf = logs.buf; + + /* rotating mode, then fixed mode */ + for (mode = 1; mode >= 0; mode--) { + /* prefill logs.buf with 'A's to detect any write beyond allowed length */ + memset(logs.filler, 'A', sizeof(logs.filler)); + logs.filler[sizeof(logs.filler) - 1] = '\0'; + memset(logs.buf, 'A', sizeof(logs.buf)); + logs.buf[sizeof(logs.buf) - 1] = '\0'; + + for (i = 1; i < fixed_log_sz; i++) { + opts.log_size = i; + opts.log_level = log_level | (mode ? 0 : 8 /* BPF_LOG_FIXED */); + + snprintf(prog_name, sizeof(prog_name), + "log_%s_%d", mode ? "roll" : "fixed", i); + prog_fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, prog_name, + "GPL", insns, insn_cnt, &opts); + + snprintf(op_name, sizeof(op_name), + "log_%s_prog_load_%d", mode ? "roll" : "fixed", i); + if (!ASSERT_EQ(prog_fd, -ENOSPC, op_name)) { + if (prog_fd >= 0) + close(prog_fd); + goto cleanup; + } + + snprintf(op_name, sizeof(op_name), + "log_%s_strlen_%d", mode ? "roll" : "fixed", i); + ASSERT_EQ(strlen(logs.buf), i - 1, op_name); + + if (mode) + exp_log = logs.reference + fixed_log_sz - i; + else + exp_log = logs.reference; + + snprintf(op_name, sizeof(op_name), + "log_%s_contents_%d", mode ? "roll" : "fixed", i); + if (!ASSERT_STRNEQ(logs.buf, exp_log, i - 1, op_name)) { + printf("CMP:%d\nS1:'%s'\nS2:'%s'\n", + strncmp(logs.buf, exp_log, i - 1), + logs.buf, exp_log); + goto cleanup; + } + + /* check that unused portions of logs.buf is not overwritten */ + snprintf(op_name, sizeof(op_name), + "log_%s_unused_%d", mode ? 
"roll" : "fixed", i); + if (!ASSERT_STREQ(logs.buf + i, logs.filler + i, op_name)) { + printf("CMP:%d\nS1:'%s'\nS2:'%s'\n", + strcmp(logs.buf + i, logs.filler + i), + logs.buf + i, logs.filler + i); + goto cleanup; + } + } + } + + /* (FIXED) get actual log size */ + opts.log_buf = logs.buf; + opts.log_level = log_level | 8; /* BPF_LOG_FIXED */ + opts.log_size = sizeof(logs.buf); + opts.log_true_size = 0; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_fixed"); + + log_true_sz_fixed = opts.log_true_size; + ASSERT_GT(log_true_sz_fixed, 0, "log_true_sz_fixed"); + + /* (FIXED, NULL) get actual log size */ + opts.log_buf = NULL; + opts.log_level = log_level | 8; /* BPF_LOG_FIXED */ + opts.log_size = 0; + opts.log_true_size = 0; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_fixed_null"); + ASSERT_EQ(opts.log_true_size, log_true_sz_fixed, "log_sz_fixed_null_eq"); + + /* (ROLLING) get actual log size */ + opts.log_buf = logs.buf; + opts.log_level = log_level; + opts.log_size = sizeof(logs.buf); + opts.log_true_size = 0; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_rolling"); + + log_true_sz_rolling = opts.log_true_size; + ASSERT_EQ(log_true_sz_rolling, log_true_sz_fixed, "log_true_sz_eq"); + + /* (ROLLING, NULL) get actual log size */ + opts.log_buf = NULL; + opts.log_level = log_level; + opts.log_size = 0; + opts.log_true_size = 0; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_rolling_null"); + ASSERT_EQ(opts.log_true_size, log_true_sz_rolling, "log_true_sz_null_eq"); + + /* (FIXED) expect -ENOSPC for one byte short log */ + opts.log_buf = logs.buf; + opts.log_level = log_level | 8; /* BPF_LOG_FIXED */ + opts.log_size = log_true_sz_fixed - 1; + opts.log_true_size = 0; + res = load_prog(&opts, true /* should fail */); + ASSERT_EQ(res, -ENOSPC, "prog_load_res_too_short_fixed"); + + /* (FIXED) expect *not* -ENOSPC with exact log_true_size buffer */ + opts.log_buf = logs.buf; + opts.log_level = log_level | 8; /* BPF_LOG_FIXED */ + opts.log_size = log_true_sz_fixed; + opts.log_true_size = 0; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_fixed"); + + /* (ROLLING) expect -ENOSPC for one byte short log */ + opts.log_buf = logs.buf; + opts.log_level = log_level; + opts.log_size = log_true_sz_rolling - 1; + res = load_prog(&opts, true /* should fail */); + ASSERT_EQ(res, -ENOSPC, "prog_load_res_too_short_rolling"); + + /* (ROLLING) expect *not* -ENOSPC with exact log_true_size buffer */ + opts.log_buf = logs.buf; + opts.log_level = log_level; + opts.log_size = log_true_sz_rolling; + opts.log_true_size = 0; + res = load_prog(&opts, expect_load_error); + ASSERT_NEQ(res, -ENOSPC, "prog_load_res_just_right_rolling"); + +cleanup: + test_log_buf__destroy(skel); +} + +static const void *btf_data; +static u32 btf_data_sz; + +static int load_btf(struct bpf_btf_load_opts *opts, bool expect_err) +{ + int fd; + + fd = bpf_btf_load(btf_data, btf_data_sz, opts); + if (fd >= 0) + close(fd); + if (expect_err) + ASSERT_LT(fd, 0, "btf_load_failure"); + else /* !expect_err */ + ASSERT_GT(fd, 0, "btf_load_success"); + return fd; +} + +static void verif_btf_log_subtest(bool bad_btf) +{ + LIBBPF_OPTS(bpf_btf_load_opts, opts); + struct btf *btf; + struct btf_type *t; + char *exp_log, op_name[32]; + size_t fixed_log_sz; + __u32 log_true_sz_fixed, log_true_sz_rolling; + int i, res; + + /* prepare simple BTF contents */ 
+ btf = btf__new_empty(); + if (!ASSERT_OK_PTR(btf, "btf_new_empty")) + return; + res = btf__add_int(btf, "whatever", 4, 0); + if (!ASSERT_GT(res, 0, "btf_add_int_id")) + goto cleanup; + if (bad_btf) { + /* btf__add_int() doesn't allow bad value of size, so we'll just + * force-cast btf_type pointer and manually override size to invalid + * 3 if we need to simulate failure + */ + t = (void *)btf__type_by_id(btf, res); + if (!ASSERT_OK_PTR(t, "int_btf_type")) + goto cleanup; + t->size = 3; + } + + btf_data = btf__raw_data(btf, &btf_data_sz); + if (!ASSERT_OK_PTR(btf_data, "btf_data")) + goto cleanup; + + load_btf(&opts, bad_btf); + + opts.log_buf = logs.reference; + opts.log_size = sizeof(logs.reference); + opts.log_level = 1 | 8 /* BPF_LOG_FIXED */; + load_btf(&opts, bad_btf); + + fixed_log_sz = strlen(logs.reference) + 1; + if (!ASSERT_GT(fixed_log_sz, 50, "fixed_log_sz")) + goto cleanup; + memset(logs.reference + fixed_log_sz, 0, sizeof(logs.reference) - fixed_log_sz); + + /* validate BPF_LOG_FIXED truncation works as verifier log used to work */ + opts.log_buf = logs.buf; + opts.log_level = 1 | 8; /* fixed-length log */ + opts.log_size = 25; + res = load_btf(&opts, true); + ASSERT_EQ(res, -ENOSPC, "half_log_fd"); + ASSERT_EQ(strlen(logs.buf), 24, "log_fixed_25"); + ASSERT_STRNEQ(logs.buf, logs.reference, 24, op_name); + + /* validate rolling verifier log logic: try all variations of log buf + * length to force various truncation scenarios + */ + opts.log_buf = logs.buf; + opts.log_level = 1; /* rolling log */ + + /* prefill logs.buf with 'A's to detect any write beyond allowed length */ + memset(logs.filler, 'A', sizeof(logs.filler)); + logs.filler[sizeof(logs.filler) - 1] = '\0'; + memset(logs.buf, 'A', sizeof(logs.buf)); + logs.buf[sizeof(logs.buf) - 1] = '\0'; + + for (i = 1; i < fixed_log_sz; i++) { + opts.log_size = i; + + snprintf(op_name, sizeof(op_name), "log_roll_btf_load_%d", i); + res = load_btf(&opts, true); + if (!ASSERT_EQ(res, -ENOSPC, op_name)) + goto cleanup; + + exp_log = logs.reference + fixed_log_sz - i; + snprintf(op_name, sizeof(op_name), "log_roll_contents_%d", i); + if (!ASSERT_STREQ(logs.buf, exp_log, op_name)) { + printf("CMP:%d\nS1:'%s'\nS2:'%s'\n", + strcmp(logs.buf, exp_log), + logs.buf, exp_log); + goto cleanup; + } + + /* check that unused portions of logs.buf are not overwritten */ + snprintf(op_name, sizeof(op_name), "log_roll_unused_tail_%d", i); + if (!ASSERT_STREQ(logs.buf + i, logs.filler + i, op_name)) { + printf("CMP:%d\nS1:'%s'\nS2:'%s'\n", + strcmp(logs.buf + i, logs.filler + i), + logs.buf + i, logs.filler + i); + goto cleanup; + } + } + + /* (FIXED) get actual log size */ + opts.log_buf = logs.buf; + opts.log_level = 1 | 8; /* BPF_LOG_FIXED */ + opts.log_size = sizeof(logs.buf); + opts.log_true_size = 0; + res = load_btf(&opts, bad_btf); + ASSERT_NEQ(res, -ENOSPC, "btf_load_res_fixed"); + + log_true_sz_fixed = opts.log_true_size; + ASSERT_GT(log_true_sz_fixed, 0, "log_true_sz_fixed"); + + /* (FIXED, NULL) get actual log size */ + opts.log_buf = NULL; + opts.log_level = 1 | 8; /* BPF_LOG_FIXED */ + opts.log_size = 0; + opts.log_true_size = 0; + res = load_btf(&opts, bad_btf); + ASSERT_NEQ(res, -ENOSPC, "btf_load_res_fixed_null"); + ASSERT_EQ(opts.log_true_size, log_true_sz_fixed, "log_sz_fixed_null_eq"); + + /* (ROLLING) get actual log size */ + opts.log_buf = logs.buf; + opts.log_level = 1; + opts.log_size = sizeof(logs.buf); + opts.log_true_size = 0; + res = load_btf(&opts, bad_btf); + ASSERT_NEQ(res, -ENOSPC, "btf_load_res_rolling"); + + 
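The "get actual log size" steps above, for both bpf_prog_load() and bpf_btf_load(), are what make exact log sizing possible from user space. A minimal consumer-side sketch of that flow, not part of the patch and assuming insns/insn_cnt are prepared as earlier in this test, might look like:

#include <errno.h>
#include <stdlib.h>
#include <unistd.h>
#include <bpf/bpf.h>

/* Probe with a NULL buffer to learn opts.log_true_size, then retry with a
 * buffer of exactly that size; the second attempt should not hit -ENOSPC.
 * The same pattern applies to bpf_btf_load() via struct bpf_btf_load_opts.
 */
static int load_with_exact_log_buf(const struct bpf_insn *insns, size_t insn_cnt,
				   char **log_out)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts, .log_level = 1);
	int fd;

	/* first pass: no buffer, kernel still reports the required size */
	fd = bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "probe", "GPL",
			   insns, insn_cnt, &opts);
	if (fd >= 0)
		close(fd);

	*log_out = malloc(opts.log_true_size);
	if (!*log_out)
		return -ENOMEM;

	/* second pass: exactly log_true_size bytes of log buffer */
	opts.log_buf = *log_out;
	opts.log_size = opts.log_true_size;
	return bpf_prog_load(BPF_PROG_TYPE_RAW_TRACEPOINT, "probe", "GPL",
			     insns, insn_cnt, &opts);
}

Requesting log_level | 8 (BPF_LOG_FIXED) in the same opts keeps the old fixed-mode behavior, where the beginning of the log is retained and an undersized buffer fails with -ENOSPC, instead of the new default that rotates the buffer and keeps the tail.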
log_true_sz_rolling = opts.log_true_size; + ASSERT_EQ(log_true_sz_rolling, log_true_sz_fixed, "log_true_sz_eq"); + + /* (ROLLING, NULL) get actual log size */ + opts.log_buf = NULL; + opts.log_level = 1; + opts.log_size = 0; + opts.log_true_size = 0; + res = load_btf(&opts, bad_btf); + ASSERT_NEQ(res, -ENOSPC, "btf_load_res_rolling_null"); + ASSERT_EQ(opts.log_true_size, log_true_sz_rolling, "log_true_sz_null_eq"); + + /* (FIXED) expect -ENOSPC for one byte short log */ + opts.log_buf = logs.buf; + opts.log_level = 1 | 8; /* BPF_LOG_FIXED */ + opts.log_size = log_true_sz_fixed - 1; + opts.log_true_size = 0; + res = load_btf(&opts, true); + ASSERT_EQ(res, -ENOSPC, "btf_load_res_too_short_fixed"); + + /* (FIXED) expect *not* -ENOSPC with exact log_true_size buffer */ + opts.log_buf = logs.buf; + opts.log_level = 1 | 8; /* BPF_LOG_FIXED */ + opts.log_size = log_true_sz_fixed; + opts.log_true_size = 0; + res = load_btf(&opts, bad_btf); + ASSERT_NEQ(res, -ENOSPC, "btf_load_res_just_right_fixed"); + + /* (ROLLING) expect -ENOSPC for one byte short log */ + opts.log_buf = logs.buf; + opts.log_level = 1; + opts.log_size = log_true_sz_rolling - 1; + res = load_btf(&opts, true); + ASSERT_EQ(res, -ENOSPC, "btf_load_res_too_short_rolling"); + + /* (ROLLING) expect *not* -ENOSPC with exact log_true_size buffer */ + opts.log_buf = logs.buf; + opts.log_level = 1; + opts.log_size = log_true_sz_rolling; + opts.log_true_size = 0; + res = load_btf(&opts, bad_btf); + ASSERT_NEQ(res, -ENOSPC, "btf_load_res_just_right_rolling"); + +cleanup: + btf__free(btf); +} + +void test_verifier_log(void) +{ + if (test__start_subtest("good_prog-level1")) + verif_log_subtest("good_prog", false, 1); + if (test__start_subtest("good_prog-level2")) + verif_log_subtest("good_prog", false, 2); + if (test__start_subtest("bad_prog-level1")) + verif_log_subtest("bad_prog", true, 1); + if (test__start_subtest("bad_prog-level2")) + verif_log_subtest("bad_prog", true, 2); + if (test__start_subtest("bad_btf")) + verif_btf_log_subtest(true /* bad btf */); + if (test__start_subtest("good_btf")) + verif_btf_log_subtest(false /* !bad btf */); +} diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c index fde13010980d..c94eb63b7b77 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c @@ -87,12 +87,12 @@ static void test_max_pkt_size(int fd) void test_xdp_do_redirect(void) { int err, xdp_prog_fd, tc_prog_fd, ifindex_src, ifindex_dst; - char data[sizeof(pkt_udp) + sizeof(__u32)]; + char data[sizeof(pkt_udp) + sizeof(__u64)]; struct test_xdp_do_redirect *skel = NULL; struct nstoken *nstoken = NULL; struct bpf_link *link; LIBBPF_OPTS(bpf_xdp_query_opts, query_opts); - struct xdp_md ctx_in = { .data = sizeof(__u32), + struct xdp_md ctx_in = { .data = sizeof(__u64), .data_end = sizeof(data) }; DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, .data_in = &data, @@ -106,8 +106,9 @@ void test_xdp_do_redirect(void) DECLARE_LIBBPF_OPTS(bpf_tc_hook, tc_hook, .attach_point = BPF_TC_INGRESS); - memcpy(&data[sizeof(__u32)], &pkt_udp, sizeof(pkt_udp)); + memcpy(&data[sizeof(__u64)], &pkt_udp, sizeof(pkt_udp)); *((__u32 *)data) = 0x42; /* metadata test value */ + *((__u32 *)data + 4) = 0; skel = test_xdp_do_redirect__open(); if (!ASSERT_OK_PTR(skel, "skel")) diff --git a/tools/testing/selftests/bpf/progs/bench_local_storage_create.c b/tools/testing/selftests/bpf/progs/bench_local_storage_create.c new file mode 
100644 index 000000000000..e4bfbba6c193 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/bench_local_storage_create.c @@ -0,0 +1,82 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ + +#include "vmlinux.h" +#include "bpf_tracing_net.h" +#include <bpf/bpf_tracing.h> +#include <bpf/bpf_helpers.h> + +long create_errs = 0; +long create_cnts = 0; +long kmalloc_cnts = 0; +__u32 bench_pid = 0; + +struct storage { + __u8 data[64]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_SK_STORAGE); + __uint(map_flags, BPF_F_NO_PREALLOC); + __type(key, int); + __type(value, struct storage); +} sk_storage_map SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_TASK_STORAGE); + __uint(map_flags, BPF_F_NO_PREALLOC); + __type(key, int); + __type(value, struct storage); +} task_storage_map SEC(".maps"); + +SEC("raw_tp/kmalloc") +int BPF_PROG(kmalloc, unsigned long call_site, const void *ptr, + size_t bytes_req, size_t bytes_alloc, gfp_t gfp_flags, + int node) +{ + __sync_fetch_and_add(&kmalloc_cnts, 1); + + return 0; +} + +SEC("tp_btf/sched_process_fork") +int BPF_PROG(sched_process_fork, struct task_struct *parent, struct task_struct *child) +{ + struct storage *stg; + + if (parent->tgid != bench_pid) + return 0; + + stg = bpf_task_storage_get(&task_storage_map, child, NULL, + BPF_LOCAL_STORAGE_GET_F_CREATE); + if (stg) + __sync_fetch_and_add(&create_cnts, 1); + else + __sync_fetch_and_add(&create_errs, 1); + + return 0; +} + +SEC("lsm.s/socket_post_create") +int BPF_PROG(socket_post_create, struct socket *sock, int family, int type, + int protocol, int kern) +{ + struct storage *stg; + __u32 pid; + + pid = bpf_get_current_pid_tgid() >> 32; + if (pid != bench_pid) + return 0; + + stg = bpf_sk_storage_get(&sk_storage_map, sock->sk, NULL, + BPF_LOCAL_STORAGE_GET_F_CREATE); + + if (stg) + __sync_fetch_and_add(&create_cnts, 1); + else + __sync_fetch_and_add(&create_errs, 1); + + return 0; +} + +char __license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c b/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c index 9ba14c37bbcc..5ddcc46fd886 100644 --- a/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c +++ b/tools/testing/selftests/bpf/progs/bpf_iter_ksym.c @@ -33,7 +33,6 @@ int dump_ksym(struct bpf_iter__ksym *ctx) __u32 seq_num = ctx->meta->seq_num; unsigned long value; char type; - int ret; if (!iter) return 0; diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_setsockopt.c b/tools/testing/selftests/bpf/progs/bpf_iter_setsockopt.c index b77adfd55d73..ec7f91850dec 100644 --- a/tools/testing/selftests/bpf/progs/bpf_iter_setsockopt.c +++ b/tools/testing/selftests/bpf/progs/bpf_iter_setsockopt.c @@ -42,7 +42,6 @@ int change_tcp_cc(struct bpf_iter__tcp *ctx) char cur_cc[TCP_CA_NAME_MAX]; struct tcp_sock *tp; struct sock *sk; - int ret; if (!bpf_tcp_sk(ctx->sk_common)) return 0; diff --git a/tools/testing/selftests/bpf/progs/bpf_loop.c b/tools/testing/selftests/bpf/progs/bpf_loop.c index de1fc82d2710..1d194455b109 100644 --- a/tools/testing/selftests/bpf/progs/bpf_loop.c +++ b/tools/testing/selftests/bpf/progs/bpf_loop.c @@ -138,8 +138,6 @@ static int callback_set_0f(int i, void *ctx) SEC("fentry/" SYS_PREFIX "sys_nanosleep") int prog_non_constant_callback(void *ctx) { - struct callback_ctx data = {}; - if (bpf_get_current_pid_tgid() >> 32 != pid) return 0; diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h index f704885aa534..6e3b4903c541 100644 --- 
a/tools/testing/selftests/bpf/progs/bpf_misc.h +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h @@ -5,12 +5,42 @@ /* This set of attributes controls behavior of the * test_loader.c:test_loader__run_subtests(). * + * The test_loader sequentially loads each program in a skeleton. + * Programs could be loaded in privileged and unprivileged modes. + * - __success, __failure, __msg imply privileged mode; + * - __success_unpriv, __failure_unpriv, __msg_unpriv imply + * unprivileged mode. + * If combination of privileged and unprivileged attributes is present + * both modes are used. If none are present privileged mode is implied. + * + * See test_loader.c:drop_capabilities() for exact set of capabilities + * that differ between privileged and unprivileged modes. + * + * For test filtering purposes the name of the program loaded in + * unprivileged mode is derived from the usual program name by adding + * `@unpriv' suffix. + * * __msg Message expected to be found in the verifier log. * Multiple __msg attributes could be specified. + * __msg_unpriv Same as __msg but for unprivileged mode. * * __success Expect program load success in privileged mode. + * __success_unpriv Expect program load success in unprivileged mode. * * __failure Expect program load failure in privileged mode. + * __failure_unpriv Expect program load failure in unprivileged mode. + * + * __retval Execute the program using BPF_PROG_TEST_RUN command, + * expect return value to match passed parameter: + * - a decimal number + * - a hexadecimal number, when starts from 0x + * - literal INT_MIN + * - literal POINTER_VALUE (see definition below) + * - literal TEST_DATA_LEN (see definition below) + * __retval_unpriv Same, but load program in unprivileged mode. + * + * __description Text to be used instead of a program name for display + * and filtering purposes. * * __log_level Log level to use for the program, numeric value expected. 
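Taken together, these attributes let a single .c file declare the expected verdict, verifier messages and BPF_PROG_TEST_RUN result for both privileged and unprivileged loads. A purely illustrative combination, not taken from the patch and assuming the macros defined in this header, could look like:

SEC("socket")
__description("example: constant return value, checked priv and unpriv")
__success __retval(42)
__success_unpriv __retval_unpriv(42)
__naked void returns_constant(void)
{
	asm volatile (
	"r0 = 42;"
	"exit;"
	::: __clobber_all);
}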
* @@ -27,15 +57,28 @@ #define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" msg))) #define __failure __attribute__((btf_decl_tag("comment:test_expect_failure"))) #define __success __attribute__((btf_decl_tag("comment:test_expect_success"))) +#define __description(desc) __attribute__((btf_decl_tag("comment:test_description=" desc))) +#define __msg_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_msg_unpriv=" msg))) +#define __failure_unpriv __attribute__((btf_decl_tag("comment:test_expect_failure_unpriv"))) +#define __success_unpriv __attribute__((btf_decl_tag("comment:test_expect_success_unpriv"))) #define __log_level(lvl) __attribute__((btf_decl_tag("comment:test_log_level="#lvl))) #define __flag(flag) __attribute__((btf_decl_tag("comment:test_prog_flags="#flag))) +#define __retval(val) __attribute__((btf_decl_tag("comment:test_retval="#val))) +#define __retval_unpriv(val) __attribute__((btf_decl_tag("comment:test_retval_unpriv="#val))) /* Convenience macro for use with 'asm volatile' blocks */ #define __naked __attribute__((naked)) #define __clobber_all "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "memory" #define __clobber_common "r0", "r1", "r2", "r3", "r4", "r5", "memory" #define __imm(name) [name]"i"(name) +#define __imm_const(name, expr) [name]"i"(expr) #define __imm_addr(name) [name]"i"(&name) +#define __imm_ptr(name) [name]"p"(&name) +#define __imm_insn(name, expr) [name]"i"(*(long *)&(expr)) + +/* Magic constants used with __retval() */ +#define POINTER_VALUE 0xcafe4all +#define TEST_DATA_LEN 64 #if defined(__TARGET_ARCH_x86) #define SYSCALL_WRAPPER 1 @@ -75,5 +118,110 @@ #define FUNC_REG_ARG_CNT 5 #endif +/* make it look to compiler like value is read and written */ +#define __sink(expr) asm volatile("" : "+g"(expr)) + +struct bpf_iter_num; + +extern int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end) __ksym; +extern int *bpf_iter_num_next(struct bpf_iter_num *it) __ksym; +extern void bpf_iter_num_destroy(struct bpf_iter_num *it) __ksym; + +#ifndef bpf_for_each +/* bpf_for_each(iter_type, cur_elem, args...) provides generic construct for + * using BPF open-coded iterators without having to write mundane explicit + * low-level loop logic. Instead, it provides for()-like generic construct + * that can be used pretty naturally. E.g., for some hypothetical cgroup + * iterator, you'd write: + * + * struct cgroup *cg, *parent_cg = <...>; + * + * bpf_for_each(cgroup, cg, parent_cg, CG_ITER_CHILDREN) { + * bpf_printk("Child cgroup id = %d", cg->cgroup_id); + * if (cg->cgroup_id == 123) + * break; + * } + * + * I.e., it looks almost like high-level for each loop in other languages, + * supports continue/break, and is verifiable by BPF verifier. + * + * For iterating integers, the difference betwen bpf_for_each(num, i, N, M) + * and bpf_for(i, N, M) is in that bpf_for() provides additional proof to + * verifier that i is in [N, M) range, and in bpf_for_each() case i is `int + * *`, not just `int`. So for integers bpf_for() is more convenient. + * + * Note: this macro relies on C99 feature of allowing to declare variables + * inside for() loop, bound to for() loop lifetime. It also utilizes GCC + * extension: __attribute__((cleanup(<func>))), supported by both GCC and + * Clang. + */ +#define bpf_for_each(type, cur, args...) 
for ( \ + /* initialize and define destructor */ \ + struct bpf_iter_##type ___it __attribute__((aligned(8), /* enforce, just in case */, \ + cleanup(bpf_iter_##type##_destroy))), \ + /* ___p pointer is just to call bpf_iter_##type##_new() *once* to init ___it */ \ + *___p __attribute__((unused)) = ( \ + bpf_iter_##type##_new(&___it, ##args), \ + /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \ + /* for bpf_iter_##type##_destroy() when used from cleanup() attribute */ \ + (void)bpf_iter_##type##_destroy, (void *)0); \ + /* iteration and termination check */ \ + (((cur) = bpf_iter_##type##_next(&___it))); \ +) +#endif /* bpf_for_each */ + +#ifndef bpf_for +/* bpf_for(i, start, end) implements a for()-like looping construct that sets + * provided integer variable *i* to values starting from *start* through, + * but not including, *end*. It also proves to BPF verifier that *i* belongs + * to range [start, end), so this can be used for accessing arrays without + * extra checks. + * + * Note: *start* and *end* are assumed to be expressions with no side effects + * and whose values do not change throughout bpf_for() loop execution. They do + * not have to be statically known or constant, though. + * + * Note: similarly to bpf_for_each(), it relies on C99 feature of declaring for() + * loop bound variables and cleanup attribute, supported by GCC and Clang. + */ +#define bpf_for(i, start, end) for ( \ + /* initialize and define destructor */ \ + struct bpf_iter_num ___it __attribute__((aligned(8), /* enforce, just in case */ \ + cleanup(bpf_iter_num_destroy))), \ + /* ___p pointer is necessary to call bpf_iter_num_new() *once* to init ___it */ \ + *___p __attribute__((unused)) = ( \ + bpf_iter_num_new(&___it, (start), (end)), \ + /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \ + /* for bpf_iter_num_destroy() when used from cleanup() attribute */ \ + (void)bpf_iter_num_destroy, (void *)0); \ + ({ \ + /* iteration step */ \ + int *___t = bpf_iter_num_next(&___it); \ + /* termination and bounds check */ \ + (___t && ((i) = *___t, (i) >= (start) && (i) < (end))); \ + }); \ +) +#endif /* bpf_for */ + +#ifndef bpf_repeat +/* bpf_repeat(N) performs N iterations without exposing iteration number + * + * Note: similarly to bpf_for_each(), it relies on C99 feature of declaring for() + * loop bound variables and cleanup attribute, supported by GCC and Clang. 
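As a small usage sketch (not part of the header itself, and assuming the usual selftest includes; several iters.c tests added later in this patch exercise the same patterns), bpf_for() lets array indexing rely on the proven [start, end) range, while bpf_for_each(num, ...) only hands back a pointer to the current value, which is not range-proven:

int vals[16];

SEC("raw_tp")
int fill_and_sum(const void *ctx)
{
	int i, *v, sum = 0;

	bpf_for(i, 0, 16) {
		vals[i] = i;		/* no extra bounds check: 0 <= i < 16 is proven */
	}

	bpf_for_each(num, v, 0, 16) {
		sum += vals[*v & 0xf];	/* *v is just a scalar, so mask before indexing */
	}

	return sum;
}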
+ */ +#define bpf_repeat(N) for ( \ + /* initialize and define destructor */ \ + struct bpf_iter_num ___it __attribute__((aligned(8), /* enforce, just in case */ \ + cleanup(bpf_iter_num_destroy))), \ + /* ___p pointer is necessary to call bpf_iter_num_new() *once* to init ___it */ \ + *___p __attribute__((unused)) = ( \ + bpf_iter_num_new(&___it, 0, (N)), \ + /* this is a workaround for Clang bug: it currently doesn't emit BTF */ \ + /* for bpf_iter_num_destroy() when used from cleanup() attribute */ \ + (void)bpf_iter_num_destroy, (void *)0); \ + bpf_iter_num_next(&___it); \ + /* nothing here */ \ +) +#endif /* bpf_repeat */ #endif diff --git a/tools/testing/selftests/bpf/progs/cb_refs.c b/tools/testing/selftests/bpf/progs/cb_refs.c index ce96b33e38d6..50f95ec61165 100644 --- a/tools/testing/selftests/bpf/progs/cb_refs.c +++ b/tools/testing/selftests/bpf/progs/cb_refs.c @@ -52,7 +52,6 @@ int leak_prog(void *ctx) { struct prog_test_ref_kfunc *p; struct map_value *v; - unsigned long sl; v = bpf_map_lookup_elem(&array_map, &(int){0}); if (!v) diff --git a/tools/testing/selftests/bpf/progs/cgroup_skb_sk_lookup_kern.c b/tools/testing/selftests/bpf/progs/cgroup_skb_sk_lookup_kern.c index 88638315c582..ac86a8a61605 100644 --- a/tools/testing/selftests/bpf/progs/cgroup_skb_sk_lookup_kern.c +++ b/tools/testing/selftests/bpf/progs/cgroup_skb_sk_lookup_kern.c @@ -66,7 +66,6 @@ static inline int is_allowed_peer_cg(struct __sk_buff *skb, SEC("cgroup_skb/ingress") int ingress_lookup(struct __sk_buff *skb) { - __u32 serv_port_key = 0; struct ipv6hdr ip6h; struct tcphdr tcph; diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h b/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h index d0b7cd0d09d7..22914a70db54 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_common.h @@ -21,10 +21,11 @@ struct hash_map { } __cgrps_kfunc_map SEC(".maps"); struct cgroup *bpf_cgroup_acquire(struct cgroup *p) __ksym; -struct cgroup *bpf_cgroup_kptr_get(struct cgroup **pp) __ksym; void bpf_cgroup_release(struct cgroup *p) __ksym; struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) __ksym; struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym; +void bpf_rcu_read_lock(void) __ksym; +void bpf_rcu_read_unlock(void) __ksym; static inline struct __cgrps_kfunc_map_value *cgrps_kfunc_map_value_lookup(struct cgroup *cgrp) { @@ -61,6 +62,11 @@ static inline int cgrps_kfunc_map_insert(struct cgroup *cgrp) } acquired = bpf_cgroup_acquire(cgrp); + if (!acquired) { + bpf_map_delete_elem(&__cgrps_kfunc_map, &id); + return -ENOENT; + } + old = bpf_kptr_xchg(&v->cgrp, acquired); if (old) { bpf_cgroup_release(old); diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c index b42291ed9586..0fa564a5cc5b 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c @@ -41,6 +41,23 @@ int BPF_PROG(cgrp_kfunc_acquire_untrusted, struct cgroup *cgrp, const char *path /* Can't invoke bpf_cgroup_acquire() on an untrusted pointer. 
*/ acquired = bpf_cgroup_acquire(v->cgrp); + if (acquired) + bpf_cgroup_release(acquired); + + return 0; +} + +SEC("tp_btf/cgroup_mkdir") +__failure __msg("Possibly NULL pointer passed to trusted arg0") +int BPF_PROG(cgrp_kfunc_acquire_no_null_check, struct cgroup *cgrp, const char *path) +{ + struct cgroup *acquired; + + acquired = bpf_cgroup_acquire(cgrp); + /* + * Can't invoke bpf_cgroup_release() without checking the return value + * of bpf_cgroup_acquire(). + */ bpf_cgroup_release(acquired); return 0; @@ -54,7 +71,8 @@ int BPF_PROG(cgrp_kfunc_acquire_fp, struct cgroup *cgrp, const char *path) /* Can't invoke bpf_cgroup_acquire() on a random frame pointer. */ acquired = bpf_cgroup_acquire((struct cgroup *)&stack_cgrp); - bpf_cgroup_release(acquired); + if (acquired) + bpf_cgroup_release(acquired); return 0; } @@ -67,7 +85,8 @@ int BPF_PROG(cgrp_kfunc_acquire_unsafe_kretprobe, struct cgroup *cgrp) /* Can't acquire an untrusted struct cgroup * pointer. */ acquired = bpf_cgroup_acquire(cgrp); - bpf_cgroup_release(acquired); + if (acquired) + bpf_cgroup_release(acquired); return 0; } @@ -80,7 +99,8 @@ int BPF_PROG(cgrp_kfunc_acquire_trusted_walked, struct cgroup *cgrp, const char /* Can't invoke bpf_cgroup_acquire() on a pointer obtained from walking a trusted cgroup. */ acquired = bpf_cgroup_acquire(cgrp->old_dom_cgrp); - bpf_cgroup_release(acquired); + if (acquired) + bpf_cgroup_release(acquired); return 0; } @@ -93,9 +113,8 @@ int BPF_PROG(cgrp_kfunc_acquire_null, struct cgroup *cgrp, const char *path) /* Can't invoke bpf_cgroup_acquire() on a NULL pointer. */ acquired = bpf_cgroup_acquire(NULL); - if (!acquired) - return 0; - bpf_cgroup_release(acquired); + if (acquired) + bpf_cgroup_release(acquired); return 0; } @@ -109,57 +128,7 @@ int BPF_PROG(cgrp_kfunc_acquire_unreleased, struct cgroup *cgrp, const char *pat acquired = bpf_cgroup_acquire(cgrp); /* Acquired cgroup is never released. */ - - return 0; -} - -SEC("tp_btf/cgroup_mkdir") -__failure __msg("arg#0 expected pointer to map value") -int BPF_PROG(cgrp_kfunc_get_non_kptr_param, struct cgroup *cgrp, const char *path) -{ - struct cgroup *kptr; - - /* Cannot use bpf_cgroup_kptr_get() on a non-kptr, even on a valid cgroup. */ - kptr = bpf_cgroup_kptr_get(&cgrp); - if (!kptr) - return 0; - - bpf_cgroup_release(kptr); - - return 0; -} - -SEC("tp_btf/cgroup_mkdir") -__failure __msg("arg#0 expected pointer to map value") -int BPF_PROG(cgrp_kfunc_get_non_kptr_acquired, struct cgroup *cgrp, const char *path) -{ - struct cgroup *kptr, *acquired; - - acquired = bpf_cgroup_acquire(cgrp); - - /* Cannot use bpf_cgroup_kptr_get() on a non-map-value, even if the kptr was acquired. */ - kptr = bpf_cgroup_kptr_get(&acquired); - bpf_cgroup_release(acquired); - if (!kptr) - return 0; - - bpf_cgroup_release(kptr); - - return 0; -} - -SEC("tp_btf/cgroup_mkdir") -__failure __msg("arg#0 expected pointer to map value") -int BPF_PROG(cgrp_kfunc_get_null, struct cgroup *cgrp, const char *path) -{ - struct cgroup *kptr; - - /* Cannot use bpf_cgroup_kptr_get() on a NULL pointer. 
*/ - kptr = bpf_cgroup_kptr_get(NULL); - if (!kptr) - return 0; - - bpf_cgroup_release(kptr); + __sink(acquired); return 0; } @@ -185,8 +154,8 @@ int BPF_PROG(cgrp_kfunc_xchg_unreleased, struct cgroup *cgrp, const char *path) } SEC("tp_btf/cgroup_mkdir") -__failure __msg("Unreleased reference") -int BPF_PROG(cgrp_kfunc_get_unreleased, struct cgroup *cgrp, const char *path) +__failure __msg("must be referenced or trusted") +int BPF_PROG(cgrp_kfunc_rcu_get_release, struct cgroup *cgrp, const char *path) { struct cgroup *kptr; struct __cgrps_kfunc_map_value *v; @@ -195,17 +164,18 @@ int BPF_PROG(cgrp_kfunc_get_unreleased, struct cgroup *cgrp, const char *path) if (!v) return 0; - kptr = bpf_cgroup_kptr_get(&v->cgrp); - if (!kptr) - return 0; - - /* Kptr acquired above is never released. */ + bpf_rcu_read_lock(); + kptr = v->cgrp; + if (kptr) + /* Can't release a cgroup kptr stored in a map. */ + bpf_cgroup_release(kptr); + bpf_rcu_read_unlock(); return 0; } SEC("tp_btf/cgroup_mkdir") -__failure __msg("expects refcounted") +__failure __msg("Possibly NULL pointer passed to trusted arg0") int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path) { struct __cgrps_kfunc_map_value *v; @@ -233,7 +203,7 @@ int BPF_PROG(cgrp_kfunc_release_fp, struct cgroup *cgrp, const char *path) } SEC("tp_btf/cgroup_mkdir") -__failure __msg("arg#0 is ptr_or_null_ expected ptr_ or socket") +__failure __msg("Possibly NULL pointer passed to trusted arg0") int BPF_PROG(cgrp_kfunc_release_null, struct cgroup *cgrp, const char *path) { struct __cgrps_kfunc_map_value local, *v; @@ -255,6 +225,8 @@ int BPF_PROG(cgrp_kfunc_release_null, struct cgroup *cgrp, const char *path) return -ENOENT; acquired = bpf_cgroup_acquire(cgrp); + if (!acquired) + return -ENOENT; old = bpf_kptr_xchg(&v->cgrp, acquired); diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c b/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c index 030aff700084..5354455a01be 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_success.c @@ -38,7 +38,10 @@ int BPF_PROG(test_cgrp_acquire_release_argument, struct cgroup *cgrp, const char return 0; acquired = bpf_cgroup_acquire(cgrp); - bpf_cgroup_release(acquired); + if (!acquired) + err = 1; + else + bpf_cgroup_release(acquired); return 0; } @@ -123,13 +126,11 @@ int BPF_PROG(test_cgrp_get_release, struct cgroup *cgrp, const char *path) return 0; } - kptr = bpf_cgroup_kptr_get(&v->cgrp); - if (!kptr) { + bpf_rcu_read_lock(); + kptr = v->cgrp; + if (!kptr) err = 3; - return 0; - } - - bpf_cgroup_release(kptr); + bpf_rcu_read_unlock(); return 0; } diff --git a/tools/testing/selftests/bpf/progs/cgrp_ls_attach_cgroup.c b/tools/testing/selftests/bpf/progs/cgrp_ls_attach_cgroup.c index 6652d18465b2..8aeba1b75c83 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_ls_attach_cgroup.c +++ b/tools/testing/selftests/bpf/progs/cgrp_ls_attach_cgroup.c @@ -84,7 +84,6 @@ int BPF_PROG(update_cookie_tracing, struct socket *sock, struct sockaddr *uaddr, int addr_len, int flags) { struct socket_cookie *p; - struct tcp_sock *tcp_sk; if (uaddr->sa_family != AF_INET6) return 0; diff --git a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c index 7615dc23d301..4c7844e1dbfa 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c +++ b/tools/testing/selftests/bpf/progs/cgrp_ls_sleepable.c @@ -24,7 +24,6 @@ void bpf_rcu_read_unlock(void) __ksym; 
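The conversions above replace the removed bpf_cgroup_kptr_get() with plain RCU-protected reads of the stashed kptr, and they also show that bpf_cgroup_acquire() can now fail and must be NULL-checked. A condensed sketch of the resulting pattern, with helper and map names assumed from cgrp_kfunc_common.h above rather than copied from any one test, is:

SEC("tp_btf/cgroup_mkdir")
int BPF_PROG(stash_then_peek, struct cgroup *cgrp, const char *path)
{
	struct __cgrps_kfunc_map_value *v;
	struct cgroup *kptr;

	/* acquires a reference (checking for NULL) and stashes it via bpf_kptr_xchg() */
	if (cgrps_kfunc_map_insert(cgrp))
		return 0;

	v = cgrps_kfunc_map_value_lookup(cgrp);
	if (!v)
		return 0;

	bpf_rcu_read_lock();
	kptr = v->cgrp;		/* RCU-protected read; no kptr_get/acquire needed */
	if (kptr)
		bpf_printk("cgroup kptr is currently stashed");
	bpf_rcu_read_unlock();

	return 0;
}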
SEC("?iter.s/cgroup") int cgroup_iter(struct bpf_iter__cgroup *ctx) { - struct seq_file *seq = ctx->meta->seq; struct cgroup *cgrp = ctx->cgroup; long *ptr; diff --git a/tools/testing/selftests/bpf/progs/connect4_prog.c b/tools/testing/selftests/bpf/progs/connect4_prog.c index ec25371de789..7ef49ec04838 100644 --- a/tools/testing/selftests/bpf/progs/connect4_prog.c +++ b/tools/testing/selftests/bpf/progs/connect4_prog.c @@ -32,7 +32,7 @@ #define IFNAMSIZ 16 #endif -__attribute__ ((noinline)) +__attribute__ ((noinline)) __weak int do_bind(struct bpf_sock_addr *ctx) { struct sockaddr_in sa = {}; diff --git a/tools/testing/selftests/bpf/progs/core_kern.c b/tools/testing/selftests/bpf/progs/core_kern.c index 2715fe27d4cf..004f2acef2eb 100644 --- a/tools/testing/selftests/bpf/progs/core_kern.c +++ b/tools/testing/selftests/bpf/progs/core_kern.c @@ -77,7 +77,7 @@ int balancer_ingress(struct __sk_buff *ctx) void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; void *ptr; - int ret = 0, nh_off, i = 0; + int nh_off, i = 0; nh_off = 14; diff --git a/tools/testing/selftests/bpf/progs/cpumask_common.h b/tools/testing/selftests/bpf/progs/cpumask_common.h index 65e5496ca1b2..0c5b785a93e4 100644 --- a/tools/testing/selftests/bpf/progs/cpumask_common.h +++ b/tools/testing/selftests/bpf/progs/cpumask_common.h @@ -9,6 +9,9 @@ int err; +#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8))) +private(MASK) static struct bpf_cpumask __kptr * global_mask; + struct __cpumask_map_value { struct bpf_cpumask __kptr * cpumask; }; @@ -23,7 +26,6 @@ struct array_map { struct bpf_cpumask *bpf_cpumask_create(void) __ksym; void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym; struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) __ksym; -struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumask) __ksym; u32 bpf_cpumask_first(const struct cpumask *cpumask) __ksym; u32 bpf_cpumask_first_zero(const struct cpumask *cpumask) __ksym; void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) __ksym; @@ -51,6 +53,9 @@ void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src) __ksym u32 bpf_cpumask_any(const struct cpumask *src) __ksym; u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2) __ksym; +void bpf_rcu_read_lock(void) __ksym; +void bpf_rcu_read_unlock(void) __ksym; + static inline const struct cpumask *cast(struct bpf_cpumask *cpumask) { return (const struct cpumask *)cpumask; diff --git a/tools/testing/selftests/bpf/progs/cpumask_failure.c b/tools/testing/selftests/bpf/progs/cpumask_failure.c index c16f7563b84e..a9bf6ea336cf 100644 --- a/tools/testing/selftests/bpf/progs/cpumask_failure.c +++ b/tools/testing/selftests/bpf/progs/cpumask_failure.c @@ -23,6 +23,7 @@ int BPF_PROG(test_alloc_no_release, struct task_struct *task, u64 clone_flags) struct bpf_cpumask *cpumask; cpumask = create_cpumask(); + __sink(cpumask); /* cpumask is never released. */ return 0; @@ -51,6 +52,7 @@ int BPF_PROG(test_acquire_wrong_cpumask, struct task_struct *task, u64 clone_fla /* Can't acquire a non-struct bpf_cpumask. */ cpumask = bpf_cpumask_acquire((struct bpf_cpumask *)task->cpus_ptr); + __sink(cpumask); return 0; } @@ -63,6 +65,7 @@ int BPF_PROG(test_mutate_cpumask, struct task_struct *task, u64 clone_flags) /* Can't set the CPU of a non-struct bpf_cpumask. 
*/ bpf_cpumask_set_cpu(0, (struct bpf_cpumask *)task->cpus_ptr); + __sink(cpumask); return 0; } @@ -92,35 +95,98 @@ int BPF_PROG(test_insert_remove_no_release, struct task_struct *task, u64 clone_ } SEC("tp_btf/task_newtask") -__failure __msg("Unreleased reference") -int BPF_PROG(test_kptr_get_no_release, struct task_struct *task, u64 clone_flags) +__failure __msg("NULL pointer passed to trusted arg0") +int BPF_PROG(test_cpumask_null, struct task_struct *task, u64 clone_flags) { - struct bpf_cpumask *cpumask; - struct __cpumask_map_value *v; + /* NULL passed to KF_TRUSTED_ARGS kfunc. */ + bpf_cpumask_empty(NULL); - cpumask = create_cpumask(); - if (!cpumask) + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("R2 must be a rcu pointer") +int BPF_PROG(test_global_mask_out_of_rcu, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *local, *prev; + + local = create_cpumask(); + if (!local) return 0; - if (cpumask_map_insert(cpumask)) + prev = bpf_kptr_xchg(&global_mask, local); + if (prev) { + bpf_cpumask_release(prev); + err = 3; return 0; + } - v = cpumask_map_value_lookup(); - if (!v) + bpf_rcu_read_lock(); + local = global_mask; + if (!local) { + err = 4; + bpf_rcu_read_unlock(); return 0; + } - cpumask = bpf_cpumask_kptr_get(&v->cpumask); + bpf_rcu_read_unlock(); + + /* RCU region is exited before calling KF_RCU kfunc. */ + + bpf_cpumask_test_cpu(0, (const struct cpumask *)local); - /* cpumask is never released. */ return 0; } SEC("tp_btf/task_newtask") -__failure __msg("NULL pointer passed to trusted arg0") -int BPF_PROG(test_cpumask_null, struct task_struct *task, u64 clone_flags) +__failure __msg("NULL pointer passed to trusted arg1") +int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone_flags) { - /* NULL passed to KF_TRUSTED_ARGS kfunc. */ - bpf_cpumask_empty(NULL); + struct bpf_cpumask *local, *prev; + + local = create_cpumask(); + if (!local) + return 0; + + prev = bpf_kptr_xchg(&global_mask, local); + if (prev) { + bpf_cpumask_release(prev); + err = 3; + return 0; + } + + bpf_rcu_read_lock(); + local = global_mask; + + /* No NULL check is performed on global cpumask kptr. 
*/ + bpf_cpumask_test_cpu(0, (const struct cpumask *)local); + + bpf_rcu_read_unlock(); + + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("Possibly NULL pointer passed to helper arg2") +int BPF_PROG(test_global_mask_rcu_no_null_check, struct task_struct *task, u64 clone_flags) +{ + struct bpf_cpumask *prev, *curr; + + curr = bpf_cpumask_create(); + if (!curr) + return 0; + + prev = bpf_kptr_xchg(&global_mask, curr); + if (prev) + bpf_cpumask_release(prev); + + bpf_rcu_read_lock(); + curr = global_mask; + /* PTR_TO_BTF_ID | PTR_MAYBE_NULL | MEM_RCU passed to bpf_kptr_xchg() */ + prev = bpf_kptr_xchg(&global_mask, curr); + bpf_rcu_read_unlock(); + if (prev) + bpf_cpumask_release(prev); return 0; } diff --git a/tools/testing/selftests/bpf/progs/cpumask_success.c b/tools/testing/selftests/bpf/progs/cpumask_success.c index 1d38bc65d4b0..2fcdd7f68ac7 100644 --- a/tools/testing/selftests/bpf/progs/cpumask_success.c +++ b/tools/testing/selftests/bpf/progs/cpumask_success.c @@ -353,7 +353,6 @@ SEC("tp_btf/task_newtask") int BPF_PROG(test_insert_leave, struct task_struct *task, u64 clone_flags) { struct bpf_cpumask *cpumask; - struct __cpumask_map_value *v; cpumask = create_cpumask(); if (!cpumask) @@ -396,31 +395,34 @@ int BPF_PROG(test_insert_remove_release, struct task_struct *task, u64 clone_fla } SEC("tp_btf/task_newtask") -int BPF_PROG(test_insert_kptr_get_release, struct task_struct *task, u64 clone_flags) +int BPF_PROG(test_global_mask_rcu, struct task_struct *task, u64 clone_flags) { - struct bpf_cpumask *cpumask; - struct __cpumask_map_value *v; + struct bpf_cpumask *local, *prev; - cpumask = create_cpumask(); - if (!cpumask) + if (!is_test_task()) return 0; - if (cpumask_map_insert(cpumask)) { + local = create_cpumask(); + if (!local) + return 0; + + prev = bpf_kptr_xchg(&global_mask, local); + if (prev) { + bpf_cpumask_release(prev); err = 3; return 0; } - v = cpumask_map_value_lookup(); - if (!v) { + bpf_rcu_read_lock(); + local = global_mask; + if (!local) { err = 4; + bpf_rcu_read_unlock(); return 0; } - cpumask = bpf_cpumask_kptr_get(&v->cpumask); - if (cpumask) - bpf_cpumask_release(cpumask); - else - err = 5; + bpf_cpumask_test_cpu(0, (const struct cpumask *)local); + bpf_rcu_read_unlock(); return 0; } diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index 20ce920d891d..759eb5c245cd 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -271,7 +271,7 @@ SEC("?raw_tp") __failure __msg("value is outside of the allowed memory range") int data_slice_out_of_bounds_map_value(void *ctx) { - __u32 key = 0, map_val; + __u32 map_val; struct bpf_dynptr ptr; void *data; @@ -388,7 +388,6 @@ int data_slice_missing_null_check2(void *ctx) /* this should fail */ *data2 = 3; -done: bpf_ringbuf_discard_dynptr(&ptr, 0); return 0; } @@ -440,6 +439,7 @@ int invalid_write1(void *ctx) /* this should fail */ data = bpf_dynptr_data(&ptr, 0, 1); + __sink(data); return 0; } @@ -1374,6 +1374,7 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb) * changing packet data */ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer)); + __sink(hdr); return 0; } diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c index c8358a7c7924..b2fa6c47ecc0 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_success.c +++ b/tools/testing/selftests/bpf/progs/dynptr_success.c @@ -35,7 +35,7 @@ 
SEC("?tp/syscalls/sys_enter_nanosleep") int test_read_write(void *ctx) { char write_data[64] = "hello there, world!!"; - char read_data[64] = {}, buf[64] = {}; + char read_data[64] = {}; struct bpf_dynptr ptr; int i; @@ -170,7 +170,6 @@ int test_skb_readonly(struct __sk_buff *skb) { __u8 write_data[2] = {1, 2}; struct bpf_dynptr ptr; - __u64 *data; int ret; if (bpf_dynptr_from_skb(skb, 0, &ptr)) { @@ -191,10 +190,8 @@ int test_skb_readonly(struct __sk_buff *skb) SEC("?cgroup_skb/egress") int test_dynptr_skb_data(struct __sk_buff *skb) { - __u8 write_data[2] = {1, 2}; struct bpf_dynptr ptr; __u64 *data; - int ret; if (bpf_dynptr_from_skb(skb, 0, &ptr)) { err = 1; diff --git a/tools/testing/selftests/bpf/progs/err.h b/tools/testing/selftests/bpf/progs/err.h new file mode 100644 index 000000000000..d66d283d9e59 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/err.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ERR_H__ +#define __ERR_H__ + +#define MAX_ERRNO 4095 +#define IS_ERR_VALUE(x) (unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO + +static inline int IS_ERR_OR_NULL(const void *ptr) +{ + return !ptr || IS_ERR_VALUE((unsigned long)ptr); +} + +static inline long PTR_ERR(const void *ptr) +{ + return (long) ptr; +} + +#endif /* __ERR_H__ */ diff --git a/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c b/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c index 4547b059d487..983b7c233382 100644 --- a/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c +++ b/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c @@ -120,8 +120,6 @@ int new_get_skb_ifindex(int val, struct __sk_buff *skb, int var) void *data = (void *)(long)skb->data; struct ipv6hdr ip6, *ip6p; int ifindex = skb->ifindex; - __u32 eth_proto; - __u32 nh_off; /* check that BPF extension can read packet via direct packet access */ if (data + 14 + sizeof(ip6) > data_end) diff --git a/tools/testing/selftests/bpf/progs/freplace_attach_probe.c b/tools/testing/selftests/bpf/progs/freplace_attach_probe.c index bb2a77c5b62b..370a0e1922e0 100644 --- a/tools/testing/selftests/bpf/progs/freplace_attach_probe.c +++ b/tools/testing/selftests/bpf/progs/freplace_attach_probe.c @@ -23,7 +23,7 @@ struct { SEC("freplace/handle_kprobe") int new_handle_kprobe(struct pt_regs *ctx) { - struct hmap_elem zero = {}, *val; + struct hmap_elem *val; int key = 0; val = bpf_map_lookup_elem(&hash_map, &key); diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c new file mode 100644 index 000000000000..6b9b3c56f009 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/iters.c @@ -0,0 +1,719 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include <stdbool.h> +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) + +static volatile int zero = 0; + +int my_pid; +int arr[256]; +int small_arr[16] SEC(".data.small_arr"); + +#ifdef REAL_TEST +#define MY_PID_GUARD() if (my_pid != (bpf_get_current_pid_tgid() >> 32)) return 0 +#else +#define MY_PID_GUARD() ({ }) +#endif + +SEC("?raw_tp") +__failure __msg("math between map_value pointer and register with unbounded min value is not allowed") +int iter_err_unsafe_c_loop(const void *ctx) +{ + struct bpf_iter_num it; + int *v, i = zero; /* obscure initial value of i */ + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 1000); + while ((v = bpf_iter_num_next(&it))) { + i++; + } + bpf_iter_num_destroy(&it); + + small_arr[i] = 123; /* invalid */ + + return 0; +} + +SEC("?raw_tp") +__failure __msg("unbounded memory access") +int iter_err_unsafe_asm_loop(const void *ctx) +{ + struct bpf_iter_num it; + + MY_PID_GUARD(); + + asm volatile ( + "r6 = %[zero];" /* iteration counter */ + "r1 = %[it];" /* iterator state */ + "r2 = 0;" + "r3 = 1000;" + "r4 = 1;" + "call %[bpf_iter_num_new];" + "loop:" + "r1 = %[it];" + "call %[bpf_iter_num_next];" + "if r0 == 0 goto out;" + "r6 += 1;" + "goto loop;" + "out:" + "r1 = %[it];" + "call %[bpf_iter_num_destroy];" + "r1 = %[small_arr];" + "r2 = r6;" + "r2 <<= 2;" + "r1 += r2;" + "*(u32 *)(r1 + 0) = r6;" /* invalid */ + : + : [it]"r"(&it), + [small_arr]"p"(small_arr), + [zero]"p"(zero), + __imm(bpf_iter_num_new), + __imm(bpf_iter_num_next), + __imm(bpf_iter_num_destroy) + : __clobber_common, "r6" + ); + + return 0; +} + +SEC("raw_tp") +__success +int iter_while_loop(const void *ctx) +{ + struct bpf_iter_num it; + int *v; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 3); + while ((v = bpf_iter_num_next(&it))) { + bpf_printk("ITER_BASIC: E1 VAL: v=%d", *v); + } + bpf_iter_num_destroy(&it); + + return 0; +} + +SEC("raw_tp") +__success +int iter_while_loop_auto_cleanup(const void *ctx) +{ + __attribute__((cleanup(bpf_iter_num_destroy))) struct bpf_iter_num it; + int *v; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 3); + while ((v = bpf_iter_num_next(&it))) { + bpf_printk("ITER_BASIC: E1 VAL: v=%d", *v); + } + /* (!) no explicit bpf_iter_num_destroy() */ + + return 0; +} + +SEC("raw_tp") +__success +int iter_for_loop(const void *ctx) +{ + struct bpf_iter_num it; + int *v; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 5, 10); + for (v = bpf_iter_num_next(&it); v; v = bpf_iter_num_next(&it)) { + bpf_printk("ITER_BASIC: E2 VAL: v=%d", *v); + } + bpf_iter_num_destroy(&it); + + return 0; +} + +SEC("raw_tp") +__success +int iter_bpf_for_each_macro(const void *ctx) +{ + int *v; + + MY_PID_GUARD(); + + bpf_for_each(num, v, 5, 10) { + bpf_printk("ITER_BASIC: E2 VAL: v=%d", *v); + } + + return 0; +} + +SEC("raw_tp") +__success +int iter_bpf_for_macro(const void *ctx) +{ + int i; + + MY_PID_GUARD(); + + bpf_for(i, 5, 10) { + bpf_printk("ITER_BASIC: E2 VAL: v=%d", i); + } + + return 0; +} + +SEC("raw_tp") +__success +int iter_pragma_unroll_loop(const void *ctx) +{ + struct bpf_iter_num it; + int *v, i; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 2); +#pragma nounroll + for (i = 0; i < 3; i++) { + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E3 VAL: i=%d v=%d", i, v ? 
*v : -1); + } + bpf_iter_num_destroy(&it); + + return 0; +} + +SEC("raw_tp") +__success +int iter_manual_unroll_loop(const void *ctx) +{ + struct bpf_iter_num it; + int *v; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 100, 200); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d", v ? *v : -1); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d", v ? *v : -1); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d", v ? *v : -1); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d\n", v ? *v : -1); + bpf_iter_num_destroy(&it); + + return 0; +} + +SEC("raw_tp") +__success +int iter_multiple_sequential_loops(const void *ctx) +{ + struct bpf_iter_num it; + int *v, i; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 3); + while ((v = bpf_iter_num_next(&it))) { + bpf_printk("ITER_BASIC: E1 VAL: v=%d", *v); + } + bpf_iter_num_destroy(&it); + + bpf_iter_num_new(&it, 5, 10); + for (v = bpf_iter_num_next(&it); v; v = bpf_iter_num_next(&it)) { + bpf_printk("ITER_BASIC: E2 VAL: v=%d", *v); + } + bpf_iter_num_destroy(&it); + + bpf_iter_num_new(&it, 0, 2); +#pragma nounroll + for (i = 0; i < 3; i++) { + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E3 VAL: i=%d v=%d", i, v ? *v : -1); + } + bpf_iter_num_destroy(&it); + + bpf_iter_num_new(&it, 100, 200); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d", v ? *v : -1); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d", v ? *v : -1); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d", v ? *v : -1); + v = bpf_iter_num_next(&it); + bpf_printk("ITER_BASIC: E4 VAL: v=%d\n", v ? *v : -1); + bpf_iter_num_destroy(&it); + + return 0; +} + +SEC("raw_tp") +__success +int iter_limit_cond_break_loop(const void *ctx) +{ + struct bpf_iter_num it; + int *v, i = 0, sum = 0; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 10); + while ((v = bpf_iter_num_next(&it))) { + bpf_printk("ITER_SIMPLE: i=%d v=%d", i, *v); + sum += *v; + + i++; + if (i > 3) + break; + } + bpf_iter_num_destroy(&it); + + bpf_printk("ITER_SIMPLE: sum=%d\n", sum); + + return 0; +} + +SEC("raw_tp") +__success +int iter_obfuscate_counter(const void *ctx) +{ + struct bpf_iter_num it; + int *v, sum = 0; + /* Make i's initial value unknowable for verifier to prevent it from + * pruning if/else branch inside the loop body and marking i as precise. + */ + int i = zero; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 10); + while ((v = bpf_iter_num_next(&it))) { + int x; + + i += 1; + + /* If we initialized i as `int i = 0;` above, verifier would + * track that i becomes 1 on first iteration after increment + * above, and here verifier would eagerly prune else branch + * and mark i as precise, ruining open-coded iterator logic + * completely, as each next iteration would have a different + * *precise* value of i, and thus there would be no + * convergence of state. This would result in reaching maximum + * instruction limit, no matter what the limit is. 
+ */ + if (i == 1) + x = 123; + else + x = i * 3 + 1; + + bpf_printk("ITER_OBFUSCATE_COUNTER: i=%d v=%d x=%d", i, *v, x); + + sum += x; + } + bpf_iter_num_destroy(&it); + + bpf_printk("ITER_OBFUSCATE_COUNTER: sum=%d\n", sum); + + return 0; +} + +SEC("raw_tp") +__success +int iter_search_loop(const void *ctx) +{ + struct bpf_iter_num it; + int *v, *elem = NULL; + bool found = false; + + MY_PID_GUARD(); + + bpf_iter_num_new(&it, 0, 10); + + while ((v = bpf_iter_num_next(&it))) { + bpf_printk("ITER_SEARCH_LOOP: v=%d", *v); + + if (*v == 2) { + found = true; + elem = v; + barrier_var(elem); + } + } + + /* should fail to verify if bpf_iter_num_destroy() is here */ + + if (found) + /* here found element will be wrong, we should have copied + * value to a variable, but here we want to make sure we can + * access memory after the loop anyways + */ + bpf_printk("ITER_SEARCH_LOOP: FOUND IT = %d!\n", *elem); + else + bpf_printk("ITER_SEARCH_LOOP: NOT FOUND IT!\n"); + + bpf_iter_num_destroy(&it); + + return 0; +} + +SEC("raw_tp") +__success +int iter_array_fill(const void *ctx) +{ + int sum, i; + + MY_PID_GUARD(); + + bpf_for(i, 0, ARRAY_SIZE(arr)) { + arr[i] = i * 2; + } + + sum = 0; + bpf_for(i, 0, ARRAY_SIZE(arr)) { + sum += arr[i]; + } + + bpf_printk("ITER_ARRAY_FILL: sum=%d (should be %d)\n", sum, 255 * 256); + + return 0; +} + +static int arr2d[4][5]; +static int arr2d_row_sums[4]; +static int arr2d_col_sums[5]; + +SEC("raw_tp") +__success +int iter_nested_iters(const void *ctx) +{ + int sum, row, col; + + MY_PID_GUARD(); + + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + bpf_for( col, 0, ARRAY_SIZE(arr2d[0])) { + arr2d[row][col] = row * col; + } + } + + /* zero-initialize sums */ + sum = 0; + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + arr2d_row_sums[row] = 0; + } + bpf_for(col, 0, ARRAY_SIZE(arr2d[0])) { + arr2d_col_sums[col] = 0; + } + + /* calculate sums */ + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + bpf_for(col, 0, ARRAY_SIZE(arr2d[0])) { + sum += arr2d[row][col]; + arr2d_row_sums[row] += arr2d[row][col]; + arr2d_col_sums[col] += arr2d[row][col]; + } + } + + bpf_printk("ITER_NESTED_ITERS: total sum=%d", sum); + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + bpf_printk("ITER_NESTED_ITERS: row #%d sum=%d", row, arr2d_row_sums[row]); + } + bpf_for(col, 0, ARRAY_SIZE(arr2d[0])) { + bpf_printk("ITER_NESTED_ITERS: col #%d sum=%d%s", + col, arr2d_col_sums[col], + col == ARRAY_SIZE(arr2d[0]) - 1 ? 
"\n" : ""); + } + + return 0; +} + +SEC("raw_tp") +__success +int iter_nested_deeply_iters(const void *ctx) +{ + int sum = 0; + + MY_PID_GUARD(); + + bpf_repeat(10) { + bpf_repeat(10) { + bpf_repeat(10) { + bpf_repeat(10) { + bpf_repeat(10) { + sum += 1; + } + } + } + } + /* validate that we can break from inside bpf_repeat() */ + break; + } + + return sum; +} + +static __noinline void fill_inner_dimension(int row) +{ + int col; + + bpf_for(col, 0, ARRAY_SIZE(arr2d[0])) { + arr2d[row][col] = row * col; + } +} + +static __noinline int sum_inner_dimension(int row) +{ + int sum = 0, col; + + bpf_for(col, 0, ARRAY_SIZE(arr2d[0])) { + sum += arr2d[row][col]; + arr2d_row_sums[row] += arr2d[row][col]; + arr2d_col_sums[col] += arr2d[row][col]; + } + + return sum; +} + +SEC("raw_tp") +__success +int iter_subprog_iters(const void *ctx) +{ + int sum, row, col; + + MY_PID_GUARD(); + + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + fill_inner_dimension(row); + } + + /* zero-initialize sums */ + sum = 0; + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + arr2d_row_sums[row] = 0; + } + bpf_for(col, 0, ARRAY_SIZE(arr2d[0])) { + arr2d_col_sums[col] = 0; + } + + /* calculate sums */ + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + sum += sum_inner_dimension(row); + } + + bpf_printk("ITER_SUBPROG_ITERS: total sum=%d", sum); + bpf_for(row, 0, ARRAY_SIZE(arr2d)) { + bpf_printk("ITER_SUBPROG_ITERS: row #%d sum=%d", + row, arr2d_row_sums[row]); + } + bpf_for(col, 0, ARRAY_SIZE(arr2d[0])) { + bpf_printk("ITER_SUBPROG_ITERS: col #%d sum=%d%s", + col, arr2d_col_sums[col], + col == ARRAY_SIZE(arr2d[0]) - 1 ? "\n" : ""); + } + + return 0; +} + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, int); + __uint(max_entries, 1000); +} arr_map SEC(".maps"); + +SEC("?raw_tp") +__failure __msg("invalid mem access 'scalar'") +int iter_err_too_permissive1(const void *ctx) +{ + int *map_val = NULL; + int key = 0; + + MY_PID_GUARD(); + + map_val = bpf_map_lookup_elem(&arr_map, &key); + if (!map_val) + return 0; + + bpf_repeat(1000000) { + map_val = NULL; + } + + *map_val = 123; + + return 0; +} + +SEC("?raw_tp") +__failure __msg("invalid mem access 'map_value_or_null'") +int iter_err_too_permissive2(const void *ctx) +{ + int *map_val = NULL; + int key = 0; + + MY_PID_GUARD(); + + map_val = bpf_map_lookup_elem(&arr_map, &key); + if (!map_val) + return 0; + + bpf_repeat(1000000) { + map_val = bpf_map_lookup_elem(&arr_map, &key); + } + + *map_val = 123; + + return 0; +} + +SEC("?raw_tp") +__failure __msg("invalid mem access 'map_value_or_null'") +int iter_err_too_permissive3(const void *ctx) +{ + int *map_val = NULL; + int key = 0; + bool found = false; + + MY_PID_GUARD(); + + bpf_repeat(1000000) { + map_val = bpf_map_lookup_elem(&arr_map, &key); + found = true; + } + + if (found) + *map_val = 123; + + return 0; +} + +SEC("raw_tp") +__success +int iter_tricky_but_fine(const void *ctx) +{ + int *map_val = NULL; + int key = 0; + bool found = false; + + MY_PID_GUARD(); + + bpf_repeat(1000000) { + map_val = bpf_map_lookup_elem(&arr_map, &key); + if (map_val) { + found = true; + break; + } + } + + if (found) + *map_val = 123; + + return 0; +} + +#define __bpf_memzero(p, sz) bpf_probe_read_kernel((p), (sz), 0) + +SEC("raw_tp") +__success +int iter_stack_array_loop(const void *ctx) +{ + long arr1[16], arr2[16], sum = 0; + int i; + + MY_PID_GUARD(); + + /* zero-init arr1 and arr2 in such a way that verifier doesn't know + * it's all zeros; if we don't do that, we'll make BPF verifier track + * all combination of zero/non-zero stack 
slots for arr1/arr2, which + * will lead to O(2^(ARRAY_SIZE(arr1)+ARRAY_SIZE(arr2))) different + * states + */ + __bpf_memzero(arr1, sizeof(arr1)); + __bpf_memzero(arr2, sizeof(arr1)); + + /* validate that we can break and continue when using bpf_for() */ + bpf_for(i, 0, ARRAY_SIZE(arr1)) { + if (i & 1) { + arr1[i] = i; + continue; + } else { + arr2[i] = i; + break; + } + } + + bpf_for(i, 0, ARRAY_SIZE(arr1)) { + sum += arr1[i] + arr2[i]; + } + + return sum; +} + +static __noinline void fill(struct bpf_iter_num *it, int *arr, __u32 n, int mul) +{ + int *t, i; + + while ((t = bpf_iter_num_next(it))) { + i = *t; + if (i >= n) + break; + arr[i] = i * mul; + } +} + +static __noinline int sum(struct bpf_iter_num *it, int *arr, __u32 n) +{ + int *t, i, sum = 0;; + + while ((t = bpf_iter_num_next(it))) { + i = *t; + if (i >= n) + break; + sum += arr[i]; + } + + return sum; +} + +SEC("raw_tp") +__success +int iter_pass_iter_ptr_to_subprog(const void *ctx) +{ + int arr1[16], arr2[32]; + struct bpf_iter_num it; + int n, sum1, sum2; + + MY_PID_GUARD(); + + /* fill arr1 */ + n = ARRAY_SIZE(arr1); + bpf_iter_num_new(&it, 0, n); + fill(&it, arr1, n, 2); + bpf_iter_num_destroy(&it); + + /* fill arr2 */ + n = ARRAY_SIZE(arr2); + bpf_iter_num_new(&it, 0, n); + fill(&it, arr2, n, 10); + bpf_iter_num_destroy(&it); + + /* sum arr1 */ + n = ARRAY_SIZE(arr1); + bpf_iter_num_new(&it, 0, n); + sum1 = sum(&it, arr1, n); + bpf_iter_num_destroy(&it); + + /* sum arr2 */ + n = ARRAY_SIZE(arr2); + bpf_iter_num_new(&it, 0, n); + sum2 = sum(&it, arr2, n); + bpf_iter_num_destroy(&it); + + bpf_printk("sum1=%d, sum2=%d", sum1, sum2); + + return 0; +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/iters_looping.c b/tools/testing/selftests/bpf/progs/iters_looping.c new file mode 100644 index 000000000000..05fa5ce7fc59 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/iters_looping.c @@ -0,0 +1,163 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include <errno.h> +#include <string.h> +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +char _license[] SEC("license") = "GPL"; + +#define ITER_HELPERS \ + __imm(bpf_iter_num_new), \ + __imm(bpf_iter_num_next), \ + __imm(bpf_iter_num_destroy) + +SEC("?raw_tp") +__success +int force_clang_to_emit_btf_for_externs(void *ctx) +{ + /* we need this as a workaround to enforce compiler emitting BTF + * information for bpf_iter_num_{new,next,destroy}() kfuncs, + * as, apparently, it doesn't emit it for symbols only referenced from + * assembly (or cleanup attribute, for that matter, as well) + */ + bpf_repeat(0); + + return 0; +} + +SEC("?raw_tp") +__success +int consume_first_item_only(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* consume first item */ + "r1 = %[iter];" + "call %[bpf_iter_num_next];" + + "if r0 == 0 goto +1;" + "r0 = *(u32 *)(r0 + 0);" + + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("R0 invalid mem access 'scalar'") +int missing_null_check_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* consume first element */ + "r1 = %[iter];" + "call %[bpf_iter_num_next];" + + /* FAIL: deref with no NULL check */ + "r1 = *(u32 *)(r0 + 0);" + + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure +__msg("invalid access to memory, mem_size=4 off=0 size=8") +__msg("R0 min value is outside of the allowed memory range") +int wrong_sized_read_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* consume first element */ + "r1 = %[iter];" + "call %[bpf_iter_num_next];" + + "if r0 == 0 goto +1;" + /* FAIL: deref more than available 4 bytes */ + "r0 = *(u64 *)(r0 + 0);" + + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__success __log_level(2) +__flag(BPF_F_TEST_STATE_FREQ) +int simplest_loop(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + "r6 = 0;" /* init sum */ + + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 10;" + "call %[bpf_iter_num_new];" + + "1:" + /* consume next item */ + "r1 = %[iter];" + "call %[bpf_iter_num_next];" + + "if r0 == 0 goto 2f;" + "r0 = *(u32 *)(r0 + 0);" + "r6 += r0;" /* accumulate sum */ + "goto 1b;" + + "2:" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common, "r6" + ); + + return 0; +} diff --git a/tools/testing/selftests/bpf/progs/iters_num.c b/tools/testing/selftests/bpf/progs/iters_num.c new file mode 100644 index 000000000000..7a77a8daee0d --- /dev/null +++ b/tools/testing/selftests/bpf/progs/iters_num.c @@ -0,0 +1,242 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include <limits.h> +#include <linux/errno.h> +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +const volatile __s64 exp_empty_zero = 0 + 1; +__s64 res_empty_zero; + +SEC("raw_tp/sys_enter") +int num_empty_zero(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, 0, 0) sum += i; + res_empty_zero = 1 + sum; + + return 0; +} + +const volatile __s64 exp_empty_int_min = 0 + 2; +__s64 res_empty_int_min; + +SEC("raw_tp/sys_enter") +int num_empty_int_min(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, INT_MIN, INT_MIN) sum += i; + res_empty_int_min = 2 + sum; + + return 0; +} + +const volatile __s64 exp_empty_int_max = 0 + 3; +__s64 res_empty_int_max; + +SEC("raw_tp/sys_enter") +int num_empty_int_max(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, INT_MAX, INT_MAX) sum += i; + res_empty_int_max = 3 + sum; + + return 0; +} + +const volatile __s64 exp_empty_minus_one = 0 + 4; +__s64 res_empty_minus_one; + +SEC("raw_tp/sys_enter") +int num_empty_minus_one(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, -1, -1) sum += i; + res_empty_minus_one = 4 + sum; + + return 0; +} + +const volatile __s64 exp_simple_sum = 9 * 10 / 2; +__s64 res_simple_sum; + +SEC("raw_tp/sys_enter") +int num_simple_sum(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, 0, 10) sum += i; + res_simple_sum = sum; + + return 0; +} + +const volatile __s64 exp_neg_sum = -11 * 10 / 2; +__s64 res_neg_sum; + +SEC("raw_tp/sys_enter") +int num_neg_sum(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, -10, 0) sum += i; + res_neg_sum = sum; + + return 0; +} + +const volatile __s64 exp_very_neg_sum = INT_MIN + (__s64)(INT_MIN + 1); +__s64 res_very_neg_sum; + +SEC("raw_tp/sys_enter") +int num_very_neg_sum(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, INT_MIN, INT_MIN + 2) sum += i; + res_very_neg_sum = sum; + + return 0; +} + +const volatile __s64 exp_very_big_sum = (__s64)(INT_MAX - 1) + (__s64)(INT_MAX - 2); +__s64 res_very_big_sum; + +SEC("raw_tp/sys_enter") +int num_very_big_sum(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, INT_MAX - 2, INT_MAX) sum += i; + res_very_big_sum = sum; + + return 0; +} + +const volatile __s64 exp_neg_pos_sum = -3; +__s64 res_neg_pos_sum; + +SEC("raw_tp/sys_enter") +int num_neg_pos_sum(const void *ctx) +{ + __s64 sum = 0, i; + + bpf_for(i, -3, 3) sum += i; + res_neg_pos_sum = sum; + + return 0; +} + +const volatile __s64 exp_invalid_range = -EINVAL; +__s64 res_invalid_range; + +SEC("raw_tp/sys_enter") +int num_invalid_range(const void *ctx) +{ + struct bpf_iter_num it; + + res_invalid_range = bpf_iter_num_new(&it, 1, 0); + bpf_iter_num_destroy(&it); + + return 0; +} + +const volatile __s64 exp_max_range = 0 + 10; +__s64 res_max_range; + +SEC("raw_tp/sys_enter") +int num_max_range(const void *ctx) +{ + struct bpf_iter_num it; + + res_max_range = 10 + bpf_iter_num_new(&it, 0, BPF_MAX_LOOPS); + bpf_iter_num_destroy(&it); + + return 0; +} + +const volatile __s64 exp_e2big_range = -E2BIG; +__s64 res_e2big_range; + +SEC("raw_tp/sys_enter") +int num_e2big_range(const void *ctx) +{ + struct bpf_iter_num it; + + res_e2big_range = bpf_iter_num_new(&it, -1, BPF_MAX_LOOPS); + bpf_iter_num_destroy(&it); + + return 0; +} + +const volatile __s64 exp_succ_elem_cnt = 10; +__s64 res_succ_elem_cnt; + +SEC("raw_tp/sys_enter") +int num_succ_elem_cnt(const void *ctx) +{ + struct bpf_iter_num it; + int cnt = 0, *v; + + bpf_iter_num_new(&it, 0, 10); + while ((v = bpf_iter_num_next(&it))) { + cnt++; + } + bpf_iter_num_destroy(&it); + + res_succ_elem_cnt = 
cnt; + + return 0; +} + +const volatile __s64 exp_overfetched_elem_cnt = 5; +__s64 res_overfetched_elem_cnt; + +SEC("raw_tp/sys_enter") +int num_overfetched_elem_cnt(const void *ctx) +{ + struct bpf_iter_num it; + int cnt = 0, *v, i; + + bpf_iter_num_new(&it, 0, 5); + for (i = 0; i < 10; i++) { + v = bpf_iter_num_next(&it); + if (v) + cnt++; + } + bpf_iter_num_destroy(&it); + + res_overfetched_elem_cnt = cnt; + + return 0; +} + +const volatile __s64 exp_fail_elem_cnt = 20 + 0; +__s64 res_fail_elem_cnt; + +SEC("raw_tp/sys_enter") +int num_fail_elem_cnt(const void *ctx) +{ + struct bpf_iter_num it; + int cnt = 0, *v, i; + + bpf_iter_num_new(&it, 100, 10); + for (i = 0; i < 10; i++) { + v = bpf_iter_num_next(&it); + if (v) + cnt++; + } + bpf_iter_num_destroy(&it); + + res_fail_elem_cnt = 20 + cnt; + + return 0; +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/iters_state_safety.c b/tools/testing/selftests/bpf/progs/iters_state_safety.c new file mode 100644 index 000000000000..d47e59aba6de --- /dev/null +++ b/tools/testing/selftests/bpf/progs/iters_state_safety.c @@ -0,0 +1,426 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2022 Facebook */ + +#include <errno.h> +#include <string.h> +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +char _license[] SEC("license") = "GPL"; + +#define ITER_HELPERS \ + __imm(bpf_iter_num_new), \ + __imm(bpf_iter_num_next), \ + __imm(bpf_iter_num_destroy) + +SEC("?raw_tp") +__success +int force_clang_to_emit_btf_for_externs(void *ctx) +{ + /* we need this as a workaround to enforce compiler emitting BTF + * information for bpf_iter_num_{new,next,destroy}() kfuncs, + * as, apparently, it doesn't emit it for symbols only referenced from + * assembly (or cleanup attribute, for that matter, as well) + */ + bpf_repeat(0); + + return 0; +} + +SEC("?raw_tp") +__success __log_level(2) +__msg("fp-8_w=iter_num(ref_id=1,state=active,depth=0)") +int create_and_destroy(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("Unreleased reference id=1") +int create_and_forget_to_destroy_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("expected an initialized iter_num as arg #1") +int destroy_without_creating_fail(void *ctx) +{ + /* init with zeros to stop verifier complaining about uninit stack */ + struct bpf_iter_num iter; + + asm volatile ( + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("expected an initialized iter_num as arg #1") +int compromise_iter_w_direct_write_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* directly write over first half of iter state */ + "*(u64 *)(%[iter] + 0) = r0;" + + /* (attempt to) destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : 
__clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("Unreleased reference id=1") +int compromise_iter_w_direct_write_and_skip_destroy_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* directly write over first half of iter state */ + "*(u64 *)(%[iter] + 0) = r0;" + + /* don't destroy iter, leaking ref, which should fail */ + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("expected an initialized iter_num as arg #1") +int compromise_iter_w_helper_write_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* overwrite 8th byte with bpf_probe_read_kernel() */ + "r1 = %[iter];" + "r1 += 7;" + "r2 = 1;" + "r3 = 0;" /* NULL */ + "call %[bpf_probe_read_kernel];" + + /* (attempt to) destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS, __imm(bpf_probe_read_kernel) + : __clobber_common + ); + + return 0; +} + +static __noinline void subprog_with_iter(void) +{ + struct bpf_iter_num iter; + + bpf_iter_num_new(&iter, 0, 1); + + return; +} + +SEC("?raw_tp") +__failure +/* ensure there was a call to subprog, which might happen without __noinline */ +__msg("returning from callee:") +__msg("Unreleased reference id=1") +int leak_iter_from_subprog_fail(void *ctx) +{ + subprog_with_iter(); + + return 0; +} + +SEC("?raw_tp") +__success __log_level(2) +__msg("fp-8_w=iter_num(ref_id=1,state=active,depth=0)") +int valid_stack_reuse(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + + /* now reuse same stack slots */ + + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("expected uninitialized iter_num as arg #1") +int double_create_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + /* (attempt to) create iterator again */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("expected an initialized iter_num as arg #1") +int double_destroy_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + /* (attempt to) destroy iterator again */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("expected an initialized iter_num as arg #1") +int next_without_new_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* don't create iterator and try to iterate*/ + "r1 = %[iter];" + "call 
%[bpf_iter_num_next];" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("expected an initialized iter_num as arg #1") +int next_after_destroy_fail(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* create iterator */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + /* destroy iterator */ + "r1 = %[iter];" + "call %[bpf_iter_num_destroy];" + /* don't create iterator and try to iterate*/ + "r1 = %[iter];" + "call %[bpf_iter_num_next];" + : + : __imm_ptr(iter), ITER_HELPERS + : __clobber_common + ); + + return 0; +} + +SEC("?raw_tp") +__failure __msg("invalid read from stack") +int __naked read_from_iter_slot_fail(void) +{ + asm volatile ( + /* r6 points to struct bpf_iter_num on the stack */ + "r6 = r10;" + "r6 += -24;" + + /* create iterator */ + "r1 = r6;" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* attemp to leak bpf_iter_num state */ + "r7 = *(u64 *)(r6 + 0);" + "r8 = *(u64 *)(r6 + 8);" + + /* destroy iterator */ + "r1 = r6;" + "call %[bpf_iter_num_destroy];" + + /* leak bpf_iter_num state */ + "r0 = r7;" + "if r7 > r8 goto +1;" + "r0 = r8;" + "exit;" + : + : ITER_HELPERS + : __clobber_common, "r6", "r7", "r8" + ); +} + +int zero; + +SEC("?raw_tp") +__failure +__flag(BPF_F_TEST_STATE_FREQ) +__msg("Unreleased reference") +int stacksafe_should_not_conflate_stack_spill_and_iter(void *ctx) +{ + struct bpf_iter_num iter; + + asm volatile ( + /* Create a fork in logic, with general setup as follows: + * - fallthrough (first) path is valid; + * - branch (second) path is invalid. + * Then depending on what we do in fallthrough vs branch path, + * we try to detect bugs in func_states_equal(), regsafe(), + * refsafe(), stack_safe(), and similar by tricking verifier + * into believing that branch state is a valid subset of + * a fallthrough state. Verifier should reject overall + * validation, unless there is a bug somewhere in verifier + * logic. + */ + "call %[bpf_get_prandom_u32];" + "r6 = r0;" + "call %[bpf_get_prandom_u32];" + "r7 = r0;" + + "if r6 > r7 goto bad;" /* fork */ + + /* spill r6 into stack slot of bpf_iter_num var */ + "*(u64 *)(%[iter] + 0) = r6;" + + "goto skip_bad;" + + "bad:" + /* create iterator in the same stack slot */ + "r1 = %[iter];" + "r2 = 0;" + "r3 = 1000;" + "call %[bpf_iter_num_new];" + + /* but then forget about it and overwrite it back to r6 spill */ + "*(u64 *)(%[iter] + 0) = r6;" + + "skip_bad:" + "goto +0;" /* force checkpoint */ + + /* corrupt stack slots, if they are really dynptr */ + "*(u64 *)(%[iter] + 0) = r6;" + : + : __imm_ptr(iter), + __imm_addr(zero), + __imm(bpf_get_prandom_u32), + __imm(bpf_dynptr_from_mem), + ITER_HELPERS + : __clobber_common, "r6", "r7" + ); + + return 0; +} diff --git a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c new file mode 100644 index 000000000000..3873fb6c292a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c @@ -0,0 +1,79 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct bpf_iter_testmod_seq { + u64 :64; + u64 :64; +}; + +extern int bpf_iter_testmod_seq_new(struct bpf_iter_testmod_seq *it, s64 value, int cnt) __ksym; +extern s64 *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq *it) __ksym; +extern void bpf_iter_testmod_seq_destroy(struct bpf_iter_testmod_seq *it) __ksym; + +const volatile __s64 exp_empty = 0 + 1; +__s64 res_empty; + +SEC("raw_tp/sys_enter") +__success __log_level(2) +__msg("fp-16_w=iter_testmod_seq(ref_id=1,state=active,depth=0)") +__msg("fp-16=iter_testmod_seq(ref_id=1,state=drained,depth=0)") +__msg("call bpf_iter_testmod_seq_destroy") +int testmod_seq_empty(const void *ctx) +{ + __s64 sum = 0, *i; + + bpf_for_each(testmod_seq, i, 1000, 0) sum += *i; + res_empty = 1 + sum; + + return 0; +} + +const volatile __s64 exp_full = 1000000; +__s64 res_full; + +SEC("raw_tp/sys_enter") +__success __log_level(2) +__msg("fp-16_w=iter_testmod_seq(ref_id=1,state=active,depth=0)") +__msg("fp-16=iter_testmod_seq(ref_id=1,state=drained,depth=0)") +__msg("call bpf_iter_testmod_seq_destroy") +int testmod_seq_full(const void *ctx) +{ + __s64 sum = 0, *i; + + bpf_for_each(testmod_seq, i, 1000, 1000) sum += *i; + res_full = sum; + + return 0; +} + +const volatile __s64 exp_truncated = 10 * 1000000; +__s64 res_truncated; + +static volatile int zero = 0; + +SEC("raw_tp/sys_enter") +__success __log_level(2) +__msg("fp-16_w=iter_testmod_seq(ref_id=1,state=active,depth=0)") +__msg("fp-16=iter_testmod_seq(ref_id=1,state=drained,depth=0)") +__msg("call bpf_iter_testmod_seq_destroy") +int testmod_seq_truncated(const void *ctx) +{ + __s64 sum = 0, *i; + int cnt = zero; + + bpf_for_each(testmod_seq, i, 10, 2000000) { + sum += *i; + cnt++; + if (cnt >= 1000000) + break; + } + res_truncated = sum; + + return 0; +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/linked_funcs1.c b/tools/testing/selftests/bpf/progs/linked_funcs1.c index b05571bc67d5..c4b49ceea967 100644 --- a/tools/testing/selftests/bpf/progs/linked_funcs1.c +++ b/tools/testing/selftests/bpf/progs/linked_funcs1.c @@ -5,6 +5,7 @@ #include <bpf/bpf_helpers.h> #include <bpf/bpf_tracing.h> #include <bpf/bpf_core_read.h> +#include "bpf_misc.h" /* weak and shared between two files */ const volatile int my_tid __weak; @@ -51,6 +52,7 @@ __weak int set_output_weak(int x) * cause problems for BPF static linker */ whatever = bpf_core_type_size(struct task_struct); + __sink(whatever); output_weak1 = x; return x; @@ -71,6 +73,7 @@ int BPF_PROG(handler1, struct pt_regs *regs, long id) /* make sure we have CO-RE relocations in main program */ whatever = bpf_core_type_size(struct task_struct); + __sink(whatever); set_output_val2(1000); set_output_ctx2(ctx); /* ctx definition is hidden in BPF_PROG macro */ diff --git a/tools/testing/selftests/bpf/progs/linked_funcs2.c b/tools/testing/selftests/bpf/progs/linked_funcs2.c index ee7e3848ee4f..013ff0645f0c 100644 --- a/tools/testing/selftests/bpf/progs/linked_funcs2.c +++ b/tools/testing/selftests/bpf/progs/linked_funcs2.c @@ -5,6 +5,7 @@ #include <bpf/bpf_helpers.h> #include <bpf/bpf_tracing.h> #include <bpf/bpf_core_read.h> +#include "bpf_misc.h" /* weak and shared between both files */ const volatile int my_tid __weak; @@ -51,6 +52,7 @@ __weak int set_output_weak(int x) * cause problems for BPF static linker */ whatever = 2 * bpf_core_type_size(struct task_struct); + __sink(whatever); output_weak2 = x; return 2 * x; @@ -71,6 +73,7 @@ int 
BPF_PROG(handler2, struct pt_regs *regs, long id) /* make sure we have CO-RE relocations in main program */ whatever = bpf_core_type_size(struct task_struct); + __sink(whatever); set_output_val1(2000); set_output_ctx1(ctx); /* ctx definition is hidden in BPF_PROG macro */ diff --git a/tools/testing/selftests/bpf/progs/linked_list.c b/tools/testing/selftests/bpf/progs/linked_list.c index 4fa4a9b01bde..53ded51a3abb 100644 --- a/tools/testing/selftests/bpf/progs/linked_list.c +++ b/tools/testing/selftests/bpf/progs/linked_list.c @@ -313,7 +313,6 @@ SEC("tc") int map_list_push_pop_multiple(void *ctx) { struct map_value *v; - int ret; v = bpf_map_lookup_elem(&array_map, &(int){0}); if (!v) @@ -326,7 +325,6 @@ int inner_map_list_push_pop_multiple(void *ctx) { struct map_value *v; void *map; - int ret; map = bpf_map_lookup_elem(&map_of_maps, &(int){0}); if (!map) @@ -352,7 +350,6 @@ SEC("tc") int map_list_in_list(void *ctx) { struct map_value *v; - int ret; v = bpf_map_lookup_elem(&array_map, &(int){0}); if (!v) @@ -365,7 +362,6 @@ int inner_map_list_in_list(void *ctx) { struct map_value *v; void *map; - int ret; map = bpf_map_lookup_elem(&map_of_maps, &(int){0}); if (!map) diff --git a/tools/testing/selftests/bpf/progs/linked_list_fail.c b/tools/testing/selftests/bpf/progs/linked_list_fail.c index 69cdc07cba13..41978b46f58e 100644 --- a/tools/testing/selftests/bpf/progs/linked_list_fail.c +++ b/tools/testing/selftests/bpf/progs/linked_list_fail.c @@ -557,7 +557,6 @@ SEC("?tc") int incorrect_head_off2(void *ctx) { struct foo *f; - struct bar *b; f = bpf_obj_new(typeof(*f)); if (!f) diff --git a/tools/testing/selftests/bpf/progs/local_kptr_stash.c b/tools/testing/selftests/bpf/progs/local_kptr_stash.c new file mode 100644 index 000000000000..0ef286da092b --- /dev/null +++ b/tools/testing/selftests/bpf/progs/local_kptr_stash.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */ + +#include <vmlinux.h> +#include <bpf/bpf_tracing.h> +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_core_read.h> +#include "bpf_experimental.h" + +struct node_data { + long key; + long data; + struct bpf_rb_node node; +}; + +struct map_value { + struct prog_test_ref_kfunc *not_kptr; + struct prog_test_ref_kfunc __kptr *val; + struct node_data __kptr *node; +}; + +/* This is necessary so that LLVM generates BTF for node_data struct + * If it's not included, a fwd reference for node_data will be generated but + * no struct. 
Example BTF of "node" field in map_value when not included: + * + * [10] PTR '(anon)' type_id=35 + * [34] FWD 'node_data' fwd_kind=struct + * [35] TYPE_TAG 'kptr_ref' type_id=34 + * + * (with no node_data struct defined) + * Had to do the same w/ bpf_kfunc_call_test_release below + */ +struct node_data *just_here_because_btf_bug; + +extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, int); + __type(value, struct map_value); + __uint(max_entries, 2); +} some_nodes SEC(".maps"); + +static int create_and_stash(int idx, int val) +{ + struct map_value *mapval; + struct node_data *res; + + mapval = bpf_map_lookup_elem(&some_nodes, &idx); + if (!mapval) + return 1; + + res = bpf_obj_new(typeof(*res)); + if (!res) + return 1; + res->key = val; + + res = bpf_kptr_xchg(&mapval->node, res); + if (res) + bpf_obj_drop(res); + return 0; +} + +SEC("tc") +long stash_rb_nodes(void *ctx) +{ + return create_and_stash(0, 41) ?: create_and_stash(1, 42); +} + +SEC("tc") +long unstash_rb_node(void *ctx) +{ + struct map_value *mapval; + struct node_data *res; + long retval; + int key = 1; + + mapval = bpf_map_lookup_elem(&some_nodes, &key); + if (!mapval) + return 1; + + res = bpf_kptr_xchg(&mapval->node, NULL); + if (res) { + retval = res->key; + bpf_obj_drop(res); + return retval; + } + return 1; +} + +SEC("tc") +long stash_test_ref_kfunc(void *ctx) +{ + struct prog_test_ref_kfunc *res; + struct map_value *mapval; + int key = 0; + + mapval = bpf_map_lookup_elem(&some_nodes, &key); + if (!mapval) + return 1; + + res = bpf_kptr_xchg(&mapval->val, NULL); + if (res) + bpf_kfunc_call_test_release(res); + return 0; +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/local_storage.c b/tools/testing/selftests/bpf/progs/local_storage.c index 19423ed862e3..bc8ea56671a1 100644 --- a/tools/testing/selftests/bpf/progs/local_storage.c +++ b/tools/testing/selftests/bpf/progs/local_storage.c @@ -16,6 +16,7 @@ char _license[] SEC("license") = "GPL"; int monitored_pid = 0; int inode_storage_result = -1; int sk_storage_result = -1; +int task_storage_result = -1; struct local_storage { struct inode *exec_inode; @@ -50,26 +51,57 @@ struct { __type(value, struct local_storage); } task_storage_map SEC(".maps"); +struct { + __uint(type, BPF_MAP_TYPE_TASK_STORAGE); + __uint(map_flags, BPF_F_NO_PREALLOC); + __type(key, int); + __type(value, struct local_storage); +} task_storage_map2 SEC(".maps"); + SEC("lsm/inode_unlink") int BPF_PROG(unlink_hook, struct inode *dir, struct dentry *victim) { __u32 pid = bpf_get_current_pid_tgid() >> 32; + struct bpf_local_storage *local_storage; struct local_storage *storage; + struct task_struct *task; bool is_self_unlink; if (pid != monitored_pid) return 0; - storage = bpf_task_storage_get(&task_storage_map, - bpf_get_current_task_btf(), 0, 0); - if (storage) { - /* Don't let an executable delete itself */ - is_self_unlink = storage->exec_inode == victim->d_inode; - if (is_self_unlink) - return -EPERM; - } + task = bpf_get_current_task_btf(); + if (!task) + return 0; - return 0; + task_storage_result = -1; + + storage = bpf_task_storage_get(&task_storage_map, task, 0, 0); + if (!storage) + return 0; + + /* Don't let an executable delete itself */ + is_self_unlink = storage->exec_inode == victim->d_inode; + + storage = bpf_task_storage_get(&task_storage_map2, task, 0, + BPF_LOCAL_STORAGE_GET_F_CREATE); + if (!storage || storage->value) + return 0; + + if 
(bpf_task_storage_delete(&task_storage_map, task)) + return 0; + + /* Ensure that the task_storage_map is disconnected from the storage. + * The storage memory should not be freed back to the + * bpf_mem_alloc. + */ + local_storage = task->bpf_storage; + if (!local_storage || local_storage->smap) + return 0; + + task_storage_result = 0; + + return is_self_unlink ? -EPERM : 0; } SEC("lsm.s/inode_rename") @@ -77,7 +109,6 @@ int BPF_PROG(inode_rename, struct inode *old_dir, struct dentry *old_dentry, struct inode *new_dir, struct dentry *new_dentry, unsigned int flags) { - __u32 pid = bpf_get_current_pid_tgid() >> 32; struct local_storage *storage; int err; @@ -109,18 +140,17 @@ int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address, { __u32 pid = bpf_get_current_pid_tgid() >> 32; struct local_storage *storage; - int err; if (pid != monitored_pid) return 0; - storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0, - BPF_LOCAL_STORAGE_GET_F_CREATE); + storage = bpf_sk_storage_get(&sk_storage_map, sock->sk, 0, 0); if (!storage) return 0; + sk_storage_result = -1; if (storage->value != DUMMY_STORAGE_VALUE) - sk_storage_result = -1; + return 0; /* This tests that we can associate multiple elements * with the local storage. @@ -130,14 +160,22 @@ int BPF_PROG(socket_bind, struct socket *sock, struct sockaddr *address, if (!storage) return 0; - err = bpf_sk_storage_delete(&sk_storage_map, sock->sk); - if (err) + if (bpf_sk_storage_delete(&sk_storage_map2, sock->sk)) return 0; - err = bpf_sk_storage_delete(&sk_storage_map2, sock->sk); - if (!err) - sk_storage_result = err; + storage = bpf_sk_storage_get(&sk_storage_map2, sock->sk, 0, + BPF_LOCAL_STORAGE_GET_F_CREATE); + if (!storage) + return 0; + + if (bpf_sk_storage_delete(&sk_storage_map, sock->sk)) + return 0; + + /* Ensure that the sk_storage_map is disconnected from the storage. */ + if (!sock->sk->sk_bpf_storage || sock->sk->sk_bpf_storage->smap) + return 0; + sk_storage_result = 0; return 0; } diff --git a/tools/testing/selftests/bpf/progs/loop6.c b/tools/testing/selftests/bpf/progs/loop6.c index 38de0331e6b4..e4ff97fbcce1 100644 --- a/tools/testing/selftests/bpf/progs/loop6.c +++ b/tools/testing/selftests/bpf/progs/loop6.c @@ -5,6 +5,7 @@ #include <linux/bpf.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_tracing.h> +#include "bpf_misc.h" char _license[] SEC("license") = "GPL"; @@ -76,6 +77,7 @@ int BPF_KPROBE(trace_virtqueue_add_sgs, void *unused, struct scatterlist **sgs, return 0; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) { + __sink(out_sgs); for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX); sgp = __sg_next(sgp)) { bpf_probe_read_kernel(&len, sizeof(len), &sgp->length); @@ -85,6 +87,7 @@ int BPF_KPROBE(trace_virtqueue_add_sgs, void *unused, struct scatterlist **sgs, } for (i = 0; (i < VIRTIO_MAX_SGS) && (i < in_sgs); i++) { + __sink(in_sgs); for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX); sgp = __sg_next(sgp)) { bpf_probe_read_kernel(&len, sizeof(len), &sgp->length); diff --git a/tools/testing/selftests/bpf/progs/lsm.c b/tools/testing/selftests/bpf/progs/lsm.c index dc93887ed34c..fadfdd98707c 100644 --- a/tools/testing/selftests/bpf/progs/lsm.c +++ b/tools/testing/selftests/bpf/progs/lsm.c @@ -4,12 +4,12 @@ * Copyright 2020 Google LLC. 
*/ -#include "bpf_misc.h" #include "vmlinux.h" +#include <errno.h> #include <bpf/bpf_core_read.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_tracing.h> -#include <errno.h> +#include "bpf_misc.h" struct { __uint(type, BPF_MAP_TYPE_ARRAY); diff --git a/tools/testing/selftests/bpf/progs/map_kptr.c b/tools/testing/selftests/bpf/progs/map_kptr.c index 3903d30217b8..dae5dab1bbf7 100644 --- a/tools/testing/selftests/bpf/progs/map_kptr.c +++ b/tools/testing/selftests/bpf/progs/map_kptr.c @@ -515,7 +515,6 @@ int test_ls_map_kptr_ref1(void *ctx) { struct task_struct *current; struct map_value *v; - int ret; current = bpf_get_current_task_btf(); if (!current) @@ -534,7 +533,6 @@ int test_ls_map_kptr_ref2(void *ctx) { struct task_struct *current; struct map_value *v; - int ret; current = bpf_get_current_task_btf(); if (!current) @@ -550,7 +548,6 @@ int test_ls_map_kptr_ref_del(void *ctx) { struct task_struct *current; struct map_value *v; - int ret; current = bpf_get_current_task_btf(); if (!current) diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c index 08f9ec18c345..15bf3127dba3 100644 --- a/tools/testing/selftests/bpf/progs/map_kptr_fail.c +++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c @@ -20,6 +20,7 @@ struct array_map { } array_map SEC(".maps"); extern struct prog_test_ref_kfunc *bpf_kfunc_call_test_acquire(unsigned long *sp) __ksym; +extern void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) __ksym; extern struct prog_test_ref_kfunc * bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **p, int a, int b) __ksym; @@ -442,4 +443,26 @@ int kptr_get_ref_state(struct __sk_buff *ctx) return 0; } +SEC("?tc") +__failure __msg("Possibly NULL pointer passed to helper arg2") +int kptr_xchg_possibly_null(struct __sk_buff *ctx) +{ + struct prog_test_ref_kfunc *p; + struct map_value *v; + int key = 0; + + v = bpf_map_lookup_elem(&array_map, &key); + if (!v) + return 0; + + p = bpf_kfunc_call_test_acquire(&(unsigned long){0}); + + /* PTR_TO_BTF_ID | PTR_MAYBE_NULL passed to bpf_kptr_xchg() */ + p = bpf_kptr_xchg(&v->ref_ptr, p); + if (p) + bpf_kfunc_call_test_release(p); + + return 0; +} + char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/netcnt_prog.c b/tools/testing/selftests/bpf/progs/netcnt_prog.c index f718b2c212dc..f9ef8aee56f1 100644 --- a/tools/testing/selftests/bpf/progs/netcnt_prog.c +++ b/tools/testing/selftests/bpf/progs/netcnt_prog.c @@ -26,7 +26,6 @@ SEC("cgroup/skb") int bpf_nextcnt(struct __sk_buff *skb) { union percpu_net_cnt *percpu_cnt; - char fmt[] = "%d %llu %llu\n"; union net_cnt *cnt; __u64 ts, dt; int ret; diff --git a/tools/testing/selftests/bpf/progs/netif_receive_skb.c b/tools/testing/selftests/bpf/progs/netif_receive_skb.c index 1d8918dfbd3f..c0062645fc68 100644 --- a/tools/testing/selftests/bpf/progs/netif_receive_skb.c +++ b/tools/testing/selftests/bpf/progs/netif_receive_skb.c @@ -53,7 +53,6 @@ static int __strncmp(const void *m1, const void *m2, size_t len) do { \ static const char _expectedval[EXPECTED_STRSIZE] = \ _expected; \ - static const char _ptrtype[64] = #_type; \ __u64 _hflags = _flags | BTF_F_COMPACT; \ static _type _ptrdata = __VA_ARGS__; \ static struct btf_ptr _ptr = { }; \ diff --git a/tools/testing/selftests/bpf/progs/perfbuf_bench.c b/tools/testing/selftests/bpf/progs/perfbuf_bench.c index 45204fe0c570..29c1639fc78a 100644 --- a/tools/testing/selftests/bpf/progs/perfbuf_bench.c +++ b/tools/testing/selftests/bpf/progs/perfbuf_bench.c 
@@ -22,7 +22,6 @@ long dropped __attribute__((aligned(128))) = 0; SEC("fentry/" SYS_PREFIX "sys_getpgid") int bench_perfbuf(void *ctx) { - __u64 *sample; int i; for (i = 0; i < batch_cnt; i++) { diff --git a/tools/testing/selftests/bpf/progs/profiler.inc.h b/tools/testing/selftests/bpf/progs/profiler.inc.h index 875513866032..f799d87e8700 100644 --- a/tools/testing/selftests/bpf/progs/profiler.inc.h +++ b/tools/testing/selftests/bpf/progs/profiler.inc.h @@ -6,6 +6,7 @@ #include <bpf/bpf_tracing.h> #include "profiler.h" +#include "err.h" #ifndef NULL #define NULL 0 @@ -16,7 +17,6 @@ #define O_DIRECTORY 00200000 #define __O_TMPFILE 020000000 #define O_TMPFILE (__O_TMPFILE | O_DIRECTORY) -#define MAX_ERRNO 4095 #define S_IFMT 00170000 #define S_IFSOCK 0140000 #define S_IFLNK 0120000 @@ -34,7 +34,6 @@ #define S_ISBLK(m) (((m)&S_IFMT) == S_IFBLK) #define S_ISFIFO(m) (((m)&S_IFMT) == S_IFIFO) #define S_ISSOCK(m) (((m)&S_IFMT) == S_IFSOCK) -#define IS_ERR_VALUE(x) (unsigned long)(void*)(x) >= (unsigned long)-MAX_ERRNO #define KILL_DATA_ARRAY_SIZE 8 diff --git a/tools/testing/selftests/bpf/progs/pyperf.h b/tools/testing/selftests/bpf/progs/pyperf.h index 6c7b1fb268d6..026d573ce179 100644 --- a/tools/testing/selftests/bpf/progs/pyperf.h +++ b/tools/testing/selftests/bpf/progs/pyperf.h @@ -7,6 +7,7 @@ #include <stdbool.h> #include <linux/bpf.h> #include <bpf/bpf_helpers.h> +#include "bpf_misc.h" #define FUNCTION_NAME_LEN 64 #define FILE_NAME_LEN 128 @@ -294,17 +295,22 @@ int __on_event(struct bpf_raw_tracepoint_args *ctx) if (ctx.done) return 0; #else -#ifdef NO_UNROLL +#if defined(USE_ITER) +/* no for loop, no unrolling */ +#elif defined(NO_UNROLL) #pragma clang loop unroll(disable) -#else -#ifdef UNROLL_COUNT +#elif defined(UNROLL_COUNT) #pragma clang loop unroll_count(UNROLL_COUNT) #else #pragma clang loop unroll(full) -#endif #endif /* NO_UNROLL */ /* Unwind python stack */ +#ifdef USE_ITER + int i; + bpf_for(i, 0, STACK_MAX_LEN) { +#else /* !USE_ITER */ for (int i = 0; i < STACK_MAX_LEN; ++i) { +#endif if (frame_ptr && get_frame_data(frame_ptr, pidData, &frame, &sym)) { int32_t new_symbol_id = *symbol_counter * 64 + cur_cpu; int32_t *symbol_id = bpf_map_lookup_elem(&symbolmap, &sym); @@ -339,7 +345,7 @@ int __on_event(struct bpf_raw_tracepoint_args *ctx) SEC("raw_tracepoint/kfree_skb") int on_event(struct bpf_raw_tracepoint_args* ctx) { - int i, ret = 0; + int ret = 0; ret |= __on_event(ctx); ret |= __on_event(ctx); ret |= __on_event(ctx); diff --git a/tools/testing/selftests/bpf/progs/pyperf600_iter.c b/tools/testing/selftests/bpf/progs/pyperf600_iter.c new file mode 100644 index 000000000000..d62e1b200c30 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/pyperf600_iter.c @@ -0,0 +1,7 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2023 Meta Platforms, Inc. and affiliates. +#define STACK_MAX_LEN 600 +#define SUBPROGS +#define NO_UNROLL +#define USE_ITER +#include "pyperf.h" diff --git a/tools/testing/selftests/bpf/progs/pyperf600_nounroll.c b/tools/testing/selftests/bpf/progs/pyperf600_nounroll.c index 6beff7502f4d..520b58c4f8db 100644 --- a/tools/testing/selftests/bpf/progs/pyperf600_nounroll.c +++ b/tools/testing/selftests/bpf/progs/pyperf600_nounroll.c @@ -2,7 +2,4 @@ // Copyright (c) 2019 Facebook #define STACK_MAX_LEN 600 #define NO_UNROLL -/* clang will not unroll at all. 
- * Total program size is around 2k insns - */ #include "pyperf.h" diff --git a/tools/testing/selftests/bpf/progs/rbtree_btf_fail__wrong_node_type.c b/tools/testing/selftests/bpf/progs/rbtree_btf_fail__wrong_node_type.c index 340f97da1084..7651843f5a80 100644 --- a/tools/testing/selftests/bpf/progs/rbtree_btf_fail__wrong_node_type.c +++ b/tools/testing/selftests/bpf/progs/rbtree_btf_fail__wrong_node_type.c @@ -16,17 +16,6 @@ struct node_data { struct bpf_list_node node; }; -static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b) -{ - struct node_data *node_a; - struct node_data *node_b; - - node_a = container_of(a, struct node_data, node); - node_b = container_of(b, struct node_data, node); - - return node_a->key < node_b->key; -} - #define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8))) private(A) struct bpf_spin_lock glock; private(A) struct bpf_rb_root groot __contains(node_data, node); diff --git a/tools/testing/selftests/bpf/progs/rbtree_fail.c b/tools/testing/selftests/bpf/progs/rbtree_fail.c index 1ced900f3fce..46d7d18a218f 100644 --- a/tools/testing/selftests/bpf/progs/rbtree_fail.c +++ b/tools/testing/selftests/bpf/progs/rbtree_fail.c @@ -105,7 +105,7 @@ long rbtree_api_remove_unadded_node(void *ctx) } SEC("?tc") -__failure __msg("Unreleased reference id=2 alloc_insn=11") +__failure __msg("Unreleased reference id=2 alloc_insn=10") long rbtree_api_remove_no_drop(void *ctx) { struct bpf_rb_node *res; @@ -119,6 +119,7 @@ long rbtree_api_remove_no_drop(void *ctx) res = bpf_rbtree_remove(&groot, res); n = container_of(res, struct node_data, node); + __sink(n); bpf_spin_unlock(&glock); /* bpf_obj_drop(n) is missing here */ diff --git a/tools/testing/selftests/bpf/progs/rcu_read_lock.c b/tools/testing/selftests/bpf/progs/rcu_read_lock.c index 7250bb76d18a..14fb01437fb8 100644 --- a/tools/testing/selftests/bpf/progs/rcu_read_lock.c +++ b/tools/testing/selftests/bpf/progs/rcu_read_lock.c @@ -23,7 +23,7 @@ struct bpf_key *bpf_lookup_user_key(__u32 serial, __u64 flags) __ksym; void bpf_key_put(struct bpf_key *key) __ksym; void bpf_rcu_read_lock(void) __ksym; void bpf_rcu_read_unlock(void) __ksym; -struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p) __ksym; +struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym; void bpf_task_release(struct task_struct *p) __ksym; SEC("?fentry.s/" SYS_PREFIX "sys_getpgid") @@ -159,13 +159,8 @@ int task_acquire(void *ctx) goto out; /* acquire a reference which can be used outside rcu read lock region */ - gparent = bpf_task_acquire_not_zero(gparent); + gparent = bpf_task_acquire(gparent); if (!gparent) - /* Until we resolve the issues with using task->rcu_users, we - * expect bpf_task_acquire_not_zero() to return a NULL task. - * See the comment at the definition of - * bpf_task_acquire_not_zero() for more details. 
- */ goto out; (void)bpf_task_storage_get(&map_a, gparent, 0, 0); @@ -179,8 +174,6 @@ SEC("?fentry.s/" SYS_PREFIX "sys_getpgid") int miss_lock(void *ctx) { struct task_struct *task; - struct css_set *cgroups; - struct cgroup *dfl_cgrp; /* missing bpf_rcu_read_lock() */ task = bpf_get_current_task_btf(); @@ -195,8 +188,6 @@ SEC("?fentry.s/" SYS_PREFIX "sys_getpgid") int miss_unlock(void *ctx) { struct task_struct *task; - struct css_set *cgroups; - struct cgroup *dfl_cgrp; /* missing bpf_rcu_read_unlock() */ task = bpf_get_current_task_btf(); diff --git a/tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c b/tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c index a47bb0120719..76556e0b42b2 100644 --- a/tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c +++ b/tools/testing/selftests/bpf/progs/read_bpf_task_storage_busy.c @@ -23,7 +23,6 @@ SEC("raw_tp/sys_enter") int BPF_PROG(read_bpf_task_storage_busy) { int *value; - int key; if (!CONFIG_PREEMPT) return 0; diff --git a/tools/testing/selftests/bpf/progs/recvmsg4_prog.c b/tools/testing/selftests/bpf/progs/recvmsg4_prog.c index 3d1ae8b3402f..59748c95471a 100644 --- a/tools/testing/selftests/bpf/progs/recvmsg4_prog.c +++ b/tools/testing/selftests/bpf/progs/recvmsg4_prog.c @@ -17,8 +17,6 @@ SEC("cgroup/recvmsg4") int recvmsg4_prog(struct bpf_sock_addr *ctx) { struct bpf_sock *sk; - __u32 user_ip4; - __u16 user_port; sk = ctx->sk; if (!sk) diff --git a/tools/testing/selftests/bpf/progs/recvmsg6_prog.c b/tools/testing/selftests/bpf/progs/recvmsg6_prog.c index 27dfb21b21b4..d9a4016596d5 100644 --- a/tools/testing/selftests/bpf/progs/recvmsg6_prog.c +++ b/tools/testing/selftests/bpf/progs/recvmsg6_prog.c @@ -20,8 +20,6 @@ SEC("cgroup/recvmsg6") int recvmsg6_prog(struct bpf_sock_addr *ctx) { struct bpf_sock *sk; - __u32 user_ip4; - __u16 user_port; sk = ctx->sk; if (!sk) diff --git a/tools/testing/selftests/bpf/progs/sendmsg4_prog.c b/tools/testing/selftests/bpf/progs/sendmsg4_prog.c index ea75a44cb7fc..351e79aef2fa 100644 --- a/tools/testing/selftests/bpf/progs/sendmsg4_prog.c +++ b/tools/testing/selftests/bpf/progs/sendmsg4_prog.c @@ -21,8 +21,6 @@ SEC("cgroup/sendmsg4") int sendmsg_v4_prog(struct bpf_sock_addr *ctx) { - int prio; - if (ctx->type != SOCK_DGRAM) return 0; diff --git a/tools/testing/selftests/bpf/progs/sockmap_verdict_prog.c b/tools/testing/selftests/bpf/progs/sockmap_verdict_prog.c index e2468a6d01a5..0660f29dca95 100644 --- a/tools/testing/selftests/bpf/progs/sockmap_verdict_prog.c +++ b/tools/testing/selftests/bpf/progs/sockmap_verdict_prog.c @@ -1,6 +1,7 @@ #include <linux/bpf.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_endian.h> +#include "bpf_misc.h" struct { __uint(type, BPF_MAP_TYPE_SOCKMAP); @@ -40,6 +41,9 @@ int bpf_prog2(struct __sk_buff *skb) __u8 *d = data; __u8 sk, map; + __sink(lport); + __sink(rport); + if (data + 8 > data_end) return SK_DROP; diff --git a/tools/testing/selftests/bpf/progs/strobemeta.h b/tools/testing/selftests/bpf/progs/strobemeta.h index e562be6356f3..e02cfd380746 100644 --- a/tools/testing/selftests/bpf/progs/strobemeta.h +++ b/tools/testing/selftests/bpf/progs/strobemeta.h @@ -391,7 +391,6 @@ static __always_inline void *read_map_var(struct strobemeta_cfg *cfg, struct strobe_map_raw map; void *location; uint64_t len; - int i; descr->tag_len = 0; /* presume no tag is set */ descr->cnt = -1; /* presume no value is set */ diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf3.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf3.c index 
7fab39a3bb12..99c8d1d8a187 100644 --- a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf3.c +++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf3.c @@ -2,6 +2,7 @@ #include <linux/bpf.h> #include <bpf/bpf_helpers.h> #include "bpf_legacy.h" +#include "bpf_misc.h" struct { __uint(type, BPF_MAP_TYPE_PROG_ARRAY); @@ -20,6 +21,8 @@ int subprog_tail2(struct __sk_buff *skb) else bpf_tail_call_static(skb, &jmp_table, 1); + __sink(arr[sizeof(arr) - 1]); + return skb->len; } @@ -30,6 +33,8 @@ int subprog_tail(struct __sk_buff *skb) bpf_tail_call_static(skb, &jmp_table, 0); + __sink(arr[sizeof(arr) - 1]); + return skb->len * 2; } @@ -38,6 +43,8 @@ int classifier_0(struct __sk_buff *skb) { volatile char arr[128] = {}; + __sink(arr[sizeof(arr) - 1]); + return subprog_tail2(skb); } @@ -46,6 +53,8 @@ int classifier_1(struct __sk_buff *skb) { volatile char arr[128] = {}; + __sink(arr[sizeof(arr) - 1]); + return skb->len * 3; } @@ -54,6 +63,8 @@ int entry(struct __sk_buff *skb) { volatile char arr[128] = {}; + __sink(arr[sizeof(arr) - 1]); + return subprog_tail(skb); } diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c index 41ce83da78e8..4a9f63bea66c 100644 --- a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c +++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #include <linux/bpf.h> #include <bpf/bpf_helpers.h> +#include "bpf_misc.h" #define __unused __attribute__((unused)) @@ -36,6 +37,8 @@ int entry(struct __sk_buff *skb) /* Have data on stack which size is not a multiple of 8 */ volatile char arr[1] = {}; + __sink(arr[0]); + return subprog_tail(skb); } diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_common.h b/tools/testing/selftests/bpf/progs/task_kfunc_common.h index 4c2a4b0e3a25..41f2d44f49cb 100644 --- a/tools/testing/selftests/bpf/progs/task_kfunc_common.h +++ b/tools/testing/selftests/bpf/progs/task_kfunc_common.h @@ -21,9 +21,10 @@ struct hash_map { } __tasks_kfunc_map SEC(".maps"); struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym; -struct task_struct *bpf_task_kptr_get(struct task_struct **pp) __ksym; void bpf_task_release(struct task_struct *p) __ksym; struct task_struct *bpf_task_from_pid(s32 pid) __ksym; +void bpf_rcu_read_lock(void) __ksym; +void bpf_rcu_read_unlock(void) __ksym; static inline struct __tasks_kfunc_map_value *tasks_kfunc_map_value_lookup(struct task_struct *p) { @@ -60,6 +61,9 @@ static inline int tasks_kfunc_map_insert(struct task_struct *p) } acquired = bpf_task_acquire(p); + if (!acquired) + return -ENOENT; + old = bpf_kptr_xchg(&v->task, acquired); if (old) { bpf_task_release(old); diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c index f19d54eda4f1..dcdea3127086 100644 --- a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c +++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c @@ -40,6 +40,9 @@ int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_f /* Can't invoke bpf_task_acquire() on an untrusted pointer. */ acquired = bpf_task_acquire(v->task); + if (!acquired) + return 0; + bpf_task_release(acquired); return 0; @@ -53,38 +56,49 @@ int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags) /* Can't invoke bpf_task_acquire() on a random frame pointer. 
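 * (Editor's note, hedged: a stack address cast to a task_struct pointer is
 * only PTR_TO_STACK from the verifier's point of view, not a trusted or
 * RCU-protected task pointer, so the acquire kfunc call is expected to be
 * rejected here.)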
*/ acquired = bpf_task_acquire((struct task_struct *)&stack_task); + if (!acquired) + return 0; + bpf_task_release(acquired); return 0; } SEC("kretprobe/free_task") -__failure __msg("reg type unsupported for arg#0 function") +__failure __msg("calling kernel function bpf_task_acquire is not allowed") int BPF_PROG(task_kfunc_acquire_unsafe_kretprobe, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; + /* Can't call bpf_task_acquire() or bpf_task_release() in an untrusted prog. */ acquired = bpf_task_acquire(task); - /* Can't release a bpf_task_acquire()'d task without a NULL check. */ + if (!acquired) + return 0; bpf_task_release(acquired); return 0; } -SEC("tp_btf/task_newtask") -__failure __msg("R1 must be referenced or trusted") -int BPF_PROG(task_kfunc_acquire_trusted_walked, struct task_struct *task, u64 clone_flags) +SEC("kretprobe/free_task") +__failure __msg("calling kernel function bpf_task_acquire is not allowed") +int BPF_PROG(task_kfunc_acquire_unsafe_kretprobe_rcu, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; - /* Can't invoke bpf_task_acquire() on a trusted pointer obtained from walking a struct. */ - acquired = bpf_task_acquire(task->group_leader); - bpf_task_release(acquired); + bpf_rcu_read_lock(); + if (!task) { + bpf_rcu_read_unlock(); + return 0; + } + /* Can't call bpf_task_acquire() or bpf_task_release() in an untrusted prog. */ + acquired = bpf_task_acquire(task); + if (acquired) + bpf_task_release(acquired); + bpf_rcu_read_unlock(); return 0; } - SEC("tp_btf/task_newtask") __failure __msg("Possibly NULL pointer passed to trusted arg0") int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags) @@ -109,57 +123,7 @@ int BPF_PROG(task_kfunc_acquire_unreleased, struct task_struct *task, u64 clone_ acquired = bpf_task_acquire(task); /* Acquired task is never released. */ - - return 0; -} - -SEC("tp_btf/task_newtask") -__failure __msg("arg#0 expected pointer to map value") -int BPF_PROG(task_kfunc_get_non_kptr_param, struct task_struct *task, u64 clone_flags) -{ - struct task_struct *kptr; - - /* Cannot use bpf_task_kptr_get() on a non-kptr, even on a valid task. */ - kptr = bpf_task_kptr_get(&task); - if (!kptr) - return 0; - - bpf_task_release(kptr); - - return 0; -} - -SEC("tp_btf/task_newtask") -__failure __msg("arg#0 expected pointer to map value") -int BPF_PROG(task_kfunc_get_non_kptr_acquired, struct task_struct *task, u64 clone_flags) -{ - struct task_struct *kptr, *acquired; - - acquired = bpf_task_acquire(task); - - /* Cannot use bpf_task_kptr_get() on a non-kptr, even if it was acquired. */ - kptr = bpf_task_kptr_get(&acquired); - bpf_task_release(acquired); - if (!kptr) - return 0; - - bpf_task_release(kptr); - - return 0; -} - -SEC("tp_btf/task_newtask") -__failure __msg("arg#0 expected pointer to map value") -int BPF_PROG(task_kfunc_get_null, struct task_struct *task, u64 clone_flags) -{ - struct task_struct *kptr; - - /* Cannot use bpf_task_kptr_get() on a NULL pointer. 
*/ - kptr = bpf_task_kptr_get(NULL); - if (!kptr) - return 0; - - bpf_task_release(kptr); + __sink(acquired); return 0; } @@ -185,27 +149,20 @@ int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_fla } SEC("tp_btf/task_newtask") -__failure __msg("Unreleased reference") -int BPF_PROG(task_kfunc_get_unreleased, struct task_struct *task, u64 clone_flags) +__failure __msg("Possibly NULL pointer passed to trusted arg0") +int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task, u64 clone_flags) { - struct task_struct *kptr; - struct __tasks_kfunc_map_value *v; - - v = insert_lookup_task(task); - if (!v) - return 0; - - kptr = bpf_task_kptr_get(&v->task); - if (!kptr) - return 0; + struct task_struct *acquired; - /* Kptr acquired above is never released. */ + acquired = bpf_task_acquire(task); + /* Can't invoke bpf_task_release() on an acquired task without a NULL check. */ + bpf_task_release(acquired); return 0; } SEC("tp_btf/task_newtask") -__failure __msg("arg#0 is untrusted_ptr_or_null_ expected ptr_ or socket") +__failure __msg("Possibly NULL pointer passed to trusted arg0") int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags) { struct __tasks_kfunc_map_value *v; @@ -233,7 +190,7 @@ int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags) } SEC("tp_btf/task_newtask") -__failure __msg("arg#0 is ptr_or_null_ expected ptr_ or socket") +__failure __msg("Possibly NULL pointer passed to trusted arg0") int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags) { struct __tasks_kfunc_map_value local, *v; @@ -255,12 +212,13 @@ int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags) return -ENOENT; acquired = bpf_task_acquire(task); + if (!acquired) + return -EEXIST; old = bpf_kptr_xchg(&v->task, acquired); /* old cannot be passed to bpf_task_release() without a NULL check. */ bpf_task_release(old); - bpf_task_release(old); return 0; } @@ -276,7 +234,7 @@ int BPF_PROG(task_kfunc_release_unacquired, struct task_struct *task, u64 clone_ } SEC("tp_btf/task_newtask") -__failure __msg("arg#0 is ptr_or_null_ expected ptr_ or socket") +__failure __msg("Possibly NULL pointer passed to trusted arg0") int BPF_PROG(task_kfunc_from_pid_no_null_check, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; @@ -297,6 +255,72 @@ int BPF_PROG(task_kfunc_from_lsm_task_free, struct task_struct *task) /* the argument of lsm task_free hook is untrusted. */ acquired = bpf_task_acquire(task); + if (!acquired) + return 0; + bpf_task_release(acquired); return 0; } + +SEC("tp_btf/task_newtask") +__failure __msg("access beyond the end of member comm") +int BPF_PROG(task_access_comm1, struct task_struct *task, u64 clone_flags) +{ + bpf_strncmp(task->comm, 17, "foo"); + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("access beyond the end of member comm") +int BPF_PROG(task_access_comm2, struct task_struct *task, u64 clone_flags) +{ + bpf_strncmp(task->comm + 1, 16, "foo"); + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("write into memory") +int BPF_PROG(task_access_comm3, struct task_struct *task, u64 clone_flags) +{ + bpf_probe_read_kernel(task->comm, 16, task->comm); + return 0; +} + +SEC("fentry/__set_task_comm") +__failure __msg("R1 type=ptr_ expected") +int BPF_PROG(task_access_comm4, struct task_struct *task, const char *buf, bool exec) +{ + /* + * task->comm is a legacy ptr_to_btf_id. 
The verifier cannot guarantee + * its safety. Hence it cannot be accessed with normal load insns. + */ + bpf_strncmp(task->comm, 16, "foo"); + return 0; +} + +SEC("tp_btf/task_newtask") +__failure __msg("R1 must be referenced or trusted") +int BPF_PROG(task_kfunc_release_in_map, struct task_struct *task, u64 clone_flags) +{ + struct task_struct *local; + struct __tasks_kfunc_map_value *v; + + if (tasks_kfunc_map_insert(task)) + return 0; + + v = tasks_kfunc_map_value_lookup(task); + if (!v) + return 0; + + bpf_rcu_read_lock(); + local = v->task; + if (!local) { + bpf_rcu_read_unlock(); + return 0; + } + /* Can't release a kptr that's still stored in a map. */ + bpf_task_release(local); + bpf_rcu_read_unlock(); + + return 0; +} diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_success.c b/tools/testing/selftests/bpf/progs/task_kfunc_success.c index 9f359cfd29e7..b09371bba204 100644 --- a/tools/testing/selftests/bpf/progs/task_kfunc_success.c +++ b/tools/testing/selftests/bpf/progs/task_kfunc_success.c @@ -17,6 +17,10 @@ int err, pid; * TP_PROTO(struct task_struct *p, u64 clone_flags) */ +struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym __weak; +void invalid_kfunc(void) __ksym __weak; +void bpf_testmod_test_mod_kfunc(int i) __ksym __weak; + static bool is_test_kfunc_task(void) { int cur_pid = bpf_get_current_pid_tgid() >> 32; @@ -26,10 +30,27 @@ static bool is_test_kfunc_task(void) static int test_acquire_release(struct task_struct *task) { - struct task_struct *acquired; + struct task_struct *acquired = NULL; + + if (!bpf_ksym_exists(bpf_task_acquire)) { + err = 3; + return 0; + } + if (!bpf_ksym_exists(bpf_testmod_test_mod_kfunc)) { + err = 4; + return 0; + } + if (bpf_ksym_exists(invalid_kfunc)) { + /* the verifier's dead code elimination should remove this */ + err = 5; + asm volatile ("goto -1"); /* for (;;); */ + } acquired = bpf_task_acquire(task); - bpf_task_release(acquired); + if (acquired) + bpf_task_release(acquired); + else + err = 6; return 0; } @@ -101,7 +122,7 @@ int BPF_PROG(test_task_xchg_release, struct task_struct *task, u64 clone_flags) } SEC("tp_btf/task_newtask") -int BPF_PROG(test_task_get_release, struct task_struct *task, u64 clone_flags) +int BPF_PROG(test_task_map_acquire_release, struct task_struct *task, u64 clone_flags) { struct task_struct *kptr; struct __tasks_kfunc_map_value *v; @@ -122,18 +143,18 @@ int BPF_PROG(test_task_get_release, struct task_struct *task, u64 clone_flags) return 0; } - kptr = bpf_task_kptr_get(&v->task); - if (kptr) { - /* Until we resolve the issues with using task->rcu_users, we - * expect bpf_task_kptr_get() to return a NULL task. See the - * comment at the definition of bpf_task_acquire_not_zero() for - * more details. 
- */ - bpf_task_release(kptr); + bpf_rcu_read_lock(); + kptr = v->task; + if (!kptr) { err = 3; - return 0; + } else { + kptr = bpf_task_acquire(kptr); + if (!kptr) + err = 4; + else + bpf_task_release(kptr); } - + bpf_rcu_read_unlock(); return 0; } @@ -148,7 +169,10 @@ int BPF_PROG(test_task_current_acquire_release, struct task_struct *task, u64 cl current = bpf_get_current_task_btf(); acquired = bpf_task_acquire(current); - bpf_task_release(acquired); + if (acquired) + bpf_task_release(acquired); + else + err = 1; return 0; } @@ -171,8 +195,6 @@ static void lookup_compare_pid(const struct task_struct *p) SEC("tp_btf/task_newtask") int BPF_PROG(test_task_from_pid_arg, struct task_struct *task, u64 clone_flags) { - struct task_struct *acquired; - if (!is_test_kfunc_task()) return 0; @@ -183,8 +205,6 @@ int BPF_PROG(test_task_from_pid_arg, struct task_struct *task, u64 clone_flags) SEC("tp_btf/task_newtask") int BPF_PROG(test_task_from_pid_current, struct task_struct *task, u64 clone_flags) { - struct task_struct *current, *acquired; - if (!is_test_kfunc_task()) return 0; @@ -208,11 +228,13 @@ static int is_pid_lookup_valid(s32 pid) SEC("tp_btf/task_newtask") int BPF_PROG(test_task_from_pid_invalid, struct task_struct *task, u64 clone_flags) { - struct task_struct *acquired; - if (!is_test_kfunc_task()) return 0; + bpf_strncmp(task->comm, 12, "foo"); + bpf_strncmp(task->comm, 16, "foo"); + bpf_strncmp(&task->comm[8], 4, "foo"); + if (is_pid_lookup_valid(-1)) { err = 1; return 0; @@ -225,3 +247,19 @@ int BPF_PROG(test_task_from_pid_invalid, struct task_struct *task, u64 clone_fla return 0; } + +SEC("tp_btf/task_newtask") +int BPF_PROG(task_kfunc_acquire_trusted_walked, struct task_struct *task, u64 clone_flags) +{ + struct task_struct *acquired; + + /* task->group_leader is listed as a trusted, non-NULL field of task struct. 
*/ + acquired = bpf_task_acquire(task->group_leader); + if (acquired) + bpf_task_release(acquired); + else + err = 1; + + + return 0; +} diff --git a/tools/testing/selftests/bpf/progs/tcp_ca_update.c b/tools/testing/selftests/bpf/progs/tcp_ca_update.c new file mode 100644 index 000000000000..b93a0ed33057 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/tcp_ca_update.c @@ -0,0 +1,80 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include "vmlinux.h" + +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_tracing.h> + +char _license[] SEC("license") = "GPL"; + +int ca1_cnt = 0; +int ca2_cnt = 0; + +static inline struct tcp_sock *tcp_sk(const struct sock *sk) +{ + return (struct tcp_sock *)sk; +} + +SEC("struct_ops/ca_update_1_init") +void BPF_PROG(ca_update_1_init, struct sock *sk) +{ + ca1_cnt++; +} + +SEC("struct_ops/ca_update_2_init") +void BPF_PROG(ca_update_2_init, struct sock *sk) +{ + ca2_cnt++; +} + +SEC("struct_ops/ca_update_cong_control") +void BPF_PROG(ca_update_cong_control, struct sock *sk, + const struct rate_sample *rs) +{ +} + +SEC("struct_ops/ca_update_ssthresh") +__u32 BPF_PROG(ca_update_ssthresh, struct sock *sk) +{ + return tcp_sk(sk)->snd_ssthresh; +} + +SEC("struct_ops/ca_update_undo_cwnd") +__u32 BPF_PROG(ca_update_undo_cwnd, struct sock *sk) +{ + return tcp_sk(sk)->snd_cwnd; +} + +SEC(".struct_ops.link") +struct tcp_congestion_ops ca_update_1 = { + .init = (void *)ca_update_1_init, + .cong_control = (void *)ca_update_cong_control, + .ssthresh = (void *)ca_update_ssthresh, + .undo_cwnd = (void *)ca_update_undo_cwnd, + .name = "tcp_ca_update", +}; + +SEC(".struct_ops.link") +struct tcp_congestion_ops ca_update_2 = { + .init = (void *)ca_update_2_init, + .cong_control = (void *)ca_update_cong_control, + .ssthresh = (void *)ca_update_ssthresh, + .undo_cwnd = (void *)ca_update_undo_cwnd, + .name = "tcp_ca_update", +}; + +SEC(".struct_ops.link") +struct tcp_congestion_ops ca_wrong = { + .cong_control = (void *)ca_update_cong_control, + .ssthresh = (void *)ca_update_ssthresh, + .undo_cwnd = (void *)ca_update_undo_cwnd, + .name = "tcp_ca_wrong", +}; + +SEC(".struct_ops") +struct tcp_congestion_ops ca_no_link = { + .cong_control = (void *)ca_update_cong_control, + .ssthresh = (void *)ca_update_ssthresh, + .undo_cwnd = (void *)ca_update_undo_cwnd, + .name = "tcp_ca_no_link", +}; diff --git a/tools/testing/selftests/bpf/progs/tcp_ca_write_sk_pacing.c b/tools/testing/selftests/bpf/progs/tcp_ca_write_sk_pacing.c index 43447704cf0e..0724a79cec78 100644 --- a/tools/testing/selftests/bpf/progs/tcp_ca_write_sk_pacing.c +++ b/tools/testing/selftests/bpf/progs/tcp_ca_write_sk_pacing.c @@ -16,6 +16,16 @@ static inline struct tcp_sock *tcp_sk(const struct sock *sk) return (struct tcp_sock *)sk; } +static inline unsigned int tcp_left_out(const struct tcp_sock *tp) +{ + return tp->sacked_out + tp->lost_out; +} + +static inline unsigned int tcp_packets_in_flight(const struct tcp_sock *tp) +{ + return tp->packets_out - tcp_left_out(tp) + tp->retrans_out; +} + SEC("struct_ops/write_sk_pacing_init") void BPF_PROG(write_sk_pacing_init, struct sock *sk) { @@ -31,11 +41,12 @@ SEC("struct_ops/write_sk_pacing_cong_control") void BPF_PROG(write_sk_pacing_cong_control, struct sock *sk, const struct rate_sample *rs) { - const struct tcp_sock *tp = tcp_sk(sk); + struct tcp_sock *tp = tcp_sk(sk); unsigned long rate = ((tp->snd_cwnd * tp->mss_cache * USEC_PER_SEC) << 3) / (tp->srtt_us ?: 1U << 3); sk->sk_pacing_rate = min(rate, sk->sk_max_pacing_rate); + tp->app_limited = (tp->delivered + 
tcp_packets_in_flight(tp)) ?: 1; } SEC("struct_ops/write_sk_pacing_ssthresh") diff --git a/tools/testing/selftests/bpf/progs/test_bpf_nf.c b/tools/testing/selftests/bpf/progs/test_bpf_nf.c index 9fc603c9d673..77ad8adf68da 100644 --- a/tools/testing/selftests/bpf/progs/test_bpf_nf.c +++ b/tools/testing/selftests/bpf/progs/test_bpf_nf.c @@ -75,7 +75,6 @@ nf_ct_test(struct nf_conn *(*lookup_fn)(void *, struct bpf_sock_tuple *, u32, struct bpf_ct_opts___local opts_def = { .l4proto = IPPROTO_TCP, .netns_id = -1 }; struct bpf_sock_tuple bpf_tuple; struct nf_conn *ct; - int err; __builtin_memset(&bpf_tuple, 0, sizeof(bpf_tuple.ipv4)); diff --git a/tools/testing/selftests/bpf/progs/test_cls_redirect_dynptr.c b/tools/testing/selftests/bpf/progs/test_cls_redirect_dynptr.c index f45a7095de7a..f41c81212ee9 100644 --- a/tools/testing/selftests/bpf/progs/test_cls_redirect_dynptr.c +++ b/tools/testing/selftests/bpf/progs/test_cls_redirect_dynptr.c @@ -455,7 +455,6 @@ static ret_t forward_to_next_hop(struct __sk_buff *skb, struct bpf_dynptr *dynpt static ret_t skip_next_hops(__u64 *offset, int n) { - __u32 res; switch (n) { case 1: *offset += sizeof(struct in_addr); diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c index ab1e647aeb31..b86fdda2a6ea 100644 --- a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c +++ b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c @@ -42,7 +42,6 @@ int test_core_bitfields(void *ctx) { struct core_reloc_bitfields *in = (void *)&data.in; struct core_reloc_bitfields_output *out = (void *)&data.out; - uint64_t res; out->ub1 = BPF_CORE_READ_BITFIELD_PROBED(in, ub1); out->ub2 = BPF_CORE_READ_BITFIELD_PROBED(in, ub2); diff --git a/tools/testing/selftests/bpf/progs/test_global_func1.c b/tools/testing/selftests/bpf/progs/test_global_func1.c index 23970a20b324..b85fc8c423ba 100644 --- a/tools/testing/selftests/bpf/progs/test_global_func1.c +++ b/tools/testing/selftests/bpf/progs/test_global_func1.c @@ -18,6 +18,8 @@ int f1(struct __sk_buff *skb) { volatile char buf[MAX_STACK] = {}; + __sink(buf[MAX_STACK - 1]); + return f0(0, skb) + skb->len; } @@ -34,6 +36,8 @@ int f3(int val, struct __sk_buff *skb, int var) { volatile char buf[MAX_STACK] = {}; + __sink(buf[MAX_STACK - 1]); + return skb->ifindex * val * var; } diff --git a/tools/testing/selftests/bpf/progs/test_global_func2.c b/tools/testing/selftests/bpf/progs/test_global_func2.c index 3dce97fb52a4..2beab9c3b68a 100644 --- a/tools/testing/selftests/bpf/progs/test_global_func2.c +++ b/tools/testing/selftests/bpf/progs/test_global_func2.c @@ -18,6 +18,8 @@ int f1(struct __sk_buff *skb) { volatile char buf[MAX_STACK] = {}; + __sink(buf[MAX_STACK - 1]); + return f0(0, skb) + skb->len; } @@ -34,6 +36,8 @@ int f3(int val, struct __sk_buff *skb, int var) { volatile char buf[MAX_STACK] = {}; + __sink(buf[MAX_STACK - 1]); + return skb->ifindex * val * var; } diff --git a/tools/testing/selftests/bpf/progs/test_hash_large_key.c b/tools/testing/selftests/bpf/progs/test_hash_large_key.c index 473a22794a62..8b438128f46b 100644 --- a/tools/testing/selftests/bpf/progs/test_hash_large_key.c +++ b/tools/testing/selftests/bpf/progs/test_hash_large_key.c @@ -28,7 +28,7 @@ struct bigelement { SEC("raw_tracepoint/sys_enter") int bpf_hash_large_key_test(void *ctx) { - int zero = 0, err = 1, value = 42; + int zero = 0, value = 42; struct bigelement *key; key = bpf_map_lookup_elem(&key_map, &zero); diff --git 
a/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c b/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c index a72a5bf3812a..27109b877714 100644 --- a/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c +++ b/tools/testing/selftests/bpf/progs/test_ksyms_btf_write_check.c @@ -35,7 +35,6 @@ SEC("raw_tp/sys_enter") int handler2(const void *ctx) { int *active; - __u32 cpu; active = bpf_this_cpu_ptr(&bpf_prog_active); write_active(active); diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_weak.c b/tools/testing/selftests/bpf/progs/test_ksyms_weak.c index 5f8379aadb29..d00268c91e19 100644 --- a/tools/testing/selftests/bpf/progs/test_ksyms_weak.c +++ b/tools/testing/selftests/bpf/progs/test_ksyms_weak.c @@ -20,6 +20,8 @@ __u64 out__non_existent_typed = -1; /* test existing weak symbols can be resolved. */ extern const struct rq runqueues __ksym __weak; /* typed */ extern const void bpf_prog_active __ksym __weak; /* typeless */ +struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym __weak; +void bpf_testmod_test_mod_kfunc(int i) __ksym __weak; /* non-existent weak symbols. */ @@ -29,6 +31,7 @@ extern const void bpf_link_fops1 __ksym __weak; /* typed symbols, default to zero. */ extern const int bpf_link_fops2 __ksym __weak; +void invalid_kfunc(void) __ksym __weak; SEC("raw_tp/sys_enter") int pass_handler(const void *ctx) @@ -37,7 +40,7 @@ int pass_handler(const void *ctx) /* tests existing symbols. */ rq = (struct rq *)bpf_per_cpu_ptr(&runqueues, 0); - if (rq) + if (rq && bpf_ksym_exists(&runqueues)) out__existing_typed = rq->cpu; out__existing_typeless = (__u64)&bpf_prog_active; @@ -50,6 +53,18 @@ int pass_handler(const void *ctx) if (&bpf_link_fops2) /* can't happen */ out__non_existent_typed = (__u64)bpf_per_cpu_ptr(&bpf_link_fops2, 0); + if (!bpf_ksym_exists(bpf_task_acquire)) + /* dead code won't be seen by the verifier */ + bpf_task_acquire(0); + + if (!bpf_ksym_exists(bpf_testmod_test_mod_kfunc)) + /* dead code won't be seen by the verifier */ + bpf_testmod_test_mod_kfunc(0); + + if (bpf_ksym_exists(invalid_kfunc)) + /* dead code won't be seen by the verifier */ + invalid_kfunc(); + return 0; } diff --git a/tools/testing/selftests/bpf/progs/test_legacy_printk.c b/tools/testing/selftests/bpf/progs/test_legacy_printk.c index 64c2d9ced529..42718cd8e6a4 100644 --- a/tools/testing/selftests/bpf/progs/test_legacy_printk.c +++ b/tools/testing/selftests/bpf/progs/test_legacy_printk.c @@ -56,7 +56,7 @@ int handle_legacy(void *ctx) SEC("tp/raw_syscalls/sys_enter") int handle_modern(void *ctx) { - int zero = 0, cur_pid; + int cur_pid; cur_pid = bpf_get_current_pid_tgid() >> 32; if (cur_pid != my_pid_var) diff --git a/tools/testing/selftests/bpf/progs/test_map_lock.c b/tools/testing/selftests/bpf/progs/test_map_lock.c index acf073db9e8b..1c02511b73cd 100644 --- a/tools/testing/selftests/bpf/progs/test_map_lock.c +++ b/tools/testing/selftests/bpf/progs/test_map_lock.c @@ -33,7 +33,7 @@ struct { SEC("cgroup/skb") int bpf_map_lock_test(struct __sk_buff *skb) { - struct hmap_elem zero = {}, *val; + struct hmap_elem *val; int rnd = bpf_get_prandom_u32(); int key = 0, err = 1, i; struct array_elem *q; diff --git a/tools/testing/selftests/bpf/progs/test_map_ops.c b/tools/testing/selftests/bpf/progs/test_map_ops.c new file mode 100644 index 000000000000..b53b46a090c8 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_map_ops.c @@ -0,0 +1,138 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. 
*/ + +#include "vmlinux.h" +#include <bpf/bpf_helpers.h> + +char _license[] SEC("license") = "GPL"; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); +} hash_map SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_STACK); + __uint(max_entries, 1); + __type(value, int); +} stack_map SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, int); +} array_map SEC(".maps"); + +const volatile pid_t pid; +long err = 0; + +static u64 callback(u64 map, u64 key, u64 val, u64 ctx, u64 flags) +{ + return 0; +} + +SEC("tp/syscalls/sys_enter_getpid") +int map_update(void *ctx) +{ + const int key = 0; + const int val = 1; + + if (pid != (bpf_get_current_pid_tgid() >> 32)) + return 0; + + err = bpf_map_update_elem(&hash_map, &key, &val, BPF_NOEXIST); + + return 0; +} + +SEC("tp/syscalls/sys_enter_getppid") +int map_delete(void *ctx) +{ + const int key = 0; + + if (pid != (bpf_get_current_pid_tgid() >> 32)) + return 0; + + err = bpf_map_delete_elem(&hash_map, &key); + + return 0; +} + +SEC("tp/syscalls/sys_enter_getuid") +int map_push(void *ctx) +{ + const int val = 1; + + if (pid != (bpf_get_current_pid_tgid() >> 32)) + return 0; + + err = bpf_map_push_elem(&stack_map, &val, 0); + + return 0; +} + +SEC("tp/syscalls/sys_enter_geteuid") +int map_pop(void *ctx) +{ + int val; + + if (pid != (bpf_get_current_pid_tgid() >> 32)) + return 0; + + err = bpf_map_pop_elem(&stack_map, &val); + + return 0; +} + +SEC("tp/syscalls/sys_enter_getgid") +int map_peek(void *ctx) +{ + int val; + + if (pid != (bpf_get_current_pid_tgid() >> 32)) + return 0; + + err = bpf_map_peek_elem(&stack_map, &val); + + return 0; +} + +SEC("tp/syscalls/sys_enter_gettid") +int map_for_each_pass(void *ctx) +{ + const int key = 0; + const int val = 1; + const u64 flags = 0; + int callback_ctx; + + if (pid != (bpf_get_current_pid_tgid() >> 32)) + return 0; + + bpf_map_update_elem(&array_map, &key, &val, flags); + + err = bpf_for_each_map_elem(&array_map, callback, &callback_ctx, flags); + + return 0; +} + +SEC("tp/syscalls/sys_enter_getpgid") +int map_for_each_fail(void *ctx) +{ + const int key = 0; + const int val = 1; + const u64 flags = BPF_NOEXIST; + int callback_ctx; + + if (pid != (bpf_get_current_pid_tgid() >> 32)) + return 0; + + bpf_map_update_elem(&array_map, &key, &val, flags); + + /* calling for_each with non-zero flags will return error */ + err = bpf_for_each_map_elem(&array_map, callback, &callback_ctx, flags); + + return 0; +} diff --git a/tools/testing/selftests/bpf/progs/test_obj_id.c b/tools/testing/selftests/bpf/progs/test_obj_id.c index ded71b3ff6b4..2850ae788a91 100644 --- a/tools/testing/selftests/bpf/progs/test_obj_id.c +++ b/tools/testing/selftests/bpf/progs/test_obj_id.c @@ -4,6 +4,7 @@ #include <stddef.h> #include <linux/bpf.h> #include <bpf/bpf_helpers.h> +#include "bpf_misc.h" struct { __uint(type, BPF_MAP_TYPE_ARRAY); @@ -19,6 +20,7 @@ int test_obj_id(void *ctx) __u64 *value; value = bpf_map_lookup_elem(&test_map_id, &key); + __sink(value); return 0; } diff --git a/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt.c b/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt.c index 79bab9b50e9e..d9b2ba7ac340 100644 --- a/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt.c +++ b/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt.c @@ -87,7 +87,6 @@ int xdp_ingress_v6(struct xdp_md *xdp) __u8 tcp_hdr_opt_len = 0; struct tcphdr *tcp_hdr; __u64 tcp_offset = 0; - __u32 
off; int err; tcp_offset = sizeof(struct ethhdr) + sizeof(struct ipv6hdr); diff --git a/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt_dynptr.c b/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt_dynptr.c index d3b319722e30..dc6e43bc6a62 100644 --- a/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt_dynptr.c +++ b/tools/testing/selftests/bpf/progs/test_parse_tcp_hdr_opt_dynptr.c @@ -30,7 +30,7 @@ __u32 server_id; static int parse_hdr_opt(struct bpf_dynptr *ptr, __u32 *off, __u8 *hdr_bytes_remaining, __u32 *server_id) { - __u8 *tcp_opt, kind, hdr_len; + __u8 kind, hdr_len; __u8 buffer[sizeof(kind) + sizeof(hdr_len) + sizeof(*server_id)]; __u8 *data; diff --git a/tools/testing/selftests/bpf/progs/test_pkt_access.c b/tools/testing/selftests/bpf/progs/test_pkt_access.c index 5cd7c096f62d..bce7173152c6 100644 --- a/tools/testing/selftests/bpf/progs/test_pkt_access.c +++ b/tools/testing/selftests/bpf/progs/test_pkt_access.c @@ -13,6 +13,7 @@ #include <linux/pkt_cls.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_endian.h> +#include "bpf_misc.h" /* llvm will optimize both subprograms into exactly the same BPF assembly * @@ -51,6 +52,8 @@ int get_skb_len(struct __sk_buff *skb) { volatile char buf[MAX_STACK] = {}; + __sink(buf[MAX_STACK - 1]); + return skb->len; } @@ -73,6 +76,8 @@ int get_skb_ifindex(int val, struct __sk_buff *skb, int var) { volatile char buf[MAX_STACK] = {}; + __sink(buf[MAX_STACK - 1]); + return skb->ifindex * val * var; } diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf.c b/tools/testing/selftests/bpf/progs/test_ringbuf.c index 5bdc0d38efc0..501cefa97633 100644 --- a/tools/testing/selftests/bpf/progs/test_ringbuf.c +++ b/tools/testing/selftests/bpf/progs/test_ringbuf.c @@ -41,7 +41,6 @@ int test_ringbuf(void *ctx) { int cur_pid = bpf_get_current_pid_tgid() >> 32; struct sample *sample; - int zero = 0; if (cur_pid != pid) return 0; diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c b/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c index 2760bf60d05a..21bb7da90ea5 100644 --- a/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c +++ b/tools/testing/selftests/bpf/progs/test_ringbuf_map_key.c @@ -53,6 +53,7 @@ int test_ringbuf_mem_map_key(void *ctx) /* test using 'sample' (PTR_TO_MEM | MEM_ALLOC) as map key arg */ lookup_val = (int *)bpf_map_lookup_elem(&hash_map, sample); + __sink(lookup_val); /* workaround - memcpy is necessary so that verifier doesn't * complain with: diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c index e416e0ce12b7..9626baa6779c 100644 --- a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c +++ b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c @@ -59,7 +59,6 @@ int test_ringbuf(void *ctx) int cur_pid = bpf_get_current_pid_tgid() >> 32; struct sample *sample; void *rb; - int zero = 0; if (cur_pid != pid) return 0; diff --git a/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c b/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c index 7d56ed47cd4d..5eb25c6ad75b 100644 --- a/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c +++ b/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c @@ -64,7 +64,7 @@ SEC("sk_reuseport") int _select_by_skb_data(struct sk_reuseport_md *reuse_md) { __u32 linum, index = 0, flags = 0, index_zero = 0; - __u32 *result_cnt, *linum_value; + __u32 *result_cnt; struct data_check data_check = {}; struct cmd *cmd, cmd_copy; void *data, 
*data_end; diff --git a/tools/testing/selftests/bpf/progs/test_sk_assign.c b/tools/testing/selftests/bpf/progs/test_sk_assign.c index 21b19b758c4e..3079244c7f96 100644 --- a/tools/testing/selftests/bpf/progs/test_sk_assign.c +++ b/tools/testing/selftests/bpf/progs/test_sk_assign.c @@ -15,6 +15,7 @@ #include <sys/socket.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_endian.h> +#include "bpf_misc.h" #if defined(IPROUTE2_HAVE_LIBBPF) /* Use a new-style map definition. */ @@ -57,7 +58,6 @@ get_tuple(struct __sk_buff *skb, bool *ipv4, bool *tcp) void *data = (void *)(long)skb->data; struct bpf_sock_tuple *result; struct ethhdr *eth; - __u64 tuple_len; __u8 proto = 0; __u64 ihl_len; @@ -94,6 +94,7 @@ get_tuple(struct __sk_buff *skb, bool *ipv4, bool *tcp) return NULL; *tcp = (proto == IPPROTO_TCP); + __sink(ihl_len); return result; } @@ -173,7 +174,6 @@ int bpf_sk_assign_test(struct __sk_buff *skb) struct bpf_sock_tuple *tuple; bool ipv4 = false; bool tcp = false; - int tuple_len; int ret = 0; tuple = get_tuple(skb, &ipv4, &tcp); diff --git a/tools/testing/selftests/bpf/progs/test_sk_lookup.c b/tools/testing/selftests/bpf/progs/test_sk_lookup.c index 6058dcb11b36..71f844b9b902 100644 --- a/tools/testing/selftests/bpf/progs/test_sk_lookup.c +++ b/tools/testing/selftests/bpf/progs/test_sk_lookup.c @@ -391,7 +391,6 @@ SEC("sk_lookup") int ctx_narrow_access(struct bpf_sk_lookup *ctx) { struct bpf_sock *sk; - int err, family; __u32 val_u32; bool v4; @@ -645,9 +644,7 @@ static __always_inline int select_server_a(struct bpf_sk_lookup *ctx) SEC("sk_lookup") int multi_prog_redir1(struct bpf_sk_lookup *ctx) { - int ret; - - ret = select_server_a(ctx); + (void)select_server_a(ctx); bpf_map_update_elem(&run_map, &KEY_PROG1, &PROG_DONE, BPF_ANY); return SK_PASS; } @@ -655,9 +652,7 @@ int multi_prog_redir1(struct bpf_sk_lookup *ctx) SEC("sk_lookup") int multi_prog_redir2(struct bpf_sk_lookup *ctx) { - int ret; - - ret = select_server_a(ctx); + (void)select_server_a(ctx); bpf_map_update_elem(&run_map, &KEY_PROG2, &PROG_DONE, BPF_ANY); return SK_PASS; } diff --git a/tools/testing/selftests/bpf/progs/test_sk_lookup_kern.c b/tools/testing/selftests/bpf/progs/test_sk_lookup_kern.c index 6ccf6d546074..e9efc3263022 100644 --- a/tools/testing/selftests/bpf/progs/test_sk_lookup_kern.c +++ b/tools/testing/selftests/bpf/progs/test_sk_lookup_kern.c @@ -110,7 +110,6 @@ int err_modify_sk_pointer(struct __sk_buff *skb) { struct bpf_sock_tuple tuple = {}; struct bpf_sock *sk; - __u32 family; sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); if (sk) { @@ -125,7 +124,6 @@ int err_modify_sk_or_null_pointer(struct __sk_buff *skb) { struct bpf_sock_tuple tuple = {}; struct bpf_sock *sk; - __u32 family; sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple), BPF_F_CURRENT_NETNS, 0); sk += 1; diff --git a/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c b/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c index 6dc1f28fc4b6..02e718f06e0f 100644 --- a/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c +++ b/tools/testing/selftests/bpf/progs/test_sk_storage_tracing.c @@ -92,4 +92,20 @@ int BPF_PROG(inet_csk_accept, struct sock *sk, int flags, int *err, bool kern, return 0; } +SEC("tp_btf/tcp_retransmit_synack") +int BPF_PROG(tcp_retransmit_synack, struct sock* sk, struct request_sock* req) +{ + /* load only test */ + bpf_sk_storage_get(&sk_stg_map, sk, 0, 0); + bpf_sk_storage_get(&sk_stg_map, req->sk, 0, 0); + return 0; +} + +SEC("tp_btf/tcp_bad_csum") +int BPF_PROG(tcp_bad_csum, 
struct sk_buff* skb) +{ + bpf_sk_storage_get(&sk_stg_map, skb->sk, 0, 0); + return 0; +} + char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c index 9f4b8f9f1181..bbad3c2d9aa5 100644 --- a/tools/testing/selftests/bpf/progs/test_sock_fields.c +++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c @@ -121,7 +121,7 @@ static void tpcpy(struct bpf_tcp_sock *dst, SEC("cgroup_skb/egress") int egress_read_sock_fields(struct __sk_buff *skb) { - struct bpf_spinlock_cnt cli_cnt_init = { .lock = 0, .cnt = 0xeB9F }; + struct bpf_spinlock_cnt cli_cnt_init = { .lock = {}, .cnt = 0xeB9F }; struct bpf_spinlock_cnt *pkt_out_cnt, *pkt_out_cnt10; struct bpf_tcp_sock *tp, *tp_ret; struct bpf_sock *sk, *sk_ret; diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h index 6c85b00f27b2..baf9ebc6d903 100644 --- a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h +++ b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h @@ -14,6 +14,7 @@ #include <sys/socket.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_endian.h> +#include "bpf_misc.h" /* Sockmap sample program connects a client and a backend together * using cgroups. @@ -111,12 +112,15 @@ int bpf_prog2(struct __sk_buff *skb) int len, *f, ret, zero = 0; __u64 flags = 0; + __sink(rport); if (lport == 10000) ret = 10; else ret = 1; len = (__u32)skb->data_end - (__u32)skb->data; + __sink(len); + f = bpf_map_lookup_elem(&sock_skb_opts, &zero); if (f && *f) { ret = 3; @@ -180,7 +184,6 @@ int bpf_prog3(struct __sk_buff *skb) if (err) return SK_DROP; bpf_write_pass(skb, 13); -tls_out: return ret; } @@ -188,8 +191,7 @@ SEC("sockops") int bpf_sockmap(struct bpf_sock_ops *skops) { __u32 lport, rport; - int op, err = 0, index, key, ret; - + int op, err, ret; op = (int) skops->op; @@ -228,6 +230,8 @@ int bpf_sockmap(struct bpf_sock_ops *skops) break; } + __sink(err); + return 0; } @@ -321,6 +325,10 @@ int bpf_prog8(struct sk_msg_md *msg) } else { return SK_DROP; } + + __sink(data_end); + __sink(data); + return SK_PASS; } SEC("sk_msg4") diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock.c b/tools/testing/selftests/bpf/progs/test_spin_lock.c index 5bd10409285b..b2440a0ff422 100644 --- a/tools/testing/selftests/bpf/progs/test_spin_lock.c +++ b/tools/testing/selftests/bpf/progs/test_spin_lock.c @@ -3,6 +3,7 @@ #include <linux/bpf.h> #include <linux/version.h> #include <bpf/bpf_helpers.h> +#include "bpf_misc.h" struct hmap_elem { volatile int cnt; @@ -89,6 +90,8 @@ int bpf_spin_lock_test(struct __sk_buff *skb) credit = q->credit; bpf_spin_unlock(&q->lock); + __sink(credit); + /* spin_lock in cgroup local storage */ cls = bpf_get_local_storage(&cls_map, 0); bpf_spin_lock(&cls->lock); diff --git a/tools/testing/selftests/bpf/progs/test_stacktrace_map.c b/tools/testing/selftests/bpf/progs/test_stacktrace_map.c index 728dbd39eff0..47568007b668 100644 --- a/tools/testing/selftests/bpf/progs/test_stacktrace_map.c +++ b/tools/testing/selftests/bpf/progs/test_stacktrace_map.c @@ -38,7 +38,7 @@ struct { __type(value, stack_trace_t); } stack_amap SEC(".maps"); -/* taken from /sys/kernel/debug/tracing/events/sched/sched_switch/format */ +/* taken from /sys/kernel/tracing/events/sched/sched_switch/format */ struct sched_switch_args { unsigned long long pad; char prev_comm[TASK_COMM_LEN]; diff --git a/tools/testing/selftests/bpf/progs/test_tc_dtime.c 
b/tools/testing/selftests/bpf/progs/test_tc_dtime.c index 125beec31834..74ec09f040b7 100644 --- a/tools/testing/selftests/bpf/progs/test_tc_dtime.c +++ b/tools/testing/selftests/bpf/progs/test_tc_dtime.c @@ -163,9 +163,9 @@ static int skb_get_type(struct __sk_buff *skb) ip6h = data + sizeof(struct ethhdr); if (ip6h + 1 > data_end) return -1; - if (v6_equal(ip6h->saddr, (struct in6_addr)ip6_src)) + if (v6_equal(ip6h->saddr, (struct in6_addr){{ip6_src}})) ns = SRC_NS; - else if (v6_equal(ip6h->saddr, (struct in6_addr)ip6_dst)) + else if (v6_equal(ip6h->saddr, (struct in6_addr){{ip6_dst}})) ns = DST_NS; inet_proto = ip6h->nexthdr; trans = ip6h + 1; diff --git a/tools/testing/selftests/bpf/progs/test_tc_neigh.c b/tools/testing/selftests/bpf/progs/test_tc_neigh.c index 3e32ea375ab4..de15155f2609 100644 --- a/tools/testing/selftests/bpf/progs/test_tc_neigh.c +++ b/tools/testing/selftests/bpf/progs/test_tc_neigh.c @@ -94,7 +94,7 @@ int tc_dst(struct __sk_buff *skb) redirect = is_remote_ep_v4(skb, __bpf_constant_htonl(ip4_src)); break; case __bpf_constant_htons(ETH_P_IPV6): - redirect = is_remote_ep_v6(skb, (struct in6_addr)ip6_src); + redirect = is_remote_ep_v6(skb, (struct in6_addr){{ip6_src}}); break; } @@ -119,7 +119,7 @@ int tc_src(struct __sk_buff *skb) redirect = is_remote_ep_v4(skb, __bpf_constant_htonl(ip4_dst)); break; case __bpf_constant_htons(ETH_P_IPV6): - redirect = is_remote_ep_v6(skb, (struct in6_addr)ip6_dst); + redirect = is_remote_ep_v6(skb, (struct in6_addr){{ip6_dst}}); break; } diff --git a/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c b/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c index 3ded05280757..cf7ed8cbb1fe 100644 --- a/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c +++ b/tools/testing/selftests/bpf/progs/test_tcpbpf_kern.c @@ -46,8 +46,6 @@ int bpf_testcb(struct bpf_sock_ops *skops) struct bpf_sock_ops *reuse = skops; struct tcphdr *thdr; int window_clamp = 9216; - int good_call_rv = 0; - int bad_call_rv = 0; int save_syn = 1; int rv = -1; int v = 0; diff --git a/tools/testing/selftests/bpf/progs/test_tracepoint.c b/tools/testing/selftests/bpf/progs/test_tracepoint.c index 43bd7a20cc50..4cb8bbb6a320 100644 --- a/tools/testing/selftests/bpf/progs/test_tracepoint.c +++ b/tools/testing/selftests/bpf/progs/test_tracepoint.c @@ -4,7 +4,7 @@ #include <vmlinux.h> #include <bpf/bpf_helpers.h> -/* taken from /sys/kernel/debug/tracing/events/sched/sched_switch/format */ +/* taken from /sys/kernel/tracing/events/sched/sched_switch/format */ struct sched_switch_args { unsigned long long pad; char prev_comm[TASK_COMM_LEN]; diff --git a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c index 95b4aa0928ba..f66af753bbbb 100644 --- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c +++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c @@ -52,6 +52,21 @@ struct vxlan_metadata { __u32 gbp; }; +struct bpf_fou_encap { + __be16 sport; + __be16 dport; +}; + +enum bpf_fou_encap_type { + FOU_BPF_ENCAP_FOU, + FOU_BPF_ENCAP_GUE, +}; + +int bpf_skb_set_fou_encap(struct __sk_buff *skb_ctx, + struct bpf_fou_encap *encap, int type) __ksym; +int bpf_skb_get_fou_encap(struct __sk_buff *skb_ctx, + struct bpf_fou_encap *encap) __ksym; + struct { __uint(type, BPF_MAP_TYPE_ARRAY); __uint(max_entries, 1); @@ -209,7 +224,6 @@ int erspan_get_tunnel(struct __sk_buff *skb) { struct bpf_tunnel_key key; struct erspan_metadata md; - __u32 index; int ret; ret = bpf_skb_get_tunnel_key(skb, &key, sizeof(key), 0); @@ -289,7 
+303,6 @@ int ip4ip6erspan_get_tunnel(struct __sk_buff *skb) { struct bpf_tunnel_key key; struct erspan_metadata md; - __u32 index; int ret; ret = bpf_skb_get_tunnel_key(skb, &key, sizeof(key), @@ -405,8 +418,6 @@ int vxlan_get_tunnel_src(struct __sk_buff *skb) int ret; struct bpf_tunnel_key key; struct vxlan_metadata md; - __u32 orig_daddr; - __u32 index = 0; ret = bpf_skb_get_tunnel_key(skb, &key, sizeof(key), BPF_F_TUNINFO_FLAGS); @@ -443,9 +454,7 @@ int veth_set_outer_dst(struct __sk_buff *skb) void *data_end = (void *)(long)skb->data_end; struct udphdr *udph; struct iphdr *iph; - __u32 index = 0; int ret = 0; - int shrink; __s64 csum; if ((void *)eth + sizeof(*eth) > data_end) { @@ -756,6 +765,108 @@ int ipip_get_tunnel(struct __sk_buff *skb) } SEC("tc") +int ipip_gue_set_tunnel(struct __sk_buff *skb) +{ + struct bpf_tunnel_key key = {}; + struct bpf_fou_encap encap = {}; + void *data = (void *)(long)skb->data; + struct iphdr *iph = data; + void *data_end = (void *)(long)skb->data_end; + int ret; + + if (data + sizeof(*iph) > data_end) { + log_err(1); + return TC_ACT_SHOT; + } + + key.tunnel_ttl = 64; + if (iph->protocol == IPPROTO_ICMP) + key.remote_ipv4 = 0xac100164; /* 172.16.1.100 */ + + ret = bpf_skb_set_tunnel_key(skb, &key, sizeof(key), 0); + if (ret < 0) { + log_err(ret); + return TC_ACT_SHOT; + } + + encap.sport = 0; + encap.dport = bpf_htons(5555); + + ret = bpf_skb_set_fou_encap(skb, &encap, FOU_BPF_ENCAP_GUE); + if (ret < 0) { + log_err(ret); + return TC_ACT_SHOT; + } + + return TC_ACT_OK; +} + +SEC("tc") +int ipip_fou_set_tunnel(struct __sk_buff *skb) +{ + struct bpf_tunnel_key key = {}; + struct bpf_fou_encap encap = {}; + void *data = (void *)(long)skb->data; + struct iphdr *iph = data; + void *data_end = (void *)(long)skb->data_end; + int ret; + + if (data + sizeof(*iph) > data_end) { + log_err(1); + return TC_ACT_SHOT; + } + + key.tunnel_ttl = 64; + if (iph->protocol == IPPROTO_ICMP) + key.remote_ipv4 = 0xac100164; /* 172.16.1.100 */ + + ret = bpf_skb_set_tunnel_key(skb, &key, sizeof(key), 0); + if (ret < 0) { + log_err(ret); + return TC_ACT_SHOT; + } + + encap.sport = 0; + encap.dport = bpf_htons(5555); + + ret = bpf_skb_set_fou_encap(skb, &encap, FOU_BPF_ENCAP_FOU); + if (ret < 0) { + log_err(ret); + return TC_ACT_SHOT; + } + + return TC_ACT_OK; +} + +SEC("tc") +int ipip_encap_get_tunnel(struct __sk_buff *skb) +{ + int ret; + struct bpf_tunnel_key key = {}; + struct bpf_fou_encap encap = {}; + + ret = bpf_skb_get_tunnel_key(skb, &key, sizeof(key), 0); + if (ret < 0) { + log_err(ret); + return TC_ACT_SHOT; + } + + ret = bpf_skb_get_fou_encap(skb, &encap); + if (ret < 0) { + log_err(ret); + return TC_ACT_SHOT; + } + + if (bpf_ntohs(encap.dport) != 5555) + return TC_ACT_SHOT; + + bpf_printk("%d remote ip 0x%x, sport %d, dport %d\n", ret, + key.remote_ipv4, bpf_ntohs(encap.sport), + bpf_ntohs(encap.dport)); + return TC_ACT_OK; +} + +SEC("tc") int ipip6_set_tunnel(struct __sk_buff *skb) { struct bpf_tunnel_key key = {}; diff --git a/tools/testing/selftests/bpf/progs/test_usdt_multispec.c b/tools/testing/selftests/bpf/progs/test_usdt_multispec.c index aa6de32b50d1..962f3462066a 100644 --- a/tools/testing/selftests/bpf/progs/test_usdt_multispec.c +++ b/tools/testing/selftests/bpf/progs/test_usdt_multispec.c @@ -18,8 +18,6 @@ int usdt_100_sum; SEC("usdt//proc/self/exe:test:usdt_100") int BPF_USDT(usdt_100, int x) { - long tmp; - if (my_pid != (bpf_get_current_pid_tgid() >> 32)) return 0; diff --git a/tools/testing/selftests/bpf/progs/test_verif_scale1.c 
b/tools/testing/selftests/bpf/progs/test_verif_scale1.c index ac6135d9374c..323a73fb2e8c 100644 --- a/tools/testing/selftests/bpf/progs/test_verif_scale1.c +++ b/tools/testing/selftests/bpf/progs/test_verif_scale1.c @@ -11,7 +11,7 @@ int balancer_ingress(struct __sk_buff *ctx) void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; void *ptr; - int ret = 0, nh_off, i = 0; + int nh_off, i = 0; nh_off = 14; diff --git a/tools/testing/selftests/bpf/progs/test_verif_scale2.c b/tools/testing/selftests/bpf/progs/test_verif_scale2.c index f90ffcafd1e8..f5318f757084 100644 --- a/tools/testing/selftests/bpf/progs/test_verif_scale2.c +++ b/tools/testing/selftests/bpf/progs/test_verif_scale2.c @@ -11,7 +11,7 @@ int balancer_ingress(struct __sk_buff *ctx) void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; void *ptr; - int ret = 0, nh_off, i = 0; + int nh_off, i = 0; nh_off = 14; diff --git a/tools/testing/selftests/bpf/progs/test_verif_scale3.c b/tools/testing/selftests/bpf/progs/test_verif_scale3.c index ca33a9b711c4..2e06dbb1ad5c 100644 --- a/tools/testing/selftests/bpf/progs/test_verif_scale3.c +++ b/tools/testing/selftests/bpf/progs/test_verif_scale3.c @@ -11,7 +11,7 @@ int balancer_ingress(struct __sk_buff *ctx) void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; void *ptr; - int ret = 0, nh_off, i = 0; + int nh_off, i = 0; nh_off = 32; diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c index 297c260fc364..81bb38d72ced 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c @@ -5,8 +5,6 @@ SEC("xdp") int _xdp_adjust_tail_grow(struct xdp_md *xdp) { - void *data_end = (void *)(long)xdp->data_end; - void *data = (void *)(long)xdp->data; int data_len = bpf_xdp_get_buff_len(xdp); int offset = 0; /* SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */ diff --git a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c index 3379d303f41a..ee48c4963971 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_bpf2bpf.c @@ -45,8 +45,6 @@ SEC("fentry/FUNC") int BPF_PROG(trace_on_entry, struct xdp_buff *xdp) { struct meta meta; - void *data_end = (void *)(long)xdp->data_end; - void *data = (void *)(long)xdp->data; meta.ifindex = xdp->rxq->dev->ifindex; meta.pkt_len = bpf_xdp_get_buff_len((struct xdp_md *)xdp); diff --git a/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c b/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c index 77a123071940..5baaafed0d2d 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_do_redirect.c @@ -4,6 +4,19 @@ #define ETH_ALEN 6 #define HDR_SZ (sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + sizeof(struct udphdr)) + +/** + * enum frame_mark - magics to distinguish page/packet paths + * @MARK_XMIT: page was recycled due to the frame being "xmitted" by the NIC. + * @MARK_IN: frame is being processed by the input XDP prog. + * @MARK_SKB: frame did hit the TC ingress hook as an skb. 
+ */ +enum frame_mark { + MARK_XMIT = 0U, + MARK_IN = 0x42, + MARK_SKB = 0x45, +}; + const volatile int ifindex_out; const volatile int ifindex_in; const volatile __u8 expect_dst[ETH_ALEN]; @@ -34,12 +47,12 @@ int xdp_redirect(struct xdp_md *xdp) if (*metadata != 0x42) return XDP_ABORTED; - if (*payload == 0) { - *payload = 0x42; + if (*payload == MARK_XMIT) pkts_seen_zero++; - } - if (bpf_xdp_adjust_meta(xdp, 4)) + *payload = MARK_IN; + + if (bpf_xdp_adjust_meta(xdp, sizeof(__u64))) return XDP_ABORTED; if (retcode > XDP_PASS) @@ -51,7 +64,7 @@ int xdp_redirect(struct xdp_md *xdp) return ret; } -static bool check_pkt(void *data, void *data_end) +static bool check_pkt(void *data, void *data_end, const __u32 mark) { struct ipv6hdr *iph = data + sizeof(struct ethhdr); __u8 *payload = data + HDR_SZ; @@ -59,13 +72,13 @@ static bool check_pkt(void *data, void *data_end) if (payload + 1 > data_end) return false; - if (iph->nexthdr != IPPROTO_UDP || *payload != 0x42) + if (iph->nexthdr != IPPROTO_UDP || *payload != MARK_IN) return false; /* reset the payload so the same packet doesn't get counted twice when * it cycles back through the kernel path and out the dst veth */ - *payload = 0; + *payload = mark; return true; } @@ -75,11 +88,11 @@ int xdp_count_pkts(struct xdp_md *xdp) void *data = (void *)(long)xdp->data; void *data_end = (void *)(long)xdp->data_end; - if (check_pkt(data, data_end)) + if (check_pkt(data, data_end, MARK_XMIT)) pkts_seen_xdp++; - /* Return XDP_DROP to make sure the data page is recycled, like when it - * exits a physical NIC. Recycled pages will be counted in the + /* Return %XDP_DROP to recycle the data page with %MARK_XMIT, like + * it exited a physical NIC. Those pages will be counted in the * pkts_seen_zero counter above. */ return XDP_DROP; @@ -91,9 +104,12 @@ int tc_count_pkts(struct __sk_buff *skb) void *data = (void *)(long)skb->data; void *data_end = (void *)(long)skb->data_end; - if (check_pkt(data, data_end)) + if (check_pkt(data, data_end, MARK_SKB)) pkts_seen_tc++; + /* Will be either recycled or freed, %MARK_SKB makes sure it won't + * hit any of the counters above. 
+ */ return 0; } diff --git a/tools/testing/selftests/bpf/progs/test_xdp_dynptr.c b/tools/testing/selftests/bpf/progs/test_xdp_dynptr.c index 7521a805b506..25ee4a22e48d 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_dynptr.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_dynptr.c @@ -82,7 +82,6 @@ static __always_inline int handle_ipv4(struct xdp_md *xdp, struct bpf_dynptr *xd struct iptnl_info *tnl; struct ethhdr *new_eth; struct ethhdr *old_eth; - __u32 transport_hdr_sz; struct iphdr *iph; __u16 *next_iph; __u16 payload_len; @@ -165,7 +164,6 @@ static __always_inline int handle_ipv6(struct xdp_md *xdp, struct bpf_dynptr *xd struct iptnl_info *tnl; struct ethhdr *new_eth; struct ethhdr *old_eth; - __u32 transport_hdr_sz; struct ipv6hdr *ip6h; __u16 payload_len; struct vip vip = {}; diff --git a/tools/testing/selftests/bpf/progs/test_xdp_noinline.c b/tools/testing/selftests/bpf/progs/test_xdp_noinline.c index ba48fcb98ab2..42c8f6ded0e4 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_noinline.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_noinline.c @@ -372,45 +372,6 @@ bool encap_v4(struct xdp_md *xdp, struct ctl_value *cval, } static __attribute__ ((noinline)) -bool decap_v6(struct xdp_md *xdp, void **data, void **data_end, bool inner_v4) -{ - struct eth_hdr *new_eth; - struct eth_hdr *old_eth; - - old_eth = *data; - new_eth = *data + sizeof(struct ipv6hdr); - memcpy(new_eth->eth_source, old_eth->eth_source, 6); - memcpy(new_eth->eth_dest, old_eth->eth_dest, 6); - if (inner_v4) - new_eth->eth_proto = 8; - else - new_eth->eth_proto = 56710; - if (bpf_xdp_adjust_head(xdp, (int)sizeof(struct ipv6hdr))) - return false; - *data = (void *)(long)xdp->data; - *data_end = (void *)(long)xdp->data_end; - return true; -} - -static __attribute__ ((noinline)) -bool decap_v4(struct xdp_md *xdp, void **data, void **data_end) -{ - struct eth_hdr *new_eth; - struct eth_hdr *old_eth; - - old_eth = *data; - new_eth = *data + sizeof(struct iphdr); - memcpy(new_eth->eth_source, old_eth->eth_source, 6); - memcpy(new_eth->eth_dest, old_eth->eth_dest, 6); - new_eth->eth_proto = 8; - if (bpf_xdp_adjust_head(xdp, (int)sizeof(struct iphdr))) - return false; - *data = (void *)(long)xdp->data; - *data_end = (void *)(long)xdp->data_end; - return true; -} - -static __attribute__ ((noinline)) int swap_mac_and_send(void *data, void *data_end) { unsigned char tmp_mac[6]; @@ -430,7 +391,6 @@ int send_icmp_reply(void *data, void *data_end) __u16 *next_iph_u16; __u32 tmp_addr = 0; struct iphdr *iph; - __u32 csum1 = 0; __u32 csum = 0; __u64 off = 0; @@ -662,7 +622,6 @@ static int process_l3_headers_v4(struct packet_description *pckt, void *data_end) { struct iphdr *iph; - __u64 iph_len; int action; iph = data + off; @@ -696,7 +655,6 @@ static int process_packet(void *data, __u64 off, void *data_end, struct packet_description pckt = { }; struct vip_definition vip = { }; struct lb_stats *data_stats; - struct eth_hdr *eth = data; void *lru_map = &lru_cache; struct vip_meta *vip_info; __u32 lru_stats_key = 513; @@ -704,7 +662,6 @@ static int process_packet(void *data, __u64 off, void *data_end, __u32 stats_key = 512; struct ctl_value *cval; __u16 pkt_bytes; - __u64 iph_len; __u8 protocol; __u32 vip_num; int action; diff --git a/tools/testing/selftests/bpf/progs/test_xdp_vlan.c b/tools/testing/selftests/bpf/progs/test_xdp_vlan.c index 4ddcb6dfe500..f3ec8086482d 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_vlan.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_vlan.c @@ -210,19 +210,6 @@ int 
xdp_prognum2(struct xdp_md *ctx) } static __always_inline -void shift_mac_4bytes_16bit(void *data) -{ - __u16 *p = data; - - p[7] = p[5]; /* delete p[7] was vlan_hdr->h_vlan_TCI */ - p[6] = p[4]; /* delete p[6] was ethhdr->h_proto */ - p[5] = p[3]; - p[4] = p[2]; - p[3] = p[1]; - p[2] = p[0]; -} - -static __always_inline void shift_mac_4bytes_32bit(void *data) { __u32 *p = data; diff --git a/tools/testing/selftests/bpf/progs/type_cast.c b/tools/testing/selftests/bpf/progs/type_cast.c index eb78e6f03129..a9629ac230fd 100644 --- a/tools/testing/selftests/bpf/progs/type_cast.c +++ b/tools/testing/selftests/bpf/progs/type_cast.c @@ -63,7 +63,6 @@ SEC("?tp_btf/sys_enter") int BPF_PROG(untrusted_ptr, struct pt_regs *regs, long id) { struct task_struct *task, *task_dup; - long *ptr; task = bpf_get_current_task_btf(); task_dup = bpf_rdonly_cast(task, bpf_core_type_id_kernel(struct task_struct)); diff --git a/tools/testing/selftests/bpf/progs/udp_limit.c b/tools/testing/selftests/bpf/progs/udp_limit.c index 165e3c2dd9a3..4767451b59ac 100644 --- a/tools/testing/selftests/bpf/progs/udp_limit.c +++ b/tools/testing/selftests/bpf/progs/udp_limit.c @@ -17,7 +17,6 @@ SEC("cgroup/sock_create") int sock(struct bpf_sock *ctx) { int *sk_storage; - __u32 key; if (ctx->type != SOCK_DGRAM) return 1; @@ -46,7 +45,6 @@ SEC("cgroup/sock_release") int sock_release(struct bpf_sock *ctx) { int *sk_storage; - __u32 key; if (ctx->type != SOCK_DGRAM) return 1; diff --git a/tools/testing/selftests/bpf/progs/user_ringbuf_success.c b/tools/testing/selftests/bpf/progs/user_ringbuf_success.c index 0ade1110613b..dd3bdf672633 100644 --- a/tools/testing/selftests/bpf/progs/user_ringbuf_success.c +++ b/tools/testing/selftests/bpf/progs/user_ringbuf_success.c @@ -162,8 +162,6 @@ SEC("fentry/" SYS_PREFIX "sys_prctl") int test_user_ringbuf_protocol(void *ctx) { long status = 0; - struct sample *sample = NULL; - struct bpf_dynptr ptr; if (!is_test_process()) return 0; @@ -183,10 +181,6 @@ int test_user_ringbuf_protocol(void *ctx) SEC("fentry/" SYS_PREFIX "sys_getpgid") int test_user_ringbuf(void *ctx) { - int status = 0; - struct sample *sample = NULL; - struct bpf_dynptr ptr; - if (!is_test_process()) return 0; diff --git a/tools/testing/selftests/bpf/progs/verifier_and.c b/tools/testing/selftests/bpf/progs/verifier_and.c new file mode 100644 index 000000000000..e97e518516b6 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_and.c @@ -0,0 +1,107 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/and.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("socket") +__description("invalid and of negative number") +__failure __msg("R0 max value is outside of the allowed memory range") +__failure_unpriv +__flag(BPF_F_ANY_ALIGNMENT) +__naked void invalid_and_of_negative_number(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u8*)(r0 + 0); \ + r1 &= -4; \ + r1 <<= 2; \ + r0 += r1; \ +l0_%=: r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, 
offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("invalid range check") +__failure __msg("R0 max value is outside of the allowed memory range") +__failure_unpriv +__flag(BPF_F_ANY_ALIGNMENT) +__naked void invalid_range_check(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + r9 = 1; \ + w1 %%= 2; \ + w1 += 1; \ + w9 &= w1; \ + w9 += 1; \ + w9 >>= 1; \ + w3 = 1; \ + w3 -= w9; \ + w3 *= 0x10000000; \ + r0 += r3; \ + *(u32*)(r0 + 0) = r3; \ +l0_%=: r0 = r0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("socket") +__description("check known subreg with unknown reg") +__success __failure_unpriv __msg_unpriv("R1 !read_ok") +__retval(0) +__naked void known_subreg_with_unknown_reg(void) +{ + asm volatile (" \ + call %[bpf_get_prandom_u32]; \ + r0 <<= 32; \ + r0 += 1; \ + r0 &= 0xFFFF1234; \ + /* Upper bits are unknown but AND above masks out 1 zero'ing lower bits */\ + if w0 < 1 goto l0_%=; \ + r1 = *(u32*)(r1 + 512); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_array_access.c b/tools/testing/selftests/bpf/progs/verifier_array_access.c new file mode 100644 index 000000000000..95d7ecc12963 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_array_access.c @@ -0,0 +1,529 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/array_access.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); + __uint(map_flags, BPF_F_RDONLY_PROG); +} map_array_ro SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); + __uint(map_flags, BPF_F_WRONLY_PROG); +} map_array_wo SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("socket") +__description("valid map access into an array with a constant") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) +__naked void an_array_with_a_constant_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("valid map access into an array with a register") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void an_array_with_a_register_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 4; \ + r1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ 
+" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("valid map access into an array with a variable") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void an_array_with_a_variable_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + if r1 >= %[max_entries] goto l0_%=; \ + r1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(max_entries, MAX_ENTRIES), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("valid map access into an array with a signed variable") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void array_with_a_signed_variable(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + if w1 s> 0xffffffff goto l1_%=; \ + w1 = 0; \ +l1_%=: w2 = %[max_entries]; \ + if r2 s> r1 goto l2_%=; \ + w1 = 0; \ +l2_%=: w1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(max_entries, MAX_ENTRIES), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("invalid map access into an array with a constant") +__failure __msg("invalid access to map value, value_size=48 off=48 size=8") +__failure_unpriv +__naked void an_array_with_a_constant_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + %[__imm_0]) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, (MAX_ENTRIES + 1) << 2), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("invalid map access into an array with a register") +__failure __msg("R0 min value is outside of the allowed memory range") +__failure_unpriv +__flag(BPF_F_ANY_ALIGNMENT) +__naked void an_array_with_a_register_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = %[__imm_0]; \ + r1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, MAX_ENTRIES + 1), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("invalid map access into an array with a variable") +__failure +__msg("R0 unbounded memory access, make sure to bounds check any such access") +__failure_unpriv +__flag(BPF_F_ANY_ALIGNMENT) +__naked void an_array_with_a_variable_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] 
ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + r1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("invalid map access into an array with no floor check") +__failure __msg("R0 unbounded memory access") +__failure_unpriv __msg_unpriv("R0 leaks addr") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void array_with_no_floor_check(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r0 + 0); \ + w2 = %[max_entries]; \ + if r2 s> r1 goto l1_%=; \ + w1 = 0; \ +l1_%=: w1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(max_entries, MAX_ENTRIES), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("invalid map access into an array with a invalid max check") +__failure __msg("invalid access to map value, value_size=48 off=44 size=8") +__failure_unpriv __msg_unpriv("R0 leaks addr") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void with_a_invalid_max_check_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + w2 = %[__imm_0]; \ + if r2 > r1 goto l1_%=; \ + w1 = 0; \ +l1_%=: w1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, MAX_ENTRIES + 1), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("invalid map access into an array with a invalid max check") +__failure __msg("R0 pointer += pointer") +__failure_unpriv +__flag(BPF_F_ANY_ALIGNMENT) +__naked void with_a_invalid_max_check_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r8 = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 += r8; \ + r0 = *(u32*)(r0 + %[test_val_foo]); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("socket") +__description("valid read map access into a read-only array 1") +__success __success_unpriv __retval(28) +__naked void a_read_only_array_1_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_ro] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 = *(u32*)(r0 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_ro) + : __clobber_all); +} + +SEC("tc") +__description("valid read map access into a read-only array 2") +__success __retval(65507) +__naked void a_read_only_array_2_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_ro] ll; \ + call 
%[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = 4; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 &= 0xffff; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm(bpf_map_lookup_elem), + __imm_addr(map_array_ro) + : __clobber_all); +} + +SEC("socket") +__description("invalid write map access into a read-only array 1") +__failure __msg("write into map forbidden") +__failure_unpriv +__naked void a_read_only_array_1_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_ro] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 42; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_ro) + : __clobber_all); +} + +SEC("tc") +__description("invalid write map access into a read-only array 2") +__failure __msg("write into map forbidden") +__naked void a_read_only_array_2_2(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_ro] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r6; \ + r2 = 0; \ + r3 = r0; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_skb_load_bytes), + __imm_addr(map_array_ro) + : __clobber_all); +} + +SEC("socket") +__description("valid write map access into a write-only array 1") +__success __success_unpriv __retval(1) +__naked void a_write_only_array_1_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_wo] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 42; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: r0 = 1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_wo) + : __clobber_all); +} + +SEC("tc") +__description("valid write map access into a write-only array 2") +__success __retval(0) +__naked void a_write_only_array_2_1(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_wo] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r6; \ + r2 = 0; \ + r3 = r0; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_skb_load_bytes), + __imm_addr(map_array_wo) + : __clobber_all); +} + +SEC("socket") +__description("invalid read map access into a write-only array 1") +__failure __msg("read from map forbidden") +__failure_unpriv +__naked void a_write_only_array_1_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_wo] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 = *(u64*)(r0 + 0); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_wo) + : __clobber_all); +} + +SEC("tc") +__description("invalid read map access into a write-only array 2") +__failure __msg("read from map forbidden") +__naked void a_write_only_array_2_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_wo] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = 4; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: exit; \ +" : + : __imm(bpf_csum_diff), + __imm(bpf_map_lookup_elem), + __imm_addr(map_array_wo) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git 
a/tools/testing/selftests/bpf/progs/verifier_basic_stack.c b/tools/testing/selftests/bpf/progs/verifier_basic_stack.c new file mode 100644 index 000000000000..359df865a8f3 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_basic_stack.c @@ -0,0 +1,100 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/basic_stack.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("socket") +__description("stack out of bounds") +__failure __msg("invalid write to stack") +__failure_unpriv +__naked void stack_out_of_bounds(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 + 8) = r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("uninitialized stack1") +__failure __msg("invalid indirect read from stack") +__failure_unpriv +__naked void uninitialized_stack1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("uninitialized stack2") +__failure __msg("invalid read from stack") +__failure_unpriv +__naked void uninitialized_stack2(void) +{ + asm volatile (" \ + r2 = r10; \ + r0 = *(u64*)(r2 - 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("invalid fp arithmetic") +__failure __msg("R1 subtraction from stack pointer") +__failure_unpriv +__naked void invalid_fp_arithmetic(void) +{ + /* If this gets ever changed, make sure JITs can deal with it. */ + asm volatile (" \ + r0 = 0; \ + r1 = r10; \ + r1 -= 8; \ + *(u64*)(r1 + 0) = r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("non-invalid fp arithmetic") +__success __success_unpriv __retval(0) +__naked void non_invalid_fp_arithmetic(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u64*)(r10 - 8) = r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("misaligned read from stack") +__failure __msg("misaligned stack access") +__failure_unpriv +__naked void misaligned_read_from_stack(void) +{ + asm volatile (" \ + r2 = r10; \ + r0 = *(u64*)(r2 - 4); \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_deduction.c b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction.c new file mode 100644 index 000000000000..c506afbdd936 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction.c @@ -0,0 +1,171 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/bounds_deduction.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("socket") +__description("check deducing bounds from const, 1") +__failure __msg("R0 tried to subtract pointer from scalar") +__msg_unpriv("R1 has pointer with unsupported alu operation") +__naked void deducing_bounds_from_const_1(void) +{ + asm volatile (" \ + r0 = 1; \ + if r0 s>= 1 goto l0_%=; \ +l0_%=: r0 -= r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 2") +__success __failure_unpriv +__msg_unpriv("R1 has pointer with unsupported alu operation") +__retval(1) +__naked void deducing_bounds_from_const_2(void) +{ + asm volatile (" \ + r0 = 1; \ + if r0 s>= 1 goto 
l0_%=; \ + exit; \ +l0_%=: if r0 s<= 1 goto l1_%=; \ + exit; \ +l1_%=: r1 -= r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 3") +__failure __msg("R0 tried to subtract pointer from scalar") +__msg_unpriv("R1 has pointer with unsupported alu operation") +__naked void deducing_bounds_from_const_3(void) +{ + asm volatile (" \ + r0 = 0; \ + if r0 s<= 0 goto l0_%=; \ +l0_%=: r0 -= r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 4") +__success __failure_unpriv +__msg_unpriv("R6 has pointer with unsupported alu operation") +__retval(0) +__naked void deducing_bounds_from_const_4(void) +{ + asm volatile (" \ + r6 = r1; \ + r0 = 0; \ + if r0 s<= 0 goto l0_%=; \ + exit; \ +l0_%=: if r0 s>= 0 goto l1_%=; \ + exit; \ +l1_%=: r6 -= r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 5") +__failure __msg("R0 tried to subtract pointer from scalar") +__msg_unpriv("R1 has pointer with unsupported alu operation") +__naked void deducing_bounds_from_const_5(void) +{ + asm volatile (" \ + r0 = 0; \ + if r0 s>= 1 goto l0_%=; \ + r0 -= r1; \ +l0_%=: exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 6") +__failure __msg("R0 tried to subtract pointer from scalar") +__msg_unpriv("R1 has pointer with unsupported alu operation") +__naked void deducing_bounds_from_const_6(void) +{ + asm volatile (" \ + r0 = 0; \ + if r0 s>= 0 goto l0_%=; \ + exit; \ +l0_%=: r0 -= r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 7") +__failure __msg("dereference of modified ctx ptr") +__msg_unpriv("R1 has pointer with unsupported alu operation") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void deducing_bounds_from_const_7(void) +{ + asm volatile (" \ + r0 = %[__imm_0]; \ + if r0 s>= 0 goto l0_%=; \ +l0_%=: r1 -= r0; \ + r0 = *(u32*)(r1 + %[__sk_buff_mark]); \ + exit; \ +" : + : __imm_const(__imm_0, ~0), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 8") +__failure __msg("negative offset ctx ptr R1 off=-1 disallowed") +__msg_unpriv("R1 has pointer with unsupported alu operation") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void deducing_bounds_from_const_8(void) +{ + asm volatile (" \ + r0 = %[__imm_0]; \ + if r0 s>= 0 goto l0_%=; \ + r1 += r0; \ +l0_%=: r0 = *(u32*)(r1 + %[__sk_buff_mark]); \ + exit; \ +" : + : __imm_const(__imm_0, ~0), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 9") +__failure __msg("R0 tried to subtract pointer from scalar") +__msg_unpriv("R1 has pointer with unsupported alu operation") +__naked void deducing_bounds_from_const_9(void) +{ + asm volatile (" \ + r0 = 0; \ + if r0 s>= 0 goto l0_%=; \ +l0_%=: r0 -= r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from const, 10") +__failure +__msg("math between ctx pointer and register with unbounded min value is not allowed") +__failure_unpriv +__naked void deducing_bounds_from_const_10(void) +{ + asm volatile (" \ + r0 = 0; \ + if r0 s<= 0 goto l0_%=; \ +l0_%=: /* Marks reg as unknown. 
*/ \ + r0 = -r0; \ + r0 -= r1; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c new file mode 100644 index 000000000000..823f727cf210 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_bounds_deduction_non_const.c @@ -0,0 +1,639 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <non_const> == <const>, 1") +__success __retval(0) +__naked void deducing_bounds_from_non_const_1(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 < 3 goto l0_%=; \ + r2 = 2; \ + if r0 == r2 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <non_const> == <const>, 2") +__success __retval(0) +__naked void deducing_bounds_from_non_const_2(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 > 3 goto l0_%=; \ + r2 = 4; \ + if r0 == r2 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <non_const> != <const>, 1") +__success __retval(0) +__naked void deducing_bounds_from_non_const_3(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 < 3 goto l0_%=; \ + r2 = 2; \ + if r0 != r2 goto l0_%=; \ + goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <non_const> != <const>, 2") +__success __retval(0) +__naked void deducing_bounds_from_non_const_4(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 > 3 goto l0_%=; \ + r2 = 4; \ + if r0 != r2 goto l0_%=; \ + goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <non_const> == <const>, 1") +__success __retval(0) +__naked void deducing_bounds_from_non_const_5(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 < 4 goto l0_%=; \ + w2 = 3; \ + if w0 == w2 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <non_const> == <const>, 2") +__success __retval(0) +__naked void deducing_bounds_from_non_const_6(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 > 4 goto l0_%=; \ + w2 = 5; \ + if w0 == w2 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <non_const> != <const>, 1") +__success __retval(0) +__naked void deducing_bounds_from_non_const_7(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 < 3 goto l0_%=; \ + w2 = 2; \ + if w0 != w2 goto l0_%=; \ + goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : 
__clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <non_const> != <const>, 2") +__success __retval(0) +__naked void deducing_bounds_from_non_const_8(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 > 3 goto l0_%=; \ + w2 = 4; \ + if w0 != w2 goto l0_%=; \ + goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> > <non_const>, 1") +__success __retval(0) +__naked void deducing_bounds_from_non_const_9(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + r2 = 0; \ + if r2 > r0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> > <non_const>, 2") +__success __retval(0) +__naked void deducing_bounds_from_non_const_10(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 < 4 goto l0_%=; \ + r2 = 4; \ + if r2 > r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> >= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_11(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 < 4 goto l0_%=; \ + r2 = 3; \ + if r2 >= r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> < <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_12(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 > 4 goto l0_%=; \ + r2 = 4; \ + if r2 < r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> <= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_13(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 >= 4 goto l0_%=; \ + r2 = 4; \ + if r2 <= r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> == <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_14(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 < 3 goto l0_%=; \ + r2 = 2; \ + if r2 == r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> s> <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_15(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 s< 4 goto l0_%=; \ + r2 = 4; \ + if r2 s> r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> s>= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_16(void) +{ 
+ asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 s< 4 goto l0_%=; \ + r2 = 3; \ + if r2 s>= r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> s< <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_17(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 s> 4 goto l0_%=; \ + r2 = 4; \ + if r2 s< r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> s<= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_18(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 s> 4 goto l0_%=; \ + r2 = 5; \ + if r2 s<= r0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp64, <const> != <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_19(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if r0 < 3 goto l0_%=; \ + r2 = 2; \ + if r2 != r0 goto l0_%=; \ + goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> > <non_const>, 1") +__success __retval(0) +__naked void deducing_bounds_from_non_const_20(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + w2 = 0; \ + if w2 > w0 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> > <non_const>, 2") +__success __retval(0) +__naked void deducing_bounds_from_non_const_21(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 < 4 goto l0_%=; \ + w2 = 4; \ + if w2 > w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> >= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_22(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 < 4 goto l0_%=; \ + w2 = 3; \ + if w2 >= w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> < <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_23(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 > 4 goto l0_%=; \ + w2 = 4; \ + if w2 < w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> <= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_24(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 >= 4 goto l0_%=; \ + w2 = 4; \ + if w2 <= w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : 
__imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> == <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_25(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 < 4 goto l0_%=; \ + w2 = 3; \ + if w2 == w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> s> <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_26(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 s< 4 goto l0_%=; \ + w2 = 4; \ + if w2 s> w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> s>= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_27(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 s< 4 goto l0_%=; \ + w2 = 3; \ + if w2 s>= w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> s< <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_28(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 s> 4 goto l0_%=; \ + w2 = 5; \ + if w2 s< w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> s<= <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_29(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 s>= 4 goto l0_%=; \ + w2 = 4; \ + if w2 s<= w0 goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +SEC("socket") +__description("check deducing bounds from non-const, jmp32, <const> != <non_const>") +__success __retval(0) +__naked void deducing_bounds_from_non_const_30(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + if w0 < 3 goto l0_%=; \ + w2 = 2; \ + if w2 != w0 goto l0_%=; \ + goto l1_%=; \ +l0_%=: \ + r0 = 0; \ + exit; \ +l1_%=: \ + r0 -= r1; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c new file mode 100644 index 000000000000..4f40144748a5 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_bounds_mix_sign_unsign.c @@ -0,0 +1,554 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("socket") +__description("bounds checks mixing signed and unsigned, positive bounds") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_positive_bounds(void) +{ + asm volatile (" \ + call 
%[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = 2; \ + if r2 >= r1 goto l0_%=; \ + if r1 s> 4 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void checks_mixing_signed_and_unsigned(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + if r1 > r2 goto l0_%=; \ + if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 2") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_2(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + if r1 > r2 goto l0_%=; \ + r8 = 0; \ + r8 += r1; \ + if r8 s> 1 goto l0_%=; \ + r0 += r8; \ + r0 = 0; \ + *(u8*)(r8 + 0) = r0; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 3") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_3(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + if r1 > r2 goto l0_%=; \ + r8 = r1; \ + if r8 s> 1 goto l0_%=; \ + r0 += r8; \ + r0 = 0; \ + *(u8*)(r8 + 0) = r0; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 4") +__success __success_unpriv __retval(0) +__naked void signed_and_unsigned_variant_4(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = 1; \ + r1 &= r2; \ + if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 5") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void 
signed_and_unsigned_variant_5(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + if r1 > r2 goto l0_%=; \ + if r1 s> 1 goto l0_%=; \ + r0 += 4; \ + r0 -= r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ + r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 6") +__failure __msg("R4 min value is negative, either use unsigned") +__failure_unpriv +__naked void signed_and_unsigned_variant_6(void) +{ + asm volatile (" \ + r9 = r1; \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = r9; \ + r2 = 0; \ + r3 = r10; \ + r3 += -512; \ + r4 = *(u64*)(r10 - 16); \ + r6 = -1; \ + if r4 > r6 goto l0_%=; \ + if r4 s> 1 goto l0_%=; \ + r4 += 1; \ + r5 = 0; \ + r6 = 0; \ + *(u16*)(r10 - 512) = r6; \ + call %[bpf_skb_load_bytes]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 7") +__success __success_unpriv __retval(0) +__naked void signed_and_unsigned_variant_7(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = %[__imm_0]; \ + if r1 > r2 goto l0_%=; \ + if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__imm_0, 1024 * 1024 * 1024) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 8") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_8(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + if r2 > r1 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 9") +__success __success_unpriv __retval(0) +__naked void signed_and_unsigned_variant_9(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -9223372036854775808ULL ll; \ + if r2 > r1 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing 
signed and unsigned, variant 10") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_10(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + if r2 > r1 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 11") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_11(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + if r2 >= r1 goto l1_%=; \ + /* Dead branch. */ \ + r0 = 0; \ + exit; \ +l1_%=: if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 12") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_12(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -6; \ + if r2 >= r1 goto l1_%=; \ + r0 = 0; \ + exit; \ +l1_%=: if r1 s> 1 goto l0_%=; \ + r0 += r1; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 13") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_13(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = 2; \ + if r2 >= r1 goto l0_%=; \ + r7 = 1; \ + if r7 s> 0 goto l1_%=; \ +l0_%=: r0 = 0; \ + exit; \ +l1_%=: r7 += r1; \ + if r7 s> 4 goto l2_%=; \ + r0 += r7; \ + r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l2_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 14") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_14(void) +{ + asm volatile (" \ + r9 = *(u32*)(r1 + %[__sk_buff_mark]); \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -1; \ + r8 = 2; \ + if 
r9 == 42 goto l1_%=; \ + if r8 s> r1 goto l2_%=; \ +l3_%=: if r1 s> 1 goto l2_%=; \ + r0 += r1; \ +l0_%=: r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ +l2_%=: r0 = 0; \ + exit; \ +l1_%=: if r1 > r2 goto l2_%=; \ + goto l3_%=; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("socket") +__description("bounds checks mixing signed and unsigned, variant 15") +__failure __msg("unbounded min value") +__failure_unpriv +__naked void signed_and_unsigned_variant_15(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_ns]; \ + *(u64*)(r10 - 16) = r0; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u64*)(r10 - 16); \ + r2 = -6; \ + if r2 >= r1 goto l1_%=; \ +l0_%=: r0 = 0; \ + exit; \ +l1_%=: r0 += r1; \ + if r0 > 1 goto l2_%=; \ + r0 = 0; \ + exit; \ +l2_%=: r1 = 0; \ + *(u8*)(r0 + 0) = r1; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_cfg.c b/tools/testing/selftests/bpf/progs/verifier_cfg.c new file mode 100644 index 000000000000..df7697b94007 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_cfg.c @@ -0,0 +1,100 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/cfg.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("socket") +__description("unreachable") +__failure __msg("unreachable") +__failure_unpriv +__naked void unreachable(void) +{ + asm volatile (" \ + exit; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("unreachable2") +__failure __msg("unreachable") +__failure_unpriv +__naked void unreachable2(void) +{ + asm volatile (" \ + goto l0_%=; \ + goto l0_%=; \ +l0_%=: exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("out of range jump") +__failure __msg("jump out of range") +__failure_unpriv +__naked void out_of_range_jump(void) +{ + asm volatile (" \ + goto l0_%=; \ + exit; \ +l0_%=: \ +" ::: __clobber_all); +} + +SEC("socket") +__description("out of range jump2") +__failure __msg("jump out of range") +__failure_unpriv +__naked void out_of_range_jump2(void) +{ + asm volatile (" \ + goto -2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("loop (back-edge)") +__failure __msg("unreachable insn 1") +__msg_unpriv("back-edge") +__naked void loop_back_edge(void) +{ + asm volatile (" \ +l0_%=: goto l0_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("loop2 (back-edge)") +__failure __msg("unreachable insn 4") +__msg_unpriv("back-edge") +__naked void loop2_back_edge(void) +{ + asm volatile (" \ +l0_%=: r1 = r0; \ + r2 = r0; \ + r3 = r0; \ + goto l0_%=; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("conditional loop") +__failure __msg("infinite loop detected") +__msg_unpriv("back-edge") +__naked void conditional_loop(void) +{ + asm volatile (" \ + r0 = r1; \ +l0_%=: r2 = r0; \ + r3 = r0; \ + if r1 == 0 goto l0_%=; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_cgroup_inv_retcode.c b/tools/testing/selftests/bpf/progs/verifier_cgroup_inv_retcode.c new file mode 100644 index 
000000000000..d6c4a7f3f790 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_cgroup_inv_retcode.c @@ -0,0 +1,89 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/cgroup_inv_retcode.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("cgroup/sock") +__description("bpf_exit with invalid return code. test1") +__failure __msg("R0 has value (0x0; 0xffffffff)") +__naked void with_invalid_return_code_test1(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("cgroup/sock") +__description("bpf_exit with invalid return code. test2") +__success +__naked void with_invalid_return_code_test2(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + 0); \ + r0 &= 1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("cgroup/sock") +__description("bpf_exit with invalid return code. test3") +__failure __msg("R0 has value (0x0; 0x3)") +__naked void with_invalid_return_code_test3(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + 0); \ + r0 &= 3; \ + exit; \ +" ::: __clobber_all); +} + +SEC("cgroup/sock") +__description("bpf_exit with invalid return code. test4") +__success +__naked void with_invalid_return_code_test4(void) +{ + asm volatile (" \ + r0 = 1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("cgroup/sock") +__description("bpf_exit with invalid return code. test5") +__failure __msg("R0 has value (0x2; 0x0)") +__naked void with_invalid_return_code_test5(void) +{ + asm volatile (" \ + r0 = 2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("cgroup/sock") +__description("bpf_exit with invalid return code. test6") +__failure __msg("R0 is not a known value (ctx)") +__naked void with_invalid_return_code_test6(void) +{ + asm volatile (" \ + r0 = r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("cgroup/sock") +__description("bpf_exit with invalid return code. 
test7") +__failure __msg("R0 has unknown scalar value") +__naked void with_invalid_return_code_test7(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + 0); \ + r2 = *(u32*)(r1 + 4); \ + r0 *= r2; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_cgroup_skb.c b/tools/testing/selftests/bpf/progs/verifier_cgroup_skb.c new file mode 100644 index 000000000000..5ee3d349d6d0 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_cgroup_skb.c @@ -0,0 +1,227 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/cgroup_skb.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("cgroup/skb") +__description("direct packet read test#1 for CGROUP_SKB") +__success __failure_unpriv +__msg_unpriv("invalid bpf_context access off=76 size=4") +__retval(0) +__naked void test_1_for_cgroup_skb(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r4 = *(u32*)(r1 + %[__sk_buff_len]); \ + r5 = *(u32*)(r1 + %[__sk_buff_pkt_type]); \ + r6 = *(u32*)(r1 + %[__sk_buff_mark]); \ + *(u32*)(r1 + %[__sk_buff_mark]) = r6; \ + r7 = *(u32*)(r1 + %[__sk_buff_queue_mapping]); \ + r8 = *(u32*)(r1 + %[__sk_buff_protocol]); \ + r9 = *(u32*)(r1 + %[__sk_buff_vlan_present]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_len, offsetof(struct __sk_buff, len)), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(__sk_buff_pkt_type, offsetof(struct __sk_buff, pkt_type)), + __imm_const(__sk_buff_protocol, offsetof(struct __sk_buff, protocol)), + __imm_const(__sk_buff_queue_mapping, offsetof(struct __sk_buff, queue_mapping)), + __imm_const(__sk_buff_vlan_present, offsetof(struct __sk_buff, vlan_present)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("direct packet read test#2 for CGROUP_SKB") +__success __success_unpriv __retval(0) +__naked void test_2_for_cgroup_skb(void) +{ + asm volatile (" \ + r4 = *(u32*)(r1 + %[__sk_buff_vlan_tci]); \ + r5 = *(u32*)(r1 + %[__sk_buff_vlan_proto]); \ + r6 = *(u32*)(r1 + %[__sk_buff_priority]); \ + *(u32*)(r1 + %[__sk_buff_priority]) = r6; \ + r7 = *(u32*)(r1 + %[__sk_buff_ingress_ifindex]);\ + r8 = *(u32*)(r1 + %[__sk_buff_tc_index]); \ + r9 = *(u32*)(r1 + %[__sk_buff_hash]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_hash, offsetof(struct __sk_buff, hash)), + __imm_const(__sk_buff_ingress_ifindex, offsetof(struct __sk_buff, ingress_ifindex)), + __imm_const(__sk_buff_priority, offsetof(struct __sk_buff, priority)), + __imm_const(__sk_buff_tc_index, offsetof(struct __sk_buff, tc_index)), + __imm_const(__sk_buff_vlan_proto, offsetof(struct __sk_buff, vlan_proto)), + __imm_const(__sk_buff_vlan_tci, offsetof(struct __sk_buff, vlan_tci)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("direct packet read test#3 for CGROUP_SKB") +__success __success_unpriv __retval(0) +__naked void test_3_for_cgroup_skb(void) +{ + asm volatile (" \ + r4 = *(u32*)(r1 + %[__sk_buff_cb_0]); \ + r5 = *(u32*)(r1 + %[__sk_buff_cb_1]); \ + r6 = *(u32*)(r1 + %[__sk_buff_cb_2]); \ + r7 = *(u32*)(r1 + %[__sk_buff_cb_3]); \ + r8 = *(u32*)(r1 + %[__sk_buff_cb_4]); \ + r9 = *(u32*)(r1 + %[__sk_buff_napi_id]); \ + 
*(u32*)(r1 + %[__sk_buff_cb_0]) = r4; \ + *(u32*)(r1 + %[__sk_buff_cb_1]) = r5; \ + *(u32*)(r1 + %[__sk_buff_cb_2]) = r6; \ + *(u32*)(r1 + %[__sk_buff_cb_3]) = r7; \ + *(u32*)(r1 + %[__sk_buff_cb_4]) = r8; \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])), + __imm_const(__sk_buff_cb_1, offsetof(struct __sk_buff, cb[1])), + __imm_const(__sk_buff_cb_2, offsetof(struct __sk_buff, cb[2])), + __imm_const(__sk_buff_cb_3, offsetof(struct __sk_buff, cb[3])), + __imm_const(__sk_buff_cb_4, offsetof(struct __sk_buff, cb[4])), + __imm_const(__sk_buff_napi_id, offsetof(struct __sk_buff, napi_id)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("direct packet read test#4 for CGROUP_SKB") +__success __success_unpriv __retval(0) +__naked void test_4_for_cgroup_skb(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_family]); \ + r3 = *(u32*)(r1 + %[__sk_buff_remote_ip4]); \ + r4 = *(u32*)(r1 + %[__sk_buff_local_ip4]); \ + r5 = *(u32*)(r1 + %[__sk_buff_remote_ip6_0]); \ + r5 = *(u32*)(r1 + %[__sk_buff_remote_ip6_1]); \ + r5 = *(u32*)(r1 + %[__sk_buff_remote_ip6_2]); \ + r5 = *(u32*)(r1 + %[__sk_buff_remote_ip6_3]); \ + r6 = *(u32*)(r1 + %[__sk_buff_local_ip6_0]); \ + r6 = *(u32*)(r1 + %[__sk_buff_local_ip6_1]); \ + r6 = *(u32*)(r1 + %[__sk_buff_local_ip6_2]); \ + r6 = *(u32*)(r1 + %[__sk_buff_local_ip6_3]); \ + r7 = *(u32*)(r1 + %[__sk_buff_remote_port]); \ + r8 = *(u32*)(r1 + %[__sk_buff_local_port]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_family, offsetof(struct __sk_buff, family)), + __imm_const(__sk_buff_local_ip4, offsetof(struct __sk_buff, local_ip4)), + __imm_const(__sk_buff_local_ip6_0, offsetof(struct __sk_buff, local_ip6[0])), + __imm_const(__sk_buff_local_ip6_1, offsetof(struct __sk_buff, local_ip6[1])), + __imm_const(__sk_buff_local_ip6_2, offsetof(struct __sk_buff, local_ip6[2])), + __imm_const(__sk_buff_local_ip6_3, offsetof(struct __sk_buff, local_ip6[3])), + __imm_const(__sk_buff_local_port, offsetof(struct __sk_buff, local_port)), + __imm_const(__sk_buff_remote_ip4, offsetof(struct __sk_buff, remote_ip4)), + __imm_const(__sk_buff_remote_ip6_0, offsetof(struct __sk_buff, remote_ip6[0])), + __imm_const(__sk_buff_remote_ip6_1, offsetof(struct __sk_buff, remote_ip6[1])), + __imm_const(__sk_buff_remote_ip6_2, offsetof(struct __sk_buff, remote_ip6[2])), + __imm_const(__sk_buff_remote_ip6_3, offsetof(struct __sk_buff, remote_ip6[3])), + __imm_const(__sk_buff_remote_port, offsetof(struct __sk_buff, remote_port)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid access of tc_classid for CGROUP_SKB") +__failure __msg("invalid bpf_context access") +__failure_unpriv +__naked void tc_classid_for_cgroup_skb(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_tc_classid]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_tc_classid, offsetof(struct __sk_buff, tc_classid)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid access of data_meta for CGROUP_SKB") +__failure __msg("invalid bpf_context access") +__failure_unpriv +__naked void data_meta_for_cgroup_skb(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[__sk_buff_data_meta]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data_meta, offsetof(struct __sk_buff, data_meta)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid access of flow_keys for CGROUP_SKB") +__failure __msg("invalid bpf_context access") +__failure_unpriv +__naked void flow_keys_for_cgroup_skb(void) +{ + asm volatile (" \ + r0 
= *(u32*)(r1 + %[__sk_buff_flow_keys]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_flow_keys, offsetof(struct __sk_buff, flow_keys)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid write access to napi_id for CGROUP_SKB") +__failure __msg("invalid bpf_context access") +__failure_unpriv +__naked void napi_id_for_cgroup_skb(void) +{ + asm volatile (" \ + r9 = *(u32*)(r1 + %[__sk_buff_napi_id]); \ + *(u32*)(r1 + %[__sk_buff_napi_id]) = r9; \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_napi_id, offsetof(struct __sk_buff, napi_id)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("write tstamp from CGROUP_SKB") +__success __failure_unpriv +__msg_unpriv("invalid bpf_context access off=152 size=8") +__retval(0) +__naked void write_tstamp_from_cgroup_skb(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u64*)(r1 + %[__sk_buff_tstamp]) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_tstamp, offsetof(struct __sk_buff, tstamp)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("read tstamp from CGROUP_SKB") +__success __success_unpriv __retval(0) +__naked void read_tstamp_from_cgroup_skb(void) +{ + asm volatile (" \ + r0 = *(u64*)(r1 + %[__sk_buff_tstamp]); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_tstamp, offsetof(struct __sk_buff, tstamp)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_cgroup_storage.c b/tools/testing/selftests/bpf/progs/verifier_cgroup_storage.c new file mode 100644 index 000000000000..9a13f5c11ac7 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_cgroup_storage.c @@ -0,0 +1,308 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/cgroup_storage.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE); + __uint(max_entries, 0); + __type(key, struct bpf_cgroup_storage_key); + __type(value, char[TEST_DATA_LEN]); +} cgroup_storage SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE); + __uint(max_entries, 0); + __type(key, struct bpf_cgroup_storage_key); + __type(value, char[64]); +} percpu_cgroup_storage SEC(".maps"); + +SEC("cgroup/skb") +__description("valid cgroup storage access") +__success __success_unpriv __retval(0) +__naked void valid_cgroup_storage_access(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid cgroup storage access 1") +__failure __msg("cannot pass map_type 1 into func bpf_get_local_storage") +__failure_unpriv +__naked void invalid_cgroup_storage_access_1(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid cgroup storage access 2") +__failure __msg("fd 1 is not pointing to valid bpf_map") +__failure_unpriv +__naked void invalid_cgroup_storage_access_2(void) +{ + 
asm volatile (" \ + r2 = 0; \ + .8byte %[ld_map_fd]; \ + .8byte 0; \ + call %[bpf_get_local_storage]; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_insn(ld_map_fd, BPF_RAW_INSN(BPF_LD | BPF_DW | BPF_IMM, BPF_REG_1, BPF_PSEUDO_MAP_FD, 0, 1)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid cgroup storage access 3") +__failure __msg("invalid access to map value, value_size=64 off=256 size=4") +__failure_unpriv +__naked void invalid_cgroup_storage_access_3(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 256); \ + r1 += 1; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid cgroup storage access 4") +__failure __msg("invalid access to map value, value_size=64 off=-2 size=4") +__failure_unpriv +__flag(BPF_F_ANY_ALIGNMENT) +__naked void invalid_cgroup_storage_access_4(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 - 2); \ + r0 = r1; \ + r1 += 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid cgroup storage access 5") +__failure __msg("get_local_storage() doesn't support non-zero flags") +__failure_unpriv +__naked void invalid_cgroup_storage_access_5(void) +{ + asm volatile (" \ + r2 = 7; \ + r1 = %[cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid cgroup storage access 6") +__failure __msg("get_local_storage() doesn't support non-zero flags") +__msg_unpriv("R2 leaks addr into helper function") +__naked void invalid_cgroup_storage_access_6(void) +{ + asm volatile (" \ + r2 = r1; \ + r1 = %[cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("valid per-cpu cgroup storage access") +__success __success_unpriv __retval(0) +__naked void per_cpu_cgroup_storage_access(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[percpu_cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(percpu_cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid per-cpu cgroup storage access 1") +__failure __msg("cannot pass map_type 1 into func bpf_get_local_storage") +__failure_unpriv +__naked void cpu_cgroup_storage_access_1(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid per-cpu cgroup storage access 2") +__failure __msg("fd 1 is not pointing to valid bpf_map") +__failure_unpriv +__naked void cpu_cgroup_storage_access_2(void) +{ + asm volatile (" \ + r2 = 0; \ + .8byte %[ld_map_fd]; \ + .8byte 0; \ + call %[bpf_get_local_storage]; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_insn(ld_map_fd, BPF_RAW_INSN(BPF_LD | 
BPF_DW | BPF_IMM, BPF_REG_1, BPF_PSEUDO_MAP_FD, 0, 1)) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid per-cpu cgroup storage access 3") +__failure __msg("invalid access to map value, value_size=64 off=256 size=4") +__failure_unpriv +__naked void cpu_cgroup_storage_access_3(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[percpu_cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 256); \ + r1 += 1; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(percpu_cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid per-cpu cgroup storage access 4") +__failure __msg("invalid access to map value, value_size=64 off=-2 size=4") +__failure_unpriv +__flag(BPF_F_ANY_ALIGNMENT) +__naked void cpu_cgroup_storage_access_4(void) +{ + asm volatile (" \ + r2 = 0; \ + r1 = %[cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 - 2); \ + r0 = r1; \ + r1 += 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid per-cpu cgroup storage access 5") +__failure __msg("get_local_storage() doesn't support non-zero flags") +__failure_unpriv +__naked void cpu_cgroup_storage_access_5(void) +{ + asm volatile (" \ + r2 = 7; \ + r1 = %[percpu_cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(percpu_cgroup_storage) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("invalid per-cpu cgroup storage access 6") +__failure __msg("get_local_storage() doesn't support non-zero flags") +__msg_unpriv("R2 leaks addr into helper function") +__naked void cpu_cgroup_storage_access_6(void) +{ + asm volatile (" \ + r2 = r1; \ + r1 = %[percpu_cgroup_storage] ll; \ + call %[bpf_get_local_storage]; \ + r1 = *(u32*)(r0 + 0); \ + r0 = r1; \ + r0 &= 1; \ + exit; \ +" : + : __imm(bpf_get_local_storage), + __imm_addr(percpu_cgroup_storage) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_const_or.c b/tools/testing/selftests/bpf/progs/verifier_const_or.c new file mode 100644 index 000000000000..ba8922b2eebd --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_const_or.c @@ -0,0 +1,82 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/const_or.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("tracepoint") +__description("constant register |= constant should keep constant type") +__success +__naked void constant_should_keep_constant_type(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -48; \ + r2 = 34; \ + r2 |= 13; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("constant register |= constant should not bypass stack boundary checks") +__failure __msg("invalid indirect access to stack R1 off=-48 size=58") +__naked void not_bypass_stack_boundary_checks_1(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -48; \ + r2 = 34; \ + r2 |= 24; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("constant register |= constant register should keep constant type") +__success +__naked void register_should_keep_constant_type(void) +{ + asm volatile 
(" \ + r1 = r10; \ + r1 += -48; \ + r2 = 34; \ + r4 = 13; \ + r2 |= r4; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("constant register |= constant register should not bypass stack boundary checks") +__failure __msg("invalid indirect access to stack R1 off=-48 size=58") +__naked void not_bypass_stack_boundary_checks_2(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -48; \ + r2 = 34; \ + r4 = 24; \ + r2 |= r4; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_ctx_sk_msg.c b/tools/testing/selftests/bpf/progs/verifier_ctx_sk_msg.c new file mode 100644 index 000000000000..65edc89799f9 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_ctx_sk_msg.c @@ -0,0 +1,228 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/ctx_sk_msg.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("sk_msg") +__description("valid access family in SK_MSG") +__success +__naked void access_family_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[sk_msg_md_family]); \ + exit; \ +" : + : __imm_const(sk_msg_md_family, offsetof(struct sk_msg_md, family)) + : __clobber_all); +} + +SEC("sk_msg") +__description("valid access remote_ip4 in SK_MSG") +__success +__naked void remote_ip4_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[sk_msg_md_remote_ip4]); \ + exit; \ +" : + : __imm_const(sk_msg_md_remote_ip4, offsetof(struct sk_msg_md, remote_ip4)) + : __clobber_all); +} + +SEC("sk_msg") +__description("valid access local_ip4 in SK_MSG") +__success +__naked void local_ip4_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[sk_msg_md_local_ip4]); \ + exit; \ +" : + : __imm_const(sk_msg_md_local_ip4, offsetof(struct sk_msg_md, local_ip4)) + : __clobber_all); +} + +SEC("sk_msg") +__description("valid access remote_port in SK_MSG") +__success +__naked void remote_port_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[sk_msg_md_remote_port]); \ + exit; \ +" : + : __imm_const(sk_msg_md_remote_port, offsetof(struct sk_msg_md, remote_port)) + : __clobber_all); +} + +SEC("sk_msg") +__description("valid access local_port in SK_MSG") +__success +__naked void local_port_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[sk_msg_md_local_port]); \ + exit; \ +" : + : __imm_const(sk_msg_md_local_port, offsetof(struct sk_msg_md, local_port)) + : __clobber_all); +} + +SEC("sk_skb") +__description("valid access remote_ip6 in SK_MSG") +__success +__naked void remote_ip6_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[sk_msg_md_remote_ip6_0]); \ + r0 = *(u32*)(r1 + %[sk_msg_md_remote_ip6_1]); \ + r0 = *(u32*)(r1 + %[sk_msg_md_remote_ip6_2]); \ + r0 = *(u32*)(r1 + %[sk_msg_md_remote_ip6_3]); \ + exit; \ +" : + : __imm_const(sk_msg_md_remote_ip6_0, offsetof(struct sk_msg_md, remote_ip6[0])), + __imm_const(sk_msg_md_remote_ip6_1, offsetof(struct sk_msg_md, remote_ip6[1])), + __imm_const(sk_msg_md_remote_ip6_2, offsetof(struct sk_msg_md, remote_ip6[2])), + __imm_const(sk_msg_md_remote_ip6_3, offsetof(struct sk_msg_md, remote_ip6[3])) + : __clobber_all); +} + +SEC("sk_skb") +__description("valid access local_ip6 in SK_MSG") +__success +__naked void local_ip6_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + 
%[sk_msg_md_local_ip6_0]); \ + r0 = *(u32*)(r1 + %[sk_msg_md_local_ip6_1]); \ + r0 = *(u32*)(r1 + %[sk_msg_md_local_ip6_2]); \ + r0 = *(u32*)(r1 + %[sk_msg_md_local_ip6_3]); \ + exit; \ +" : + : __imm_const(sk_msg_md_local_ip6_0, offsetof(struct sk_msg_md, local_ip6[0])), + __imm_const(sk_msg_md_local_ip6_1, offsetof(struct sk_msg_md, local_ip6[1])), + __imm_const(sk_msg_md_local_ip6_2, offsetof(struct sk_msg_md, local_ip6[2])), + __imm_const(sk_msg_md_local_ip6_3, offsetof(struct sk_msg_md, local_ip6[3])) + : __clobber_all); +} + +SEC("sk_msg") +__description("valid access size in SK_MSG") +__success +__naked void access_size_in_sk_msg(void) +{ + asm volatile (" \ + r0 = *(u32*)(r1 + %[sk_msg_md_size]); \ + exit; \ +" : + : __imm_const(sk_msg_md_size, offsetof(struct sk_msg_md, size)) + : __clobber_all); +} + +SEC("sk_msg") +__description("invalid 64B read of size in SK_MSG") +__failure __msg("invalid bpf_context access") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void of_size_in_sk_msg(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + %[sk_msg_md_size]); \ + exit; \ +" : + : __imm_const(sk_msg_md_size, offsetof(struct sk_msg_md, size)) + : __clobber_all); +} + +SEC("sk_msg") +__description("invalid read past end of SK_MSG") +__failure __msg("invalid bpf_context access") +__naked void past_end_of_sk_msg(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__imm_0]); \ + exit; \ +" : + : __imm_const(__imm_0, offsetof(struct sk_msg_md, size) + 4) + : __clobber_all); +} + +SEC("sk_msg") +__description("invalid read offset in SK_MSG") +__failure __msg("invalid bpf_context access") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void read_offset_in_sk_msg(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__imm_0]); \ + exit; \ +" : + : __imm_const(__imm_0, offsetof(struct sk_msg_md, family) + 1) + : __clobber_all); +} + +SEC("sk_msg") +__description("direct packet read for SK_MSG") +__success +__naked void packet_read_for_sk_msg(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + %[sk_msg_md_data]); \ + r3 = *(u64*)(r1 + %[sk_msg_md_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(sk_msg_md_data, offsetof(struct sk_msg_md, data)), + __imm_const(sk_msg_md_data_end, offsetof(struct sk_msg_md, data_end)) + : __clobber_all); +} + +SEC("sk_msg") +__description("direct packet write for SK_MSG") +__success +__naked void packet_write_for_sk_msg(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + %[sk_msg_md_data]); \ + r3 = *(u64*)(r1 + %[sk_msg_md_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + *(u8*)(r2 + 0) = r2; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(sk_msg_md_data, offsetof(struct sk_msg_md, data)), + __imm_const(sk_msg_md_data_end, offsetof(struct sk_msg_md, data_end)) + : __clobber_all); +} + +SEC("sk_msg") +__description("overlapping checks for direct packet access SK_MSG") +__success +__naked void direct_packet_access_sk_msg(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + %[sk_msg_md_data]); \ + r3 = *(u64*)(r1 + %[sk_msg_md_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r1 = r2; \ + r1 += 6; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u16*)(r2 + 6); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(sk_msg_md_data, offsetof(struct sk_msg_md, data)), + __imm_const(sk_msg_md_data_end, offsetof(struct sk_msg_md, data_end)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_direct_stack_access_wraparound.c 
b/tools/testing/selftests/bpf/progs/verifier_direct_stack_access_wraparound.c new file mode 100644 index 000000000000..c538c6893552 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_direct_stack_access_wraparound.c @@ -0,0 +1,56 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/direct_stack_access_wraparound.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("socket") +__description("direct stack access with 32-bit wraparound. test1") +__failure __msg("fp pointer and 2147483647") +__failure_unpriv +__naked void with_32_bit_wraparound_test1(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += 0x7fffffff; \ + r1 += 0x7fffffff; \ + w0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("direct stack access with 32-bit wraparound. test2") +__failure __msg("fp pointer and 1073741823") +__failure_unpriv +__naked void with_32_bit_wraparound_test2(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += 0x3fffffff; \ + r1 += 0x3fffffff; \ + w0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("direct stack access with 32-bit wraparound. test3") +__failure __msg("fp pointer offset 1073741822") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void with_32_bit_wraparound_test3(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += 0x1fffffff; \ + r1 += 0x1fffffff; \ + w0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_div0.c b/tools/testing/selftests/bpf/progs/verifier_div0.c new file mode 100644 index 000000000000..cca5ea18fc28 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_div0.c @@ -0,0 +1,213 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/div0.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("socket") +__description("DIV32 by 0, zero check 1") +__success __success_unpriv __retval(42) +__naked void by_0_zero_check_1_1(void) +{ + asm volatile (" \ + w0 = 42; \ + w1 = 0; \ + w2 = 1; \ + w2 /= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("DIV32 by 0, zero check 2") +__success __success_unpriv __retval(42) +__naked void by_0_zero_check_2_1(void) +{ + asm volatile (" \ + w0 = 42; \ + r1 = 0xffffffff00000000LL ll; \ + w2 = 1; \ + w2 /= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("DIV64 by 0, zero check") +__success __success_unpriv __retval(42) +__naked void div64_by_0_zero_check(void) +{ + asm volatile (" \ + w0 = 42; \ + w1 = 0; \ + w2 = 1; \ + r2 /= r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("MOD32 by 0, zero check 1") +__success __success_unpriv __retval(42) +__naked void by_0_zero_check_1_2(void) +{ + asm volatile (" \ + w0 = 42; \ + w1 = 0; \ + w2 = 1; \ + w2 %%= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("MOD32 by 0, zero check 2") +__success __success_unpriv __retval(42) +__naked void by_0_zero_check_2_2(void) +{ + asm volatile (" \ + w0 = 42; \ + r1 = 0xffffffff00000000LL ll; \ + w2 = 1; \ + w2 %%= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("MOD64 by 0, zero check") +__success __success_unpriv __retval(42) +__naked void mod64_by_0_zero_check(void) +{ + asm volatile (" \ + w0 = 42; \ + w1 = 0; \ + w2 = 1; \ + r2 %%= r1; 
\ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("DIV32 by 0, zero check ok, cls") +__success __retval(8) +__naked void _0_zero_check_ok_cls_1(void) +{ + asm volatile (" \ + w0 = 42; \ + w1 = 2; \ + w2 = 16; \ + w2 /= w1; \ + r0 = r2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("DIV32 by 0, zero check 1, cls") +__success __retval(0) +__naked void _0_zero_check_1_cls_1(void) +{ + asm volatile (" \ + w1 = 0; \ + w0 = 1; \ + w0 /= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("DIV32 by 0, zero check 2, cls") +__success __retval(0) +__naked void _0_zero_check_2_cls_1(void) +{ + asm volatile (" \ + r1 = 0xffffffff00000000LL ll; \ + w0 = 1; \ + w0 /= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("DIV64 by 0, zero check, cls") +__success __retval(0) +__naked void by_0_zero_check_cls(void) +{ + asm volatile (" \ + w1 = 0; \ + w0 = 1; \ + r0 /= r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("MOD32 by 0, zero check ok, cls") +__success __retval(2) +__naked void _0_zero_check_ok_cls_2(void) +{ + asm volatile (" \ + w0 = 42; \ + w1 = 3; \ + w2 = 5; \ + w2 %%= w1; \ + r0 = r2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("MOD32 by 0, zero check 1, cls") +__success __retval(1) +__naked void _0_zero_check_1_cls_2(void) +{ + asm volatile (" \ + w1 = 0; \ + w0 = 1; \ + w0 %%= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("MOD32 by 0, zero check 2, cls") +__success __retval(1) +__naked void _0_zero_check_2_cls_2(void) +{ + asm volatile (" \ + r1 = 0xffffffff00000000LL ll; \ + w0 = 1; \ + w0 %%= w1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("MOD64 by 0, zero check 1, cls") +__success __retval(2) +__naked void _0_zero_check_1_cls_3(void) +{ + asm volatile (" \ + w1 = 0; \ + w0 = 2; \ + r0 %%= r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("MOD64 by 0, zero check 2, cls") +__success __retval(-1) +__naked void _0_zero_check_2_cls_3(void) +{ + asm volatile (" \ + w1 = 0; \ + w0 = -1; \ + r0 %%= r1; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_div_overflow.c b/tools/testing/selftests/bpf/progs/verifier_div_overflow.c new file mode 100644 index 000000000000..458984da804c --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_div_overflow.c @@ -0,0 +1,144 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/div_overflow.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include <limits.h> +#include "bpf_misc.h" + +/* Just make sure that JITs used udiv/umod as otherwise we get + * an exception from INT_MIN/-1 overflow similarly as with div + * by zero. 
+ */ + +SEC("tc") +__description("DIV32 overflow, check 1") +__success __retval(0) +__naked void div32_overflow_check_1(void) +{ + asm volatile (" \ + w1 = -1; \ + w0 = %[int_min]; \ + w0 /= w1; \ + exit; \ +" : + : __imm_const(int_min, INT_MIN) + : __clobber_all); +} + +SEC("tc") +__description("DIV32 overflow, check 2") +__success __retval(0) +__naked void div32_overflow_check_2(void) +{ + asm volatile (" \ + w0 = %[int_min]; \ + w0 /= -1; \ + exit; \ +" : + : __imm_const(int_min, INT_MIN) + : __clobber_all); +} + +SEC("tc") +__description("DIV64 overflow, check 1") +__success __retval(0) +__naked void div64_overflow_check_1(void) +{ + asm volatile (" \ + r1 = -1; \ + r2 = %[llong_min] ll; \ + r2 /= r1; \ + w0 = 0; \ + if r0 == r2 goto l0_%=; \ + w0 = 1; \ +l0_%=: exit; \ +" : + : __imm_const(llong_min, LLONG_MIN) + : __clobber_all); +} + +SEC("tc") +__description("DIV64 overflow, check 2") +__success __retval(0) +__naked void div64_overflow_check_2(void) +{ + asm volatile (" \ + r1 = %[llong_min] ll; \ + r1 /= -1; \ + w0 = 0; \ + if r0 == r1 goto l0_%=; \ + w0 = 1; \ +l0_%=: exit; \ +" : + : __imm_const(llong_min, LLONG_MIN) + : __clobber_all); +} + +SEC("tc") +__description("MOD32 overflow, check 1") +__success __retval(INT_MIN) +__naked void mod32_overflow_check_1(void) +{ + asm volatile (" \ + w1 = -1; \ + w0 = %[int_min]; \ + w0 %%= w1; \ + exit; \ +" : + : __imm_const(int_min, INT_MIN) + : __clobber_all); +} + +SEC("tc") +__description("MOD32 overflow, check 2") +__success __retval(INT_MIN) +__naked void mod32_overflow_check_2(void) +{ + asm volatile (" \ + w0 = %[int_min]; \ + w0 %%= -1; \ + exit; \ +" : + : __imm_const(int_min, INT_MIN) + : __clobber_all); +} + +SEC("tc") +__description("MOD64 overflow, check 1") +__success __retval(1) +__naked void mod64_overflow_check_1(void) +{ + asm volatile (" \ + r1 = -1; \ + r2 = %[llong_min] ll; \ + r3 = r2; \ + r2 %%= r1; \ + w0 = 0; \ + if r3 != r2 goto l0_%=; \ + w0 = 1; \ +l0_%=: exit; \ +" : + : __imm_const(llong_min, LLONG_MIN) + : __clobber_all); +} + +SEC("tc") +__description("MOD64 overflow, check 2") +__success __retval(1) +__naked void mod64_overflow_check_2(void) +{ + asm volatile (" \ + r2 = %[llong_min] ll; \ + r3 = r2; \ + r2 %%= -1; \ + w0 = 0; \ + if r3 != r2 goto l0_%=; \ + w0 = 1; \ +l0_%=: exit; \ +" : + : __imm_const(llong_min, LLONG_MIN) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_helper_access_var_len.c b/tools/testing/selftests/bpf/progs/verifier_helper_access_var_len.c new file mode 100644 index 000000000000..50c6b22606f6 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_helper_access_var_len.c @@ -0,0 +1,825 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/helper_access_var_len.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); + __uint(max_entries, 4096); +} map_ringbuf SEC(".maps"); + +SEC("tracepoint") +__description("helper access to variable memory: stack, bitwise AND + JMP, correct 
bounds") +__success +__naked void bitwise_and_jmp_correct_bounds(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -64; \ + r0 = 0; \ + *(u64*)(r10 - 64) = r0; \ + *(u64*)(r10 - 56) = r0; \ + *(u64*)(r10 - 48) = r0; \ + *(u64*)(r10 - 40) = r0; \ + *(u64*)(r10 - 32) = r0; \ + *(u64*)(r10 - 24) = r0; \ + *(u64*)(r10 - 16) = r0; \ + *(u64*)(r10 - 8) = r0; \ + r2 = 16; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + r2 &= 64; \ + r4 = 0; \ + if r4 >= r2 goto l0_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("socket") +__description("helper access to variable memory: stack, bitwise AND, zero included") +/* in privileged mode reads from uninitialized stack locations are permitted */ +__success __failure_unpriv +__msg_unpriv("invalid indirect read from stack R2 off -64+0 size 64") +__retval(0) +__naked void stack_bitwise_and_zero_included(void) +{ + asm volatile (" \ + /* set max stack size */ \ + r6 = 0; \ + *(u64*)(r10 - 128) = r6; \ + /* set r3 to a random value */ \ + call %[bpf_get_prandom_u32]; \ + r3 = r0; \ + /* use bitwise AND to limit r3 range to [0, 64] */\ + r3 &= 64; \ + r1 = %[map_ringbuf] ll; \ + r2 = r10; \ + r2 += -64; \ + r4 = 0; \ + /* Call bpf_ringbuf_output(), it is one of a few helper functions with\ + * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.\ + * For unpriv this should signal an error, because memory at &fp[-64] is\ + * not initialized. \ + */ \ + call %[bpf_ringbuf_output]; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_ringbuf_output), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: stack, bitwise AND + JMP, wrong max") +__failure __msg("invalid indirect access to stack R1 off=-64 size=65") +__naked void bitwise_and_jmp_wrong_max(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + 8); \ + r1 = r10; \ + r1 += -64; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + r2 &= 65; \ + r4 = 0; \ + if r4 >= r2 goto l0_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: stack, JMP, correct bounds") +__success +__naked void memory_stack_jmp_correct_bounds(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -64; \ + r0 = 0; \ + *(u64*)(r10 - 64) = r0; \ + *(u64*)(r10 - 56) = r0; \ + *(u64*)(r10 - 48) = r0; \ + *(u64*)(r10 - 40) = r0; \ + *(u64*)(r10 - 32) = r0; \ + *(u64*)(r10 - 24) = r0; \ + *(u64*)(r10 - 16) = r0; \ + *(u64*)(r10 - 8) = r0; \ + r2 = 16; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + if r2 > 64 goto l0_%=; \ + r4 = 0; \ + if r4 >= r2 goto l0_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: stack, JMP (signed), correct bounds") +__success +__naked void stack_jmp_signed_correct_bounds(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -64; \ + r0 = 0; \ + *(u64*)(r10 - 64) = r0; \ + *(u64*)(r10 - 56) = r0; \ + *(u64*)(r10 - 48) = r0; \ + *(u64*)(r10 - 40) = r0; \ + *(u64*)(r10 - 32) = r0; \ + *(u64*)(r10 - 24) = r0; \ + *(u64*)(r10 - 16) = r0; \ + *(u64*)(r10 - 8) = r0; \ + r2 = 16; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + if r2 s> 64 goto l0_%=; \ + r4 = 0; \ + if r4 s>= r2 goto l0_%=; \ + r3 = 0; 
\ + call %[bpf_probe_read_kernel]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: stack, JMP, bounds + offset") +__failure __msg("invalid indirect access to stack R1 off=-64 size=65") +__naked void memory_stack_jmp_bounds_offset(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + 8); \ + r1 = r10; \ + r1 += -64; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + if r2 > 64 goto l0_%=; \ + r4 = 0; \ + if r4 >= r2 goto l0_%=; \ + r2 += 1; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: stack, JMP, wrong max") +__failure __msg("invalid indirect access to stack R1 off=-64 size=65") +__naked void memory_stack_jmp_wrong_max(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + 8); \ + r1 = r10; \ + r1 += -64; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + if r2 > 65 goto l0_%=; \ + r4 = 0; \ + if r4 >= r2 goto l0_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: stack, JMP, no max check") +__failure +/* because max wasn't checked, signed min is negative */ +__msg("R2 min value is negative, either use unsigned or 'var &= const'") +__naked void stack_jmp_no_max_check(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + 8); \ + r1 = r10; \ + r1 += -64; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + r4 = 0; \ + if r4 >= r2 goto l0_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("socket") +__description("helper access to variable memory: stack, JMP, no min check") +/* in privileged mode reads from uninitialized stack locations are permitted */ +__success __failure_unpriv +__msg_unpriv("invalid indirect read from stack R2 off -64+0 size 64") +__retval(0) +__naked void stack_jmp_no_min_check(void) +{ + asm volatile (" \ + /* set max stack size */ \ + r6 = 0; \ + *(u64*)(r10 - 128) = r6; \ + /* set r3 to a random value */ \ + call %[bpf_get_prandom_u32]; \ + r3 = r0; \ + /* use JMP to limit r3 range to [0, 64] */ \ + if r3 > 64 goto l0_%=; \ + r1 = %[map_ringbuf] ll; \ + r2 = r10; \ + r2 += -64; \ + r4 = 0; \ + /* Call bpf_ringbuf_output(), it is one of a few helper functions with\ + * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.\ + * For unpriv this should signal an error, because memory at &fp[-64] is\ + * not initialized. 
\ + */ \ + call %[bpf_ringbuf_output]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_ringbuf_output), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: stack, JMP (signed), no min check") +__failure __msg("R2 min value is negative") +__naked void jmp_signed_no_min_check(void) +{ + asm volatile (" \ + r2 = *(u64*)(r1 + 8); \ + r1 = r10; \ + r1 += -64; \ + *(u64*)(r1 - 128) = r2; \ + r2 = *(u64*)(r1 - 128); \ + if r2 s> 64 goto l0_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: map, JMP, correct bounds") +__success +__naked void memory_map_jmp_correct_bounds(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = %[sizeof_test_val]; \ + *(u64*)(r10 - 128) = r2; \ + r2 = *(u64*)(r10 - 128); \ + if r2 s> %[sizeof_test_val] goto l1_%=; \ + r4 = 0; \ + if r4 s>= r2 goto l1_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l1_%=: r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(sizeof_test_val, sizeof(struct test_val)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: map, JMP, wrong max") +__failure __msg("invalid access to map value, value_size=48 off=0 size=49") +__naked void memory_map_jmp_wrong_max(void) +{ + asm volatile (" \ + r6 = *(u64*)(r1 + 8); \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = r6; \ + *(u64*)(r10 - 128) = r2; \ + r2 = *(u64*)(r10 - 128); \ + if r2 s> %[__imm_0] goto l1_%=; \ + r4 = 0; \ + if r4 s>= r2 goto l1_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l1_%=: r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) + 1) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: map adjusted, JMP, correct bounds") +__success +__naked void map_adjusted_jmp_correct_bounds(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r1 += 20; \ + r2 = %[sizeof_test_val]; \ + *(u64*)(r10 - 128) = r2; \ + r2 = *(u64*)(r10 - 128); \ + if r2 s> %[__imm_0] goto l1_%=; \ + r4 = 0; \ + if r4 s>= r2 goto l1_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l1_%=: r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - 20), + __imm_const(sizeof_test_val, sizeof(struct test_val)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: map adjusted, JMP, wrong max") +__failure __msg("R1 min value is outside of the allowed memory range") +__naked void map_adjusted_jmp_wrong_max(void) +{ + asm volatile (" \ + r6 = *(u64*)(r1 + 8); \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 
goto l0_%=; \ + r1 = r0; \ + r1 += 20; \ + r2 = r6; \ + *(u64*)(r10 - 128) = r2; \ + r2 = *(u64*)(r10 - 128); \ + if r2 s> %[__imm_0] goto l1_%=; \ + r4 = 0; \ + if r4 s>= r2 goto l1_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l1_%=: r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - 19) + : __clobber_all); +} + +SEC("tc") +__description("helper access to variable memory: size = 0 allowed on NULL (ARG_PTR_TO_MEM_OR_NULL)") +__success __retval(0) +__naked void ptr_to_mem_or_null_1(void) +{ + asm volatile (" \ + r1 = 0; \ + r2 = 0; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ + exit; \ +" : + : __imm(bpf_csum_diff) + : __clobber_all); +} + +SEC("tc") +__description("helper access to variable memory: size > 0 not allowed on NULL (ARG_PTR_TO_MEM_OR_NULL)") +__failure __msg("R1 type=scalar expected=fp") +__naked void ptr_to_mem_or_null_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + 0); \ + r1 = 0; \ + *(u64*)(r10 - 128) = r2; \ + r2 = *(u64*)(r10 - 128); \ + r2 &= 64; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ + exit; \ +" : + : __imm(bpf_csum_diff) + : __clobber_all); +} + +SEC("tc") +__description("helper access to variable memory: size = 0 allowed on != NULL stack pointer (ARG_PTR_TO_MEM_OR_NULL)") +__success __retval(0) +__naked void ptr_to_mem_or_null_3(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -8; \ + r2 = 0; \ + *(u64*)(r1 + 0) = r2; \ + r2 &= 8; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ + exit; \ +" : + : __imm(bpf_csum_diff) + : __clobber_all); +} + +SEC("tc") +__description("helper access to variable memory: size = 0 allowed on != NULL map pointer (ARG_PTR_TO_MEM_OR_NULL)") +__success __retval(0) +__naked void ptr_to_mem_or_null_4(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = 0; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: exit; \ +" : + : __imm(bpf_csum_diff), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("helper access to variable memory: size possible = 0 allowed on != NULL stack pointer (ARG_PTR_TO_MEM_OR_NULL)") +__success __retval(0) +__naked void ptr_to_mem_or_null_5(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = *(u64*)(r0 + 0); \ + if r2 > 8 goto l0_%=; \ + r1 = r10; \ + r1 += -8; \ + *(u64*)(r1 + 0) = r2; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: exit; \ +" : + : __imm(bpf_csum_diff), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("helper access to variable memory: size possible = 0 allowed on != NULL map pointer (ARG_PTR_TO_MEM_OR_NULL)") +__success __retval(0) +__naked void ptr_to_mem_or_null_6(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = *(u64*)(r0 + 0); \ + if r2 > 8 goto l0_%=; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: exit; \ +" : + : __imm(bpf_csum_diff), + __imm(bpf_map_lookup_elem), + 
__imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("helper access to variable memory: size possible = 0 allowed on != NULL packet pointer (ARG_PTR_TO_MEM_OR_NULL)") +__success __retval(0) +/* csum_diff of 64-byte packet */ +__flag(BPF_F_ANY_ALIGNMENT) +__naked void ptr_to_mem_or_null_7(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r0 = r6; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r1 = r6; \ + r2 = *(u64*)(r6 + 0); \ + if r2 > 8 goto l0_%=; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: size = 0 not allowed on NULL (!ARG_PTR_TO_MEM_OR_NULL)") +__failure __msg("R1 type=scalar expected=fp") +__naked void ptr_to_mem_or_null_8(void) +{ + asm volatile (" \ + r1 = 0; \ + r2 = 0; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: size > 0 not allowed on NULL (!ARG_PTR_TO_MEM_OR_NULL)") +__failure __msg("R1 type=scalar expected=fp") +__naked void ptr_to_mem_or_null_9(void) +{ + asm volatile (" \ + r1 = 0; \ + r2 = 1; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: size = 0 allowed on != NULL stack pointer (!ARG_PTR_TO_MEM_OR_NULL)") +__success +__naked void ptr_to_mem_or_null_10(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -8; \ + r2 = 0; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: size = 0 allowed on != NULL map pointer (!ARG_PTR_TO_MEM_OR_NULL)") +__success +__naked void ptr_to_mem_or_null_11(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = 0; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: size possible = 0 allowed on != NULL stack pointer (!ARG_PTR_TO_MEM_OR_NULL)") +__success +__naked void ptr_to_mem_or_null_12(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = *(u64*)(r0 + 0); \ + if r2 > 8 goto l0_%=; \ + r1 = r10; \ + r1 += -8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: size possible = 0 allowed on != NULL map pointer (!ARG_PTR_TO_MEM_OR_NULL)") +__success +__naked void ptr_to_mem_or_null_13(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 
= r0; \ + r2 = *(u64*)(r0 + 0); \ + if r2 > 8 goto l0_%=; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("helper access to variable memory: 8 bytes leak") +/* in privileged mode reads from uninitialized stack locations are permitted */ +__success __failure_unpriv +__msg_unpriv("invalid indirect read from stack R2 off -64+32 size 64") +__retval(0) +__naked void variable_memory_8_bytes_leak(void) +{ + asm volatile (" \ + /* set max stack size */ \ + r6 = 0; \ + *(u64*)(r10 - 128) = r6; \ + /* set r3 to a random value */ \ + call %[bpf_get_prandom_u32]; \ + r3 = r0; \ + r1 = %[map_ringbuf] ll; \ + r2 = r10; \ + r2 += -64; \ + r0 = 0; \ + *(u64*)(r10 - 64) = r0; \ + *(u64*)(r10 - 56) = r0; \ + *(u64*)(r10 - 48) = r0; \ + *(u64*)(r10 - 40) = r0; \ + /* Note: fp[-32] left uninitialized */ \ + *(u64*)(r10 - 24) = r0; \ + *(u64*)(r10 - 16) = r0; \ + *(u64*)(r10 - 8) = r0; \ + /* Limit r3 range to [1, 64] */ \ + r3 &= 63; \ + r3 += 1; \ + r4 = 0; \ + /* Call bpf_ringbuf_output(), it is one of a few helper functions with\ + * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode.\ + * For unpriv this should signal an error, because memory region [1, 64]\ + * at &fp[-64] is not fully initialized. \ + */ \ + call %[bpf_ringbuf_output]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_get_prandom_u32), + __imm(bpf_ringbuf_output), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to variable memory: 8 bytes no leak (init memory)") +__success +__naked void bytes_no_leak_init_memory(void) +{ + asm volatile (" \ + r1 = r10; \ + r0 = 0; \ + r0 = 0; \ + *(u64*)(r10 - 64) = r0; \ + *(u64*)(r10 - 56) = r0; \ + *(u64*)(r10 - 48) = r0; \ + *(u64*)(r10 - 40) = r0; \ + *(u64*)(r10 - 32) = r0; \ + *(u64*)(r10 - 24) = r0; \ + *(u64*)(r10 - 16) = r0; \ + *(u64*)(r10 - 8) = r0; \ + r1 += -64; \ + r2 = 0; \ + r2 &= 32; \ + r2 += 32; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ + r1 = *(u64*)(r10 - 16); \ + exit; \ +" : + : __imm(bpf_probe_read_kernel) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_helper_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_helper_packet_access.c new file mode 100644 index 000000000000..74f5f9cd153d --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_helper_packet_access.c @@ -0,0 +1,550 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/helper_packet_access.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("xdp") +__description("helper access to packet: test1, valid packet_ptr range") +__success __retval(0) +__naked void test1_valid_packet_ptr_range(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ + r1 = %[map_hash_8b] ll; \ + r3 = r2; \ + r4 = 0; \ + call %[bpf_map_update_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_update_elem), + __imm_addr(map_hash_8b), + __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + 
+SEC("xdp") +__description("helper access to packet: test2, unchecked packet_ptr") +__failure __msg("invalid access to packet") +__naked void packet_test2_unchecked_packet_ptr(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(xdp_md_data, offsetof(struct xdp_md, data)) + : __clobber_all); +} + +SEC("xdp") +__description("helper access to packet: test3, variable add") +__success __retval(0) +__naked void to_packet_test3_variable_add(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r4 = r2; \ + r4 += 8; \ + if r4 > r3 goto l0_%=; \ + r5 = *(u8*)(r2 + 0); \ + r4 = r2; \ + r4 += r5; \ + r5 = r4; \ + r5 += 8; \ + if r5 > r3 goto l0_%=; \ + r1 = %[map_hash_8b] ll; \ + r2 = r4; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("helper access to packet: test4, packet_ptr with bad range") +__failure __msg("invalid access to packet") +__naked void packet_ptr_with_bad_range_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r4 = r2; \ + r4 += 4; \ + if r4 > r3 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("helper access to packet: test5, packet_ptr with too short range") +__failure __msg("invalid access to packet") +__naked void ptr_with_too_short_range_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r2 += 1; \ + r4 = r2; \ + r4 += 7; \ + if r4 > r3 goto l0_%=; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test6, cls valid packet_ptr range") +__success __retval(0) +__naked void cls_valid_packet_ptr_range(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ + r1 = %[map_hash_8b] ll; \ + r3 = r2; \ + r4 = 0; \ + call %[bpf_map_update_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_update_elem), + __imm_addr(map_hash_8b), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test7, cls unchecked packet_ptr") +__failure __msg("invalid access to packet") +__naked void test7_cls_unchecked_packet_ptr(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + 
__imm_addr(map_hash_8b), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test8, cls variable add") +__success __retval(0) +__naked void packet_test8_cls_variable_add(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r4 = r2; \ + r4 += 8; \ + if r4 > r3 goto l0_%=; \ + r5 = *(u8*)(r2 + 0); \ + r4 = r2; \ + r4 += r5; \ + r5 = r4; \ + r5 += 8; \ + if r5 > r3 goto l0_%=; \ + r1 = %[map_hash_8b] ll; \ + r2 = r4; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test9, cls packet_ptr with bad range") +__failure __msg("invalid access to packet") +__naked void packet_ptr_with_bad_range_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r4 = r2; \ + r4 += 4; \ + if r4 > r3 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test10, cls packet_ptr with too short range") +__failure __msg("invalid access to packet") +__naked void ptr_with_too_short_range_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r2 += 1; \ + r4 = r2; \ + r4 += 7; \ + if r4 > r3 goto l0_%=; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test11, cls unsuitable helper 1") +__failure __msg("helper access to the packet") +__naked void test11_cls_unsuitable_helper_1(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r3 = r6; \ + r3 += 7; \ + if r3 > r7 goto l0_%=; \ + r2 = 0; \ + r4 = 42; \ + r5 = 0; \ + call %[bpf_skb_store_bytes]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_skb_store_bytes), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test12, cls unsuitable helper 2") +__failure __msg("helper access to the packet") +__naked void test12_cls_unsuitable_helper_2(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r3 = r6; \ + r6 += 8; \ + if r6 > r7 goto l0_%=; \ + r2 = 0; \ + r4 = 4; \ + call %[bpf_skb_load_bytes]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_skb_load_bytes), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper 
access to packet: test13, cls helper ok") +__success __retval(0) +__naked void packet_test13_cls_helper_ok(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r1 = r6; \ + r2 = 4; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test14, cls helper ok sub") +__success __retval(0) +__naked void test14_cls_helper_ok_sub(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r1 -= 4; \ + r2 = 4; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test15, cls helper fail sub") +__failure __msg("invalid access to packet") +__naked void test15_cls_helper_fail_sub(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r1 -= 12; \ + r2 = 4; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test16, cls helper fail range 1") +__failure __msg("invalid access to packet") +__naked void cls_helper_fail_range_1(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r1 = r6; \ + r2 = 8; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test17, cls helper fail range 2") +__failure __msg("R2 min value is negative") +__naked void cls_helper_fail_range_2(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r1 = r6; \ + r2 = -9; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test18, cls helper fail range 3") +__failure __msg("R2 min value is negative") +__naked void cls_helper_fail_range_3(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 
goto l0_%=; \ + r1 = r6; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__imm_0, ~0), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test19, cls helper range zero") +__success __retval(0) +__naked void test19_cls_helper_range_zero(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r1 = r6; \ + r2 = 0; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test20, pkt end as input") +__failure __msg("R1 type=pkt_end expected=fp") +__naked void test20_pkt_end_as_input(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r1 = r7; \ + r2 = 4; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("helper access to packet: test21, wrong reg") +__failure __msg("invalid access to packet") +__naked void to_packet_test21_wrong_reg(void) +{ + asm volatile (" \ + r6 = *(u32*)(r1 + %[__sk_buff_data]); \ + r7 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r6 += 1; \ + r1 = r6; \ + r1 += 7; \ + if r1 > r7 goto l0_%=; \ + r2 = 4; \ + r3 = 0; \ + r4 = 0; \ + r5 = 0; \ + call %[bpf_csum_diff]; \ + r0 = 0; \ +l0_%=: exit; \ +" : + : __imm(bpf_csum_diff), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_helper_restricted.c b/tools/testing/selftests/bpf/progs/verifier_helper_restricted.c new file mode 100644 index 000000000000..0ede0ccd090c --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_helper_restricted.c @@ -0,0 +1,279 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/helper_restricted.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct val { + int cnt; + struct bpf_spin_lock l; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct val); +} map_spin_lock SEC(".maps"); + +struct timer { + struct bpf_timer t; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct timer); +} map_timer SEC(".maps"); + +SEC("kprobe") +__description("bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_KPROBE") +__failure __msg("unknown func bpf_ktime_get_coarse_ns") +__naked void in_bpf_prog_type_kprobe_1(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_coarse_ns]; \ + r0 = 0; \ + exit; \ +" : + : 
__imm(bpf_ktime_get_coarse_ns) + : __clobber_all); +} + +SEC("tracepoint") +__description("bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_TRACEPOINT") +__failure __msg("unknown func bpf_ktime_get_coarse_ns") +__naked void in_bpf_prog_type_tracepoint_1(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_coarse_ns]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_coarse_ns) + : __clobber_all); +} + +SEC("perf_event") +__description("bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_PERF_EVENT") +__failure __msg("unknown func bpf_ktime_get_coarse_ns") +__naked void bpf_prog_type_perf_event_1(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_coarse_ns]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_coarse_ns) + : __clobber_all); +} + +SEC("raw_tracepoint") +__description("bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_RAW_TRACEPOINT") +__failure __msg("unknown func bpf_ktime_get_coarse_ns") +__naked void bpf_prog_type_raw_tracepoint_1(void) +{ + asm volatile (" \ + call %[bpf_ktime_get_coarse_ns]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_coarse_ns) + : __clobber_all); +} + +SEC("kprobe") +__description("bpf_timer_init isn restricted in BPF_PROG_TYPE_KPROBE") +__failure __msg("tracing progs cannot use bpf_timer yet") +__naked void in_bpf_prog_type_kprobe_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_timer] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = %[map_timer] ll; \ + r3 = 1; \ +l0_%=: call %[bpf_timer_init]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_timer_init), + __imm_addr(map_timer) + : __clobber_all); +} + +SEC("perf_event") +__description("bpf_timer_init is forbidden in BPF_PROG_TYPE_PERF_EVENT") +__failure __msg("tracing progs cannot use bpf_timer yet") +__naked void bpf_prog_type_perf_event_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_timer] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = %[map_timer] ll; \ + r3 = 1; \ +l0_%=: call %[bpf_timer_init]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_timer_init), + __imm_addr(map_timer) + : __clobber_all); +} + +SEC("tracepoint") +__description("bpf_timer_init is forbidden in BPF_PROG_TYPE_TRACEPOINT") +__failure __msg("tracing progs cannot use bpf_timer yet") +__naked void in_bpf_prog_type_tracepoint_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_timer] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = %[map_timer] ll; \ + r3 = 1; \ +l0_%=: call %[bpf_timer_init]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_timer_init), + __imm_addr(map_timer) + : __clobber_all); +} + +SEC("raw_tracepoint") +__description("bpf_timer_init is forbidden in BPF_PROG_TYPE_RAW_TRACEPOINT") +__failure __msg("tracing progs cannot use bpf_timer yet") +__naked void bpf_prog_type_raw_tracepoint_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_timer] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = %[map_timer] ll; \ + r3 = 1; \ +l0_%=: call %[bpf_timer_init]; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_timer_init), + __imm_addr(map_timer) + : __clobber_all); +} + +SEC("kprobe") +__description("bpf_spin_lock is forbidden in BPF_PROG_TYPE_KPROBE") +__failure 
__msg("tracing progs cannot use bpf_spin_lock yet") +__naked void in_bpf_prog_type_kprobe_3(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + call %[bpf_spin_lock]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("tracepoint") +__description("bpf_spin_lock is forbidden in BPF_PROG_TYPE_TRACEPOINT") +__failure __msg("tracing progs cannot use bpf_spin_lock yet") +__naked void in_bpf_prog_type_tracepoint_3(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + call %[bpf_spin_lock]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("perf_event") +__description("bpf_spin_lock is forbidden in BPF_PROG_TYPE_PERF_EVENT") +__failure __msg("tracing progs cannot use bpf_spin_lock yet") +__naked void bpf_prog_type_perf_event_3(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + call %[bpf_spin_lock]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +SEC("raw_tracepoint") +__description("bpf_spin_lock is forbidden in BPF_PROG_TYPE_RAW_TRACEPOINT") +__failure __msg("tracing progs cannot use bpf_spin_lock yet") +__naked void bpf_prog_type_raw_tracepoint_3(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_spin_lock] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + call %[bpf_spin_lock]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_spin_lock), + __imm_addr(map_spin_lock) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_helper_value_access.c b/tools/testing/selftests/bpf/progs/verifier_helper_value_access.c new file mode 100644 index 000000000000..692216c0ad3d --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_helper_value_access.c @@ -0,0 +1,1245 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/helper_value_access.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct other_val { + long long foo; + long long bar; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct other_val); +} map_hash_16b SEC(".maps"); + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("tracepoint") +__description("helper access to map: full range") +__success +__naked void access_to_map_full_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call 
%[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = %[sizeof_test_val]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(sizeof_test_val, sizeof(struct test_val)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: partial range") +__success +__naked void access_to_map_partial_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: empty range") +__failure __msg("invalid access to map value, value_size=48 off=0 size=0") +__naked void access_to_map_empty_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = 0; \ + call %[bpf_trace_printk]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_trace_printk), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: out-of-bound range") +__failure __msg("invalid access to map value, value_size=48 off=0 size=56") +__naked void map_out_of_bound_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) + 8) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: negative range") +__failure __msg("R2 min value is negative") +__naked void access_to_map_negative_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r2 = -8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const imm): full range") +__success +__naked void via_const_imm_full_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r1 += %[test_val_foo]; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - offsetof(struct test_val, foo)), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const imm): partial range") +__success +__naked void via_const_imm_partial_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + 
*(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r1 += %[test_val_foo]; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const imm): empty range") +__failure __msg("invalid access to map value, value_size=48 off=4 size=0") +__naked void via_const_imm_empty_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r1 += %[test_val_foo]; \ + r2 = 0; \ + call %[bpf_trace_printk]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_trace_printk), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const imm): out-of-bound range") +__failure __msg("invalid access to map value, value_size=48 off=4 size=52") +__naked void imm_out_of_bound_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r1 += %[test_val_foo]; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - offsetof(struct test_val, foo) + 8), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const imm): negative range (> adjustment)") +__failure __msg("R2 min value is negative") +__naked void const_imm_negative_range_adjustment_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r1 += %[test_val_foo]; \ + r2 = -8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const imm): negative range (< adjustment)") +__failure __msg("R2 min value is negative") +__naked void const_imm_negative_range_adjustment_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r1 += %[test_val_foo]; \ + r2 = -1; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const reg): full range") +__success +__naked void via_const_reg_full_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call 
%[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = %[test_val_foo]; \ + r1 += r3; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - offsetof(struct test_val, foo)), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const reg): partial range") +__success +__naked void via_const_reg_partial_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = %[test_val_foo]; \ + r1 += r3; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const reg): empty range") +__failure __msg("R1 min value is outside of the allowed memory range") +__naked void via_const_reg_empty_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = 0; \ + r1 += r3; \ + r2 = 0; \ + call %[bpf_trace_printk]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_trace_printk), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const reg): out-of-bound range") +__failure __msg("invalid access to map value, value_size=48 off=4 size=52") +__naked void reg_out_of_bound_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = %[test_val_foo]; \ + r1 += r3; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - offsetof(struct test_val, foo) + 8), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const reg): negative range (> adjustment)") +__failure __msg("R2 min value is negative") +__naked void const_reg_negative_range_adjustment_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = %[test_val_foo]; \ + r1 += r3; \ + r2 = -8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via const reg): negative range (< adjustment)") +__failure __msg("R2 min value is negative") +__naked void const_reg_negative_range_adjustment_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + 
call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = %[test_val_foo]; \ + r1 += r3; \ + r2 = -1; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via variable): full range") +__success +__naked void map_via_variable_full_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 > %[test_val_foo] goto l0_%=; \ + r1 += r3; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - offsetof(struct test_val, foo)), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via variable): partial range") +__success +__naked void map_via_variable_partial_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 > %[test_val_foo] goto l0_%=; \ + r1 += r3; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via variable): empty range") +__failure __msg("R1 min value is outside of the allowed memory range") +__naked void map_via_variable_empty_range(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 > %[test_val_foo] goto l0_%=; \ + r1 += r3; \ + r2 = 0; \ + call %[bpf_trace_printk]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_trace_printk), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via variable): no max check") +__failure __msg("R1 unbounded memory access") +__naked void via_variable_no_max_check_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + r1 += r3; \ + r2 = 1; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to adjusted map (via variable): wrong max check") +__failure __msg("invalid access to map value, value_size=48 off=4 size=45") +__naked void via_variable_wrong_max_check_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 
== 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 > %[test_val_foo] goto l0_%=; \ + r1 += r3; \ + r2 = %[__imm_0]; \ + r3 = 0; \ + call %[bpf_probe_read_kernel]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_probe_read_kernel), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, sizeof(struct test_val) - offsetof(struct test_val, foo) + 1), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using <, good access") +__success +__naked void bounds_check_using_good_access_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 < 32 goto l1_%=; \ + r0 = 0; \ +l0_%=: exit; \ +l1_%=: r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using <, bad access") +__failure __msg("R1 unbounded memory access") +__naked void bounds_check_using_bad_access_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 < 32 goto l1_%=; \ + r1 += r3; \ +l0_%=: r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using <=, good access") +__success +__naked void bounds_check_using_good_access_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 <= 32 goto l1_%=; \ + r0 = 0; \ +l0_%=: exit; \ +l1_%=: r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using <=, bad access") +__failure __msg("R1 unbounded memory access") +__naked void bounds_check_using_bad_access_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 <= 32 goto l1_%=; \ + r1 += r3; \ +l0_%=: r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using s<, good access") +__success +__naked void check_using_s_good_access_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 s< 32 goto l1_%=; \ +l2_%=: r0 = 0; \ +l0_%=: exit; \ +l1_%=: if r3 s< 0 goto l2_%=; \ + r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + 
+SEC("tracepoint") +__description("helper access to map: bounds check using s<, good access 2") +__success +__naked void using_s_good_access_2_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 s< 32 goto l1_%=; \ +l2_%=: r0 = 0; \ +l0_%=: exit; \ +l1_%=: if r3 s< -3 goto l2_%=; \ + r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using s<, bad access") +__failure __msg("R1 min value is negative") +__naked void check_using_s_bad_access_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u64*)(r0 + 0); \ + if r3 s< 32 goto l1_%=; \ +l2_%=: r0 = 0; \ +l0_%=: exit; \ +l1_%=: if r3 s< -3 goto l2_%=; \ + r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using s<=, good access") +__success +__naked void check_using_s_good_access_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 s<= 32 goto l1_%=; \ +l2_%=: r0 = 0; \ +l0_%=: exit; \ +l1_%=: if r3 s<= 0 goto l2_%=; \ + r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using s<=, good access 2") +__success +__naked void using_s_good_access_2_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 s<= 32 goto l1_%=; \ +l2_%=: r0 = 0; \ +l0_%=: exit; \ +l1_%=: if r3 s<= -3 goto l2_%=; \ + r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("helper access to map: bounds check using s<=, bad access") +__failure __msg("R1 min value is negative") +__naked void check_using_s_bad_access_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = r0; \ + r3 = *(u64*)(r0 + 0); \ + if r3 s<= 32 goto l1_%=; \ +l2_%=: r0 = 0; \ +l0_%=: exit; \ +l1_%=: if r3 s<= -3 goto l2_%=; \ + r1 += r3; \ + r0 = 0; \ + *(u8*)(r1 + 0) = r0; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("tracepoint") +__description("map lookup helper access to map") +__success +__naked void lookup_helper_access_to_map(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r1 = %[map_hash_16b] ll; 
\ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b) + : __clobber_all); +} + +SEC("tracepoint") +__description("map update helper access to map") +__success +__naked void update_helper_access_to_map(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r4 = 0; \ + r3 = r0; \ + r2 = r0; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_update_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_map_update_elem), + __imm_addr(map_hash_16b) + : __clobber_all); +} + +SEC("tracepoint") +__description("map update helper access to map: wrong size") +__failure __msg("invalid access to map value, value_size=8 off=0 size=16") +__naked void access_to_map_wrong_size(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r4 = 0; \ + r3 = r0; \ + r2 = r0; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_update_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm(bpf_map_update_elem), + __imm_addr(map_hash_16b), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via const imm)") +__success +__naked void adjusted_map_via_const_imm(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r2 += %[other_val_bar]; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b), + __imm_const(other_val_bar, offsetof(struct other_val, bar)) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via const imm): out-of-bound 1") +__failure __msg("invalid access to map value, value_size=16 off=12 size=8") +__naked void imm_out_of_bound_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r2 += %[__imm_0]; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b), + __imm_const(__imm_0, sizeof(struct other_val) - 4) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via const imm): out-of-bound 2") +__failure __msg("invalid access to map value, value_size=16 off=-4 size=8") +__naked void imm_out_of_bound_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r2 += -4; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via const reg)") +__success +__naked void adjusted_map_via_const_reg(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r3 = %[other_val_bar]; \ + r2 += r3; \ + r1 = 
%[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b), + __imm_const(other_val_bar, offsetof(struct other_val, bar)) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via const reg): out-of-bound 1") +__failure __msg("invalid access to map value, value_size=16 off=12 size=8") +__naked void reg_out_of_bound_1(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r3 = %[__imm_0]; \ + r2 += r3; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b), + __imm_const(__imm_0, sizeof(struct other_val) - 4) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via const reg): out-of-bound 2") +__failure __msg("invalid access to map value, value_size=16 off=-4 size=8") +__naked void reg_out_of_bound_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r3 = -4; \ + r2 += r3; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via variable)") +__success +__naked void to_adjusted_map_via_variable(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 > %[other_val_bar] goto l0_%=; \ + r2 += r3; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b), + __imm_const(other_val_bar, offsetof(struct other_val, bar)) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via variable): no max check") +__failure +__msg("R2 unbounded memory access, make sure to bounds check any such access") +__naked void via_variable_no_max_check_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r3 = *(u32*)(r0 + 0); \ + r2 += r3; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b) + : __clobber_all); +} + +SEC("tracepoint") +__description("map helper access to adjusted map (via variable): wrong max check") +__failure __msg("invalid access to map value, value_size=16 off=9 size=8") +__naked void via_variable_wrong_max_check_2(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r2 = r0; \ + r3 = *(u32*)(r0 + 0); \ + if r3 > %[__imm_0] goto l0_%=; \ + r2 += r3; \ + r1 = %[map_hash_16b] ll; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b), + __imm_const(__imm_0, offsetof(struct other_val, bar) + 1) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git 
a/tools/testing/selftests/bpf/progs/verifier_int_ptr.c b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c new file mode 100644 index 000000000000..b054f9c48143 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_int_ptr.c @@ -0,0 +1,157 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/int_ptr.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("cgroup/sysctl") +__description("ARG_PTR_TO_LONG uninitialized") +__failure __msg("invalid indirect read from stack R4 off -16+0 size 8") +__naked void arg_ptr_to_long_uninitialized(void) +{ + asm volatile (" \ + /* bpf_strtoul arg1 (buf) */ \ + r7 = r10; \ + r7 += -8; \ + r0 = 0x00303036; \ + *(u64*)(r7 + 0) = r0; \ + r1 = r7; \ + /* bpf_strtoul arg2 (buf_len) */ \ + r2 = 4; \ + /* bpf_strtoul arg3 (flags) */ \ + r3 = 0; \ + /* bpf_strtoul arg4 (res) */ \ + r7 += -8; \ + r4 = r7; \ + /* bpf_strtoul() */ \ + call %[bpf_strtoul]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_strtoul) + : __clobber_all); +} + +SEC("socket") +__description("ARG_PTR_TO_LONG half-uninitialized") +/* in privileged mode reads from uninitialized stack locations are permitted */ +__success __failure_unpriv +__msg_unpriv("invalid indirect read from stack R4 off -16+4 size 8") +__retval(0) +__naked void ptr_to_long_half_uninitialized(void) +{ + asm volatile (" \ + /* bpf_strtoul arg1 (buf) */ \ + r7 = r10; \ + r7 += -8; \ + r0 = 0x00303036; \ + *(u64*)(r7 + 0) = r0; \ + r1 = r7; \ + /* bpf_strtoul arg2 (buf_len) */ \ + r2 = 4; \ + /* bpf_strtoul arg3 (flags) */ \ + r3 = 0; \ + /* bpf_strtoul arg4 (res) */ \ + r7 += -8; \ + *(u32*)(r7 + 0) = r0; \ + r4 = r7; \ + /* bpf_strtoul() */ \ + call %[bpf_strtoul]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_strtoul) + : __clobber_all); +} + +SEC("cgroup/sysctl") +__description("ARG_PTR_TO_LONG misaligned") +__failure __msg("misaligned stack access off (0x0; 0x0)+-20+0 size 8") +__naked void arg_ptr_to_long_misaligned(void) +{ + asm volatile (" \ + /* bpf_strtoul arg1 (buf) */ \ + r7 = r10; \ + r7 += -8; \ + r0 = 0x00303036; \ + *(u64*)(r7 + 0) = r0; \ + r1 = r7; \ + /* bpf_strtoul arg2 (buf_len) */ \ + r2 = 4; \ + /* bpf_strtoul arg3 (flags) */ \ + r3 = 0; \ + /* bpf_strtoul arg4 (res) */ \ + r7 += -12; \ + r0 = 0; \ + *(u32*)(r7 + 0) = r0; \ + *(u64*)(r7 + 4) = r0; \ + r4 = r7; \ + /* bpf_strtoul() */ \ + call %[bpf_strtoul]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_strtoul) + : __clobber_all); +} + +SEC("cgroup/sysctl") +__description("ARG_PTR_TO_LONG size < sizeof(long)") +__failure __msg("invalid indirect access to stack R4 off=-4 size=8") +__naked void to_long_size_sizeof_long(void) +{ + asm volatile (" \ + /* bpf_strtoul arg1 (buf) */ \ + r7 = r10; \ + r7 += -16; \ + r0 = 0x00303036; \ + *(u64*)(r7 + 0) = r0; \ + r1 = r7; \ + /* bpf_strtoul arg2 (buf_len) */ \ + r2 = 4; \ + /* bpf_strtoul arg3 (flags) */ \ + r3 = 0; \ + /* bpf_strtoul arg4 (res) */ \ + r7 += 12; \ + *(u32*)(r7 + 0) = r0; \ + r4 = r7; \ + /* bpf_strtoul() */ \ + call %[bpf_strtoul]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_strtoul) + : __clobber_all); +} + +SEC("cgroup/sysctl") +__description("ARG_PTR_TO_LONG initialized") +__success +__naked void arg_ptr_to_long_initialized(void) +{ + asm volatile (" \ + /* bpf_strtoul arg1 (buf) */ \ + r7 = r10; \ + r7 += -8; \ + r0 = 0x00303036; \ + *(u64*)(r7 + 0) = r0; \ + r1 = r7; \ + /* bpf_strtoul arg2 (buf_len) */ \ + r2 = 4; \ + /* bpf_strtoul arg3 (flags) */ \ + r3 = 0; \ + /* bpf_strtoul arg4 (res) */ \ + r7 += -8; \ + 
*(u64*)(r7 + 0) = r0; \ + r4 = r7; \ + /* bpf_strtoul() */ \ + call %[bpf_strtoul]; \ + r0 = 1; \ + exit; \ +" : + : __imm(bpf_strtoul) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_ld_ind.c b/tools/testing/selftests/bpf/progs/verifier_ld_ind.c new file mode 100644 index 000000000000..c925ba9a2e74 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_ld_ind.c @@ -0,0 +1,110 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/ld_ind.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +SEC("socket") +__description("ld_ind: check calling conv, r1") +__failure __msg("R1 !read_ok") +__failure_unpriv +__naked void ind_check_calling_conv_r1(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 1; \ + .8byte %[ld_ind]; \ + r0 = r1; \ + exit; \ +" : + : __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_1, -0x200000)) + : __clobber_all); +} + +SEC("socket") +__description("ld_ind: check calling conv, r2") +__failure __msg("R2 !read_ok") +__failure_unpriv +__naked void ind_check_calling_conv_r2(void) +{ + asm volatile (" \ + r6 = r1; \ + r2 = 1; \ + .8byte %[ld_ind]; \ + r0 = r2; \ + exit; \ +" : + : __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_2, -0x200000)) + : __clobber_all); +} + +SEC("socket") +__description("ld_ind: check calling conv, r3") +__failure __msg("R3 !read_ok") +__failure_unpriv +__naked void ind_check_calling_conv_r3(void) +{ + asm volatile (" \ + r6 = r1; \ + r3 = 1; \ + .8byte %[ld_ind]; \ + r0 = r3; \ + exit; \ +" : + : __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_3, -0x200000)) + : __clobber_all); +} + +SEC("socket") +__description("ld_ind: check calling conv, r4") +__failure __msg("R4 !read_ok") +__failure_unpriv +__naked void ind_check_calling_conv_r4(void) +{ + asm volatile (" \ + r6 = r1; \ + r4 = 1; \ + .8byte %[ld_ind]; \ + r0 = r4; \ + exit; \ +" : + : __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_4, -0x200000)) + : __clobber_all); +} + +SEC("socket") +__description("ld_ind: check calling conv, r5") +__failure __msg("R5 !read_ok") +__failure_unpriv +__naked void ind_check_calling_conv_r5(void) +{ + asm volatile (" \ + r6 = r1; \ + r5 = 1; \ + .8byte %[ld_ind]; \ + r0 = r5; \ + exit; \ +" : + : __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_5, -0x200000)) + : __clobber_all); +} + +SEC("socket") +__description("ld_ind: check calling conv, r7") +__success __success_unpriv __retval(1) +__naked void ind_check_calling_conv_r7(void) +{ + asm volatile (" \ + r6 = r1; \ + r7 = 1; \ + .8byte %[ld_ind]; \ + r0 = r7; \ + exit; \ +" : + : __imm_insn(ld_ind, BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_leak_ptr.c b/tools/testing/selftests/bpf/progs/verifier_leak_ptr.c new file mode 100644 index 000000000000..d153fbe50055 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_leak_ptr.c @@ -0,0 +1,92 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/leak_ptr.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("socket") +__description("leak pointer into ctx 1") +__failure __msg("BPF_ATOMIC stores into R1 ctx is not allowed") 
+__failure_unpriv __msg_unpriv("R2 leaks addr into mem") +__naked void leak_pointer_into_ctx_1(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u64*)(r1 + %[__sk_buff_cb_0]) = r0; \ + r2 = %[map_hash_8b] ll; \ + lock *(u64 *)(r1 + %[__sk_buff_cb_0]) += r2; \ + exit; \ +" : + : __imm_addr(map_hash_8b), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("leak pointer into ctx 2") +__failure __msg("BPF_ATOMIC stores into R1 ctx is not allowed") +__failure_unpriv __msg_unpriv("R10 leaks addr into mem") +__naked void leak_pointer_into_ctx_2(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u64*)(r1 + %[__sk_buff_cb_0]) = r0; \ + lock *(u64 *)(r1 + %[__sk_buff_cb_0]) += r10; \ + exit; \ +" : + : __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("leak pointer into ctx 3") +__success __failure_unpriv __msg_unpriv("R2 leaks addr into ctx") +__retval(0) +__naked void leak_pointer_into_ctx_3(void) +{ + asm volatile (" \ + r0 = 0; \ + r2 = %[map_hash_8b] ll; \ + *(u64*)(r1 + %[__sk_buff_cb_0]) = r2; \ + exit; \ +" : + : __imm_addr(map_hash_8b), + __imm_const(__sk_buff_cb_0, offsetof(struct __sk_buff, cb[0])) + : __clobber_all); +} + +SEC("socket") +__description("leak pointer into map val") +__success __failure_unpriv __msg_unpriv("R6 leaks addr into mem") +__retval(0) +__naked void leak_pointer_into_map_val(void) +{ + asm volatile (" \ + r6 = r1; \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r3 = 0; \ + *(u64*)(r0 + 0) = r3; \ + lock *(u64 *)(r0 + 0) += r6; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_map_ptr.c b/tools/testing/selftests/bpf/progs/verifier_map_ptr.c new file mode 100644 index 000000000000..11a079145966 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_map_ptr.c @@ -0,0 +1,159 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/map_ptr.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); +} map_array_48b SEC(".maps"); + +struct other_val { + long long foo; + long long bar; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct other_val); +} map_hash_16b SEC(".maps"); + +SEC("socket") +__description("bpf_map_ptr: read with negative offset rejected") +__failure __msg("R1 is bpf_array invalid negative access: off=-8") +__failure_unpriv +__msg_unpriv("access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN") +__naked void read_with_negative_offset_rejected(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 = %[map_array_48b] ll; \ + r6 = *(u64*)(r1 - 8); \ + r0 = 1; \ + exit; \ +" : + : __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("bpf_map_ptr: write rejected") +__failure __msg("only read from bpf_array is supported") +__failure_unpriv +__msg_unpriv("access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN") +__naked void bpf_map_ptr_write_rejected(void) +{ + asm volatile (" \ 
+ r0 = 0; \ + *(u64*)(r10 - 8) = r0; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_array_48b] ll; \ + *(u64*)(r1 + 0) = r2; \ + r0 = 1; \ + exit; \ +" : + : __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("bpf_map_ptr: read non-existent field rejected") +__failure +__msg("cannot access ptr member ops with moff 0 in struct bpf_map with off 1 size 4") +__failure_unpriv +__msg_unpriv("access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void read_non_existent_field_rejected(void) +{ + asm volatile (" \ + r6 = 0; \ + r1 = %[map_array_48b] ll; \ + r6 = *(u32*)(r1 + 1); \ + r0 = 1; \ + exit; \ +" : + : __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("bpf_map_ptr: read ops field accepted") +__success __failure_unpriv +__msg_unpriv("access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN") +__retval(1) +__naked void ptr_read_ops_field_accepted(void) +{ + asm volatile (" \ + r6 = 0; \ + r1 = %[map_array_48b] ll; \ + r6 = *(u64*)(r1 + 0); \ + r0 = 1; \ + exit; \ +" : + : __imm_addr(map_array_48b) + : __clobber_all); +} + +SEC("socket") +__description("bpf_map_ptr: r = 0, map_ptr = map_ptr + r") +__success __failure_unpriv +__msg_unpriv("R1 has pointer with unsupported alu operation") +__retval(0) +__naked void map_ptr_map_ptr_r(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u64*)(r10 - 8) = r0; \ + r2 = r10; \ + r2 += -8; \ + r0 = 0; \ + r1 = %[map_hash_16b] ll; \ + r1 += r0; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b) + : __clobber_all); +} + +SEC("socket") +__description("bpf_map_ptr: r = 0, r = r + map_ptr") +__success __failure_unpriv +__msg_unpriv("R0 has pointer with unsupported alu operation") +__retval(0) +__naked void _0_r_r_map_ptr(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u64*)(r10 - 8) = r0; \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + r0 = %[map_hash_16b] ll; \ + r1 += r0; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_16b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_map_ret_val.c b/tools/testing/selftests/bpf/progs/verifier_map_ret_val.c new file mode 100644 index 000000000000..1639628b832d --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_map_ret_val.c @@ -0,0 +1,110 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/map_ret_val.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("socket") +__description("invalid map_fd for function call") +__failure __msg("fd 0 is not pointing to valid bpf_map") +__failure_unpriv +__naked void map_fd_for_function_call(void) +{ + asm volatile (" \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + r2 = r10; \ + r2 += -8; \ + .8byte %[ld_map_fd]; \ + .8byte 0; \ + call %[bpf_map_delete_elem]; \ + exit; \ +" : + : __imm(bpf_map_delete_elem), + __imm_insn(ld_map_fd, BPF_RAW_INSN(BPF_LD | BPF_DW | BPF_IMM, BPF_REG_1, BPF_PSEUDO_MAP_FD, 0, 0)) + : __clobber_all); +} + +SEC("socket") +__description("don't check return value before access") +__failure __msg("R0 invalid mem access 'map_value_or_null'") +__failure_unpriv +__naked void 
check_return_value_before_access(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r1 = 0; \ + *(u64*)(r0 + 0) = r1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("access memory with incorrect alignment") +__failure __msg("misaligned value access") +__failure_unpriv +__flag(BPF_F_STRICT_ALIGNMENT) +__naked void access_memory_with_incorrect_alignment_1(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u64*)(r0 + 4) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("sometimes access memory with incorrect alignment") +__failure __msg("R0 invalid mem access") +__msg_unpriv("R0 leaks addr") +__flag(BPF_F_STRICT_ALIGNMENT) +__naked void access_memory_with_incorrect_alignment_2(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u64*)(r0 + 0) = r1; \ + exit; \ +l0_%=: r1 = 1; \ + *(u64*)(r0 + 0) = r1; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_masking.c b/tools/testing/selftests/bpf/progs/verifier_masking.c new file mode 100644 index 000000000000..5732cc1b4c47 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_masking.c @@ -0,0 +1,410 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/masking.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("socket") +__description("masking, test out of bounds 1") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_1(void) +{ + asm volatile (" \ + w1 = 5; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 5 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 2") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_2(void) +{ + asm volatile (" \ + w1 = 1; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 1 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 3") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_3(void) +{ + asm volatile (" \ + w1 = 0xffffffff; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 0xffffffff - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 4") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_4(void) +{ + asm volatile (" \ + w1 = 0xffffffff; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 1 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 5") +__success __success_unpriv __retval(0) +__naked void 
test_out_of_bounds_5(void) +{ + asm volatile (" \ + w1 = -1; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 1 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 6") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_6(void) +{ + asm volatile (" \ + w1 = -1; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 0xffffffff - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 7") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_7(void) +{ + asm volatile (" \ + r1 = 5; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 5 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 8") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_8(void) +{ + asm volatile (" \ + r1 = 1; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 1 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 9") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_9(void) +{ + asm volatile (" \ + r1 = 0xffffffff; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 0xffffffff - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 10") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_10(void) +{ + asm volatile (" \ + r1 = 0xffffffff; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 1 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 11") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_11(void) +{ + asm volatile (" \ + r1 = -1; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 1 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test out of bounds 12") +__success __success_unpriv __retval(0) +__naked void test_out_of_bounds_12(void) +{ + asm volatile (" \ + r1 = -1; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 0xffffffff - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test in bounds 1") +__success __success_unpriv __retval(4) +__naked void masking_test_in_bounds_1(void) +{ + asm volatile (" \ + w1 = 4; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 5 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test in bounds 2") +__success __success_unpriv __retval(0) +__naked void masking_test_in_bounds_2(void) +{ + asm volatile (" \ + w1 = 0; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 0xffffffff - 1) + : __clobber_all); +} + +SEC("socket") 
+__description("masking, test in bounds 3") +__success __success_unpriv __retval(0xfffffffe) +__naked void masking_test_in_bounds_3(void) +{ + asm volatile (" \ + w1 = 0xfffffffe; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 0xffffffff - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test in bounds 4") +__success __success_unpriv __retval(0xabcde) +__naked void masking_test_in_bounds_4(void) +{ + asm volatile (" \ + w1 = 0xabcde; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 0xabcdef - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test in bounds 5") +__success __success_unpriv __retval(0) +__naked void masking_test_in_bounds_5(void) +{ + asm volatile (" \ + w1 = 0; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 1 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test in bounds 6") +__success __success_unpriv __retval(46) +__naked void masking_test_in_bounds_6(void) +{ + asm volatile (" \ + w1 = 46; \ + w2 = %[__imm_0]; \ + r2 -= r1; \ + r2 |= r1; \ + r2 = -r2; \ + r2 s>>= 63; \ + r1 &= r2; \ + r0 = r1; \ + exit; \ +" : + : __imm_const(__imm_0, 47 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test in bounds 7") +__success __success_unpriv __retval(46) +__naked void masking_test_in_bounds_7(void) +{ + asm volatile (" \ + r3 = -46; \ + r3 *= -1; \ + w2 = %[__imm_0]; \ + r2 -= r3; \ + r2 |= r3; \ + r2 = -r2; \ + r2 s>>= 63; \ + r3 &= r2; \ + r0 = r3; \ + exit; \ +" : + : __imm_const(__imm_0, 47 - 1) + : __clobber_all); +} + +SEC("socket") +__description("masking, test in bounds 8") +__success __success_unpriv __retval(0) +__naked void masking_test_in_bounds_8(void) +{ + asm volatile (" \ + r3 = -47; \ + r3 *= -1; \ + w2 = %[__imm_0]; \ + r2 -= r3; \ + r2 |= r3; \ + r2 = -r2; \ + r2 s>>= 63; \ + r3 &= r2; \ + r0 = r3; \ + exit; \ +" : + : __imm_const(__imm_0, 47 - 1) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_meta_access.c b/tools/testing/selftests/bpf/progs/verifier_meta_access.c new file mode 100644 index 000000000000..d81722fb5f19 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_meta_access.c @@ -0,0 +1,284 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/meta_access.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("xdp") +__description("meta access, test1") +__success __retval(0) +__naked void meta_access_test1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test2") +__failure __msg("invalid access to packet, off=-8") +__naked void meta_access_test2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r0 = r2; \ + r0 -= 8; \ + r4 = r2; \ + r4 += 8; \ + if r4 > r3 goto l0_%=; \ + r0 = *(u8*)(r0 + 0); \ 
+l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test3") +__failure __msg("invalid access to packet") +__naked void meta_access_test3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r0 = r2; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test4") +__failure __msg("invalid access to packet") +__naked void meta_access_test4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r4 = *(u32*)(r1 + %[xdp_md_data]); \ + r0 = r4; \ + r0 += 8; \ + if r0 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test5") +__failure __msg("R3 !read_ok") +__naked void meta_access_test5(void) +{ + asm volatile (" \ + r3 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r4 = *(u32*)(r1 + %[xdp_md_data]); \ + r0 = r3; \ + r0 += 8; \ + if r0 > r4 goto l0_%=; \ + r2 = -8; \ + call %[bpf_xdp_adjust_meta]; \ + r0 = *(u8*)(r3 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_xdp_adjust_meta), + __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test6") +__failure __msg("invalid access to packet") +__naked void meta_access_test6(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r0 = r3; \ + r0 += 8; \ + r4 = r2; \ + r4 += 8; \ + if r4 > r0 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test7") +__success __retval(0) +__naked void meta_access_test7(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r0 = r3; \ + r0 += 8; \ + r4 = r2; \ + r4 += 8; \ + if r4 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test8") +__success __retval(0) +__naked void meta_access_test8(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r4 = r2; \ + r4 += 0xFFFF; \ + if r4 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test9") +__failure __msg("invalid access to packet") +__naked void meta_access_test9(void) +{ + asm volatile 
(" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r4 = r2; \ + r4 += 0xFFFF; \ + r4 += 1; \ + if r4 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test10") +__failure __msg("invalid access to packet") +__naked void meta_access_test10(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r4 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r5 = 42; \ + r6 = 24; \ + *(u64*)(r10 - 8) = r5; \ + lock *(u64 *)(r10 - 8) += r6; \ + r5 = *(u64*)(r10 - 8); \ + if r5 > 100 goto l0_%=; \ + r3 += r5; \ + r5 = r3; \ + r6 = r2; \ + r6 += 8; \ + if r6 > r5 goto l0_%=; \ + r2 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test11") +__success __retval(0) +__naked void meta_access_test11(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r5 = 42; \ + r6 = 24; \ + *(u64*)(r10 - 8) = r5; \ + lock *(u64 *)(r10 - 8) += r6; \ + r5 = *(u64*)(r10 - 8); \ + if r5 > 100 goto l0_%=; \ + r2 += r5; \ + r5 = r2; \ + r6 = r2; \ + r6 += 8; \ + if r6 > r3 goto l0_%=; \ + r5 = *(u8*)(r5 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("meta access, test12") +__success __retval(0) +__naked void meta_access_test12(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r4 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r5 = r3; \ + r5 += 16; \ + if r5 > r4 goto l0_%=; \ + r0 = *(u8*)(r3 + 0); \ + r5 = r2; \ + r5 += 16; \ + if r5 > r3 goto l0_%=; \ + r0 = *(u8*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_raw_stack.c b/tools/testing/selftests/bpf/progs/verifier_raw_stack.c new file mode 100644 index 000000000000..efbfc3a4ad6a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_raw_stack.c @@ -0,0 +1,371 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/raw_stack.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("tc") +__description("raw_stack: no skb_load_bytes") +__failure __msg("invalid read from stack R6 off=-8 size=8") +__naked void stack_no_skb_load_bytes(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -8; \ + r3 = r6; \ + r4 = 8; \ + /* Call to skb_load_bytes() omitted. 
*/ \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, negative len") +__failure __msg("R4 min value is negative") +__naked void skb_load_bytes_negative_len(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -8; \ + r3 = r6; \ + r4 = -8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, negative len 2") +__failure __msg("R4 min value is negative") +__naked void load_bytes_negative_len_2(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -8; \ + r3 = r6; \ + r4 = %[__imm_0]; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes), + __imm_const(__imm_0, ~0) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, zero len") +__failure __msg("invalid zero-sized read") +__naked void skb_load_bytes_zero_len(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -8; \ + r3 = r6; \ + r4 = 0; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, no init") +__success __retval(0) +__naked void skb_load_bytes_no_init(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -8; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, init") +__success __retval(0) +__naked void stack_skb_load_bytes_init(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -8; \ + r3 = 0xcafe; \ + *(u64*)(r6 + 0) = r3; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, spilled regs around bounds") +__success __retval(0) +__naked void bytes_spilled_regs_around_bounds(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -16; \ + *(u64*)(r6 - 8) = r1; \ + *(u64*)(r6 + 8) = r1; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 - 8); \ + r2 = *(u64*)(r6 + 8); \ + r0 = *(u32*)(r0 + %[__sk_buff_mark]); \ + r2 = *(u32*)(r2 + %[__sk_buff_priority]); \ + r0 += r2; \ + exit; \ +" : + : __imm(bpf_skb_load_bytes), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(__sk_buff_priority, offsetof(struct __sk_buff, priority)) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, spilled regs corruption") +__failure __msg("R0 invalid mem access 'scalar'") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void load_bytes_spilled_regs_corruption(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -8; \ + *(u64*)(r6 + 0) = r1; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + r0 = *(u32*)(r0 + %[__sk_buff_mark]); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, spilled regs corruption 2") +__failure __msg("R3 invalid mem access 'scalar'") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void bytes_spilled_regs_corruption_2(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -16; \ + *(u64*)(r6 - 8) = r1; \ + *(u64*)(r6 + 0) = r1; \ 
+ *(u64*)(r6 + 8) = r1; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 - 8); \ + r2 = *(u64*)(r6 + 8); \ + r3 = *(u64*)(r6 + 0); \ + r0 = *(u32*)(r0 + %[__sk_buff_mark]); \ + r2 = *(u32*)(r2 + %[__sk_buff_priority]); \ + r0 += r2; \ + r3 = *(u32*)(r3 + %[__sk_buff_pkt_type]); \ + r0 += r3; \ + exit; \ +" : + : __imm(bpf_skb_load_bytes), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(__sk_buff_pkt_type, offsetof(struct __sk_buff, pkt_type)), + __imm_const(__sk_buff_priority, offsetof(struct __sk_buff, priority)) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, spilled regs + data") +__success __retval(0) +__naked void load_bytes_spilled_regs_data(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -16; \ + *(u64*)(r6 - 8) = r1; \ + *(u64*)(r6 + 0) = r1; \ + *(u64*)(r6 + 8) = r1; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 - 8); \ + r2 = *(u64*)(r6 + 8); \ + r3 = *(u64*)(r6 + 0); \ + r0 = *(u32*)(r0 + %[__sk_buff_mark]); \ + r2 = *(u32*)(r2 + %[__sk_buff_priority]); \ + r0 += r2; \ + r0 += r3; \ + exit; \ +" : + : __imm(bpf_skb_load_bytes), + __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)), + __imm_const(__sk_buff_priority, offsetof(struct __sk_buff, priority)) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, invalid access 1") +__failure __msg("invalid indirect access to stack R3 off=-513 size=8") +__naked void load_bytes_invalid_access_1(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -513; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, invalid access 2") +__failure __msg("invalid indirect access to stack R3 off=-1 size=8") +__naked void load_bytes_invalid_access_2(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -1; \ + r3 = r6; \ + r4 = 8; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, invalid access 3") +__failure __msg("R4 min value is negative") +__naked void load_bytes_invalid_access_3(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += 0xffffffff; \ + r3 = r6; \ + r4 = 0xffffffff; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, invalid access 4") +__failure +__msg("R4 unbounded memory access, use 'var &= const' or 'if (var < const)'") +__naked void load_bytes_invalid_access_4(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -1; \ + r3 = r6; \ + r4 = 0x7fffffff; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, invalid access 5") +__failure +__msg("R4 unbounded memory access, use 'var &= const' or 'if (var < const)'") +__naked void load_bytes_invalid_access_5(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -512; \ + r3 = r6; \ + r4 = 0x7fffffff; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, invalid access 6") +__failure __msg("invalid zero-sized 
read") +__naked void load_bytes_invalid_access_6(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -512; \ + r3 = r6; \ + r4 = 0; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +SEC("tc") +__description("raw_stack: skb_load_bytes, large access") +__success __retval(0) +__naked void skb_load_bytes_large_access(void) +{ + asm volatile (" \ + r2 = 4; \ + r6 = r10; \ + r6 += -512; \ + r3 = r6; \ + r4 = 512; \ + call %[bpf_skb_load_bytes]; \ + r0 = *(u64*)(r6 + 0); \ + exit; \ +" : + : __imm(bpf_skb_load_bytes) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_raw_tp_writable.c b/tools/testing/selftests/bpf/progs/verifier_raw_tp_writable.c new file mode 100644 index 000000000000..14a0172e2141 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_raw_tp_writable.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/raw_tp_writable.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("raw_tracepoint.w") +__description("raw_tracepoint_writable: reject variable offset") +__failure +__msg("R6 invalid variable buffer offset: off=0, var_off=(0x0; 0xffffffff)") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void tracepoint_writable_reject_variable_offset(void) +{ + asm volatile (" \ + /* r6 is our tp buffer */ \ + r6 = *(u64*)(r1 + 0); \ + r1 = %[map_hash_8b] ll; \ + /* move the key (== 0) to r10-8 */ \ + w0 = 0; \ + r2 = r10; \ + r2 += -8; \ + *(u64*)(r2 + 0) = r0; \ + /* lookup in the map */ \ + call %[bpf_map_lookup_elem]; \ + /* exit clean if null */ \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: /* shift the buffer pointer to a variable location */\ + r0 = *(u32*)(r0 + 0); \ + r6 += r0; \ + /* clobber whatever's there */ \ + r7 = 4242; \ + *(u64*)(r6 + 0) = r7; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_ringbuf.c b/tools/testing/selftests/bpf/progs/verifier_ringbuf.c new file mode 100644 index 000000000000..ae1d521f326c --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_ringbuf.c @@ -0,0 +1,131 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/ringbuf.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); + __uint(max_entries, 4096); +} map_ringbuf SEC(".maps"); + +SEC("socket") +__description("ringbuf: invalid reservation offset 1") +__failure __msg("R1 must have zero offset when passed to release func") +__failure_unpriv +__naked void ringbuf_invalid_reservation_offset_1(void) +{ + asm volatile (" \ + /* reserve 8 byte ringbuf memory */ \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r1 = %[map_ringbuf] ll; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_ringbuf_reserve]; \ + /* store a pointer to the reserved memory in R6 */\ + r6 = r0; \ + /* check whether the reservation was successful */\ + if r0 == 0 goto l0_%=; \ + /* spill R6(mem) into the stack */ \ + *(u64*)(r10 - 8) = r6; \ + /* fill it back in R7 */ \ + r7 = *(u64*)(r10 - 8); \ + /* should be able to access *(R7) = 0 */ \ + r1 = 0; \ + 
*(u64*)(r7 + 0) = r1; \ + /* submit the reserved ringbuf memory */ \ + r1 = r7; \ + /* add invalid offset to reserved ringbuf memory */\ + r1 += 0xcafe; \ + r2 = 0; \ + call %[bpf_ringbuf_submit]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ringbuf_reserve), + __imm(bpf_ringbuf_submit), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +SEC("socket") +__description("ringbuf: invalid reservation offset 2") +__failure __msg("R7 min value is outside of the allowed memory range") +__failure_unpriv +__naked void ringbuf_invalid_reservation_offset_2(void) +{ + asm volatile (" \ + /* reserve 8 byte ringbuf memory */ \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r1 = %[map_ringbuf] ll; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_ringbuf_reserve]; \ + /* store a pointer to the reserved memory in R6 */\ + r6 = r0; \ + /* check whether the reservation was successful */\ + if r0 == 0 goto l0_%=; \ + /* spill R6(mem) into the stack */ \ + *(u64*)(r10 - 8) = r6; \ + /* fill it back in R7 */ \ + r7 = *(u64*)(r10 - 8); \ + /* add invalid offset to reserved ringbuf memory */\ + r7 += 0xcafe; \ + /* should be able to access *(R7) = 0 */ \ + r1 = 0; \ + *(u64*)(r7 + 0) = r1; \ + /* submit the reserved ringbuf memory */ \ + r1 = r7; \ + r2 = 0; \ + call %[bpf_ringbuf_submit]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ringbuf_reserve), + __imm(bpf_ringbuf_submit), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +SEC("xdp") +__description("ringbuf: check passing rb mem to helpers") +__success __retval(0) +__naked void passing_rb_mem_to_helpers(void) +{ + asm volatile (" \ + r6 = r1; \ + /* reserve 8 byte ringbuf memory */ \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r1 = %[map_ringbuf] ll; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_ringbuf_reserve]; \ + r7 = r0; \ + /* check whether the reservation was successful */\ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: /* pass allocated ring buffer memory to fib lookup */\ + r1 = r6; \ + r2 = r0; \ + r3 = 8; \ + r4 = 0; \ + call %[bpf_fib_lookup]; \ + /* submit the ringbuf memory */ \ + r1 = r7; \ + r2 = 0; \ + call %[bpf_ringbuf_submit]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_fib_lookup), + __imm(bpf_ringbuf_reserve), + __imm(bpf_ringbuf_submit), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c new file mode 100644 index 000000000000..136e5530b72c --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c @@ -0,0 +1,374 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/spill_fill.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_RINGBUF); + __uint(max_entries, 4096); +} map_ringbuf SEC(".maps"); + +SEC("socket") +__description("check valid spill/fill") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(POINTER_VALUE) +__naked void check_valid_spill_fill(void) +{ + asm volatile (" \ + /* spill R1(ctx) into stack */ \ + *(u64*)(r10 - 8) = r1; \ + /* fill it back into R2 */ \ + r2 = *(u64*)(r10 - 8); \ + /* should be able to access R0 = *(R2 + 8) */ \ + /* BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8), */\ + r0 = r2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check valid spill/fill, skb mark") +__success __success_unpriv __retval(0) +__naked void valid_spill_fill_skb_mark(void) +{ + asm volatile (" \ + r6 = r1; \ + 
*(u64*)(r10 - 8) = r6; \ + r0 = *(u64*)(r10 - 8); \ + r0 = *(u32*)(r0 + %[__sk_buff_mark]); \ + exit; \ +" : + : __imm_const(__sk_buff_mark, offsetof(struct __sk_buff, mark)) + : __clobber_all); +} + +SEC("socket") +__description("check valid spill/fill, ptr to mem") +__success __success_unpriv __retval(0) +__naked void spill_fill_ptr_to_mem(void) +{ + asm volatile (" \ + /* reserve 8 byte ringbuf memory */ \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r1 = %[map_ringbuf] ll; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_ringbuf_reserve]; \ + /* store a pointer to the reserved memory in R6 */\ + r6 = r0; \ + /* check whether the reservation was successful */\ + if r0 == 0 goto l0_%=; \ + /* spill R6(mem) into the stack */ \ + *(u64*)(r10 - 8) = r6; \ + /* fill it back in R7 */ \ + r7 = *(u64*)(r10 - 8); \ + /* should be able to access *(R7) = 0 */ \ + r1 = 0; \ + *(u64*)(r7 + 0) = r1; \ + /* submit the reserved ringbuf memory */ \ + r1 = r7; \ + r2 = 0; \ + call %[bpf_ringbuf_submit]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ringbuf_reserve), + __imm(bpf_ringbuf_submit), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +SEC("socket") +__description("check with invalid reg offset 0") +__failure __msg("R0 pointer arithmetic on ringbuf_mem_or_null prohibited") +__failure_unpriv +__naked void with_invalid_reg_offset_0(void) +{ + asm volatile (" \ + /* reserve 8 byte ringbuf memory */ \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r1 = %[map_ringbuf] ll; \ + r2 = 8; \ + r3 = 0; \ + call %[bpf_ringbuf_reserve]; \ + /* store a pointer to the reserved memory in R6 */\ + r6 = r0; \ + /* add invalid offset to memory or NULL */ \ + r0 += 1; \ + /* check whether the reservation was successful */\ + if r0 == 0 goto l0_%=; \ + /* should not be able to access *(R7) = 0 */ \ + r1 = 0; \ + *(u32*)(r6 + 0) = r1; \ + /* submit the reserved ringbuf memory */ \ + r1 = r6; \ + r2 = 0; \ + call %[bpf_ringbuf_submit]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_ringbuf_reserve), + __imm(bpf_ringbuf_submit), + __imm_addr(map_ringbuf) + : __clobber_all); +} + +SEC("socket") +__description("check corrupted spill/fill") +__failure __msg("R0 invalid mem access 'scalar'") +__msg_unpriv("attempt to corrupt spilled") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void check_corrupted_spill_fill(void) +{ + asm volatile (" \ + /* spill R1(ctx) into stack */ \ + *(u64*)(r10 - 8) = r1; \ + /* mess up with R1 pointer on stack */ \ + r0 = 0x23; \ + *(u8*)(r10 - 7) = r0; \ + /* fill back into R0 is fine for priv. \ + * R0 now becomes SCALAR_VALUE. \ + */ \ + r0 = *(u64*)(r10 - 8); \ + /* Load from R0 should fail. */ \ + r0 = *(u64*)(r0 + 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check corrupted spill/fill, LSB") +__success __failure_unpriv __msg_unpriv("attempt to corrupt spilled") +__retval(POINTER_VALUE) +__naked void check_corrupted_spill_fill_lsb(void) +{ + asm volatile (" \ + *(u64*)(r10 - 8) = r1; \ + r0 = 0xcafe; \ + *(u16*)(r10 - 8) = r0; \ + r0 = *(u64*)(r10 - 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("check corrupted spill/fill, MSB") +__success __failure_unpriv __msg_unpriv("attempt to corrupt spilled") +__retval(POINTER_VALUE) +__naked void check_corrupted_spill_fill_msb(void) +{ + asm volatile (" \ + *(u64*)(r10 - 8) = r1; \ + r0 = 0x12345678; \ + *(u32*)(r10 - 4) = r0; \ + r0 = *(u64*)(r10 - 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("Spill and refill a u32 const scalar. 
Offset to skb->data") +__success __retval(0) +__naked void scalar_offset_to_skb_data_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + w4 = 20; \ + *(u32*)(r10 - 8) = r4; \ + r4 = *(u32*)(r10 - 8); \ + r0 = r2; \ + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */ \ + r0 += r4; \ + /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */\ + if r0 > r3 goto l0_%=; \ + /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=20 */\ + r0 = *(u32*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("socket") +__description("Spill a u32 const, refill from another half of the uninit u32 from the stack") +/* in privileged mode reads from uninitialized stack locations are permitted */ +__success __failure_unpriv +__msg_unpriv("invalid read from stack off -4+0 size 4") +__retval(0) +__naked void uninit_u32_from_the_stack(void) +{ + asm volatile (" \ + w4 = 20; \ + *(u32*)(r10 - 8) = r4; \ + /* r4 = *(u32 *)(r10 -4) fp-8=????rrrr*/ \ + r4 = *(u32*)(r10 - 4); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("Spill a u32 const scalar. Refill as u16. Offset to skb->data") +__failure __msg("invalid access to packet") +__naked void u16_offset_to_skb_data(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + w4 = 20; \ + *(u32*)(r10 - 8) = r4; \ + r4 = *(u16*)(r10 - 8); \ + r0 = r2; \ + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\ + r0 += r4; \ + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\ + if r0 > r3 goto l0_%=; \ + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\ + r0 = *(u32*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("Spill u32 const scalars. Refill as u64. Offset to skb->data") +__failure __msg("invalid access to packet") +__naked void u64_offset_to_skb_data(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + w6 = 0; \ + w7 = 20; \ + *(u32*)(r10 - 4) = r6; \ + *(u32*)(r10 - 8) = r7; \ + r4 = *(u16*)(r10 - 8); \ + r0 = r2; \ + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\ + r0 += r4; \ + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\ + if r0 > r3 goto l0_%=; \ + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\ + r0 = *(u32*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("Spill a u32 const scalar. Refill as u16 from fp-6. 
Offset to skb->data") +__failure __msg("invalid access to packet") +__naked void _6_offset_to_skb_data(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + w4 = 20; \ + *(u32*)(r10 - 8) = r4; \ + r4 = *(u16*)(r10 - 6); \ + r0 = r2; \ + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */\ + r0 += r4; \ + /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */\ + if r0 > r3 goto l0_%=; \ + /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */\ + r0 = *(u32*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("Spill and refill a u32 const scalar at non 8byte aligned stack addr. Offset to skb->data") +__failure __msg("invalid access to packet") +__naked void addr_offset_to_skb_data(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + w4 = 20; \ + *(u32*)(r10 - 8) = r4; \ + *(u32*)(r10 - 4) = r4; \ + r4 = *(u32*)(r10 - 4); \ + r0 = r2; \ + /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=U32_MAX */\ + r0 += r4; \ + /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */\ + if r0 > r3 goto l0_%=; \ + /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */\ + r0 = *(u32*)(r2 + 0); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("Spill and refill a umax=40 bounded scalar. Offset to skb->data") +__success __retval(0) +__naked void scalar_offset_to_skb_data_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[__sk_buff_data]); \ + r3 = *(u32*)(r1 + %[__sk_buff_data_end]); \ + r4 = *(u64*)(r1 + %[__sk_buff_tstamp]); \ + if r4 <= 40 goto l0_%=; \ + r0 = 0; \ + exit; \ +l0_%=: /* *(u32 *)(r10 -8) = r4 R4=umax=40 */ \ + *(u32*)(r10 - 8) = r4; \ + /* r4 = (*u32 *)(r10 - 8) */ \ + r4 = *(u32*)(r10 - 8); \ + /* r2 += r4 R2=pkt R4=umax=40 */ \ + r2 += r4; \ + /* r0 = r2 R2=pkt,umax=40 R4=umax=40 */ \ + r0 = r2; \ + /* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */ \ + r2 += 20; \ + /* if (r2 > r3) R0=pkt,umax=40 R2=pkt,off=20,umax=40 */\ + if r2 > r3 goto l1_%=; \ + /* r0 = *(u32 *)r0 R0=pkt,r=20,umax=40 R2=pkt,off=20,r=20,umax=40 */\ + r0 = *(u32*)(r0 + 0); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)), + __imm_const(__sk_buff_tstamp, offsetof(struct __sk_buff, tstamp)) + : __clobber_all); +} + +SEC("tc") +__description("Spill a u32 scalar at fp-4 and then at fp-8") +__success __retval(0) +__naked void and_then_at_fp_8(void) +{ + asm volatile (" \ + w4 = 4321; \ + *(u32*)(r10 - 4) = r4; \ + *(u32*)(r10 - 8) = r4; \ + r4 = *(u64*)(r10 - 8); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c new file mode 100644 index 000000000000..e0f77e3e7869 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_stack_ptr.c @@ -0,0 +1,484 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/stack_ptr.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include 
<limits.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct test_val); +} map_array_48b SEC(".maps"); + +SEC("socket") +__description("PTR_TO_STACK store/load") +__success __success_unpriv __retval(0xfaceb00c) +__naked void ptr_to_stack_store_load(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -10; \ + r0 = 0xfaceb00c; \ + *(u64*)(r1 + 2) = r0; \ + r0 = *(u64*)(r1 + 2); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK store/load - bad alignment on off") +__failure __msg("misaligned stack access off (0x0; 0x0)+-8+2 size 8") +__failure_unpriv +__naked void load_bad_alignment_on_off(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -8; \ + r0 = 0xfaceb00c; \ + *(u64*)(r1 + 2) = r0; \ + r0 = *(u64*)(r1 + 2); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK store/load - bad alignment on reg") +__failure __msg("misaligned stack access off (0x0; 0x0)+-10+8 size 8") +__failure_unpriv +__naked void load_bad_alignment_on_reg(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -10; \ + r0 = 0xfaceb00c; \ + *(u64*)(r1 + 8) = r0; \ + r0 = *(u64*)(r1 + 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK store/load - out of bounds low") +__failure __msg("invalid write to stack R1 off=-79992 size=8") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void load_out_of_bounds_low(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -80000; \ + r0 = 0xfaceb00c; \ + *(u64*)(r1 + 8) = r0; \ + r0 = *(u64*)(r1 + 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK store/load - out of bounds high") +__failure __msg("invalid write to stack R1 off=0 size=8") +__failure_unpriv +__naked void load_out_of_bounds_high(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -8; \ + r0 = 0xfaceb00c; \ + *(u64*)(r1 + 8) = r0; \ + r0 = *(u64*)(r1 + 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check high 1") +__success __success_unpriv __retval(42) +__naked void to_stack_check_high_1(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -1; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check high 2") +__success __success_unpriv __retval(42) +__naked void to_stack_check_high_2(void) +{ + asm volatile (" \ + r1 = r10; \ + r0 = 42; \ + *(u8*)(r1 - 1) = r0; \ + r0 = *(u8*)(r1 - 1); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check high 3") +__success __failure_unpriv +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__retval(42) +__naked void to_stack_check_high_3(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += 0; \ + r0 = 42; \ + *(u8*)(r1 - 1) = r0; \ + r0 = *(u8*)(r1 - 1); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check high 4") +__failure __msg("invalid write to stack R1 off=0 size=1") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_high_4(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += 0; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check high 5") +__failure __msg("invalid write to stack R1") 
+__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_high_5(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += %[__imm_0]; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" : + : __imm_const(__imm_0, (1 << 29) - 1) + : __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check high 6") +__failure __msg("invalid write to stack") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_high_6(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += %[__imm_0]; \ + r0 = 42; \ + *(u8*)(r1 + %[shrt_max]) = r0; \ + r0 = *(u8*)(r1 + %[shrt_max]); \ + exit; \ +" : + : __imm_const(__imm_0, (1 << 29) - 1), + __imm_const(shrt_max, SHRT_MAX) + : __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check high 7") +__failure __msg("fp pointer offset") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_high_7(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += %[__imm_0]; \ + r1 += %[__imm_0]; \ + r0 = 42; \ + *(u8*)(r1 + %[shrt_max]) = r0; \ + r0 = *(u8*)(r1 + %[shrt_max]); \ + exit; \ +" : + : __imm_const(__imm_0, (1 << 29) - 1), + __imm_const(shrt_max, SHRT_MAX) + : __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check low 1") +__success __success_unpriv __retval(42) +__naked void to_stack_check_low_1(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -512; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check low 2") +__success __failure_unpriv +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__retval(42) +__naked void to_stack_check_low_2(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -513; \ + r0 = 42; \ + *(u8*)(r1 + 1) = r0; \ + r0 = *(u8*)(r1 + 1); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check low 3") +__failure __msg("invalid write to stack R1 off=-513 size=1") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_low_3(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -513; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check low 4") +__failure __msg("math between fp pointer") +__failure_unpriv +__naked void to_stack_check_low_4(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += %[int_min]; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" : + : __imm_const(int_min, INT_MIN) + : __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check low 5") +__failure __msg("invalid write to stack") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_low_5(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += %[__imm_0]; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" : + : __imm_const(__imm_0, -((1 << 29) - 1)) + : __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK check low 6") +__failure __msg("invalid write to stack") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_low_6(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += %[__imm_0]; \ + r0 = 42; \ + *(u8*)(r1 %[shrt_min]) = r0; \ + r0 = *(u8*)(r1 %[shrt_min]); \ + exit; \ +" : + : __imm_const(__imm_0, -((1 << 29) - 1)), + __imm_const(shrt_min, SHRT_MIN) + : __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK 
check low 7") +__failure __msg("fp pointer offset") +__msg_unpriv("R1 stack pointer arithmetic goes out of range") +__naked void to_stack_check_low_7(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += %[__imm_0]; \ + r1 += %[__imm_0]; \ + r0 = 42; \ + *(u8*)(r1 %[shrt_min]) = r0; \ + r0 = *(u8*)(r1 %[shrt_min]); \ + exit; \ +" : + : __imm_const(__imm_0, -((1 << 29) - 1)), + __imm_const(shrt_min, SHRT_MIN) + : __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK mixed reg/k, 1") +__success __success_unpriv __retval(42) +__naked void stack_mixed_reg_k_1(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -3; \ + r2 = -3; \ + r1 += r2; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK mixed reg/k, 2") +__success __success_unpriv __retval(42) +__naked void stack_mixed_reg_k_2(void) +{ + asm volatile (" \ + r0 = 0; \ + *(u64*)(r10 - 8) = r0; \ + r0 = 0; \ + *(u64*)(r10 - 16) = r0; \ + r1 = r10; \ + r1 += -3; \ + r2 = -3; \ + r1 += r2; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r5 = r10; \ + r0 = *(u8*)(r5 - 6); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK mixed reg/k, 3") +__success __success_unpriv __retval(-3) +__naked void stack_mixed_reg_k_3(void) +{ + asm volatile (" \ + r1 = r10; \ + r1 += -3; \ + r2 = -3; \ + r1 += r2; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = r2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("PTR_TO_STACK reg") +__success __success_unpriv __retval(42) +__naked void ptr_to_stack_reg(void) +{ + asm volatile (" \ + r1 = r10; \ + r2 = -3; \ + r1 += r2; \ + r0 = 42; \ + *(u8*)(r1 + 0) = r0; \ + r0 = *(u8*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("stack pointer arithmetic") +__success __success_unpriv __retval(0) +__naked void stack_pointer_arithmetic(void) +{ + asm volatile (" \ + r1 = 4; \ + goto l0_%=; \ +l0_%=: r7 = r10; \ + r7 += -10; \ + r7 += -10; \ + r2 = r7; \ + r2 += r1; \ + r0 = 0; \ + *(u32*)(r2 + 4) = r0; \ + r2 = r7; \ + r2 += 8; \ + r0 = 0; \ + *(u32*)(r2 + 4) = r0; \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("store PTR_TO_STACK in R10 to array map using BPF_B") +__success __retval(42) +__naked void array_map_using_bpf_b(void) +{ + asm volatile (" \ + /* Load pointer to map. */ \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_array_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + r0 = 2; \ + exit; \ +l0_%=: r1 = r0; \ + /* Copy R10 to R9. */ \ + r9 = r10; \ + /* Pollute other registers with unaligned values. */\ + r2 = -1; \ + r3 = -1; \ + r4 = -1; \ + r5 = -1; \ + r6 = -1; \ + r7 = -1; \ + r8 = -1; \ + /* Store both R9 and R10 with BPF_B and read back. */\ + *(u8*)(r1 + 0) = r10; \ + r2 = *(u8*)(r1 + 0); \ + *(u8*)(r1 + 0) = r9; \ + r3 = *(u8*)(r1 + 0); \ + /* Should read back as same value. 
*/ \ + if r2 == r3 goto l1_%=; \ + r0 = 1; \ + exit; \ +l1_%=: r0 = 42; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_array_48b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_uninit.c b/tools/testing/selftests/bpf/progs/verifier_uninit.c new file mode 100644 index 000000000000..7718cd7d19ce --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_uninit.c @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/uninit.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +SEC("socket") +__description("read uninitialized register") +__failure __msg("R2 !read_ok") +__failure_unpriv +__naked void read_uninitialized_register(void) +{ + asm volatile (" \ + r0 = r2; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("read invalid register") +__failure __msg("R15 is invalid") +__failure_unpriv +__naked void read_invalid_register(void) +{ + asm volatile (" \ + .8byte %[mov64_reg]; \ + exit; \ +" : + : __imm_insn(mov64_reg, BPF_MOV64_REG(BPF_REG_0, -1)) + : __clobber_all); +} + +SEC("socket") +__description("program doesn't init R0 before exit") +__failure __msg("R0 !read_ok") +__failure_unpriv +__naked void t_init_r0_before_exit(void) +{ + asm volatile (" \ + r2 = r1; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("program doesn't init R0 before exit in all branches") +__failure __msg("R0 !read_ok") +__msg_unpriv("R1 pointer comparison") +__naked void before_exit_in_all_branches(void) +{ + asm volatile (" \ + if r1 >= 0 goto l0_%=; \ + r0 = 1; \ + r0 += 2; \ +l0_%=: exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_value.c b/tools/testing/selftests/bpf/progs/verifier_value.c new file mode 100644 index 000000000000..b5af6b6f5acd --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_value.c @@ -0,0 +1,158 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/value.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("socket") +__description("map element value store of cleared call register") +__failure __msg("R1 !read_ok") +__failure_unpriv __msg_unpriv("R1 !read_ok") +__naked void store_of_cleared_call_register(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("socket") +__description("map element value with unaligned store") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void element_value_with_unaligned_store(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 += 3; \ + r1 = 42; \ + *(u64*)(r0 + 0) = r1; \ + r1 = 43; \ + *(u64*)(r0 + 2) = r1; \ + 
r1 = 44; \ + *(u64*)(r0 - 2) = r1; \ + r8 = r0; \ + r1 = 32; \ + *(u64*)(r8 + 0) = r1; \ + r1 = 33; \ + *(u64*)(r8 + 2) = r1; \ + r1 = 34; \ + *(u64*)(r8 - 2) = r1; \ + r8 += 5; \ + r1 = 22; \ + *(u64*)(r8 + 0) = r1; \ + r1 = 23; \ + *(u64*)(r8 + 4) = r1; \ + r1 = 24; \ + *(u64*)(r8 - 7) = r1; \ + r7 = r8; \ + r7 += 3; \ + r1 = 22; \ + *(u64*)(r7 + 0) = r1; \ + r1 = 23; \ + *(u64*)(r7 + 4) = r1; \ + r1 = 24; \ + *(u64*)(r7 - 4) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +SEC("socket") +__description("map element value with unaligned load") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void element_value_with_unaligned_load(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + if r1 >= %[max_entries] goto l0_%=; \ + r0 += 3; \ + r7 = *(u64*)(r0 + 0); \ + r7 = *(u64*)(r0 + 2); \ + r8 = r0; \ + r7 = *(u64*)(r8 + 0); \ + r7 = *(u64*)(r8 + 2); \ + r0 += 5; \ + r7 = *(u64*)(r0 + 0); \ + r7 = *(u64*)(r0 + 4); \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(max_entries, MAX_ENTRIES) + : __clobber_all); +} + +SEC("socket") +__description("map element value is preserved across register spilling") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void is_preserved_across_register_spilling(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r0 += %[test_val_foo]; \ + r1 = 42; \ + *(u64*)(r0 + 0) = r1; \ + r1 = r10; \ + r1 += -184; \ + *(u64*)(r1 + 0) = r0; \ + r3 = *(u64*)(r1 + 0); \ + r1 = 42; \ + *(u64*)(r3 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_value_adj_spill.c b/tools/testing/selftests/bpf/progs/verifier_value_adj_spill.c new file mode 100644 index 000000000000..d7a5ba9bbe6a --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_value_adj_spill.c @@ -0,0 +1,78 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/value_adj_spill.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +SEC("socket") +__description("map element value is preserved across register spilling") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) +__naked void is_preserved_across_register_spilling(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 42; \ + *(u64*)(r0 + 0) = r1; \ + r1 = r10; \ + r1 += -184; \ + *(u64*)(r1 + 0) = r0; \ + r3 = *(u64*)(r1 + 0); \ + r1 = 42; \ + *(u64*)(r3 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + 
: __clobber_all); +} + +SEC("socket") +__description("map element value or null is marked on register spilling") +__success __failure_unpriv __msg_unpriv("R0 leaks addr") +__retval(0) +__naked void is_marked_on_register_spilling(void) +{ + asm volatile (" \ + r2 = r10; \ + r2 += -8; \ + r1 = 0; \ + *(u64*)(r2 + 0) = r1; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + r1 = r10; \ + r1 += -152; \ + *(u64*)(r1 + 0) = r0; \ + if r0 == 0 goto l0_%=; \ + r3 = *(u64*)(r1 + 0); \ + r1 = 42; \ + *(u64*)(r3 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_value_or_null.c b/tools/testing/selftests/bpf/progs/verifier_value_or_null.c new file mode 100644 index 000000000000..8ff668a242eb --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_value_or_null.c @@ -0,0 +1,288 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/value_or_null.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +#define MAX_ENTRIES 11 + +struct test_val { + unsigned int index; + int foo[MAX_ENTRIES]; +}; + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, struct test_val); +} map_hash_48b SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("tc") +__description("multiple registers share map_lookup_elem result") +__success __retval(0) +__naked void share_map_lookup_elem_result(void) +{ + asm volatile (" \ + r1 = 10; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r4 = r0; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u64*)(r4 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("alu ops on ptr_to_map_value_or_null, 1") +__failure __msg("R4 pointer arithmetic on map_value_or_null") +__naked void map_value_or_null_1(void) +{ + asm volatile (" \ + r1 = 10; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r4 = r0; \ + r4 += -2; \ + r4 += 2; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u64*)(r4 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("alu ops on ptr_to_map_value_or_null, 2") +__failure __msg("R4 pointer arithmetic on map_value_or_null") +__naked void map_value_or_null_2(void) +{ + asm volatile (" \ + r1 = 10; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r4 = r0; \ + r4 &= -1; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u64*)(r4 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("alu ops on ptr_to_map_value_or_null, 3") +__failure __msg("R4 pointer arithmetic on map_value_or_null") +__naked void map_value_or_null_3(void) +{ + asm volatile (" \ + r1 = 10; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r4 = r0; \ + r4 <<= 1; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u64*)(r4 + 0) = r1; \ +l0_%=: exit; \ +" : + : 
__imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("invalid memory access with multiple map_lookup_elem calls") +__failure __msg("R4 !read_ok") +__naked void multiple_map_lookup_elem_calls(void) +{ + asm volatile (" \ + r1 = 10; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + r8 = r1; \ + r7 = r2; \ + call %[bpf_map_lookup_elem]; \ + r4 = r0; \ + r1 = r8; \ + r2 = r7; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = 0; \ + *(u64*)(r4 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("tc") +__description("valid indirect map_lookup_elem access with 2nd lookup in branch") +__success __retval(0) +__naked void with_2nd_lookup_in_branch(void) +{ + asm volatile (" \ + r1 = 10; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + r8 = r1; \ + r7 = r2; \ + call %[bpf_map_lookup_elem]; \ + r2 = 10; \ + if r2 != 0 goto l0_%=; \ + r1 = r8; \ + r2 = r7; \ + call %[bpf_map_lookup_elem]; \ +l0_%=: r4 = r0; \ + if r0 == 0 goto l1_%=; \ + r1 = 0; \ + *(u64*)(r4 + 0) = r1; \ +l1_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("socket") +__description("invalid map access from else condition") +__failure __msg("R0 unbounded memory access") +__failure_unpriv __msg_unpriv("R0 leaks addr") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void map_access_from_else_condition(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_48b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 == 0 goto l0_%=; \ + r1 = *(u32*)(r0 + 0); \ + if r1 >= %[__imm_0] goto l1_%=; \ + r1 += 1; \ +l1_%=: r1 <<= 2; \ + r0 += r1; \ + r1 = %[test_val_foo]; \ + *(u64*)(r0 + 0) = r1; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_48b), + __imm_const(__imm_0, MAX_ENTRIES-1), + __imm_const(test_val_foo, offsetof(struct test_val, foo)) + : __clobber_all); +} + +SEC("tc") +__description("map lookup and null branch prediction") +__success __retval(0) +__naked void lookup_and_null_branch_prediction(void) +{ + asm volatile (" \ + r1 = 10; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r6 = r0; \ + if r6 == 0 goto l0_%=; \ + if r6 != 0 goto l0_%=; \ + r10 += 10; \ +l0_%=: exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("MAP_VALUE_OR_NULL check_ids() in regsafe()") +__failure __msg("R8 invalid mem access 'map_value_or_null'") +__failure_unpriv __msg_unpriv("") +__flag(BPF_F_TEST_STATE_FREQ) +__naked void null_check_ids_in_regsafe(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + /* r9 = map_lookup_elem(...) */ \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r9 = r0; \ + /* r8 = map_lookup_elem(...) 
*/ \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r8 = r0; \ + /* r7 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + r7 = r0; \ + /* r6 = ktime_get_ns() */ \ + call %[bpf_ktime_get_ns]; \ + r6 = r0; \ + /* if r6 > r7 goto +1 ; no new information about the state is derived from\ + * ; this check, thus produced verifier states differ\ + * ; only in 'insn_idx' \ + * r9 = r8 ; optionally share ID between r9 and r8\ + */ \ + if r6 > r7 goto l0_%=; \ + r9 = r8; \ +l0_%=: /* if r9 == 0 goto <exit> */ \ + if r9 == 0 goto l1_%=; \ + /* read map value via r8, this is not always \ + * safe because r8 might be not equal to r9. \ + */ \ + r0 = *(u64*)(r8 + 0); \ +l1_%=: /* exit 0 */ \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_ktime_get_ns), + __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_var_off.c b/tools/testing/selftests/bpf/progs/verifier_var_off.c new file mode 100644 index 000000000000..83a90afba785 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_var_off.c @@ -0,0 +1,349 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/var_off.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("lwt_in") +__description("variable-offset ctx access") +__failure __msg("variable ctx access var_off=(0x0; 0x4)") +__naked void variable_offset_ctx_access(void) +{ + asm volatile (" \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned */ \ + r2 &= 4; \ + /* add it to skb. We now have either &skb->len or\ + * &skb->pkt_type, but we don't know which \ + */ \ + r1 += r2; \ + /* dereference it */ \ + r0 = *(u32*)(r1 + 0); \ + exit; \ +" ::: __clobber_all); +} + +SEC("cgroup/skb") +__description("variable-offset stack read, priv vs unpriv") +__success __failure_unpriv +__msg_unpriv("R2 variable stack access prohibited for !root") +__retval(0) +__naked void stack_read_priv_vs_unpriv(void) +{ + asm volatile (" \ + /* Fill the top 8 bytes of the stack */ \ + r0 = 0; \ + *(u64*)(r10 - 8) = r0; \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned */ \ + r2 &= 4; \ + r2 -= 8; \ + /* add it to fp. We now have either fp-4 or fp-8, but\ + * we don't know which \ + */ \ + r2 += r10; \ + /* dereference it for a stack read */ \ + r0 = *(u32*)(r2 + 0); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("lwt_in") +__description("variable-offset stack read, uninitialized") +__failure __msg("invalid variable-offset read from stack R2") +__naked void variable_offset_stack_read_uninitialized(void) +{ + asm volatile (" \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned */ \ + r2 &= 4; \ + r2 -= 8; \ + /* add it to fp. We now have either fp-4 or fp-8, but\ + * we don't know which \ + */ \ + r2 += r10; \ + /* dereference it for a stack read */ \ + r0 = *(u32*)(r2 + 0); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("variable-offset stack write, priv vs unpriv") +__success __failure_unpriv +/* Variable stack access is rejected for unprivileged. 
+ */ +__msg_unpriv("R2 variable stack access prohibited for !root") +__retval(0) +__naked void stack_write_priv_vs_unpriv(void) +{ + asm volatile (" \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 8-byte aligned */ \ + r2 &= 8; \ + r2 -= 16; \ + /* Add it to fp. We now have either fp-8 or fp-16, but\ + * we don't know which \ + */ \ + r2 += r10; \ + /* Dereference it for a stack write */ \ + r0 = 0; \ + *(u64*)(r2 + 0) = r0; \ + /* Now read from the address we just wrote. This shows\ + * that, after a variable-offset write, a privileged\ + * program can read the slots that were in the range of\ + * that write (even if the verifier doesn't actually know\ + * if the slot being read was really written to or not).\ + */ \ + r3 = *(u64*)(r2 + 0); \ + r0 = 0; \ + exit; \ +" ::: __clobber_all); +} + +SEC("socket") +__description("variable-offset stack write clobbers spilled regs") +__failure +/* In the privileged case, dereferencing a spilled-and-then-filled + * register is rejected because the previous variable offset stack + * write might have overwritten the spilled pointer (i.e. we lose track + * of the spilled register when we analyze the write). + */ +__msg("R2 invalid mem access 'scalar'") +__failure_unpriv +/* The unprivileged case is not too interesting; variable + * stack access is rejected. + */ +__msg_unpriv("R2 variable stack access prohibited for !root") +__naked void stack_write_clobbers_spilled_regs(void) +{ + asm volatile (" \ + /* Dummy instruction; needed because we need to patch the next one\ + * and we can't patch the first instruction. \ + */ \ + r6 = 0; \ + /* Make R0 a map ptr */ \ + r0 = %[map_hash_8b] ll; \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 8-byte aligned */ \ + r2 &= 8; \ + r2 -= 16; \ + /* Add it to fp. We now have either fp-8 or fp-16, but\ + * we don't know which. \ + */ \ + r2 += r10; \ + /* Spill R0(map ptr) into stack */ \ + *(u64*)(r10 - 8) = r0; \ + /* Dereference the unknown value for a stack write */\ + r0 = 0; \ + *(u64*)(r2 + 0) = r0; \ + /* Fill the register back into R2 */ \ + r2 = *(u64*)(r10 - 8); \ + /* Try to dereference R2 for a memory load */ \ + r0 = *(u64*)(r2 + 8); \ + exit; \ +" : + : __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("sockops") +__description("indirect variable-offset stack access, unbounded") +__failure __msg("invalid unbounded variable-offset indirect access to stack R4") +__naked void variable_offset_stack_access_unbounded(void) +{ + asm volatile (" \ + r2 = 6; \ + r3 = 28; \ + /* Fill the top 16 bytes of the stack. */ \ + r4 = 0; \ + *(u64*)(r10 - 16) = r4; \ + r4 = 0; \ + *(u64*)(r10 - 8) = r4; \ + /* Get an unknown value. */ \ + r4 = *(u64*)(r1 + %[bpf_sock_ops_bytes_received]);\ + /* Check the lower bound but don't check the upper one. */\ + if r4 s< 0 goto l0_%=; \ + /* Point the lower bound to initialized stack. Offset is now in range\ + * from fp-16 to fp+0x7fffffffffffffef, i.e. max value is unbounded.\ + */ \ + r4 -= 16; \ + r4 += r10; \ + r5 = 8; \ + /* Dereference it indirectly. 
*/ \ + call %[bpf_getsockopt]; \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm(bpf_getsockopt), + __imm_const(bpf_sock_ops_bytes_received, offsetof(struct bpf_sock_ops, bytes_received)) + : __clobber_all); +} + +SEC("lwt_in") +__description("indirect variable-offset stack access, max out of bound") +__failure __msg("invalid variable-offset indirect access to stack R2") +__naked void access_max_out_of_bound(void) +{ + asm volatile (" \ + /* Fill the top 8 bytes of the stack */ \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned */ \ + r2 &= 4; \ + r2 -= 8; \ + /* add it to fp. We now have either fp-4 or fp-8, but\ + * we don't know which \ + */ \ + r2 += r10; \ + /* dereference it indirectly */ \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("lwt_in") +__description("indirect variable-offset stack access, min out of bound") +__failure __msg("invalid variable-offset indirect access to stack R2") +__naked void access_min_out_of_bound(void) +{ + asm volatile (" \ + /* Fill the top 8 bytes of the stack */ \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned */ \ + r2 &= 4; \ + r2 -= 516; \ + /* add it to fp. We now have either fp-516 or fp-512, but\ + * we don't know which \ + */ \ + r2 += r10; \ + /* dereference it indirectly */ \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("lwt_in") +__description("indirect variable-offset stack access, min_off < min_initialized") +__failure __msg("invalid indirect read from stack R2 var_off") +__naked void access_min_off_min_initialized(void) +{ + asm volatile (" \ + /* Fill only the top 8 bytes of the stack. */ \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + /* Get an unknown value */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned. */ \ + r2 &= 4; \ + r2 -= 16; \ + /* Add it to fp. We now have either fp-12 or fp-16, but we don't know\ + * which. fp-16 size 8 is partially uninitialized stack.\ + */ \ + r2 += r10; \ + /* Dereference it indirectly. */ \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("cgroup/skb") +__description("indirect variable-offset stack access, priv vs unpriv") +__success __failure_unpriv +__msg_unpriv("R2 variable stack access prohibited for !root") +__retval(0) +__naked void stack_access_priv_vs_unpriv(void) +{ + asm volatile (" \ + /* Fill the top 16 bytes of the stack. */ \ + r2 = 0; \ + *(u64*)(r10 - 16) = r2; \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + /* Get an unknown value. */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned. */ \ + r2 &= 4; \ + r2 -= 16; \ + /* Add it to fp. We now have either fp-12 or fp-16, we don't know\ + * which, but either way it points to initialized stack.\ + */ \ + r2 += r10; \ + /* Dereference it indirectly. 
*/ \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("lwt_in") +__description("indirect variable-offset stack access, ok") +__success __retval(0) +__naked void variable_offset_stack_access_ok(void) +{ + asm volatile (" \ + /* Fill the top 16 bytes of the stack. */ \ + r2 = 0; \ + *(u64*)(r10 - 16) = r2; \ + r2 = 0; \ + *(u64*)(r10 - 8) = r2; \ + /* Get an unknown value. */ \ + r2 = *(u32*)(r1 + 0); \ + /* Make it small and 4-byte aligned. */ \ + r2 &= 4; \ + r2 -= 16; \ + /* Add it to fp. We now have either fp-12 or fp-16, we don't know\ + * which, but either way it points to initialized stack.\ + */ \ + r2 += r10; \ + /* Dereference it indirectly. */ \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + r0 = 0; \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_xadd.c b/tools/testing/selftests/bpf/progs/verifier_xadd.c new file mode 100644 index 000000000000..05a0a55adb45 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_xadd.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/xadd.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("tc") +__description("xadd/w check unaligned stack") +__failure __msg("misaligned stack access off") +__naked void xadd_w_check_unaligned_stack(void) +{ + asm volatile (" \ + r0 = 1; \ + *(u64*)(r10 - 8) = r0; \ + lock *(u32 *)(r10 - 7) += w0; \ + r0 = *(u64*)(r10 - 8); \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("xadd/w check unaligned map") +__failure __msg("misaligned value access off") +__naked void xadd_w_check_unaligned_map(void) +{ + asm volatile (" \ + r1 = 0; \ + *(u64*)(r10 - 8) = r1; \ + r2 = r10; \ + r2 += -8; \ + r1 = %[map_hash_8b] ll; \ + call %[bpf_map_lookup_elem]; \ + if r0 != 0 goto l0_%=; \ + exit; \ +l0_%=: r1 = 1; \ + lock *(u32 *)(r0 + 3) += w1; \ + r0 = *(u32*)(r0 + 3); \ + exit; \ +" : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all); +} + +SEC("xdp") +__description("xadd/w check unaligned pkt") +__failure __msg("BPF_ATOMIC stores into R2 pkt is not allowed") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void xadd_w_check_unaligned_pkt(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 < r3 goto l0_%=; \ + r0 = 99; \ + goto l1_%=; \ +l0_%=: r0 = 1; \ + r1 = 0; \ + *(u32*)(r2 + 0) = r1; \ + r1 = 0; \ + *(u32*)(r2 + 3) = r1; \ + lock *(u32 *)(r2 + 1) += w0; \ + lock *(u32 *)(r2 + 2) += w0; \ + r0 = *(u32*)(r2 + 1); \ +l1_%=: exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("tc") +__description("xadd/w check whether src/dst got mangled, 1") +__success __retval(3) +__naked void src_dst_got_mangled_1(void) +{ + asm volatile (" \ + r0 = 1; \ + r6 = r0; \ + r7 = r10; \ + *(u64*)(r10 - 8) = r0; \ + lock *(u64 *)(r10 - 8) += r0; \ + lock *(u64 *)(r10 - 8) += r0; \ + if r6 != r0 goto l0_%=; \ + if r7 != r10 goto l0_%=; \ + r0 = *(u64*)(r10 - 8); \ + 
exit; \ +l0_%=: r0 = 42; \ + exit; \ +" ::: __clobber_all); +} + +SEC("tc") +__description("xadd/w check whether src/dst got mangled, 2") +__success __retval(3) +__naked void src_dst_got_mangled_2(void) +{ + asm volatile (" \ + r0 = 1; \ + r6 = r0; \ + r7 = r10; \ + *(u32*)(r10 - 8) = r0; \ + lock *(u32 *)(r10 - 8) += w0; \ + lock *(u32 *)(r10 - 8) += w0; \ + if r6 != r0 goto l0_%=; \ + if r7 != r10 goto l0_%=; \ + r0 = *(u32*)(r10 - 8); \ + exit; \ +l0_%=: r0 = 42; \ + exit; \ +" ::: __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_xdp.c b/tools/testing/selftests/bpf/progs/verifier_xdp.c new file mode 100644 index 000000000000..50768ed179b3 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_xdp.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/xdp.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("xdp") +__description("XDP, using ifindex from netdev") +__success __retval(1) +__naked void xdp_using_ifindex_from_netdev(void) +{ + asm volatile (" \ + r0 = 0; \ + r2 = *(u32*)(r1 + %[xdp_md_ingress_ifindex]); \ + if r2 < 1 goto l0_%=; \ + r0 = 1; \ +l0_%=: exit; \ +" : + : __imm_const(xdp_md_ingress_ifindex, offsetof(struct xdp_md, ingress_ifindex)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_xdp_direct_packet_access.c b/tools/testing/selftests/bpf/progs/verifier_xdp_direct_packet_access.c new file mode 100644 index 000000000000..df2dfd1b15d1 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_xdp_direct_packet_access.c @@ -0,0 +1,1722 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Converted from tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c */ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +SEC("xdp") +__description("XDP pkt read, pkt_end mangling, bad access 1") +__failure __msg("R3 pointer arithmetic on pkt_end") +__naked void end_mangling_bad_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + r3 += 8; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end mangling, bad access 2") +__failure __msg("R3 pointer arithmetic on pkt_end") +__naked void end_mangling_bad_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + r3 -= 8; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' > pkt_end, corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void end_corner_case_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, 
offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' > pkt_end, bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_end_bad_access_1_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 4); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' > pkt_end, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_end_bad_access_2_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' > pkt_end, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 9; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 9); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' > pkt_end, corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end > pkt_data', good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void end_pkt_data_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end > pkt_data', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 6; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 6); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") 
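/* All of the pkt_data'/pkt_end (and, further below, pkt_meta'/pkt_data)
 * checks in this file exercise the same direct packet access idiom.
 * Expressed in plain C rather than the inline asm used by these tests,
 * the pattern being verified is roughly:
 *
 *	void *data     = (void *)(long)ctx->data;
 *	void *data_end = (void *)(long)ctx->data_end;
 *
 *	if (data + 8 > data_end)	// range check the verifier tracks
 *		return XDP_DROP;
 *	val = *(__u64 *)data;		// now provably inside the packet
 *
 * The "corner case" variants probe the exact boundary (data + N against
 * data_end) plus the off-by-one access on either side of it, for every
 * combination of <, <=, >, >= in both operand orders.
 */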
+__description("XDP pkt read, pkt_end > pkt_data', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 > r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end > pkt_data', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end > pkt_data', corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' < pkt_end, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_pkt_end_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' < pkt_end, corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 6; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 6); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' < pkt_end, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_end_bad_access_2_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 < r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' < pkt_end, corner 
case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void end_corner_case_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' < pkt_end, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end < pkt_data', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end < pkt_data', bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_1_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 4); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end < pkt_data', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 < r1 goto l0_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end < pkt_data', corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 9; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 9); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end < pkt_data', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked 
void corner_case_1_bad_access_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' >= pkt_end, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_pkt_end_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u32*)(r1 - 5); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' >= pkt_end, corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_5(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 6; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 6); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' >= pkt_end, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_end_bad_access_2_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 >= r3 goto l0_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' >= pkt_end, corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void end_corner_case_good_access_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' >= pkt_end, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_5(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end >= pkt_data', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = 
r2; \ + r1 += 8; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end >= pkt_data', bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_1_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 4); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end >= pkt_data', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 >= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end >= pkt_data', corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_6(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 9; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 9); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end >= pkt_data', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_6(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' <= pkt_end, corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void end_corner_case_good_access_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' <= pkt_end, bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_end_bad_access_1_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + 
r1 = r2; \ + r1 += 8; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 4); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' <= pkt_end, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_end_bad_access_2_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r1 <= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' <= pkt_end, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_7(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 9; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 9); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data' <= pkt_end, corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_7(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end <= pkt_data', good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void end_pkt_data_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u32*)(r1 - 5); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end <= pkt_data', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_8(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 6; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 6); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end <= pkt_data', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 <= r1 
goto l0_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end <= pkt_data', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 7; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_end <= pkt_data', corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_8(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data]); \ + r3 = *(u32*)(r1 + %[xdp_md_data_end]); \ + r1 = r2; \ + r1 += 8; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' > pkt_data, corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_5(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' > pkt_data, bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_1_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 4); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' > pkt_data, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_5(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 > r3 goto l0_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' > pkt_data, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_9(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 9; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 9); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + 
__imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' > pkt_data, corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_9(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r1 > r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data > pkt_meta', good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_pkt_meta_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data > pkt_meta', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_10(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 6; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 6); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data > pkt_meta', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_meta_bad_access_2_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 > r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data > pkt_meta', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void meta_corner_case_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data > pkt_meta', corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_10(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 > r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + 
__imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' < pkt_data, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void meta_pkt_data_good_access_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' < pkt_data, corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_11(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 6; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 6); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' < pkt_data, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_6(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 < r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' < pkt_data, corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_6(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' < pkt_data, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_11(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 < r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data < pkt_meta', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void meta_corner_case_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct 
xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data < pkt_meta', bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_meta_bad_access_1_1(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 4); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data < pkt_meta', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_meta_bad_access_2_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 < r1 goto l0_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data < pkt_meta', corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_12(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 9; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 9); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data < pkt_meta', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_12(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r3 < r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' >= pkt_data, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void meta_pkt_data_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u32*)(r1 - 5); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' >= pkt_data, corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_13(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 6; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 6); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt 
read, pkt_meta' >= pkt_data, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_7(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 >= r3 goto l0_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' >= pkt_data, corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_7(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' >= pkt_data, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_13(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 >= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data >= pkt_meta', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void meta_corner_case_good_access_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data >= pkt_meta', bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_meta_bad_access_1_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 4); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data >= pkt_meta', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_meta_bad_access_2_3(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 >= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data >= pkt_meta', corner case +1, good access") 
+__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_14(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 9; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 9); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data >= pkt_meta', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_14(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r3 >= r1 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' <= pkt_data, corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_corner_case_good_access_8(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 8); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' <= pkt_data, bad access 1") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_1_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 4); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' <= pkt_data, bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_data_bad_access_2_8(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r1 <= r3 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' <= pkt_data, corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_15(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 9; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 9); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_meta' <= pkt_data, corner 
case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_15(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r1 <= r3 goto l0_%=; \ + goto l1_%=; \ +l0_%=: r0 = *(u64*)(r1 - 7); \ +l1_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data <= pkt_meta', good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void data_pkt_meta_good_access_2(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u32*)(r1 - 5); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data <= pkt_meta', corner case -1, bad access") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_bad_access_16(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 6; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 6); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data <= pkt_meta', bad access 2") +__failure __msg("R1 offset is outside of the packet") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void pkt_meta_bad_access_2_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 <= r1 goto l0_%=; \ +l0_%=: r0 = *(u32*)(r1 - 5); \ + r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data <= pkt_meta', corner case, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void meta_corner_case_good_access_4(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 7; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 7); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +SEC("xdp") +__description("XDP pkt read, pkt_data <= pkt_meta', corner case +1, good access") +__success __retval(0) __flag(BPF_F_ANY_ALIGNMENT) +__naked void corner_case_1_good_access_16(void) +{ + asm volatile (" \ + r2 = *(u32*)(r1 + %[xdp_md_data_meta]); \ + r3 = *(u32*)(r1 + %[xdp_md_data]); \ + r1 = r2; \ + r1 += 8; \ + if r3 <= r1 goto l0_%=; \ + r0 = *(u64*)(r1 - 8); \ +l0_%=: r0 = 0; \ + exit; \ +" : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_const(xdp_md_data_meta, offsetof(struct xdp_md, data_meta)) + : __clobber_all); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/xdp_features.c 
b/tools/testing/selftests/bpf/progs/xdp_features.c index 87c247d56f72..67424084a38a 100644 --- a/tools/testing/selftests/bpf/progs/xdp_features.c +++ b/tools/testing/selftests/bpf/progs/xdp_features.c @@ -70,7 +70,6 @@ xdp_process_echo_packet(struct xdp_md *xdp, bool dut) struct tlv_hdr *tlv; struct udphdr *uh; __be16 port; - __u8 *cmd; if (eh + 1 > (struct ethhdr *)data_end) return -EINVAL; diff --git a/tools/testing/selftests/bpf/progs/xdping_kern.c b/tools/testing/selftests/bpf/progs/xdping_kern.c index 4ad73847b8a5..54cf1765118b 100644 --- a/tools/testing/selftests/bpf/progs/xdping_kern.c +++ b/tools/testing/selftests/bpf/progs/xdping_kern.c @@ -89,7 +89,6 @@ static __always_inline int icmp_check(struct xdp_md *ctx, int type) SEC("xdp") int xdping_client(struct xdp_md *ctx) { - void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; struct pinginfo *pinginfo = NULL; struct ethhdr *eth = data; @@ -153,7 +152,6 @@ int xdping_client(struct xdp_md *ctx) SEC("xdp") int xdping_server(struct xdp_md *ctx) { - void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; struct ethhdr *eth = data; struct icmphdr *icmph; diff --git a/tools/testing/selftests/bpf/progs/xdpwall.c b/tools/testing/selftests/bpf/progs/xdpwall.c index 7a891a0c3a39..c2dd0c28237a 100644 --- a/tools/testing/selftests/bpf/progs/xdpwall.c +++ b/tools/testing/selftests/bpf/progs/xdpwall.c @@ -321,7 +321,6 @@ int edgewall(struct xdp_md *ctx) void *data = (void *)(long)(ctx->data); struct fw_match_info match_info = {}; struct pkt_info info = {}; - __u8 parse_err = NO_ERR; void *transport_hdr; struct ethhdr *eth; bool filter_res; diff --git a/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c b/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c index 744a01d0e57d..a630c95c7471 100644 --- a/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c +++ b/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c @@ -3,6 +3,7 @@ #include <linux/bpf.h> #include <bpf/bpf_helpers.h> +#include "xsk_xdp_metadata.h" struct { __uint(type, BPF_MAP_TYPE_XSKMAP); @@ -12,6 +13,7 @@ struct { } xsk SEC(".maps"); static unsigned int idx; +int count = 0; SEC("xdp") int xsk_def_prog(struct xdp_md *xdp) { @@ -27,4 +29,27 @@ SEC("xdp") int xsk_xdp_drop(struct xdp_md *xdp) return bpf_redirect_map(&xsk, 0, XDP_DROP); } +SEC("xdp") int xsk_xdp_populate_metadata(struct xdp_md *xdp) +{ + void *data, *data_meta; + struct xdp_info *meta; + int err; + + /* Reserve enough for all custom metadata. 
*/ + err = bpf_xdp_adjust_meta(xdp, -(int)sizeof(struct xdp_info)); + if (err) + return XDP_DROP; + + data = (void *)(long)xdp->data; + data_meta = (void *)(long)xdp->data_meta; + + if (data_meta + sizeof(struct xdp_info) > data) + return XDP_DROP; + + meta = data_meta; + meta->count = count++; + + return bpf_redirect_map(&xsk, 0, XDP_DROP); +} + char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/test_ftrace.sh b/tools/testing/selftests/bpf/test_ftrace.sh index 20de7bb873bc..f5109eb0e951 100755 --- a/tools/testing/selftests/bpf/test_ftrace.sh +++ b/tools/testing/selftests/bpf/test_ftrace.sh @@ -1,6 +1,11 @@ #!/bin/bash -TR=/sys/kernel/debug/tracing/ +if [[ -e /sys/kernel/tracing/trace ]]; then + TR=/sys/kernel/tracing/ +else + TR=/sys/kernel/debug/tracing/ +fi + clear_trace() { # reset trace output echo > $TR/trace } diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c index bf41390157bf..47e9e076bc8f 100644 --- a/tools/testing/selftests/bpf/test_loader.c +++ b/tools/testing/selftests/bpf/test_loader.c @@ -1,9 +1,14 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. */ +#include <linux/capability.h> #include <stdlib.h> #include <test_progs.h> #include <bpf/btf.h> +#include "autoconf_helper.h" +#include "unpriv_helpers.h" +#include "cap_helpers.h" + #define str_has_pfx(str, pfx) \ (strncmp(str, pfx, __builtin_constant_p(pfx) ? sizeof(pfx) - 1 : strlen(pfx)) == 0) @@ -12,16 +17,48 @@ #define TEST_TAG_EXPECT_FAILURE "comment:test_expect_failure" #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success" #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg=" +#define TEST_TAG_EXPECT_FAILURE_UNPRIV "comment:test_expect_failure_unpriv" +#define TEST_TAG_EXPECT_SUCCESS_UNPRIV "comment:test_expect_success_unpriv" +#define TEST_TAG_EXPECT_MSG_PFX_UNPRIV "comment:test_expect_msg_unpriv=" #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level=" #define TEST_TAG_PROG_FLAGS_PFX "comment:test_prog_flags=" +#define TEST_TAG_DESCRIPTION_PFX "comment:test_description=" +#define TEST_TAG_RETVAL_PFX "comment:test_retval=" +#define TEST_TAG_RETVAL_PFX_UNPRIV "comment:test_retval_unpriv=" -struct test_spec { - const char *name; +/* Warning: duplicated in bpf_misc.h */ +#define POINTER_VALUE 0xcafe4all +#define TEST_DATA_LEN 64 + +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS +#define EFFICIENT_UNALIGNED_ACCESS 1 +#else +#define EFFICIENT_UNALIGNED_ACCESS 0 +#endif + +static int sysctl_unpriv_disabled = -1; + +enum mode { + PRIV = 1, + UNPRIV = 2 +}; + +struct test_subspec { + char *name; bool expect_failure; const char **expect_msgs; size_t expect_msg_cnt; + int retval; + bool execute; +}; + +struct test_spec { + const char *prog_name; + struct test_subspec priv; + struct test_subspec unpriv; int log_level; int prog_flags; + int mode_mask; }; static int tester_init(struct test_loader *tester) @@ -44,17 +81,87 @@ void test_loader_fini(struct test_loader *tester) free(tester->log_buf); } +static void free_test_spec(struct test_spec *spec) +{ + free(spec->priv.name); + free(spec->unpriv.name); + free(spec->priv.expect_msgs); + free(spec->unpriv.expect_msgs); +} + +static int push_msg(const char *msg, struct test_subspec *subspec) +{ + void *tmp; + + tmp = realloc(subspec->expect_msgs, (1 + subspec->expect_msg_cnt) * sizeof(void *)); + if (!tmp) { + ASSERT_FAIL("failed to realloc memory for messages\n"); + return -ENOMEM; + } + subspec->expect_msgs = tmp; + 
subspec->expect_msgs[subspec->expect_msg_cnt++] = msg; + + return 0; +} + +static int parse_int(const char *str, int *val, const char *name) +{ + char *end; + long tmp; + + errno = 0; + if (str_has_pfx(str, "0x")) + tmp = strtol(str + 2, &end, 16); + else + tmp = strtol(str, &end, 10); + if (errno || end[0] != '\0') { + PRINT_FAIL("failed to parse %s from '%s'\n", name, str); + return -EINVAL; + } + *val = tmp; + return 0; +} + +static int parse_retval(const char *str, int *val, const char *name) +{ + struct { + char *name; + int val; + } named_values[] = { + { "INT_MIN" , INT_MIN }, + { "POINTER_VALUE", POINTER_VALUE }, + { "TEST_DATA_LEN", TEST_DATA_LEN }, + }; + int i; + + for (i = 0; i < ARRAY_SIZE(named_values); ++i) { + if (strcmp(str, named_values[i].name) != 0) + continue; + *val = named_values[i].val; + return 0; + } + + return parse_int(str, val, name); +} + +/* Uses btf_decl_tag attributes to describe the expected test + * behavior, see bpf_misc.h for detailed description of each attribute + * and attribute combinations. + */ static int parse_test_spec(struct test_loader *tester, struct bpf_object *obj, struct bpf_program *prog, struct test_spec *spec) { + const char *description = NULL; + bool has_unpriv_result = false; + bool has_unpriv_retval = false; + int func_id, i, err = 0; struct btf *btf; - int func_id, i; memset(spec, 0, sizeof(*spec)); - spec->name = bpf_program__name(prog); + spec->prog_name = bpf_program__name(prog); btf = bpf_object__btf(obj); if (!btf) { @@ -62,16 +169,16 @@ static int parse_test_spec(struct test_loader *tester, return -EINVAL; } - func_id = btf__find_by_name_kind(btf, spec->name, BTF_KIND_FUNC); + func_id = btf__find_by_name_kind(btf, spec->prog_name, BTF_KIND_FUNC); if (func_id < 0) { - ASSERT_FAIL("failed to find FUNC BTF type for '%s'", spec->name); + ASSERT_FAIL("failed to find FUNC BTF type for '%s'", spec->prog_name); return -EINVAL; } for (i = 1; i < btf__type_cnt(btf); i++) { + const char *s, *val, *msg; const struct btf_type *t; - const char *s, *val; - char *e; + int tmp; t = btf__type_by_id(btf, i); if (!btf_is_decl_tag(t)) @@ -81,31 +188,54 @@ static int parse_test_spec(struct test_loader *tester, continue; s = btf__str_by_offset(btf, t->name_off); - if (strcmp(s, TEST_TAG_EXPECT_FAILURE) == 0) { - spec->expect_failure = true; + if (str_has_pfx(s, TEST_TAG_DESCRIPTION_PFX)) { + description = s + sizeof(TEST_TAG_DESCRIPTION_PFX) - 1; + } else if (strcmp(s, TEST_TAG_EXPECT_FAILURE) == 0) { + spec->priv.expect_failure = true; + spec->mode_mask |= PRIV; } else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS) == 0) { - spec->expect_failure = false; + spec->priv.expect_failure = false; + spec->mode_mask |= PRIV; + } else if (strcmp(s, TEST_TAG_EXPECT_FAILURE_UNPRIV) == 0) { + spec->unpriv.expect_failure = true; + spec->mode_mask |= UNPRIV; + has_unpriv_result = true; + } else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS_UNPRIV) == 0) { + spec->unpriv.expect_failure = false; + spec->mode_mask |= UNPRIV; + has_unpriv_result = true; } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) { - void *tmp; - const char **msg; - - tmp = realloc(spec->expect_msgs, - (1 + spec->expect_msg_cnt) * sizeof(void *)); - if (!tmp) { - ASSERT_FAIL("failed to realloc memory for messages\n"); - return -ENOMEM; - } - spec->expect_msgs = tmp; - msg = &spec->expect_msgs[spec->expect_msg_cnt++]; - *msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1; + msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1; + err = push_msg(msg, &spec->priv); + if (err) + goto cleanup; + spec->mode_mask |= PRIV; + } 
else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX_UNPRIV)) { + msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX_UNPRIV) - 1; + err = push_msg(msg, &spec->unpriv); + if (err) + goto cleanup; + spec->mode_mask |= UNPRIV; + } else if (str_has_pfx(s, TEST_TAG_RETVAL_PFX)) { + val = s + sizeof(TEST_TAG_RETVAL_PFX) - 1; + err = parse_retval(val, &spec->priv.retval, "__retval"); + if (err) + goto cleanup; + spec->priv.execute = true; + spec->mode_mask |= PRIV; + } else if (str_has_pfx(s, TEST_TAG_RETVAL_PFX_UNPRIV)) { + val = s + sizeof(TEST_TAG_RETVAL_PFX_UNPRIV) - 1; + err = parse_retval(val, &spec->unpriv.retval, "__retval_unpriv"); + if (err) + goto cleanup; + spec->mode_mask |= UNPRIV; + spec->unpriv.execute = true; + has_unpriv_retval = true; } else if (str_has_pfx(s, TEST_TAG_LOG_LEVEL_PFX)) { val = s + sizeof(TEST_TAG_LOG_LEVEL_PFX) - 1; - errno = 0; - spec->log_level = strtol(val, &e, 0); - if (errno || e[0] != '\0') { - ASSERT_FAIL("failed to parse test log level from '%s'", s); - return -EINVAL; - } + err = parse_int(val, &spec->log_level, "test log level"); + if (err) + goto cleanup; } else if (str_has_pfx(s, TEST_TAG_PROG_FLAGS_PFX)) { val = s + sizeof(TEST_TAG_PROG_FLAGS_PFX) - 1; if (strcmp(val, "BPF_F_STRICT_ALIGNMENT") == 0) { @@ -121,17 +251,74 @@ static int parse_test_spec(struct test_loader *tester, } else if (strcmp(val, "BPF_F_XDP_HAS_FRAGS") == 0) { spec->prog_flags |= BPF_F_XDP_HAS_FRAGS; } else /* assume numeric value */ { - errno = 0; - spec->prog_flags |= strtol(val, &e, 0); - if (errno || e[0] != '\0') { - ASSERT_FAIL("failed to parse test prog flags from '%s'", s); - return -EINVAL; - } + err = parse_int(val, &tmp, "test prog flags"); + if (err) + goto cleanup; + spec->prog_flags |= tmp; + } + } + } + + if (spec->mode_mask == 0) + spec->mode_mask = PRIV; + + if (!description) + description = spec->prog_name; + + if (spec->mode_mask & PRIV) { + spec->priv.name = strdup(description); + if (!spec->priv.name) { + PRINT_FAIL("failed to allocate memory for priv.name\n"); + err = -ENOMEM; + goto cleanup; + } + } + + if (spec->mode_mask & UNPRIV) { + int descr_len = strlen(description); + const char *suffix = " @unpriv"; + char *name; + + name = malloc(descr_len + strlen(suffix) + 1); + if (!name) { + PRINT_FAIL("failed to allocate memory for unpriv.name\n"); + err = -ENOMEM; + goto cleanup; + } + + strcpy(name, description); + strcpy(&name[descr_len], suffix); + spec->unpriv.name = name; + } + + if (spec->mode_mask & (PRIV | UNPRIV)) { + if (!has_unpriv_result) + spec->unpriv.expect_failure = spec->priv.expect_failure; + + if (!has_unpriv_retval) { + spec->unpriv.retval = spec->priv.retval; + spec->unpriv.execute = spec->priv.execute; + } + + if (!spec->unpriv.expect_msgs) { + size_t sz = spec->priv.expect_msg_cnt * sizeof(void *); + + spec->unpriv.expect_msgs = malloc(sz); + if (!spec->unpriv.expect_msgs) { + PRINT_FAIL("failed to allocate memory for unpriv.expect_msgs\n"); + err = -ENOMEM; + goto cleanup; } + memcpy(spec->unpriv.expect_msgs, spec->priv.expect_msgs, sz); + spec->unpriv.expect_msg_cnt = spec->priv.expect_msg_cnt; } } return 0; + +cleanup: + free_test_spec(spec); + return err; } static void prepare_case(struct test_loader *tester, @@ -148,7 +335,7 @@ static void prepare_case(struct test_loader *tester, bpf_program__set_log_buf(prog, tester->log_buf, tester->log_buf_sz); - /* Make sure we set at least minimal log level, unless test requirest + /* Make sure we set at least minimal log level, unless test requires * even higher level already. 
Make sure to preserve independent log * level 4 (verifier stats), though. */ @@ -172,18 +359,18 @@ static void emit_verifier_log(const char *log_buf, bool force) } static void validate_case(struct test_loader *tester, - struct test_spec *spec, + struct test_subspec *subspec, struct bpf_object *obj, struct bpf_program *prog, int load_err) { int i, j; - for (i = 0; i < spec->expect_msg_cnt; i++) { + for (i = 0; i < subspec->expect_msg_cnt; i++) { char *match; const char *expect_msg; - expect_msg = spec->expect_msgs[i]; + expect_msg = subspec->expect_msgs[i]; match = strstr(tester->log_buf + tester->next_match_pos, expect_msg); if (!ASSERT_OK_PTR(match, "expect_msg")) { @@ -191,7 +378,8 @@ static void validate_case(struct test_loader *tester, if (env.verbosity == VERBOSE_NONE) emit_verifier_log(tester->log_buf, true /*force*/); for (j = 0; j < i; j++) - fprintf(stderr, "MATCHED MSG: '%s'\n", spec->expect_msgs[j]); + fprintf(stderr, + "MATCHED MSG: '%s'\n", subspec->expect_msgs[j]); fprintf(stderr, "EXPECTED MSG: '%s'\n", expect_msg); return; } @@ -200,17 +388,229 @@ static void validate_case(struct test_loader *tester, } } +struct cap_state { + __u64 old_caps; + bool initialized; +}; + +static int drop_capabilities(struct cap_state *caps) +{ + const __u64 caps_to_drop = (1ULL << CAP_SYS_ADMIN | 1ULL << CAP_NET_ADMIN | + 1ULL << CAP_PERFMON | 1ULL << CAP_BPF); + int err; + + err = cap_disable_effective(caps_to_drop, &caps->old_caps); + if (err) { + PRINT_FAIL("failed to drop capabilities: %i, %s\n", err, strerror(err)); + return err; + } + + caps->initialized = true; + return 0; +} + +static int restore_capabilities(struct cap_state *caps) +{ + int err; + + if (!caps->initialized) + return 0; + + err = cap_enable_effective(caps->old_caps, NULL); + if (err) + PRINT_FAIL("failed to restore capabilities: %i, %s\n", err, strerror(err)); + caps->initialized = false; + return err; +} + +static bool can_execute_unpriv(struct test_loader *tester, struct test_spec *spec) +{ + if (sysctl_unpriv_disabled < 0) + sysctl_unpriv_disabled = get_unpriv_disabled() ? 
1 : 0; + if (sysctl_unpriv_disabled) + return false; + if ((spec->prog_flags & BPF_F_ANY_ALIGNMENT) && !EFFICIENT_UNALIGNED_ACCESS) + return false; + return true; +} + +static bool is_unpriv_capable_map(struct bpf_map *map) +{ + enum bpf_map_type type; + __u32 flags; + + type = bpf_map__type(map); + + switch (type) { + case BPF_MAP_TYPE_HASH: + case BPF_MAP_TYPE_PERCPU_HASH: + case BPF_MAP_TYPE_HASH_OF_MAPS: + flags = bpf_map__map_flags(map); + return !(flags & BPF_F_ZERO_SEED); + case BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: + case BPF_MAP_TYPE_ARRAY: + case BPF_MAP_TYPE_RINGBUF: + case BPF_MAP_TYPE_PROG_ARRAY: + case BPF_MAP_TYPE_CGROUP_ARRAY: + case BPF_MAP_TYPE_PERCPU_ARRAY: + case BPF_MAP_TYPE_USER_RINGBUF: + case BPF_MAP_TYPE_ARRAY_OF_MAPS: + case BPF_MAP_TYPE_CGROUP_STORAGE: + case BPF_MAP_TYPE_PERF_EVENT_ARRAY: + return true; + default: + return false; + } +} + +static int do_prog_test_run(int fd_prog, int *retval) +{ + __u8 tmp_out[TEST_DATA_LEN << 2] = {}; + __u8 tmp_in[TEST_DATA_LEN] = {}; + int err, saved_errno; + LIBBPF_OPTS(bpf_test_run_opts, topts, + .data_in = tmp_in, + .data_size_in = sizeof(tmp_in), + .data_out = tmp_out, + .data_size_out = sizeof(tmp_out), + .repeat = 1, + ); + + err = bpf_prog_test_run_opts(fd_prog, &topts); + saved_errno = errno; + + if (err) { + PRINT_FAIL("FAIL: Unexpected bpf_prog_test_run error: %d (%s) ", + saved_errno, strerror(saved_errno)); + return err; + } + + ASSERT_OK(0, "bpf_prog_test_run"); + *retval = topts.retval; + + return 0; +} + +static bool should_do_test_run(struct test_spec *spec, struct test_subspec *subspec) +{ + if (!subspec->execute) + return false; + + if (subspec->expect_failure) + return false; + + if ((spec->prog_flags & BPF_F_ANY_ALIGNMENT) && !EFFICIENT_UNALIGNED_ACCESS) { + if (env.verbosity != VERBOSE_NONE) + printf("alignment prevents execution\n"); + return false; + } + + return true; +} + /* this function is forced noinline and has short generic name to look better * in test_progs output (in case of a failure) */ static noinline void run_subtest(struct test_loader *tester, - const char *skel_name, - skel_elf_bytes_fn elf_bytes_factory) + struct bpf_object_open_opts *open_opts, + const void *obj_bytes, + size_t obj_byte_cnt, + struct test_spec *spec, + bool unpriv) +{ + struct test_subspec *subspec = unpriv ? &spec->unpriv : &spec->priv; + struct cap_state caps = {}; + struct bpf_program *tprog; + struct bpf_object *tobj; + struct bpf_map *map; + int retval; + int err; + + if (!test__start_subtest(subspec->name)) + return; + + if (unpriv) { + if (!can_execute_unpriv(tester, spec)) { + test__skip(); + test__end_subtest(); + return; + } + if (drop_capabilities(&caps)) { + test__end_subtest(); + return; + } + } + + tobj = bpf_object__open_mem(obj_bytes, obj_byte_cnt, open_opts); + if (!ASSERT_OK_PTR(tobj, "obj_open_mem")) /* shouldn't happen */ + goto subtest_cleanup; + + bpf_object__for_each_program(tprog, tobj) + bpf_program__set_autoload(tprog, false); + + bpf_object__for_each_program(tprog, tobj) { + /* only load specified program */ + if (strcmp(bpf_program__name(tprog), spec->prog_name) == 0) { + bpf_program__set_autoload(tprog, true); + break; + } + } + + prepare_case(tester, spec, tobj, tprog); + + /* By default bpf_object__load() automatically creates all + * maps declared in the skeleton. Some map types are only + * allowed in priv mode. Disable autoload for such maps in + * unpriv mode. 
+ */ + bpf_object__for_each_map(map, tobj) + bpf_map__set_autocreate(map, !unpriv || is_unpriv_capable_map(map)); + + err = bpf_object__load(tobj); + if (subspec->expect_failure) { + if (!ASSERT_ERR(err, "unexpected_load_success")) { + emit_verifier_log(tester->log_buf, false /*force*/); + goto tobj_cleanup; + } + } else { + if (!ASSERT_OK(err, "unexpected_load_failure")) { + emit_verifier_log(tester->log_buf, true /*force*/); + goto tobj_cleanup; + } + } + + emit_verifier_log(tester->log_buf, false /*force*/); + validate_case(tester, subspec, tobj, tprog, err); + + if (should_do_test_run(spec, subspec)) { + /* For some reason test_verifier executes programs + * with all capabilities restored. Do the same here. + */ + if (!restore_capabilities(&caps)) + goto tobj_cleanup; + + do_prog_test_run(bpf_program__fd(tprog), &retval); + if (retval != subspec->retval && subspec->retval != POINTER_VALUE) { + PRINT_FAIL("Unexpected retval: %d != %d\n", retval, subspec->retval); + goto tobj_cleanup; + } + } + +tobj_cleanup: + bpf_object__close(tobj); +subtest_cleanup: + test__end_subtest(); + restore_capabilities(&caps); +} + +static void process_subtest(struct test_loader *tester, + const char *skel_name, + skel_elf_bytes_fn elf_bytes_factory) { LIBBPF_OPTS(bpf_object_open_opts, open_opts, .object_name = skel_name); - struct bpf_object *obj = NULL, *tobj; - struct bpf_program *prog, *tprog; + struct bpf_object *obj = NULL; + struct bpf_program *prog; const void *obj_bytes; size_t obj_byte_cnt; int err; @@ -224,52 +624,22 @@ void run_subtest(struct test_loader *tester, return; bpf_object__for_each_program(prog, obj) { - const char *prog_name = bpf_program__name(prog); struct test_spec spec; - if (!test__start_subtest(prog_name)) - continue; - /* if we can't derive test specification, go to the next test */ err = parse_test_spec(tester, obj, prog, &spec); - if (!ASSERT_OK(err, "parse_test_spec")) + if (err) { + PRINT_FAIL("Can't parse test spec for program '%s'\n", + bpf_program__name(prog)); continue; - - tobj = bpf_object__open_mem(obj_bytes, obj_byte_cnt, &open_opts); - if (!ASSERT_OK_PTR(tobj, "obj_open_mem")) /* shouldn't happen */ - continue; - - bpf_object__for_each_program(tprog, tobj) - bpf_program__set_autoload(tprog, false); - - bpf_object__for_each_program(tprog, tobj) { - /* only load specified program */ - if (strcmp(bpf_program__name(tprog), prog_name) == 0) { - bpf_program__set_autoload(tprog, true); - break; - } - } - - prepare_case(tester, &spec, tobj, tprog); - - err = bpf_object__load(tobj); - if (spec.expect_failure) { - if (!ASSERT_ERR(err, "unexpected_load_success")) { - emit_verifier_log(tester->log_buf, false /*force*/); - goto tobj_cleanup; - } - } else { - if (!ASSERT_OK(err, "unexpected_load_failure")) { - emit_verifier_log(tester->log_buf, true /*force*/); - goto tobj_cleanup; - } } - emit_verifier_log(tester->log_buf, false /*force*/); - validate_case(tester, &spec, tobj, tprog, err); + if (spec.mode_mask & PRIV) + run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, &spec, false); + if (spec.mode_mask & UNPRIV) + run_subtest(tester, &open_opts, obj_bytes, obj_byte_cnt, &spec, true); -tobj_cleanup: - bpf_object__close(tobj); + free_test_spec(&spec); } bpf_object__close(obj); @@ -280,5 +650,5 @@ void test_loader__run_subtests(struct test_loader *tester, skel_elf_bytes_fn elf_bytes_factory) { /* see comment in run_subtest() for why we do this function nesting */ - run_subtest(tester, skel_name, elf_bytes_factory); + process_subtest(tester, skel_name, elf_bytes_factory); } 
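The reworked test_loader above derives each subtest's behavior from btf_decl_tag annotations attached to the BPF program under test. As a rough illustration only: the short convenience macros mentioned in the parse_test_spec() comment live in bpf_misc.h and are not part of this hunk, so the raw "comment:..." tag strings matching the TEST_TAG_* prefixes are spelled out here, and the program name and body are invented for the sketch.

/* Hypothetical annotated selftest program; the "comment:..." strings
 * are exactly the prefixes parsed by parse_test_spec() above.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define __tag(s) __attribute__((btf_decl_tag(s)))

SEC("socket")
__tag("comment:test_description=return 42 under both priv and unpriv")
__tag("comment:test_expect_success")
__tag("comment:test_expect_success_unpriv")
__tag("comment:test_retval=42")
int ret_42(void *ctx)
{
	return 42;
}

char _license[] SEC("license") = "GPL";

With these tags the spec parser sets mode_mask to PRIV | UNPRIV, names the two subtests after the description (the unprivileged one gets the " @unpriv" suffix), and, because the __retval tag marks the subspec as executable, run_subtest() also runs the loaded program through bpf_prog_test_run_opts() and checks that it returns 42; the unprivileged run inherits the retval since no __retval_unpriv tag is given.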
diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c index 6d5e3022c75f..ea82921110da 100644 --- a/tools/testing/selftests/bpf/test_progs.c +++ b/tools/testing/selftests/bpf/test_progs.c @@ -18,6 +18,7 @@ #include <sys/socket.h> #include <sys/un.h> #include <bpf/btf.h> +#include "json_writer.h" static bool verbose(void) { @@ -269,10 +270,23 @@ static void print_subtest_name(int test_num, int subtest_num, fprintf(env.stdout, "\n"); } +static void jsonw_write_log_message(json_writer_t *w, char *log_buf, size_t log_cnt) +{ + /* open_memstream (from stdio_hijack_init) ensures that log_bug is terminated by a + * null byte. Yet in parallel mode, log_buf will be NULL if there is no message. + */ + if (log_cnt) { + jsonw_string_field(w, "message", log_buf); + } else { + jsonw_string_field(w, "message", ""); + } +} + static void dump_test_log(const struct prog_test_def *test, const struct test_state *test_state, bool skip_ok_subtests, - bool par_exec_result) + bool par_exec_result, + json_writer_t *w) { bool test_failed = test_state->error_cnt > 0; bool force_log = test_state->force_log; @@ -296,6 +310,16 @@ static void dump_test_log(const struct prog_test_def *test, if (test_state->log_cnt && print_test) print_test_log(test_state->log_buf, test_state->log_cnt); + if (w && print_test) { + jsonw_start_object(w); + jsonw_string_field(w, "name", test->test_name); + jsonw_uint_field(w, "number", test->test_num); + jsonw_write_log_message(w, test_state->log_buf, test_state->log_cnt); + jsonw_bool_field(w, "failed", test_failed); + jsonw_name(w, "subtests"); + jsonw_start_array(w); + } + for (i = 0; i < test_state->subtest_num; i++) { subtest_state = &test_state->subtest_states[i]; subtest_failed = subtest_state->error_cnt; @@ -314,6 +338,20 @@ static void dump_test_log(const struct prog_test_def *test, test->test_name, subtest_state->name, test_result(subtest_state->error_cnt, subtest_state->skipped)); + + if (w && print_subtest) { + jsonw_start_object(w); + jsonw_string_field(w, "name", subtest_state->name); + jsonw_uint_field(w, "number", i+1); + jsonw_write_log_message(w, subtest_state->log_buf, subtest_state->log_cnt); + jsonw_bool_field(w, "failed", subtest_failed); + jsonw_end_object(w); + } + } + + if (w && print_test) { + jsonw_end_array(w); + jsonw_end_object(w); } print_test_result(test, test_state); @@ -591,31 +629,6 @@ out: return err; } -int extract_build_id(char *build_id, size_t size) -{ - FILE *fp; - char *line = NULL; - size_t len = 0; - - fp = popen("readelf -n ./urandom_read | grep 'Build ID'", "r"); - if (fp == NULL) - return -1; - - if (getline(&line, &len, fp) == -1) - goto err; - pclose(fp); - - if (len > size) - len = size; - memcpy(build_id, line, len); - build_id[len] = '\0'; - free(line); - return 0; -err: - pclose(fp); - return -1; -} - static int finit_module(int fd, const char *param_values, int flags) { return syscall(__NR_finit_module, fd, param_values, flags); @@ -715,6 +728,7 @@ enum ARG_KEYS { ARG_TEST_NAME_GLOB_DENYLIST = 'd', ARG_NUM_WORKERS = 'j', ARG_DEBUG = -1, + ARG_JSON_SUMMARY = 'J' }; static const struct argp_option opts[] = { @@ -740,6 +754,7 @@ static const struct argp_option opts[] = { "Number of workers to run in parallel, default to number of cpus." }, { "debug", ARG_DEBUG, NULL, 0, "print extra debug information for test_progs." 
}, + { "json-summary", ARG_JSON_SUMMARY, "FILE", 0, "Write report in json format to this file."}, {}, }; @@ -870,6 +885,13 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state) case ARG_DEBUG: env->debug = true; break; + case ARG_JSON_SUMMARY: + env->json = fopen(arg, "w"); + if (env->json == NULL) { + perror("Failed to open json summary file"); + return -errno; + } + break; case ARGP_KEY_ARG: argp_usage(state); break; @@ -1017,7 +1039,7 @@ void crash_handler(int signum) stdio_restore(); if (env.test) { env.test_state->error_cnt++; - dump_test_log(env.test, env.test_state, true, false); + dump_test_log(env.test, env.test_state, true, false, NULL); } if (env.worker_id != -1) fprintf(stderr, "[%d]: ", env.worker_id); @@ -1124,7 +1146,7 @@ static void run_one_test(int test_num) stdio_restore(); - dump_test_log(test, state, false, false); + dump_test_log(test, state, false, false, NULL); } struct dispatch_data { @@ -1283,7 +1305,7 @@ static void *dispatch_thread(void *ctx) } while (false); pthread_mutex_lock(&stdout_output_lock); - dump_test_log(test, state, false, true); + dump_test_log(test, state, false, true, NULL); pthread_mutex_unlock(&stdout_output_lock); } /* while (true) */ error: @@ -1308,6 +1330,7 @@ static void calculate_summary_and_print_errors(struct test_env *env) { int i; int succ_cnt = 0, fail_cnt = 0, sub_succ_cnt = 0, skip_cnt = 0; + json_writer_t *w = NULL; for (i = 0; i < prog_test_cnt; i++) { struct test_state *state = &test_states[i]; @@ -1324,6 +1347,22 @@ static void calculate_summary_and_print_errors(struct test_env *env) succ_cnt++; } + if (env->json) { + w = jsonw_new(env->json); + if (!w) + fprintf(env->stderr, "Failed to create new JSON stream."); + } + + if (w) { + jsonw_start_object(w); + jsonw_uint_field(w, "success", succ_cnt); + jsonw_uint_field(w, "success_subtest", sub_succ_cnt); + jsonw_uint_field(w, "skipped", skip_cnt); + jsonw_uint_field(w, "failed", fail_cnt); + jsonw_name(w, "results"); + jsonw_start_array(w); + } + /* * We only print error logs summary when there are failed tests and * verbose mode is not enabled. Otherwise, results may be incosistent. 
@@ -1340,10 +1379,19 @@ static void calculate_summary_and_print_errors(struct test_env *env) if (!state->tested || !state->error_cnt) continue; - dump_test_log(test, state, true, true); + dump_test_log(test, state, true, true, w); } } + if (w) { + jsonw_end_array(w); + jsonw_end_object(w); + jsonw_destroy(&w); + } + + if (env->json) + fclose(env->json); + printf("Summary: %d/%d PASSED, %d SKIPPED, %d FAILED\n", succ_cnt, sub_succ_cnt, skip_cnt, fail_cnt); diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h index 3cbf005747ed..10ba43250668 100644 --- a/tools/testing/selftests/bpf/test_progs.h +++ b/tools/testing/selftests/bpf/test_progs.h @@ -114,6 +114,7 @@ struct test_env { FILE *stdout; FILE *stderr; int nr_cpus; + FILE *json; int succ_cnt; /* successful tests */ int sub_succ_cnt; /* successful sub-tests */ @@ -404,7 +405,6 @@ static inline void *u64_to_ptr(__u64 ptr) int bpf_find_map(const char *test, struct bpf_object *obj, const char *name); int compare_map_keys(int map1_fd, int map2_fd); int compare_stack_ips(int smap_fd, int amap_fd, int stack_trace_len); -int extract_build_id(char *build_id, size_t size); int kern_sync_rcu(void); int trigger_module_test_read(int read_sz); int trigger_module_test_write(int write_sz); diff --git a/tools/testing/selftests/bpf/test_tunnel.sh b/tools/testing/selftests/bpf/test_tunnel.sh index 06857b689c11..2dec7dbf29a2 100755 --- a/tools/testing/selftests/bpf/test_tunnel.sh +++ b/tools/testing/selftests/bpf/test_tunnel.sh @@ -571,8 +571,13 @@ setup_xfrm_tunnel() test_xfrm_tunnel() { + if [[ -e /sys/kernel/tracing/trace ]]; then + TRACE=/sys/kernel/tracing/trace + else + TRACE=/sys/kernel/debug/tracing/trace + fi config_device - > /sys/kernel/debug/tracing/trace + > ${TRACE} setup_xfrm_tunnel mkdir -p ${BPF_PIN_TUNNEL_DIR} bpftool prog loadall ${BPF_FILE} ${BPF_PIN_TUNNEL_DIR} @@ -581,11 +586,11 @@ test_xfrm_tunnel() ${BPF_PIN_TUNNEL_DIR}/xfrm_get_state ip netns exec at_ns0 ping $PING_ARG 10.1.1.200 sleep 1 - grep "reqid 1" /sys/kernel/debug/tracing/trace + grep "reqid 1" ${TRACE} check_err $? - grep "spi 0x1" /sys/kernel/debug/tracing/trace + grep "spi 0x1" ${TRACE} check_err $? - grep "remote ip 0xac100164" /sys/kernel/debug/tracing/trace + grep "remote ip 0xac100164" ${TRACE} check_err $? 
cleanup diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index 49a70d9beb0b..e4657c5bc3f1 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -33,13 +33,8 @@ #include <bpf/bpf.h> #include <bpf/libbpf.h> -#ifdef HAVE_GENHDR -# include "autoconf.h" -#else -# if defined(__i386) || defined(__x86_64) || defined(__s390x__) || defined(__aarch64__) -# define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1 -# endif -#endif +#include "autoconf_helper.h" +#include "unpriv_helpers.h" #include "cap_helpers.h" #include "bpf_rand.h" #include "bpf_util.h" @@ -1084,7 +1079,7 @@ static void do_test_fixup(struct bpf_test *test, enum bpf_prog_type prog_type, } if (*fixup_map_ringbuf) { map_fds[20] = create_map(BPF_MAP_TYPE_RINGBUF, 0, - 0, 4096); + 0, getpagesize()); do { prog[*fixup_map_ringbuf].imm = map_fds[20]; fixup_map_ringbuf++; @@ -1665,22 +1660,6 @@ static bool is_admin(void) return (caps & ADMIN_CAPS) == ADMIN_CAPS; } -static void get_unpriv_disabled() -{ - char buf[2]; - FILE *fd; - - fd = fopen("/proc/sys/"UNPRIV_SYSCTL, "r"); - if (!fd) { - perror("fopen /proc/sys/"UNPRIV_SYSCTL); - unpriv_disabled = true; - return; - } - if (fgets(buf, 2, fd) == buf && atoi(buf)) - unpriv_disabled = true; - fclose(fd); -} - static bool test_as_unpriv(struct bpf_test *test) { #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS diff --git a/tools/testing/selftests/bpf/test_verifier_log.c b/tools/testing/selftests/bpf/test_verifier_log.c deleted file mode 100644 index 70feda97cee5..000000000000 --- a/tools/testing/selftests/bpf/test_verifier_log.c +++ /dev/null @@ -1,175 +0,0 @@ -#include <errno.h> -#include <stdlib.h> -#include <stdio.h> -#include <string.h> -#include <unistd.h> -#include <sys/time.h> - -#include <linux/bpf.h> -#include <linux/filter.h> -#include <linux/unistd.h> - -#include <bpf/bpf.h> - -#define LOG_SIZE (1 << 20) - -#define err(str...) 
printf("ERROR: " str) - -static const struct bpf_insn code_sample[] = { - /* We need a few instructions to pass the min log length */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), -}; - -static inline __u64 ptr_to_u64(const void *ptr) -{ - return (__u64) (unsigned long) ptr; -} - -static int load(char *log, size_t log_len, int log_level) -{ - union bpf_attr attr; - - bzero(&attr, sizeof(attr)); - attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER; - attr.insn_cnt = (__u32)(sizeof(code_sample) / sizeof(struct bpf_insn)); - attr.insns = ptr_to_u64(code_sample); - attr.license = ptr_to_u64("GPL"); - attr.log_buf = ptr_to_u64(log); - attr.log_size = log_len; - attr.log_level = log_level; - - return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr)); -} - -static void check_ret(int ret, int exp_errno) -{ - if (ret > 0) { - close(ret); - err("broken sample loaded successfully!?\n"); - exit(1); - } - - if (!ret || errno != exp_errno) { - err("Program load returned: ret:%d/errno:%d, expected ret:%d/errno:%d\n", - ret, errno, -1, exp_errno); - exit(1); - } -} - -static void check_ones(const char *buf, size_t len, const char *msg) -{ - while (len--) - if (buf[len] != 1) { - err("%s", msg); - exit(1); - } -} - -static void test_log_good(char *log, size_t buf_len, size_t log_len, - size_t exp_len, int exp_errno, const char *full_log) -{ - size_t len; - int ret; - - memset(log, 1, buf_len); - - ret = load(log, log_len, 1); - check_ret(ret, exp_errno); - - len = strnlen(log, buf_len); - if (len == buf_len) { - err("verifier did not NULL terminate the log\n"); - exit(1); - } - if (exp_len && len != exp_len) { - err("incorrect log length expected:%zd have:%zd\n", - exp_len, len); - exit(1); - } - - if (strchr(log, 1)) { - err("verifier leaked a byte through\n"); - exit(1); - } - - check_ones(log + len + 1, buf_len - len - 1, - "verifier wrote bytes past NULL termination\n"); - - if (memcmp(full_log, log, LOG_SIZE)) { - err("log did not match expected output\n"); - exit(1); - } -} - -static void test_log_bad(char *log, size_t log_len, int log_level) -{ - int ret; - - ret = load(log, log_len, log_level); - check_ret(ret, EINVAL); - if (log) - check_ones(log, LOG_SIZE, - "verifier touched log with bad parameters\n"); -} - -int main(int argc, char **argv) -{ - char full_log[LOG_SIZE]; - char log[LOG_SIZE]; - size_t want_len; - int i; - - memset(log, 1, LOG_SIZE); - - /* Use libbpf 1.0 API mode */ - libbpf_set_strict_mode(LIBBPF_STRICT_ALL); - - /* Test incorrect attr */ - printf("Test log_level 0...\n"); - test_log_bad(log, LOG_SIZE, 0); - - printf("Test log_size < 128...\n"); - test_log_bad(log, 15, 1); - - printf("Test log_buff = NULL...\n"); - test_log_bad(NULL, LOG_SIZE, 1); - - /* Test with log big enough */ - printf("Test oversized buffer...\n"); - test_log_good(full_log, LOG_SIZE, LOG_SIZE, 0, EACCES, full_log); - - want_len = strlen(full_log); - - printf("Test exact buffer...\n"); - test_log_good(log, LOG_SIZE, want_len + 2, want_len, EACCES, full_log); - - printf("Test 
undersized buffers...\n"); - for (i = 0; i < 64; i++) { - full_log[want_len - i + 1] = 1; - full_log[want_len - i] = 0; - - test_log_good(log, LOG_SIZE, want_len + 1 - i, want_len - i, - ENOSPC, full_log); - } - - printf("test_verifier_log: OK\n"); - return 0; -} diff --git a/tools/testing/selftests/bpf/test_xsk.sh b/tools/testing/selftests/bpf/test_xsk.sh index b077cf58f825..377fb157a57c 100755 --- a/tools/testing/selftests/bpf/test_xsk.sh +++ b/tools/testing/selftests/bpf/test_xsk.sh @@ -116,6 +116,7 @@ setup_vethPairs() { ip link add ${VETH0} numtxqueues 4 numrxqueues 4 type veth peer name ${VETH1} numtxqueues 4 numrxqueues 4 if [ -f /proc/net/if_inet6 ]; then echo 1 > /proc/sys/net/ipv6/conf/${VETH0}/disable_ipv6 + echo 1 > /proc/sys/net/ipv6/conf/${VETH1}/disable_ipv6 fi if [[ $verbose -eq 1 ]]; then echo "setting up ${VETH1}" diff --git a/tools/testing/selftests/bpf/testing_helpers.c b/tools/testing/selftests/bpf/testing_helpers.c index 6c44153755e6..0b5e0829e5be 100644 --- a/tools/testing/selftests/bpf/testing_helpers.c +++ b/tools/testing/selftests/bpf/testing_helpers.c @@ -195,7 +195,7 @@ int bpf_prog_test_load(const char *file, enum bpf_prog_type type, goto err_out; } - if (type != BPF_PROG_TYPE_UNSPEC) + if (type != BPF_PROG_TYPE_UNSPEC && bpf_program__type(prog) != type) bpf_program__set_type(prog, type); flags = bpf_program__flags(prog) | BPF_F_TEST_RND_HI32; @@ -229,3 +229,23 @@ int bpf_test_load_program(enum bpf_prog_type type, const struct bpf_insn *insns, return bpf_prog_load(type, NULL, license, insns, insns_cnt, &opts); } + +__u64 read_perf_max_sample_freq(void) +{ + __u64 sample_freq = 5000; /* fallback to 5000 on error */ + FILE *f; + + f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r"); + if (f == NULL) { + printf("Failed to open /proc/sys/kernel/perf_event_max_sample_rate: err %d\n" + "return default value: 5000\n", -errno); + return sample_freq; + } + if (fscanf(f, "%llu", &sample_freq) != 1) { + printf("Failed to parse /proc/sys/kernel/perf_event_max_sample_rate: err %d\n" + "return default value: 5000\n", -errno); + } + + fclose(f); + return sample_freq; +} diff --git a/tools/testing/selftests/bpf/testing_helpers.h b/tools/testing/selftests/bpf/testing_helpers.h index 6ec00bf79cb5..eb8790f928e4 100644 --- a/tools/testing/selftests/bpf/testing_helpers.h +++ b/tools/testing/selftests/bpf/testing_helpers.h @@ -20,3 +20,5 @@ struct test_filter_set; int parse_test_list(const char *s, struct test_filter_set *test_set, bool is_glob_pattern); + +__u64 read_perf_max_sample_freq(void); diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c index 09a16a77bae4..9b070cdf44ac 100644 --- a/tools/testing/selftests/bpf/trace_helpers.c +++ b/tools/testing/selftests/bpf/trace_helpers.c @@ -11,8 +11,12 @@ #include <linux/perf_event.h> #include <sys/mman.h> #include "trace_helpers.h" +#include <linux/limits.h> +#include <libelf.h> +#include <gelf.h> -#define DEBUGFS "/sys/kernel/debug/tracing/" +#define TRACEFS_PIPE "/sys/kernel/tracing/trace_pipe" +#define DEBUGFS_PIPE "/sys/kernel/debug/tracing/trace_pipe" #define MAX_SYMS 300000 static struct ksym syms[MAX_SYMS]; @@ -136,7 +140,10 @@ void read_trace_pipe(void) { int trace_fd; - trace_fd = open(DEBUGFS "trace_pipe", O_RDONLY, 0); + if (access(TRACEFS_PIPE, F_OK) == 0) + trace_fd = open(TRACEFS_PIPE, O_RDONLY, 0); + else + trace_fd = open(DEBUGFS_PIPE, O_RDONLY, 0); if (trace_fd < 0) return; @@ -230,3 +237,82 @@ ssize_t get_rel_offset(uintptr_t addr) fclose(f); return 
-EINVAL; } + +static int +parse_build_id_buf(const void *note_start, Elf32_Word note_size, char *build_id) +{ + Elf32_Word note_offs = 0; + + while (note_offs + sizeof(Elf32_Nhdr) < note_size) { + Elf32_Nhdr *nhdr = (Elf32_Nhdr *)(note_start + note_offs); + + if (nhdr->n_type == 3 && nhdr->n_namesz == sizeof("GNU") && + !strcmp((char *)(nhdr + 1), "GNU") && nhdr->n_descsz > 0 && + nhdr->n_descsz <= BPF_BUILD_ID_SIZE) { + memcpy(build_id, note_start + note_offs + + ALIGN(sizeof("GNU"), 4) + sizeof(Elf32_Nhdr), nhdr->n_descsz); + memset(build_id + nhdr->n_descsz, 0, BPF_BUILD_ID_SIZE - nhdr->n_descsz); + return (int) nhdr->n_descsz; + } + + note_offs = note_offs + sizeof(Elf32_Nhdr) + + ALIGN(nhdr->n_namesz, 4) + ALIGN(nhdr->n_descsz, 4); + } + + return -ENOENT; +} + +/* Reads binary from *path* file and returns it in the *build_id* buffer + * with *size* which is expected to be at least BPF_BUILD_ID_SIZE bytes. + * Returns size of build id on success. On error the error value is + * returned. + */ +int read_build_id(const char *path, char *build_id, size_t size) +{ + int fd, err = -EINVAL; + Elf *elf = NULL; + GElf_Ehdr ehdr; + size_t max, i; + + if (size < BPF_BUILD_ID_SIZE) + return -EINVAL; + + fd = open(path, O_RDONLY | O_CLOEXEC); + if (fd < 0) + return -errno; + + (void)elf_version(EV_CURRENT); + + elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); + if (!elf) + goto out; + if (elf_kind(elf) != ELF_K_ELF) + goto out; + if (!gelf_getehdr(elf, &ehdr)) + goto out; + + for (i = 0; i < ehdr.e_phnum; i++) { + GElf_Phdr mem, *phdr; + char *data; + + phdr = gelf_getphdr(elf, i, &mem); + if (!phdr) + goto out; + if (phdr->p_type != PT_NOTE) + continue; + data = elf_rawfile(elf, &max); + if (!data) + goto out; + if (phdr->p_offset + phdr->p_memsz > max) + goto out; + err = parse_build_id_buf(data + phdr->p_offset, phdr->p_memsz, build_id); + if (err > 0) + break; + } + +out: + if (elf) + elf_end(elf); + close(fd); + return err; +} diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h index 53efde0e2998..876f3e711df6 100644 --- a/tools/testing/selftests/bpf/trace_helpers.h +++ b/tools/testing/selftests/bpf/trace_helpers.h @@ -4,6 +4,9 @@ #include <bpf/libbpf.h> +#define __ALIGN_MASK(x, mask) (((x)+(mask))&~(mask)) +#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a)-1) + struct ksym { long addr; char *name; @@ -23,4 +26,6 @@ void read_trace_pipe(void); ssize_t get_uprobe_offset(const void *addr); ssize_t get_rel_offset(uintptr_t addr); +int read_build_id(const char *path, char *build_id, size_t size); + #endif diff --git a/tools/testing/selftests/bpf/unpriv_helpers.c b/tools/testing/selftests/bpf/unpriv_helpers.c new file mode 100644 index 000000000000..2a6efbd0401e --- /dev/null +++ b/tools/testing/selftests/bpf/unpriv_helpers.c @@ -0,0 +1,26 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <stdbool.h> +#include <stdlib.h> +#include <error.h> +#include <stdio.h> + +#include "unpriv_helpers.h" + +bool get_unpriv_disabled(void) +{ + bool disabled; + char buf[2]; + FILE *fd; + + fd = fopen("/proc/sys/" UNPRIV_SYSCTL, "r"); + if (fd) { + disabled = (fgets(buf, 2, fd) == buf && atoi(buf)); + fclose(fd); + } else { + perror("fopen /proc/sys/" UNPRIV_SYSCTL); + disabled = true; + } + + return disabled; +} diff --git a/tools/testing/selftests/bpf/unpriv_helpers.h b/tools/testing/selftests/bpf/unpriv_helpers.h new file mode 100644 index 000000000000..151f67329665 --- /dev/null +++ b/tools/testing/selftests/bpf/unpriv_helpers.h @@ -0,0 +1,7 @@ +// 
SPDX-License-Identifier: GPL-2.0-only + +#include <stdbool.h> + +#define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled" + +bool get_unpriv_disabled(void); diff --git a/tools/testing/selftests/bpf/verifier/and.c b/tools/testing/selftests/bpf/verifier/and.c deleted file mode 100644 index 7d7ebee5cc7a..000000000000 --- a/tools/testing/selftests/bpf/verifier/and.c +++ /dev/null @@ -1,68 +0,0 @@ -{ - "invalid and of negative number", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_1, -4), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 max value is outside of the allowed memory range", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid range check", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 12), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_9, 1), - BPF_ALU32_IMM(BPF_MOD, BPF_REG_1, 2), - BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_ALU32_REG(BPF_AND, BPF_REG_9, BPF_REG_1), - BPF_ALU32_IMM(BPF_ADD, BPF_REG_9, 1), - BPF_ALU32_IMM(BPF_RSH, BPF_REG_9, 1), - BPF_MOV32_IMM(BPF_REG_3, 1), - BPF_ALU32_REG(BPF_SUB, BPF_REG_3, BPF_REG_9), - BPF_ALU32_IMM(BPF_MUL, BPF_REG_3, 0x10000000), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_3), - BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_3, 0), - BPF_MOV64_REG(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 max value is outside of the allowed memory range", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "check known subreg with unknown reg", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xFFFF1234), - /* Upper bits are unknown but AND above masks out 1 zero'ing lower bits */ - BPF_JMP32_IMM(BPF_JLT, BPF_REG_0, 1, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 512), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 !read_ok", - .result_unpriv = REJECT, - .result = ACCEPT, - .retval = 0 -}, diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c deleted file mode 100644 index 1b138cd2b187..000000000000 --- a/tools/testing/selftests/bpf/verifier/array_access.c +++ /dev/null @@ -1,379 +0,0 @@ -{ - "valid map access into an array with a constant", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = 
REJECT, - .result = ACCEPT, -}, -{ - "valid map access into an array with a register", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = REJECT, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "valid map access into an array with a variable", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 3), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = REJECT, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "valid map access into an array with a signed variable", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP32_IMM(BPF_JSGT, BPF_REG_1, 0xffffffff, 1), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES), - BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = REJECT, - .result = ACCEPT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid map access into an array with a constant", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, (MAX_ENTRIES + 1) << 2, - offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "invalid access to map value, value_size=48 off=48 size=8", - .result = REJECT, -}, -{ - "invalid map access into an array with a register", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_IMM(BPF_REG_1, MAX_ENTRIES + 1), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct 
test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 min value is outside of the allowed memory range", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid map access into an array with a variable", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 unbounded memory access, make sure to bounds check any such access", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid map access into an array with no floor check", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES), - BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .errstr = "R0 unbounded memory access", - .result_unpriv = REJECT, - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid map access into an array with a invalid max check", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES + 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .errstr = "invalid access to map value, value_size=48 off=44 size=8", - .result_unpriv = REJECT, - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid map access into an array with a invalid max check", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, - offsetof(struct 
test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3, 11 }, - .errstr = "R0 pointer += pointer", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "valid read map access into a read-only array 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_ro = { 3 }, - .result = ACCEPT, - .retval = 28, -}, -{ - "valid read map access into a read-only array 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_csum_diff), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .fixup_map_array_ro = { 3 }, - .result = ACCEPT, - .retval = 65507, -}, -{ - "invalid write map access into a read-only array 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42), - BPF_EXIT_INSN(), - }, - .fixup_map_array_ro = { 3 }, - .result = REJECT, - .errstr = "write into map forbidden", -}, -{ - "invalid write map access into a read-only array 2", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_skb_load_bytes), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .fixup_map_array_ro = { 4 }, - .result = REJECT, - .errstr = "write into map forbidden", -}, -{ - "valid write map access into a write-only array 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_wo = { 3 }, - .result = ACCEPT, - .retval = 1, -}, -{ - "valid write map access into a write-only array 2", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, 
BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_skb_load_bytes), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .fixup_map_array_wo = { 4 }, - .result = ACCEPT, - .retval = 0, -}, -{ - "invalid read map access into a write-only array 1", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_array_wo = { 3 }, - .result = REJECT, - .errstr = "read from map forbidden", -}, -{ - "invalid read map access into a write-only array 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .fixup_map_array_wo = { 3 }, - .result = REJECT, - .errstr = "read from map forbidden", -}, diff --git a/tools/testing/selftests/bpf/verifier/basic_stack.c b/tools/testing/selftests/bpf/verifier/basic_stack.c deleted file mode 100644 index f995777dddb3..000000000000 --- a/tools/testing/selftests/bpf/verifier/basic_stack.c +++ /dev/null @@ -1,64 +0,0 @@ -{ - "stack out of bounds", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, 8, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid write to stack", - .result = REJECT, -}, -{ - "uninitialized stack1", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 2 }, - .errstr = "invalid indirect read from stack", - .result = REJECT, -}, -{ - "uninitialized stack2", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8), - BPF_EXIT_INSN(), - }, - .errstr = "invalid read from stack", - .result = REJECT, -}, -{ - "invalid fp arithmetic", - /* If this gets ever changed, make sure JITs can deal with it. 
*/ - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 8), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 subtraction from stack pointer", - .result = REJECT, -}, -{ - "non-invalid fp arithmetic", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, -}, -{ - "misaligned read from stack", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -4), - BPF_EXIT_INSN(), - }, - .errstr = "misaligned stack access", - .result = REJECT, -}, diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c index 33125d5f6772..43942ce8cf15 100644 --- a/tools/testing/selftests/bpf/verifier/bounds.c +++ b/tools/testing/selftests/bpf/verifier/bounds.c @@ -753,3 +753,132 @@ .result_unpriv = REJECT, .result = ACCEPT, }, +{ + "bound check with JMP_JLT for crossing 64-bit signed boundary", + .insns = { + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 8), + + BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), + BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10), + BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), + + BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), + /* r1 unsigned range is [0x7fffffffffffff10, 0x800000000000000f] */ + BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2), + + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .prog_type = BPF_PROG_TYPE_XDP, +}, +{ + "bound check with JMP_JSLT for crossing 64-bit signed boundary", + .insns = { + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 13), + + BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), + BPF_LD_IMM64(BPF_REG_0, 0x7fffffffffffff10), + BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), + + BPF_LD_IMM64(BPF_REG_2, 0x8000000000000fff), + BPF_LD_IMM64(BPF_REG_0, 0x8000000000000000), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), + BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_2, 3), + /* r1 signed range is [S64_MIN, S64_MAX] */ + BPF_JMP_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -3), + + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .prog_type = BPF_PROG_TYPE_XDP, +}, +{ + "bound check for loop upper bound greater than U32_MAX", + .insns = { + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 8), + + BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), + BPF_LD_IMM64(BPF_REG_0, 0x100000000), + BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), + + BPF_LD_IMM64(BPF_REG_0, 0x100000000), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), + BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2), + + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .prog_type = BPF_PROG_TYPE_XDP, +}, +{ + "bound check with 
JMP32_JLT for crossing 32-bit signed boundary", + .insns = { + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 6), + + BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), + BPF_MOV32_IMM(BPF_REG_0, 0x7fffff10), + BPF_ALU32_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), + + BPF_MOV32_IMM(BPF_REG_0, 0x80000000), + BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1), + /* r1 unsigned range is [0, 0x8000000f] */ + BPF_JMP32_REG(BPF_JLT, BPF_REG_0, BPF_REG_1, -2), + + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .prog_type = BPF_PROG_TYPE_XDP, +}, +{ + "bound check with JMP32_JSLT for crossing 32-bit signed boundary", + .insns = { + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data_end)), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 10), + + BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0), + BPF_MOV32_IMM(BPF_REG_0, 0x7fffff10), + BPF_ALU32_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), + + BPF_MOV32_IMM(BPF_REG_2, 0x80000fff), + BPF_MOV32_IMM(BPF_REG_0, 0x80000000), + BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1), + BPF_JMP32_REG(BPF_JSGT, BPF_REG_0, BPF_REG_2, 3), + /* r1 signed range is [S32_MIN, S32_MAX] */ + BPF_JMP32_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, -3), + + BPF_MOV64_IMM(BPF_REG_0, 1), + BPF_EXIT_INSN(), + + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .prog_type = BPF_PROG_TYPE_XDP, +}, diff --git a/tools/testing/selftests/bpf/verifier/bounds_deduction.c b/tools/testing/selftests/bpf/verifier/bounds_deduction.c deleted file mode 100644 index 3931c481e30c..000000000000 --- a/tools/testing/selftests/bpf/verifier/bounds_deduction.c +++ /dev/null @@ -1,136 +0,0 @@ -{ - "check deducing bounds from const, 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 0), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .errstr = "R0 tried to subtract pointer from scalar", - .result = REJECT, -}, -{ - "check deducing bounds from const, 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 1, 1), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .result_unpriv = REJECT, - .result = ACCEPT, - .retval = 1, -}, -{ - "check deducing bounds from const, 3", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .errstr = "R0 tried to subtract pointer from scalar", - .result = REJECT, -}, -{ - "check deducing bounds from const, 4", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_SUB, BPF_REG_6, BPF_REG_0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R6 has pointer with unsupported alu operation", - .result_unpriv = REJECT, - .result = ACCEPT, -}, 
-{ - "check deducing bounds from const, 5", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .errstr = "R0 tried to subtract pointer from scalar", - .result = REJECT, -}, -{ - "check deducing bounds from const, 6", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .errstr = "R0 tried to subtract pointer from scalar", - .result = REJECT, -}, -{ - "check deducing bounds from const, 7", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, ~0), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0), - BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .errstr = "dereference of modified ctx ptr", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "check deducing bounds from const, 8", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, ~0), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .errstr = "negative offset ctx ptr R1 off=-1 disallowed", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "check deducing bounds from const, 9", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .errstr = "R0 tried to subtract pointer from scalar", - .result = REJECT, -}, -{ - "check deducing bounds from const, 10", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0), - /* Marks reg as unknown. 
*/ - BPF_ALU64_IMM(BPF_NEG, BPF_REG_0, 0), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr = "math between ctx pointer and register with unbounded min value is not allowed", - .result = REJECT, -}, diff --git a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c deleted file mode 100644 index bf82b923c5fe..000000000000 --- a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c +++ /dev/null @@ -1,411 +0,0 @@ -{ - "bounds checks mixing signed and unsigned, positive bounds", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, 2), - BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 3), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 4, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 2", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5), - BPF_MOV64_IMM(BPF_REG_8, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_1), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), - BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 3", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_LDX_MEM(BPF_DW, 
BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 4), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), - BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 4", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, 1), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .result = ACCEPT, -}, -{ - "bounds checks mixing signed and unsigned, variant 5", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 4), - BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 6", - .insns = { - BPF_MOV64_REG(BPF_REG_9, BPF_REG_1), - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_9), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -512), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_6, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_6, 5), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_4, 1, 4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 1), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_ST_MEM(BPF_H, BPF_REG_10, -512, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R4 min value is negative, either use unsigned", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 7", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, 1024 * 1024 * 1024), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 
1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .result = ACCEPT, -}, -{ - "bounds checks mixing signed and unsigned, variant 8", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 9", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_LD_IMM64(BPF_REG_2, -9223372036854775808ULL), - BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .result = ACCEPT, -}, -{ - "bounds checks mixing signed and unsigned, variant 10", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 11", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2), - /* Dead branch. 
*/ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 12", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -6), - BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 13", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, 2), - BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2), - BPF_MOV64_IMM(BPF_REG_7, 1), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_1), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 4, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_7), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 14", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_MOV64_IMM(BPF_REG_8, 2), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 42, 6), - BPF_JMP_REG(BPF_JSGT, BPF_REG_8, BPF_REG_1, 3), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, -3), - BPF_JMP_IMM(BPF_JA, 0, 0, -7), - }, - .fixup_map_hash_8b = { 6 }, - .errstr = "unbounded min value", - .result = REJECT, -}, -{ - "bounds checks mixing signed and unsigned, variant 15", - .insns = { - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, 
BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_MOV64_IMM(BPF_REG_2, -6), - BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 1, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "unbounded min value", - .result = REJECT, -}, diff --git a/tools/testing/selftests/bpf/verifier/btf_ctx_access.c b/tools/testing/selftests/bpf/verifier/btf_ctx_access.c index 6340db6b46dc..0484d3de040d 100644 --- a/tools/testing/selftests/bpf/verifier/btf_ctx_access.c +++ b/tools/testing/selftests/bpf/verifier/btf_ctx_access.c @@ -10,3 +10,16 @@ .expected_attach_type = BPF_TRACE_FENTRY, .kfunc = "bpf_modify_return_test", }, + +{ + "btf_ctx_access u32 pointer accept", + .insns = { + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), /* load 1nd argument value (u32 pointer) */ + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .prog_type = BPF_PROG_TYPE_TRACING, + .expected_attach_type = BPF_TRACE_FENTRY, + .kfunc = "bpf_fentry_test9", +}, diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c index 5702fc9761ef..1bdf2b43e49e 100644 --- a/tools/testing/selftests/bpf/verifier/calls.c +++ b/tools/testing/selftests/bpf/verifier/calls.c @@ -109,7 +109,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "arg#0 is ptr_or_null_ expected ptr_ or socket", + .errstr = "Possibly NULL pointer passed to trusted arg0", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_acquire", 3 }, { "bpf_kfunc_call_test_release", 5 }, @@ -165,19 +165,23 @@ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), BPF_EXIT_INSN(), BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 16), BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -4), BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0), + BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_acquire", 3 }, - { "bpf_kfunc_call_test_release", 9 }, + { "bpf_kfunc_call_test_offset", 9 }, + { "bpf_kfunc_call_test_release", 12 }, }, .result_unpriv = REJECT, .result = REJECT, diff --git a/tools/testing/selftests/bpf/verifier/cfg.c b/tools/testing/selftests/bpf/verifier/cfg.c deleted file mode 100644 index 4eb76ed739ce..000000000000 --- a/tools/testing/selftests/bpf/verifier/cfg.c +++ /dev/null @@ -1,73 +0,0 @@ -{ - "unreachable", - .insns = { - BPF_EXIT_INSN(), - BPF_EXIT_INSN(), - }, - .errstr = "unreachable", - .result = REJECT, -}, -{ - "unreachable2", - .insns = { - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "unreachable", - .result = REJECT, -}, -{ - "out of range jump", - .insns = { - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_EXIT_INSN(), - }, - .errstr = "jump out of range", - .result = REJECT, -}, -{ - "out of range jump2", - 
.insns = { - BPF_JMP_IMM(BPF_JA, 0, 0, -2), - BPF_EXIT_INSN(), - }, - .errstr = "jump out of range", - .result = REJECT, -}, -{ - "loop (back-edge)", - .insns = { - BPF_JMP_IMM(BPF_JA, 0, 0, -1), - BPF_EXIT_INSN(), - }, - .errstr = "unreachable insn 1", - .errstr_unpriv = "back-edge", - .result = REJECT, -}, -{ - "loop2 (back-edge)", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_JMP_IMM(BPF_JA, 0, 0, -4), - BPF_EXIT_INSN(), - }, - .errstr = "unreachable insn 4", - .errstr_unpriv = "back-edge", - .result = REJECT, -}, -{ - "conditional loop", - .insns = { - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -3), - BPF_EXIT_INSN(), - }, - .errstr = "infinite loop detected", - .errstr_unpriv = "back-edge", - .result = REJECT, -}, diff --git a/tools/testing/selftests/bpf/verifier/cgroup_inv_retcode.c b/tools/testing/selftests/bpf/verifier/cgroup_inv_retcode.c deleted file mode 100644 index 6d65fe3e7321..000000000000 --- a/tools/testing/selftests/bpf/verifier/cgroup_inv_retcode.c +++ /dev/null @@ -1,72 +0,0 @@ -{ - "bpf_exit with invalid return code. test1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R0 has value (0x0; 0xffffffff)", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, -}, -{ - "bpf_exit with invalid return code. test2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, -}, -{ - "bpf_exit with invalid return code. test3", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 3), - BPF_EXIT_INSN(), - }, - .errstr = "R0 has value (0x0; 0x3)", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, -}, -{ - "bpf_exit with invalid return code. test4", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, -}, -{ - "bpf_exit with invalid return code. test5", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 2), - BPF_EXIT_INSN(), - }, - .errstr = "R0 has value (0x2; 0x0)", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, -}, -{ - "bpf_exit with invalid return code. test6", - .insns = { - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr = "R0 is not a known value (ctx)", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, -}, -{ - "bpf_exit with invalid return code. 
test7", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 4), - BPF_ALU64_REG(BPF_MUL, BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .errstr = "R0 has unknown scalar value", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SOCK, -}, diff --git a/tools/testing/selftests/bpf/verifier/cgroup_skb.c b/tools/testing/selftests/bpf/verifier/cgroup_skb.c deleted file mode 100644 index 52e4c03b076b..000000000000 --- a/tools/testing/selftests/bpf/verifier/cgroup_skb.c +++ /dev/null @@ -1,197 +0,0 @@ -{ - "direct packet read test#1 for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, - offsetof(struct __sk_buff, len)), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, - offsetof(struct __sk_buff, pkt_type)), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, mark)), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, - offsetof(struct __sk_buff, mark)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, queue_mapping)), - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, - offsetof(struct __sk_buff, protocol)), - BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1, - offsetof(struct __sk_buff, vlan_present)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "invalid bpf_context access off=76 size=4", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "direct packet read test#2 for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, - offsetof(struct __sk_buff, vlan_tci)), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, - offsetof(struct __sk_buff, vlan_proto)), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, priority)), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, - offsetof(struct __sk_buff, priority)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, ingress_ifindex)), - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, - offsetof(struct __sk_buff, tc_index)), - BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1, - offsetof(struct __sk_buff, hash)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "direct packet read test#3 for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, - offsetof(struct __sk_buff, cb[0])), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, - offsetof(struct __sk_buff, cb[1])), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, cb[2])), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, cb[3])), - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, - offsetof(struct __sk_buff, cb[4])), - BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1, - offsetof(struct __sk_buff, napi_id)), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_4, - offsetof(struct __sk_buff, cb[0])), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_5, - offsetof(struct __sk_buff, cb[1])), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, - offsetof(struct __sk_buff, cb[2])), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_7, - offsetof(struct __sk_buff, cb[3])), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_8, - offsetof(struct __sk_buff, cb[4])), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = 
ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "direct packet read test#4 for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, family)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, remote_ip4)), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, - offsetof(struct __sk_buff, local_ip4)), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, - offsetof(struct __sk_buff, remote_ip6[0])), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, - offsetof(struct __sk_buff, remote_ip6[1])), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, - offsetof(struct __sk_buff, remote_ip6[2])), - BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_1, - offsetof(struct __sk_buff, remote_ip6[3])), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, local_ip6[0])), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, local_ip6[1])), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, local_ip6[2])), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, local_ip6[3])), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, remote_port)), - BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1, - offsetof(struct __sk_buff, local_port)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid access of tc_classid for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, tc_classid)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid access of data_meta for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, data_meta)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid access of flow_keys for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, flow_keys)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid write access to napi_id for CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1, - offsetof(struct __sk_buff, napi_id)), - BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_9, - offsetof(struct __sk_buff, napi_id)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid bpf_context access", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "write tstamp from CGROUP_SKB", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, tstamp)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "invalid bpf_context access off=152 size=8", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "read tstamp from CGROUP_SKB", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, - offsetof(struct __sk_buff, tstamp)), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, diff --git a/tools/testing/selftests/bpf/verifier/cgroup_storage.c b/tools/testing/selftests/bpf/verifier/cgroup_storage.c deleted file mode 100644 index 97057c0a1b8a..000000000000 --- 
a/tools/testing/selftests/bpf/verifier/cgroup_storage.c +++ /dev/null @@ -1,220 +0,0 @@ -{ - "valid cgroup storage access", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_cgroup_storage = { 1 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid cgroup storage access 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1 }, - .result = REJECT, - .errstr = "cannot pass map_type 1 into func bpf_get_local_storage", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid cgroup storage access 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "fd 1 is not pointing to valid bpf_map", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid cgroup storage access 3", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 256), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=64 off=256 size=4", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid cgroup storage access 4", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_EXIT_INSN(), - }, - .fixup_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=64 off=-2 size=4", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid cgroup storage access 5", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 7), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "get_local_storage() doesn't support non-zero flags", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid cgroup storage access 6", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_1), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "get_local_storage() doesn't support non-zero flags", - .errstr_unpriv = "R2 leaks addr into helper function", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "valid per-cpu cgroup 
storage access", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_percpu_cgroup_storage = { 1 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid per-cpu cgroup storage access 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1 }, - .result = REJECT, - .errstr = "cannot pass map_type 1 into func bpf_get_local_storage", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid per-cpu cgroup storage access 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 1), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "fd 1 is not pointing to valid bpf_map", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid per-cpu cgroup storage access 3", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 256), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_percpu_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=64 off=256 size=4", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid per-cpu cgroup storage access 4", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_EXIT_INSN(), - }, - .fixup_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=64 off=-2 size=4", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid per-cpu cgroup storage access 5", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 7), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_percpu_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "get_local_storage() doesn't support non-zero flags", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "invalid per-cpu cgroup storage access 6", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_1), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_percpu_cgroup_storage = { 1 }, - .result = REJECT, - .errstr = "get_local_storage() doesn't support non-zero flags", - .errstr_unpriv = "R2 leaks addr into helper function", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, diff --git a/tools/testing/selftests/bpf/verifier/const_or.c 
b/tools/testing/selftests/bpf/verifier/const_or.c deleted file mode 100644 index 0719b0ddec04..000000000000 --- a/tools/testing/selftests/bpf/verifier/const_or.c +++ /dev/null @@ -1,60 +0,0 @@ -{ - "constant register |= constant should keep constant type", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48), - BPF_MOV64_IMM(BPF_REG_2, 34), - BPF_ALU64_IMM(BPF_OR, BPF_REG_2, 13), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "constant register |= constant should not bypass stack boundary checks", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48), - BPF_MOV64_IMM(BPF_REG_2, 34), - BPF_ALU64_IMM(BPF_OR, BPF_REG_2, 24), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .errstr = "invalid indirect access to stack R1 off=-48 size=58", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "constant register |= constant register should keep constant type", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48), - BPF_MOV64_IMM(BPF_REG_2, 34), - BPF_MOV64_IMM(BPF_REG_4, 13), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "constant register |= constant register should not bypass stack boundary checks", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -48), - BPF_MOV64_IMM(BPF_REG_2, 34), - BPF_MOV64_IMM(BPF_REG_4, 24), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .errstr = "invalid indirect access to stack R1 off=-48 size=58", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, diff --git a/tools/testing/selftests/bpf/verifier/ctx_sk_msg.c b/tools/testing/selftests/bpf/verifier/ctx_sk_msg.c deleted file mode 100644 index c6c69220a569..000000000000 --- a/tools/testing/selftests/bpf/verifier/ctx_sk_msg.c +++ /dev/null @@ -1,181 +0,0 @@ -{ - "valid access family in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, family)), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "valid access remote_ip4 in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, remote_ip4)), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "valid access local_ip4 in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, local_ip4)), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "valid access remote_port in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, remote_port)), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "valid access local_port in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, local_port)), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "valid access remote_ip6 in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, 
remote_ip6[0])), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, remote_ip6[1])), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, remote_ip6[2])), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, remote_ip6[3])), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_SKB, -}, -{ - "valid access local_ip6 in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, local_ip6[0])), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, local_ip6[1])), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, local_ip6[2])), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, local_ip6[3])), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_SKB, -}, -{ - "valid access size in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, - offsetof(struct sk_msg_md, size)), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "invalid 64B read of size in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, - offsetof(struct sk_msg_md, size)), - BPF_EXIT_INSN(), - }, - .errstr = "invalid bpf_context access", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SK_MSG, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "invalid read past end of SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct sk_msg_md, size) + 4), - BPF_EXIT_INSN(), - }, - .errstr = "invalid bpf_context access", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "invalid read offset in SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct sk_msg_md, family) + 1), - BPF_EXIT_INSN(), - }, - .errstr = "invalid bpf_context access", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SK_MSG, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "direct packet read for SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, - offsetof(struct sk_msg_md, data)), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, - offsetof(struct sk_msg_md, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "direct packet write for SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, - offsetof(struct sk_msg_md, data)), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, - offsetof(struct sk_msg_md, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, -{ - "overlapping checks for direct packet access SK_MSG", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, - offsetof(struct sk_msg_md, data)), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, - offsetof(struct sk_msg_md, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6), - BPF_MOV64_IMM(BPF_REG_0, 0), - 
BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SK_MSG, -}, diff --git a/tools/testing/selftests/bpf/verifier/direct_stack_access_wraparound.c b/tools/testing/selftests/bpf/verifier/direct_stack_access_wraparound.c deleted file mode 100644 index 698e3779fdd2..000000000000 --- a/tools/testing/selftests/bpf/verifier/direct_stack_access_wraparound.c +++ /dev/null @@ -1,40 +0,0 @@ -{ - "direct stack access with 32-bit wraparound. test1", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "fp pointer and 2147483647", - .result = REJECT -}, -{ - "direct stack access with 32-bit wraparound. test2", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x3fffffff), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "fp pointer and 1073741823", - .result = REJECT -}, -{ - "direct stack access with 32-bit wraparound. test3", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x1fffffff), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "fp pointer offset 1073741822", - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .result = REJECT -}, diff --git a/tools/testing/selftests/bpf/verifier/div0.c b/tools/testing/selftests/bpf/verifier/div0.c deleted file mode 100644 index 7685edfbcf71..000000000000 --- a/tools/testing/selftests/bpf/verifier/div0.c +++ /dev/null @@ -1,184 +0,0 @@ -{ - "DIV32 by 0, zero check 1", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_2, 1), - BPF_ALU32_REG(BPF_DIV, BPF_REG_2, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "DIV32 by 0, zero check 2", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL), - BPF_MOV32_IMM(BPF_REG_2, 1), - BPF_ALU32_REG(BPF_DIV, BPF_REG_2, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "DIV64 by 0, zero check", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_2, 1), - BPF_ALU64_REG(BPF_DIV, BPF_REG_2, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "MOD32 by 0, zero check 1", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_2, 1), - BPF_ALU32_REG(BPF_MOD, BPF_REG_2, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "MOD32 by 0, zero check 2", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL), - BPF_MOV32_IMM(BPF_REG_2, 1), - BPF_ALU32_REG(BPF_MOD, BPF_REG_2, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "MOD64 by 0, zero check", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_2, 1), - BPF_ALU64_REG(BPF_MOD, BPF_REG_2, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "DIV32 by 0, zero check ok, cls", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_MOV32_IMM(BPF_REG_1, 2), - BPF_MOV32_IMM(BPF_REG_2, 16), - 
BPF_ALU32_REG(BPF_DIV, BPF_REG_2, BPF_REG_1), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 8, -}, -{ - "DIV32 by 0, zero check 1, cls", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 0, -}, -{ - "DIV32 by 0, zero check 2, cls", - .insns = { - BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 0, -}, -{ - "DIV64 by 0, zero check, cls", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 0, -}, -{ - "MOD32 by 0, zero check ok, cls", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, 42), - BPF_MOV32_IMM(BPF_REG_1, 3), - BPF_MOV32_IMM(BPF_REG_2, 5), - BPF_ALU32_REG(BPF_MOD, BPF_REG_2, BPF_REG_1), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 2, -}, -{ - "MOD32 by 0, zero check 1, cls", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 1, -}, -{ - "MOD32 by 0, zero check 2, cls", - .insns = { - BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 1, -}, -{ - "MOD64 by 0, zero check 1, cls", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_0, 2), - BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 2, -}, -{ - "MOD64 by 0, zero check 2, cls", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_0, -1), - BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = -1, -}, diff --git a/tools/testing/selftests/bpf/verifier/div_overflow.c b/tools/testing/selftests/bpf/verifier/div_overflow.c deleted file mode 100644 index acab4f00819f..000000000000 --- a/tools/testing/selftests/bpf/verifier/div_overflow.c +++ /dev/null @@ -1,110 +0,0 @@ -/* Just make sure that JITs used udiv/umod as otherwise we get - * an exception from INT_MIN/-1 overflow similarly as with div - * by zero. 
- */ -{ - "DIV32 overflow, check 1", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, -1), - BPF_MOV32_IMM(BPF_REG_0, INT_MIN), - BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 0, -}, -{ - "DIV32 overflow, check 2", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, INT_MIN), - BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, -1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 0, -}, -{ - "DIV64 overflow, check 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_LD_IMM64(BPF_REG_2, LLONG_MIN), - BPF_ALU64_REG(BPF_DIV, BPF_REG_2, BPF_REG_1), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 1), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 0, -}, -{ - "DIV64 overflow, check 2", - .insns = { - BPF_LD_IMM64(BPF_REG_1, LLONG_MIN), - BPF_ALU64_IMM(BPF_DIV, BPF_REG_1, -1), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 1), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 0, -}, -{ - "MOD32 overflow, check 1", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, -1), - BPF_MOV32_IMM(BPF_REG_0, INT_MIN), - BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = INT_MIN, -}, -{ - "MOD32 overflow, check 2", - .insns = { - BPF_MOV32_IMM(BPF_REG_0, INT_MIN), - BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, -1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = INT_MIN, -}, -{ - "MOD64 overflow, check 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_LD_IMM64(BPF_REG_2, LLONG_MIN), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), - BPF_ALU64_REG(BPF_MOD, BPF_REG_2, BPF_REG_1), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 1, -}, -{ - "MOD64 overflow, check 2", - .insns = { - BPF_LD_IMM64(BPF_REG_2, LLONG_MIN), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), - BPF_ALU64_IMM(BPF_MOD, BPF_REG_2, -1), - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1), - BPF_MOV32_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, - .retval = 1, -}, diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c deleted file mode 100644 index 9c4885885aba..000000000000 --- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c +++ /dev/null @@ -1,650 +0,0 @@ -{ - "helper access to variable memory: stack, bitwise AND + JMP, correct bounds", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -56), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -48), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -40), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -32), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -24), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_MOV64_IMM(BPF_REG_2, 16), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - 
BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 64), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: stack, bitwise AND, zero included", - .insns = { - /* set max stack size */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0), - /* set r3 to a random value */ - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - /* use bitwise AND to limit r3 range to [0, 64] */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 64), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64), - BPF_MOV64_IMM(BPF_REG_4, 0), - /* Call bpf_ringbuf_output(), it is one of a few helper functions with - * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode. - * For unpriv this should signal an error, because memory at &fp[-64] is - * not initialized. - */ - BPF_EMIT_CALL(BPF_FUNC_ringbuf_output), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 4 }, - .errstr_unpriv = "invalid indirect read from stack R2 off -64+0 size 64", - .result_unpriv = REJECT, - /* in privileged mode reads from uninitialized stack locations are permitted */ - .result = ACCEPT, -}, -{ - "helper access to variable memory: stack, bitwise AND + JMP, wrong max", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 65), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid indirect access to stack R1 off=-64 size=65", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: stack, JMP, correct bounds", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -56), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -48), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -40), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -32), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -24), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_MOV64_IMM(BPF_REG_2, 16), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 64, 4), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: stack, JMP (signed), correct bounds", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -56), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -48), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -40), - 
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -32), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -24), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_MOV64_IMM(BPF_REG_2, 16), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 64, 4), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: stack, JMP, bounds + offset", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 64, 5), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid indirect access to stack R1 off=-64 size=65", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: stack, JMP, wrong max", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 65, 4), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid indirect access to stack R1 off=-64 size=65", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: stack, JMP, no max check", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - /* because max wasn't checked, signed min is negative */ - .errstr = "R2 min value is negative, either use unsigned or 'var &= const'", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: stack, JMP, no min check", - .insns = { - /* set max stack size */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0), - /* set r3 to a random value */ - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - /* use JMP to limit r3 range to [0, 64] */ - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 64, 6), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64), - BPF_MOV64_IMM(BPF_REG_4, 0), - /* Call bpf_ringbuf_output(), it is one of a few helper functions with - * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode. - * For unpriv this should signal an error, because memory at &fp[-64] is - * not initialized. 
- */ - BPF_EMIT_CALL(BPF_FUNC_ringbuf_output), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 4 }, - .errstr_unpriv = "invalid indirect read from stack R2 off -64+0 size 64", - .result_unpriv = REJECT, - /* in privileged mode reads from uninitialized stack locations are permitted */ - .result = ACCEPT, -}, -{ - "helper access to variable memory: stack, JMP (signed), no min check", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 64, 3), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R2 min value is negative", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: map, JMP, correct bounds", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val)), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val), 4), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: map, JMP, wrong max", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_6), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) + 1, 4), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .errstr = "invalid access to map value, value_size=48 off=0 size=49", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: map adjusted, JMP, correct bounds", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 20), - BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val)), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) - 20, 4), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 
0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: map adjusted, JMP, wrong max", - .insns = { - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 20), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_6), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128), - BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) - 19, 4), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 4 }, - .errstr = "R1 min value is outside of the allowed memory range", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: size = 0 allowed on NULL (ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to variable memory: size > 0 not allowed on NULL (ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128), - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 64), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .errstr = "R1 type=scalar expected=fp", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to variable memory: size = 0 allowed on != NULL stack pointer (ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to variable memory: size = 0 allowed on != NULL map pointer (ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to variable memory: size possible = 0 allowed on != NULL stack pointer 
(ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 7), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to variable memory: size possible = 0 allowed on != NULL map pointer (ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to variable memory: size possible = 0 allowed on != NULL packet pointer (ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_EMIT_CALL(BPF_FUNC_csum_diff), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 0 /* csum_diff of 64-byte packet */, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "helper access to variable memory: size = 0 not allowed on NULL (!ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .errstr = "R1 type=scalar expected=fp", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: size > 0 not allowed on NULL (!ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .errstr = "R1 type=scalar expected=fp", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: size = 0 allowed on != NULL stack pointer (!ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = 
BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: size = 0 allowed on != NULL map pointer (!ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: size possible = 0 allowed on != NULL stack pointer (!ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: size possible = 0 allowed on != NULL map pointer (!ARG_PTR_TO_MEM_OR_NULL)", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 2), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to variable memory: 8 bytes leak", - .insns = { - /* set max stack size */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -128, 0), - /* set r3 to a random value */ - BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -56), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -48), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -40), - /* Note: fp[-32] left uninitialized */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -24), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - /* Limit r3 range to [1, 64] */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 63), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 1), - BPF_MOV64_IMM(BPF_REG_4, 0), - /* Call bpf_ringbuf_output(), it is one of a few helper functions with - * ARG_CONST_SIZE_OR_ZERO parameter allowed in unpriv mode. - * For unpriv this should signal an error, because memory region [1, 64] - * at &fp[-64] is not fully initialized. 
- */ - BPF_EMIT_CALL(BPF_FUNC_ringbuf_output), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 3 }, - .errstr_unpriv = "invalid indirect read from stack R2 off -64+32 size 64", - .result_unpriv = REJECT, - /* in privileged mode reads from uninitialized stack locations are permitted */ - .result = ACCEPT, -}, -{ - "helper access to variable memory: 8 bytes no leak (init memory)", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -64), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -56), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -48), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -40), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -32), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -24), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 32), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 32), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, diff --git a/tools/testing/selftests/bpf/verifier/helper_packet_access.c b/tools/testing/selftests/bpf/verifier/helper_packet_access.c deleted file mode 100644 index ae54587e9829..000000000000 --- a/tools/testing/selftests/bpf/verifier/helper_packet_access.c +++ /dev/null @@ -1,460 +0,0 @@ -{ - "helper access to packet: test1, valid packet_ptr range", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 5), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .result_unpriv = ACCEPT, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "helper access to packet: test2, unchecked packet_ptr", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1 }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "helper access to packet: test3, variable add", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 10), - BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_5), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - 
.fixup_map_hash_8b = { 11 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "helper access to packet: test4, packet_ptr with bad range", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 7 }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "helper access to packet: test5, packet_ptr with too short range", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 6 }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "helper access to packet: test6, cls valid packet_ptr range", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 5), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test7, cls unchecked packet_ptr", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1 }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test8, cls variable add", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 10), - BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_2, 0), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_5), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 11 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper 
access to packet: test9, cls packet_ptr with bad range", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 7 }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test10, cls packet_ptr with too short range", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 6 }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test11, cls unsuitable helper 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_7, 4), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_4, 42), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_store_bytes), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "helper access to the packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test12, cls unsuitable helper 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_4, 4), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "helper access to the packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test13, cls helper ok", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = 
BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test14, cls helper ok sub", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 4), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test15, cls helper fail sub", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 12), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test16, cls helper fail range 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test17, cls helper fail range 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, -9), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R2 min value is negative", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test18, cls helper fail range 3", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, 
BPF_REG_1, BPF_REG_7, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, ~0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R2 min value is negative", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test19, cls helper range zero", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test20, pkt end as input", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R1 type=pkt_end expected=fp", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "helper access to packet: test21, wrong reg", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6), - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_IMM(BPF_REG_5, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, diff --git a/tools/testing/selftests/bpf/verifier/helper_restricted.c b/tools/testing/selftests/bpf/verifier/helper_restricted.c deleted file mode 100644 index a067b7098b97..000000000000 --- a/tools/testing/selftests/bpf/verifier/helper_restricted.c +++ /dev/null @@ -1,196 +0,0 @@ -{ - "bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_KPROBE", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "unknown func bpf_ktime_get_coarse_ns", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_KPROBE, -}, -{ - "bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_TRACEPOINT", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "unknown func 
bpf_ktime_get_coarse_ns", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_PERF_EVENT", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "unknown func bpf_ktime_get_coarse_ns", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_PERF_EVENT, -}, -{ - "bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_RAW_TRACEPOINT", - .insns = { - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "unknown func bpf_ktime_get_coarse_ns", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_RAW_TRACEPOINT, -}, -{ - "bpf_timer_init isn restricted in BPF_PROG_TYPE_KPROBE", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_EMIT_CALL(BPF_FUNC_timer_init), - BPF_EXIT_INSN(), - }, - .fixup_map_timer = { 3, 8 }, - .errstr = "tracing progs cannot use bpf_timer yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_KPROBE, -}, -{ - "bpf_timer_init is forbidden in BPF_PROG_TYPE_PERF_EVENT", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_EMIT_CALL(BPF_FUNC_timer_init), - BPF_EXIT_INSN(), - }, - .fixup_map_timer = { 3, 8 }, - .errstr = "tracing progs cannot use bpf_timer yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_PERF_EVENT, -}, -{ - "bpf_timer_init is forbidden in BPF_PROG_TYPE_TRACEPOINT", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_EMIT_CALL(BPF_FUNC_timer_init), - BPF_EXIT_INSN(), - }, - .fixup_map_timer = { 3, 8 }, - .errstr = "tracing progs cannot use bpf_timer yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bpf_timer_init is forbidden in BPF_PROG_TYPE_RAW_TRACEPOINT", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_EMIT_CALL(BPF_FUNC_timer_init), - BPF_EXIT_INSN(), - }, - .fixup_map_timer = { 3, 8 }, - .errstr = "tracing progs cannot use bpf_timer yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_RAW_TRACEPOINT, -}, -{ - "bpf_spin_lock is forbidden in BPF_PROG_TYPE_KPROBE", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, 
BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_spin_lock), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .errstr = "tracing progs cannot use bpf_spin_lock yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_KPROBE, -}, -{ - "bpf_spin_lock is forbidden in BPF_PROG_TYPE_TRACEPOINT", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_spin_lock), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .errstr = "tracing progs cannot use bpf_spin_lock yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "bpf_spin_lock is forbidden in BPF_PROG_TYPE_PERF_EVENT", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_spin_lock), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .errstr = "tracing progs cannot use bpf_spin_lock yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_PERF_EVENT, -}, -{ - "bpf_spin_lock is forbidden in BPF_PROG_TYPE_RAW_TRACEPOINT", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_EMIT_CALL(BPF_FUNC_spin_lock), - BPF_EXIT_INSN(), - }, - .fixup_map_spin_lock = { 3 }, - .errstr = "tracing progs cannot use bpf_spin_lock yet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_RAW_TRACEPOINT, -}, diff --git a/tools/testing/selftests/bpf/verifier/helper_value_access.c b/tools/testing/selftests/bpf/verifier/helper_value_access.c deleted file mode 100644 index 1c7882ddfa63..000000000000 --- a/tools/testing/selftests/bpf/verifier/helper_value_access.c +++ /dev/null @@ -1,953 +0,0 @@ -{ - "helper access to map: full range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val)), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: partial range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 
3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: empty range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_trace_printk), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "invalid access to map value, value_size=48 off=0 size=0", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: out-of-bound range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val) + 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "invalid access to map value, value_size=48 off=0 size=56", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: negative range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_2, -8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R2 min value is negative", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const imm): full range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_2, - sizeof(struct test_val) - offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const imm): partial range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const imm): empty range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), 
- BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_trace_printk), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "invalid access to map value, value_size=48 off=4 size=0", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const imm): out-of-bound range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_2, - sizeof(struct test_val) - offsetof(struct test_val, foo) + 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "invalid access to map value, value_size=48 off=4 size=52", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const imm): negative range (> adjustment)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_2, -8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R2 min value is negative", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const imm): negative range (< adjustment)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R2 min value is negative", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const reg): full range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, - sizeof(struct test_val) - offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const reg): partial range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, 
BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const reg): empty range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_trace_printk), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R1 min value is outside of the allowed memory range", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const reg): out-of-bound range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, - sizeof(struct test_val) - - offsetof(struct test_val, foo) + 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "invalid access to map value, value_size=48 off=4 size=52", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const reg): negative range (> adjustment)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, -8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R2 min value is negative", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via const reg): negative range (< adjustment)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R2 min value is negative", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to 
adjusted map (via variable): full range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, - sizeof(struct test_val) - offsetof(struct test_val, foo)), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via variable): partial range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via variable): empty range", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_EMIT_CALL(BPF_FUNC_trace_printk), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R1 min value is outside of the allowed memory range", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via variable): no max check", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_MOV64_IMM(BPF_REG_2, 1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R1 unbounded memory access", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to adjusted map (via variable): wrong max check", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - 
BPF_MOV64_IMM(BPF_REG_2, - sizeof(struct test_val) - - offsetof(struct test_val, foo) + 1), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "invalid access to map value, value_size=48 off=4 size=45", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using <, good access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JLT, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using <, bad access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JLT, BPF_REG_3, 32, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = REJECT, - .errstr = "R1 unbounded memory access", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using <=, good access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using <=, bad access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 32, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = REJECT, - .errstr = "R1 unbounded memory access", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using s<, good access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - 
BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 0, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using s<, good access 2", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, -3, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using s<, bad access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, -3, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = REJECT, - .errstr = "R1 min value is negative", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using s<=, good access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 0, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using s<=, good access 2", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, -3, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 
0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "helper access to map: bounds check using s<=, bad access", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 32, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, -3, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .result = REJECT, - .errstr = "R1 min value is negative", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map lookup helper access to map", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 8 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map update helper access to map", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_update_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 10 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map update helper access to map: wrong size", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_update_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .fixup_map_hash_16b = { 10 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=8 off=0 size=16", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via const imm)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, offsetof(struct other_val, bar)), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 9 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via const imm): out-of-bound 1", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - 
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, sizeof(struct other_val) - 4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 9 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=16 off=12 size=8", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via const imm): out-of-bound 2", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 9 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=16 off=-4 size=8", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via const reg)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, offsetof(struct other_val, bar)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 10 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via const reg): out-of-bound 1", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, sizeof(struct other_val) - 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 10 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=16 off=12 size=8", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via const reg): out-of-bound 2", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, -4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 10 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=16 off=-4 size=8", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via variable)", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - 
BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct other_val, bar), 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 11 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via variable): no max check", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 10 }, - .result = REJECT, - .errstr = "R2 unbounded memory access, make sure to bounds check any such access", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, -{ - "map helper access to adjusted map (via variable): wrong max check", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct other_val, bar) + 1, 4), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 3, 11 }, - .result = REJECT, - .errstr = "invalid access to map value, value_size=16 off=9 size=8", - .prog_type = BPF_PROG_TYPE_TRACEPOINT, -}, diff --git a/tools/testing/selftests/bpf/verifier/int_ptr.c b/tools/testing/selftests/bpf/verifier/int_ptr.c deleted file mode 100644 index 02d9e004260b..000000000000 --- a/tools/testing/selftests/bpf/verifier/int_ptr.c +++ /dev/null @@ -1,161 +0,0 @@ -{ - "ARG_PTR_TO_LONG uninitialized", - .insns = { - /* bpf_strtoul arg1 (buf) */ - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -8), - BPF_MOV64_IMM(BPF_REG_0, 0x00303036), - BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - - /* bpf_strtoul arg2 (buf_len) */ - BPF_MOV64_IMM(BPF_REG_2, 4), - - /* bpf_strtoul arg3 (flags) */ - BPF_MOV64_IMM(BPF_REG_3, 0), - - /* bpf_strtoul arg4 (res) */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -8), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_7), - - /* bpf_strtoul() */ - BPF_EMIT_CALL(BPF_FUNC_strtoul), - - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL, - .errstr = "invalid indirect read from stack R4 off -16+0 size 8", -}, -{ - "ARG_PTR_TO_LONG half-uninitialized", - .insns = { - /* bpf_strtoul arg1 (buf) */ - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -8), - BPF_MOV64_IMM(BPF_REG_0, 0x00303036), - BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - - /* bpf_strtoul arg2 (buf_len) */ - BPF_MOV64_IMM(BPF_REG_2, 4), - - /* bpf_strtoul arg3 (flags) */ - BPF_MOV64_IMM(BPF_REG_3, 0), - - /* bpf_strtoul arg4 
(res) */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -8), - BPF_STX_MEM(BPF_W, BPF_REG_7, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_7), - - /* bpf_strtoul() */ - BPF_EMIT_CALL(BPF_FUNC_strtoul), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result_unpriv = REJECT, - .errstr_unpriv = "invalid indirect read from stack R4 off -16+4 size 8", - /* in privileged mode reads from uninitialized stack locations are permitted */ - .result = ACCEPT, -}, -{ - "ARG_PTR_TO_LONG misaligned", - .insns = { - /* bpf_strtoul arg1 (buf) */ - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -8), - BPF_MOV64_IMM(BPF_REG_0, 0x00303036), - BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - - /* bpf_strtoul arg2 (buf_len) */ - BPF_MOV64_IMM(BPF_REG_2, 4), - - /* bpf_strtoul arg3 (flags) */ - BPF_MOV64_IMM(BPF_REG_3, 0), - - /* bpf_strtoul arg4 (res) */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -12), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_W, BPF_REG_7, BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 4), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_7), - - /* bpf_strtoul() */ - BPF_EMIT_CALL(BPF_FUNC_strtoul), - - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL, - .errstr = "misaligned stack access off (0x0; 0x0)+-20+0 size 8", -}, -{ - "ARG_PTR_TO_LONG size < sizeof(long)", - .insns = { - /* bpf_strtoul arg1 (buf) */ - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -16), - BPF_MOV64_IMM(BPF_REG_0, 0x00303036), - BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - - /* bpf_strtoul arg2 (buf_len) */ - BPF_MOV64_IMM(BPF_REG_2, 4), - - /* bpf_strtoul arg3 (flags) */ - BPF_MOV64_IMM(BPF_REG_3, 0), - - /* bpf_strtoul arg4 (res) */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, 12), - BPF_STX_MEM(BPF_W, BPF_REG_7, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_7), - - /* bpf_strtoul() */ - BPF_EMIT_CALL(BPF_FUNC_strtoul), - - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL, - .errstr = "invalid indirect access to stack R4 off=-4 size=8", -}, -{ - "ARG_PTR_TO_LONG initialized", - .insns = { - /* bpf_strtoul arg1 (buf) */ - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -8), - BPF_MOV64_IMM(BPF_REG_0, 0x00303036), - BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - - /* bpf_strtoul arg2 (buf_len) */ - BPF_MOV64_IMM(BPF_REG_2, 4), - - /* bpf_strtoul arg3 (flags) */ - BPF_MOV64_IMM(BPF_REG_3, 0), - - /* bpf_strtoul arg4 (res) */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_7), - - /* bpf_strtoul() */ - BPF_EMIT_CALL(BPF_FUNC_strtoul), - - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SYSCTL, -}, diff --git a/tools/testing/selftests/bpf/verifier/ld_ind.c b/tools/testing/selftests/bpf/verifier/ld_ind.c deleted file mode 100644 index 079734227538..000000000000 --- a/tools/testing/selftests/bpf/verifier/ld_ind.c +++ /dev/null @@ -1,72 +0,0 @@ -{ - "ld_ind: check calling conv, r1", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_LD_IND(BPF_W, BPF_REG_1, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr = "R1 !read_ok", - .result = REJECT, -}, -{ - "ld_ind: check calling conv, r2", - 
.insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_2, 1), - BPF_LD_IND(BPF_W, BPF_REG_2, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .errstr = "R2 !read_ok", - .result = REJECT, -}, -{ - "ld_ind: check calling conv, r3", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_3, 1), - BPF_LD_IND(BPF_W, BPF_REG_3, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), - BPF_EXIT_INSN(), - }, - .errstr = "R3 !read_ok", - .result = REJECT, -}, -{ - "ld_ind: check calling conv, r4", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_4, 1), - BPF_LD_IND(BPF_W, BPF_REG_4, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), - BPF_EXIT_INSN(), - }, - .errstr = "R4 !read_ok", - .result = REJECT, -}, -{ - "ld_ind: check calling conv, r5", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_5, 1), - BPF_LD_IND(BPF_W, BPF_REG_5, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_5), - BPF_EXIT_INSN(), - }, - .errstr = "R5 !read_ok", - .result = REJECT, -}, -{ - "ld_ind: check calling conv, r7", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_MOV64_IMM(BPF_REG_7, 1), - BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_7), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 1, -}, diff --git a/tools/testing/selftests/bpf/verifier/leak_ptr.c b/tools/testing/selftests/bpf/verifier/leak_ptr.c deleted file mode 100644 index 73f0dea95546..000000000000 --- a/tools/testing/selftests/bpf/verifier/leak_ptr.c +++ /dev/null @@ -1,67 +0,0 @@ -{ - "leak pointer into ctx 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_1, BPF_REG_2, - offsetof(struct __sk_buff, cb[0])), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 2 }, - .errstr_unpriv = "R2 leaks addr into mem", - .result_unpriv = REJECT, - .result = REJECT, - .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", -}, -{ - "leak pointer into ctx 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, - offsetof(struct __sk_buff, cb[0])), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_1, BPF_REG_10, - offsetof(struct __sk_buff, cb[0])), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R10 leaks addr into mem", - .result_unpriv = REJECT, - .result = REJECT, - .errstr = "BPF_ATOMIC stores into R1 ctx is not allowed", -}, -{ - "leak pointer into ctx 3", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_LD_MAP_FD(BPF_REG_2, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, - offsetof(struct __sk_buff, cb[0])), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1 }, - .errstr_unpriv = "R2 leaks addr into ctx", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "leak pointer into map val", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_6, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .errstr_unpriv = "R6 leaks addr into mem", - .result_unpriv = REJECT, - .result = ACCEPT, -}, diff --git 
a/tools/testing/selftests/bpf/verifier/map_ptr.c b/tools/testing/selftests/bpf/verifier/map_ptr.c deleted file mode 100644 index 17ee84dc7766..000000000000 --- a/tools/testing/selftests/bpf/verifier/map_ptr.c +++ /dev/null @@ -1,99 +0,0 @@ -{ - "bpf_map_ptr: read with negative offset rejected", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 1 }, - .result_unpriv = REJECT, - .errstr_unpriv = "access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN", - .result = REJECT, - .errstr = "R1 is bpf_array invalid negative access: off=-8", -}, -{ - "bpf_map_ptr: write rejected", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result_unpriv = REJECT, - .errstr_unpriv = "access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN", - .result = REJECT, - .errstr = "only read from bpf_array is supported", -}, -{ - "bpf_map_ptr: read non-existent field rejected", - .insns = { - BPF_MOV64_IMM(BPF_REG_6, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, 1), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 1 }, - .result_unpriv = REJECT, - .errstr_unpriv = "access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN", - .result = REJECT, - .errstr = "cannot access ptr member ops with moff 0 in struct bpf_map with off 1 size 4", - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "bpf_map_ptr: read ops field accepted", - .insns = { - BPF_MOV64_IMM(BPF_REG_6, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 1 }, - .result_unpriv = REJECT, - .errstr_unpriv = "access is allowed only to CAP_PERFMON and CAP_SYS_ADMIN", - .result = ACCEPT, - .retval = 1, -}, -{ - "bpf_map_ptr: r = 0, map_ptr = map_ptr + r", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 4 }, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 has pointer with unsupported alu operation", - .result = ACCEPT, -}, -{ - "bpf_map_ptr: r = 0, r = r + map_ptr", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_MOV64_IMM(BPF_REG_1, 0), - BPF_LD_MAP_FD(BPF_REG_0, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_16b = { 4 }, - .result_unpriv = REJECT, - .errstr_unpriv = "R0 has pointer with unsupported alu operation", - .result = ACCEPT, -}, diff --git a/tools/testing/selftests/bpf/verifier/map_ret_val.c b/tools/testing/selftests/bpf/verifier/map_ret_val.c deleted file mode 100644 index bdd0e8d18333..000000000000 --- a/tools/testing/selftests/bpf/verifier/map_ret_val.c +++ /dev/null @@ -1,65 +0,0 
@@ -{ - "invalid map_fd for function call", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem), - BPF_EXIT_INSN(), - }, - .errstr = "fd 0 is not pointing to valid bpf_map", - .result = REJECT, -}, -{ - "don't check return value before access", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R0 invalid mem access 'map_value_or_null'", - .result = REJECT, -}, -{ - "access memory with incorrect alignment", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "misaligned value access", - .result = REJECT, - .flags = F_LOAD_WITH_STRICT_ALIGNMENT, -}, -{ - "sometimes access memory with incorrect alignment", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0), - BPF_EXIT_INSN(), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .errstr = "R0 invalid mem access", - .errstr_unpriv = "R0 leaks addr", - .result = REJECT, - .flags = F_LOAD_WITH_STRICT_ALIGNMENT, -}, diff --git a/tools/testing/selftests/bpf/verifier/masking.c b/tools/testing/selftests/bpf/verifier/masking.c deleted file mode 100644 index 6e1358c544fd..000000000000 --- a/tools/testing/selftests/bpf/verifier/masking.c +++ /dev/null @@ -1,322 +0,0 @@ -{ - "masking, test out of bounds 1", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 5), - BPF_MOV32_IMM(BPF_REG_2, 5 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 2", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 1), - BPF_MOV32_IMM(BPF_REG_2, 1 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 3", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0xffffffff), - BPF_MOV32_IMM(BPF_REG_2, 0xffffffff - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - 
}, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 4", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0xffffffff), - BPF_MOV32_IMM(BPF_REG_2, 1 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 5", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, -1), - BPF_MOV32_IMM(BPF_REG_2, 1 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 6", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, -1), - BPF_MOV32_IMM(BPF_REG_2, 0xffffffff - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 7", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 5), - BPF_MOV32_IMM(BPF_REG_2, 5 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 8", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_MOV32_IMM(BPF_REG_2, 1 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 9", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0xffffffff), - BPF_MOV32_IMM(BPF_REG_2, 0xffffffff - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 10", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 0xffffffff), - BPF_MOV32_IMM(BPF_REG_2, 1 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 11", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_MOV32_IMM(BPF_REG_2, 1 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - 
BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test out of bounds 12", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, -1), - BPF_MOV32_IMM(BPF_REG_2, 0xffffffff - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test in bounds 1", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 4), - BPF_MOV32_IMM(BPF_REG_2, 5 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 4, -}, -{ - "masking, test in bounds 2", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_2, 0xffffffff - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test in bounds 3", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0xfffffffe), - BPF_MOV32_IMM(BPF_REG_2, 0xffffffff - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0xfffffffe, -}, -{ - "masking, test in bounds 4", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0xabcde), - BPF_MOV32_IMM(BPF_REG_2, 0xabcdef - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0xabcde, -}, -{ - "masking, test in bounds 5", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 0), - BPF_MOV32_IMM(BPF_REG_2, 1 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, -{ - "masking, test in bounds 6", - .insns = { - BPF_MOV32_IMM(BPF_REG_1, 46), - BPF_MOV32_IMM(BPF_REG_2, 47 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_1), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_1), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 46, -}, -{ - "masking, test in bounds 7", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, -46), - BPF_ALU64_IMM(BPF_MUL, BPF_REG_3, -1), - BPF_MOV32_IMM(BPF_REG_2, 47 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_3), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_3), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - 
BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_3, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 46, -}, -{ - "masking, test in bounds 8", - .insns = { - BPF_MOV64_IMM(BPF_REG_3, -47), - BPF_ALU64_IMM(BPF_MUL, BPF_REG_3, -1), - BPF_MOV32_IMM(BPF_REG_2, 47 - 1), - BPF_ALU64_REG(BPF_SUB, BPF_REG_2, BPF_REG_3), - BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_3), - BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), - BPF_ALU64_IMM(BPF_ARSH, BPF_REG_2, 63), - BPF_ALU64_REG(BPF_AND, BPF_REG_3, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0, -}, diff --git a/tools/testing/selftests/bpf/verifier/meta_access.c b/tools/testing/selftests/bpf/verifier/meta_access.c deleted file mode 100644 index b45e8af41420..000000000000 --- a/tools/testing/selftests/bpf/verifier/meta_access.c +++ /dev/null @@ -1,235 +0,0 @@ -{ - "meta access, test1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 8), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet, off=-8", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test3", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test4", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test5", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_4, 3), - BPF_MOV64_IMM(BPF_REG_2, -8), - BPF_RAW_INSN(BPF_JMP | 
BPF_CALL, 0, 0, 0, BPF_FUNC_xdp_adjust_meta), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R3 !read_ok", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test6", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test7", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test8", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0xFFFF), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test9", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0xFFFF), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 1), - BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test10", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_IMM(BPF_REG_5, 42), - BPF_MOV64_IMM(BPF_REG_6, 24), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_5, -8), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -8), - BPF_JMP_IMM(BPF_JGT, BPF_REG_5, 100, 6), - BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_5), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_3), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_5, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test11", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - 
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_IMM(BPF_REG_5, 42), - BPF_MOV64_IMM(BPF_REG_6, 24), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_5, -8), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -8), - BPF_JMP_IMM(BPF_JGT, BPF_REG_5, 100, 6), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_5), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_2), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_5, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "meta access, test12", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_3), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 16), - BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 5), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 0), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 16), - BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, -}, diff --git a/tools/testing/selftests/bpf/verifier/raw_stack.c b/tools/testing/selftests/bpf/verifier/raw_stack.c deleted file mode 100644 index eb5ed936580b..000000000000 --- a/tools/testing/selftests/bpf/verifier/raw_stack.c +++ /dev/null @@ -1,305 +0,0 @@ -{ - "raw_stack: no skb_load_bytes", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - /* Call to skb_load_bytes() omitted. 
*/ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid read from stack R6 off=-8 size=8", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, negative len", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, -8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R4 min value is negative", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, negative len 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, ~0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R4 min value is negative", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, zero len", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid zero-sized read", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, no init", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, init", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_6, 0, 0xcafe), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, spilled regs around bounds", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 8), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, - offsetof(struct __sk_buff, mark)), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_2, - offsetof(struct __sk_buff, priority)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, spilled regs corruption", - .insns = { - 
BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, - offsetof(struct __sk_buff, mark)), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R0 invalid mem access 'scalar'", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "raw_stack: skb_load_bytes, spilled regs corruption 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_6, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, - offsetof(struct __sk_buff, mark)), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_2, - offsetof(struct __sk_buff, priority)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_3, - offsetof(struct __sk_buff, pkt_type)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_3), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R3 invalid mem access 'scalar'", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "raw_stack: skb_load_bytes, spilled regs + data", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -16), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 8), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 8), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_6, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, - offsetof(struct __sk_buff, mark)), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_2, - offsetof(struct __sk_buff, priority)), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_3), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, invalid access 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -513), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid indirect access to stack R3 off=-513 size=8", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, invalid access 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -1), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 8), - BPF_RAW_INSN(BPF_JMP | 
BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid indirect access to stack R3 off=-1 size=8", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, invalid access 3", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 0xffffffff), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 0xffffffff), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R4 min value is negative", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, invalid access 4", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -1), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 0x7fffffff), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R4 unbounded memory access, use 'var &= const' or 'if (var < const)'", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, invalid access 5", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -512), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 0x7fffffff), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "R4 unbounded memory access, use 'var &= const' or 'if (var < const)'", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, invalid access 6", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -512), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid zero-sized read", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "raw_stack: skb_load_bytes, large access", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 4), - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -512), - BPF_MOV64_REG(BPF_REG_3, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_4, 512), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, diff --git a/tools/testing/selftests/bpf/verifier/raw_tp_writable.c b/tools/testing/selftests/bpf/verifier/raw_tp_writable.c deleted file mode 100644 index 2978fb5a769d..000000000000 --- a/tools/testing/selftests/bpf/verifier/raw_tp_writable.c +++ /dev/null @@ -1,35 +0,0 @@ -{ - "raw_tracepoint_writable: reject variable offset", - .insns = { - /* r6 is our tp buffer */ - BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0), - - BPF_LD_MAP_FD(BPF_REG_1, 0), - /* move the key (== 0) to r10-8 */ - BPF_MOV32_IMM(BPF_REG_0, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0), - /* lookup in the map */ - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 
0, 0, 0, - BPF_FUNC_map_lookup_elem), - - /* exit clean if null */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - - /* shift the buffer pointer to a variable location */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), - BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_0), - /* clobber whatever's there */ - BPF_MOV64_IMM(BPF_REG_7, 4242), - BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_7, 0), - - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1, }, - .prog_type = BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE, - .errstr = "R6 invalid variable buffer offset: off=0, var_off=(0x0; 0xffffffff)", - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c index 9540164712b7..5a2e154dd1e0 100644 --- a/tools/testing/selftests/bpf/verifier/ref_tracking.c +++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c @@ -142,7 +142,7 @@ .kfunc = "bpf", .expected_attach_type = BPF_LSM_MAC, .flags = BPF_F_SLEEPABLE, - .errstr = "arg#0 is ptr_or_null_ expected ptr_ or socket", + .errstr = "Possibly NULL pointer passed to trusted arg0", .fixup_kfunc_btf_id = { { "bpf_lookup_user_key", 2 }, { "bpf_key_put", 4 }, @@ -163,7 +163,7 @@ .kfunc = "bpf", .expected_attach_type = BPF_LSM_MAC, .flags = BPF_F_SLEEPABLE, - .errstr = "arg#0 is ptr_or_null_ expected ptr_ or socket", + .errstr = "Possibly NULL pointer passed to trusted arg0", .fixup_kfunc_btf_id = { { "bpf_lookup_system_key", 1 }, { "bpf_key_put", 3 }, @@ -182,7 +182,7 @@ .kfunc = "bpf", .expected_attach_type = BPF_LSM_MAC, .flags = BPF_F_SLEEPABLE, - .errstr = "arg#0 pointer type STRUCT bpf_key must point to scalar, or struct with scalar", + .errstr = "Possibly NULL pointer passed to trusted arg0", .fixup_kfunc_btf_id = { { "bpf_key_put", 1 }, }, diff --git a/tools/testing/selftests/bpf/verifier/ringbuf.c b/tools/testing/selftests/bpf/verifier/ringbuf.c deleted file mode 100644 index 92e3f6a61a79..000000000000 --- a/tools/testing/selftests/bpf/verifier/ringbuf.c +++ /dev/null @@ -1,95 +0,0 @@ -{ - "ringbuf: invalid reservation offset 1", - .insns = { - /* reserve 8 byte ringbuf memory */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve), - /* store a pointer to the reserved memory in R6 */ - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* check whether the reservation was successful */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - /* spill R6(mem) into the stack */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8), - /* fill it back in R7 */ - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8), - /* should be able to access *(R7) = 0 */ - BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0), - /* submit the reserved ringbuf memory */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - /* add invalid offset to reserved ringbuf memory */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xcafe), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 1 }, - .result = REJECT, - .errstr = "R1 must have zero offset when passed to release func", -}, -{ - "ringbuf: invalid reservation offset 2", - .insns = { - /* reserve 8 byte ringbuf memory */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 
0, 0, BPF_FUNC_ringbuf_reserve), - /* store a pointer to the reserved memory in R6 */ - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* check whether the reservation was successful */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - /* spill R6(mem) into the stack */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8), - /* fill it back in R7 */ - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8), - /* add invalid offset to reserved ringbuf memory */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, 0xcafe), - /* should be able to access *(R7) = 0 */ - BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0), - /* submit the reserved ringbuf memory */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 1 }, - .result = REJECT, - .errstr = "R7 min value is outside of the allowed memory range", -}, -{ - "ringbuf: check passing rb mem to helpers", - .insns = { - BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), - /* reserve 8 byte ringbuf memory */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* check whether the reservation was successful */ - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - /* pass allocated ring buffer memory to fib lookup */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), - BPF_MOV64_IMM(BPF_REG_3, 8), - BPF_MOV64_IMM(BPF_REG_4, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_fib_lookup), - /* submit the ringbuf memory */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 2 }, - .prog_type = BPF_PROG_TYPE_XDP, - .result = ACCEPT, -}, diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c deleted file mode 100644 index d1463bf4949a..000000000000 --- a/tools/testing/selftests/bpf/verifier/spill_fill.c +++ /dev/null @@ -1,345 +0,0 @@ -{ - "check valid spill/fill", - .insns = { - /* spill R1(ctx) into stack */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - /* fill it back into R2 */ - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8), - /* should be able to access R0 = *(R2 + 8) */ - /* BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8), */ - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R0 leaks addr", - .result = ACCEPT, - .result_unpriv = REJECT, - .retval = POINTER_VALUE, -}, -{ - "check valid spill/fill, skb mark", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, - offsetof(struct __sk_buff, mark)), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = ACCEPT, -}, -{ - "check valid spill/fill, ptr to mem", - .insns = { - /* reserve 8 byte ringbuf memory */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve), - /* store a pointer to the reserved memory in R6 */ - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* check whether the reservation was successful */ - 
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - /* spill R6(mem) into the stack */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8), - /* fill it back in R7 */ - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8), - /* should be able to access *(R7) = 0 */ - BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0), - /* submit the reserved ringbuf memory */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_7), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 1 }, - .result = ACCEPT, - .result_unpriv = ACCEPT, -}, -{ - "check with invalid reg offset 0", - .insns = { - /* reserve 8 byte ringbuf memory */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_IMM(BPF_REG_2, 8), - BPF_MOV64_IMM(BPF_REG_3, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve), - /* store a pointer to the reserved memory in R6 */ - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* add invalid offset to memory or NULL */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), - /* check whether the reservation was successful */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), - /* should not be able to access *(R7) = 0 */ - BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0), - /* submit the reserved ringbuf memory */ - BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), - BPF_MOV64_IMM(BPF_REG_2, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_ringbuf = { 1 }, - .result = REJECT, - .errstr = "R0 pointer arithmetic on ringbuf_mem_or_null prohibited", -}, -{ - "check corrupted spill/fill", - .insns = { - /* spill R1(ctx) into stack */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - /* mess up with R1 pointer on stack */ - BPF_ST_MEM(BPF_B, BPF_REG_10, -7, 0x23), - /* fill back into R0 is fine for priv. - * R0 now becomes SCALAR_VALUE. - */ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - /* Load from R0 should fail. */ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "attempt to corrupt spilled", - .errstr = "R0 invalid mem access 'scalar'", - .result = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "check corrupted spill/fill, LSB", - .insns = { - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_ST_MEM(BPF_H, BPF_REG_10, -8, 0xcafe), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "attempt to corrupt spilled", - .result_unpriv = REJECT, - .result = ACCEPT, - .retval = POINTER_VALUE, -}, -{ - "check corrupted spill/fill, MSB", - .insns = { - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x12345678), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "attempt to corrupt spilled", - .result_unpriv = REJECT, - .result = ACCEPT, - .retval = POINTER_VALUE, -}, -{ - "Spill and refill a u32 const scalar. 
Offset to skb->data", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - /* r4 = 20 */ - BPF_MOV32_IMM(BPF_REG_4, 20), - /* *(u32 *)(r10 -8) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), - /* r4 = *(u32 *)(r10 -8) */ - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8), - /* r0 = r2 */ - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), - /* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */ - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - /* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=20 */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "Spill a u32 const, refill from another half of the uninit u32 from the stack", - .insns = { - /* r4 = 20 */ - BPF_MOV32_IMM(BPF_REG_4, 20), - /* *(u32 *)(r10 -8) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), - /* r4 = *(u32 *)(r10 -4) fp-8=????rrrr*/ - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result_unpriv = REJECT, - .errstr_unpriv = "invalid read from stack off -4+0 size 4", - /* in privileged mode reads from uninitialized stack locations are permitted */ - .result = ACCEPT, -}, -{ - "Spill a u32 const scalar. Refill as u16. Offset to skb->data", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - /* r4 = 20 */ - BPF_MOV32_IMM(BPF_REG_4, 20), - /* *(u32 *)(r10 -8) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), - /* r4 = *(u16 *)(r10 -8) */ - BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8), - /* r0 = r2 */ - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */ - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "Spill u32 const scalars. Refill as u64. 
Offset to skb->data", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - /* r6 = 0 */ - BPF_MOV32_IMM(BPF_REG_6, 0), - /* r7 = 20 */ - BPF_MOV32_IMM(BPF_REG_7, 20), - /* *(u32 *)(r10 -4) = r6 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -4), - /* *(u32 *)(r10 -8) = r7 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8), - /* r4 = *(u64 *)(r10 -8) */ - BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -8), - /* r0 = r2 */ - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */ - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "Spill a u32 const scalar. Refill as u16 from fp-6. Offset to skb->data", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - /* r4 = 20 */ - BPF_MOV32_IMM(BPF_REG_4, 20), - /* *(u32 *)(r10 -8) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), - /* r4 = *(u16 *)(r10 -6) */ - BPF_LDX_MEM(BPF_H, BPF_REG_4, BPF_REG_10, -6), - /* r0 = r2 */ - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), - /* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */ - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - /* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "Spill and refill a u32 const scalar at non 8byte aligned stack addr. Offset to skb->data", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - /* r4 = 20 */ - BPF_MOV32_IMM(BPF_REG_4, 20), - /* *(u32 *)(r10 -8) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), - /* *(u32 *)(r10 -4) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -4), - /* r4 = *(u32 *)(r10 -4), */ - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -4), - /* r0 = r2 */ - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - /* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=U32_MAX */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), - /* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */ - BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), - /* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid access to packet", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "Spill and refill a umax=40 bounded scalar. 
Offset to skb->data", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct __sk_buff, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct __sk_buff, data_end)), - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1, - offsetof(struct __sk_buff, tstamp)), - BPF_JMP_IMM(BPF_JLE, BPF_REG_4, 40, 2), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - /* *(u32 *)(r10 -8) = r4 R4=umax=40 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), - /* r4 = (*u32 *)(r10 - 8) */ - BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_10, -8), - /* r2 += r4 R2=pkt R4=umax=40 */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_4), - /* r0 = r2 R2=pkt,umax=40 R4=umax=40 */ - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - /* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */ - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 20), - /* if (r2 > r3) R0=pkt,umax=40 R2=pkt,off=20,umax=40 */ - BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 1), - /* r0 = *(u32 *)r0 R0=pkt,r=20,umax=40 R2=pkt,off=20,r=20,umax=40 */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "Spill a u32 scalar at fp-4 and then at fp-8", - .insns = { - /* r4 = 4321 */ - BPF_MOV32_IMM(BPF_REG_4, 4321), - /* *(u32 *)(r10 -4) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -4), - /* *(u32 *)(r10 -8) = r4 */ - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_4, -8), - /* r4 = *(u64 *)(r10 -8) */ - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, diff --git a/tools/testing/selftests/bpf/verifier/stack_ptr.c b/tools/testing/selftests/bpf/verifier/stack_ptr.c deleted file mode 100644 index 8ab94d65f3d5..000000000000 --- a/tools/testing/selftests/bpf/verifier/stack_ptr.c +++ /dev/null @@ -1,359 +0,0 @@ -{ - "PTR_TO_STACK store/load", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -10), - BPF_ST_MEM(BPF_DW, BPF_REG_1, 2, 0xfaceb00c), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 0xfaceb00c, -}, -{ - "PTR_TO_STACK store/load - bad alignment on off", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_1, 2, 0xfaceb00c), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 2), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "misaligned stack access off (0x0; 0x0)+-8+2 size 8", -}, -{ - "PTR_TO_STACK store/load - bad alignment on reg", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -10), - BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 0xfaceb00c), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "misaligned stack access off (0x0; 0x0)+-10+8 size 8", -}, -{ - "PTR_TO_STACK store/load - out of bounds low", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -80000), - BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 0xfaceb00c), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid write to stack R1 off=-79992 size=8", - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", -}, -{ - "PTR_TO_STACK store/load - out of bounds high", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 0xfaceb00c), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8), - 
BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid write to stack R1 off=0 size=8", -}, -{ - "PTR_TO_STACK check high 1", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -1), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "PTR_TO_STACK check high 2", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ST_MEM(BPF_B, BPF_REG_1, -1, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, -1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "PTR_TO_STACK check high 3", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0), - BPF_ST_MEM(BPF_B, BPF_REG_1, -1, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, -1), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .result_unpriv = REJECT, - .result = ACCEPT, - .retval = 42, -}, -{ - "PTR_TO_STACK check high 4", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "invalid write to stack R1 off=0 size=1", - .result = REJECT, -}, -{ - "PTR_TO_STACK check high 5", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, (1 << 29) - 1), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "invalid write to stack R1", -}, -{ - "PTR_TO_STACK check high 6", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, (1 << 29) - 1), - BPF_ST_MEM(BPF_B, BPF_REG_1, SHRT_MAX, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, SHRT_MAX), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "invalid write to stack", -}, -{ - "PTR_TO_STACK check high 7", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, (1 << 29) - 1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, (1 << 29) - 1), - BPF_ST_MEM(BPF_B, BPF_REG_1, SHRT_MAX, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, SHRT_MAX), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "fp pointer offset", -}, -{ - "PTR_TO_STACK check low 1", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -512), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "PTR_TO_STACK check low 2", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -513), - BPF_ST_MEM(BPF_B, BPF_REG_1, 1, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 1), - BPF_EXIT_INSN(), - }, - .result_unpriv = REJECT, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .result = ACCEPT, - .retval = 42, -}, -{ - "PTR_TO_STACK check low 3", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -513), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - 
.errstr = "invalid write to stack R1 off=-513 size=1", - .result = REJECT, -}, -{ - "PTR_TO_STACK check low 4", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, INT_MIN), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "math between fp pointer", -}, -{ - "PTR_TO_STACK check low 5", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -((1 << 29) - 1)), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "invalid write to stack", -}, -{ - "PTR_TO_STACK check low 6", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -((1 << 29) - 1)), - BPF_ST_MEM(BPF_B, BPF_REG_1, SHRT_MIN, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, SHRT_MIN), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid write to stack", - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", -}, -{ - "PTR_TO_STACK check low 7", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -((1 << 29) - 1)), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -((1 << 29) - 1)), - BPF_ST_MEM(BPF_B, BPF_REG_1, SHRT_MIN, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, SHRT_MIN), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr_unpriv = "R1 stack pointer arithmetic goes out of range", - .errstr = "fp pointer offset", -}, -{ - "PTR_TO_STACK mixed reg/k, 1", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -3), - BPF_MOV64_IMM(BPF_REG_2, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "PTR_TO_STACK mixed reg/k, 2", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -3), - BPF_MOV64_IMM(BPF_REG_2, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_MOV64_REG(BPF_REG_5, BPF_REG_10), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_5, -6), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "PTR_TO_STACK mixed reg/k, 3", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -3), - BPF_MOV64_IMM(BPF_REG_2, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = -3, -}, -{ - "PTR_TO_STACK reg", - .insns = { - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_MOV64_IMM(BPF_REG_2, -3), - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), - BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 42), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .retval = 42, -}, -{ - "stack pointer arithmetic", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 4), - BPF_JMP_IMM(BPF_JA, 0, 0, 0), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -10), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_7), - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1), - BPF_ST_MEM(0, BPF_REG_2, 4, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8), - 
BPF_ST_MEM(0, BPF_REG_2, 4, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, -}, -{ - "store PTR_TO_STACK in R10 to array map using BPF_B", - .insns = { - /* Load pointer to map. */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 2), - BPF_EXIT_INSN(), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), - /* Copy R10 to R9. */ - BPF_MOV64_REG(BPF_REG_9, BPF_REG_10), - /* Pollute other registers with unaligned values. */ - BPF_MOV64_IMM(BPF_REG_2, -1), - BPF_MOV64_IMM(BPF_REG_3, -1), - BPF_MOV64_IMM(BPF_REG_4, -1), - BPF_MOV64_IMM(BPF_REG_5, -1), - BPF_MOV64_IMM(BPF_REG_6, -1), - BPF_MOV64_IMM(BPF_REG_7, -1), - BPF_MOV64_IMM(BPF_REG_8, -1), - /* Store both R9 and R10 with BPF_B and read back. */ - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_10, 0), - BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_1, 0), - BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_9, 0), - BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_1, 0), - /* Should read back as same value. */ - BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_3, 2), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 42), - BPF_EXIT_INSN(), - }, - .fixup_map_array_48b = { 3 }, - .result = ACCEPT, - .retval = 42, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, diff --git a/tools/testing/selftests/bpf/verifier/uninit.c b/tools/testing/selftests/bpf/verifier/uninit.c deleted file mode 100644 index 987a5871ff1d..000000000000 --- a/tools/testing/selftests/bpf/verifier/uninit.c +++ /dev/null @@ -1,39 +0,0 @@ -{ - "read uninitialized register", - .insns = { - BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), - BPF_EXIT_INSN(), - }, - .errstr = "R2 !read_ok", - .result = REJECT, -}, -{ - "read invalid register", - .insns = { - BPF_MOV64_REG(BPF_REG_0, -1), - BPF_EXIT_INSN(), - }, - .errstr = "R15 is invalid", - .result = REJECT, -}, -{ - "program doesn't init R0 before exit", - .insns = { - BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_1), - BPF_EXIT_INSN(), - }, - .errstr = "R0 !read_ok", - .result = REJECT, -}, -{ - "program doesn't init R0 before exit in all branches", - .insns = { - BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2), - BPF_EXIT_INSN(), - }, - .errstr = "R0 !read_ok", - .errstr_unpriv = "R1 pointer comparison", - .result = REJECT, -}, diff --git a/tools/testing/selftests/bpf/verifier/value.c b/tools/testing/selftests/bpf/verifier/value.c deleted file mode 100644 index 0e42592b1218..000000000000 --- a/tools/testing/selftests/bpf/verifier/value.c +++ /dev/null @@ -1,104 +0,0 @@ -{ - "map element value store of cleared call register", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R1 !read_ok", - .errstr = "R1 !read_ok", - .result = REJECT, - .result_unpriv = REJECT, -}, -{ - "map element value with unaligned store", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 17), - 
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 3), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 2, 43), - BPF_ST_MEM(BPF_DW, BPF_REG_0, -2, 44), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_ST_MEM(BPF_DW, BPF_REG_8, 0, 32), - BPF_ST_MEM(BPF_DW, BPF_REG_8, 2, 33), - BPF_ST_MEM(BPF_DW, BPF_REG_8, -2, 34), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_8, 5), - BPF_ST_MEM(BPF_DW, BPF_REG_8, 0, 22), - BPF_ST_MEM(BPF_DW, BPF_REG_8, 4, 23), - BPF_ST_MEM(BPF_DW, BPF_REG_8, -7, 24), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_8), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, 3), - BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 22), - BPF_ST_MEM(BPF_DW, BPF_REG_7, 4, 23), - BPF_ST_MEM(BPF_DW, BPF_REG_7, -4, 24), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result = ACCEPT, - .result_unpriv = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map element value with unaligned load", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 9), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 3), - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 2), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_8, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_8, 2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 5), - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 4), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result = ACCEPT, - .result_unpriv = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map element value is preserved across register spilling", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, offsetof(struct test_val, foo)), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -184), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_3, 0, 42), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result = ACCEPT, - .result_unpriv = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, diff --git a/tools/testing/selftests/bpf/verifier/value_adj_spill.c b/tools/testing/selftests/bpf/verifier/value_adj_spill.c deleted file mode 100644 index 7135e8021b81..000000000000 --- a/tools/testing/selftests/bpf/verifier/value_adj_spill.c +++ /dev/null @@ -1,43 +0,0 @@ -{ - "map element value is preserved across register spilling", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -184), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_3, 0, 42), - BPF_EXIT_INSN(), - 
}, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result = ACCEPT, - .result_unpriv = REJECT, -}, -{ - "map element value or null is marked on register spilling", - .insns = { - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -152), - BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_3, 0, 42), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr_unpriv = "R0 leaks addr", - .result = ACCEPT, - .result_unpriv = REJECT, -}, diff --git a/tools/testing/selftests/bpf/verifier/value_or_null.c b/tools/testing/selftests/bpf/verifier/value_or_null.c deleted file mode 100644 index 52a8bca14f03..000000000000 --- a/tools/testing/selftests/bpf/verifier/value_or_null.c +++ /dev/null @@ -1,220 +0,0 @@ -{ - "multiple registers share map_lookup_elem result", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS -}, -{ - "alu ops on ptr_to_map_value_or_null, 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 2), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .errstr = "R4 pointer arithmetic on map_value_or_null", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS -}, -{ - "alu ops on ptr_to_map_value_or_null, 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_ALU64_IMM(BPF_AND, BPF_REG_4, -1), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .errstr = "R4 pointer arithmetic on map_value_or_null", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS -}, -{ - "alu ops on ptr_to_map_value_or_null, 3", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_4, 1), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .errstr = "R4 
pointer arithmetic on map_value_or_null", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS -}, -{ - "invalid memory access with multiple map_lookup_elem calls", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_2), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .result = REJECT, - .errstr = "R4 !read_ok", - .prog_type = BPF_PROG_TYPE_SCHED_CLS -}, -{ - "valid indirect map_lookup_elem access with 2nd lookup in branch", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_2), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_2, 10), - BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 3), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_7), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), - BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS -}, -{ - "invalid map access from else condition", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6), - BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), - BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES-1, 1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1), - BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2), - BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), - BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_48b = { 3 }, - .errstr = "R0 unbounded memory access", - .result = REJECT, - .errstr_unpriv = "R0 leaks addr", - .result_unpriv = REJECT, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "map lookup and null branch prediction", - .insns = { - BPF_MOV64_IMM(BPF_REG_1, 10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2), - BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_10, 10), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 4 }, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .result = ACCEPT, -}, -{ - "MAP_VALUE_OR_NULL check_ids() in regsafe()", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* r9 = map_lookup_elem(...) 
*/ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_0), - /* r8 = map_lookup_elem(...) */ - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, - 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_REG(BPF_REG_8, BPF_REG_0), - /* r7 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_0), - /* r6 = ktime_get_ns() */ - BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - /* if r6 > r7 goto +1 ; no new information about the state is derived from - * ; this check, thus produced verifier states differ - * ; only in 'insn_idx' - * r9 = r8 ; optionally share ID between r9 and r8 - */ - BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1), - BPF_MOV64_REG(BPF_REG_9, BPF_REG_8), - /* if r9 == 0 goto <exit> */ - BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 0, 1), - /* read map value via r8, this is not always - * safe because r8 might be not equal to r9. - */ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_8, 0), - /* exit 0 */ - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .flags = BPF_F_TEST_STATE_FREQ, - .fixup_map_hash_8b = { 3, 9 }, - .result = REJECT, - .errstr = "R8 invalid mem access 'map_value_or_null'", - .result_unpriv = REJECT, - .errstr_unpriv = "", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c deleted file mode 100644 index b183e26c03f1..000000000000 --- a/tools/testing/selftests/bpf/verifier/var_off.c +++ /dev/null @@ -1,291 +0,0 @@ -{ - "variable-offset ctx access", - .insns = { - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - /* add it to skb. We now have either &skb->len or - * &skb->pkt_type, but we don't know which - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2), - /* dereference it */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0), - BPF_EXIT_INSN(), - }, - .errstr = "variable ctx access var_off=(0x0; 0x4)", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "variable-offset stack read, priv vs unpriv", - .insns = { - /* Fill the top 8 bytes of the stack */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8), - /* add it to fp. We now have either fp-4 or fp-8, but - * we don't know which - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* dereference it for a stack read */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .result_unpriv = REJECT, - .errstr_unpriv = "R2 variable stack access prohibited for !root", - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "variable-offset stack read, uninitialized", - .insns = { - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8), - /* add it to fp. 
We now have either fp-4 or fp-8, but - * we don't know which - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* dereference it for a stack read */ - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "invalid variable-offset read from stack R2", - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "variable-offset stack write, priv vs unpriv", - .insns = { - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 8-byte aligned */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 8), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16), - /* Add it to fp. We now have either fp-8 or fp-16, but - * we don't know which - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* Dereference it for a stack write */ - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - /* Now read from the address we just wrote. This shows - * that, after a variable-offset write, a privileged - * program can read the slots that were in the range of - * that write (even if the verifier doesn't actually know - * if the slot being read was really written to or not). - */ - BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_2, 0), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - /* Variable stack access is rejected for unprivileged. - */ - .errstr_unpriv = "R2 variable stack access prohibited for !root", - .result_unpriv = REJECT, - .result = ACCEPT, -}, -{ - "variable-offset stack write clobbers spilled regs", - .insns = { - /* Dummy instruction; needed because we need to patch the next one - * and we can't patch the first instruction. - */ - BPF_MOV64_IMM(BPF_REG_6, 0), - /* Make R0 a map ptr */ - BPF_LD_MAP_FD(BPF_REG_0, 0), - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 8-byte aligned */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 8), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16), - /* Add it to fp. We now have either fp-8 or fp-16, but - * we don't know which. - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* Spill R0(map ptr) into stack */ - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - /* Dereference the unknown value for a stack write */ - BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0), - /* Fill the register back into R2 */ - BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8), - /* Try to dereference R2 for a memory load */ - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 1 }, - /* The unprivileged case is not too interesting; variable - * stack access is rejected. - */ - .errstr_unpriv = "R2 variable stack access prohibited for !root", - .result_unpriv = REJECT, - /* In the privileged case, dereferencing a spilled-and-then-filled - * register is rejected because the previous variable offset stack - * write might have overwritten the spilled pointer (i.e. we lose track - * of the spilled register when we analyze the write). - */ - .errstr = "R2 invalid mem access 'scalar'", - .result = REJECT, -}, -{ - "indirect variable-offset stack access, unbounded", - .insns = { - BPF_MOV64_IMM(BPF_REG_2, 6), - BPF_MOV64_IMM(BPF_REG_3, 28), - /* Fill the top 16 bytes of the stack. */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* Get an unknown value. */ - BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1, offsetof(struct bpf_sock_ops, - bytes_received)), - /* Check the lower bound but don't check the upper one. */ - BPF_JMP_IMM(BPF_JSLT, BPF_REG_4, 0, 4), - /* Point the lower bound to initialized stack. 
Offset is now in range - * from fp-16 to fp+0x7fffffffffffffef, i.e. max value is unbounded. - */ - BPF_ALU64_IMM(BPF_SUB, BPF_REG_4, 16), - BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_10), - BPF_MOV64_IMM(BPF_REG_5, 8), - /* Dereference it indirectly. */ - BPF_EMIT_CALL(BPF_FUNC_getsockopt), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "invalid unbounded variable-offset indirect access to stack R4", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_SOCK_OPS, -}, -{ - "indirect variable-offset stack access, max out of bound", - .insns = { - /* Fill the top 8 bytes of the stack */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8), - /* add it to fp. We now have either fp-4 or fp-8, but - * we don't know which - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* dereference it indirectly */ - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "invalid variable-offset indirect access to stack R2", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "indirect variable-offset stack access, min out of bound", - .insns = { - /* Fill the top 8 bytes of the stack */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 516), - /* add it to fp. We now have either fp-516 or fp-512, but - * we don't know which - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* dereference it indirectly */ - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "invalid variable-offset indirect access to stack R2", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "indirect variable-offset stack access, min_off < min_initialized", - .insns = { - /* Fill only the top 8 bytes of the stack. */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* Get an unknown value */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned. */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16), - /* Add it to fp. We now have either fp-12 or fp-16, but we don't know - * which. fp-16 size 8 is partially uninitialized stack. - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* Dereference it indirectly. */ - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 5 }, - .errstr = "invalid indirect read from stack R2 var_off", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, -{ - "indirect variable-offset stack access, priv vs unpriv", - .insns = { - /* Fill the top 16 bytes of the stack. */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* Get an unknown value. */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned. */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16), - /* Add it to fp. 
We now have either fp-12 or fp-16, we don't know - * which, but either way it points to initialized stack. - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* Dereference it indirectly. */ - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 6 }, - .errstr_unpriv = "R2 variable stack access prohibited for !root", - .result_unpriv = REJECT, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_CGROUP_SKB, -}, -{ - "indirect variable-offset stack access, ok", - .insns = { - /* Fill the top 16 bytes of the stack. */ - BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 0), - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - /* Get an unknown value. */ - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0), - /* Make it small and 4-byte aligned. */ - BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 16), - /* Add it to fp. We now have either fp-12 or fp-16, we don't know - * which, but either way it points to initialized stack. - */ - BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10), - /* Dereference it indirectly. */ - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 6 }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_LWT_IN, -}, diff --git a/tools/testing/selftests/bpf/verifier/xadd.c b/tools/testing/selftests/bpf/verifier/xadd.c deleted file mode 100644 index b96ef3526815..000000000000 --- a/tools/testing/selftests/bpf/verifier/xadd.c +++ /dev/null @@ -1,97 +0,0 @@ -{ - "xadd/w check unaligned stack", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_10, BPF_REG_0, -7), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "misaligned stack access off", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "xadd/w check unaligned map", - .insns = { - BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), - BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), - BPF_LD_MAP_FD(BPF_REG_1, 0), - BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), - BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_1, 1), - BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_0, BPF_REG_1, 3), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 3), - BPF_EXIT_INSN(), - }, - .fixup_map_hash_8b = { 3 }, - .result = REJECT, - .errstr = "misaligned value access off", - .prog_type = BPF_PROG_TYPE_SCHED_CLS, -}, -{ - "xadd/w check unaligned pkt", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 2), - BPF_MOV64_IMM(BPF_REG_0, 99), - BPF_JMP_IMM(BPF_JA, 0, 0, 6), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0), - BPF_ST_MEM(BPF_W, BPF_REG_2, 3, 0), - BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_2, BPF_REG_0, 1), - BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_2, BPF_REG_0, 2), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 1), - BPF_EXIT_INSN(), - }, - .result = REJECT, - .errstr = "BPF_ATOMIC stores into R2 pkt is not allowed", - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "xadd/w check whether src/dst got mangled, 1", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - 
BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_0, -8), - BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_0, -8), - BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_0, 3), - BPF_JMP_REG(BPF_JNE, BPF_REG_7, BPF_REG_10, 2), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 42), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 3, -}, -{ - "xadd/w check whether src/dst got mangled, 2", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), - BPF_MOV64_REG(BPF_REG_7, BPF_REG_10), - BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8), - BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_10, BPF_REG_0, -8), - BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_10, BPF_REG_0, -8), - BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_0, 3), - BPF_JMP_REG(BPF_JNE, BPF_REG_7, BPF_REG_10, 2), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8), - BPF_EXIT_INSN(), - BPF_MOV64_IMM(BPF_REG_0, 42), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_SCHED_CLS, - .retval = 3, -}, diff --git a/tools/testing/selftests/bpf/verifier/xdp.c b/tools/testing/selftests/bpf/verifier/xdp.c deleted file mode 100644 index 5ac390508139..000000000000 --- a/tools/testing/selftests/bpf/verifier/xdp.c +++ /dev/null @@ -1,14 +0,0 @@ -{ - "XDP, using ifindex from netdev", - .insns = { - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, ingress_ifindex)), - BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 1, 1), - BPF_MOV64_IMM(BPF_REG_0, 1), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .retval = 1, -}, diff --git a/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c b/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c deleted file mode 100644 index b4ec228eb95d..000000000000 --- a/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c +++ /dev/null @@ -1,1468 +0,0 @@ -{ - "XDP pkt read, pkt_end mangling, bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R3 pointer arithmetic on pkt_end", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "XDP pkt read, pkt_end mangling, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_ALU64_IMM(BPF_SUB, BPF_REG_3, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R3 pointer arithmetic on pkt_end", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, -}, -{ - "XDP pkt read, pkt_data' > pkt_end, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, 
BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' > pkt_end, bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' > pkt_end, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' > pkt_end, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' > pkt_end, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end > pkt_data', good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end > pkt_data', corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct 
xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end > pkt_data', bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end > pkt_data', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end > pkt_data', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' < pkt_end, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' < pkt_end, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' < 
pkt_end, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' < pkt_end, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' < pkt_end, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end < pkt_data', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end < pkt_data', bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end < pkt_data', bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = 
F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end < pkt_data', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end < pkt_data', corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' >= pkt_end, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' >= pkt_end, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' >= pkt_end, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' >= pkt_end, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = 
BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' >= pkt_end, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end >= pkt_data', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end >= pkt_data', bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end >= pkt_data', bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end >= pkt_data', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end >= pkt_data', corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, 
BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' <= pkt_end, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' <= pkt_end, bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' <= pkt_end, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' <= pkt_end, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data' <= pkt_end, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end <= pkt_data', good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, 
BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end <= pkt_data', corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end <= pkt_data', bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end <= pkt_data', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_end <= pkt_data', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, - offsetof(struct xdp_md, data_end)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' > pkt_data, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' > pkt_data, bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 
offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' > pkt_data, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' > pkt_data, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' > pkt_data, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data > pkt_meta', good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data > pkt_meta', corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data > pkt_meta', bad access 2", - 
.insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data > pkt_meta', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data > pkt_meta', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' < pkt_data, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' < pkt_data, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' < pkt_data, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = 
REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' < pkt_data, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' < pkt_data, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data < pkt_meta', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data < pkt_meta', bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data < pkt_meta', bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data < pkt_meta', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - 
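The pkt_meta/pkt_data variants apply the same rule to the metadata area in front of the packet: a pointer derived from ctx->data_meta may only be dereferenced once it is proven not to reach ctx->data. A sketch under the same assumptions as above (reserving the metadata, e.g. with bpf_xdp_adjust_meta(), is omitted here):

	SEC("xdp")
	int read_meta(struct xdp_md *ctx)
	{
		void *meta = (void *)(long)ctx->data_meta;
		void *data = (void *)(long)ctx->data;

		/* mirrors "r1 = data_meta; r1 += 8; if r1 > r3 goto ..." in the tests */
		if (meta + 8 > data)
			return XDP_PASS;	/* no, or too little, metadata */

		__u64 tag = *(__u64 *)meta;	/* reads stay within [data_meta, data) */
		return tag ? XDP_PASS : XDP_DROP;
	}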
BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data < pkt_meta', corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' >= pkt_data, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' >= pkt_data, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' >= pkt_data, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' >= pkt_data, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' >= pkt_data, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), - 
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data >= pkt_meta', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data >= pkt_meta', bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data >= pkt_meta', bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data >= pkt_meta', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data >= pkt_meta', corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' <= pkt_data, corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, 
BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' <= pkt_data, bad access 1", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' <= pkt_data, bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' <= pkt_data, corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_meta' <= pkt_data, corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), - BPF_JMP_IMM(BPF_JA, 0, 0, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data <= pkt_meta', good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data <= pkt_meta', 
corner case -1, bad access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data <= pkt_meta', bad access 2", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 0), - BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .errstr = "R1 offset is outside of the packet", - .result = REJECT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data <= pkt_meta', corner case, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, -{ - "XDP pkt read, pkt_data <= pkt_meta', corner case +1, good access", - .insns = { - BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, - offsetof(struct xdp_md, data_meta)), - BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), - BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), - BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), - BPF_MOV64_IMM(BPF_REG_0, 0), - BPF_EXIT_INSN(), - }, - .result = ACCEPT, - .prog_type = BPF_PROG_TYPE_XDP, - .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, -}, diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c index 83231456d3c5..1db7185181da 100644 --- a/tools/testing/selftests/bpf/veristat.c +++ b/tools/testing/selftests/bpf/veristat.c @@ -1,10 +1,9 @@ -// SPDX-License-Identifier: GPL-2.0 +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) /* Copyright (c) 2022 Meta Platforms, Inc. and affiliates. 
*/ #define _GNU_SOURCE #include <argp.h> #include <string.h> #include <stdlib.h> -#include <linux/compiler.h> #include <sched.h> #include <pthread.h> #include <dirent.h> @@ -15,10 +14,15 @@ #include <sys/sysinfo.h> #include <sys/stat.h> #include <bpf/libbpf.h> +#include <bpf/btf.h> #include <libelf.h> #include <gelf.h> #include <float.h> +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0])) +#endif + enum stat_id { VERDICT, DURATION, @@ -135,12 +139,17 @@ static struct env { char **filenames; int filename_cnt; bool verbose; + bool debug; bool quiet; - int log_level; enum resfmt out_fmt; + bool show_version; bool comparison_mode; bool replay_mode; + int log_level; + int log_size; + bool log_fixed; + struct verif_stats *prog_stats; int prog_stat_cnt; @@ -169,23 +178,37 @@ static int libbpf_print_fn(enum libbpf_print_level level, const char *format, va { if (!env.verbose) return 0; - if (level == LIBBPF_DEBUG /* && !env.verbose */) + if (level == LIBBPF_DEBUG && !env.debug) return 0; return vfprintf(stderr, format, args); } -const char *argp_program_version = "veristat"; +#ifndef VERISTAT_VERSION +#define VERISTAT_VERSION "<kernel>" +#endif + +const char *argp_program_version = "veristat v" VERISTAT_VERSION; const char *argp_program_bug_address = "<bpf@vger.kernel.org>"; const char argp_program_doc[] = "veristat BPF verifier stats collection and comparison tool.\n" "\n" "USAGE: veristat <obj-file> [<obj-file>...]\n" -" OR: veristat -C <baseline.csv> <comparison.csv>\n"; +" OR: veristat -C <baseline.csv> <comparison.csv>\n" +" OR: veristat -R <results.csv>\n"; + +enum { + OPT_LOG_FIXED = 1000, + OPT_LOG_SIZE = 1001, +}; static const struct argp_option opts[] = { { NULL, 'h', NULL, OPTION_HIDDEN, "Show the full help" }, + { "version", 'V', NULL, 0, "Print version" }, { "verbose", 'v', NULL, 0, "Verbose mode" }, + { "debug", 'd', NULL, 0, "Debug mode (turns on libbpf debug logging)" }, { "log-level", 'l', "LEVEL", 0, "Verifier log level (default 0 for normal mode, 1 for verbose mode)" }, + { "log-fixed", OPT_LOG_FIXED, NULL, 0, "Disable verifier log rotation" }, + { "log-size", OPT_LOG_SIZE, "BYTES", 0, "Customize verifier log size (default to 16MB)" }, { "quiet", 'q', NULL, 0, "Quiet mode" }, { "emit", 'e', "SPEC", 0, "Specify stats to be emitted" }, { "sort", 's', "SPEC", 0, "Specify sort order" }, @@ -209,9 +232,16 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state) case 'h': argp_state_help(state, stderr, ARGP_HELP_STD_HELP); break; + case 'V': + env.show_version = true; + break; case 'v': env.verbose = true; break; + case 'd': + env.debug = true; + env.verbose = true; + break; case 'q': env.quiet = true; break; @@ -243,6 +273,17 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state) argp_usage(state); } break; + case OPT_LOG_FIXED: + env.log_fixed = true; + break; + case OPT_LOG_SIZE: + errno = 0; + env.log_size = strtol(arg, NULL, 10); + if (errno) { + fprintf(stderr, "invalid log size: %s\n", arg); + argp_usage(state); + } + break; case 'C': env.comparison_mode = true; break; @@ -772,7 +813,62 @@ static int parse_verif_log(char * const buf, size_t buf_sz, struct verif_stats * return 0; } -static void fixup_obj(struct bpf_object *obj) +static int guess_prog_type_by_ctx_name(const char *ctx_name, + enum bpf_prog_type *prog_type, + enum bpf_attach_type *attach_type) +{ + /* We need to guess program type based on its declared context type. 
+ * This guess can't be perfect as many different program types might + * share the same context type. So we can only hope to reasonably + * well guess this and get lucky. + * + * Just in case, we support both UAPI-side type names and + * kernel-internal names. + */ + static struct { + const char *uapi_name; + const char *kern_name; + enum bpf_prog_type prog_type; + enum bpf_attach_type attach_type; + } ctx_map[] = { + /* __sk_buff is most ambiguous, we assume TC program */ + { "__sk_buff", "sk_buff", BPF_PROG_TYPE_SCHED_CLS }, + { "bpf_sock", "sock", BPF_PROG_TYPE_CGROUP_SOCK, BPF_CGROUP_INET4_POST_BIND }, + { "bpf_sock_addr", "bpf_sock_addr_kern", BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_BIND }, + { "bpf_sock_ops", "bpf_sock_ops_kern", BPF_PROG_TYPE_SOCK_OPS, BPF_CGROUP_SOCK_OPS }, + { "sk_msg_md", "sk_msg", BPF_PROG_TYPE_SK_MSG, BPF_SK_MSG_VERDICT }, + { "bpf_cgroup_dev_ctx", "bpf_cgroup_dev_ctx", BPF_PROG_TYPE_CGROUP_DEVICE, BPF_CGROUP_DEVICE }, + { "bpf_sysctl", "bpf_sysctl_kern", BPF_PROG_TYPE_CGROUP_SYSCTL, BPF_CGROUP_SYSCTL }, + { "bpf_sockopt", "bpf_sockopt_kern", BPF_PROG_TYPE_CGROUP_SOCKOPT, BPF_CGROUP_SETSOCKOPT }, + { "sk_reuseport_md", "sk_reuseport_kern", BPF_PROG_TYPE_SK_REUSEPORT, BPF_SK_REUSEPORT_SELECT_OR_MIGRATE }, + { "bpf_sk_lookup", "bpf_sk_lookup_kern", BPF_PROG_TYPE_SK_LOOKUP, BPF_SK_LOOKUP }, + { "xdp_md", "xdp_buff", BPF_PROG_TYPE_XDP, BPF_XDP }, + /* tracing types with no expected attach type */ + { "bpf_user_pt_regs_t", "pt_regs", BPF_PROG_TYPE_KPROBE }, + { "bpf_perf_event_data", "bpf_perf_event_data_kern", BPF_PROG_TYPE_PERF_EVENT }, + /* raw_tp programs use u64[] from kernel side, we don't want + * to match on that, probably; so NULL for kern-side type + */ + { "bpf_raw_tracepoint_args", NULL, BPF_PROG_TYPE_RAW_TRACEPOINT }, + }; + int i; + + if (!ctx_name) + return -EINVAL; + + for (i = 0; i < ARRAY_SIZE(ctx_map); i++) { + if (strcmp(ctx_map[i].uapi_name, ctx_name) == 0 || + (ctx_map[i].kern_name && strcmp(ctx_map[i].kern_name, ctx_name) == 0)) { + *prog_type = ctx_map[i].prog_type; + *attach_type = ctx_map[i].attach_type; + return 0; + } + } + + return -ESRCH; +} + +static void fixup_obj(struct bpf_object *obj, struct bpf_program *prog, const char *filename) { struct bpf_map *map; @@ -792,18 +888,75 @@ static void fixup_obj(struct bpf_object *obj) bpf_map__set_max_entries(map, 1); } } + + /* SEC(freplace) programs can't be loaded with veristat as is, + * but we can try guessing their target program's expected type by + * looking at the type of program's first argument and substituting + * corresponding program type + */ + if (bpf_program__type(prog) == BPF_PROG_TYPE_EXT) { + const struct btf *btf = bpf_object__btf(obj); + const char *prog_name = bpf_program__name(prog); + enum bpf_prog_type prog_type; + enum bpf_attach_type attach_type; + const struct btf_type *t; + const char *ctx_name; + int id; + + if (!btf) + goto skip_freplace_fixup; + + id = btf__find_by_name_kind(btf, prog_name, BTF_KIND_FUNC); + t = btf__type_by_id(btf, id); + t = btf__type_by_id(btf, t->type); + if (!btf_is_func_proto(t) || btf_vlen(t) != 1) + goto skip_freplace_fixup; + + /* context argument is a pointer to a struct/typedef */ + t = btf__type_by_id(btf, btf_params(t)[0].type); + while (t && btf_is_mod(t)) + t = btf__type_by_id(btf, t->type); + if (!t || !btf_is_ptr(t)) + goto skip_freplace_fixup; + t = btf__type_by_id(btf, t->type); + while (t && btf_is_mod(t)) + t = btf__type_by_id(btf, t->type); + if (!t) + goto skip_freplace_fixup; + + ctx_name = 
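guess_prog_type_by_ctx_name() boils down to a table lookup, and the way fixup_obj() consumes it (shown in full just below) is the standard libbpf pattern of overriding a program's type before load. A condensed sketch of that call sequence, with a hard-coded context name standing in for the one recovered from BTF:

	enum bpf_prog_type prog_type;
	enum bpf_attach_type attach_type;

	/* "xdp_md" is only an example; fixup_obj() derives the name from the
	 * freplace program's first argument type in the object's BTF */
	if (guess_prog_type_by_ctx_name("xdp_md", &prog_type, &attach_type) == 0) {
		bpf_program__set_type(prog, prog_type);
		bpf_program__set_expected_attach_type(prog, attach_type);
	}
	/* bpf_object__load() can then verify the freplace program as if it
	 * were a native program of the guessed type */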
btf__name_by_offset(btf, t->name_off); + + if (guess_prog_type_by_ctx_name(ctx_name, &prog_type, &attach_type) == 0) { + bpf_program__set_type(prog, prog_type); + bpf_program__set_expected_attach_type(prog, attach_type); + + if (!env.quiet) { + printf("Using guessed program type '%s' for %s/%s...\n", + libbpf_bpf_prog_type_str(prog_type), + filename, prog_name); + } + } else { + if (!env.quiet) { + printf("Failed to guess program type for freplace program with context type name '%s' for %s/%s. Consider using canonical type names to help veristat...\n", + ctx_name, filename, prog_name); + } + } + } +skip_freplace_fixup: + return; } static int process_prog(const char *filename, struct bpf_object *obj, struct bpf_program *prog) { const char *prog_name = bpf_program__name(prog); - size_t buf_sz = sizeof(verif_log_buf); - char *buf = verif_log_buf; + const char *base_filename = basename(filename); + char *buf; + int buf_sz, log_level; struct verif_stats *stats; int err = 0; void *tmp; - if (!should_process_file_prog(basename(filename), bpf_program__name(prog))) { + if (!should_process_file_prog(base_filename, bpf_program__name(prog))) { env.progs_skipped++; return 0; } @@ -816,25 +969,30 @@ static int process_prog(const char *filename, struct bpf_object *obj, struct bpf memset(stats, 0, sizeof(*stats)); if (env.verbose) { - buf_sz = 16 * 1024 * 1024; + buf_sz = env.log_size ? env.log_size : 16 * 1024 * 1024; buf = malloc(buf_sz); if (!buf) return -ENOMEM; - bpf_program__set_log_buf(prog, buf, buf_sz); - bpf_program__set_log_level(prog, env.log_level | 4); /* stats + log */ + /* ensure we always request stats */ + log_level = env.log_level | 4 | (env.log_fixed ? 8 : 0); } else { - bpf_program__set_log_buf(prog, buf, buf_sz); - bpf_program__set_log_level(prog, 4); /* only verifier stats */ + buf = verif_log_buf; + buf_sz = sizeof(verif_log_buf); + /* request only verifier stats */ + log_level = 4 | (env.log_fixed ? 
8 : 0); } verif_log_buf[0] = '\0'; + bpf_program__set_log_buf(prog, buf, buf_sz); + bpf_program__set_log_level(prog, log_level); + /* increase chances of successful BPF object loading */ - fixup_obj(obj); + fixup_obj(obj, prog, base_filename); err = bpf_object__load(obj); env.progs_processed++; - stats->file_name = strdup(basename(filename)); + stats->file_name = strdup(base_filename); stats->prog_name = strdup(bpf_program__name(prog)); stats->stats[VERDICT] = err == 0; /* 1 - success, 0 - failure */ parse_verif_log(buf, buf_sz, stats); @@ -913,6 +1071,7 @@ static int process_obj(const char *filename) goto cleanup; } + lprog = NULL; bpf_object__for_each_program(tprog, tobj) { const char *tprog_name = bpf_program__name(tprog); @@ -1691,18 +1850,22 @@ static int handle_comparison_mode(void) join->stats_b = comp; i++; j++; - } else if (comp == &fallback_stats || r < 0) { + } else if (base != &fallback_stats && (comp == &fallback_stats || r < 0)) { join->file_name = base->file_name; join->prog_name = base->prog_name; join->stats_a = base; join->stats_b = NULL; i++; - } else { + } else if (comp != &fallback_stats && (base == &fallback_stats || r > 0)) { join->file_name = comp->file_name; join->prog_name = comp->prog_name; join->stats_a = NULL; join->stats_b = comp; j++; + } else { + fprintf(stderr, "%s:%d: should never reach here i=%i, j=%i", + __FILE__, __LINE__, i, j); + return -EINVAL; } env.join_stat_cnt += 1; } @@ -1723,6 +1886,7 @@ static int handle_comparison_mode(void) one_more_time: output_comp_headers(cur_fmt); + last_idx = -1; for (i = 0; i < env.join_stat_cnt; i++) { const struct verif_stats_join *join = &env.join_stats[i]; @@ -1872,6 +2036,11 @@ int main(int argc, char **argv) if (argp_parse(&argp, argc, argv, 0, NULL, NULL)) return 1; + if (env.show_version) { + printf("%s\n", argp_program_version); + return 0; + } + if (env.verbose && env.quiet) { fprintf(stderr, "Verbose and quiet modes are incompatible, please specify just one or neither!\n\n"); argp_help(&argp, stderr, ARGP_HELP_USAGE, "veristat"); diff --git a/tools/testing/selftests/bpf/xdp_features.c b/tools/testing/selftests/bpf/xdp_features.c index fce12165213b..b449788fbd39 100644 --- a/tools/testing/selftests/bpf/xdp_features.c +++ b/tools/testing/selftests/bpf/xdp_features.c @@ -25,6 +25,7 @@ static struct env { bool verbosity; + char ifname[IF_NAMESIZE]; int ifindex; bool is_tester; struct { @@ -151,20 +152,26 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state) case 'D': if (make_sockaddr(AF_INET6, arg, DUT_ECHO_PORT, &env.dut_addr, NULL)) { - fprintf(stderr, "Invalid DUT address: %s\n", arg); + fprintf(stderr, + "Invalid address assigned to the Device Under Test: %s\n", + arg); return ARGP_ERR_UNKNOWN; } break; case 'C': if (make_sockaddr(AF_INET6, arg, DUT_CTRL_PORT, &env.dut_ctrl_addr, NULL)) { - fprintf(stderr, "Invalid DUT CTRL address: %s\n", arg); + fprintf(stderr, + "Invalid address assigned to the Device Under Test: %s\n", + arg); return ARGP_ERR_UNKNOWN; } break; case 'T': if (make_sockaddr(AF_INET6, arg, 0, &env.tester_addr, NULL)) { - fprintf(stderr, "Invalid Tester address: %s\n", arg); + fprintf(stderr, + "Invalid address assigned to the Tester device: %s\n", + arg); return ARGP_ERR_UNKNOWN; } break; @@ -179,7 +186,7 @@ static error_t parse_arg(int key, char *arg, struct argp_state *state) env.ifindex = if_nametoindex(arg); if (!env.ifindex) env.ifindex = strtoul(arg, NULL, 0); - if (!env.ifindex) { + if (!env.ifindex || !if_indextoname(env.ifindex, env.ifname)) { fprintf(stderr, 
"Bad interface index or name (%d): %s\n", errno, strerror(errno)); @@ -205,6 +212,7 @@ static void set_env_default(void) env.feature.drv_feature = NETDEV_XDP_ACT_NDO_XMIT; env.feature.action = -EINVAL; env.ifindex = -ENODEV; + strcpy(env.ifname, "unknown"); make_sockaddr(AF_INET6, "::ffff:127.0.0.1", DUT_CTRL_PORT, &env.dut_ctrl_addr, NULL); make_sockaddr(AF_INET6, "::ffff:127.0.0.1", DUT_ECHO_PORT, @@ -248,15 +256,18 @@ static int dut_run_echo_thread(pthread_t *t, int *sockfd) sockfd = start_reuseport_server(AF_INET6, SOCK_DGRAM, NULL, DUT_ECHO_PORT, 0, 1); if (!sockfd) { - fprintf(stderr, "Failed to create echo socket\n"); + fprintf(stderr, + "Failed creating data UDP socket on device %s\n", + env.ifname); return -errno; } /* start echo channel */ err = pthread_create(t, NULL, dut_echo_thread, sockfd); if (err) { - fprintf(stderr, "Failed creating dut_echo thread: %s\n", - strerror(-err)); + fprintf(stderr, + "Failed creating data UDP thread on device %s: %s\n", + env.ifname, strerror(-err)); free_fds(sockfd, 1); return -EINVAL; } @@ -320,9 +331,8 @@ static int dut_attach_xdp_prog(struct xdp_features *skel, int flags) err = bpf_xdp_attach(env.ifindex, bpf_program__fd(prog), flags, NULL); if (err) - fprintf(stderr, - "Failed to attach XDP program to ifindex %d\n", - env.ifindex); + fprintf(stderr, "Failed attaching XDP program to device %s\n", + env.ifname); return err; } @@ -358,13 +368,16 @@ static int dut_run(struct xdp_features *skel) sockfd = start_reuseport_server(AF_INET6, SOCK_STREAM, NULL, DUT_CTRL_PORT, 0, 1); if (!sockfd) { - fprintf(stderr, "Failed to create DUT socket\n"); + fprintf(stderr, + "Failed creating control socket on device %s\n", env.ifname); return -errno; } ctrl_sockfd = accept(*sockfd, (struct sockaddr *)&ctrl_addr, &addrlen); if (ctrl_sockfd < 0) { - fprintf(stderr, "Failed to accept connection on DUT socket\n"); + fprintf(stderr, + "Failed accepting connections on device %s control socket\n", + env.ifname); free_fds(sockfd, 1); return -errno; } @@ -422,8 +435,8 @@ static int dut_run(struct xdp_features *skel) &opts); if (err) { fprintf(stderr, - "Failed to query XDP cap for ifindex %d\n", - env.ifindex); + "Failed querying XDP cap for device %s\n", + env.ifname); goto end_thread; } @@ -447,7 +460,8 @@ static int dut_run(struct xdp_features *skel) &key, sizeof(key), &val, sizeof(val), 0); if (err) { - fprintf(stderr, "bpf_map_lookup_elem failed\n"); + fprintf(stderr, + "bpf_map_lookup_elem failed (%d)\n", err); goto end_thread; } @@ -489,7 +503,7 @@ static bool tester_collect_detected_cap(struct xdp_features *skel, err = bpf_map__lookup_elem(skel->maps.stats, &key, sizeof(key), &val, sizeof(val), 0); if (err) { - fprintf(stderr, "bpf_map_lookup_elem failed\n"); + fprintf(stderr, "bpf_map_lookup_elem failed (%d)\n", err); return false; } @@ -540,7 +554,9 @@ static int send_echo_msg(void) sockfd = socket(AF_INET6, SOCK_DGRAM, 0); if (sockfd < 0) { - fprintf(stderr, "Failed to create echo socket\n"); + fprintf(stderr, + "Failed creating data UDP socket on device %s\n", + env.ifname); return -errno; } @@ -565,7 +581,8 @@ static int tester_run(struct xdp_features *skel) sockfd = socket(AF_INET6, SOCK_STREAM, 0); if (sockfd < 0) { - fprintf(stderr, "Failed to create tester socket\n"); + fprintf(stderr, + "Failed creating tester service control socket\n"); return -errno; } @@ -575,7 +592,8 @@ static int tester_run(struct xdp_features *skel) err = connect(sockfd, (struct sockaddr *)&env.dut_ctrl_addr, sizeof(env.dut_ctrl_addr)); if (err) { - fprintf(stderr, "Failed to 
connect to the DUT\n"); + fprintf(stderr, + "Failed connecting to the Device Under Test control socket\n"); return -errno; } @@ -596,8 +614,8 @@ static int tester_run(struct xdp_features *skel) err = bpf_xdp_attach(env.ifindex, bpf_program__fd(prog), flags, NULL); if (err) { - fprintf(stderr, "Failed to attach XDP program to ifindex %d\n", - env.ifindex); + fprintf(stderr, "Failed attaching XDP program to device %s\n", + env.ifname); goto out; } @@ -653,7 +671,7 @@ int main(int argc, char **argv) return err; if (env.ifindex < 0) { - fprintf(stderr, "Invalid ifindex\n"); + fprintf(stderr, "Invalid device name %s\n", env.ifname); return -ENODEV; } @@ -684,11 +702,12 @@ int main(int argc, char **argv) if (env.is_tester) { /* Tester */ - fprintf(stdout, "Starting tester on device %d\n", env.ifindex); + fprintf(stdout, "Starting tester service on device %s\n", + env.ifname); err = tester_run(skel); } else { /* DUT */ - fprintf(stdout, "Starting DUT on device %d\n", env.ifindex); + fprintf(stdout, "Starting test on device %s\n", env.ifname); err = dut_run(skel); } diff --git a/tools/testing/selftests/bpf/xsk_xdp_metadata.h b/tools/testing/selftests/bpf/xsk_xdp_metadata.h new file mode 100644 index 000000000000..943133da378a --- /dev/null +++ b/tools/testing/selftests/bpf/xsk_xdp_metadata.h @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +struct xdp_info { + __u64 count; +} __attribute__((aligned(32))); diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c index a17655107a94..5a9691e942de 100644 --- a/tools/testing/selftests/bpf/xskxceiver.c +++ b/tools/testing/selftests/bpf/xskxceiver.c @@ -69,6 +69,7 @@ */ #define _GNU_SOURCE +#include <assert.h> #include <fcntl.h> #include <errno.h> #include <getopt.h> @@ -103,6 +104,7 @@ #include <bpf/bpf.h> #include <linux/filter.h> #include "../kselftest.h" +#include "xsk_xdp_metadata.h" static const char *MAC1 = "\x00\x0A\x56\x9E\xEE\x62"; static const char *MAC2 = "\x00\x0A\x56\x9E\xEE\x61"; @@ -464,6 +466,7 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx, ifobj->use_fill_ring = true; ifobj->release_rx = true; ifobj->validation_func = NULL; + ifobj->use_metadata = false; if (i == 0) { ifobj->rx_on = false; @@ -631,7 +634,6 @@ static struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb if (!pkt_stream) exit_with_error(ENOMEM); - pkt_stream->nb_pkts = nb_pkts; for (i = 0; i < nb_pkts; i++) { pkt_set(umem, &pkt_stream->pkts[i], (i % umem->num_frames) * umem->frame_size, pkt_len); @@ -798,6 +800,20 @@ static bool is_offset_correct(struct xsk_umem_info *umem, struct pkt_stream *pkt return false; } +static bool is_metadata_correct(struct pkt *pkt, void *buffer, u64 addr) +{ + void *data = xsk_umem__get_data(buffer, addr); + struct xdp_info *meta = data - sizeof(struct xdp_info); + + if (meta->count != pkt->payload) { + ksft_print_msg("[%s] expected meta_count [%d], got meta_count [%d]\n", + __func__, pkt->payload, meta->count); + return false; + } + + return true; +} + static bool is_pkt_valid(struct pkt *pkt, void *buffer, u64 addr, u32 len) { void *data = xsk_umem__get_data(buffer, addr); @@ -959,7 +975,8 @@ static int receive_pkts(struct test_spec *test, struct pollfd *fds) addr = xsk_umem__add_offset_to_addr(addr); if (!is_pkt_valid(pkt, umem->buffer, addr, desc->len) || - !is_offset_correct(umem, pkt_stream, addr, pkt->addr)) + !is_offset_correct(umem, pkt_stream, addr, pkt->addr) || + (ifobj->use_metadata && !is_metadata_correct(pkt, umem->buffer, 
addr))) return TEST_FAILURE; if (ifobj->use_fill_ring) @@ -1124,7 +1141,14 @@ static int validate_rx_dropped(struct ifobject *ifobject) if (err) return TEST_FAILURE; - if (stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2) + /* The receiver calls getsockopt after receiving the last (valid) + * packet which is not the final packet sent in this test (valid and + * invalid packets are sent in alternating fashion with the final + * packet being invalid). Since the last packet may or may not have + * been dropped already, both outcomes must be allowed. + */ + if (stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2 || + stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2 - 1) return TEST_PASS; return TEST_FAILURE; @@ -1635,6 +1659,7 @@ static void testapp_single_pkt(struct test_spec *test) static void testapp_invalid_desc(struct test_spec *test) { + u64 umem_size = test->ifobj_tx->umem->num_frames * test->ifobj_tx->umem->frame_size; struct pkt pkts[] = { /* Zero packet address allowed */ {0, PKT_SIZE, 0, true}, @@ -1644,10 +1669,12 @@ static void testapp_invalid_desc(struct test_spec *test) {-2, PKT_SIZE, 0, false}, /* Packet too large */ {0x2000, XSK_UMEM__INVALID_FRAME_SIZE, 0, false}, + /* Up to end of umem allowed */ + {umem_size - PKT_SIZE, PKT_SIZE, 0, true}, /* After umem ends */ - {UMEM_SIZE, PKT_SIZE, 0, false}, + {umem_size, PKT_SIZE, 0, false}, /* Straddle the end of umem */ - {UMEM_SIZE - PKT_SIZE / 2, PKT_SIZE, 0, false}, + {umem_size - PKT_SIZE / 2, PKT_SIZE, 0, false}, /* Straddle a page boundrary */ {0x3000 - PKT_SIZE / 2, PKT_SIZE, 0, false}, /* Straddle a 2K boundrary */ @@ -1657,16 +1684,17 @@ static void testapp_invalid_desc(struct test_spec *test) if (test->ifobj_tx->umem->unaligned_mode) { /* Crossing a page boundrary allowed */ - pkts[6].valid = true; + pkts[7].valid = true; } if (test->ifobj_tx->umem->frame_size == XSK_UMEM__DEFAULT_FRAME_SIZE / 2) { /* Crossing a 2K frame size boundrary not allowed */ - pkts[7].valid = false; + pkts[8].valid = false; } if (test->ifobj_tx->shared_umem) { - pkts[4].addr += UMEM_SIZE; - pkts[5].addr += UMEM_SIZE; + pkts[4].addr += umem_size; + pkts[5].addr += umem_size; + pkts[6].addr += umem_size; } pkt_stream_generate_custom(test, pkts, ARRAY_SIZE(pkts)); @@ -1686,6 +1714,30 @@ static void testapp_xdp_drop(struct test_spec *test) testapp_validate_traffic(test); } +static void testapp_xdp_metadata_count(struct test_spec *test) +{ + struct xsk_xdp_progs *skel_rx = test->ifobj_rx->xdp_progs; + struct xsk_xdp_progs *skel_tx = test->ifobj_tx->xdp_progs; + struct bpf_map *data_map; + int count = 0; + int key = 0; + + test_spec_set_name(test, "XDP_METADATA_COUNT"); + test_spec_set_xdp_prog(test, skel_rx->progs.xsk_xdp_populate_metadata, + skel_tx->progs.xsk_xdp_populate_metadata, + skel_rx->maps.xsk, skel_tx->maps.xsk); + test->ifobj_rx->use_metadata = true; + + data_map = bpf_object__find_map_by_name(skel_rx->obj, "xsk_xdp_.bss"); + if (!data_map || !bpf_map__is_internal(data_map)) + exit_with_error(ENOMEM); + + if (bpf_map_update_elem(bpf_map__fd(data_map), &key, &count, BPF_ANY)) + exit_with_error(errno); + + testapp_validate_traffic(test); +} + static void testapp_poll_txq_tmout(struct test_spec *test) { test_spec_set_name(test, "POLL_TXQ_FULL"); @@ -1825,6 +1877,29 @@ static void run_pkt_test(struct test_spec *test, enum test_mode mode, enum test_ test->ifobj_rx->umem->unaligned_mode = true; testapp_invalid_desc(test); break; + case TEST_TYPE_UNALIGNED_INV_DESC_4K1_FRAME: { + u64 page_size, umem_size; + + if 
(!hugepages_present(test->ifobj_tx)) { + ksft_test_result_skip("No 2M huge pages present.\n"); + return; + } + test_spec_set_name(test, "UNALIGNED_INV_DESC_4K1_FRAME_SIZE"); + /* Odd frame size so the UMEM doesn't end near a page boundary. */ + test->ifobj_tx->umem->frame_size = 4001; + test->ifobj_rx->umem->frame_size = 4001; + test->ifobj_tx->umem->unaligned_mode = true; + test->ifobj_rx->umem->unaligned_mode = true; + /* This test exists to test descriptors that staddle the end of + * the UMEM but not a page. + */ + page_size = sysconf(_SC_PAGESIZE); + umem_size = test->ifobj_tx->umem->num_frames * test->ifobj_tx->umem->frame_size; + assert(umem_size % page_size > PKT_SIZE); + assert(umem_size % page_size < page_size - PKT_SIZE); + testapp_invalid_desc(test); + break; + } case TEST_TYPE_UNALIGNED: if (!testapp_unaligned(test)) return; @@ -1835,6 +1910,9 @@ static void run_pkt_test(struct test_spec *test, enum test_mode mode, enum test_ case TEST_TYPE_XDP_DROP_HALF: testapp_xdp_drop(test); break; + case TEST_TYPE_XDP_METADATA_COUNT: + testapp_xdp_metadata_count(test); + break; default: break; } diff --git a/tools/testing/selftests/bpf/xskxceiver.h b/tools/testing/selftests/bpf/xskxceiver.h index 3e8ec7d8ec32..919327807a4e 100644 --- a/tools/testing/selftests/bpf/xskxceiver.h +++ b/tools/testing/selftests/bpf/xskxceiver.h @@ -53,7 +53,6 @@ #define THREAD_TMOUT 3 #define DEFAULT_PKT_CNT (4 * 1024) #define DEFAULT_UMEM_BUFFERS (DEFAULT_PKT_CNT / 4) -#define UMEM_SIZE (DEFAULT_UMEM_BUFFERS * XSK_UMEM__DEFAULT_FRAME_SIZE) #define RX_FULL_RXQSIZE 32 #define UMEM_HEADROOM_TEST_SIZE 128 #define XSK_UMEM__INVALID_FRAME_SIZE (XSK_UMEM__DEFAULT_FRAME_SIZE + 1) @@ -79,6 +78,7 @@ enum test_type { TEST_TYPE_ALIGNED_INV_DESC, TEST_TYPE_ALIGNED_INV_DESC_2K_FRAME, TEST_TYPE_UNALIGNED_INV_DESC, + TEST_TYPE_UNALIGNED_INV_DESC_4K1_FRAME, TEST_TYPE_HEADROOM, TEST_TYPE_TEARDOWN, TEST_TYPE_BIDI, @@ -88,6 +88,7 @@ enum test_type { TEST_TYPE_STATS_FILL_EMPTY, TEST_TYPE_BPF_RES, TEST_TYPE_XDP_DROP_HALF, + TEST_TYPE_XDP_METADATA_COUNT, TEST_TYPE_MAX }; @@ -158,6 +159,7 @@ struct ifobject { bool use_fill_ring; bool release_rx; bool shared_umem; + bool use_metadata; u8 dst_mac[ETH_ALEN]; u8 src_mac[ETH_ALEN]; }; |
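The veristat hunk above resolves a context type name to a likely program type with a plain lookup table, matching either the UAPI-side or the kernel-internal struct name. A minimal standalone sketch of that table lookup, with an abbreviated table and assuming only the UAPI <linux/bpf.h> enum values:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <linux/bpf.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* abbreviated version of veristat's ctx_map[]: UAPI name, kernel name,
 * and the program type to guess when either name matches
 */
static const struct {
	const char *uapi_name;
	const char *kern_name;
	enum bpf_prog_type prog_type;
} ctx_map[] = {
	{ "__sk_buff",          "sk_buff",  BPF_PROG_TYPE_SCHED_CLS },
	{ "xdp_md",             "xdp_buff", BPF_PROG_TYPE_XDP },
	{ "bpf_user_pt_regs_t", "pt_regs",  BPF_PROG_TYPE_KPROBE },
};

static int guess_prog_type(const char *ctx_name, enum bpf_prog_type *prog_type)
{
	size_t i;

	if (!ctx_name)
		return -EINVAL;

	for (i = 0; i < ARRAY_SIZE(ctx_map); i++) {
		if (strcmp(ctx_map[i].uapi_name, ctx_name) == 0 ||
		    (ctx_map[i].kern_name && strcmp(ctx_map[i].kern_name, ctx_name) == 0)) {
			*prog_type = ctx_map[i].prog_type;
			return 0;
		}
	}
	return -ESRCH;	/* unknown context type; caller leaves the type as-is */
}

int main(void)
{
	enum bpf_prog_type type;

	if (guess_prog_type("xdp_buff", &type) == 0)
		printf("guessed program type %d\n", type);
	return 0;
}

On a miss the caller simply keeps whatever type was already set, which is exactly how the freplace fixup above degrades gracefully.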
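Getting from a program name to that context type name is a short BTF walk: function, its prototype, the single parameter, then strip modifiers and one pointer. A condensed sketch of the walk, assuming libbpf's <bpf/btf.h>; resolve_ctx_name() is a made-up helper name, and unlike the hunk above it also checks the btf__find_by_name_kind() result:

#include <bpf/btf.h>

static const char *resolve_ctx_name(const struct btf *btf, const char *func_name)
{
	const struct btf_type *t;
	int id;

	id = btf__find_by_name_kind(btf, func_name, BTF_KIND_FUNC);
	if (id < 0)
		return NULL;

	t = btf__type_by_id(btf, id);		/* BTF_KIND_FUNC */
	t = btf__type_by_id(btf, t->type);	/* its FUNC_PROTO */
	if (!btf_is_func_proto(t) || btf_vlen(t) != 1)
		return NULL;			/* want exactly one argument */

	/* the lone argument must be a pointer to a struct/typedef */
	t = btf__type_by_id(btf, btf_params(t)[0].type);
	while (t && btf_is_mod(t))		/* skip const/volatile/restrict */
		t = btf__type_by_id(btf, t->type);
	if (!t || !btf_is_ptr(t))
		return NULL;

	t = btf__type_by_id(btf, t->type);	/* what the pointer points to */
	while (t && btf_is_mod(t))
		t = btf__type_by_id(btf, t->type);

	return t ? btf__name_by_offset(btf, t->name_off) : NULL;
}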
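process_prog() now composes the verifier log_level from flag bits and picks the log buffer size up front. A rough sketch of that selection, assuming libbpf's bpf_program__set_log_buf()/bpf_program__set_log_level(); bit 4 requests verifier stats and bit 8 requests the old fixed (non-rotating) log behavior added by this series. setup_verifier_log() is a made-up helper, not veristat's exact logic:

#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>
#include <bpf/libbpf.h>

static char small_log_buf[64 * 1024];

static int setup_verifier_log(struct bpf_program *prog, bool verbose, bool log_fixed)
{
	char *buf = small_log_buf;
	size_t buf_sz = sizeof(small_log_buf);
	__u32 level = 4;			/* always ask for verifier stats */

	if (verbose) {
		buf_sz = 16 * 1024 * 1024;	/* room for the full verbose log */
		buf = malloc(buf_sz);		/* caller frees it after load */
		if (!buf)
			return -ENOMEM;
		level |= 1;			/* plus the basic verifier log */
	}
	if (log_fixed)
		level |= 8;			/* opt back into fixed log mode */

	bpf_program__set_log_buf(prog, buf, buf_sz);
	bpf_program__set_log_level(prog, level);
	return 0;
}

Setting both buffer and level before bpf_object__load() is what makes the later parse_verif_log() pass meaningful, since the verifier writes its stats lines into that same buffer.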
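The comparison-mode change above hardens a textbook merge join over two lists sorted by the same key: emit a pair on a match, a left-only row when only the baseline has the program, a right-only row otherwise, and treat anything else as a bug. A toy, self-contained version of that control flow over strings:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *base[] = { "prog_a", "prog_b", "prog_d" };
	const char *comp[] = { "prog_b", "prog_c", "prog_d" };
	size_t i = 0, j = 0, nb = 3, nc = 3;

	while (i < nb || j < nc) {
		/* an exhausted side compares as "greater than anything" */
		int r = i >= nb ? 1 : j >= nc ? -1 : strcmp(base[i], comp[j]);

		if (r == 0) {
			printf("%s: in both\n", base[i]);
			i++; j++;
		} else if (r < 0) {
			printf("%s: only in baseline\n", base[i]);
			i++;
		} else {
			printf("%s: only in comparison\n", comp[j]);
			j++;
		}
	}
	return 0;
}

The defensive else branch added in the patch turns the remaining "impossible" combination into a hard error instead of silently mis-joining rows.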
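In xdp_features, the interface argument is now resolved both ways: if_nametoindex() for names, strtoul() for raw indexes, and if_indextoname() to keep a printable name for every later error message. A small sketch of just that argument handling, assuming <net/if.h>:

#include <net/if.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	char ifname[IF_NAMESIZE] = "unknown";
	unsigned int ifindex;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <ifname|ifindex>\n", argv[0]);
		return 1;
	}

	ifindex = if_nametoindex(argv[1]);		/* try it as a name first */
	if (!ifindex)
		ifindex = strtoul(argv[1], NULL, 0);	/* fall back to a raw index */
	if (!ifindex || !if_indextoname(ifindex, ifname)) {
		fprintf(stderr, "Bad interface index or name: %s\n", argv[1]);
		return 1;
	}

	printf("using device %s (ifindex %u)\n", ifname, ifindex);
	return 0;
}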
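The new xsk_xdp_metadata.h struct and is_metadata_correct() assume the XDP program left a struct xdp_info immediately in front of the packet, so userspace can read it at data - sizeof(struct xdp_info). A hedged sketch of the producer side of that layout; this is not the actual xsk_xdp_populate_metadata program (which also redirects the frame into the AF_XDP socket via an XSKMAP), just the metadata part:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct xdp_info {
	__u64 count;
} __attribute__((aligned(32)));		/* same layout as xsk_xdp_metadata.h */

__u64 pkt_count = 0;			/* lives in .bss, resettable from userspace */

SEC("xdp")
int populate_metadata(struct xdp_md *ctx)
{
	struct xdp_info *meta;
	void *data;

	/* grow the metadata area so it can hold struct xdp_info */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct xdp_info)))
		return XDP_DROP;

	data = (void *)(long)ctx->data;
	meta = (void *)(long)ctx->data_meta;
	if ((void *)(meta + 1) > data)	/* verifier-mandated bounds check */
		return XDP_DROP;

	meta->count = pkt_count++;
	return XDP_PASS;		/* the real test redirects to the XSK socket here */
}

char _license[] SEC("license") = "GPL";

The receive path then checks the counter exactly as is_metadata_correct() does above, by stepping back sizeof(struct xdp_info) bytes from the frame data returned by xsk_umem__get_data().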
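The new UNALIGNED_INV_DESC_4K1_FRAME_SIZE case rests on simple arithmetic: with an odd 4001-byte frame size the UMEM ends partway into a page, so a descriptor can straddle the end of the UMEM without also straddling a page, which is the situation the asserts above pin down. The numbers, assuming the default 1024 UMEM frames and a 4 KiB page size; PKT_SIZE below is a stand-in value, not the selftest constant:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long long num_frames = 1024;	/* DEFAULT_UMEM_BUFFERS */
	unsigned long long frame_size = 4001;	/* odd on purpose */
	unsigned long long page_size = 4096;
	unsigned long long pkt_size = 64;	/* stand-in for PKT_SIZE */
	unsigned long long umem_size = num_frames * frame_size;	/* 4097024 */

	/* 4097024 % 4096 == 1024: the UMEM ends 1024 bytes into a page */
	printf("umem_size %% page_size = %llu\n", umem_size % page_size);

	/* the test's asserts: the UMEM overhangs the last page boundary by
	 * more than one packet, and ends more than one packet before the
	 * next page boundary
	 */
	assert(umem_size % page_size > pkt_size);
	assert(umem_size % page_size < page_size - pkt_size);
	return 0;
}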