path: root/tools
2021-10-25  bpftool: Do not expose and init hash maps for pinned path in main.c  (Quentin Monnet)
BPF programs, maps, and links can all be listed with their pinned paths by bpftool when the "-f" option is provided. To do so, bpftool builds hash maps containing all pinned paths for each kind of object. These three hash maps are always initialised in main.c and exposed through main.h. There appears to be no particular reason to do so: we can just as well make them static to the files that need them (prog.c, map.c, and link.c respectively), and initialise them only when we want to show objects and the "-f" switch is provided. This may prevent unnecessary memory allocations if the implementation of the hash maps were to change in the future. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211023205154.6710-3-quentin@isovalent.com
2021-10-25  bpftool: Remove Makefile dep. on $(LIBBPF) for $(LIBBPF_INTERNAL_HDRS)  (Quentin Monnet)
The dependency is only useful to make sure that the $(LIBBPF_HDRS_DIR) directory is created before we try to install locally the required libbpf internal header. Let's create this directory properly instead. This is in preparation of making $(LIBBPF_INTERNAL_HDRS) a dependency to the bootstrap bpftool version, in which case we want no dependency on $(LIBBPF). Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211023205154.6710-2-quentin@isovalent.com
2021-10-25  selftests/bpf: Split out bpf_verif_scale selftests into multiple tests  (Andrii Nakryiko)
Instead of using subtests in the bpf_verif_scale selftest, turn each scale sub-test into its own test. Each subtest is completely independent and just reuses a bit of common test running logic, so the conversion is trivial. For convenience, keep all of the BPF verifier scale tests in one file.

This conversion shaves off a significant amount of time when running test_progs in parallel mode. E.g., just running scale tests (-t verif_scale):

  BEFORE
  ======
  Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED
  real    0m22.894s
  user    0m0.012s
  sys     0m22.797s

  AFTER
  =====
  Summary: 24/0 PASSED, 0 SKIPPED, 0 FAILED
  real    0m12.044s
  user    0m0.024s
  sys     0m27.869s

A ten-second saving right there. test_progs -j is not yet ready to be turned on by default, unfortunately, and some tests fail almost every time, but this is a good improvement nevertheless. Ignoring a few failures, here are the sequential vs parallel run times when running all tests now:

  SEQUENTIAL
  ==========
  Summary: 206/953 PASSED, 4 SKIPPED, 0 FAILED
  real    1m5.625s
  user    0m4.211s
  sys     0m31.650s

  PARALLEL
  ========
  Summary: 204/952 PASSED, 4 SKIPPED, 2 FAILED
  real    0m35.550s
  user    0m4.998s
  sys     0m39.890s

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022223228.99920-5-andrii@kernel.org
2021-10-25  selftests/bpf: Mark tc_redirect selftest as serial  (Andrii Nakryiko)
It seems to cause a lot of harm to kprobe/tracepoint selftests. Yucong mentioned before that it does manipulate sysfs, which might be the reason. So let's mark it as serial, though ideally it would be less intrusive on the system under test. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211022223228.99920-4-andrii@kernel.org
2021-10-25  selftests/bpf: Support multiple tests per file  (Andrii Nakryiko)
Revamp how test discovery works for test_progs and allow multiple test entries per file. Any global void function with no arguments and serial_test_ or test_ prefix is considered a test. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211022223228.99920-3-andrii@kernel.org
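As a rough illustration of the naming convention described above (the ASSERT_* helpers come from the selftests' test_progs.h; the function names and checks here are made-up placeholders, not actual selftests), a single file under prog_tests/ could now contain several entry points:

    /* prog_tests/example.c -- hypothetical sketch */
    #include <test_progs.h>

    /* discovered as test "foo"; may run in parallel with other tests */
    void test_foo(void)
    {
        ASSERT_EQ(1 + 1, 2, "arithmetic");
    }

    /* serial_test_ prefix: discovered as test "bar" and run serially */
    void serial_test_bar(void)
    {
        ASSERT_OK(0, "always_ok");
    }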
2021-10-25  selftests/bpf: Normalize selftest entry points  (Andrii Nakryiko)
Ensure that all test entry points are global void functions with no input arguments. Mark a few subtest entry points as static. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211022223228.99920-2-andrii@kernel.org
2021-10-22  libbpf: Fix BTF header parsing checks  (Andrii Nakryiko)
Original code assumed fixed and correct BTF header length. That's not always the case, though, so fix this bug with a proper additional check. And use actual header length instead of sizeof(struct btf_header) in sanity checks. Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf") Reported-by: Evgeny Vereshchagin <evvers@ya.ru> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211023003157.726961-2-andrii@kernel.org
2021-10-22  libbpf: Fix overflow in BTF sanity checks  (Andrii Nakryiko)
btf_header's str_off+str_len or type_off+type_len can overflow as they are u32s. This will lead to bypassing the sanity checks during BTF parsing, resulting in crashes afterwards. Fix by using 64-bit signed integers for comparison. Fixes: d8123624506c ("libbpf: Fix BTF data layout checks and allow empty BTF") Reported-by: Evgeny Vereshchagin <evvers@ya.ru> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211023003157.726961-1-andrii@kernel.org
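The class of bug being fixed can be sketched in plain C (a simplified stand-in for the real checks, with made-up field values rather than libbpf's actual code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t data_size = 100;                       /* total BTF data size */
        uint32_t str_off = 40, str_len = UINT32_MAX - 8; /* crafted input */

        /* Buggy: the u32 addition wraps around, so the bad offsets pass. */
        if (str_off + str_len > data_size)
            puts("u32 check: rejected");
        else
            puts("u32 check: accepted (overflow slipped through)");

        /* Fixed: compare in a wider signed type, so there is no wrap-around. */
        if ((int64_t)str_off + str_len > (int64_t)data_size)
            puts("s64 check: rejected");
        return 0;
    }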
2021-10-22  selftests/bpf: Add BTF_KIND_DECL_TAG typedef example in tag.c  (Yonghong Song)
Change value type in progs/tag.c to a typedef with a btf_decl_tag. With `bpftool btf dump file tag.o`, we have

  ...
  [14] TYPEDEF 'value_t' type_id=17
  [15] DECL_TAG 'tag1' type_id=14 component_idx=-1
  [16] DECL_TAG 'tag2' type_id=14 component_idx=-1
  [17] STRUCT '(anon)' size=8 vlen=2
          'a' type_id=2 bits_offset=0
          'b' type_id=2 bits_offset=32
  ...

The btf_tag selftest also succeeded:

  $ ./test_progs -t tag
  #21 btf_tag:OK
  Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211021195643.4020315-1-yhs@fb.com
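For reference, a declaration along these lines would produce BTF similar to the dump above (a sketch assuming a clang version that supports the btf_decl_tag attribute; the exact contents of progs/tag.c may differ):

    #define __tag1 __attribute__((btf_decl_tag("tag1")))
    #define __tag2 __attribute__((btf_decl_tag("tag2")))

    /* the tags attach to the typedef name itself, not to the struct */
    typedef struct {
        int a;
        int b;
    } value_t __tag1 __tag2;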
2021-10-22  selftests/bpf: Test deduplication for BTF_KIND_DECL_TAG typedef  (Yonghong Song)
Add unit tests for deduplication of BTF_KIND_DECL_TAG to typedef types. Also changed a few comments from "tag" to "decl_tag" to match BTF_KIND_DECL_TAG enum value name. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211021195638.4019770-1-yhs@fb.com
2021-10-22  selftests/bpf: Add BTF_KIND_DECL_TAG typedef unit tests  (Yonghong Song)
Test good and bad variants of typedef BTF_KIND_DECL_TAG encoding. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211021195633.4019472-1-yhs@fb.com
2021-10-22  selftests/bpf: Fix flow dissector tests  (Stanislav Fomichev)
- update custom loader to search by name, not section name - update bpftool commands to use proper pin path Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211021214814.1236114-4-sdf@google.com
2021-10-22  libbpf: Use func name when pinning programs with LIBBPF_STRICT_SEC_NAME  (Stanislav Fomichev)
We can't use section names anymore because they are not unique, and pinning objects with multiple programs with the same progtype/secname will fail. [0] Closes: https://github.com/libbpf/libbpf/issues/273 Fixes: 33a2c75c55e2 ("libbpf: add internal pin_name") Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20211021214814.1236114-2-sdf@google.com
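A minimal user-space sketch of the behaviour this enables (the object path, pin directory, and error handling are illustrative only):

    #include <bpf/libbpf.h>

    int main(void)
    {
        struct bpf_object *obj;

        /* opt into libbpf 1.0 behaviours, which include LIBBPF_STRICT_SEC_NAME */
        libbpf_set_strict_mode(LIBBPF_STRICT_ALL);

        obj = bpf_object__open_file("myprog.bpf.o", NULL);
        if (!obj)
            return 1;
        if (bpf_object__load(obj))
            return 1;

        /* each program is pinned as /sys/fs/bpf/myapp/<function name>,
         * so two programs sharing a section name no longer collide */
        return bpf_object__pin_programs(obj, "/sys/fs/bpf/myapp");
    }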
2021-10-22  bpftool: Avoid leaking the JSON writer prepared for program metadata  (Quentin Monnet)
Bpftool creates a new JSON object for writing program metadata in plain text mode, regardless of metadata being present or not. Then this writer is freed if any metadata has been found and printed, but it leaks otherwise. We cannot destroy the object unconditionally, because the destructor prints an undesirable line break. Instead, make sure the writer is created only after we have found program metadata to print. Found with valgrind. Fixes: aff52e685eb3 ("bpftool: Support dumping metadata") Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211022094743.11052-1-quentin@isovalent.com
2021-10-22  selftests/bpf: Switch to new btf__type_cnt/btf__raw_data APIs  (Hengqi Chen)
Replace the calls to btf__get_nr_types/btf__get_raw_data in selftests with new APIs btf__type_cnt/btf__raw_data. The old APIs will be deprecated in libbpf v0.7+. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211022130623.1548429-6-hengqi.chen@gmail.com
2021-10-22  bpftool: Switch to new btf__type_cnt API  (Hengqi Chen)
Replace the call to btf__get_nr_types with new API btf__type_cnt. The old API will be deprecated in libbpf v0.7+. No functionality change. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211022130623.1548429-5-hengqi.chen@gmail.com
2021-10-22  tools/resolve_btfids: Switch to new btf__type_cnt API  (Hengqi Chen)
Replace the call to btf__get_nr_types with new API btf__type_cnt. The old API will be deprecated in libbpf v0.7+. No functionality change. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211022130623.1548429-4-hengqi.chen@gmail.com
2021-10-22  perf bpf: Switch to new btf__raw_data API  (Hengqi Chen)
Replace the call to btf__get_raw_data with new API btf__raw_data. The old APIs will be deprecated in libbpf v0.7+. No functionality change. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211022130623.1548429-3-hengqi.chen@gmail.com
2021-10-22  libbpf: Add btf__type_cnt() and btf__raw_data() APIs  (Hengqi Chen)
Add btf__type_cnt() and btf__raw_data() APIs and deprecate btf__get_nr_types() and btf__get_raw_data(), since the old APIs don't follow the libbpf naming convention for getters, which omit 'get' in the name (see [0]). btf__raw_data() is just an alias to the existing btf__get_raw_data(). btf__type_cnt() now returns the number of all types of the BTF object, including 'void'. [0] Closes: https://github.com/libbpf/libbpf/issues/279 Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211022130623.1548429-2-hengqi.chen@gmail.com
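A small sketch of the new getters (the object file name is illustrative; since type ID 0 is the implicit 'void', valid IDs run from 1 to btf__type_cnt() - 1):

    #include <stdio.h>
    #include <bpf/btf.h>

    int main(void)
    {
        struct btf *btf = btf__parse("prog.bpf.o", NULL);
        const void *raw;
        __u32 id, n, raw_size;

        if (!btf)
            return 1;

        n = btf__type_cnt(btf);              /* count includes 'void' */
        for (id = 1; id < n; id++) {
            const struct btf_type *t = btf__type_by_id(btf, id);

            printf("[%u] %s\n", id, btf__name_by_offset(btf, t->name_off));
        }

        raw = btf__raw_data(btf, &raw_size); /* replaces btf__get_raw_data() */
        printf("raw BTF size: %u bytes\n", raw_size);

        btf__free(btf);
        return raw ? 0 : 1;
    }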
2021-10-22  libbpf: Fix memory leak in btf__dedup()  (Mauricio Vásquez)
Free btf_dedup if btf_ensure_modifiable() returns error. Fixes: 919d2b1dbb07 ("libbpf: Allow modification of BTF and add btf__add_str API") Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211022202035.48868-1-mauricio@kinvolk.io
2021-10-22  selftests/bpf: Make perf_buffer selftests work on 4.9 kernel again  (Andrii Nakryiko)
A recent change to use tp/syscalls/sys_enter_nanosleep for perf_buffer selftests causes this selftest to fail on a 4.9 kernel in libbpf CI ([0]):

  libbpf: prog 'handle_sys_enter': failed to attach to perf_event FD 6: Invalid argument
  libbpf: prog 'handle_sys_enter': failed to attach to tracepoint 'syscalls/sys_enter_nanosleep': Invalid argument

It's not exactly clear why, because the perf_event itself is created for this tracepoint, but I can't even compile a 4.9 kernel locally, so it's hard to figure this out. If anyone has better luck and would like to help investigate this, I'd really appreciate it.

For now, unblock CI by switching back to raw_syscalls/sys_enter, but reduce the amount of unnecessary samples emitted by filtering by process ID. Use an explicit ARRAY map for that to make it work on 4.9 as well, because global data isn't yet supported there.

Fixes: aa274f98b269 ("selftests/bpf: Fix possible/online index mismatch in perf_buffer test")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211022201342.3490692-1-andrii@kernel.org
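The PID-filtering approach can be sketched on the BPF side roughly as follows (a hand-written approximation, not the actual selftest source; the map and function names are made up, and the explicit ARRAY map stands in for a global variable because global data maps aren't available on 4.9):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, int);
        __type(value, int);
    } my_pid_map SEC(".maps");      /* user space writes its PID into slot 0 */

    SEC("tp/raw_syscalls/sys_enter")
    int handle_sys_enter(void *ctx)
    {
        int zero = 0, cur_pid = bpf_get_current_pid_tgid() >> 32;
        int *expected = bpf_map_lookup_elem(&my_pid_map, &zero);

        if (!expected || *expected != cur_pid)
            return 0;               /* drop samples from unrelated processes */

        /* ... emit a perf buffer sample here ... */
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";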
2021-10-22  libbpf: Fix the use of aligned attribute  (Andrii Nakryiko)
When building libbpf sources out of the kernel tree (in the Github repo), we run into a compilation error due to an unknown __aligned attribute. It must be coming from some kernel header, which is not available to the Github sources. Use an explicit __attribute__((aligned(16))) instead. Fixes: 961632d54163 ("libbpf: Fix dumping non-aligned __int128") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211022192502.2975553-1-andrii@kernel.org
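The difference, roughly (the buffer name is illustrative; __aligned is the kernel-internal shorthand that is not defined when building from the GitHub mirror):

    /* before: relies on a kernel-only macro, breaks out-of-tree builds */
    /* static unsigned char buf[16] __aligned(16); */

    /* after: spell out the compiler attribute directly */
    static unsigned char buf[16] __attribute__((aligned(16)));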
2021-10-21  selftests/bpf: Switch to ".bss"/".rodata"/".data" lookups for internal maps  (Andrii Nakryiko)
Utilize libbpf's feature of allowing lookup of internal maps by their ELF section names. There is no need to guess or calculate the exact truncated prefix taken from the object name. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-11-andrii@kernel.org
2021-10-21  libbpf: Simplify look up by name of internal maps  (Andrii Nakryiko)
The map name that's assigned to internal maps (.rodata, .data, .bss, etc.) consists of a small prefix of the bpf_object's name and the ELF section name as a suffix. This makes it hard for users to "guess" the name to use for looking up by name with the bpf_object__find_map_by_name() API.

One proposal was to drop the object name prefix from the map name and just use ".rodata", ".data", etc. names. One downside called out was that when multiple BPF applications are active on the host, it will be hard to distinguish between multiple instances of .rodata and know which BPF object (app) they belong to. Having the first few characters, while quite limiting, can still give a bit of a clue, in general. Note, though, that btf_value_type_id for such global data maps (ARRAY) points to a DATASEC type, which encodes the full ELF name, so tools like bpftool can take advantage of this fact to "recover" the full original name of the map. This is also the reason why, for custom .data.* and .rodata.* maps, libbpf uses only their ELF names and doesn't prepend the object name at all. Another downside of such an approach is that it is not backwards compatible and, beyond direct uses of the bpf_object__find_map_by_name() API, would break any BPF skeleton generated using bpftool that was compiled with an older libbpf version.

Instead of causing all this pain, libbpf will still generate the map name using a combination of object name and ELF section name, but it will allow looking such maps up by their natural names, which correspond to their respective ELF section names. This means non-truncated ELF section names longer than 15 characters are going to be expected and supported. With such a set up, we get the best of both worlds: leave small bits of a clue about the BPF application that instantiated such maps, as well as making it easy for user apps to look up such maps at runtime. In this sense it closes the corresponding libbpf 1.0 issue ([0]). BPF skeletons will continue using full names for lookups.

[0] Closes: https://github.com/libbpf/libbpf/issues/275

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-10-andrii@kernel.org
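A short sketch of the resulting usage (the object file name is illustrative; after this change the ".rodata" lookup works regardless of the bpf_object's own name):

    #include <stdio.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
        struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", NULL);
        struct bpf_map *rodata;

        if (libbpf_get_error(obj))
            return 1;

        /* previously this required knowing the truncated "<obj>.rodata" name */
        rodata = bpf_object__find_map_by_name(obj, ".rodata");
        if (rodata)
            printf("found internal map: %s\n", bpf_map__name(rodata));

        bpf_object__close(obj);
        return 0;
    }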
2021-10-21  selftests/bpf: Demonstrate use of custom .rodata/.data sections  (Andrii Nakryiko)
Enhance existing selftests to demonstrate the use of custom .data/.rodata sections. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-9-andrii@kernel.org
2021-10-21  libbpf: Support multiple .rodata.* and .data.* BPF maps  (Andrii Nakryiko)
Add support for having multiple .rodata and .data data sections ([0]). .rodata/.data are supported as usual, but now .rodata.<whatever> and .data.<whatever> are also supported. Each such section will get its own backing BPF_MAP_TYPE_ARRAY, just like .rodata and .data. Multiple .bss maps are not supported, as the whole '.bss' name is confusing and might be deprecated soon; besides, the user would need to specify a custom ELF section with the SEC() attribute anyway, so we might as well stick to just the .data.* and .rodata.* convention. The user-visible map name for such new maps is going to be just their ELF section names. [0] https://github.com/libbpf/libbpf/issues/274 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-8-andrii@kernel.org
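The BPF-side usage can be sketched like this (the variable and section names are made up; it assumes the SEC() macro from bpf_helpers.h and a libbpf new enough to create the extra maps):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* lands in the default .rodata-backed map */
    const volatile int default_cfg = 1;

    /* each custom section gets its own BPF_MAP_TYPE_ARRAY, named
     * ".rodata.limits" and ".data.counters" respectively */
    const volatile int max_entries_cfg SEC(".rodata.limits") = 128;
    int processed_cnt SEC(".data.counters") = 0;

    SEC("tp/raw_syscalls/sys_enter")
    int count_enters(void *ctx)
    {
        if (default_cfg && processed_cnt < max_entries_cfg)
            processed_cnt++;
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";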
2021-10-21  bpftool: Improve skeleton generation for data maps without DATASEC type  (Andrii Nakryiko)
It can happen that some data sections (e.g., .rodata.cst16, containing compiler-populated string constants) won't have a corresponding BTF DATASEC type. Now that libbpf supports .rodata.* and .data.* sections, a situation like that will cause an invalid BPF skeleton to be generated that won't compile successfully, as some parts of the skeleton would assume memory-mapped struct definitions for each special data section. Fix this by generating empty struct definitions for such data sections. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-7-andrii@kernel.org
2021-10-21  bpftool: Support multiple .rodata/.data internal maps in skeleton  (Andrii Nakryiko)
Remove the assumption that there is only a single instance of each of the .rodata and .data internal maps. Nothing changes for '.rodata' and '.data' maps, but a new '.rodata.something' map will get a 'rodata_something' section in the BPF skeleton (as well as a struct bpf_map * field with the same field name in the maps section). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-6-andrii@kernel.org
2021-10-21  libbpf: Remove assumptions about uniqueness of .rodata/.data/.bss maps  (Andrii Nakryiko)
Remove internal libbpf assumption that there can be only one .rodata, .data, and .bss map per BPF object. To achieve that, extend and generalize the scheme that was used for keeping track of relocation ELF sections. Now each ELF section has a temporary extra index that keeps track of logical type of ELF section (relocations, data, read-only data, BSS). Switch relocation to this scheme, as well as .rodata/.data/.bss handling. We don't yet allow multiple .rodata, .data, and .bss sections, but no libbpf internal code makes an assumption that there can be only one of each and thus they can be explicitly referenced by a single index. Next patches will actually allow multiple .rodata and .data sections. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-5-andrii@kernel.org
2021-10-21  libbpf: Use Elf64-specific types explicitly for dealing with ELF  (Andrii Nakryiko)
Minimize the usage of class-agnostic gelf_xxx() APIs from libelf. These APIs require copying ELF data structures into local GElf_xxx structs and have a more cumbersome API. A BPF ELF file is defined to always be a 64-bit ELF object, even when intended to run on 32-bit host architectures, so there is no need to do class-agnostic conversions everywhere. The BPF static linker implementation within libbpf has been using Elf64-specific types since its initial implementation. Add two simple helpers, elf_sym_by_idx() and elf_rel_by_idx(), for more succinct direct access to ELF symbol and relocation records within the ELF data itself, and switch all GElf_xxx usage to Elf64_xxx equivalents. The only remaining place within libbpf.c that's still using the gelf API is gelf_getclass(), as there doesn't seem to be a direct way to get the underlying ELF bitness. No functional changes intended. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-4-andrii@kernel.org
2021-10-21  libbpf: Extract ELF processing state into separate struct  (Andrii Nakryiko)
Name currently anonymous internal struct that keeps ELF-related state for bpf_object. Just a bit of clean up, no functional changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021014404.2635234-3-andrii@kernel.org
2021-10-21  libbpf: Deprecate btf__finalize_data() and move it into libbpf.c  (Andrii Nakryiko)
There isn't a good use case where anyone but libbpf itself needs to call btf__finalize_data(). It was implemented for internal use and it's not clear why it was made into a public API in the first place. To function, it requires active ELF data, which is stored inside bpf_object for the duration of the opening phase only. But the only BTF that needs bpf_object's ELF is that bpf_object's own BTF, which libbpf fixes up automatically during the bpf_object__open() operation anyway. There is no need for any additional fix-up and no reasonable scenario where it's useful and appropriate. Thus, btf__finalize_data() is just an API atavism and is better removed.

So this patch marks it as deprecated immediately (v0.6+) and moves the code from btf.c into libbpf.c, where it's used in the context of the bpf_object opening phase. Such code co-location allows making the code structure more straightforward and removing the bpf_object__section_size() and bpf_object__variable_offset() internal helpers from libbpf_internal.h, making them static. Their naming is also adjusted so as not to create the wrong impression that they are some sort of method of bpf_object. They are internal helpers and are named accordingly.

This is part of the libbpf 1.0 effort ([0]).

[0] Closes: https://github.com/libbpf/libbpf/issues/276

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-2-andrii@kernel.org
2021-10-21  selftests/bpf: Use nanosleep tracepoint in perf buffer test  (Jiri Olsa)
The perf buffer test triggers a trace with the nanosleep syscall but monitors all syscalls, which results in a lot of data in the buffer and makes it harder to debug. Let's lower the trace traffic and monitor just the nanosleep syscall. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20211021114132.8196-4-jolsa@kernel.org
2021-10-21  selftests/bpf: Fix possible/online index mismatch in perf_buffer test  (Jiri Olsa)
The perf_buffer test fails on a system with offline cpus:

  # test_progs -t perf_buffer
  serial_test_perf_buffer:PASS:nr_cpus 0 nsec
  serial_test_perf_buffer:PASS:nr_on_cpus 0 nsec
  serial_test_perf_buffer:PASS:skel_load 0 nsec
  serial_test_perf_buffer:PASS:attach_kprobe 0 nsec
  serial_test_perf_buffer:PASS:perf_buf__new 0 nsec
  serial_test_perf_buffer:PASS:epoll_fd 0 nsec
  skipping offline CPU #4
  serial_test_perf_buffer:PASS:perf_buffer__poll 0 nsec
  serial_test_perf_buffer:PASS:seen_cpu_cnt 0 nsec
  serial_test_perf_buffer:PASS:buf_cnt 0 nsec
  ...
  serial_test_perf_buffer:PASS:fd_check 0 nsec
  serial_test_perf_buffer:PASS:drain_buf 0 nsec
  serial_test_perf_buffer:PASS:consume_buf 0 nsec
  serial_test_perf_buffer:FAIL:cpu_seen cpu 5 not seen
  #88 perf_buffer:FAIL
  Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED

If the offline cpu is from the middle of the possible set, we get a mismatch between the possible and online cpu buffers. The perf buffer test calls perf_buffer__consume_buffer for all 'possible' cpus, but the library holds only 'online' cpu buffers, and perf_buffer__consume_buffer returns them based on index. Add an extra (online) index to keep track of the online buffers; we still need the original (possible) index to trigger the trace on the proper cpu.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211021114132.8196-3-jolsa@kernel.org
2021-10-21  selftests/bpf: Fix perf_buffer test on system with offline cpus  (Jiri Olsa)
The perf_buffer test fails on a system with offline cpus:

  # test_progs -t perf_buffer
  test_perf_buffer:PASS:nr_cpus 0 nsec
  test_perf_buffer:PASS:nr_on_cpus 0 nsec
  test_perf_buffer:PASS:skel_load 0 nsec
  test_perf_buffer:PASS:attach_kprobe 0 nsec
  test_perf_buffer:PASS:perf_buf__new 0 nsec
  test_perf_buffer:PASS:epoll_fd 0 nsec
  skipping offline CPU #24
  skipping offline CPU #25
  skipping offline CPU #26
  skipping offline CPU #27
  skipping offline CPU #28
  skipping offline CPU #29
  skipping offline CPU #30
  skipping offline CPU #31
  test_perf_buffer:PASS:perf_buffer__poll 0 nsec
  test_perf_buffer:PASS:seen_cpu_cnt 0 nsec
  test_perf_buffer:FAIL:buf_cnt got 24, expected 32
  Summary: 0/0 PASSED, 0 SKIPPED, 1 FAILED

Change the test to check online cpus instead of possible ones.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211021114132.8196-2-jolsa@kernel.org
2021-10-21  selftests/bpf: Add verif_stats test  (Dave Marchevsky)
A verified_insns field was added to the response of the bpf_obj_get_info_by_fd call on a prog. Confirm that it's being populated by loading a simple program and asking for its info. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211020074818.1017682-3-davemarchevsky@fb.com
2021-10-21  bpf: Add verified_insns to bpf_prog_info and fdinfo  (Dave Marchevsky)
This stat is currently printed in the verifier log and not stored anywhere. To ease consumption of this data, add a field to bpf_prog_aux so it can be exposed via BPF_OBJ_GET_INFO_BY_FD and fdinfo. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20211020074818.1017682-2-davemarchevsky@fb.com
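A user-space sketch of reading the new field (it assumes kernel and UAPI headers new enough to carry verified_insns, and a prog_fd obtained elsewhere; the function name is illustrative):

    #include <stdio.h>
    #include <bpf/bpf.h>
    #include <linux/bpf.h>

    /* print the number of instructions the verifier processed for prog_fd */
    static int print_verified_insns(int prog_fd)
    {
        struct bpf_prog_info info = {};
        __u32 info_len = sizeof(info);

        if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len))
            return -1;

        printf("verified_insns: %llu\n",
               (unsigned long long)info.verified_insns);
        return 0;
    }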
2021-10-21  libbpf: Fix ptr_is_aligned() usages  (Ilya Leoshkevich)
Currently ptr_is_aligned() takes size, and not alignment, as a parameter, which may be overly pessimistic e.g. for __i128 on s390, which must be only 8-byte aligned. Fix by using btf__align_of(). Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211021104658.624944-2-iii@linux.ibm.com
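The idea, roughly (the function and variable names are illustrative, not libbpf's internal code): derive the required alignment from BTF via btf__align_of() instead of using the type's size, so e.g. a 16-byte __int128 that only needs 8-byte alignment is not rejected.

    #include <stdbool.h>
    #include <stdint.h>
    #include <bpf/btf.h>

    /* check whether 'ptr' is sufficiently aligned for BTF type 'type_id' */
    static bool ptr_is_aligned_for_type(const struct btf *btf, __u32 type_id,
                                        const void *ptr)
    {
        int align = btf__align_of(btf, type_id);

        if (align <= 0)             /* error or unknown alignment */
            return false;
        return (uintptr_t)ptr % align == 0;
    }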
2021-10-21  selftests/bpf: Test bpf_skc_to_unix_sock() helper  (Hengqi Chen)
Add a new test which triggers the unix_listen kernel function to test the bpf_skc_to_unix_sock helper. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211021134752.1223426-3-hengqi.chen@gmail.com
2021-10-21  bpf: Add bpf_skc_to_unix_sock() helper  (Hengqi Chen)
The helper is used in tracing programs to cast a socket pointer to a unix_sock pointer. The return value could be NULL if the casting is illegal. Suggested-by: Yonghong Song <yhs@fb.com> Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211021134752.1223426-2-hengqi.chen@gmail.com
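A rough BPF-side sketch of how such a helper is typically used (this is not the selftest itself; it assumes CO-RE via vmlinux.h, a kernel that provides the helper, and the program/variable names are made up):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("fentry/unix_listen")
    int BPF_PROG(on_unix_listen, struct socket *sock)
    {
        struct unix_sock *us = bpf_skc_to_unix_sock(sock->sk);

        if (!us)                    /* cast failed: not an AF_UNIX socket */
            return 0;

        bpf_printk("unix_listen on unix_sock %p", us);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";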
2021-10-20  selftests/bpf: Some more atomic tests  (Brendan Jackman)
Some new verifier tests that hit some important gaps in the parameter space for atomic ops. There are already exhaustive tests for the JIT part in lib/test_bpf.c, but these exercise the verifier too. Signed-off-by: Brendan Jackman <jackmanb@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20211015093318.1273686-1-jackmanb@google.com
2021-10-20  libbpf: Fix dumping non-aligned __int128  (Ilya Leoshkevich)
Non-aligned integers are dumped as bitfields, which is supported for at most 64-bit integers. Fix by using the same trick as btf_dump_float_data(): copy non-aligned values to the local buffer. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211013160902.428340-4-iii@linux.ibm.com
2021-10-20  libbpf: Fix dumping big-endian bitfields  (Ilya Leoshkevich)
On big-endian arches not only bytes, but also bits are numbered in reverse order (see e.g. S/390 ELF ABI Supplement, but this is also true for other big-endian arches as well). Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211013160902.428340-3-iii@linux.ibm.com
2021-10-20  selftests/bpf: Use cpu_number only on arches that have it  (Ilya Leoshkevich)
cpu_number exists only on Intel and aarch64, so skip the test involving it on other arches. An alternative would be to replace it with an exported, non-ifdefed, primitive-typed percpu variable from the common code, but there appears to be none. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211013160902.428340-2-iii@linux.ibm.com
2021-10-20  bpftool: Remove useless #include to <perf-sys.h> from map_perf_ring.c  (Quentin Monnet)
The header is no longer needed since the event_pipe implementation was updated to rely on libbpf's perf_buffer. This makes bpftool free of dependencies to perf files, and we can update the Makefile accordingly. Fixes: 9b190f185d2f ("tools/bpftool: switch map event_pipe to libbpf's perf_buffer") Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211020094826.16046-1-quentin@isovalent.com
2021-10-20  selftests/bpf: Remove duplicated include in cgroup_helpers  (Wan Jiabing)
Fix following checkincludes.pl warning:

  ./scripts/checkincludes.pl tools/testing/selftests/bpf/cgroup_helpers.c
  tools/testing/selftests/bpf/cgroup_helpers.c: unistd.h is included more than once.

Signed-off-by: Wan Jiabing <wanjiabing@vivo.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211012023231.19911-1-wanjiabing@vivo.com
2021-10-20  libbpf: Migrate internal use of bpf_program__get_prog_info_linear  (Dave Marchevsky)
In preparation for bpf_program__get_prog_info_linear deprecation, move the single use in libbpf to call bpf_obj_get_info_by_fd directly. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20211011082031.4148337-2-davemarchevsky@fb.com
2021-10-19  bpftool: Turn check on zlib from a phony target into a conditional error  (Quentin Monnet)
One of bpftool's object files depends on zlib. To make sure we do not attempt to build that object when the library is not available, commit d66fa3c70e59 ("tools: bpftool: add feature check for zlib") introduced a feature check to detect whether zlib is present. This check comes as a rule for which the target ("zdep") is a nonexistent file (phony target), which means that the Makefile always attempts to rebuild it. It is mostly harmless. However, one side effect is that, on running again once bpftool is already built, make considers that "something" (the recipe for zdep) was executed, and does not print the usual message "make: Nothing to be done for 'all'", which is a user-friendly indicator that the build went fine.

Before, with some level of debugging information:

  $ make --debug=m
  [...]
  Reading makefiles...

  Auto-detecting system features:
  ...                        libbfd: [ on ]
  ...        disassembler-four-args: [ on ]
  ...                          zlib: [ on ]
  ...                        libcap: [ on ]
  ...               clang-bpf-co-re: [ on ]

  Updating makefiles....
  Updating goal targets....
  File 'all' does not exist.
    File 'zdep' does not exist.
   Must remake target 'zdep'.
  File 'all' does not exist.
  Must remake target 'all'.
  Successfully remade target file 'all'.

After the patch:

  $ make --debug=m
  [...]

  Auto-detecting system features:
  ...                        libbfd: [ on ]
  ...        disassembler-four-args: [ on ]
  ...                          zlib: [ on ]
  ...                        libcap: [ on ]
  ...               clang-bpf-co-re: [ on ]

  Updating makefiles....
  Updating goal targets....
  File 'all' does not exist.
  Must remake target 'all'.
  Successfully remade target file 'all'.
  make: Nothing to be done for 'all'.

(Note the last line, which is not part of make's debug information.)

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211009210341.6291-4-quentin@isovalent.com
2021-10-19  bpftool: Do not FORCE-build libbpf  (Quentin Monnet)
In bpftool's Makefile, libbpf has a FORCE dependency, to make sure we rebuild it in case its source files changed. Let's instead make the rebuild depend on the source files directly, through a call to the "$(wildcard ...)" function. This avoids descending into libbpf's directory if there is nothing to update. Do the same for the bootstrap libbpf version. This results in a slightly faster operation and less verbose output when running make a second time in bpftool's directory.

Before:

  Auto-detecting system features:
  ...                        libbfd: [ on ]
  ...        disassembler-four-args: [ on ]
  ...                          zlib: [ on ]
  ...                        libcap: [ on ]
  ...               clang-bpf-co-re: [ on ]

  make[1]: Entering directory '/root/dev/linux/tools/lib/bpf'
  make[1]: Entering directory '/root/dev/linux/tools/lib/bpf'
  make[1]: Nothing to be done for 'install_headers'.
  make[1]: Leaving directory '/root/dev/linux/tools/lib/bpf'
  make[1]: Leaving directory '/root/dev/linux/tools/lib/bpf'

After:

  Auto-detecting system features:
  ...                        libbfd: [ on ]
  ...        disassembler-four-args: [ on ]
  ...                          zlib: [ on ]
  ...                        libcap: [ on ]
  ...               clang-bpf-co-re: [ on ]

Other ways to clean up the output could be to pass the "-s" option, or to redirect the output to >/dev/null, when calling make recursively to descend into libbpf's directory. However, this would suppress some useful output if something goes wrong during the build. A better alternative would be to pass "--no-print-directory" to the recursive make, but that would still leave us with some noise for "install_headers". Skipping the descent into libbpf's directory if no source file has changed works best, and seems the most logical option overall.

Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211009210341.6291-3-quentin@isovalent.com
2021-10-19  bpftool: Fix install for libbpf's internal header(s)  (Quentin Monnet)
We recently updated bpftool's Makefile to make it install the headers from libbpf, instead of pulling them directly from libbpf's directory. There is also an additional header, internal to libbpf, that needs be installed. The way that bpftool's Makefile installs that particular header is currently correct, but would break if we were to modify $(LIBBPF_INTERNAL_HDRS) to make it point to more than one header. Use a static pattern rule instead, so that the Makefile can withstand the addition of other headers to install. The objective is simply to make the Makefile more robust. It should _not_ be read as an invitation to import more internal headers from libbpf into bpftool. Fixes: f012ade10b34 ("bpftool: Install libbpf headers instead of including the dir") Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20211009210341.6291-2-quentin@isovalent.com