Commit Graph

76 Commits

Author SHA1 Message Date
Viktor Malik 2dddaefcab
bpftool: improve skeleton backwards compat with old buggy libbpfs
JIRA: https://issues.redhat.com/browse/RHEL-30774

commit 06e71ad534881d2a09ced7509d2ab0daedac4c96
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Jul 8 13:45:38 2024 -0700

    bpftool: improve skeleton backwards compat with old buggy libbpfs
    
    Old versions of libbpf don't handle varying sizes of bpf_map_skeleton
    struct correctly. As such, BPF skeleton generated by newest bpftool
    might not be compatible with older libbpf (though only when libbpf is
    used as a shared library), even though it, by design, should.
    
    Going forward libbpf will be fixed, plus we'll release bug-fixed
    versions of relevant old libbpfs, but meanwhile try to mitigate from the
    bpftool side by conservatively assuming the older and smaller definition of
    bpf_map_skeleton when possible, meaning when there are no struct_ops maps.
    
    If there are struct_ops, then presumably user would like to have
    auto-attaching logic and struct_ops map link placeholders, so use the
    full bpf_map_skeleton definition in that case.
    
    Acked-by: Quentin Monnet <qmo@kernel.org>
    Co-developed-by: Mykyta Yatsenko <yatsenko@meta.com>
    Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Eduard Zingerman <eddyz87@gmail.com>
    Link: https://lore.kernel.org/r/20240708204540.4188946-2-andrii@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-11-26 15:55:16 +01:00
Viktor Malik dd899215f4
bpftool: Allow compile-time checks of BPF map auto-attach support in skeleton
JIRA: https://issues.redhat.com/browse/RHEL-30774

commit 651337c7ca82c259bf5c8fe9beda9673531a0031
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Jun 18 11:38:32 2024 -0700

    bpftool: Allow compile-time checks of BPF map auto-attach support in skeleton
    
    New versions of bpftool now emit additional link placeholders for BPF
    maps (struct_ops maps are the only maps right now that support
    attachment), and set up BPF skeleton in such a way that libbpf will
    auto-attach BPF maps automatically, assuming libbpf is recent enough
    (v1.5+). Old libbpf will do nothing with those links and won't attempt
    to auto-attach maps. This allows user code to handle both pre-v1.5 and
    v1.5+ versions of libbpf at runtime, if necessary.
    
    But if users don't (or don't want to) control the bpftool version that
    generates the skeleton, then they can't just assume that the skeleton will have
    link placeholders. To make this detection possible and easy, let's add
    the following to generated skeleton header file:
    
      #define BPF_SKEL_SUPPORTS_MAP_AUTO_ATTACH 1
    
    This can be used during compilation time to guard code that accesses
    skel->links.<map> slots.
    
    Note, if auto-attachment is undesirable, libbpf allows disabling it
    through bpf_map__set_autoattach(map, false). This is necessary only on
    libbpf v1.5+; older libbpf doesn't support map auto-attach anyway.
    
    Libbpf version can be detected at compilation time using
    LIBBPF_MAJOR_VERSION and LIBBPF_MINOR_VERSION macros, or at runtime with
    libbpf_major_version() and libbpf_minor_version() APIs.
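
    A minimal usage sketch of both checks; the skeleton "my_skel" and the
    struct_ops map "my_ops" are made-up names, not from this patch:

      #include <stdio.h>
      #include <bpf/libbpf_version.h>   /* LIBBPF_MAJOR_VERSION / LIBBPF_MINOR_VERSION */
      #include "my_skel.skel.h"         /* hypothetical bpftool-generated skeleton */

      static void report_links(struct my_skel *skel)
      {
      #ifdef BPF_SKEL_SUPPORTS_MAP_AUTO_ATTACH
              /* skel->links.<map> slots exist only in skeletons from newer bpftool */
              if (skel->links.my_ops)
                      printf("struct_ops map was auto-attached\n");
      #else
              (void)skel;   /* older skeleton: no per-map link slots to inspect */
      #endif
      #if LIBBPF_MAJOR_VERSION == 1 && LIBBPF_MINOR_VERSION < 5
              printf("note: this libbpf predates map auto-attach\n");
      #endif
      }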
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Quentin Monnet <qmo@kernel.org>
    Link: https://lore.kernel.org/bpf/20240618183832.2535876-1-andrii@kernel.org

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-11-26 14:40:10 +01:00
Viktor Malik 20a63f1df6
libbpf: Auto-attach struct_ops BPF maps in BPF skeleton
JIRA: https://issues.redhat.com/browse/RHEL-30774

commit 08ac454e258e38813afb906650f19acce3afd982
Author: Mykyta Yatsenko <yatsenko@meta.com>
Date:   Wed Jun 5 18:51:35 2024 +0100

    libbpf: Auto-attach struct_ops BPF maps in BPF skeleton
    
    Similarly to `bpf_program`, support `bpf_map` automatic attachment in
    `bpf_object__attach_skeleton`. Currently only struct_ops maps can be
    attached.
    
    On bpftool side, code-generate links in skeleton struct for struct_ops maps.
    Similarly to `bpf_program_skeleton`, set links in `bpf_map_skeleton`.
    
    On libbpf side, extend `bpf_map` with new `autoattach` field to support
    enabling or disabling autoattach functionality, introducing
    getter/setter for this field.
    
    `bpf_object__(attach|detach)_skeleton` is extended with
    attaching/detaching struct_ops maps logic.
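
    A short sketch of the new setter in use; the skeleton "my_skel" and its
    struct_ops map "my_ops" are invented for illustration:

      #include <errno.h>
      #include "my_skel.skel.h"   /* hypothetical bpftool-generated skeleton */

      int run(void)
      {
              struct my_skel *skel = my_skel__open_and_load();
              int err;

              if (!skel)
                      return -errno;
              /* opt this map out of auto-attach (getter/setter added here, libbpf v1.5+) */
              bpf_map__set_autoattach(skel->maps.my_ops, false);
              err = my_skel__attach(skel);   /* programs attach; the struct_ops map is skipped */
              my_skel__destroy(skel);
              return err;
      }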
    
    Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20240605175135.117127-1-yatsenko@meta.com

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-11-26 14:40:05 +01:00
Viktor Malik dfe85d39d1
bpftool: Use BTF field iterator in btfgen
JIRA: https://issues.redhat.com/browse/RHEL-30774

commit e1a8630291fde2a0edac2955e3df48587dac9906
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Jun 4 17:16:28 2024 -0700

    bpftool: Use BTF field iterator in btfgen
    
    Switch bpftool's code which is using libbpf-internal
    btf_type_visit_type_ids() helper to new btf_field_iter functionality.
    
    This makes bpftool code simpler, but also unblocks removing libbpf's
    btf_type_visit_type_ids() helper completely.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Tested-by: Alan Maguire <alan.maguire@oracle.com>
    Reviewed-by: Quentin Monnet <qmo@kernel.org>
    Acked-by: Eduard Zingerman <eddyz87@gmail.com>
    Acked-by: Jiri Olsa <jolsa@kernel.org>
    Link: https://lore.kernel.org/bpf/20240605001629.4061937-5-andrii@kernel.org

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-11-26 14:40:05 +01:00
Viktor Malik 8083c3de39
bpftool: Use __typeof__() instead of typeof() in BPF skeleton
JIRA: https://issues.redhat.com/browse/RHEL-30773

commit 2a24e2485722b0e12e17a2bd473bd15c9e420bdb
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Apr 1 10:07:13 2024 -0700

    bpftool: Use __typeof__() instead of typeof() in BPF skeleton
    
    When generated BPF skeleton header is included in C++ code base, some
    compiler setups will emit warning about using language extensions due to
    typeof() usage, resulting in something like:
    
      error: extension used [-Werror,-Wlanguage-extension-token]
      obj->struct_ops.empty_tcp_ca = (typeof(obj->struct_ops.empty_tcp_ca))
                                      ^
    
    It looks like __typeof__() is a preferred way to do typeof() with better
    C++ compatibility behavior, so switch to that. With __typeof__() we get
    no such warning.
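
    A standalone illustration of the two spellings (not taken from the patch
    itself):

      void example(void)
      {
              int x;
              void *p = &x;
              int *a = (typeof(a))p;      /* GNU spelling: C++ setups may warn (-Wlanguage-extension-token) */
              int *b = (__typeof__(b))p;  /* double-underscore spelling: same meaning, no warning */

              (void)a; (void)b;
      }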
    
    Fixes: c2a0257c1edf ("bpftool: Cast pointers for shadow types explicitly.")
    Fixes: 00389c58ffe9 ("bpftool: Add support for subskeletons")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Kui-Feng Lee <thinker.li@gmail.com>
    Acked-by: Quentin Monnet <qmo@kernel.org>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Link: https://lore.kernel.org/bpf/20240401170713.2081368-1-andrii@kernel.org

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-11-07 13:58:41 +01:00
Viktor Malik 0826582cff
bpftool: Cast pointers for shadow types explicitly.
JIRA: https://issues.redhat.com/browse/RHEL-30773

commit c2a0257c1edf16c6acd2afac7572d7e9043b6577
Author: Kui-Feng Lee <thinker.li@gmail.com>
Date:   Mon Mar 11 18:37:26 2024 -0700

    bpftool: Cast pointers for shadow types explicitly.
    
    According to a report, skeletons fail to assign shadow pointers when being
    compiled with C++ programs. Unlike C doing implicit casting for void
    pointers, C++ requires an explicit casting.
    
    To support C++, we do explicit casting for each shadow pointer.
    
    Also add struct_ops_module.skel.h to test_cpp to validate C++
    compilation as part of BPF selftests.
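
    Roughly what such a generated assignment looks like once combined with the
    __typeof__() change listed above; the skeleton and map names are
    illustrative. bpf_map__initial_value() returns a void *, which C++ will not
    implicitly convert to the typed shadow pointer:

      size_t sz;

      skel->struct_ops.my_ops = (__typeof__(skel->struct_ops.my_ops))
              bpf_map__initial_value(skel->maps.my_ops, &sz);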
    
    Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Yonghong Song <yonghong.song@linux.dev>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20240312013726.1780720-1-thinker.li@gmail.com

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-11-07 13:58:25 +01:00
Jerome Marchand f24684ff64 libbpf, selftests/bpf: Adjust libbpf, bpftool, selftests to match LLVM
JIRA: https://issues.redhat.com/browse/RHEL-23649

commit 10ebe835c937a11870690aa44c7c970fe906ff54
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Thu Mar 14 19:18:32 2024 -0700

    libbpf, selftests/bpf: Adjust libbpf, bpftool, selftests to match LLVM

    The selftests use the __arena macro to tell LLVM about special
    pointers. For LLVM there is nothing "arena"
    about them. They are simply pointers in a different address space.
    Hence LLVM diff https://github.com/llvm/llvm-project/pull/85161 renamed:
    . macro __BPF_FEATURE_ARENA_CAST -> __BPF_FEATURE_ADDR_SPACE_CAST
    . global variables in __attribute__((address_space(N))) are now
      placed in section named ".addr_space.N" instead of ".arena.N".

    Adjust libbpf, bpftool, and selftests to match LLVM.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Stanislav Fomichev <sdf@google.com>
    Link: https://lore.kernel.org/bpf/20240315021834.62988-3-alexei.starovoitov@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2024-10-15 10:49:16 +02:00
Jerome Marchand 044119b1e6 libbpf: Recognize __arena global variables.
JIRA: https://issues.redhat.com/browse/RHEL-23649

commit 2e7ba4f8fd1fa879b37db0b738c23ba2af8292ee
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Mar 7 17:08:08 2024 -0800

    libbpf: Recognize __arena global variables.

    LLVM automatically places __arena variables into ".arena.1" ELF section.
    In order to use such global variables bpf program must include definition
    of arena map in ".maps" section, like:
    struct {
           __uint(type, BPF_MAP_TYPE_ARENA);
           __uint(map_flags, BPF_F_MMAPABLE);
           __uint(max_entries, 1000);         /* number of pages */
           __ulong(map_extra, 2ull << 44);    /* start of mmap() region */
    } arena SEC(".maps");

    libbpf recognizes both uses of arena and creates single `struct bpf_map *`
    instance in libbpf APIs.
    ".arena.1" ELF section data is used as initial data image, which is exposed
    through skeleton and bpf_map__initial_value() to the user, if they need to tune
    it before the load phase. During load phase, this initial image is copied over
    into mmap()'ed region corresponding to arena, and discarded.

    A few small checks here and there had to be added to make sure this
    approach works with bpf_map__initial_value(), mostly due to hard-coded
    assumption that map->mmaped is set up with mmap() syscall and should be
    munmap()'ed. For arena, .arena.1 can be (much) smaller than maximum
    arena size, so this smaller data size has to be tracked separately.
    Given it is enforced that there is only one arena for entire bpf_object
    instance, we just keep it in a separate field. This can be generalized
    if necessary later.

    All global variables from ".arena.1" section are accessible from user space
    via skel->arena->name_of_var.
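
    A sketch of that access pattern with a made-up skeleton "my_skel" and an
    __arena global "int foo":

      struct my_skel *skel = my_skel__open();

      skel->arena->foo = 123;      /* staged in the initial .arena.1 data image */
      my_skel__load(skel);         /* image is copied into the mmap()'ed arena */
      printf("foo = %d\n", skel->arena->foo);   /* now backed by the live arena mapping */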

    For bss/data/rodata the skeleton/libbpf perform the following sequence:
    1. addr = mmap(MAP_ANONYMOUS)
    2. user space optionally modifies global vars
    3. map_fd = bpf_create_map()
    4. bpf_update_map_elem(map_fd, addr) // to store values into the kernel
    5. mmap(addr, MAP_FIXED, map_fd)
    after step 5 user space sees the values it wrote at step 2 at the same addresses

    arena doesn't support update_map_elem. Hence skeleton/libbpf do:
    1. addr = malloc(sizeof SEC ".arena.1")
    2. user space optionally modifies global vars
    3. map_fd = bpf_create_map(MAP_TYPE_ARENA)
    4. real_addr = mmap(map->map_extra, MAP_SHARED | MAP_FIXED, map_fd)
    5. memcpy(real_addr, addr) // this will fault-in and allocate pages

    In the end, the look and feel of global data vs __arena global data is the same from
    the bpf prog's pov.

    Another complication is:
    struct {
      __uint(type, BPF_MAP_TYPE_ARENA);
    } arena SEC(".maps");

    int __arena foo;
    int bar;

      ptr1 = &foo;   // relocation against ".arena.1" section
      ptr2 = &arena; // relocation against ".maps" section
      ptr3 = &bar;   // relocation against ".bss" section

    For the kernel, ptr1 and ptr2 point to the same arena's map_fd
    while ptr3 points to a different global array's map_fd.
    For the verifier:
    ptr1->type == unknown_scalar
    ptr2->type == const_ptr_to_map
    ptr3->type == ptr_to_map_value

    After verification, from JIT pov all 3 ptr-s are normal ld_imm64 insns.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20240308010812.89848-11-alexei.starovoitov@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2024-10-15 10:49:15 +02:00
Jerome Marchand 0d8850cd88 bpftool: rename is_internal_mmapable_map into is_mmapable_map
JIRA: https://issues.redhat.com/browse/RHEL-23649

commit 1576b07961971d4eeb0e269c7133e9a6d430daf8
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Mar 6 19:12:27 2024 -0800

    bpftool: rename is_internal_mmapable_map into is_mmapable_map

    It's not restricted to working with "internal" maps; it cares about any
    map that can be mmap'ed. Reflect that in a more succinct and generic name.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/r/20240307031228.42896-6-alexei.starovoitov@gmail.com
    Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2024-10-15 10:49:14 +02:00
Jerome Marchand 07405420ea bpftool: Generated shadow variables for struct_ops maps.
JIRA: https://issues.redhat.com/browse/RHEL-23649

commit a7b0fa352eafef95bd0d736ca94965d3f884ad18
Author: Kui-Feng Lee <thinker.li@gmail.com>
Date:   Wed Feb 28 22:45:21 2024 -0800

    bpftool: Generated shadow variables for struct_ops maps.

    Declares and defines a pointer of the shadow type for each struct_ops map.

    The code generator will create an anonymous struct type as the shadow type
    for each struct_ops map. The shadow type is translated from the original
    struct type of the map. Users of the skeleton use these pointers to
    access the values of struct_ops maps.

    However, shadow types only support certain types of fields, including
    scalar types and function pointers. Any fields of unsupported types are
    translated into an array of characters to occupy the space of the original
    field. Function pointers are translated into pointers of the struct
    bpf_program. Additionally, padding fields are generated to occupy the space
    between two consecutive fields.

    The pointers of shadow types of struct_ops maps are initialized when
    *__open_opts() in skeletons is called. For a map called FOO, the user can
    access it through the pointer at skel->struct_ops.FOO.
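
    An illustrative open-phase snippet (all names invented): scalar fields are
    assigned directly, function-pointer fields take struct bpf_program pointers
    from the same skeleton:

      struct my_skel *skel = my_skel__open();

      skel->struct_ops.FOO->some_tunable = 42;                  /* scalar field */
      skel->struct_ops.FOO->handler = skel->progs.my_handler;   /* translated function pointer */
      my_skel__load(skel);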

    Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20240229064523.2091270-4-thinker.li@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2024-10-15 10:49:12 +02:00
Jerome Marchand 6fb45ecff2 bpftool: Be more portable by using POSIX's basename()
JIRA: https://issues.redhat.com/browse/RHEL-23649

commit 29788f39a4171dd48a6d19eb78cf2ab168c4349a
Author: Arnaldo Carvalho de Melo <acme@kernel.org>
Date:   Mon Jan 29 11:33:26 2024 -0300

    bpftool: Be more portable by using POSIX's basename()

    musl libc had the basename() prototype in string.h, but this is a
    glibc-ism; they have now removed the _GNU_SOURCE bits in their devel distro,
    Alpine Linux edge:

      https://git.musl-libc.org/cgit/musl/commit/?id=725e17ed6dff4d0cd22487bb64470881e86a92e7

    So let's use the POSIX version; the whole rationale is spelled out at:

      https://gitlab.alpinelinux.org/alpine/aports/-/issues/15643

    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Jiri Olsa <olsajiri@gmail.com>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/lkml/ZZhsPs00TI75RdAr@kernel.org
    Link: https://lore.kernel.org/bpf/Zbe3NuOgaupvUcpF@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2024-10-15 10:49:05 +02:00
Artem Savkov c6fcef255a bpftool: Align bpf_load_and_run_opts insns and data
JIRA: https://issues.redhat.com/browse/RHEL-23643

commit 1be84ca53ca0421c781f9ec007cd8bccbb58f763
Author: Ian Rogers <irogers@google.com>
Date:   Fri Oct 6 21:44:39 2023 -0700

    bpftool: Align bpf_load_and_run_opts insns and data
    
    A C string lacks alignment so use aligned arrays to avoid potential
    alignment problems. Switch to using sizeof (less 1 for the \0
    terminator) rather than a hardcode size constant.
    
    Signed-off-by: Ian Rogers <irogers@google.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20231007044439.25171-2-irogers@google.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2024-03-27 10:27:53 +01:00
Artem Savkov 189700383d bpftool: Align output skeleton ELF code
JIRA: https://issues.redhat.com/browse/RHEL-23643

commit 23671f4dfd10b48b4a2fee4768886f0d8ec55b7e
Author: Ian Rogers <irogers@google.com>
Date:   Fri Oct 6 21:44:38 2023 -0700

    bpftool: Align output skeleton ELF code
    
    libbpf accesses the ELF data requiring at least 8 byte alignment,
    however, the data is generated into a C string that doesn't guarantee
    alignment. Fix this by assigning to an aligned char array. Use sizeof
    on the array, less one for the \0 terminator, rather than generating a
    constant.
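
    The resulting shape of the generated helper, sketched (identifiers and the
    embedded bytes are abbreviated here):

      static inline const void *my_skel__elf_bytes(size_t *sz)
      {
              static const char data[] __attribute__((__aligned__(8))) =
                      "\x7f""ELF\x02\x01\x01...";   /* embedded object file, truncated */

              *sz = sizeof(data) - 1;   /* drop the implicit '\0' terminator */
              return (const void *)data;
      }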
    
    Fixes: a6cc6b34b93e ("bpftool: Provide a helper method for accessing skeleton's embedded ELF data")
    Signed-off-by: Ian Rogers <irogers@google.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20231007044439.25171-1-irogers@google.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2024-03-27 10:27:53 +01:00
Artem Savkov 24f7121605 bpftool: Fix -Wcast-qual warning
JIRA: https://issues.redhat.com/browse/RHEL-23643

commit ebc8484d0e6da9e6c9e8cfa1f40bf94e9c6fc512
Author: Denys Zagorui <dzagorui@cisco.com>
Date:   Thu Sep 7 02:02:10 2023 -0700

    bpftool: Fix -Wcast-qual warning
    
    This cast was made on purpose for older libbpf, where the
    bpf_object_skeleton field is void * instead of const void *,
    to eliminate a warning (as I understand,
    -Wincompatible-pointer-types-discards-qualifiers), but this
    cast introduces another warning (-Wcast-qual) for libbpf
    where the data field is const void *.
    
    It makes sense for bpftool to be in sync with libbpf from
    the kernel sources.
    
    Signed-off-by: Denys Zagorui <dzagorui@cisco.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20230907090210.968612-1-dzagorui@cisco.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2024-03-27 10:27:46 +01:00
Jerome Marchand c0a58b3566 bpftool: clean-up usage of libbpf_get_error()
Bugzilla: https://bugzilla.redhat.com/2177177

commit d1313e01271d2d8f33d6c82f1afb77e820a3540d
Author: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
Date:   Sun Nov 20 11:26:32 2022 +0000

    bpftool: clean-up usage of libbpf_get_error()

    bpftool is now fully compliant with libbpf 1.0 mode and is not
    expected to be compiled with pre-1.0 libbpf, so let's clean up the usage of
    libbpf_get_error().

    The changes stay aligned with the convention that returned errors are always negative.

    - In tools/bpf/bpftool/btf.c This fixes an uninitialized local
    variable `err` in function do_dump() because it may now be returned
    without having been set.
    - This also removes the checks on NULL pointers before calling
    btf__free() because that function already does the check.

    Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
    Link: https://lore.kernel.org/r/20221120112515.38165-5-sahid.ferdjaoui@industrialdiscipline.com
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2023-04-28 11:43:09 +02:00
Jerome Marchand 24d396b55c libbpf: Hashmap interface update to allow both long and void* keys/values
Bugzilla: https://bugzilla.redhat.com/2177177

Conflicts: Some minor changes due to missing commits 09b73fe9e3de
("perf smt: Compute SMT from topology") and f0c4b97a2927 ("perf test:
Add basic core_wide expression test").

commit c302378bc157f6a73b6cae4ca67f5f6aa931dcec
Author: Eduard Zingerman <eddyz87@gmail.com>
Date:   Wed Nov 9 16:26:09 2022 +0200

    libbpf: Hashmap interface update to allow both long and void* keys/values

    An update for libbpf's hashmap interface from void* -> void* to a
    polymorphic one, allowing both long and void* keys and values.

    This simplifies many use cases in libbpf as hashmaps there are mostly
    integer to integer.

    Perf copies hashmap implementation from libbpf and has to be
    updated as well.

    Changes to libbpf, selftests/bpf and perf are packed as a single
    commit to avoid compilation issues with any future bisect.

    The polymorphic interface is achieved by hiding hashmap interface
    functions behind auxiliary macros that take care of necessary
    type casts, for example:

        #define hashmap_cast_ptr(p)                                         \
            ({                                                              \
                    _Static_assert((p) == NULL || sizeof(*(p)) == sizeof(long),\
                                   #p " pointee should be a long-sized integer or a pointer"); \
                    (long *)(p);                                            \
            })

        bool hashmap_find(const struct hashmap *map, long key, long *value);

        #define hashmap__find(map, key, value) \
                    hashmap_find((map), (long)(key), hashmap_cast_ptr(value))

    - hashmap__find macro casts key and value parameters to long
      and long* respectively
    - hashmap_cast_ptr ensures that value pointer points to a memory
      of appropriate size.

    This hack was suggested by Andrii Nakryiko in [1].
    This is a follow up for [2].

    [1] https://lore.kernel.org/bpf/CAEf4BzZ8KFneEJxFAaNCCFPGqp20hSpS2aCj76uRk3-qZUH5xg@mail.gmail.com/
    [2] https://lore.kernel.org/bpf/af1facf9-7bc8-8a3d-0db4-7b3f333589a2@meta.com/T/#m65b28f1d6d969fcd318b556db6a3ad499a42607d
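
    A usage sketch of the macro interface above, given an already initialized
    struct hashmap *map; keys and values are passed as plain longs, and
    hashmap_cast_ptr() checks the out-pointer's size at compile time:

      long value;

      hashmap__add(map, 42, 4096);   /* key 42 -> value 4096 */
      if (hashmap__find(map, 42, &value))
              printf("found: %ld\n", value);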

    Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20221109142611.879983-2-eddyz87@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2023-04-28 11:43:02 +02:00
Artem Savkov 68d730e069 bpftool: Fix error message of strerror
Bugzilla: https://bugzilla.redhat.com/2166911

commit 3ca2fb497440a3c8294f9df0ce7b2c3c9a1c5875
Author: Tianyi Liu <i.pear@outlook.com>
Date:   Wed Sep 28 16:09:32 2022 +0800

    bpftool: Fix error message of strerror
    
    strerror() expects a positive errno; however, variable err will never be
    positive when an error occurs. This causes bpftool to output too many
    "unknown error", even a simple "file not exist" error can not get an
    accurate message.
    
    This patch fixes all "strerror(err)" patterns in bpftool.
    Specifically, in btf.c#L823, hashmap__append() is an internal function of
    libbpf and will not change errno, so there's a little difference.
    Some libbpf_get_error() calls are kept for return values.
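
    The corrected pattern, sketched with a placeholder call (libbpf-style APIs
    return a negative errno while strerror() wants the positive value):

      int err = some_libbpf_call();   /* placeholder; returns 0 or -errno */

      if (err < 0)
              p_err("operation failed: %s", strerror(-err));   /* bpftool's p_err() helper */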
    
    Changes since v1: https://lore.kernel.org/bpf/SY4P282MB1084B61CD8671DFA395AA8579D539@SY4P282MB1084.AUSP282.PROD.OUTLOOK.COM/
    Check directly for NULL values instead of calling libbpf_get_error().
    
    Signed-off-by: Tianyi Liu <i.pear@outlook.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/SY4P282MB1084AD9CD84A920F08DF83E29D549@SY4P282MB1084.AUSP282.PROD.OUTLOOK.COM

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-03-06 14:54:21 +01:00
Artem Savkov 88be17a2e9 bpftool: Don't try to return value from void function in skeleton
Bugzilla: https://bugzilla.redhat.com/2137876

commit a6df06744b2d0b953615e0d6ca8b5e84ae4019fc
Author: Jörn-Thorben Hinz <jthinz@mailbox.tu-berlin.de>
Date:   Tue Jul 26 15:32:03 2022 +0200

    bpftool: Don't try to return value from void function in skeleton
    
    A skeleton generated by bpftool previously contained a return followed
    by an expression in OBJ_NAME__detach(), which has return type void. This
    did not hurt, since the bpf_object__detach_skeleton() called there returns
    void itself anyway, but it led to a warning when compiling with e.g.
    -pedantic.
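
    The fixed detach helper roughly looks like this (skeleton name
    illustrative):

      static void my_skel__detach(struct my_skel *obj)
      {
              /* was: "return bpf_object__detach_skeleton(obj->skeleton);" -- a returned
               * expression in a void function, which -pedantic flags */
              bpf_object__detach_skeleton(obj->skeleton);
      }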
    
    Signed-off-by: Jörn-Thorben Hinz <jthinz@mailbox.tu-berlin.de>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220726133203.514087-1-jthinz@mailbox.tu-berlin.de

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-01-05 15:46:45 +01:00
Artem Savkov bbfcab3ef7 bpftool: Add support for KIND_RESTRICT to gen min_core_btf command
Bugzilla: https://bugzilla.redhat.com/2137876

commit aad53f17f0ad7485872d66fbcb53cc0c60e811f2
Author: Daniel Müller <deso@posteo.net>
Date:   Wed Jul 6 21:28:54 2022 +0000

    bpftool: Add support for KIND_RESTRICT to gen min_core_btf command
    
    This change adjusts bpftool's type marking logic, as used in conjunction
    with TYPE_EXISTS relocations, to correctly recognize and handle the
    RESTRICT BTF kind.
    
    Suggested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Müller <deso@posteo.net>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220623212205.2805002-1-deso@posteo.net/T/#m4c75205145701762a4b398e0cdb911d5b5305ffc
    Link: https://lore.kernel.org/bpf/20220706212855.1700615-2-deso@posteo.net

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-01-05 15:46:36 +01:00
Artem Savkov fb9a2e77e4 bpftool: Honor BPF_CORE_TYPE_MATCHES relocation
Bugzilla: https://bugzilla.redhat.com/2137876

commit 633e7ceb2cbbae9b2f5ca69106b0de65728c5988
Author: Daniel Müller <deso@posteo.net>
Date:   Tue Jun 28 16:01:19 2022 +0000

    bpftool: Honor BPF_CORE_TYPE_MATCHES relocation
    
    bpftool needs to know about the newly introduced BPF_CORE_TYPE_MATCHES
    relocation for its 'gen min_core_btf' command to work properly in the
    presence of this relocation.
    Specifically, we need to make sure to mark types and fields so that they
    are present in the minimized BTF for "type match" checks to work out.
    However, contrary to the existing btfgen_record_field_relo, we need to
    rely on the BTF -- and not the spec -- to find fields. With this change
    we handle this new variant correctly. The functionality will be tested
    with follow on changes to BPF selftests, which already run against a
    minimized BTF created with bpftool.
    
    Signed-off-by: Daniel Müller <deso@posteo.net>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220628160127.607834-3-deso@posteo.net

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-01-05 15:46:35 +01:00
Artem Savkov e91fcd1090 bpftool: Check for NULL ptr of btf in codegen_asserts
Bugzilla: https://bugzilla.redhat.com/2137876

commit de4b4b94fad90f876ab12e87999109e31a1871b4
Author: Michael Mullin <masmullin@gmail.com>
Date:   Mon May 23 15:49:17 2022 -0400

    bpftool: Check for NULL ptr of btf in codegen_asserts
    
    bpf_object__btf() can return a NULL value.  If bpf_object__btf returns
    null, do not progress through codegen_asserts(). This avoids a null ptr
    dereference at the call to btf__type_cnt() in the function find_type_for_map().
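
    Roughly the guard this adds at the top of codegen_asserts(); treat the
    signature and surrounding code as a sketch rather than the exact gen.c
    context:

      static void codegen_asserts(struct bpf_object *obj, const char *obj_name)
      {
              struct btf *btf = bpf_object__btf(obj);

              if (!btf)
                      return;   /* object carries no BTF: nothing to assert against */
              /* ... walk btf and emit _Static_assert()s ... */
      }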
    
    Signed-off-by: Michael Mullin <masmullin@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220523194917.igkgorco42537arb@jup

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-01-05 15:46:29 +01:00
Yauheni Kaliuta a9478e4ae2 bpftool: Add btf enum64 support
Bugzilla: http://bugzilla.redhat.com/2120968

commit 58a53978fdf65d12dae1798e44120efb992a3615
Author: Yonghong Song <yhs@fb.com>
Date:   Mon Jun 6 23:26:52 2022 -0700

    bpftool: Add btf enum64 support
    
    Add BTF_KIND_ENUM64 support.
    For example, the following enum is defined in uapi bpf.h.
      $ cat core.c
      enum A {
            BPF_F_INDEX_MASK                = 0xffffffffULL,
            BPF_F_CURRENT_CPU               = BPF_F_INDEX_MASK,
            BPF_F_CTXLEN_MASK               = (0xfffffULL << 32),
      } g;
    Compiled with
      clang -target bpf -O2 -g -c core.c
    Using bpftool to dump types and generate format C file:
      $ bpftool btf dump file core.o
      ...
      [1] ENUM64 'A' encoding=UNSIGNED size=8 vlen=3
            'BPF_F_INDEX_MASK' val=4294967295ULL
            'BPF_F_CURRENT_CPU' val=4294967295ULL
            'BPF_F_CTXLEN_MASK' val=4503595332403200ULL
      $ bpftool btf dump file core.o format c
      ...
      enum A {
            BPF_F_INDEX_MASK = 4294967295ULL,
            BPF_F_CURRENT_CPU = 4294967295ULL,
            BPF_F_CTXLEN_MASK = 4503595332403200ULL,
      };
      ...
    
    Note that for raw btf output, the encoding (UNSIGNED or SIGNED)
    is printed out as well. The 64bit value is also represented properly
    in BTF and C dump.
    
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/r/20220607062652.3722649-1-yhs@fb.com
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-11-30 12:47:12 +02:00
Yauheni Kaliuta 5da73b1bf6 bpftool: bpf_link_get_from_fd support for LSM programs in lskel
Bugzilla: https://bugzilla.redhat.com/2120968

commit bd2331b3757f5b2ab4aafc591b55fa2a592abf7c
Author: KP Singh <kpsingh@kernel.org>
Date:   Mon May 9 21:49:05 2022 +0000

    bpftool: bpf_link_get_from_fd support for LSM programs in lskel
    
    bpf_link_get_from_fd currently returns a NULL fd for LSM programs.
    LSM programs are similar to tracing programs and can also use
    skel_raw_tracepoint_open.
    
    Signed-off-by: KP Singh <kpsingh@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220509214905.3754984-1-kpsingh@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-11-30 12:47:02 +02:00
Yauheni Kaliuta 27b6a5eee8 bpftool: Declare generator name
Bugzilla: https://bugzilla.redhat.com/2120968

commit 56c3e749d08a041454f5d75273c24d16240f26dc
Author: Jason Wang <jasowang@redhat.com>
Date:   Mon May 9 17:02:47 2022 +0800

    bpftool: Declare generator name
    
    Most code generators declare their name, so do the same for bpftool.
    
    Signed-off-by: Jason Wang <jasowang@redhat.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220509090247.5457-1-jasowang@redhat.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-11-30 12:47:02 +02:00
Jerome Marchand f0b1684768 bpftool: Explicit errno handling in skeletons
Bugzilla: https://bugzilla.redhat.com/2120966

commit 522574fd7864e091d473765102e866414979b2ab
Author: Delyan Kratunov <delyank@fb.com>
Date:   Mon Mar 21 23:29:18 2022 +0000

    bpftool: Explicit errno handling in skeletons

    Andrii noticed that since f97b8b9bd630 ("bpftool: Fix a bug in subskeleton
    code generation") the subskeleton code allows bpf_object__destroy_subskeleton
    to overwrite the errno that subskeleton__open would return with. While this
    is not currently an issue, let's make it future-proof.

    This patch explicitly tracks err in subskeleton__open and skeleton__create
    (i.e. calloc failure is explicitly ENOMEM) and ensures that errno is -err on
    the error return path. The skeleton code had to be changed since maps and
    progs codegen is shared with subskeletons.

    Fixes: f97b8b9bd630 ("bpftool: Fix a bug in subskeleton code generation")
    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/3b6bfbb770c79ae64d8de26c1c1bd9d53a4b85f8.camel@fb.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:58:08 +02:00
Jerome Marchand 9a68327061 bpftool: Fix generated code in codegen_asserts
Bugzilla: https://bugzilla.redhat.com/2120966

commit ef8a257b4e499a979364b1f9caf25a325f6ee8b8
Author: Jiri Olsa <jolsa@kernel.org>
Date:   Mon Mar 28 10:37:03 2022 +0200

    bpftool: Fix generated code in codegen_asserts

    Arnaldo reported perf compilation fail with:

      $ make -k BUILD_BPF_SKEL=1 CORESIGHT=1 PYTHON=python3
      ...
      In file included from util/bpf_counter.c:28:
      /tmp/build/perf//util/bpf_skel/bperf_leader.skel.h: In function ‘bperf_leader_bpf__assert’:
      /tmp/build/perf//util/bpf_skel/bperf_leader.skel.h:351:51: error: unused parameter ‘s’ [-Werror=unused-parameter]
        351 | bperf_leader_bpf__assert(struct bperf_leader_bpf *s)
            |                          ~~~~~~~~~~~~~~~~~~~~~~~~~^
      cc1: all warnings being treated as errors

    If there's nothing to generate in the new assert function,
    we will get an unused 's' warning/error, so add the 'unused' attribute to it.

    Fixes: 08d4dba6ae77 ("bpftool: Bpf skeletons assert type sizes")
    Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Jiri Olsa <jolsa@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Link: https://lore.kernel.org/bpf/20220328083703.2880079-1-jolsa@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:58:08 +02:00
Jerome Marchand 261df64038 bpftool: Fix a bug in subskeleton code generation
Bugzilla: https://bugzilla.redhat.com/2120966

commit f97b8b9bd630fb76c0e9e11cbf390e3d64a144d7
Author: Yonghong Song <yhs@fb.com>
Date:   Sat Mar 19 20:20:09 2022 -0700

    bpftool: Fix a bug in subskeleton code generation

    Compiled with clang by adding LLVM=1 both kernel and selftests/bpf
    build, I hit the following compilation error:

    In file included from /.../tools/testing/selftests/bpf/prog_tests/subskeleton.c:6:
      ./test_subskeleton_lib.subskel.h:168:6: error: variable 'err' is used uninitialized whenever
          'if' condition is true [-Werror,-Wsometimes-uninitialized]
              if (!s->progs)
                  ^~~~~~~~~
      ./test_subskeleton_lib.subskel.h:181:11: note: uninitialized use occurs here
              errno = -err;
                       ^~~
      ./test_subskeleton_lib.subskel.h:168:2: note: remove the 'if' if its condition is always false
              if (!s->progs)
              ^~~~~~~~~~~~~~

    The compilation error is triggered by the following code
            ...
            int err;

            obj = (struct test_subskeleton_lib *)calloc(1, sizeof(*obj));
            if (!obj) {
                    errno = ENOMEM;
                    goto err;
            }
            ...

      err:
            test_subskeleton_lib__destroy(obj);
            errno = -err;
            ...
    in test_subskeleton_lib__open(). The 'err' is not initialized, yet it
    is used in 'errno = -err' later.

    The fix is to remove 'errno = -err' since errno has been set properly
    in all incoming branches.

    Fixes: 00389c58ffe9 ("bpftool: Add support for subskeletons")
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220320032009.3106133-1-yhs@fb.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:58:07 +02:00
Jerome Marchand 6bb9c2053e bpftool: Add support for subskeletons
Bugzilla: https://bugzilla.redhat.com/2120966

commit 00389c58ffe993782a8ba4bb5a34a102b1f6fe24
Author: Delyan Kratunov <delyank@fb.com>
Date:   Wed Mar 16 23:37:28 2022 +0000

    bpftool: Add support for subskeletons

    Subskeletons are headers which require an already loaded program to
    operate.

    For example, when a BPF library is linked into a larger BPF object file,
    the library userspace needs a way to access its own global variables
    without requiring knowledge about the larger program at build time.

    As a result, subskeletons require a loaded bpf_object to open().
    Further, they find their own symbols in the larger program by
    walking BTF type data at run time.

    At this time, programs, maps, and globals are supported through
    non-owning pointers.

    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/ca8a48b4841c72d285ecce82371bef4a899756cb.1647473511.git.delyank@fb.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:58:05 +02:00
Jerome Marchand 1ed8e8f551 bpftool: Bpf skeletons assert type sizes
Bugzilla: https://bugzilla.redhat.com/2120966

commit 08d4dba6ae77aaec0e0c79dcfcb0613cb7426b2c
Author: Delyan Kratunov <delyank@fb.com>
Date:   Wed Feb 23 22:01:58 2022 +0000

    bpftool: Bpf skeletons assert type sizes

    When emitting type declarations in skeletons, bpftool will now also emit
    static assertions on the size of the data/bss/rodata/etc fields. This
    ensures that in situations where userspace and kernel types have the same
    name but differ in size we do not silently produce incorrect results but
    instead break the build.
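
    The emitted asserts look roughly like this (variable names made up; the
    'unused' attribute comes from the follow-up fix listed above):

      __attribute__((unused)) static void my_skel__assert(struct my_skel *s)
      {
      #ifdef __cplusplus
      #define _Static_assert static_assert
      #endif
              _Static_assert(sizeof(s->bss->counter) == 8, "unexpected size of 'counter'");
              _Static_assert(sizeof(s->rodata->limit) == 4, "unexpected size of 'limit'");
      #ifdef __cplusplus
      #undef _Static_assert
      #endif
      }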

    This was reported in [1] and as expected the repro in [2] fails to build
    on the new size assert after this change.

      [1]: Closes: https://github.com/libbpf/libbpf/issues/433
      [2]: https://github.com/fuweid/iovisor-bcc-pr-3777

    Signed-off-by: Delyan Kratunov <delyank@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Tested-by: Hengqi Chen <hengqi.chen@gmail.com>
    Acked-by: Hengqi Chen <hengqi.chen@gmail.com>
    Link: https://lore.kernel.org/bpf/f562455d7b3cf338e59a7976f4690ec5a0057f7f.camel@fb.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:51 +02:00
Jerome Marchand 90f7bd03c2 bpftool: Fix C++ additions to skeleton
Bugzilla: https://bugzilla.redhat.com/2120966

commit 9b6eb0478dfad3b0e7af6c73523d96826210f4fe
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Feb 16 15:35:40 2022 -0800

    bpftool: Fix C++ additions to skeleton

    Mark C++-specific T::open() and other methods as static inline to avoid
    symbol redefinition when multiple files use the same skeleton header in
    an application.

    Fixes: bb8ffe61ea45 ("bpftool: Add C++-specific open/load/etc skeleton wrappers")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220216233540.216642-1-andrii@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:50 +02:00
Jerome Marchand 2a397ff302 bpftool: Implement btfgen_get_btf()
Bugzilla: https://bugzilla.redhat.com/2120966

commit dc695516b6f5cb322b95de7ef3a6ec1db707ff8b
Author: Mauricio Vásquez <mauricio@kinvolk.io>
Date:   Tue Feb 15 17:58:54 2022 -0500

    bpftool: Implement btfgen_get_btf()

    The last part of the BTFGen algorithm is to create a new BTF object with
    all the types that were recorded in the previous steps.

    This function performs two different steps:
    1. Add the types to the new BTF object by using btf__add_type(). Some
    special logic around structs and unions is implemented to only add the
    members that are really used in the field-based relocations. The type
    IDs in the new and old BTF objects are stored in a map.
    2. Fix all the type IDs on the new BTF object by using the IDs saved in
    the previous step.

    Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
    Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
    Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
    Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220215225856.671072-6-mauricio@kinvolk.io

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:50 +02:00
Jerome Marchand 06c6de356e bpftool: Implement "gen min_core_btf" logic
Bugzilla: https://bugzilla.redhat.com/2120966

commit a9caaba399f9cff34c6bffa1b1fd673d3d6043a0
Author: Mauricio Vásquez <mauricio@kinvolk.io>
Date:   Tue Feb 15 17:58:53 2022 -0500

    bpftool: Implement "gen min_core_btf" logic

    This commit implements the logic for the gen min_core_btf command.
    Specifically, it implements the following functions:

    - minimize_btf(): receives the path of a source and destination BTF
    files and a list of BPF objects. This function records the relocations
    for all objects and then generates the BTF file by calling
    btfgen_get_btf() (implemented in the following commit).

    - btfgen_record_obj(): loads the BTF and BTF.ext sections of the BPF
    objects and loops through all CO-RE relocations. It uses
    bpf_core_calc_relo_insn() from libbpf and passes the target spec to
    btfgen_record_reloc(), that calls one of the following functions
    depending on the relocation kind.

    - btfgen_record_field_relo(): uses the target specification to mark all
    the types that are involved in a field-based CO-RE relocation. In this
    case types are resolved and marked recursively using btfgen_mark_type().
    Only the struct and union members (and their types) involved in the
    relocation are marked to optimize the size of the generated BTF file.

    - btfgen_record_type_relo(): marks the types involved in a type-based
    CO-RE relocation. In this case no members for the struct and union types
    are marked as libbpf doesn't use them while performing this kind of
    relocation. Pointed types are marked as they are used by libbpf in this
    case.

    - btfgen_record_enumval_relo(): marks the whole enum type for enum-based
    relocations.

    Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
    Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
    Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
    Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220215225856.671072-5-mauricio@kinvolk.io

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:50 +02:00
Jerome Marchand 25b6d15483 bpftool: Add gen min_core_btf command
Bugzilla: https://bugzilla.redhat.com/2120966

commit 0a9f4a20c6153d187c8ee58133357ac671372f5f
Author: Mauricio Vásquez <mauricio@kinvolk.io>
Date:   Tue Feb 15 17:58:52 2022 -0500

    bpftool: Add gen min_core_btf command

    This command is implemented under the "gen" command in bpftool and the
    syntax is the following:

    $ bpftool gen min_core_btf INPUT OUTPUT OBJECT [OBJECT...]

    INPUT is the file that contains all the BTF types for a kernel and
    OUTPUT is the path of the minimized BTF file that will be created with
    only the types needed by the objects.

    Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
    Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
    Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
    Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220215225856.671072-4-mauricio@kinvolk.io

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:50 +02:00
Jerome Marchand 85d86088c5 bpftool: Add C++-specific open/load/etc skeleton wrappers
Bugzilla: https://bugzilla.redhat.com/2120966

commit bb8ffe61ea454a565e4fb1f450ef71237c9f032c
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Fri Feb 11 21:57:32 2022 -0800

    bpftool: Add C++-specific open/load/etc skeleton wrappers

    Add C++-specific static methods for code-generated BPF skeleton for each
    skeleton operation: open, open_opts, open_and_load, load, attach,
    detach, destroy, and elf_bytes. This is to facilitate easier C++
    templating on top of pure C BPF skeleton.

    In C, open/load/destroy/etc "methods" are of the form
    <skeleton_name>__<method>() to avoid name collision with similar
    "methods" of other skeletons withint the same application. This works
    well, but is very inconvenient for C++ applications that would like to
    write generic (templated) wrappers around BPF skeleton to fit in with
    C++ code base and take advantage of destructors and other convenient C++
    constructs.

    This patch makes it easier to build such generic templated wrappers by
    additionally defining C++ static methods for skeleton's struct with
    fixed names. This allows referring to, say, the open method as `T::open()`
    instead of having to somehow generate a `T__open()` function call.

    Next patch adds an example template to test_cpp selftest to demonstrate
    how it's possible to have all the operations wrapped in a generic
    Skeleton<my_skeleton> type without explicitly passing function references.

    An example of generated declaration section without %1$s placeholders:

      #ifdef __cplusplus
          static struct test_attach_probe *open(const struct bpf_object_open_opts *opts = nullptr);
          static struct test_attach_probe *open_and_load();
          static int load(struct test_attach_probe *skel);
          static int attach(struct test_attach_probe *skel);
          static void detach(struct test_attach_probe *skel);
          static void destroy(struct test_attach_probe *skel);
          static const void *elf_bytes(size_t *sz);
      #endif /* __cplusplus */

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220212055733.539056-2-andrii@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:49 +02:00
Jerome Marchand 354e190758 bpftool: Generalize light skeleton generation.
Bugzilla: https://bugzilla.redhat.com/2120966

commit 28d743f671272d7a5f676669c84438b0f9600936
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Wed Feb 9 15:19:59 2022 -0800

    bpftool: Generalize light skeleton generation.

    Generalize the light skeleton by hiding mmap details in skel_internal.h.
    In this form the generated lskel.h is usable both by user space and by the kernel.

    Note that previously #include <bpf/bpf.h> was in *.lskel.h file.
    To avoid #ifdef-s in a generated lskel.h the include of bpf.h is moved
    to skel_internal.h, but skel_internal.h is also used by gen_loader.c
    which is part of libbpf. Therefore skel_internal.h does #include "bpf.h"
    in case of user space, so gen_loader.c and lskel.h have necessary definitions.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Yonghong Song <yhs@fb.com>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220209232001.27490-4-alexei.starovoitov@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:49 +02:00
Jerome Marchand 691f6ba70b libbpf: Open code raw_tp_open and link_create commands.
Bugzilla: https://bugzilla.redhat.com/2120966

commit c69f94a33d12a9c49f1800c54838ee19447ac176
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Mon Jan 31 14:05:24 2022 -0800

    libbpf: Open code raw_tp_open and link_create commands.

    Open code raw_tracepoint_open and link_create used by light skeleton
    to be able to avoid full libbpf eventually.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20220131220528.98088-4-alexei.starovoitov@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:45 +02:00
Jerome Marchand b1efe0942b libbpf: Add support for bpf iter in light skeleton.
Bugzilla: https://bugzilla.redhat.com/2120966

commit 42d1d53fedc9980e8fed98a5a03762cba7d2e9f6
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Mon Jan 31 14:05:22 2022 -0800

    libbpf: Add support for bpf iter in light skeleton.

    bpf iterator programs should use bpf_link_create to attach instead of
    bpf_raw_tracepoint_open like other tracing programs.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20220131220528.98088-2-alexei.starovoitov@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:45 +02:00
Jerome Marchand 0b09f05340 bpftool: use preferred setters/getters instead of deprecated ones
Bugzilla: https://bugzilla.redhat.com/2120966

commit 39748db1d6bc12b9f749a0aebe7ec71b00bd60eb
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Jan 24 11:42:51 2022 -0800

    bpftool: use preferred setters/getters instead of deprecated ones

    Use bpf_program__type() instead of discouraged bpf_program__get_type().
    Also switch to bpf_map__set_max_entries() instead of bpf_map__resize().
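
    The replacements in one line each (both newer APIs are current libbpf):

      enum bpf_prog_type type = bpf_program__type(prog);   /* instead of bpf_program__get_type() */

      bpf_map__set_max_entries(map, 4096);   /* instead of bpf_map__resize(map, 4096) */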

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/r/20220124194254.2051434-5-andrii@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:43 +02:00
Jerome Marchand 3b2c47f45c bpftool: Stop using bpf_map__def() API
Bugzilla: https://bugzilla.redhat.com/2120966

commit 3c28919f0652a1952333b88e1af5ce408fafe238
Author: Christy Lee <christylee@fb.com>
Date:   Fri Jan 7 16:42:15 2022 -0800

    bpftool: Stop using bpf_map__def() API

    The libbpf bpf_map__def() API is being deprecated; replace bpftool's
    usage with the appropriate getters and setters.

    Signed-off-by: Christy Lee <christylee@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220108004218.355761-3-christylee@fb.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:39 +02:00
Jerome Marchand e8bc956f61 bpftool: Only set obj->skeleton on complete success
Bugzilla: https://bugzilla.redhat.com/2120966

commit 0991f6a38f576aa9a5e34713e23c998a3310d4d0
Author: Wei Fu <fuweid89@gmail.com>
Date:   Sat Jan 8 16:40:08 2022 +0800

    bpftool: Only set obj->skeleton on complete success

    After `bpftool gen skeleton`, the ${bpf_app}.skel.h will provide the
    ${bpf_app_name}__open helper to load bpf. If there is some error
    like ENOMEM, ${bpf_app_name}__open will roll back (free) the allocated
    object, including `bpf_object_skeleton`.

    Since ${bpf_app_name}__create_skeleton sets obj->skeleton first
    and does not roll it back on error, this causes a double-free in
    ${bpf_app_name}__destroy when called from ${bpf_app_name}__open. Therefore, we should
    set obj->skeleton just before `return 0;`.
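
    The ordering after the fix, sketched from the shape of the generated
    __create_skeleton() body:

      s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));
      if (!s) {
              err = -ENOMEM;
              goto err;
      }
      /* ... fill in s->maps, s->progs, s->data ... */
      obj->skeleton = s;   /* now published only on the success path */
      return 0;
      err:
      bpf_object__destroy_skeleton(s);
      return err;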

    Fixes: 5dc7a8b211 ("bpftool, selftests/bpf: Embed object file inside skeleton")
    Signed-off-by: Wei Fu <fuweid89@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220108084008.1053111-1-fuweid89@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:39 +02:00
Artem Savkov b51d8e947e bpftool: Switch bpf_object__load_xattr() to bpf_object__load()
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit b59e4ce8bcaab6445f4a0d37a96ca8953caaf5cf
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Dec 9 11:38:40 2021 -0800

    bpftool: Switch bpf_object__load_xattr() to bpf_object__load()

    Switch all the uses of to-be-deprecated bpf_object__load_xattr() into
    a simple bpf_object__load() calls with optional log_level passed through
    open_opts.kernel_log_level, if -d option is specified.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-13-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:46 +02:00
Artem Savkov 4638817e0a bpftool: Use libbpf_get_error() to check error
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit e5043894b21f7d99d3db31ad06308d6c5726caa6
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Mon Nov 15 09:24:36 2021 +0800

    bpftool: Use libbpf_get_error() to check error

    Currently, LIBBPF_STRICT_ALL mode is enabled by default for
    bpftool which means on error cases, some libbpf APIs would
    return NULL pointers. This makes the IS_ERR check fail to detect
    such cases and results in a segfault. Use libbpf_get_error()
    instead, like we do in libbpf itself.

    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211115012436.3143318-1-hengqi.chen@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:37 +02:00
Artem Savkov 443663d0b2 bpftool: Update btf_dump__new() and perf_buffer__new_raw() calls
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 164b04f27fbd769f57905dfddd2a8953974eeef4
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 10 21:36:24 2021 -0800

    bpftool: Update btf_dump__new() and perf_buffer__new_raw() calls

    Use v1.0-compatible variants of btf_dump and perf_buffer "constructors".
    This is also a demonstration of reusing struct perf_buffer_raw_opts as
    OPTS-style option struct for new perf_buffer__new_raw() API.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-10-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:35 +02:00
Yauheni Kaliuta ebe4d0acc6 bpftool: Switch to new btf__type_cnt API
Bugzilla: http://bugzilla.redhat.com/2069045

commit 58fc155b0e4bbd69584b7a241ab01d55ee7cfde6
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Fri Oct 22 21:06:22 2021 +0800

    bpftool: Switch to new btf__type_cnt API
    
    Replace the call to btf__get_nr_types with new API btf__type_cnt.
    The old API will be deprecated in libbpf v0.7+. No functionality
    change.
    
    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211022130623.1548429-5-hengqi.chen@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:44 +03:00
Yauheni Kaliuta 4597d92973 bpftool: Improve skeleton generation for data maps without DATASEC type
Bugzilla: http://bugzilla.redhat.com/2069045

commit ef9356d392f980b3b192668fa05b2eaaad127da1
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:44:00 2021 -0700

    bpftool: Improve skeleton generation for data maps without DATASEC type
    
    It can happen that some data sections (e.g., .rodata.cst16, containing
    compiler populated string constants) won't have a corresponding BTF
    DATASEC type. Now that libbpf supports .rodata.* and .data.* sections,
    a situation like that will cause an invalid BPF skeleton to be generated that
    won't compile successfully, as some parts of the skeleton would assume
    memory-mapped struct definitions for each special data section.
    
    Fix this by generating empty struct definitions for such data sections.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-7-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:44 +03:00
Yauheni Kaliuta 4df65205ed bpftool: Support multiple .rodata/.data internal maps in skeleton
Bugzilla: http://bugzilla.redhat.com/2069045

commit 8654b4d35e6c915ef456c14320ec8720383e81a7
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:43:59 2021 -0700

    bpftool: Support multiple .rodata/.data internal maps in skeleton
    
    Remove the assumption about only a single instance of each of the .rodata and
    .data internal maps. Nothing changes for '.rodata' and '.data' maps, but a new
    '.rodata.something' map will get a 'rodata_something' section in the BPF
    skeleton (as well as a struct bpf_map * field in the maps
    section with the same field name).
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-6-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:44 +03:00
Yauheni Kaliuta f1e442bf06 bpftool: Remove unused includes to <bpf/bpf_gen_internal.h>
Bugzilla: http://bugzilla.redhat.com/2069045

commit c66a248f1950d41502fb67624147281d9de0e868
Author: Quentin Monnet <quentin@isovalent.com>
Date:   Thu Oct 7 20:44:28 2021 +0100

    bpftool: Remove unused includes to <bpf/bpf_gen_internal.h>
    
    It seems that the header file was never necessary to compile bpftool,
    and it is not part of the headers exported from libbpf. Let's remove the
    includes from prog.c and gen.c.
    
    Fixes: d510296d33 ("bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.")
    Signed-off-by: Quentin Monnet <quentin@isovalent.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211007194438.34443-3-quentin@isovalent.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:40 +03:00
Yauheni Kaliuta 05c414145a bpftool: Avoid using "?: " in generated code
Bugzilla: http://bugzilla.redhat.com/2069045

commit 09710d82c0a3469eadc32781721ac2336fdf915d
Author: Yucong Sun <fallentree@fb.com>
Date:   Tue Sep 28 11:42:21 2021 -0700

    bpftool: Avoid using "?: " in generated code
    
    "?:" is a GNU C extension, some environment has warning flags for its
    use, or even prohibit it directly.  This patch avoid triggering these
    problems by simply expand it to its full form, no functionality change.
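
    The two forms side by side:

      name = opt_name ?: "default";             /* GNU shorthand, may be flagged */
      name = opt_name ? opt_name : "default";   /* expanded form now emitted */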
    
    Signed-off-by: Yucong Sun <fallentree@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20210928184221.1545079-1-fallentree@fb.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:18 +03:00
Yauheni Kaliuta 94a163d7cc bpftool: Provide a helper method for accessing skeleton's embedded ELF data
Bugzilla: http://bugzilla.redhat.com/2069045

commit a6cc6b34b93e3660149a7cb947be98a9b239ffce
Author: Matt Smith <alastorze@fb.com>
Date:   Wed Sep 1 12:44:38 2021 -0700

    bpftool: Provide a helper method for accessing skeleton's embedded ELF data
    
    This adds a skeleton method X__elf_bytes() which returns the binary data of
    the compiled and embedded BPF object file. It additionally sets the size of
    the return data to the provided size_t pointer argument.
    
    The assignment to s->data is cast to void * to ensure no warning is issued if
    compiled with a previous version of libbpf where the bpf_object_skeleton field
    is void * instead of const void *
    
    Signed-off-by: Matt Smith <alastorze@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20210901194439.3853238-3-alastorze@fb.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:15:38 +03:00
Jerome Marchand 765e926fe8 tools: bpftool: Document and add bash completion for -L, -B options
Bugzilla: http://bugzilla.redhat.com/2041365

commit 8cc8c6357c8fa763c650f1bddb69871a254f427c
Author: Quentin Monnet <quentin@isovalent.com>
Date:   Fri Jul 30 22:54:34 2021 +0100

    tools: bpftool: Document and add bash completion for -L, -B options

    The -L|--use-loader option for using loader programs when loading, or
    when generating a skeleton, did not have any documentation or bash
    completion. Same thing goes for -B|--base-btf, used to pass a path to a
    base BTF object for split BTF such as BTF for kernel modules.

    This patch documents and adds bash completion for those options.

    Fixes: 75fa177769 ("tools/bpftool: Add bpftool support for split BTF")
    Fixes: d510296d33 ("bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.")
    Signed-off-by: Quentin Monnet <quentin@isovalent.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20210730215435.7095-7-quentin@isovalent.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-04-29 18:14:37 +02:00