Commit Graph

630 Commits

Jerome Marchand afb792fe6c libbpf: Expose bpf_core_{add,free}_cands() to bpftool
Bugzilla: https://bugzilla.redhat.com/2120966

commit 8de6cae40bce6e19f39de60056cad39a7274169d
Author: Mauricio Vásquez <mauricio@kinvolk.io>
Date:   Tue Feb 15 17:58:51 2022 -0500

    libbpf: Expose bpf_core_{add,free}_cands() to bpftool

    Expose bpf_core_add_cands() and bpf_core_free_cands() to handle the
    candidates list.

    Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
    Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
    Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
    Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220215225856.671072-3-mauricio@kinvolk.io

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:50 +02:00
Jerome Marchand a1698130ea libbpf: Split bpf_core_apply_relo()
Bugzilla: https://bugzilla.redhat.com/2120966

commit adb8fa195efdfaac5852aaac24810b456ce43b04
Author: Mauricio Vásquez <mauricio@kinvolk.io>
Date:   Tue Feb 15 17:58:50 2022 -0500

    libbpf: Split bpf_core_apply_relo()

    BTFGen needs to run the core relocation logic in order to understand
    what types are involved in a given relocation.

    Currently bpf_core_apply_relo() calculates and **applies** a relocation
    to an instruction. Having both operations in the same function makes it
    difficult to only calculate the relocation without patching the
    instruction. This commit splits that logic in two different phases: (1)
    calculate the relocation and (2) patch the instruction.

    For the first phase, bpf_core_apply_relo() is renamed to
    bpf_core_calc_relo_insn(), which is now only in charge of calculating the
    relocation; the second phase uses the already existing
    bpf_core_patch_insn(). bpf_object__relocate_core() uses both of them, and
    BTFGen will use only bpf_core_calc_relo_insn().

    Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
    Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
    Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
    Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20220215225856.671072-2-mauricio@kinvolk.io

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:50 +02:00
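The calc/patch split described above can be sketched in plain C. The types and function names below are simplified, hypothetical stand-ins, not libbpf's actual structs (the real code uses struct bpf_core_relo_res and struct bpf_insn):

```c
#include <stdint.h>

/* Hypothetical, trimmed-down types standing in for libbpf's
 * struct bpf_core_relo_res and struct bpf_insn. */
struct relo_res {
	int poison;       /* relocation failed; instruction should be poisoned */
	uint32_t new_imm; /* value the instruction's imm field should take */
};

struct insn {
	uint32_t imm;
};

/* Phase 1: compute the relocation result; no instruction is touched.
 * This mirrors the role of bpf_core_calc_relo_insn(). */
static void calc_relo(struct relo_res *res, uint32_t target_off)
{
	res->poison = 0;
	res->new_imm = target_off;
}

/* Phase 2: apply a previously computed result, mirroring the role of
 * bpf_core_patch_insn(). BTFGen runs phase 1 only. */
static int patch_insn(struct insn *insn, const struct relo_res *res)
{
	if (res->poison)
		return -1;
	insn->imm = res->new_imm;
	return 0;
}
```

Separating the phases is what lets a consumer inspect the computed relocation without mutating any instruction.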
Jerome Marchand f690b14f99 libbpf: Remove mode check in libbpf_set_strict_mode()
Bugzilla: https://bugzilla.redhat.com/2120966

commit e4e835c87bb5ef7e7923e72da57c923bfcade418
Author: Mauricio Vásquez <mauricio@kinvolk.io>
Date:   Mon Feb 7 09:50:50 2022 -0500

    libbpf: Remove mode check in libbpf_set_strict_mode()

    libbpf_set_strict_mode() checks that the passed mode doesn't contain
    extra bits for LIBBPF_STRICT_* flags that don't exist yet.

    This makes it difficult for applications to disable some strict flags:
    something like "LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS" is
    rejected by this check, and they have to use a rather complicated
    formula to calculate it. [0]

    One possibility is to change LIBBPF_STRICT_ALL to only contain the bits
    of all existing LIBBPF_STRICT_* flags instead of 0xffffffff. However,
    that's not possible because the idea is that applications compiled against
    older libbpf_legacy.h would still be opting into latest
    LIBBPF_STRICT_ALL features.[1]

    The other possibility is to remove that check so something like
    "LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS" is allowed. It's
    what this commit does.

    [0]: https://lore.kernel.org/bpf/20220204220435.301896-1-mauricio@kinvolk.io/
    [1]: https://lore.kernel.org/bpf/CAEf4BzaTWa9fELJLh+bxnOb0P1EMQmaRbJVG0L+nXZdy0b8G3Q@mail.gmail.com/

    Fixes: 93b8952d223a ("libbpf: deprecate legacy BPF map definitions")
    Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220207145052.124421-2-mauricio@kinvolk.io

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:47 +02:00
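With the check removed, an application can clear individual strict flags with plain bit arithmetic. A minimal sketch; the flag values below are illustrative stand-ins modeled on libbpf_legacy.h's layout, not the real LIBBPF_STRICT_* constants:

```c
#include <stdint.h>

/* Illustrative stand-ins for LIBBPF_STRICT_* flags (assumed values). */
#define STRICT_ALL             0xffffffffU
#define STRICT_MAP_DEFINITIONS 0x20U

/* The pattern this commit unblocks: all strict behaviors except one.
 * libbpf_set_strict_mode() now accepts such a mask as-is, even though
 * STRICT_ALL carries bits for flags that don't exist yet. */
static uint32_t strict_all_except(uint32_t flag)
{
	return STRICT_ALL & ~flag;
}
```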
Jerome Marchand 7d8fc8812b libbpf: Deprecate forgotten btf__get_map_kv_tids()
Bugzilla: https://bugzilla.redhat.com/2120966

commit 227a0713b319e7a8605312dee1c97c97a719a9fc
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Feb 3 14:50:17 2022 -0800

    libbpf: Deprecate forgotten btf__get_map_kv_tids()

    btf__get_map_kv_tids() is in the same group of APIs as
    btf_ext__reloc_func_info()/btf_ext__reloc_line_info(), which were only
    used by BCC. It was missed when those were marked as deprecated in [0].
    Fix that to complete [1].

      [0] https://patchwork.kernel.org/project/netdevbpf/patch/20220201014610.3522985-1-davemarchevsky@fb.com/
      [1] Closes: https://github.com/libbpf/libbpf/issues/277

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20220203225017.1795946-1-andrii@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:46 +02:00
Jerome Marchand 71c497154c libbpf: Stop using deprecated bpf_map__is_offload_neutral()
Bugzilla: https://bugzilla.redhat.com/2120966

commit a5dd9589f0ababa9ca645d96cfaa8161d45dcb74
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Feb 2 14:59:11 2022 -0800

    libbpf: Stop using deprecated bpf_map__is_offload_neutral()

    Open-code bpf_map__is_offload_neutral() logic in one place in
    to-be-deprecated bpf_prog_load_xattr2.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20220202225916.3313522-2-andrii@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:46 +02:00
Jerome Marchand 7fab43dbfd libbpf: hide and discourage inconsistently named getters
Bugzilla: https://bugzilla.redhat.com/2120966

commit 20eccf29e2979a18411517061998bac7d12c8543
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Jan 24 11:42:48 2022 -0800

    libbpf: hide and discourage inconsistently named getters

    Move a bunch of "getters" into libbpf_legacy.h to keep them there in
    libbpf 1.0. See [0] for discussion of "Discouraged APIs". These getters
    don't add any maintenance burden and are simple aliases, but they are
    inconsistently named. So keep them in libbpf_legacy.h instead of
    libbpf.h to "hide" them in favor of the preferred getters ([1]). Also add
    two missing getters: bpf_program__type() and bpf_program__expected_attach_type().

      [0] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#handling-deprecation-of-apis-and-functionality
      [1] Closes: https://github.com/libbpf/libbpf/issues/307

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/r/20220124194254.2051434-2-andrii@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:43 +02:00
Jerome Marchand 7678cbb2ce libbpf: Mark bpf_object__open_xattr() deprecated
Bugzilla: https://bugzilla.redhat.com/2120966

commit fc76387003d6907e298fd6b87f13847c4edddab1
Author: Christy Lee <christylee@fb.com>
Date:   Mon Jan 24 17:09:17 2022 -0800

    libbpf: Mark bpf_object__open_xattr() deprecated

    Mark bpf_object__open_xattr() as deprecated, use
    bpf_object__open_file() instead.

      [0] Closes: https://github.com/libbpf/libbpf/issues/287

    Signed-off-by: Christy Lee <christylee@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220125010917.679975-1-christylee@fb.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:43 +02:00
Jerome Marchand c39d88f6d8 libbpf: Add "iter.s" section for sleepable bpf iterator programs
Bugzilla: https://bugzilla.redhat.com/2120966

commit a8b77f7463a500584a69fed60bb4da57ac435138
Author: Kenny Yu <kennyyu@fb.com>
Date:   Mon Jan 24 10:54:02 2022 -0800

    libbpf: Add "iter.s" section for sleepable bpf iterator programs

    This adds a new bpf section "iter.s" to allow bpf iterator programs to
    be sleepable.

    Signed-off-by: Kenny Yu <kennyyu@fb.com>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/r/20220124185403.468466-4-kennyyu@fb.com
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:43 +02:00
Jiri Benc c3e21f7d64 libbpf: Add SEC name for xdp frags programs
Bugzilla: https://bugzilla.redhat.com/2120966

commit 082c4bfba4f77d6c65b451d7ef23093a75cc50e7
Author: Lorenzo Bianconi <lorenzo@kernel.org>
Date:   Fri Jan 21 11:10:01 2022 +0100

    libbpf: Add SEC name for xdp frags programs

    Introduce support for the following SEC entries for XDP frags
    property:
    - SEC("xdp.frags")
    - SEC("xdp.frags/devmap")
    - SEC("xdp.frags/cpumap")

    Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
    Link: https://lore.kernel.org/r/af23b6e4841c171ad1af01917839b77847a4bc27.1642758637.git.lorenzo@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Jiri Benc <jbenc@redhat.com>
2022-10-25 14:57:42 +02:00
Jerome Marchand f55ca6e379 libbpf: deprecate legacy BPF map definitions
Bugzilla: https://bugzilla.redhat.com/2120966

commit 93b8952d223af03c51fba0c6258173d2ffbd2cb7
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Jan 19 22:05:28 2022 -0800

    libbpf: deprecate legacy BPF map definitions

    Enact deprecation of legacy BPF map definition in SEC("maps") ([0]). For
    the definitions themselves introduce LIBBPF_STRICT_MAP_DEFINITIONS flag
    for libbpf strict mode. If it is set, error out on any struct
    bpf_map_def-based map definition. If not set, libbpf will print out
    a warning for each legacy BPF map to raise awareness that it goes away.

    For any use of BPF_ANNOTATE_KV_PAIR() macro providing a legacy way to
    associate BTF key/value type information with legacy BPF map definition,
    warn through libbpf's pr_warn() error message (but don't fail BPF object
    open).

    BPF-side struct bpf_map_def is marked as deprecated. User-space struct
    bpf_map_def has to be used internally in libbpf, so it is left
    untouched. It should be enough for bpf_map__def() to be marked
    deprecated to raise awareness that it goes away.

    bpftool is an interesting case that utilizes libbpf to open BPF ELF
    object to generate a skeleton. As such, even though bpftool itself uses
    full-on strict libbpf mode (LIBBPF_STRICT_ALL), it has to relax it a bit
    for BPF map definition handling to minimize unnecessary disruptions. So
    opt out of LIBBPF_STRICT_MAP_DEFINITIONS for bpftool. User code that
    will later use generated skeleton will make its own decision whether to
    enforce LIBBPF_STRICT_MAP_DEFINITIONS or not.

    There are a few tests in selftests/bpf that consciously use legacy
    BPF map definitions to test libbpf functionality. For those, temporarily
    opt out of LIBBPF_STRICT_MAP_DEFINITIONS mode for the duration of those
    tests.

      [0] Closes: https://github.com/libbpf/libbpf/issues/272

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/r/20220120060529.1890907-4-andrii@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:41 +02:00
Jerome Marchand b5b44660d2 libbpf: Fix possible NULL pointer dereference when destroying skeleton
Bugzilla: https://bugzilla.redhat.com/2120966

commit a32ea51a3f17ce6524c9fc19d311e708331c8b5f
Author: Yafang Shao <laoar.shao@gmail.com>
Date:   Sat Jan 8 13:47:39 2022 +0000

    libbpf: Fix possible NULL pointer dereference when destroying skeleton

    When I checked the code in the skeleton header file generated for my own
    bpf prog, I found a possible NULL pointer dereference when destroying
    the skeleton. Then I checked the in-tree bpf progs and found this is a
    common issue. Let's take the generated samples/bpf/xdp_redirect_cpu.skel.h
    for example. Below is the generated code in
    xdp_redirect_cpu__create_skeleton():

    	xdp_redirect_cpu__create_skeleton
    		struct bpf_object_skeleton *s;
    		s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));
    		if (!s)
    			goto error;
    		...
    	error:
    		bpf_object__destroy_skeleton(s);
    		return  -ENOMEM;

    After the goto error, the NULL 's' will be dereferenced in
    bpf_object__destroy_skeleton().

    We can simply fix this issue by just adding a NULL check in
    bpf_object__destroy_skeleton().

    Fixes: d66562fba1 ("libbpf: Add BPF object skeleton support")
    Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220108134739.32541-1-laoar.shao@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:39 +02:00
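The fix pattern is an early return on NULL, so the generated error path can call destroy unconditionally. A minimal sketch with a hypothetical trimmed-down skeleton struct (the real bpf_object_skeleton has many more fields):

```c
#include <stdlib.h>

/* Hypothetical, trimmed-down skeleton; illustrative only. */
struct skel {
	char *maps;
};

/* Safe to call with s == NULL, mirroring the NULL check added to
 * bpf_object__destroy_skeleton(); the error label in generated
 * *__create_skeleton() code can then jump here even when the
 * initial calloc() failed. */
static void skel_destroy(struct skel *s)
{
	if (!s)
		return;
	free(s->maps);
	free(s);
}
```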
Artem Savkov 090452922a libbpf: Support repeated legacy kprobes on same function
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 51a33c60f1c22c0d2dafad774315ba1537765442
Author: Qiang Wang <wangqiang.wq.frank@bytedance.com>
Date:   Mon Dec 27 21:07:13 2021 +0800

    libbpf: Support repeated legacy kprobes on same function

    If legacy kprobes are attached repeatedly to the same function within
    one process, libbpf registers them using the same probe name and gets an
    -EBUSY error. So append an index to the probe name format to fix this
    problem.

    Co-developed-by: Chengming Zhou <zhouchengming@bytedance.com>
    Signed-off-by: Qiang Wang <wangqiang.wq.frank@bytedance.com>
    Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211227130713.66933-2-wangqiang.wq.frank@bytedance.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:53 +02:00
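Appending an index makes each attachment's legacy kprobe event name unique. A sketch of the naming scheme; the exact format string is an assumption modeled on libbpf's "libbpf_<pid>_<func>" convention, not copied from the source:

```c
#include <stdio.h>

/* Build a per-attachment event name: pid keeps names unique across
 * processes, idx keeps them unique for repeated kprobes on the same
 * function within one process. Format string is illustrative. */
static int kprobe_event_name(char *buf, size_t sz, int pid,
			     const char *func, int idx)
{
	return snprintf(buf, sz, "libbpf_%d_%s_%d", pid, func, idx);
}
```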
Artem Savkov b03ac3b33a libbpf: Deprecate bpf_perf_event_read_simple() API
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 7218c28c87f57c131879a75a226b9033ac90b266
Author: Christy Lee <christylee@fb.com>
Date:   Wed Dec 29 12:41:56 2021 -0800

    libbpf: Deprecate bpf_perf_event_read_simple() API

    With perf_buffer__poll() and perf_buffer__consume() APIs available,
    there is no reason to expose the bpf_perf_event_read_simple() API to
    users. If users need a custom perf buffer, they can re-implement
    the function.

    Mark bpf_perf_event_read_simple() as deprecated and move the logic to a
    new static function so it can still be called by other functions in the
    same file.

      [0] Closes: https://github.com/libbpf/libbpf/issues/310

    Signed-off-by: Christy Lee <christylee@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211229204156.13569-1-christylee@fb.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:52 +02:00
Artem Savkov a2ab03a151 libbpf: Improve LINUX_VERSION_CODE detection
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 5b3d72987701d51bf31823b39db49d10970f5c2d
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Dec 22 15:10:03 2021 -0800

    libbpf: Improve LINUX_VERSION_CODE detection

    Ubuntu reports incorrect kernel version through uname(), which on older
    kernels leads to kprobe BPF programs failing to load due to the version
    check mismatch.

    Accommodate Ubuntu's quirks with LINUX_VERSION_CODE by using the
    Ubuntu-specific /proc/version_signature file to fetch major/minor/patch
    versions to form LINUX_VERSION_CODE.

    While at it, consolidate libbpf's kernel version detection code between
    libbpf.c and libbpf_probes.c.

      [0] Closes: https://github.com/libbpf/libbpf/issues/421

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211222231003.2334940-1-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:51 +02:00
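The detection amounts to parsing the last "major.minor.patch" field of the Ubuntu-specific file and packing it with the usual KERNEL_VERSION encoding. A simplified sketch, assuming the documented one-line format; the real code reads the file and falls back to uname() when it is absent:

```c
#include <stdio.h>

/* LINUX_VERSION_CODE packing, as in the kernel's KERNEL_VERSION(). */
#define KVER(a, b, c) (((a) << 16) + ((b) << 8) + ((c) > 255 ? 255 : (c)))

/* Parse a line like "Ubuntu 4.15.0-169.177-generic 4.15.18": the last
 * field carries the real upstream version, unlike uname()'s release
 * string. Returns 0 on parse failure. */
static unsigned int parse_version_signature(const char *sig)
{
	unsigned int maj, min, patch;

	if (sscanf(sig, "%*s %*s %u.%u.%u", &maj, &min, &patch) != 3)
		return 0;
	return KVER(maj, min, patch);
}
```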
Artem Savkov af9d669f87 libbpf: Avoid reading past ELF data section end when copying license
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit f97982398cc1c92f2e9bd0ef1ef870a5a729b0ac
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Dec 14 15:20:54 2021 -0800

    libbpf: Avoid reading past ELF data section end when copying license

    Fix possible read beyond ELF "license" data section if the license
    string is not properly zero-terminated. Use the fact that libbpf_strlcpy
    never accesses the (N-1)st byte of the source string because it's
    replaced with '\0' anyway.

    If this happens, it's a violation of contract between libbpf and a user,
    but not handling this more robustly upsets CIFuzz, so given the fix is
    trivial, let's fix the potential issue.

    Fixes: 9fc205b413b3 ("libbpf: Add sane strncpy alternative and use it internally")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211214232054.3458774-1-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:48 +02:00
Artem Savkov 136c624b20 libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit e542f2c4cd16d49392abf3349341d58153d3c603
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Dec 14 11:59:03 2021 -0800

    libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF

    The need to increase RLIMIT_MEMLOCK to do anything useful with BPF is
    one of the first extremely frustrating gotchas that all new BPF users go
    through, and in some cases they have to learn it the very hard way.

    Luckily, starting with upstream Linux kernel version 5.11, BPF subsystem
    dropped the dependency on memlock and uses memcg-based memory accounting
    instead. Unfortunately, detecting memcg-based BPF memory accounting is
    far from trivial (as can be evidenced by this patch), so in practice
    most BPF applications still do unconditional RLIMIT_MEMLOCK increase.

    As we move towards libbpf 1.0, it would be good to allow users to forget
    about RLIMIT_MEMLOCK vs memcg and let libbpf do the sensible adjustment
    automatically. This patch paves the way forward in this matter. Libbpf
    will do feature detection of memcg-based accounting, and if detected,
    will do nothing. But if the kernel is too old, just like BCC, libbpf
    will automatically increase RLIMIT_MEMLOCK on behalf of user
    application ([0]).

    As this is technically a breaking change, during the transition period
    applications have to opt into libbpf 1.0 mode by setting
    LIBBPF_STRICT_AUTO_RLIMIT_MEMLOCK bit when calling
    libbpf_set_strict_mode().

    Libbpf allows controlling the exact RLIMIT_MEMLOCK value that is set,
    via the libbpf_set_memlock_rlim_max() API. Passing 0 will make libbpf do
    nothing with RLIMIT_MEMLOCK. libbpf_set_memlock_rlim_max() has to be
    called before the first bpf_prog_load(), bpf_btf_load(), or
    bpf_object__load() call, otherwise it has no effect and will return
    -EBUSY.

      [0] Closes: https://github.com/libbpf/libbpf/issues/369

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211214195904.1785155-2-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:48 +02:00
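What the auto-bump amounts to on pre-5.11 kernels can be sketched with plain setrlimit(2). This version only raises the soft limit to the existing hard cap, which needs no privileges; libbpf itself raises the limit to the value configured via libbpf_set_memlock_rlim_max(), typically with elevated capabilities:

```c
#include <sys/resource.h>

/* Raise the RLIMIT_MEMLOCK soft limit up to the current hard limit,
 * the unprivileged portion of what libbpf's auto-bump does.
 * Returns 0 on success, -1 on failure. */
static int bump_memlock_rlimit(void)
{
	struct rlimit rlim;

	if (getrlimit(RLIMIT_MEMLOCK, &rlim))
		return -1;
	rlim.rlim_cur = rlim.rlim_max;
	return setrlimit(RLIMIT_MEMLOCK, &rlim);
}
```

On memcg-accounting kernels (5.11+) libbpf detects the new behavior and skips this step entirely.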
Artem Savkov 8b7632b54c libbpf: Add sane strncpy alternative and use it internally
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 9fc205b413b3f3e9502fa92151fba63b91230454
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Fri Dec 10 16:40:43 2021 -0800

    libbpf: Add sane strncpy alternative and use it internally

    strncpy() has notoriously error-prone semantics which make GCC
    complain about it a lot (and quite often completely falsely at that).
    Instead of pleasing GCC all the time (-Wno-stringop-truncation is
    unfortunately only supported by GCC, so it's a bit too messy to just
    enable it in the Makefile), add a libbpf-internal libbpf_strlcpy() helper
    which follows what FreeBSD's strlcpy() does and what most people would
    expect from strncpy(): copy up to the first N-1 bytes of the source string
    into the destination string and ensure zero-termination afterwards.

    Replace all the relevant uses of strncpy/strncat/memcpy in libbpf with
    libbpf_strlcpy().

    This also fixes the issue reported by Emmanuel Deloget in xsk.c where
    memcpy() could access source string beyond its end.

    Fixes: 2f6324a393 ("libbpf: Support shared umems between queues and devices")
    Reported-by: Emmanuel Deloget <emmanuel.deloget@eho.link>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211211004043.2374068-1-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:48 +02:00
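The helper's semantics fit in a few lines. This is a plain-C rendition of what the commit describes (copy up to N-1 bytes, always zero-terminate, never read the source past the copied bytes); the real function is libbpf's internal libbpf_strlcpy(), and the name here is a stand-in:

```c
#include <stddef.h>

/* strlcpy-style copy: writes at most sz bytes including the '\0',
 * never reads src[sz - 1] or beyond (that slot is always '\0' in
 * dst), and leaves dst untouched when sz == 0. */
static void strlcpy_sane(char *dst, const char *src, size_t sz)
{
	size_t i;

	if (sz == 0)
		return;
	sz--;
	for (i = 0; i < sz && src[i]; i++)
		dst[i] = src[i];
	dst[i] = '\0';
}
```

Never touching src[sz-1] is exactly the property the "license" copy fix above relies on.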
Artem Savkov 6ebde64add libbpf: Deprecate bpf_object__load_xattr()
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Conflicts: upstream merge conflict resolved in be3158290db8

commit e7b924ca715f0d1c0be62b205c36c4076b335421
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Dec 9 11:38:36 2021 -0800

    libbpf: Deprecate bpf_object__load_xattr()

    Deprecate non-extensible bpf_object__load_xattr() in v0.8 ([0]).

    With log_level control through bpf_object_open_opts or
    bpf_program__set_log_level(), we are finally at the point where
    bpf_object__load_xattr() doesn't provide any functionality that can't be
    accessed through other (better) ways. The other feature,
    target_btf_path, is also controllable through bpf_object_open_opts.

      [0] Closes: https://github.com/libbpf/libbpf/issues/289

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-9-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:46 +02:00
Artem Savkov c38fb23da8 libbpf: Add per-program log buffer setter and getter
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit b3ce907950350a58880b94fed2b6022f160b8b9a
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Dec 9 11:38:35 2021 -0800

    libbpf: Add per-program log buffer setter and getter

    Allow setting a user-provided log buffer on a per-program basis ([0]).
    This gives a great deal of flexibility in terms of which programs are
    loaded with logging enabled and where the corresponding logs go.

    Log buffer set with bpf_program__set_log_buf() overrides kernel_log_buf
    and kernel_log_size settings set at bpf_object open time through
    bpf_object_open_opts, if any.

    Adjust bpf_object_load_prog_instance() logic to not perform its own log
    buf allocation and load retry if a custom log buffer is provided by the user.

      [0] Closes: https://github.com/libbpf/libbpf/issues/418

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-8-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:46 +02:00
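The override rule is simple precedence: a per-program buffer, when set, wins over the object-level one. A sketch with hypothetical types, not libbpf's internal structs:

```c
#include <stddef.h>

/* Hypothetical config mirroring the two places a log buffer can be
 * set: bpf_object_open_opts (object level) and
 * bpf_program__set_log_buf() (per program). */
struct log_cfg {
	char *obj_log_buf;  /* from bpf_object_open_opts; may be NULL */
	char *prog_log_buf; /* from bpf_program__set_log_buf(); may be NULL */
};

/* The per-program buffer overrides the object-level one when present. */
static char *effective_log_buf(const struct log_cfg *c)
{
	return c->prog_log_buf ? c->prog_log_buf : c->obj_log_buf;
}
```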
Artem Savkov 90bf4c8c73 libbpf: Preserve kernel error code and remove kprobe prog type guessing
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 2eda2145ebfc76569fd088f46356203fc0c785a1
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Dec 9 11:38:34 2021 -0800

    libbpf: Preserve kernel error code and remove kprobe prog type guessing

    Instead of rewriting the error code returned by the kernel on prog load
    with libbpf-specific variants, pass through the original error.

    There is now also no need to have a backup generic -LIBBPF_ERRNO__LOAD
    fallback error as bpf_prog_load() guarantees that errno will be properly
    set no matter what.

    Also drop the completely outdated and pretty useless BPF_PROG_TYPE_KPROBE
    guess logic. It's not necessary, nor is it helpful in modern BPF
    applications.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-7-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:46 +02:00
Artem Savkov 070dc775a8 libbpf: Improve logging around BPF program loading
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit ad9a7f96445b70c415d8e193f854321b110c890a
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Dec 9 11:38:33 2021 -0800

    libbpf: Improve logging around BPF program loading

    Add missing "prog '%s': " prefixes in a few places and consistently use
    markers for the beginning and end of program load logs. Here's an example
    of log output:

    libbpf: prog 'handler': BPF program load failed: Permission denied
    libbpf: -- BEGIN PROG LOAD LOG ---
    arg#0 reference type('UNKNOWN ') size cannot be determined: -22
    ; out1 = in1;
    0: (18) r1 = 0xffffc9000cdcc000
    2: (61) r1 = *(u32 *)(r1 +0)

    ...

    81: (63) *(u32 *)(r4 +0) = r5
     R1_w=map_value(id=0,off=16,ks=4,vs=20,imm=0) R4=map_value(id=0,off=400,ks=4,vs=16,imm=0)
    invalid access to map value, value_size=16 off=400 size=4
    R4 min value is outside of the allowed memory range
    processed 63 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
     -- END PROG LOAD LOG --
    libbpf: failed to load program 'handler'
    libbpf: failed to load object 'test_skeleton'

    The entire verifier log, including the BEGIN and END markers, is now
    always output during a single print callback call. This should make it
    much easier to post-process or parse, if necessary. It's not an explicit
    API guarantee, but it can be reasonably expected to stay like that.

    Also, __bpf_object__open is renamed to bpf_object_open() as it's always
    an adventure to find the exact function that implements bpf_object's
    open phase; so drop the double underscores and use the internal libbpf
    naming convention.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-6-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:45 +02:00
Artem Savkov 206be0d178 libbpf: Allow passing user log setting through bpf_object_open_opts
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit e0e3ea888c69b4ea17133b8ac8dfd5066a759b5a
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Dec 9 11:38:32 2021 -0800

    libbpf: Allow passing user log setting through bpf_object_open_opts

    Allow users to provide their own custom log_buf, log_size, and log_level
    at the bpf_object level through bpf_object_open_opts. This log_buf will
    be used during BTF loading. A subsequent patch will use the same log_buf
    during BPF program loading, unless overridden at the per-bpf_program level.

    When such a custom log_buf is provided, libbpf won't attempt to retry
    loading of BTF to provide its own log buffer to capture the kernel's
    error log output. The user is responsible for providing a big enough
    buffer, otherwise they run the risk of getting an -ENOSPC error from the
    bpf() syscall.

    See also comments in bpf_object_open_opts regarding log_level and
    log_buf interactions.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211209193840.1248570-5-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:45 +02:00
Artem Savkov bad77b29d8 libbpf: Reduce bpf_core_apply_relo_insn() stack usage.
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 78c1f8d0634cc35da613d844eda7c849fc50f643
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Fri Dec 3 10:28:36 2021 -0800

    libbpf: Reduce bpf_core_apply_relo_insn() stack usage.

    Reduce bpf_core_apply_relo_insn() stack usage and bump
    BPF_CORE_SPEC_MAX_LEN limit back to 64.

    Fixes: 29db4bea1d10 ("bpf: Prepare relo_core.c for kernel duty.")
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211203182836.16646-1-alexei.starovoitov@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:44 +02:00
Artem Savkov 77708bae55 libbpf: Add API to get/set log_level at per-program level
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit dbdd2c7f8cec2d09ae0e1bd707ae6050fa1c105f
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Dec 1 15:28:17 2021 -0800

    libbpf: Add API to get/set log_level at per-program level

    Add bpf_program__set_log_level() and bpf_program__log_level() to fetch
    and adjust the log_level sent during the BPF_PROG_LOAD command. This
    allows selectively requesting more or less verbose output in the BPF
    verifier log.

    Also bump libbpf version to 0.7 and make these APIs the first in v0.7.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201232824.3166325-3-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:44 +02:00
Artem Savkov d4eaff3be5 libbpf: Support init of inner maps in light skeleton.
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit be05c94476f3cf4fdc29feab4ed1053187323296
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Wed Dec 1 10:10:33 2021 -0800

    libbpf: Support init of inner maps in light skeleton.

    Add ability to initialize inner maps in light skeleton.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-11-alexei.starovoitov@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:43 +02:00
Artem Savkov 733fb7a65c libbpf: Use CO-RE in the kernel in light skeleton.
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit d0e928876e30b18411b80fd2445424bc00e95745
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Wed Dec 1 10:10:32 2021 -0800

    libbpf: Use CO-RE in the kernel in light skeleton.

    Without lskel the CO-RE relocations are processed by libbpf before any other
    work is done. Instead, when lskel is needed, remember the relocation as
    RELO_CORE kind. Then, when the loader prog is generated for a given bpf
    program, pass the CO-RE relos of that program to the gen loader via
    bpf_gen__record_relo_core(). The gen loader will remember them as-is and
    later pass them as-is into the kernel.

    The normal libbpf flow is to process CO-RE early before call relos happen. In
    case of gen_loader the core relos have to be added to other relos to be copied
    together when bpf static function is appended in different places to other main
    bpf progs. During the copy the append_subprog_relos() will adjust insn_idx for
    normal relos and for the RELO_CORE kind too. When that is done, each struct
    reloc_desc has correct relos for its specific main prog.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-10-alexei.starovoitov@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:43 +02:00
Artem Savkov ab2abba8a9 libbpf: Cleanup struct bpf_core_cand.
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 03d5b99138dd8c7bfb838396acb180bd515ebf06
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Dec 1 10:10:30 2021 -0800

    libbpf: Cleanup struct bpf_core_cand.

    Remove two redundant fields from struct bpf_core_cand.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-8-alexei.starovoitov@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:43 +02:00
Artem Savkov fb78b62c10 bpf: Define enum bpf_core_relo_kind as uapi.
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 46334a0cd21bed70d6f1ddef1464f75a0ebe1774
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Wed Dec 1 10:10:27 2021 -0800

    bpf: Define enum bpf_core_relo_kind as uapi.

    enum bpf_core_relo_kind is generated by llvm and processed by libbpf.
    It's a de-facto uapi.
    With CO-RE in the kernel the bpf_core_relo_kind values become uapi de-jure.
    Also rename them with BPF_CORE_ prefix to distinguish from conflicting names in
    bpf_core_read.h. The enums bpf_field_info_kind, bpf_type_id_kind,
    bpf_type_info_kind, and bpf_enum_value_kind pass different values from
    the bpf program into llvm.

    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211201181040.23337-5-alexei.starovoitov@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:42 +02:00
Artem Savkov e403be5e23 libbpf: Remove duplicate assignments
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit c291d0a4d169811898d723cfa5f1aa1fc60e607c
Author: Mehrdad Arshad Rad <arshad.rad@gmail.com>
Date:   Sun Nov 28 11:33:37 2021 -0800

    libbpf: Remove duplicate assignments

    The same assignment is already done where load_attr.attach_btf_id is
    initialized, so remove the duplicate.

    Signed-off-by: Mehrdad Arshad Rad <arshad.rad@gmail.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211128193337.10628-1-arshad.rad@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:41 +02:00
Artem Savkov 6cc4f84afa libbpf: Support static initialization of BPF_MAP_TYPE_PROG_ARRAY
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 341ac5ffc4bd859103899c876902caf07cc97ea4
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Sun Nov 28 22:16:32 2021 +0800

    libbpf: Support static initialization of BPF_MAP_TYPE_PROG_ARRAY

    Support static initialization of BPF_MAP_TYPE_PROG_ARRAY with a
    syntax similar to map-in-map initialization ([0]):

        SEC("socket")
        int tailcall_1(void *ctx)
        {
            return 0;
        }

        struct {
            __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
            __uint(max_entries, 2);
            __uint(key_size, sizeof(__u32));
            __array(values, int (void *));
        } prog_array_init SEC(".maps") = {
            .values = {
                [1] = (void *)&tailcall_1,
            },
        };

    Here's the relevant part of libbpf debug log showing what's
    going on with prog-array initialization:

    libbpf: sec '.relsocket': collecting relocation for section(3) 'socket'
    libbpf: sec '.relsocket': relo #0: insn #2 against 'prog_array_init'
    libbpf: prog 'entry': found map 0 (prog_array_init, sec 4, off 0) for insn #0
    libbpf: .maps relo #0: for 3 value 0 rel->r_offset 32 name 53 ('tailcall_1')
    libbpf: .maps relo #0: map 'prog_array_init' slot [1] points to prog 'tailcall_1'
    libbpf: map 'prog_array_init': created successfully, fd=5
    libbpf: map 'prog_array_init': slot [1] set to prog 'tailcall_1' fd=6

      [0] Closes: https://github.com/libbpf/libbpf/issues/354

    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211128141633.502339-2-hengqi.chen@gmail.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:41 +02:00
Artem Savkov 064409eb15 libbpf: Don't call libc APIs with NULL pointers
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 2a6a9bf26170b4e156c18706cd230934ebd2f95f
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Nov 23 16:23:16 2021 -0800

    libbpf: Don't call libc APIs with NULL pointers

    Sanitizer complains about qsort(), bsearch(), and memcpy() being called
    with NULL pointer. This can only happen when the associated number of
    elements is zero, so no harm should be done. But still prevent this from
    happening to keep sanitizer runs clean from extra noise.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211124002325.1737739-5-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:40 +02:00
Artem Savkov bf3557ee93 libbpf: Use bpf_map_create() consistently internally
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit a9606f405f2c8f24751b0a7326655a657a63ad60
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 24 11:32:31 2021 -0800

    libbpf: Use bpf_map_create() consistently internally

    Remove all the remaining uses of to-be-deprecated bpf_create_map*() APIs.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211124193233.3115996-3-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:39 +02:00
Artem Savkov 5cc1ba0a17 libbpf: Unify low-level map creation APIs w/ new bpf_map_create()
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 992c4225419a38663d6239bc2f525b4ac0429188
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 24 11:32:30 2021 -0800

    libbpf: Unify low-level map creation APIs w/ new bpf_map_create()

    Mark the entire zoo of low-level map creation APIs for deprecation in
    libbpf 0.7 ([0]) and introduce a new bpf_map_create() API that is
    OPTS-based (and thus future-proof) and matches the BPF_MAP_CREATE
    command name.

    While at it, ensure that gen_loader sends map_extra field. Also remove
    now unneeded btf_key_type_id/btf_value_type_id logic that libbpf is
    doing anyways.

      [0] Closes: https://github.com/libbpf/libbpf/issues/282
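A sketch of the unified OPTS-based call (assumes libbpf >= 0.7; actually creating the map at runtime needs CAP_BPF or root, and the map name "my_array" is a placeholder):

```c
#include <bpf/bpf.h>

int create_array_map(void)
{
	/* optional fields go into the extensible OPTS struct; pass NULL
	 * instead of &opts to take all defaults */
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		    .map_flags = BPF_F_MMAPABLE);

	return bpf_map_create(BPF_MAP_TYPE_ARRAY, "my_array",
			      sizeof(__u32),  /* key_size */
			      sizeof(__u64),  /* value_size */
			      256,            /* max_entries */
			      &opts);         /* negative error on failure */
}
```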

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20211124193233.3115996-2-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:39 +02:00
Artem Savkov 357dfc09dd libbpf: Change bpf_program__set_extra_flags to bpf_program__set_flags
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 8cccee9e91e19207671b94af40bacf7c1d2e74ef
Author: Florent Revest <revest@chromium.org>
Date:   Fri Nov 19 19:00:35 2021 +0100

    libbpf: Change bpf_program__set_extra_flags to bpf_program__set_flags

    bpf_program__set_extra_flags has just been introduced so we can still
    change it without breaking users.

    This new interface is a bit more flexible (for example if someone wants
    to clear a flag).
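The extra flexibility the message mentions, sketched below: because the setter now takes the full flag set, a caller can clear a bit as well as set one (`prog` is assumed to come from an already-opened bpf_object):

```c
#include <bpf/libbpf.h>
#include <linux/bpf.h>
#include <stdbool.h>

static void toggle_sleepable(struct bpf_program *prog, bool on)
{
	__u32 flags = bpf_program__flags(prog);

	if (on)
		flags |= BPF_F_SLEEPABLE;
	else
		flags &= ~BPF_F_SLEEPABLE; /* clearing a flag was not possible
					    * with set_extra_flags(), which
					    * only OR-ed bits in */
	bpf_program__set_flags(prog, flags);
}
```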

    Signed-off-by: Florent Revest <revest@chromium.org>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211119180035.1396139-1-revest@chromium.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:39 +02:00
Artem Savkov 48677686b6 libbpf: Add runtime APIs to query libbpf version
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 7615209f42a1976894cd0df97a380a034911656a
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Nov 18 09:40:54 2021 -0800

    libbpf: Add runtime APIs to query libbpf version

    Libbpf provided LIBBPF_MAJOR_VERSION and LIBBPF_MINOR_VERSION macros to
    check the libbpf version at compilation time. This doesn't cover all
    needs, though, because the version of libbpf that an application is
    compiled against doesn't necessarily match the version of libbpf at
    runtime, especially if libbpf is used as a shared library.

    Add libbpf_major_version() and libbpf_minor_version() returning major
    and minor versions, respectively, as integers. Also add a convenience
    libbpf_version_string() for various tooling using libbpf to print out
    libbpf version in a human-readable form. Currently it will return
    "v0.6", but in the future it can contains some extra information, so the
    format itself is not part of a stable API and shouldn't be relied upon.
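A sketch contrasting the compile-time macros with the new runtime queries (assumes libbpf >= 0.6, where these APIs landed):

```c
#include <bpf/libbpf.h>
#include <stdio.h>

int main(void)
{
	/* compile-time: version of the headers we were built against */
	printf("compiled against libbpf %d.%d\n",
	       LIBBPF_MAJOR_VERSION, LIBBPF_MINOR_VERSION);

	/* runtime: version of the library actually loaded (may differ
	 * when libbpf is a shared library) */
	printf("running against libbpf %u.%u (%s)\n",
	       libbpf_major_version(), libbpf_minor_version(),
	       libbpf_version_string()); /* string format is not stable API */
	return 0;
}
```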

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
    Link: https://lore.kernel.org/bpf/20211118174054.2699477-1-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:38 +02:00
Artem Savkov 88a2fca700 libbpf: Support BTF_KIND_TYPE_TAG
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 2dc1e488e5cdfd937554ca81fd46ad874d244b3f
Author: Yonghong Song <yhs@fb.com>
Date:   Thu Nov 11 17:26:14 2021 -0800

    libbpf: Support BTF_KIND_TYPE_TAG

    Add libbpf support for BTF_KIND_TYPE_TAG.

    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112012614.1505315-1-yhs@fb.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:36 +02:00
Artem Savkov be5634660e libbpf: Make perf_buffer__new() use OPTS-based interface
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 4178893465774f91dcd49465ae6f4e3cc036b7b2
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 10 21:36:20 2021 -0800

    libbpf: Make perf_buffer__new() use OPTS-based interface

    Add new variants of perf_buffer__new() and perf_buffer__new_raw() that
    use OPTS-based options for future extensibility ([0]). Given all the
    currently used API names are best fits, re-use them and use
    ___libbpf_override() approach and symbol versioning to preserve ABI and
    source code compatibility. struct perf_buffer_opts and struct
    perf_buffer_raw_opts are kept as well, but they are restructured such
    that they are OPTS-based when used with new APIs. For struct
    perf_buffer_raw_opts we keep a few fields intact, so we have to also
    preserve the memory location of them both when used as OPTS and for
    legacy API variants. This is achieved with anonymous padding for OPTS
    "incarnation" of the struct.  These pads can be eventually used for new
    options.

      [0] Closes: https://github.com/libbpf/libbpf/issues/311
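The new OPTS-based signature in use, as a sketch (`map_fd` is assumed to be a BPF_MAP_TYPE_PERF_EVENT_ARRAY fd obtained elsewhere):

```c
#include <bpf/libbpf.h>
#include <stdio.h>

static void on_sample(void *ctx, int cpu, void *data, __u32 size)
{
	printf("cpu %d: %u bytes\n", cpu, size);
}

static void on_lost(void *ctx, int cpu, __u64 cnt)
{
	fprintf(stderr, "cpu %d: lost %llu samples\n", cpu,
		(unsigned long long)cnt);
}

static struct perf_buffer *setup_pb(int map_fd)
{
	/* trailing NULL selects default opts; the OPTS struct stays
	 * extensible for future options */
	return perf_buffer__new(map_fd, 8 /* pages per CPU */,
				on_sample, on_lost,
				NULL /* ctx */, NULL /* opts */);
}
```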

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111053624.190580-6-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:35 +02:00
Artem Savkov e938153a3d libbpf: Add ability to get/set per-program load flags
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit a6ca71583137300f207343d5d950cb1c365ab911
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 10 21:17:57 2021 -0800

    libbpf: Add ability to get/set per-program load flags

    Add bpf_program__flags() API to retrieve prog_flags that will be (or
    were) supplied to BPF_PROG_LOAD command.

    Also add bpf_program__set_extra_flags() API to allow setting *extra*
    flags, in addition to those determined by program's SEC() definition.
    Such flags are logically OR'ed with libbpf-derived flags.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211111051758.92283-2-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:35 +02:00
Artem Savkov cd578f2bfe libbpf: Free up resources used by inner map definition
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 8f7b239ea8cfdc8e64c875ee417fed41431a1f37
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Sun Nov 7 08:55:14 2021 -0800

    libbpf: Free up resources used by inner map definition

    It's not enough to just free(map->inner_map), as inner_map itself can
    have extra memory allocated, like map name.

    Fixes: 646f02ffdd ("libbpf: Add BTF-defined map-in-map support")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
    Link: https://lore.kernel.org/bpf/20211107165521.9240-3-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:33 +02:00
Artem Savkov 668818db31 libbpf: Stop using to-be-deprecated APIs
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit bcc40fc0021d4b7c016f8bcf62bd4e21251fdee8
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 15:08:38 2021 -0700

    libbpf: Stop using to-be-deprecated APIs

    Remove all the internal uses of libbpf APIs that are slated to be
    deprecated in v0.7.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211103220845.2676888-6-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:33 +02:00
Artem Savkov 1f6cbaa965 libbpf: Remove internal use of deprecated bpf_prog_load() variants
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit e32660ac6fd6bd3c9d249644330d968c6ef61b07
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 15:08:37 2021 -0700

    libbpf: Remove internal use of deprecated bpf_prog_load() variants

    Remove all the internal uses of bpf_load_program_xattr(), which is
    slated for deprecation in v0.7.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211103220845.2676888-5-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:33 +02:00
Artem Savkov afbb64a94a libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit d10ef2b825cffd0807dd733fdfd6a5bea32270d7
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 15:08:36 2021 -0700

    libbpf: Unify low-level BPF_PROG_LOAD APIs into bpf_prog_load()

    Add a new unified OPTS-based low-level API for program loading,
    bpf_prog_load() ([0]).  bpf_prog_load() accepts a few "mandatory"
    parameters as input arguments (program type, name, license,
    instructions) and all the other optional (as in not required to specify
    for all types of BPF programs) fields into struct bpf_prog_load_opts.

    This makes all the other non-extensible APIs variant for BPF_PROG_LOAD
    obsolete and they are slated for deprecation in libbpf v0.7:
      - bpf_load_program();
      - bpf_load_program_xattr();
      - bpf_verify_program().

    Implementation-wise, internal helper libbpf__bpf_prog_load is refactored
    to become a public bpf_prog_load() API. struct bpf_prog_load_params used
    internally is replaced by public struct bpf_prog_load_opts.

    Unfortunately, while conceptually all this is pretty straightforward,
    the biggest complication comes from the already existing bpf_prog_load()
    *high-level* API, which has nothing to do with BPF_PROG_LOAD command.

    We try really hard to have a new API named bpf_prog_load(), though,
    because it maps naturally to BPF_PROG_LOAD command.

    For that, we rename old bpf_prog_load() into bpf_prog_load_deprecated()
    and mark it as COMPAT_VERSION() for shared library users compiled
    against old version of libbpf. Statically linked users and shared lib
    users compiled against new version of libbpf headers will get "rerouted"
    to bpf_prog_load_deprecated() through a macro helper that decides whether to
    use new or old bpf_prog_load() based on number of input arguments (see
    ___libbpf_overload in libbpf_common.h).

    To test that existing
    bpf_prog_load()-using code compiles and works as expected, I've compiled
    and ran selftests as is. I had to remove (locally) selftest/bpf/Makefile
    -Dbpf_prog_load=bpf_prog_test_load hack because it was conflicting with
    the macro-based overload approach. I don't expect anyone else to do
    something like this in practice, though. This is a testing-specific way to
    replace bpf_prog_load() calls with special testing variant of it, which
    adds extra prog_flags value. After testing I kept this selftests hack,
    but ensured that we use a new bpf_prog_load_deprecated name for this.

    This patch also marks bpf_prog_load() and bpf_prog_load_xattr() as deprecated.
    bpf_object interface has to be used for working with struct bpf_program.
    Libbpf doesn't support loading just a bpf_program.

    The silver lining is that when we get to libbpf 1.0 all these
    complication will be gone and we'll have one clean bpf_prog_load()
    low-level API with no backwards compatibility hackery surrounding it.

      [0] Closes: https://github.com/libbpf/libbpf/issues/284
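A sketch of the new low-level API loading a trivial "return 0" socket filter (assumes libbpf >= 0.7; actually loading the program at runtime needs CAP_BPF or root, and the program name "trivial" is a placeholder):

```c
#include <bpf/bpf.h>
#include <linux/bpf.h>

int load_trivial_prog(void)
{
	struct bpf_insn insns[] = {
		{ .code = BPF_ALU64 | BPF_MOV | BPF_K,
		  .dst_reg = BPF_REG_0, .imm = 0 },  /* r0 = 0 */
		{ .code = BPF_JMP | BPF_EXIT },      /* exit */
	};
	/* all non-mandatory fields (log_buf, expected_attach_type, ...)
	 * live in the extensible OPTS struct; NULL works for defaults */
	LIBBPF_OPTS(bpf_prog_load_opts, opts);

	return bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "trivial", "GPL",
			     insns, sizeof(insns) / sizeof(insns[0]), &opts);
}
```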

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211103220845.2676888-4-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:33 +02:00
Artem Savkov 1998aafbd6 libbpf: Deprecate bpf_program__load() API
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit be2f2d1680dfb36793ea8d3110edd4a1db496352
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Nov 2 22:14:49 2021 -0700

    libbpf: Deprecate bpf_program__load() API

    Mark bpf_program__load() as deprecated ([0]) since v0.6. Also rename few
    internal program loading bpf_object helper functions to have more
    consistent naming.

      [0] Closes: https://github.com/libbpf/libbpf/issues/301

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211103051449.1884903-1-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:32 +02:00
Artem Savkov adcf0c45dc libbpf: Improve ELF relo sanitization
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit b7332d2820d394dd2ac127df1567b4da597355a1
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 10:32:13 2021 -0700

    libbpf: Improve ELF relo sanitization

    Add a few sanity checks for relocations to prevent div-by-zero and
    out-of-bounds array accesses in libbpf.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211103173213.1376990-6-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:32 +02:00
Artem Savkov 19ac4329ff libbpf: Validate that .BTF and .BTF.ext sections contain data
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 62554d52e71797eefa3fc15b54008038837bb2d4
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 10:32:11 2021 -0700

    libbpf: Validate that .BTF and .BTF.ext sections contain data

    .BTF and .BTF.ext ELF sections should have SHT_PROGBITS type and contain
    data. If they are not, ELF is invalid or corrupted, so bail out.
    Otherwise this can lead to data->d_buf being NULL and SIGSEGV later on.
    Reported by oss-fuzz project.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211103173213.1376990-4-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:32 +02:00
Artem Savkov 6bcd55cc44 libbpf: Improve sanity checking during BTF fix up
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 88918dc12dc357a06d8d722a684617b1c87a4654
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 10:32:10 2021 -0700

    libbpf: Improve sanity checking during BTF fix up

    If BTF is corrupted DATASEC's variable type ID might be incorrect.
    Prevent this easy-to-detect situation with an extra NULL check.
    Reported by oss-fuzz project.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211103173213.1376990-3-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:32 +02:00
Artem Savkov 4bd054eb34 libbpf: Detect corrupted ELF symbols section
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 833907876be55205d0ec153dcd819c014404ee16
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 10:32:09 2021 -0700

    libbpf: Detect corrupted ELF symbols section

    Prevent divide-by-zero if ELF is corrupted and has zero sh_entsize.
    Reported by oss-fuzz project.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211103173213.1376990-2-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:32 +02:00
Yauheni Kaliuta b4be21a662 libbpf: Deprecate bpf_objects_list
Bugzilla: http://bugzilla.redhat.com/2069045

commit 689624f037ce219d42312534eff4dc470b54dec4
Author: Joe Burton <jevburton@google.com>
Date:   Tue Oct 26 22:35:28 2021 +0000

    libbpf: Deprecate bpf_objects_list
    
    Add a flag to `enum libbpf_strict_mode' to disable the global
    `bpf_objects_list', preventing race conditions when concurrent threads
    call bpf_object__open() or bpf_object__close().
    
    bpf_object__next() will return NULL if this option is set.
    
    Callers may achieve the same workflow by tracking bpf_objects in
    application code.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/293
    
    Signed-off-by: Joe Burton <jevburton@google.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211026223528.413950-1-jevburton.kernel@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:54 +03:00
Yauheni Kaliuta 87c8788979 libbpf: Add ability to fetch bpf_program's underlying instructions
Bugzilla: http://bugzilla.redhat.com/2069045

commit 65a7fa2e4e5381d205d3b0098da0fc8471fbbfb6
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Oct 25 15:45:29 2021 -0700

    libbpf: Add ability to fetch bpf_program's underlying instructions
    
    Add APIs providing read-only access to bpf_program BPF instructions ([0]).
    This is useful for diagnostics purposes, but it also allows a cleaner
    support for cloning BPF programs after libbpf did all the FD resolution
    and CO-RE relocations, subprog instructions appending, etc. Currently,
    cloning BPF program is possible only through hijacking a half-broken
    bpf_program__set_prep() API, which doesn't really work well for anything
    but most primitive programs. For instance, set_prep() API doesn't allow
    adjusting BPF program load parameters which are necessary for loading
    fentry/fexit BPF programs (the case where BPF program cloning is
    a necessity if doing some sort of mass-attachment functionality).
    
    Given bpf_program__set_prep() API is set to be deprecated, having
    a cleaner alternative is a must. libbpf internally already keeps track
    of linear array of struct bpf_insn, so it's not hard to expose it. The
    only gotcha is that libbpf previously freed instructions array during
    bpf_object load time, which would make this API much less useful overall,
    because in between bpf_object__open() and bpf_object__load() a lot of
    changes to instructions are done by libbpf.
    
    So this patch makes libbpf hold onto prog->insns array even after BPF
    program loading. I think this is a small price for added functionality
    and improved introspection of BPF program code.
    
    See retsnoop PR ([1]) for how it can be used in practice and code
    savings compared to relying on bpf_program__set_prep().
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/298
      [1] https://github.com/anakryiko/retsnoop/pull/1
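A sketch of the read-only accessors this commit adds; `prog` is assumed to come from an already-opened (and, thanks to this patch, even already-loaded) bpf_object:

```c
#include <bpf/libbpf.h>
#include <linux/bpf.h>
#include <stdio.h>

static void dump_insns(const struct bpf_program *prog)
{
	const struct bpf_insn *insns = bpf_program__insns(prog);
	size_t i, cnt = bpf_program__insn_cnt(prog);

	/* instructions reflect libbpf's post-processing: FD resolution,
	 * CO-RE relocations, appended subprogs, etc. */
	for (i = 0; i < cnt; i++)
		printf("insn %zu: code=0x%02x dst=r%d src=r%d imm=%d\n",
		       i, insns[i].code, insns[i].dst_reg,
		       insns[i].src_reg, insns[i].imm);
}
```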
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211025224531.1088894-3-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:54 +03:00
Yauheni Kaliuta 45b5b924d3 libbpf: Add "bool skipped" to struct bpf_map
Bugzilla: http://bugzilla.redhat.com/2069045

commit 229fae38d0fc0d6ff58d57cbeb1432da55e58d4f
Author: Shuyi Cheng <chengshuyi@linux.alibaba.com>
Date:   Fri Dec 10 17:39:57 2021 +0800

    libbpf: Add "bool skipped" to struct bpf_map
    
    Fix error: "failed to pin map: Bad file descriptor, path:
    /sys/fs/bpf/_rodata_str1_1."
    
    In the old kernel, the global data map will not be created, see [0]. So
    we should skip the pinning of the global data map to avoid
    bpf_object__pin_maps returning error. Therefore, when the map is not
    created, we mark "map->skipped" as true and then check during relocation
    and during pinning.
    
    Fixes: 16e0c35c6f7a ("libbpf: Load global data maps lazily on legacy kernels")
    Signed-off-by: Shuyi Cheng <chengshuyi@linux.alibaba.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:54 +03:00
Yauheni Kaliuta a6c86b2eff libbpf: Use probe_name for legacy kprobe
Bugzilla: http://bugzilla.redhat.com/2069045

commit 71cff670baff5cc6a6eeb0181e2cc55579c5e1e0
Author: Qiang Wang <wangqiang.wq.frank@bytedance.com>
Date:   Mon Dec 27 21:07:12 2021 +0800

    libbpf: Use probe_name for legacy kprobe
    
    Fix a bug in commit 46ed5fc33db9, which wrongly used the
    func_name instead of probe_name to register legacy kprobe.
    
    Fixes: 46ed5fc33db9 ("libbpf: Refactor and simplify legacy kprobe code")
    Co-developed-by: Chengming Zhou <zhouchengming@bytedance.com>
    Signed-off-by: Qiang Wang <wangqiang.wq.frank@bytedance.com>
    Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Tested-by: Hengqi Chen <hengqi.chen@gmail.com>
    Reviewed-by: Hengqi Chen <hengqi.chen@gmail.com>
    Link: https://lore.kernel.org/bpf/20211227130713.66933-1-wangqiang.wq.frank@bytedance.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:53 +03:00
Yauheni Kaliuta 1a99baba3a libbpf: Perform map fd cleanup for gen_loader in case of error
Bugzilla: http://bugzilla.redhat.com/2069045

commit ba05fd36b8512d6aeefe9c2c5b6a25b726c4bfff
Author: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Date:   Sat Nov 13 04:50:22 2021 +0530

    libbpf: Perform map fd cleanup for gen_loader in case of error
    
    Alexei reported a fd leak issue in gen loader (when invoked from
    bpftool) [0]. When adding ksym support, map fd allocation was moved from
    stack to loader map, however I missed closing these fds (relevant when
    cleanup label is jumped to on error). For the success case, the
    allocated fd is returned in loader ctx, hence this problem is not
    noticed.
    
    Make three changes. First, use MAX_USED_MAPS instead of MAX_USED_PROGS in
    MAX_FD_ARRAY_SZ; the braino was not a problem until now for this case as
    we didn't try to close map fds (otherwise use of it would have tried
    closing 32 additional fds in ksym btf fd range). Then, do a cleanup for
    all nr_maps fds in cleanup label code, so that in case of error all
    temporary map fds from bpf_gen__map_create are closed.
    
    Then, adjust the cleanup label to only generate code for the required
    number of program and map fds.  To trim code for remaining program
    fds, lay out prog_fd array in stack in the end, so that we can
    directly skip the remaining instances.  Still stack size remains same,
    since changing that would require changes in a lot of places
    (including adjustment of stack_off macro), so nr_progs_sz variable is
    only used to track required number of iterations (and jump over
    cleanup size calculated from that), stack offset calculation remains
    unaffected.
    
    The difference for test_ksyms_module.o is as follows:
    libbpf: //prog cleanup iterations: before = 34, after = 5
    libbpf: //maps cleanup iterations: before = 64, after = 2
    
    Also, move allocation of gen->fd_array offset to bpf_gen__init. Since
    offset can now be 0, and we already continue even if add_data returns 0
    in case of failure, we do not need to distinguish between 0 offset and
    failure case 0, as we rely on bpf_gen__finish to check errors. We can
    also skip check for gen->fd_array in add_*_fd functions, since
    bpf_gen__init will take care of it.
    
      [0]: https://lore.kernel.org/bpf/CAADnVQJ6jSitKSNKyxOrUzwY2qDRX0sPkJ=VLGHuCLVJ=qOt9g@mail.gmail.com
    
    Fixes: 18f4fccbf314 ("libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations")
    Reported-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211112232022.899074-1-memxor@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:53 +03:00
Yauheni Kaliuta 50fb7a53da libbpf: Fix section counting logic
Bugzilla: http://bugzilla.redhat.com/2069045

commit 0d6988e16a12ebd41d3e268992211b0ceba44ed7
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 10:32:12 2021 -0700

    libbpf: Fix section counting logic
    
    e_shnum does include section #0 and as such is exactly the number of ELF
    sections that we need to allocate memory for to use section indices as
    array indices. Fix the off-by-one error.
    
    This is a purely accounting fix: previously we were overallocating one
    array item too many, but there were no correctness errors otherwise.
    
    Fixes: 25bbbd7a444b ("libbpf: Remove assumptions about uniqueness of .rodata/.data/.bss maps")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20211103173213.1376990-5-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:53 +03:00
Yauheni Kaliuta b745876b90 libbpf: Load global data maps lazily on legacy kernels
Bugzilla: http://bugzilla.redhat.com/2069045

commit 16e0c35c6f7a2e90d52f3035ecf942af21417b7b
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Nov 23 12:01:04 2021 -0800

    libbpf: Load global data maps lazily on legacy kernels
    
    Load global data maps lazily, if kernel is too old to support global
    data. Make sure that programs are still correct by detecting if any of
    the to-be-loaded programs have relocation against any of such maps.
    
    This solves the issue ([0]) with bpf_printk() and Clang generating
    unnecessary and unreferenced .rodata.strX.Y sections, but it also goes
    further along the CO-RE lines, making it possible to have a BPF object
    in which some code works on very old kernels and relies only on
    explicit BPF maps, while other BPF programs can enjoy global variable
    support. If such programs are correctly set not to load at runtime on
    old kernels, bpf_object will now load and function correctly.
    
      [0] https://lore.kernel.org/bpf/CAK-59YFPU3qO+_pXWOH+c1LSA=8WA1yabJZfREjOEXNHAqgXNg@mail.gmail.com/
    
    Fixes: aed659170a31 ("libbpf: Support multiple .rodata.* and .data.* BPF maps")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211123200105.387855-1-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:53 +03:00
Yauheni Kaliuta a18005d896 libbpf: Use O_CLOEXEC uniformly when opening fds
Bugzilla: http://bugzilla.redhat.com/2069045

commit 92274e24b01b331ef7a4227135933e6163fe94aa
Author: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Date:   Thu Oct 28 12:04:58 2021 +0530

    libbpf: Use O_CLOEXEC uniformly when opening fds
    
    There are some instances where we don't use O_CLOEXEC when opening an
    fd, fix these up. Otherwise, it is possible that a parallel fork causes
    these fds to leak into a child process on execve.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211028063501.2239335-6-memxor@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:48 +03:00
Yauheni Kaliuta 541e80729c libbpf: Add typeless ksym support to gen_loader
Bugzilla: http://bugzilla.redhat.com/2069045

commit c24941cd3766b6de682dbe6809bd6af12271ab5b
Author: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Date:   Thu Oct 28 12:04:55 2021 +0530

    libbpf: Add typeless ksym support to gen_loader
    
    This uses the bpf_kallsyms_lookup_name helper added in previous patches
    to relocate typeless ksyms. The return value ENOENT can be ignored, and
    the value written to 'res' can be directly stored to the insn, as it is
    overwritten to 0 on lookup failure. For repeating symbols, we can simply
    copy the previously populated bpf_insn.
    
    Also, we need to take care to not close fds for typeless ksym_desc, so
    reuse the 'off' member's space to add a marker for typeless ksym and use
    that to skip them in cleanup_relos.
    
    We add an emit_ksym_relo_log helper that avoids duplicating common
    logging instructions between typeless and weak ksyms (for a future
    commit).
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211028063501.2239335-3-memxor@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:48 +03:00
Yauheni Kaliuta cb058f57c3 libbpf: Add "map_extra" as a per-map-type extra flag
Bugzilla: http://bugzilla.redhat.com/2069045

commit 47512102cde2d252d7b984d9675cfd3420b48ad9
Author: Joanne Koong <joannekoong@fb.com>
Date:   Wed Oct 27 16:45:01 2021 -0700

    libbpf: Add "map_extra" as a per-map-type extra flag
    
    This patch adds the libbpf infrastructure for supporting a
    per-map-type "map_extra" field, whose definition will be
    idiosyncratic depending on map type.
    
    For example, for the bloom filter map, the lower 4 bits of
    map_extra are used to denote the number of hash functions.
    
    Please note that until libbpf 1.0 is here, the
    "bpf_create_map_params" struct is used as a temporary
    means for propagating the map_extra field to the kernel.
    
    Signed-off-by: Joanne Koong <joannekoong@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211027234504.30744-3-joannekoong@fb.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:48 +03:00
Yauheni Kaliuta 87f52db921 libbpf: Use __BYTE_ORDER__
Bugzilla: http://bugzilla.redhat.com/2069045

commit 3930198dc9a0720a2b6561c67e55859ec51b73f9
Author: Ilya Leoshkevich <iii@linux.ibm.com>
Date:   Tue Oct 26 03:08:27 2021 +0200

    libbpf: Use __BYTE_ORDER__
    
    Use the compiler-defined __BYTE_ORDER__ instead of the libc-defined
    __BYTE_ORDER for consistency.
    
    Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211026010831.748682-3-iii@linux.ibm.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:46 +03:00
Yauheni Kaliuta a2bdc8e67e libbpf: Deprecate multi-instance bpf_program APIs
Bugzilla: http://bugzilla.redhat.com/2069045

commit e21d585cb3db65a207cd338c74b9886090ef1ceb
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Oct 25 15:45:30 2021 -0700

    libbpf: Deprecate multi-instance bpf_program APIs
    
    Schedule deprecation of a set of APIs that are related to multi-instance
    bpf_programs:
      - bpf_program__set_prep() ([0]);
      - bpf_program__{set,unset}_instance() ([1]);
      - bpf_program__nth_fd().
    
    These APIs are obscure, very niche, and don't seem to be used much in
    practice. bpf_program__set_prep() is pretty useless for anything but
    the simplest BPF programs, as it doesn't allow adjusting BPF program
    load attributes, among other things. In short, it has already
    bitrotted and will bitrot some more if not removed.
    
    With bpf_program__insns() API, which gives access to post-processed BPF
    program instructions of any given entry-point BPF program, it's now
    possible to do whatever necessary adjustments were possible with
    set_prep() API before, but also more. Given any such use case is
    automatically an advanced use case, requiring users to stick to
    low-level bpf_prog_load() APIs and managing their own prog FDs is
    reasonable.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/299
      [1] Closes: https://github.com/libbpf/libbpf/issues/300
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211025224531.1088894-4-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:46 +03:00
Yauheni Kaliuta 924e42453c libbpf: Use func name when pinning programs with LIBBPF_STRICT_SEC_NAME
Bugzilla: http://bugzilla.redhat.com/2069045

commit a77f879ba1178437e5df87090165e5a45a91ba7f
Author: Stanislav Fomichev <sdf@google.com>
Date:   Thu Oct 21 14:48:12 2021 -0700

    libbpf: Use func name when pinning programs with LIBBPF_STRICT_SEC_NAME
    
    We can't use section names anymore because they are not unique, and
    pinning objects with multiple programs sharing the same
    progtype/secname will fail.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/273
    
    Fixes: 33a2c75c55 ("libbpf: add internal pin_name")
    Signed-off-by: Stanislav Fomichev <sdf@google.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Reviewed-by: Quentin Monnet <quentin@isovalent.com>
    Link: https://lore.kernel.org/bpf/20211021214814.1236114-2-sdf@google.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:45 +03:00
Yauheni Kaliuta b16b2c8c85 libbpf: Add btf__type_cnt() and btf__raw_data() APIs
Bugzilla: http://bugzilla.redhat.com/2069045

commit 6a886de070fad850d6cb74a787c9ed017303d9ac
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Fri Oct 22 21:06:19 2021 +0800

    libbpf: Add btf__type_cnt() and btf__raw_data() APIs
    
    Add btf__type_cnt() and btf__raw_data() APIs and deprecate
    btf__get_nr_type() and btf__get_raw_data() since the old APIs
    don't follow the libbpf naming convention for getters which
    omit 'get' in the name (see [0]). btf__raw_data() is just an
    alias to the existing btf__get_raw_data(). btf__type_cnt()
    now returns the number of all types of the BTF object
    including 'void'.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/279
    
    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211022130623.1548429-2-hengqi.chen@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:44 +03:00
Yauheni Kaliuta b7085850a5 libbpf: Deprecate btf__finalize_data() and move it into libbpf.c
Bugzilla: http://bugzilla.redhat.com/2069045

commit b96c07f3b5ae6944eb52fd96a322340aa80aef5d
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:43:55 2021 -0700

    libbpf: Deprecate btf__finalize_data() and move it into libbpf.c
    
    There isn't a good use case where anyone but libbpf itself needs to call
    btf__finalize_data(). It was implemented for internal use and it's not
    clear why it was made into public API in the first place. To function, it
    requires active ELF data, which is stored inside bpf_object for the
    duration of opening phase only. But the only BTF that needs bpf_object's
    ELF is that bpf_object's BTF itself, which libbpf fixes up automatically
    during bpf_object__open() operation anyways. There is no need for any
    additional fix up and no reasonable scenario where it's useful and
    appropriate.
    
    Thus, btf__finalize_data() is just an API atavism and is better removed.
    So this patch marks it as deprecated immediately (v0.6+) and moves the
    code from btf.c into libbpf.c where it's used in the context of
    bpf_object opening phase. Such code co-location makes the code
    structure more straightforward and allows removing the
    bpf_object__section_size() and bpf_object__variable_offset() internal
    helpers from libbpf_internal.h, making them static. Their naming is
    also adjusted to avoid the wrong impression that they are some sort of
    method of bpf_object; they are internal helpers and are named
    accordingly.
    
    This is part of libbpf 1.0 effort ([0]).
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/276
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-2-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:44 +03:00
Yauheni Kaliuta 8763090f64 libbpf: Simplify look up by name of internal maps
Bugzilla: http://bugzilla.redhat.com/2069045

commit 26071635ac5ecd8276bf3bdfc3ea1128c93ac722
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:44:03 2021 -0700

    libbpf: Simplify look up by name of internal maps
    
    The map name assigned to internal maps (.rodata, .data, .bss, etc.)
    consists of a small prefix of the bpf_object's name and the ELF
    section name as a suffix. This makes it hard for users to "guess" the
    name to use when looking a map up by name with the
    bpf_object__find_map_by_name() API.
    
    One proposal was to drop object name prefix from the map name and just
    use ".rodata", ".data", etc, names. One downside called out was that
    when multiple BPF applications are active on the host, it will be hard
    to distinguish between multiple instances of .rodata and know which BPF
    object (app) they belong to. Having the first few characters, while
    quite limiting, can still give a bit of a clue, in general.
    
    Note, though, that btf_value_type_id for such global data maps (ARRAY)
    points to DATASEC type, which encodes full ELF name, so tools like
    bpftool can take advantage of this fact to "recover" full original name
    of the map. This is also the reason why for custom .data.* and .rodata.*
    maps libbpf uses only their ELF names and doesn't prepend object name at
    all.
    
    Another downside of such an approach is that it is not backwards
    compatible: besides breaking direct use of the
    bpf_object__find_map_by_name() API, it would break any BPF skeleton
    generated using bpftool that was compiled with an older libbpf
    version.
    
    Instead of causing all this pain, libbpf will still generate map name
    using a combination of object name and ELF section name, but it will
    allow looking such maps up by their natural names, which correspond to
    their respective ELF section names. This means non-truncated ELF section
    names longer than 15 characters are going to be expected and supported.
    
    With such set up, we get the best of both worlds: leave small bits of
    a clue about the BPF application that instantiated such maps, as well
    as making it easy for user apps to look up such maps at runtime. In
    this
    sense it closes corresponding libbpf 1.0 issue ([0]).
    
    BPF skeletons will continue using full names for lookups.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/275
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-10-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:44 +03:00
Yauheni Kaliuta 6bdc12eca0 libbpf: Support multiple .rodata.* and .data.* BPF maps
Bugzilla: http://bugzilla.redhat.com/2069045

commit aed659170a3171e425913ae259d46396fb9c10ef
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:44:01 2021 -0700

    libbpf: Support multiple .rodata.* and .data.* BPF maps
    
    Add support for having multiple .rodata and .data data sections ([0]).
    .rodata/.data are supported like the usual, but now also
    .rodata.<whatever> and .data.<whatever> are also supported. Each such
    section will get its own backing BPF_MAP_TYPE_ARRAY, just like
    .rodata and .data.
    
    Multiple .bss maps are not supported, as the whole '.bss' name is
    confusing and might be deprecated soon, as well as user would need to
    specify custom ELF section with SEC() attribute anyway, so might as well
    stick to just .data.* and .rodata.* convention.
    
    User-visible map name for such new maps is going to be just their ELF
    section names.
    
      [0] https://github.com/libbpf/libbpf/issues/274
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-8-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:44 +03:00
Yauheni Kaliuta 1ee776b632 libbpf: Remove assumptions about uniqueness of .rodata/.data/.bss maps
Bugzilla: http://bugzilla.redhat.com/2069045

commit 25bbbd7a444b1624000389830d46ffdc5b809ee8
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:43:58 2021 -0700

    libbpf: Remove assumptions about uniqueness of .rodata/.data/.bss maps
    
    Remove internal libbpf assumption that there can be only one .rodata,
    .data, and .bss map per BPF object. To achieve that, extend and
    generalize the scheme that was used for keeping track of relocation ELF
    sections. Now each ELF section has a temporary extra index that keeps
    track of logical type of ELF section (relocations, data, read-only data,
    BSS). Switch relocation to this scheme, as well as .rodata/.data/.bss
    handling.
    
    We don't yet allow multiple .rodata, .data, and .bss sections, but no
    libbpf internal code now assumes that there can be only one of each,
    or that they can be explicitly referenced by a single index. Next
    patches will actually allow multiple .rodata and .data sections.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-5-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:43 +03:00
Yauheni Kaliuta 3100283a2a libbpf: Use Elf64-specific types explicitly for dealing with ELF
Bugzilla: http://bugzilla.redhat.com/2069045

commit ad23b7238474c6319bf692ae6ce037d9696df1d1
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:43:57 2021 -0700

    libbpf: Use Elf64-specific types explicitly for dealing with ELF
    
    Minimize the usage of class-agnostic gelf_xxx() APIs from libelf. These
    APIs require copying ELF data structures into local GElf_xxx structs and
    have a more cumbersome API. BPF ELF file is defined to be always 64-bit
    ELF object, even when intended to be run on 32-bit host architectures,
    so there is no need to do class-agnostic conversions everywhere. BPF
    static linker implementation within libbpf has been using Elf64-specific
    types since initial implementation.
    
    Add two simple helpers, elf_sym_by_idx() and elf_rel_by_idx(), for more
    succinct direct access to ELF symbol and relocation records within ELF
    data itself and switch all the GElf_xxx usage into Elf64_xxx
    equivalents. The only remaining place within libbpf.c that's still using
    gelf API is gelf_getclass(), as there doesn't seem to be a direct way to
    get underlying ELF bitness.
    
    No functional changes intended.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-4-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:43 +03:00
Yauheni Kaliuta 601b9a6a38 libbpf: Extract ELF processing state into separate struct
Bugzilla: http://bugzilla.redhat.com/2069045

commit 29a30ff501518a49282754909543cef1ef49e4bc
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 20 18:43:56 2021 -0700

    libbpf: Extract ELF processing state into separate struct
    
    Name currently anonymous internal struct that keeps ELF-related state
    for bpf_object. Just a bit of clean up, no functional changes.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211021014404.2635234-3-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:43 +03:00
Yauheni Kaliuta 0f8f1bc096 libbpf: Migrate internal use of bpf_program__get_prog_info_linear
Bugzilla: http://bugzilla.redhat.com/2069045

commit ebc7b50a3849d73665013573cf3c09f27fb14fde
Author: Dave Marchevsky <davemarchevsky@fb.com>
Date:   Mon Oct 11 01:20:28 2021 -0700

    libbpf: Migrate internal use of bpf_program__get_prog_info_linear
    
    In preparation for bpf_program__get_prog_info_linear deprecation, move
    the single use in libbpf to call bpf_obj_get_info_by_fd directly.
    
    Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211011082031.4148337-2-davemarchevsky@fb.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:42 +03:00
Yauheni Kaliuta fdbdd94ffe bpf: Rename BTF_KIND_TAG to BTF_KIND_DECL_TAG
Bugzilla: http://bugzilla.redhat.com/2069045

commit 223f903e9c832699f4e5f422281a60756c1c6cfe
Author: Yonghong Song <yhs@fb.com>
Date:   Tue Oct 12 09:48:38 2021 -0700

    bpf: Rename BTF_KIND_TAG to BTF_KIND_DECL_TAG
    
    Patch set [1] introduced BTF_KIND_TAG to allow tagging
    declarations for struct/union, struct/union field, var, func
    and func arguments and these tags will be encoded into
    dwarf. They are also encoded to btf by llvm for the bpf target.
    
    After BTF_KIND_TAG is introduced, we intended to use it
    for kernel __user attributes. But kernel __user is actually
    a type attribute. Upstream and internal discussion showed
    it is not a good idea to mix declaration attribute and
    type attribute. So we proposed to introduce btf_type_tag
    as a type attribute and existing btf_tag renamed to
    btf_decl_tag ([2]).
    
    This patch renamed BTF_KIND_TAG to BTF_KIND_DECL_TAG and some
    other declarations with *_tag to *_decl_tag to make it clear
    the tag is for declaration. In the future, BTF_KIND_TYPE_TAG
    might be introduced per [3].
    
     [1] https://lore.kernel.org/bpf/20210914223004.244411-1-yhs@fb.com/
     [2] https://reviews.llvm.org/D111588
     [3] https://reviews.llvm.org/D111199
    
    Fixes: b5ea834dde6b ("bpf: Support for new btf kind BTF_KIND_TAG")
    Fixes: 5b84bd10363e ("libbpf: Add support for BTF_KIND_TAG")
    Fixes: 5c07f2fec003 ("bpftool: Add support for BTF_KIND_TAG")
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211012164838.3345699-1-yhs@fb.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:42 +03:00
Yauheni Kaliuta 15affa3ba5 libbpf: Support detecting and attaching of writable tracepoint program
Bugzilla: http://bugzilla.redhat.com/2069045

commit ccaf12d6215a56836472db220520cda8024d6c4f
Author: Hou Tao <houtao1@huawei.com>
Date:   Mon Oct 4 17:48:56 2021 +0800

    libbpf: Support detecting and attaching of writable tracepoint program
    
    Program on writable tracepoint is BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE,
    but its attachment is the same as BPF_PROG_TYPE_RAW_TRACEPOINT.
    
    Signed-off-by: Hou Tao <houtao1@huawei.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211004094857.30868-3-hotforest@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:41 +03:00
Yauheni Kaliuta d4df645ec1 libbpf: Deprecate bpf_object__unload() API since v0.6
Bugzilla: http://bugzilla.redhat.com/2069045

commit 4a404a7e8a3902fc560527241a611186605efb4e
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Sun Oct 3 00:10:00 2021 +0800

    libbpf: Deprecate bpf_object__unload() API since v0.6
    
    BPF objects are not reloadable after unload. Users are expected to use
    bpf_object__close() to unload and free up resources in one operation.
    No need to expose bpf_object__unload() as a public API, deprecate it
    ([0]).  Add bpf_object__unload() as an alias to internal
    bpf_object_unload() and replace all bpf_object__unload() uses to avoid
    compilation errors.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/290
    
    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211002161000.3854559-1-hengqi.chen@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:39 +03:00
Yauheni Kaliuta 950b01c9ff libbpf: Deprecate bpf_{map,program}__{prev,next} APIs since v0.7
Bugzilla: http://bugzilla.redhat.com/2069045

commit 2088a3a71d870115fdfb799c0f7de76d7383ba03
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Mon Oct 4 00:58:43 2021 +0800

    libbpf: Deprecate bpf_{map,program}__{prev,next} APIs since v0.7
    
    Deprecate bpf_{map,program}__{prev,next} APIs. Replace them with
    a new set of APIs named bpf_object__{prev,next}_{program,map} which
    follow the libbpf API naming convention ([0]). No functionality changes.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/296
    
    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Song Liu <songliubraving@fb.com>
    Link: https://lore.kernel.org/bpf/20211003165844.4054931-2-hengqi.chen@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:39 +03:00
Yauheni Kaliuta 0aaee4947e libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
Bugzilla: http://bugzilla.redhat.com/2069045

commit 18f4fccbf314fdb07d276f4cd3eaf53f1825550d
Author: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Date:   Sat Oct 2 06:47:56 2021 +0530

    libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
    
    This change updates the BPF syscall loader to relocate BTF_KIND_FUNC
    relocations, with support for weak kfunc relocations. The general idea
    is to move map_fds to loader map, and also use the data for storing
    kfunc BTF fds. Since both reuse the fd_array parameter, they need to be
    kept together.
    
    For map_fds, we reserve MAX_USED_MAPS slots in a region, and for kfunc,
    we reserve MAX_KFUNC_DESCS. This is done so that insn->off has a
    better chance of being <= INT16_MAX than if the data map were treated
    as a sparse array with fds added as needed.
    
    When the MAX_KFUNC_DESCS limit is reached, we fall back to the sparse
    array model, so that as long as it does remain <= INT16_MAX, we pass an
    index relative to the start of fd_array.
    
    We store all ksyms in an array where we try to avoid calling the
    bpf_btf_find_by_name_kind helper, and also reuse the BTF fd that was
    already stored. This also speeds up the loading process compared to
    emitting calls in all cases, in later tests.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211002011757.311265-9-memxor@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:38 +03:00
Yauheni Kaliuta a911ace4c4 libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0
Bugzilla: http://bugzilla.redhat.com/2069045

commit 466b2e13971ef65cd7b621ca3044be14028b002b
Author: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Date:   Sat Oct 2 06:47:55 2021 +0530

    libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0
    
    Preserve these calls, as this allows the verifier to succeed in
    loading the program if they are determined to be unreachable after
    dead code
    elimination during program load. If not, the verifier will fail at
    runtime. This is done for ext->is_weak symbols similar to the case for
    variable ksyms.
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211002011757.311265-8-memxor@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:38 +03:00
Yauheni Kaliuta c4ff36ae66 libbpf: Support kernel module function calls
Bugzilla: http://bugzilla.redhat.com/2069045

commit 9dbe6015636c19f929a7f7b742f27f303ff6069d
Author: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Date:   Sat Oct 2 06:47:54 2021 +0530

    libbpf: Support kernel module function calls
    
    This patch adds libbpf support for kernel module function call support.
    The fd_array parameter is used during BPF program load to pass module
    BTFs referenced by the program. insn->off is set to index into this
    array, but starts from 1, because insn->off as 0 is reserved for
    btf_vmlinux.
    
    We try to use existing insn->off for a module, since the kernel limits
    the maximum distinct module BTFs for kfuncs to 256, and also because
    index must never exceed the maximum allowed value that can fit in
    insn->off (INT16_MAX). In the future, if kernel interprets signed offset
    as unsigned for kfunc calls, this limit can be increased to UINT16_MAX.
    
    Also introduce a btf__find_by_name_kind_own helper to start searching
    from module BTF's start id when we know that the BTF ID is not present
    in vmlinux BTF (in find_ksym_btf_id).
    
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211002011757.311265-7-memxor@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:38 +03:00
Yauheni Kaliuta d5d98283b3 libbpf: Support uniform BTF-defined key/value specification across all BPF maps
Bugzilla: http://bugzilla.redhat.com/2069045

commit f731052325efc3726577feb743c7495f880ae07d
Author: Hengqi Chen <hengqi.chen@gmail.com>
Date:   Fri Oct 1 00:14:55 2021 +0800

    libbpf: Support uniform BTF-defined key/value specification across all BPF maps
    
    A bunch of BPF maps do not support specifying BTF types for key and
    value. This is non-uniform and inconvenient ([0]). Currently, libbpf
    uses retry logic that removes the BTF type IDs when BPF map creation
    fails. Instead of retrying, this commit recognizes those specialized
    maps and removes the BTF type IDs when creating the BPF map.
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/355
    
    Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20210930161456.3444544-2-hengqi.chen@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:23:38 +03:00
Yauheni Kaliuta 5599f7f76d selftests/bpf: Switch sk_lookup selftests to strict SEC("sk_lookup") use
Bugzilla: http://bugzilla.redhat.com/2069045

commit 7c80c87ad56a05ec56069c3f5d7e60b5b1eb19b4
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 28 09:19:46 2021 -0700

    selftests/bpf: Switch sk_lookup selftests to strict SEC("sk_lookup") use
    
    Update "sk_lookup/" definition to be a stand-alone type specifier,
    with backwards-compatible prefix match logic in non-libbpf-1.0 mode.
    
    Currently in selftests, all the "sk_lookup/<whatever>" uses employ
    <whatever> merely as a duplicated unique-name encoding, which is redundant
    because the BPF program's name (its C function name) already uniquely and
    descriptively identifies the intended use of such BPF programs.
    
    With libbpf's SEC_DEF("sk_lookup") definition updated, switch existing
    sk_lookup programs to use "unqualified" SEC("sk_lookup") section names,
    with no random text after it.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
    Link: https://lore.kernel.org/bpf/20210928161946.2512801-11-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:18 +03:00
Yauheni Kaliuta 2606e0ec11 libbpf: Add opt-in strict BPF program section name handling logic
Bugzilla: http://bugzilla.redhat.com/2069045

commit dd94d45cf0acb1d82748b17e1106b2c8b487b28b
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 28 09:19:45 2021 -0700

    libbpf: Add opt-in strict BPF program section name handling logic
    
    Implement strict ELF section name handling for BPF programs. It utilizes
    `libbpf_set_strict_mode()` framework and adds new flag: LIBBPF_STRICT_SEC_NAME.
    
    If this flag is set, libbpf will enforce exact section name matching for
    many program types that previously allowed just a partial prefix match.
    E.g., if previously SEC("xdp_whatever_i_want") was allowed, in strict mode
    only SEC("xdp") will be accepted, which makes SEC() definitions cleaner
    and more structured. SEC() will no longer serve as yet another way to
    uniquely encode a BPF program identifier (for that, the C function name is
    better and is guaranteed to be unique within a bpf_object). Now SEC()
    strictly specifies the BPF program type and, depending on program type,
    extra load/attach parameters.
    
    Libbpf completely supports multiple BPF programs in the same ELF
    section, so multiple BPF programs of the same type/specification easily
    co-exist together within the same bpf_object scope.
    
    Additionally, a new (for now internal) convention is introduced: a section
    name that can be a stand-alone exact BPF program type specifier, but can
    also carry extra parameters after a '/' delimiter. An example of such a
    section is "struct_ops", which can be specified by itself but also allows
    specifying the intended operation to attach to, e.g.,
    "struct_ops/dctcp_init". Note that "struct_ops_some_op" is not allowed.
    Such a section definition is specified as "struct_ops+".
    
    This change is part of libbpf 1.0 effort ([0], [1]).
    
      [0] Closes: https://github.com/libbpf/libbpf/issues/271
      [1] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#stricter-and-more-uniform-bpf-program-section-name-sec-handling
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
    Link: https://lore.kernel.org/bpf/20210928161946.2512801-10-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:17 +03:00
Yauheni Kaliuta 15bbb462cb libbpf: Complete SEC() table unification for BPF_APROG_SEC/BPF_EAPROG_SEC
Bugzilla: http://bugzilla.redhat.com/2069045

commit d41ea045a6e461673d1b2fad106b8cd04c3ba863
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 28 09:19:44 2021 -0700

    libbpf: Complete SEC() table unification for BPF_APROG_SEC/BPF_EAPROG_SEC
    
    Complete SEC() table refactoring towards unified form by rewriting
    BPF_APROG_SEC and BPF_EAPROG_SEC definitions with
    SEC_DEF(SEC_ATTACHABLE_OPT) (for optional expected_attach_type) and
    SEC_DEF(SEC_ATTACHABLE) (mandatory expected_attach_type), respectively.
    Drop BPF_APROG_SEC, BPF_EAPROG_SEC, and BPF_PROG_SEC_IMPL macros after
    that, leaving SEC_DEF() macro as the only one used.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
    Link: https://lore.kernel.org/bpf/20210928161946.2512801-9-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:17 +03:00
Yauheni Kaliuta 4f98a5b409 libbpf: Refactor ELF section handler definitions
Bugzilla: http://bugzilla.redhat.com/2069045

commit 15ea31fadd7f5b1076b4f91f75562bc319799c24
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 28 09:19:43 2021 -0700

    libbpf: Refactor ELF section handler definitions
    
    Refactor the ELF section handler definitions table to use a set of flags
    and a unified SEC_DEF() macro. This allows for a more succinct and
    table-like set of definitions, and makes it easier to extend the logic
    without adding more verbosity (this is utilized in later patches in the
    series).
    
    This approach also makes the libbpf-internal program pre-load callback
    not rely on the bpf_sec_def definition, which demonstrates that future
    pluggable ELF section handlers will be able to achieve a similar level of
    integration without libbpf having to expose extra types and APIs.
    
    For starters, update SEC_DEF() definitions and make them more succinct.
    Also convert BPF_PROG_SEC() and BPF_APROG_COMPAT() definitions to
    a common SEC_DEF() use.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
    Link: https://lore.kernel.org/bpf/20210928161946.2512801-8-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:17 +03:00
Yauheni Kaliuta faf53499b1 libbpf: Reduce reliance of attach_fns on sec_def internals
Bugzilla: http://bugzilla.redhat.com/2069045

commit 13d35a0cf1741431333ba4aa9bce9c5bbc88f63b
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 28 09:19:42 2021 -0700

    libbpf: Reduce reliance of attach_fns on sec_def internals
    
    Move closer to not relying on bpf_sec_def internals that won't be part of
    the public API once pluggable SEC() handlers are allowed. Drop the
    pre-calculated prefix length, and in various helpers don't rely on its
    availability. Also minimize reliance on knowing bpf_sec_def's prefix in
    the few places where section prefix shortcuts are supported (e.g., tp vs
    tracepoint, raw_tp vs raw_tracepoint).
    
    Given that checking a string for a string-constant prefix is such a common
    operation and so annoying to do in pure C, add a small macro helper,
    str_has_pfx(), and reuse it throughout libbpf.c wherever prefix comparison
    is performed. With __builtin_constant_p() it's possible to have a
    convenient helper that checks a string for a given prefix, where the
    prefix is either a string literal (or a compile-time-known string, thanks
    to compiler optimization) or just a runtime string pointer. This saves a
    lot of typing and string literal duplication.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
    Link: https://lore.kernel.org/bpf/20210928161946.2512801-7-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:17 +03:00
Yauheni Kaliuta 656c2bcbbb libbpf: Refactor internal sec_def handling to enable pluggability
Bugzilla: http://bugzilla.redhat.com/2069045

commit 12d9466d8bf3d1d4b4fd0f5733b6fa0cc5ee1013
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 28 09:19:41 2021 -0700

    libbpf: Refactor internal sec_def handling to enable pluggability
    
    Refactor internals of libbpf to allow adding custom SEC() handling logic
    easily from outside of libbpf. To that effect, each SEC()-handling
    registration sets mandatory program type/expected attach type for
    a given prefix and can provide three callbacks called at different
    points of BPF program lifetime:
    
      - an init callback, invoked right after bpf_program is initialized and
      prog_type/expected_attach_type is set. This happens during
      bpf_object__open() step, close to the very end of constructing
      bpf_object, so all the libbpf APIs for querying and updating
      bpf_program properties should be available;
    
      - a pre-load callback, called right before the BPF_PROG_LOAD command is
      issued to the kernel. This callback can set both bpf_program properties
      and program load attributes, overriding and augmenting the standard
      libbpf handling of them;
    
      - an optional auto-attach callback, which makes a given SEC() handler
      support auto-attachment of a BPF program through bpf_program__attach()
      API and/or BPF skeletons <skel>__attach() method.
    
    Each callback gets a `long cookie` parameter, specified during SEC()
    handling, which callbacks can use to look up whatever additional
    information they need.
    
    This is not yet completely ready to be exposed to the outside world,
    mainly due to non-public nature of struct bpf_prog_load_params. Instead
    of making it part of public API, we'll wait until the planned low-level
    libbpf API improvements for BPF_PROG_LOAD and other typical bpf()
    syscall APIs, at which point we'll have a public, probably OPTS-based,
    way to fully specify BPF program load parameters, which will be used as
    an interface for custom pre-load callbacks.
    
    But this change itself is already a good first step towards unifying the
    BPF program handling logic even within libbpf itself. As one example, all
    the extra per-program-type handling (sleepable bit, attach_btf_id
    resolution, unsetting the optional expected attach type) is now more
    obvious and gathered in one place.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Dave Marchevsky <davemarchevsky@fb.com>
    Link: https://lore.kernel.org/bpf/20210928161946.2512801-6-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:17 +03:00
Yauheni Kaliuta 157ea07cfd libbpf: Add "tc" SEC_DEF which is a better name for "classifier"
Bugzilla: http://bugzilla.redhat.com/2069045

commit 9673268f03ba72efcc00fa95f3fe3744fcae0dd0
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 28 09:19:37 2021 -0700

    libbpf: Add "tc" SEC_DEF which is a better name for "classifier"
    
    As argued in [0], add "tc" ELF section definition for SCHED_CLS BPF
    program type. "classifier" is a misleading terminology and should be
    migrated away from.
    
      [0] https://lore.kernel.org/bpf/270e27b1-e5be-5b1c-b343-51bd644d0747@iogearbox.net/
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20210928161946.2512801-2-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:17 +03:00
Yauheni Kaliuta 50bd4379bb libbpf: Add legacy uprobe attaching support
Bugzilla: http://bugzilla.redhat.com/2069045

commit cc10623c681019c608c0cb30e2b38994e2c90b2a
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 21 14:00:36 2021 -0700

    libbpf: Add legacy uprobe attaching support
    
    Similarly to the recently added legacy kprobe attach interface support
    through tracefs, support attaching uprobes using the legacy interface when
    the host kernel doesn't support the newer FD-based interface.
    
    For uprobes, the event name consists of a "libbpf_" prefix, PID, sanitized
    binary path, and offset within that binary. Structurally the code is
    aligned with the kprobe logic refactoring in the previous patch. struct
    bpf_link_perf is re-used, and the same legacy_probe_name and
    legacy_is_retprobe fields are used to ensure proper cleanup on
    bpf_link__destroy().
    
    Users should be aware, though, that on old kernels which don't support
    FD-based interface for kprobe/uprobe attachment, if the application
    crashes before bpf_link__destroy() is called, uprobe legacy
    events will be left in tracefs. This is the same limitation as with
    legacy kprobe interfaces.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20210921210036.1545557-5-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:15 +03:00
Yauheni Kaliuta 05497d46b9 libbpf: Refactor and simplify legacy kprobe code
Bugzilla: http://bugzilla.redhat.com/2069045

commit 46ed5fc33db966aa1a46e8ae9d96b08b756a2546
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 21 14:00:35 2021 -0700

    libbpf: Refactor and simplify legacy kprobe code
    
    Refactor legacy kprobe handling code to follow the same logic as uprobe
    legacy logic added in the next patches:
      - add append_to_file() helper that makes it simpler to work with
        tracefs file-based interface for creating and deleting probes;
      - move out probe/event name generation outside of the code that
        adds/removes it, which simplifies bookkeeping significantly;
      - change the probe name format to start with "libbpf_" prefix and
        include offset within kernel function;
      - switch 'unsigned long' to 'size_t' for specifying kprobe offsets,
        which is consistent with how uprobes define that, simplifies
        printf()-ing internally, and also avoids unnecessary complications on
        architectures where sizeof(long) != sizeof(void *).
    
    This patch also implicitly fixes the problem with invalid open() error
    handling present in poke_kprobe_events(), which (the function) this
    patch removes.
    
    Fixes: ca304b40c20d ("libbpf: Introduce legacy kprobe events support")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20210921210036.1545557-4-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:15 +03:00
Yauheni Kaliuta f2d0905e57 libbpf: Fix memory leak in legacy kprobe attach logic
Bugzilla: http://bugzilla.redhat.com/2069045

commit 303a257223a3bbd7cc6ccc2b7777179c8d9f3989
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Tue Sep 21 14:00:33 2021 -0700

    libbpf: Fix memory leak in legacy kprobe attach logic
    
    In some error scenarios legacy_probe string won't be free()'d. Fix this.
    This was reported by Coverity static analysis.
    
    Fixes: ca304b40c20d ("libbpf: Introduce legacy kprobe events support")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20210921210036.1545557-2-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:15 +03:00
Yauheni Kaliuta d68e43de90 libbpf: Constify all high-level program attach APIs
Bugzilla: http://bugzilla.redhat.com/2069045

commit 942025c9f37ee45e69eb5f39a2877afab66d9555
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Sep 15 18:58:36 2021 -0700

    libbpf: Constify all high-level program attach APIs
    
    Attach APIs shouldn't need to modify bpf_program/bpf_map structs, so
    change all struct bpf_program and struct bpf_map pointers to const
    pointers. This is completely backwards compatible with no functional
    change.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20210916015836.1248906-8-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:14 +03:00
Yauheni Kaliuta d19dc173e6 libbpf: Schedule open_opts.attach_prog_fd deprecation since v0.7
Bugzilla: http://bugzilla.redhat.com/2069045

commit 91b555d73e53879fc6d4cf82c8c0e14c00ce212d
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Sep 15 18:58:35 2021 -0700

    libbpf: Schedule open_opts.attach_prog_fd deprecation since v0.7
    
    bpf_object_open_opts.attach_prog_fd makes a pretty strong assumption that
    a bpf_object contains either only a single freplace BPF program, or that
    all BPF programs in the BPF object are freplaces intended to replace
    different subprograms of the same target BPF program. This is both
    confusing and limiting.
    
    We've had the bpf_program__set_attach_target() API, which allows more
    fine-grained control over this on a per-program level. As such, mark
    open_opts.attach_prog_fd as deprecated starting from v0.7, so that we have
    one universal way of setting freplace targets. With the previous change
    allowing a NULL attach_func_name argument, and especially combined with
    BPF skeletons, bpf_program__set_attach_target() is arguably the more
    convenient and explicit API as well.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20210916015836.1248906-7-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:14 +03:00
Yauheni Kaliuta d9aea07891 libbpf: Allow skipping attach_func_name in bpf_program__set_attach_target()
Bugzilla: http://bugzilla.redhat.com/2069045

commit 2d5ec1c66e25f0b4dd895a211e651a12dec2ef4f
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Sep 15 18:58:33 2021 -0700

    libbpf: Allow skipping attach_func_name in bpf_program__set_attach_target()
    
    Allow using bpf_program__set_attach_target() to set only the target
    attach program FD, while letting libbpf take the target attach function
    name from the SEC() definition. This might be useful in scenarios where
    a bpf_object contains multiple related freplace BPF programs intended to
    replace different sub-programs in the target BPF program. In such a case
    all programs will have the same attach_prog_fd, but different
    attach_func_name values. It's convenient to specify such target function
    names declaratively in SEC() definitions, but attach_prog_fd is a dynamic
    runtime setting.
    
    To simplify this scenario, allow bpf_program__set_attach_target() to
    delay BTF ID resolution until BPF program load time by accepting a NULL
    attach_func_name. In that case the behavior is similar to using
    bpf_object_open_opts.attach_prog_fd (which is marked deprecated since
    v0.7), but with the benefit of giving the user more control over what is
    attached to what. Such a setup allows having BPF programs attached to
    different target attach_prog_fds, with target functions still
    declaratively recorded in BPF source code in SEC() definitions.
    
    Selftests changes in the next patch should make this more obvious.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20210916015836.1248906-5-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:13 +03:00
Yauheni Kaliuta 866766e8c0 libbpf: Use pre-setup sec_def in libbpf_find_attach_btf_id()
Bugzilla: http://bugzilla.redhat.com/2069045

commit f11f86a3931b5d533aed1be1720fbd55bd63174d
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Sep 15 18:58:30 2021 -0700

    libbpf: Use pre-setup sec_def in libbpf_find_attach_btf_id()
    
    Don't perform another search for sec_def inside
    libbpf_find_attach_btf_id(), as each recognized bpf_program already has
    prog->sec_def set.
    
    Also remove unnecessary NULL check for prog->sec_name, as it can never
    be NULL.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Yonghong Song <yhs@fb.com>
    Link: https://lore.kernel.org/bpf/20210916015836.1248906-2-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:13 +03:00
Yauheni Kaliuta 0881d7268d libbpf: Add support for BTF_KIND_TAG
Bugzilla: http://bugzilla.redhat.com/2069045

commit 5b84bd10363e36ceb7c4c1ae749a3fc8adf8df45
Author: Yonghong Song <yhs@fb.com>
Date:   Tue Sep 14 15:30:25 2021 -0700

    libbpf: Add support for BTF_KIND_TAG
    
    Add BTF_KIND_TAG support for parsing and dedup. Also add sanitization for
    BTF_KIND_TAG: if BTF_KIND_TAG is not supported by the kernel, sanitize it
    to INTs.
    
    Signed-off-by: Yonghong Song <yhs@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20210914223025.246687-1-yhs@fb.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:12 +03:00
Yauheni Kaliuta 0d2781967f libbpf: Minimize explicit iterator of section definition array
Bugzilla: http://bugzilla.redhat.com/2069045

commit b6291a6f30d35bd4459dc35aac2f30669a4356ac
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Sep 13 18:47:33 2021 -0700

    libbpf: Minimize explicit iterator of section definition array
    
    Remove almost all the code that explicitly iterated BPF program section
    definitions in favor of using find_sec_def(). The only remaining user of
    section_defs is libbpf_get_type_names that has to iterate all of them to
    construct its result.
    
    Having one internal API entry point for section definitions will
    simplify further refactorings around libbpf's program section
    definitions parsing.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20210914014733.2768-5-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:12 +03:00
Yauheni Kaliuta 17c2ff6628 libbpf: Simplify BPF program auto-attach code
Bugzilla: http://bugzilla.redhat.com/2069045

commit 5532dfd42e4846e84d346a6dfe01e477e35baa65
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Sep 13 18:47:32 2021 -0700

    libbpf: Simplify BPF program auto-attach code
    
    Remove the need to explicitly pass bpf_sec_def for auto-attachable BPF
    programs, as it is already recorded at bpf_object__open() time for all
    recognized type of BPF programs. This further reduces number of explicit
    calls to find_sec_def(), simplifying further refactorings.
    
    No functional changes are done by this patch.
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20210914014733.2768-4-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:12 +03:00
Yauheni Kaliuta fc47d1bba1 libbpf: Ensure BPF prog types are set before relocations
Bugzilla: http://bugzilla.redhat.com/2069045

commit 91b4d1d1d54431c72f3a7ff034f30a635f787426
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Sep 13 18:47:31 2021 -0700

    libbpf: Ensure BPF prog types are set before relocations
    
    Refactor bpf_object__open() sequencing to perform BPF program type
    detection, based on SEC() definitions, before relocation collection. This
    makes more information about a BPF program available by the time we get
    to, say, struct_ops relocation gathering, which in turn simplifies the
    struct_ops logic and removes the need to perform extra find_sec_def()
    resolution.
    
    With this patch libbpf will require all struct_ops BPF programs to be
    marked with SEC("struct_ops") or SEC("struct_ops/xxx") annotations.
    Real-world applications are already doing that through something like
    selftests's BPF_STRUCT_OPS() macro. This change streamlines libbpf's
    internal handling of SEC() definitions and is in the spirit of
    upcoming libbpf-1.0 section strictness changes ([0]).
    
      [0] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#stricter-and-more-uniform-bpf-program-section-name-sec-handling
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Martin KaFai Lau <kafai@fb.com>
    Link: https://lore.kernel.org/bpf/20210914014733.2768-3-andrii@kernel.org

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:11 +03:00
Yauheni Kaliuta a9d01e49c1 libbpf: Introduce legacy kprobe events support
Bugzilla: http://bugzilla.redhat.com/2069045

commit ca304b40c20d5750f08200f0ad3445384646620c
Author: Rafael David Tinoco <rafaeldtinoco@gmail.com>
Date:   Sun Sep 12 03:48:44 2021 -0300

    libbpf: Introduce legacy kprobe events support
    
    Allow creating kprobe tracepoint events through the legacy interface, as
    kprobe dynamic PMU support, used by default, was only added in v4.17.
    
    Store legacy kprobe name in struct bpf_perf_link, instead of creating
    a new "subclass" off of bpf_perf_link. This is ok as it's just two new
    fields, which are also going to be reused for legacy uprobe support in
    follow up patches.
    
    Signed-off-by: Rafael David Tinoco <rafaeldtinoco@gmail.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20210912064844.3181742-1-rafaeldtinoco@gmail.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:16:11 +03:00
Yauheni Kaliuta 458518aed9 libbpf: Don't crash on object files with no symbol tables
Bugzilla: http://bugzilla.redhat.com/2069045

commit 03e601f48b2da6fb44d0f7b86957a8f6bacfb347
Author: Toke Høiland-Jørgensen <toke@redhat.com>
Date:   Wed Sep 1 13:48:12 2021 +0200

    libbpf: Don't crash on object files with no symbol tables
    
    If libbpf encounters an ELF file that has been stripped of its symbol
    table, it will crash in bpf_object__add_programs() when trying to
    dereference the obj->efile.symbols pointer.
    
    Fix this by erroring out of bpf_object__elf_collect() if it is not able
    to find the symbol table.
    
    v2:
      - Move check into bpf_object__elf_collect() and add nice error message
    
    Fixes: 6245947c1b ("libbpf: Allow gaps in BPF program sections to support overriden weak functions")
    Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20210901114812.204720-1-toke@redhat.com

Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com>
2022-06-03 17:15:38 +03:00
Jerome Marchand 0e4cfaa349 libbpf: Fix off-by-one bug in bpf_core_apply_relo()
Bugzilla: https://bugzilla.redhat.com/2041365

commit de5d0dcef602de39070c31c7e56c58249c56ba37
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Oct 25 15:45:28 2021 -0700

    libbpf: Fix off-by-one bug in bpf_core_apply_relo()

    Fix instruction index validity check which has off-by-one error.

    Fixes: 3ee4f5335511 ("libbpf: Split bpf_core_apply_relo() into bpf_program independent helper.")
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211025224531.1088894-2-andrii@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-04-29 18:17:15 +02:00
Jerome Marchand 062aee8319 libbpf: Fix segfault in light skeleton for objects without BTF
Bugzilla: http://bugzilla.redhat.com/2041365

commit 4729445b47efebf089da4ccbcd1b116ffa2ad4af
Author: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Date:   Thu Sep 30 11:46:34 2021 +0530

    libbpf: Fix segfault in light skeleton for objects without BTF

    When fed an empty BPF object, bpftool gen skeleton -L crashes in
    btf__set_fd() since it assumes the presence of obj->btf; however, for the
    sequence below, clang adds no .BTF section (hence no BTF).

    Reproducer:

      $ touch a.bpf.c
      $ clang -O2 -g -target bpf -c a.bpf.c
      $ bpftool gen skeleton -L a.bpf.o
      /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
      /* THIS FILE IS AUTOGENERATED! */

      struct a_bpf {
            struct bpf_loader_ctx ctx;
      Segmentation fault (core dumped)

    The same occurs for files compiled without BTF info, i.e. without
    clang's -g flag.

    Fixes: 6723474373 (libbpf: Generate loader program out of BPF ELF file.)
    Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20210930061634.1840768-1-memxor@gmail.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-04-29 18:17:13 +02:00
Jerome Marchand 61b9d2b2d0 libbpf: Add uprobe ref counter offset support for USDT semaphores
Bugzilla: http://bugzilla.redhat.com/2041365

commit 5e3b8356de3623987ace530b1977ffeb9ecf5a8a
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Sun Aug 15 00:06:08 2021 -0700

    libbpf: Add uprobe ref counter offset support for USDT semaphores

    When attaching to uprobes through the perf subsystem, it's possible to
    specify the offset of a so-called USDT semaphore, which is just a
    reference-counted u16 used by the kernel to keep track of how many tracers
    are attached to a given location. Support for this feature was added in
    [0], so just wire it through uprobe_opts. This is important for
    implementing USDT attachment and tracing through libbpf's
    bpf_program__attach_uprobe_opts() API.

      [0] a6ca88b241 ("trace_uprobe: support reference counter in fd-based uprobe")

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20210815070609.987780-16-andrii@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-04-29 18:14:42 +02:00
Jerome Marchand 1f6e531045 libbpf: Add bpf_cookie to perf_event, kprobe, uprobe, and tp attach APIs
Bugzilla: http://bugzilla.redhat.com/2041365

commit 47faff371755ba0f1ad76e2df7f5003377d974a5
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Sun Aug 15 00:06:04 2021 -0700

    libbpf: Add bpf_cookie to perf_event, kprobe, uprobe, and tp attach APIs

    Wire through bpf_cookie for all attach APIs that use perf_event_open under the
    hood:
      - for kprobes, extend existing bpf_kprobe_opts with bpf_cookie field;
      - for perf_event, uprobe, and tracepoint APIs, add their _opts variants and
        pass bpf_cookie through opts.

    For kernels that don't support BPF_LINK_CREATE for perf_events (and thus
    don't support bpf_cookie either), return an error and log a warning for
    the user.

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Link: https://lore.kernel.org/bpf/20210815070609.987780-12-andrii@kernel.org

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-04-29 18:14:41 +02:00