Commit Graph

292 Commits

Viktor Malik cbe464fbfc
fprobe: Pass return address to the handlers
JIRA: https://issues.redhat.com/browse/RHEL-64700

Conflicts: small change due to upstream commit
           2752741080f8 ("fprobe: add recursion detection in fprobe_exit_handler")
           previously backported out-of-order, aligning with upstream
           code to prevent future conflicts

commit cb16330d12741f6dae56aad5acf62f5be3a06c4e
Author: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Date:   Tue Jun 6 21:39:55 2023 +0900

    fprobe: Pass return address to the handlers

    Pass return address as 'ret_ip' to the fprobe entry and return handlers
    so that the fprobe user handler can get the return address without
    analyzing arch-dependent pt_regs.

    Link: https://lore.kernel.org/all/168507467664.913472.11642316698862778600.stgit@mhiramat.roam.corp.google.com/

    Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-10-25 09:07:32 +02:00
Prarit Bhargava 661ece2953 x86/kprobes: Fix arch_check_optimized_kprobe check within optimized_kprobe range
JIRA: https://issues.redhat.com/browse/RHEL-25415

Conflicts: Minor drift issues.

commit f1c97a1b4ef709e3f066f82e3ba3108c3b133ae6
Author: Yang Jihong <yangjihong1@huawei.com>
Date:   Tue Feb 21 08:49:16 2023 +0900

    x86/kprobes: Fix arch_check_optimized_kprobe check within optimized_kprobe range

    When arch_prepare_optimized_kprobe() calculates the jump destination
    address, it copies the original instructions from the jmp-optimized
    kprobe (see __recover_optprobed_insn()) and computes the destination
    based on the length of the original instructions.

    arch_check_optimized_kprobe() does not check KPROBE_FLAG_OPTIMIZED when
    checking whether a jmp-optimized kprobe exists.
    As a result, setup_detour_execution() may jump into a range that has been
    overwritten by the jump destination address, resulting in an invalid
    opcode error.

    For example, assume two kprobes are registered at <func+9> and
    <func+11> in the "func" function.
    The original code of the "func" function is as follows:

       0xffffffff816cb5e9 <+9>:     push   %r12
       0xffffffff816cb5eb <+11>:    xor    %r12d,%r12d
       0xffffffff816cb5ee <+14>:    test   %rdi,%rdi
       0xffffffff816cb5f1 <+17>:    setne  %r12b
       0xffffffff816cb5f5 <+21>:    push   %rbp

    1. Register the kprobe at <func+11>; call it kp1, with the corresponding optimized_kprobe op1.
      After the optimization, the "func" code changes to:

       0xffffffff816cc079 <+9>:     push   %r12
       0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
       0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
       0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
       0xffffffff816cc084 <+20>:    (bad)
       0xffffffff816cc085 <+21>:    push   %rbp

    Now op1->flags == KPROBE_FLAG_OPTIMIZED;

    2. Register the kprobe at <func+9>; call it kp2, with the corresponding optimized_kprobe op2.

    register_kprobe(kp2)
      register_aggr_kprobe
        alloc_aggr_kprobe
          __prepare_optimized_kprobe
            arch_prepare_optimized_kprobe
              __recover_optprobed_insn    // copy original bytes from kp1->optinsn.copied_insn,
                                          // jump address = <func+14>

    3. disable kp1:

    disable_kprobe(kp1)
      __disable_kprobe
        ...
        if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
          ret = disarm_kprobe(orig_p, true)       // add op1 in unoptimizing_list, not unoptimized
          orig_p->flags |= KPROBE_FLAG_DISABLED;  // op1->flags == KPROBE_FLAG_OPTIMIZED | KPROBE_FLAG_DISABLED
        ...

    4. unregister kp2
    __unregister_kprobe_top
      ...
      if (!kprobe_disabled(ap) && !kprobes_all_disarmed) {
        optimize_kprobe(op)
          ...
          if (arch_check_optimized_kprobe(op) < 0) // op1 has KPROBE_FLAG_DISABLED, so this does not return
            return;
          p->kp.flags |= KPROBE_FLAG_OPTIMIZED;   //  now op2 has KPROBE_FLAG_OPTIMIZED
      }

    "func" code now is:

       0xffffffff816cc079 <+9>:     int3
       0xffffffff816cc07a <+10>:    push   %rsp
       0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
       0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
       0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
       0xffffffff816cc084 <+20>:    (bad)
       0xffffffff816cc085 <+21>:    push   %rbp

    5. If "func" is called, the int3 handler calls setup_detour_execution():

      if (p->flags & KPROBE_FLAG_OPTIMIZED) {
        ...
        regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
        ...
      }

    The code at the jump destination address is:

       0xffffffffa021072c:  push   %r12
       0xffffffffa021072e:  xor    %r12d,%r12d
       0xffffffffa0210731:  jmp    0xffffffff816cb5ee <func+14>

    However, <func+14> is not a valid start instruction address. As a result, an error occurs.

    Link: https://lore.kernel.org/all/20230216034247.32348-3-yangjihong1@huawei.com/

    Fixes: f66c0447cc ("kprobes: Set unoptimized flag after unoptimizing code")
    Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
    Cc: stable@vger.kernel.org
    Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
2024-03-20 09:42:59 -04:00
Prarit Bhargava b785819126 x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
JIRA: https://issues.redhat.com/browse/RHEL-25415

Conflicts: Minor drift issues.

commit 868a6fc0ca2407622d2833adefe1c4d284766c4c
Author: Yang Jihong <yangjihong1@huawei.com>
Date:   Tue Feb 21 08:49:16 2023 +0900

    x86/kprobes: Fix __recover_optprobed_insn check optimizing logic

    Since the following commit:

      commit f66c0447cc ("kprobes: Set unoptimized flag after unoptimizing code")

    modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
    may be in the optimizing or unoptimizing state when op.kp->flags
    has KPROBE_FLAG_OPTIMIZED and op->list is not empty.

    The __recover_optprobed_insn() check logic is incorrect: a kprobe in the
    unoptimizing state may be incorrectly treated as unoptimized.
    As a result, incorrect instructions are copied.

    The optprobe_queued_unopt() function needs to be exported so it can be
    invoked from the arch directory.

    Link: https://lore.kernel.org/all/20230216034247.32348-2-yangjihong1@huawei.com/

    Fixes: f66c0447cc ("kprobes: Set unoptimized flag after unoptimizing code")
    Cc: stable@vger.kernel.org
    Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
    Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
2024-03-20 09:42:58 -04:00
Artem Savkov fbe85326ce kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
Bugzilla: https://bugzilla.redhat.com/2166911

commit bf7a87f1075f67c286f794519f0fedfa8b0b18cc
Author: Jiri Olsa <jolsa@kernel.org>
Date:   Mon Sep 26 17:33:35 2022 +0200

    kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
    
    Add a new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that the
    attach address is on function entry. This is used in the following
    changes to the get_func_ip helper to return the correct function address.
    
    Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
    Signed-off-by: Jiri Olsa <jolsa@kernel.org>
    Link: https://lore.kernel.org/r/20220926153340.1621984-2-jolsa@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-03-06 14:54:18 +01:00
Frantisek Hrbata 1269719102 Merge: BPF and XDP rebase to v5.18
Merge conflicts:
-----------------
arch/x86/net/bpf_jit_comp.c
        - bpf_arch_text_poke()
          HEAD(!1464) contains b73b002f7f ("x86/ibt,bpf: Add ENDBR instructions to prologue and trampoline")
          Resolved in favour of !1464, but keep the return statement from !1477

MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/1477

Bugzilla: https://bugzilla.redhat.com/2120966

Rebase BPF and XDP to the upstream kernel version 5.18

Patch applied, then reverted:
```
544356 selftests/bpf: switch to new libbpf XDP APIs
0bfb95 selftests, bpf: Do not yet switch to new libbpf XDP APIs
```
Taken in the perf rebase:
```
23fcfc perf: use generic bpf_program__set_type() to set BPF prog type
```
Unsupported arches:
```
5c1011 libbpf: Fix riscv register names
cf0b5b libbpf: Fix accessing syscall arguments on riscv
```
Depends on changes of other subsystems:
```
7fc8c3 s390/bpf: encode register within extable entry
aebfd1 x86/ibt,ftrace: Search for __fentry__ location
589127 x86/ibt,bpf: Add ENDBR instructions to prologue and trampoline
```
Broken selftest:
```
edae34 selftests net: add UDP GRO fraglist + bpf self-tests
cf6783 selftests net: fix bpf build error
7b92aa selftests net: fix kselftest net fatal error
```
Out of scope:
```
baebdf net: dev: Makes sure netif_rx() can be invoked in any context.
5c8166 kbuild: replace $(if A,A,B) with $(or A,B)
1a97ce perf maps: Use a pointer for kmaps
967747 uaccess: remove CONFIG_SET_FS
42b01a s390: always use the packed stack layout
bf0882 flow_dissector: Add support for HSR
d09a30 s390/extable: move EX_TABLE define to asm-extable.h
3d6671 s390/extable: convert to relative table with data
4efd41 s390: raise minimum supported machine generation to z10
f65e58 flow_dissector: Add support for HSRv0
1a6d7a netdevsim: Introduce support for L3 offload xstats
9b1894 selftests: netdevsim: hw_stats_l3: Add a new test
84005b perf ftrace latency: Add -n/--use-nsec option
36c4a7 kasan, arm64: don't tag executable vmalloc allocations
8df013 docs: netdev: move the netdev-FAQ to the process pages
4d4d00 perf tools: Update copy of libbpf's hashmap.c
0df6ad perf evlist: Rename cpus to user_requested_cpus
1b8089 flow_dissector: fix false-positive __read_overflow2_field() warning
0ae065 perf build: Fix check for btf__load_from_kernel_by_id() in libbpf
8994e9 perf test bpf: Skip test if clang is not present
735346 perf build: Fix btf__load_from_kernel_by_id() feature check
f037ac s390/stack: merge empty stack frame slots
335220 docs: netdev: update maintainer-netdev.rst reference
a0b098 s390/nospec: remove unneeded header includes
34513a netdevsim: Fix hwstats debugfs file permissions
```

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>

Approved-by: John W. Linville <linville@redhat.com>
Approved-by: Wander Lairson Costa <wander@redhat.com>
Approved-by: Torez Smith <torez@redhat.com>
Approved-by: Jan Stancek <jstancek@redhat.com>
Approved-by: Prarit Bhargava <prarit@redhat.com>
Approved-by: Felix Maurer <fmaurer@redhat.com>
Approved-by: Viktor Malik <vmalik@redhat.com>

Signed-off-by: Frantisek Hrbata <fhrbata@redhat.com>
2022-11-21 05:30:47 -05:00
Joe Lawrence 9d17e344af x86/ibt,kprobes: Cure sym+0 equals fentry woes
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2121207

commit cc66bb91457827f62e2b6cb2518666820f0a6c48
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Mar 8 16:30:32 2022 +0100

    x86/ibt,kprobes: Cure sym+0 equals fentry woes

    In order to allow kprobes to skip the ENDBR instructions at sym+0 for
    X86_KERNEL_IBT builds, change _kprobe_addr() to take an architecture
    callback to inspect the function at hand and modify the offset if
    needed.

    This streamlines the existing interface to cover more cases and
    require fewer hooks. Once PowerPC gets fully converted, there will
    only be the one arch hook.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Link: https://lore.kernel.org/r/20220308154318.405947704@infradead.org

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-10-27 14:27:57 -04:00
Joe Lawrence 80789bdd0d x86/ibt,ftrace: Search for __fentry__ location
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2121207

commit aebfd12521d9c7d0b502cf6d06314cfbcdccfe3b
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Mar 8 16:30:29 2022 +0100

    x86/ibt,ftrace: Search for __fentry__ location

    Currently a lot of ftrace code assumes __fentry__ is at sym+0. However
    with Intel IBT enabled the first instruction of a function will most
    likely be ENDBR.

    Change ftrace_location() to not only return the __fentry__ location
    when called for the __fentry__ location, but also when called for the
    sym+0 location.

    Then audit/update all callsites of this function to consistently use
    these new semantics.

    Suggested-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Link: https://lore.kernel.org/r/20220308154318.227581603@infradead.org

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-10-27 14:27:56 -04:00
Jerome Marchand 7c43ef896d kprobes: Fix KRETPROBES when CONFIG_KRETPROBE_ON_RETHOOK is set
Bugzilla: https://bugzilla.redhat.com/2120966

commit 1d661ed54d8613c97bcff2c7d6181c61e482a1da
Author: Adam Zabrocki <pi3@pi3.com.pl>
Date:   Fri Apr 22 18:40:27 2022 +0200

    kprobes: Fix KRETPROBES when CONFIG_KRETPROBE_ON_RETHOOK is set

    The recent kernel change in 73f9b911faa7 ("kprobes: Use rethook for kretprobe
    if possible") introduced a potential NULL pointer dereference bug in the
    KRETPROBE mechanism. The official Kprobes documentation states that "Any or
    all handlers can be NULL". Unfortunately, the return handler verification
    needed to fulfill this requirement is missing, which can result in a NULL
    pointer dereference bug.

    This patch adds such verification in kretprobe_rethook_handler() function.

    Fixes: 73f9b911faa7 ("kprobes: Use rethook for kretprobe if possible")
    Signed-off-by: Adam Zabrocki <pi3@pi3.com.pl>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
    Cc: Anil S. Keshavamurthy <anil.s.keshavamurthy@intel.com>
    Link: https://lore.kernel.org/bpf/20220422164027.GA7862@pi3.com.pl

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:58:09 +02:00
Jerome Marchand f6d3c6107d kprobes: Use rethook for kretprobe if possible
Bugzilla: https://bugzilla.redhat.com/2120966

Conflicts:
Context change from missing commits 223a76b268c9 ("kprobes: Fix coding
style issues") and cc66bb914578 ("x86/ibt,kprobes: Cure sym+0 equals
fentry woes")

commit 73f9b911faa74ac5107879de05c9489c419f41bb
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Sat Mar 26 11:27:05 2022 +0900

    kprobes: Use rethook for kretprobe if possible

    Use rethook for kretprobe function return hooking if the arch sets
    CONFIG_HAVE_RETHOOK=y. In this case, CONFIG_KRETPROBE_ON_RETHOOK is
    set to 'y' automatically, and the kretprobe internal data fields
    switch to using rethook. If not, it continues to use the
    kretprobe-specific function return hooks.

    Suggested-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/164826162556.2455864.12255833167233452047.stgit@devnote2

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:58:08 +02:00
Jerome Marchand 5218290114 kprobes: Limit max data_size of the kretprobe instances
Bugzilla: https://bugzilla.redhat.com/2120966

commit 6bbfa44116689469267f1a6e3d233b52114139d2
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Wed Dec 1 23:45:50 2021 +0900

    kprobes: Limit max data_size of the kretprobe instances

    The 'kretprobe::data_size' is unsigned, thus it cannot be negative.  But if
    the user sets a big enough number (e.g. (size_t)-8), the result of
    'data_size + sizeof(struct kretprobe_instance)' becomes smaller than
    sizeof(struct kretprobe_instance) or zero. As a result, the
    kretprobe_instance is allocated without enough memory, and kretprobe
    accesses outside of the allocated memory.

    To avoid this issue, introduce a maximum limit on
    kretprobe::data_size. 4KB per instance should be OK.

    Link: https://lkml.kernel.org/r/163836995040.432120.10322772773821182925.stgit@devnote2

    Cc: stable@vger.kernel.org
    Fixes: f47cd9b553 ("kprobes: kretprobe user entry-handler")
    Reported-by: zhangyue <zhangyue1@kylinos.cn>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:58:08 +02:00
Joe Lawrence 1a316ebf26 kprobes: convert tests to kunit
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit e44e81c5b90f698025eadceb7eef8661eda117d5
Author: Sven Schnelle <svens@linux.ibm.com>
Date:   Thu Oct 21 09:54:24 2021 +0900

    kprobes: convert tests to kunit

    This converts the kprobes testcases to use the kunit framework.
    It adds a dependency on CONFIG_KUNIT, and the output will change
    to TAP:

    TAP version 14
    1..1
        # Subtest: kprobes_test
        1..4
    random: crng init done
        ok 1 - test_kprobe
        ok 2 - test_kprobes
        ok 3 - test_kretprobe
        ok 4 - test_kretprobes
    ok 1 - kprobes_test

    Note that the kprobes testcases are no longer run immediately after
    kprobes initialization, but as a late initcall when kunit is
    initialized. kprobes itself is initialized with an early initcall,
    so the order is still correct.

    Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:16 -04:00
Joe Lawrence 8e079b3aaa x86/kprobes: Fixup return address in generic trampoline handler
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit bf094cffea2a6503ce84062f9f0243bef77c58f9
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:42:51 2021 +0900

    x86/kprobes: Fixup return address in generic trampoline handler

    In x86, the fake return address on the stack saved by
    __kretprobe_trampoline() will be replaced with the real return
    address after returning from trampoline_handler(). Before fixing
    the return address, the real return address can be found in the
    'current->kretprobe_instances'.

    However, since there is a window between updating the
    'current->kretprobe_instances' and fixing the address on the stack,
    if an interrupt happens at that timing and the interrupt handler
    does stacktrace, it may fail to unwind because it can not get
    the correct return address from 'current->kretprobe_instances'.

    This patch eliminates that window by fixing the return address
    right before updating 'current->kretprobe_instances'.

    Link: https://lkml.kernel.org/r/163163057094.489837.9044470370440745866.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:15 -04:00
Joe Lawrence 3cad09c558 kprobes: Enable stacktrace from pt_regs in kretprobe handler
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit df91c5bccb0c2cb868b54bd68a6ddf1fcbede6b1
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:42:12 2021 +0900

    kprobes: Enable stacktrace from pt_regs in kretprobe handler

    Since the ORC unwinder from pt_regs requires setting up regs->ip
    correctly, set the correct return address to the regs->ip before
    calling user kretprobe handler.

    This allows the kretprobe handler to trace the stack from the
    kretprobe's pt_regs by stack_trace_save_regs() (eBPF will do
    this), instead of stack tracing from the handler context by
    stack_trace_save() (ftrace will do this).

    Link: https://lkml.kernel.org/r/163163053237.489837.4272653874525136832.stgit@devnote2

    Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:15 -04:00
Joe Lawrence 0d70f1d483 kprobes: Add kretprobe_find_ret_addr() for searching return address
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 03bac0df2886882c43e6d0bfff9dee84a184fc7e
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:41:04 2021 +0900

    kprobes: Add kretprobe_find_ret_addr() for searching return address

    Introduce kretprobe_find_ret_addr() and is_kretprobe_trampoline().
    These APIs will be used by the ORC stack unwinder and ftrace, so that
    they can check whether the given address points to kretprobe trampoline
    code and query the correct return address in that case.

    Link: https://lkml.kernel.org/r/163163046461.489837.1044778356430293962.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:14 -04:00
Joe Lawrence 5edc2a2c73 kprobes: treewide: Remove trampoline_address from kretprobe_trampoline_handler()
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 96fed8ac2bb64ab45497fdd8e3d390165b7a9be8
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:40:45 2021 +0900

    kprobes: treewide: Remove trampoline_address from kretprobe_trampoline_handler()

    The __kretprobe_trampoline_handler() callback, called from low level
    arch kprobes methods, has the 'trampoline_address' parameter, which is
    entirely superfluous as it basically just replicates:

      dereference_kernel_function_descriptor(kretprobe_trampoline)

    In fact we had bugs in arch code where it wasn't replicated correctly.

    So remove this superfluous parameter and use kretprobe_trampoline_addr()
    instead.

    Link: https://lkml.kernel.org/r/163163044546.489837.13505751885476015002.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:14 -04:00
Joe Lawrence bbb042cc66 kprobes: treewide: Replace arch_deref_entry_point() with dereference_symbol_descriptor()
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit f2ec8d9a3b8c0f22cd6a2b4f5a2d9aee5206e3b7
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:40:36 2021 +0900

    kprobes: treewide: Replace arch_deref_entry_point() with dereference_symbol_descriptor()

    ~15 years ago kprobes grew the 'arch_deref_entry_point()' __weak function:

      3d7e33825d87: ("jprobes: make jprobes a little safer for users")

    But this is just open-coded dereference_symbol_descriptor() in essence, and
    its obscure nature was causing bugs.

    Just use the real thing and remove arch_deref_entry_point().

    Link: https://lkml.kernel.org/r/163163043630.489837.7924988885652708696.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Tested-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:14 -04:00
Joe Lawrence 4815caf4f0 kprobes: Use bool type for functions which returns boolean value
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 29e8077ae2beea6a85ad2d0bae9c550bd5d05ed9
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:40:16 2021 +0900

    kprobes: Use bool type for functions which returns boolean value

    Use the 'bool' type instead of 'int' for functions which return a
    boolean value, because this makes it clear that those functions
    don't return any error code.

    Link: https://lkml.kernel.org/r/163163041649.489837.17311187321419747536.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:13 -04:00
Joe Lawrence c0f53a21f0 kprobes: treewide: Use 'kprobe_opcode_t *' for the code address in get_optimized_kprobe()
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit c42421e205fc2570a4d019184ea7d6c382c93f4c
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:40:07 2021 +0900

    kprobes: treewide: Use 'kprobe_opcode_t *' for the code address in get_optimized_kprobe()

    Since get_optimized_kprobe() is only used inside kprobes,
    it doesn't need to use the 'unsigned long' type for the 'addr'
    parameter. Make it use 'kprobe_opcode_t *' for the 'addr' parameter,
    and make the subsequent call of arch_within_optimized_kprobe() also
    use 'kprobe_opcode_t *'.

    Note that MAX_OPTIMIZED_LENGTH and RELATIVEJUMP_SIZE are defined
    by byte-size, but the size of 'kprobe_opcode_t' depends on the
    architecture. Therefore, we must be careful when calculating
    addresses using those macros.

    Link: https://lkml.kernel.org/r/163163040680.489837.12133032364499833736.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:13 -04:00
Joe Lawrence 2b05036e19 kprobes: Add assertions for required lock
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 57d4e31780106ad97516bfd197fac47a81482353
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:39:55 2021 +0900

    kprobes: Add assertions for required lock

    Add assertions for required locks instead of commenting on them,
    so that lockdep can inspect locks automatically.

    Link: https://lkml.kernel.org/r/163163039572.489837.18011973177537476885.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:50:13 -04:00
Joe Lawrence e10a4e48cb kprobes: Fix coding style issues
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

Conflicts:
	- kernel/kprobes.c: rhel9 already has 670721c7bd2a ("sched: Move
	  kprobes cleanup out of finish_task_switch()"), so leave the
	  comment preceding kprobe_flush_task() as is.

commit 223a76b268c9cfa265d454879ae09e2c9c808f87
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:39:34 2021 +0900

    kprobes: Fix coding style issues

    Fix coding style issues reported by checkpatch.pl and update
    comments to quote variable names and add "()" to function
    names. One TODO comment in __disarm_kprobe() is removed because
    it has been addressed by a following commit.

    Link: https://lkml.kernel.org/r/163163037468.489837.4282347782492003960.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:49:18 -04:00
Joe Lawrence 8fbf6756ae kprobes: treewide: Cleanup the error messages for kprobes
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 9c89bb8e327203bc27e09ebd82d8f61ac2ae8b24
Author: Masami Hiramatsu <mhiramat@kernel.org>
Date:   Tue Sep 14 23:39:25 2021 +0900

    kprobes: treewide: Cleanup the error messages for kprobes

    This cleans up the error/notification messages in kprobes-related
    code. Basically this defines a 'pr_fmt()' macro for each file and
    updates the messages to describe

     - what happened,
     - what is the kernel going to do or not do,
     - is the kernel fine,
     - what can the user do about it.

    Also, if a message is not needed (e.g. the function returns a unique
    error code, or another error message is already shown), remove it,
    and replace the message with WARN_*() macros where suitable.

    Link: https://lkml.kernel.org/r/163163036568.489837.14085396178727185469.stgit@devnote2

    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:33:34 -04:00
Joe Lawrence 40bf9807b8 kprobes: Make arch_check_ftrace_location static
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 4402deae8993fb0e25a19bb999b38df13e25a7e0
Author: Punit Agrawal <punitagrawal@gmail.com>
Date:   Tue Sep 14 23:39:16 2021 +0900

    kprobes: Make arch_check_ftrace_location static

    arch_check_ftrace_location() was introduced as a weak function in
    commit f7f242ff00 ("kprobes: introduce weak
    arch_check_ftrace_location() helper function") to allow architectures
    to handle kprobes call site on their own.

    Recently, the only architecture (csky) to implement
    arch_check_ftrace_location() was migrated to using the common
    version.

    As a result, further clean up the code by dropping the weak attribute
    and renaming the function, removing the architecture-specific
    implementation.

    Link: https://lkml.kernel.org/r/163163035673.489837.2367816318195254104.stgit@devnote2

    Signed-off-by: Punit Agrawal <punitagrawal@gmail.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:33:34 -04:00
Joe Lawrence 1417793290 kprobe: Simplify prepare_kprobe() by dropping redundant version
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 02afb8d6048d6526619e6e2dcdc95ce9c2bdb52f
Author: Punit Agrawal <punitagrawal@gmail.com>
Date:   Tue Sep 14 23:38:57 2021 +0900

    kprobe: Simplify prepare_kprobe() by dropping redundant version

    The function prepare_kprobe() is called during kprobe registration and
    is responsible for ensuring that any architecture-related preparation
    for the kprobe is done before returning.

    One of two versions of prepare_kprobe() is chosen depending on the
    availability of KPROBE_ON_FTRACE in the kernel configuration.

    Simplify the code by dropping the version when KPROBE_ON_FTRACE is not
    selected - instead relying on kprobe_ftrace() to return false when
    KPROBE_ON_FTRACE is not set.

    No functional change.

    Link: https://lkml.kernel.org/r/163163033696.489837.9264661820279300788.stgit@devnote2

    Signed-off-by: Punit Agrawal <punitagrawal@gmail.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:33:34 -04:00
Joe Lawrence 8fc45329b2 kprobes: Use helper to parse boolean input from userspace
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 5d6de7d7fb4b0f752adff80ca003b4fd4b467b64
Author: Punit Agrawal <punitagrawal@gmail.com>
Date:   Tue Sep 14 23:38:46 2021 +0900

    kprobes: Use helper to parse boolean input from userspace

The "enabled" file provides a debugfs interface to arm / disarm
kprobes in the kernel. To handle the values written from userspace,
the callback manually parses the write buffer and converts the user
input to a boolean value.

    As taking a string value from userspace and converting it to boolean
    is a common operation, a helper kstrtobool_from_user() is already
    available in the kernel. Update the callback to use the common helper
    to parse the write buffer from userspace.
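For illustration, here is a rough userspace approximation of the boolean-parsing semantics that kstrtobool_from_user() centralizes (only the first one or two characters decide the result); the real kernel helper also copies the buffer from userspace, which this sketch omits, and the function name here is invented.

```c
#include <stdbool.h>
#include <stddef.h>

/* Userspace approximation of kstrtobool(): 1/y/Y -> true,
 * 0/n/N -> false, "on"/"off" decided by the second character. */
static int parse_bool(const char *s, bool *res)
{
    if (s == NULL)
        return -1;
    switch (s[0]) {
    case 'y': case 'Y': case '1':
        *res = true;
        return 0;
    case 'n': case 'N': case '0':
        *res = false;
        return 0;
    case 'o': case 'O':
        switch (s[1]) {
        case 'n': case 'N':
            *res = true;
            return 0;
        case 'f': case 'F':
            *res = false;
            return 0;
        }
        break;
    }
    return -1;   /* the kernel returns -EINVAL here */
}
```

Centralizing this in one helper is the point of the patch: every debugfs "write a boolean" callback gets identical accepted spellings and error behavior.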

    Link: https://lkml.kernel.org/r/163163032637.489837.10678039554832855327.stgit@devnote2

    Signed-off-by: Punit Agrawal <punitagrawal@gmail.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:33:34 -04:00
Joe Lawrence 0d4d04dbe6 kprobes: Do not use local variable when creating debugfs file
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069373

commit 8f7262cd66699a4b02eb7549b35c81b2116aad95
Author: Punit Agrawal <punitagrawal@gmail.com>
Date:   Tue Sep 14 23:38:37 2021 +0900

    kprobes: Do not use local variable when creating debugfs file

    debugfs_create_file() takes a pointer argument that can be used during
    file operation callbacks (accessible via i_private in the inode
    structure). An obvious requirement is for the pointer to refer to
    valid memory when used.

When creating the debugfs file to dynamically enable / disable
kprobes, a pointer to a local variable is passed to
debugfs_create_file(), and it goes out of scope when the init
function returns. The reason this hasn't triggered random memory
corruption is that the pointer is not accessed during the debugfs
file callbacks.

Since the enabled state is managed by the kprobes_all_disabled global
variable, the local variable is not needed. Fix the incorrect (and
unnecessary) use of the local variable in debugfs_create_file() by
passing NULL instead.
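The bug pattern is easy to reproduce outside the kernel: a registration API that stores its data pointer for later callbacks must never be handed the address of a stack local. The sketch below uses made-up names to contrast the buggy and fixed variants.

```c
#include <stddef.h>

/* Minimal stand-in for a debugfs-style registration: the 'data'
 * pointer is stored and handed back to callbacks later. */
static const void *stored_data;

static void create_file(const void *data)
{
    stored_data = data;   /* must stay valid for the file's lifetime */
}

/* Buggy pattern: passes the address of a local that dies on return,
 * leaving stored_data dangling. */
static void register_buggy(void)
{
    int value = 1;
    create_file(&value);
}

/* Fixed pattern: the real state lives in a global, so pass NULL and
 * let the callbacks consult the global directly. */
static void register_fixed(void)
{
    create_file(NULL);
}
```

As in the commit, the fix works because no callback ever needed the per-file data pointer in the first place.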

    Link: https://lkml.kernel.org/r/163163031686.489837.4476867635937014973.stgit@devnote2

    Fixes: bf8f6e5b3e ("Kprobes: The ON/OFF knob thru debugfs")
    Signed-off-by: Punit Agrawal <punitagrawal@gmail.com>
    Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-04-06 21:33:34 -04:00
Phil Auld 395c062ef5 sched: Move kprobes cleanup out of finish_task_switch()
Bugzilla: http://bugzilla.redhat.com/2020279

commit 670721c7bd2a6e16e40db29b2707a27bdecd6928
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Tue Sep 28 14:24:28 2021 +0200

    sched: Move kprobes cleanup out of finish_task_switch()

Doing cleanups in the tail of schedule() is a latency punishment for the
incoming task. The point of invoking kprobe_flush_task() for a dead task
is that the instances are returned and cannot leak when __schedule() is
kprobed.

    Move it into the delayed cleanup.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20210928122411.537994026@linutronix.de

Signed-off-by: Phil Auld <pauld@redhat.com>
2021-12-13 16:07:49 -05:00
Linus Torvalds 301c8b1d7c Locking fixes:
- Fix a Sparc crash
  - Fix a number of objtool warnings
  - Fix /proc/lockdep output on certain configs
  - Restore a kprobes fail-safe
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'locking-urgent-2021-07-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking fixes from Ingo Molnar:

 - Fix a Sparc crash

 - Fix a number of objtool warnings

 - Fix /proc/lockdep output on certain configs

 - Restore a kprobes fail-safe

* tag 'locking-urgent-2021-07-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/atomic: sparc: Fix arch_cmpxchg64_local()
  kprobe/static_call: Restore missing static_call_text_reserved()
  static_call: Fix static_call_text_reserved() vs __init
  jump_label: Fix jump_label_text_reserved() vs __init
  locking/lockdep: Fix meaningless /proc/lockdep output of lock classes on !CONFIG_PROVE_LOCKING
2021-07-11 11:06:09 -07:00
Peter Zijlstra fa68bd09fc kprobe/static_call: Restore missing static_call_text_reserved()
Restore two hunks from commit:

  6333e8f73b ("static_call: Avoid kprobes on inline static_call()s")

that went walkabout in a Git merge commit.

Fixes: 76d4acf22b ("Merge tag 'perf-kprobes-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/r/20210628113045.167127609@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-07-05 10:47:16 +02:00
Linus Torvalds 019b3fd94b powerpc updates for 5.14
- A big series refactoring parts of our KVM code, and converting some to C.
 
  - Support for ARCH_HAS_SET_MEMORY, and ARCH_HAS_STRICT_MODULE_RWX on some CPUs.
 
  - Support for the Microwatt soft-core.
 
  - Optimisations to our interrupt return path on 64-bit.
 
  - Support for userspace access to the NX GZIP accelerator on PowerVM on Power10.
 
  - Enable KUAP and KUEP by default on 32-bit Book3S CPUs.
 
  - Other smaller features, fixes & cleanups.
 
 Thanks to: Andy Shevchenko, Aneesh Kumar K.V, Arnd Bergmann, Athira Rajeev, Baokun Li,
 Benjamin Herrenschmidt, Bharata B Rao, Christophe Leroy, Daniel Axtens, Daniel Henrique
 Barboza, Finn Thain, Geoff Levand, Haren Myneni, Jason Wang, Jiapeng Chong, Joel Stanley,
 Jordan Niethe, Kajol Jain, Nathan Chancellor, Nathan Lynch, Naveen N. Rao, Nicholas
 Piggin, Nick Desaulniers, Paul Mackerras, Russell Currey, Sathvika Vasireddy, Shaokun
 Zhang, Stephen Rothwell, Sudeep Holla, Suraj Jitindar Singh, Tom Rix, Vaibhav Jain,
 YueHaibing, Zhang Jianhua, Zhen Lei.

Merge tag 'powerpc-5.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

 - A big series refactoring parts of our KVM code, and converting some
   to C.

 - Support for ARCH_HAS_SET_MEMORY, and ARCH_HAS_STRICT_MODULE_RWX on
   some CPUs.

 - Support for the Microwatt soft-core.

 - Optimisations to our interrupt return path on 64-bit.

 - Support for userspace access to the NX GZIP accelerator on PowerVM on
   Power10.

 - Enable KUAP and KUEP by default on 32-bit Book3S CPUs.

 - Other smaller features, fixes & cleanups.

Thanks to: Andy Shevchenko, Aneesh Kumar K.V, Arnd Bergmann, Athira
Rajeev, Baokun Li, Benjamin Herrenschmidt, Bharata B Rao, Christophe
Leroy, Daniel Axtens, Daniel Henrique Barboza, Finn Thain, Geoff Levand,
Haren Myneni, Jason Wang, Jiapeng Chong, Joel Stanley, Jordan Niethe,
Kajol Jain, Nathan Chancellor, Nathan Lynch, Naveen N. Rao, Nicholas
Piggin, Nick Desaulniers, Paul Mackerras, Russell Currey, Sathvika
Vasireddy, Shaokun Zhang, Stephen Rothwell, Sudeep Holla, Suraj Jitindar
Singh, Tom Rix, Vaibhav Jain, YueHaibing, Zhang Jianhua, and Zhen Lei.

* tag 'powerpc-5.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (218 commits)
  powerpc: Only build restart_table.c for 64s
  powerpc/64s: move ret_from_fork etc above __end_soft_masked
  powerpc/64s/interrupt: clean up interrupt return labels
  powerpc/64/interrupt: add missing kprobe annotations on interrupt exit symbols
  powerpc/64: enable MSR[EE] in irq replay pt_regs
  powerpc/64s/interrupt: preserve regs->softe for NMI interrupts
  powerpc/64s: add a table of implicit soft-masked addresses
  powerpc/64e: remove implicit soft-masking and interrupt exit restart logic
  powerpc/64e: fix CONFIG_RELOCATABLE build warnings
  powerpc/64s: fix hash page fault interrupt handler
  powerpc/4xx: Fix setup_kuep() on SMP
  powerpc/32s: Fix setup_{kuap/kuep}() on SMP
  powerpc/interrupt: Use names in check_return_regs_valid()
  powerpc/interrupt: Also use exit_must_hard_disable() on PPC32
  powerpc/sysfs: Replace sizeof(arr)/sizeof(arr[0]) with ARRAY_SIZE
  powerpc/ptrace: Refactor regs_set_return_{msr/ip}
  powerpc/ptrace: Move set_return_regs_changed() before regs_set_return_{msr/ip}
  powerpc/stacktrace: Fix spurious "stale" traces in raise_backtrace_ipi()
  powerpc/pseries/vas: Include irqdomain.h
  powerpc: mark local variables around longjmp as volatile
  ...
2021-07-02 12:54:34 -07:00
Linus Torvalds 71bd934101 Merge branch 'akpm' (patches from Andrew)
Merge more updates from Andrew Morton:
 "190 patches.

  Subsystems affected by this patch series: mm (hugetlb, userfaultfd,
  vmscan, kconfig, proc, z3fold, zbud, ras, mempolicy, memblock,
  migration, thp, nommu, kconfig, madvise, memory-hotplug, zswap,
  zsmalloc, zram, cleanups, kfence, and hmm), procfs, sysctl, misc,
  core-kernel, lib, lz4, checkpatch, init, kprobes, nilfs2, hfs,
  signals, exec, kcov, selftests, compress/decompress, and ipc"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (190 commits)
  ipc/util.c: use binary search for max_idx
  ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
  ipc: use kmalloc for msg_queue and shmid_kernel
  ipc sem: use kvmalloc for sem_undo allocation
  lib/decompressors: remove set but not used variabled 'level'
  selftests/vm/pkeys: exercise x86 XSAVE init state
  selftests/vm/pkeys: refill shadow register after implicit kernel write
  selftests/vm/pkeys: handle negative sys_pkey_alloc() return code
  selftests/vm/pkeys: fix alloc_random_pkey() to make it really, really random
  kcov: add __no_sanitize_coverage to fix noinstr for all architectures
  exec: remove checks in __register_bimfmt()
  x86: signal: don't do sas_ss_reset() until we are certain that sigframe won't be abandoned
  hfsplus: report create_date to kstat.btime
  hfsplus: remove unnecessary oom message
  nilfs2: remove redundant continue statement in a while-loop
  kprobes: remove duplicated strong free_insn_page in x86 and s390
  init: print out unknown kernel parameters
  checkpatch: do not complain about positive return values starting with EPOLL
  checkpatch: improve the indented label test
  checkpatch: scripts/spdxcheck.py now requires python3
  ...
2021-07-02 12:08:10 -07:00
Barry Song 66ce75144d kprobes: remove duplicated strong free_insn_page in x86 and s390
free_insn_page() in x86 and s390 is the same as the common weak function
in kernel/kprobes.c.  Plus, the comment "Recover page to RW mode before
releasing it" in x86 seems out of place there, since resetting the mapping
is done by common code in vfree(), called from module_memfree().  So drop
these two duplicated strong functions and the related comment, then mark
the common one in kernel/kprobes.c strong.

Link: https://lkml.kernel.org/r/20210608065736.32656-1-song.bao.hua@hisilicon.com
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-07-01 11:06:06 -07:00
Peter Zijlstra ec6aba3d2b kprobes: Remove kprobe::fault_handler
The reason for kprobe::fault_handler(), as given by their comment:

 * We come here because instructions in the pre/post
 * handler caused the page_fault, this could happen
 * if handler tries to access user space by
 * copy_from_user(), get_user() etc. Let the
 * user-specified handler try to fix it first.

Is just plain bad. Those other handlers are run from non-preemptible
context and had better use _nofault() functions. Also, there is no
upstream usage of this.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/r/20210525073213.561116662@infradead.org
2021-06-01 16:00:08 +02:00
Christophe Leroy 7ee3e97e00 kprobes: Allow architectures to override optinsn page allocation
Some architectures, like powerpc, require a non-standard
allocation of the optinsn page, because module pages are
too far from the kernel for direct branches.

Define weak alloc_optinsn_page() and free_optinsn_page(), that
fall back on alloc_insn_page() and free_insn_page() when not
overridden by the architecture.
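The weak-symbol fallback can be sketched in userspace C with GCC/Clang's weak attribute (toolchain support assumed); the function names here are invented for the sketch, not the kernel's.

```c
#include <stdlib.h>

/* Default (weak) definition: fall back on the plain allocator, the way
 * the weak alloc_optinsn_page() falls back on alloc_insn_page(). */
__attribute__((weak)) void *optinsn_page_alloc_sketch(void)
{
    return malloc(4096);   /* stand-in for alloc_insn_page() */
}

__attribute__((weak)) void optinsn_page_free_sketch(void *page)
{
    free(page);            /* stand-in for free_insn_page() */
}

/* An architecture "overrides" these by providing strong definitions
 * with the same names in another translation unit; the linker then
 * picks the strong symbols over these weak defaults. */
```

This mirrors the design choice in the commit: architectures that need nothing special pay no cost, and only powerpc-style layouts provide their own strong definitions.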

Suggested-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/40a43d6df1fdf41ade36e9a46e60a4df774ca9f6.1620896780.git.christophe.leroy@csgroup.eu
2021-05-23 20:51:35 +10:00
Masami Hiramatsu c85c9a2c6e kprobes: Fix to delay the kprobes jump optimization
Commit 36dadef23f ("kprobes: Init kprobes in early_initcall")
moved the kprobe setup into early_initcall(), which includes kprobe
jump optimization.
The kprobes jump optimizer involves synchronize_rcu_tasks(), which
depends on ksoftirqd and rcu_spawn_tasks_*(). However, since
those are set up in core_initcall(), the kprobes jump optimizer cannot
run at early_initcall() time.

To avoid this issue, disable the kprobe optimization in
early_initcall() and enable it in subsys_initcall().

Note that non-optimized kprobes are still available after
early_initcall(). Only jump optimization is delayed.
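The two-stage enablement can be modeled with a pair of flags; this is a loose userspace sketch of the ordering with invented function names (the kernel expresses the two stages as early_initcall() and subsys_initcall() levels, not as direct calls).

```c
#include <stdbool.h>

/* Sketch of the two-stage bring-up: probes work early, but the
 * optimizer may only run once later infrastructure is up. */
static bool probes_initialized;
static bool optimizer_allowed;

static void early_init(void)      /* early_initcall() stage */
{
    probes_initialized = true;    /* non-optimized probes usable now */
    /* optimizer stays disabled: RCU-tasks etc. are not up yet */
}

static void subsys_init(void)     /* subsys_initcall() stage */
{
    optimizer_allowed = true;     /* jump optimization may start */
}

static bool can_optimize(void)
{
    return probes_initialized && optimizer_allowed;
}
```

Between the two stages, probes fire through the non-optimized path; only the jump optimization waits.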

Link: https://lkml.kernel.org/r/161365856280.719838.12423085451287256713.stgit@devnote2

Fixes: 36dadef23f ("kprobes: Init kprobes in early_initcall")
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: RCU <rcu@vger.kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Theodore Y . Ts'o" <tytso@mit.edu>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: stable@vger.kernel.org
Reported-by: Paul E. McKenney <paulmck@kernel.org>
Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reported-by: Uladzislau Rezki <urezki@gmail.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-02-19 14:57:12 -05:00
Masami Hiramatsu 33b1d14668 kprobes: Warn if the kprobe is reregistered
Warn if the kprobe is reregistered, since there must be
a software bug (an actively used resource must not be re-registered)
and the caller must be fixed.
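The guard amounts to checking an "already registered" marker at the entry of the register path. Below is a minimal sketch with hypothetical names; in the kernel the check works by looking the kprobe up via get_kprobe() rather than a flag on the handle.

```c
#include <stdbool.h>

/* Sketch of the re-registration guard: an actively used handle must
 * not be registered twice, so the entry point checks and refuses. */
struct probe_handle { bool registered; };

static struct probe_handle example_probe;

static int register_sketch(struct probe_handle *p)
{
    if (p->registered)
        return -22;          /* -EINVAL; the kernel also WARNs here */
    p->registered = true;
    return 0;
}

static void unregister_sketch(struct probe_handle *p)
{
    p->registered = false;
}
```

Refusing early keeps the registered probe's data structures intact instead of silently re-initializing them.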

Link: https://lkml.kernel.org/r/161236436734.194052.4058506306336814476.stgit@devnote2

Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-02-09 12:44:32 -05:00
Wang ShaoBo 0188b87899 kretprobe: Avoid re-registration of the same kretprobe earlier
Our system encountered a re-init error when re-registering the same
kretprobe, where the kretprobe_instance in rp->free_instances is
illegally accessed after the re-init.

A guard against re-registration was introduced for kprobes earlier,
but it is missing for register_kretprobe(). We must check whether the
kprobe has already been registered before re-initializing the
kretprobe; otherwise the re-init destroys the data structures of the
registered kretprobe, which can lead to memory leaks, system crashes,
and other unexpected behavior.

Use check_kprobe_rereg() to check whether the kprobe has been
re-registered before running register_kretprobe()'s body, giving a
warning message and terminating the registration process.

Link: https://lkml.kernel.org/r/20210128124427.2031088-1-bobo.shaobowang@huawei.com

Cc: stable@vger.kernel.org
Fixes: 1f0ab40976 ("kprobes: Prevent re-registration of the same kprobe")
[ The above commit should have been done for kretprobes too ]
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@linux.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-01-29 17:29:16 -05:00
Masami Hiramatsu 97c753e62e tracing/kprobe: Fix to support kretprobe events on unloaded modules
Fix kprobe_on_func_entry() to return an error code instead of false so
that register_kretprobe() can return an appropriate error code.

append_trace_kprobe() expects the kprobe registration to return -ENOENT
when the target symbol is not found, and it checks whether the target
module is unloaded or not. If the target module doesn't exist, it
defers probing the target symbol until the module is loaded.

However, since register_kretprobe() returns -EINVAL instead of -ENOENT
in that case, putting a kretprobe event on an unloaded module always
fails. e.g.

Kprobe event:
/sys/kernel/debug/tracing # echo p xfs:xfs_end_io >> kprobe_events
[   16.515574] trace_kprobe: This probe might be able to register after target module is loaded. Continue.

Kretprobe event: (p -> r)
/sys/kernel/debug/tracing # echo r xfs:xfs_end_io >> kprobe_events
sh: write error: Invalid argument
/sys/kernel/debug/tracing # cat error_log
[   41.122514] trace_kprobe: error: Failed to register probe event
  Command: r xfs:xfs_end_io
             ^

To fix this bug, change kprobe_on_func_entry() to detect symbol lookup
failure and return -ENOENT in that case. Otherwise it returns -EINVAL
or 0 (success: the given address is on the function entry).
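The fixed return contract can be sketched as a three-way result. The helper below is an invented stand-in for illustration, not the kernel function's real signature.

```c
/* Sketch of the fixed contract: distinguish "symbol not found"
 * (-ENOENT, caller may retry after module load) from "bad offset"
 * (-EINVAL, permanently wrong) from success (0). */
#define SK_ENOENT 2
#define SK_EINVAL 22

static int on_func_entry(int symbol_found, int offset_ok)
{
    if (!symbol_found)
        return -SK_ENOENT;   /* defer until the module is loaded */
    if (!offset_ok)
        return -SK_EINVAL;   /* address is not a function entry */
    return 0;
}
```

The whole bug was that both failure modes collapsed to -EINVAL, so the caller could not tell "retry later" apart from "never going to work".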

Link: https://lkml.kernel.org/r/161176187132.1067016.8118042342894378981.stgit@devnote2

Cc: stable@vger.kernel.org
Fixes: 59158ec4ae ("tracing/kprobes: Check the probe on unloaded module correctly")
Reported-by: Jianlin Lv <Jianlin.Lv@arm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-01-29 15:39:48 -05:00
Ingo Molnar 0a986ea81e Merge branch 'linus' into perf/kprobes
Merge recent kprobes updates into perf/kprobes that came from -mm.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-11-07 13:18:49 +01:00
Peter Zijlstra 6e426e0fcd kprobes: Replace rp->free_instance with freelist
Gets rid of rp->lock, and as a result kretprobes are now fully
lockless.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/159870623583.1229682.17472357584134058687.stgit@devnote2
2020-10-12 18:27:28 +02:00
Peter Zijlstra d741bf41d7 kprobes: Remove kretprobe hash
The kretprobe hash is mostly superfluous, replace it with a per-task
variable.

This gets rid of the task hash and its related locking.

Note that this may change the kprobes module-exported API for kretprobe
handlers. If any out-of-tree kretprobe user uses ri->rp, use
get_kretprobe(ri) instead.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/159870620431.1229682.16325792502413731312.stgit@devnote2
2020-10-12 18:27:27 +02:00
Masami Hiramatsu 36dadef23f kprobes: Init kprobes in early_initcall
Init the kprobes feature in early_initcall, the same as jump_label and
dynamic_debug do, so that we can use kprobes events at an earlier
boot stage.

Link: https://lkml.kernel.org/r/159974151897.478751.8342374158615496628.stgit@devnote2

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-09-21 21:06:04 -04:00
Masami Hiramatsu 82d083ab60 kprobes: tracing/kprobes: Fix to kill kprobes on initmem after boot
Since the kprobe_event= cmdline option allows users to put kprobes on
functions in initmem, kprobes has to remove such probes after boot.
Currently the probes on the init functions in modules are handled
by the module callback, but the kernel init text isn't handled.
Without this, kprobes may access a non-existent text area when
disabling or removing a probe.

Link: https://lkml.kernel.org/r/159972810544.428528.1839307531600646955.stgit@devnote2

Fixes: 970988e19e ("tracing/kprobe: Add kprobe_event= boot parameter")
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-09-18 14:27:24 -04:00
Masami Hiramatsu 3031313eb3 kprobes: Fix to check probe enabled before disarm_kprobe_ftrace()
Commit 0cb2f1372b ("kprobes: Fix NULL pointer dereference at
kprobe_ftrace_handler") fixed one bug but did not fix the underlying
issue completely. If we run the kprobe_module.tc test of ftracetest,
the kernel shows a warning as below.

# ./ftracetest test.d/kprobe/kprobe_module.tc
=== Ftrace unit tests ===
[1] Kprobe dynamic event - probing module
...
[   22.400215] ------------[ cut here ]------------
[   22.400962] Failed to disarm kprobe-ftrace at trace_printk_irq_work+0x0/0x7e [trace_printk] (-2)
[   22.402139] WARNING: CPU: 7 PID: 200 at kernel/kprobes.c:1091 __disarm_kprobe_ftrace.isra.0+0x7e/0xa0
[   22.403358] Modules linked in: trace_printk(-)
[   22.404028] CPU: 7 PID: 200 Comm: rmmod Not tainted 5.9.0-rc2+ #66
[   22.404870] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1 04/01/2014
[   22.406139] RIP: 0010:__disarm_kprobe_ftrace.isra.0+0x7e/0xa0
[   22.406947] Code: 30 8b 03 eb c9 80 3d e5 09 1f 01 00 75 dc 49 8b 34 24 89 c2 48 c7 c7 a0 c2 05 82 89 45 e4 c6 05 cc 09 1f 01 01 e8 a9 c7 f0 ff <0f> 0b 8b 45 e4 eb b9 89 c6 48 c7 c7 70 c2 05 82 89 45 e4 e8 91 c7
[   22.409544] RSP: 0018:ffffc90000237df0 EFLAGS: 00010286
[   22.410385] RAX: 0000000000000000 RBX: ffffffff83066024 RCX: 0000000000000000
[   22.411434] RDX: 0000000000000001 RSI: ffffffff810de8d3 RDI: ffffffff810de8d3
[   22.412687] RBP: ffffc90000237e10 R08: 0000000000000001 R09: 0000000000000001
[   22.413762] R10: 0000000000000000 R11: 0000000000000001 R12: ffff88807c478640
[   22.414852] R13: ffffffff8235ebc0 R14: ffffffffa00060c0 R15: 0000000000000000
[   22.415941] FS:  00000000019d48c0(0000) GS:ffff88807d7c0000(0000) knlGS:0000000000000000
[   22.417264] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   22.418176] CR2: 00000000005bb7e3 CR3: 0000000078f7a000 CR4: 00000000000006a0
[   22.419309] Call Trace:
[   22.419990]  kill_kprobe+0x94/0x160
[   22.420652]  kprobes_module_callback+0x64/0x230
[   22.421470]  notifier_call_chain+0x4f/0x70
[   22.422184]  blocking_notifier_call_chain+0x49/0x70
[   22.422979]  __x64_sys_delete_module+0x1ac/0x240
[   22.423733]  do_syscall_64+0x38/0x50
[   22.424366]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   22.425176] RIP: 0033:0x4bb81d
[   22.425741] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e0 ff ff ff f7 d8 64 89 01 48
[   22.428726] RSP: 002b:00007ffc70fef008 EFLAGS: 00000246 ORIG_RAX: 00000000000000b0
[   22.430169] RAX: ffffffffffffffda RBX: 00000000019d48a0 RCX: 00000000004bb81d
[   22.431375] RDX: 0000000000000000 RSI: 0000000000000880 RDI: 00007ffc70fef028
[   22.432543] RBP: 0000000000000880 R08: 00000000ffffffff R09: 00007ffc70fef320
[   22.433692] R10: 0000000000656300 R11: 0000000000000246 R12: 00007ffc70fef028
[   22.434635] R13: 0000000000000000 R14: 0000000000000002 R15: 0000000000000000
[   22.435682] irq event stamp: 1169
[   22.436240] hardirqs last  enabled at (1179): [<ffffffff810df542>] console_unlock+0x422/0x580
[   22.437466] hardirqs last disabled at (1188): [<ffffffff810df19b>] console_unlock+0x7b/0x580
[   22.438608] softirqs last  enabled at (866): [<ffffffff81c0038e>] __do_softirq+0x38e/0x490
[   22.439637] softirqs last disabled at (859): [<ffffffff81a00f42>] asm_call_on_stack+0x12/0x20
[   22.440690] ---[ end trace 1e7ce7e1e4567276 ]---
[   22.472832] trace_kprobe: This probe might be able to register after target module is loaded. Continue.

This is because kill_kprobe() calls disarm_kprobe_ftrace() even
if the given probe is not enabled. In that case, ftrace_set_filter_ip()
fails because the given probe point is not registered with ftrace.

Fix this by checking that the given (going) probe is enabled before
invoking disarm_kprobe_ftrace().

Link: https://lkml.kernel.org/r/159888672694.1411785.5987998076694782591.stgit@devnote2

Fixes: 0cb2f1372b ("kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler")
Cc: Ingo Molnar <mingo@kernel.org>
Cc: "Naveen N . Rao" <naveen.n.rao@linux.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David Miller <davem@davemloft.net>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Chengming Zhou <zhouchengming@bytedance.com>
Cc: stable@vger.kernel.org
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-09-18 11:50:51 -04:00
Masami Hiramatsu bcb53209be kprobes: Fix to check probe enabled before disarm_kprobe_ftrace()
Commit:

  0cb2f1372b ("kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler")

fixed one bug but the underlying bugs are not completely fixed yet.

If we run a kprobe_module.tc of ftracetest, a warning triggers:

  # ./ftracetest test.d/kprobe/kprobe_module.tc
  === Ftrace unit tests ===
  [1] Kprobe dynamic event - probing module
  ...
   ------------[ cut here ]------------
   Failed to disarm kprobe-ftrace at trace_printk_irq_work+0x0/0x7e [trace_printk] (-2)
   WARNING: CPU: 7 PID: 200 at kernel/kprobes.c:1091 __disarm_kprobe_ftrace.isra.0+0x7e/0xa0

This is because kill_kprobe() calls disarm_kprobe_ftrace() even
if the given probe is not enabled. In that case, ftrace_set_filter_ip()
fails because the given probe point is not registered with ftrace.

Fix this by checking that the given (going) probe is enabled before
invoking disarm_kprobe_ftrace().

Fixes: 0cb2f1372b ("kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler")
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/159888672694.1411785.5987998076694782591.stgit@devnote2
2020-09-14 11:20:03 +02:00
Masami Hiramatsu 319f0ce284 kprobes: Make local functions static
Since we unified the kretprobe trampoline handler from arch/* code,
some functions and objects do not need to be exported anymore.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/159870618256.1229682.8692046612635810882.stgit@devnote2
2020-09-08 11:52:42 +02:00
Masami Hiramatsu b338817807 kprobes: Free kretprobe_instance with RCU callback
Free the kretprobe_instance with an RCU callback instead of directly
freeing the object in the kretprobe handler context.

This will make kretprobe run safer in NMI context.
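The shape of the change, deferring the free from the handler to a later safe context, can be sketched in userspace without RCU; call_rcu() plays the role of the queue-and-drain pair below, and all names here are invented.

```c
#include <stdlib.h>

/* Sketch of deferred freeing: the handler context only enqueues the
 * object; a safe context actually releases it later (the role that
 * the RCU callback plays in the kernel). */
#define QMAX 16
static void *defer_q[QMAX];
static int defer_n;

static void defer_free(void *p)
{
    if (defer_n < QMAX)
        defer_q[defer_n++] = p;   /* handler context: just enqueue */
}

static int drain_deferred(void)   /* safe context: actually free */
{
    int freed = defer_n;
    while (defer_n > 0)
        free(defer_q[--defer_n]);
    return freed;
}
```

Keeping free() out of the handler path is what makes the handler safe to run from contexts, such as NMI, where the allocator must not be entered.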

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/159870616685.1229682.11978742048709542226.stgit@devnote2
2020-09-08 11:52:35 +02:00
Masami Hiramatsu e03b4a084e kprobes: Remove NMI context check
The in_nmi() check in pre_handler_kretprobe() is meant to avoid
recursion, and blindly assumes that anything NMI is recursive.

However, since commit:

  9b38cc704e ("kretprobe: Prevent triggering kretprobe from within kprobe_flush_task")

there is a better way to detect and avoid actual recursion.

By setting a dummy kprobe, any actual exceptions will terminate early
(by trying to handle the dummy kprobe), and recursion will not happen.

Employ this to avoid the kretprobe_table_lock() recursion, replacing
the over-eager in_nmi() check.
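The dummy-probe trick can be modeled as a "busy" marker checked on handler entry. This is a single-threaded userspace sketch with hypothetical names; the kernel's version keeps the marker in a per-CPU current-kprobe slot via kprobe_busy_begin()/kprobe_busy_end().

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the recursion guard: rather than rejecting all NMI
 * context, mark "a probe handler is running"; a re-entrant hit sees
 * the marker and bails out early. */
static const char *current_probe;   /* per-CPU in the kernel */

static bool handler_enter(const char *name)
{
    if (current_probe)
        return false;        /* recursion detected: refuse to handle */
    current_probe = name;    /* kprobe_busy_begin() analogue */
    return true;
}

static void handler_exit(void)
{
    current_probe = NULL;    /* kprobe_busy_end() analogue */
}
```

Only genuinely recursive entries are rejected; a non-recursive NMI hit still gets handled, which is exactly what the blanket in_nmi() check got wrong.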

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/159870615628.1229682.6087311596892125907.stgit@devnote2
2020-09-08 11:52:35 +02:00
Masami Hiramatsu 66ada2ccae kprobes: Add generic kretprobe trampoline handler
Add a generic kretprobe trampoline handler to unify
all the cloned arch/* kretprobe trampoline handlers.

The generic kretprobe trampoline handler is based on the
x86 implementation, because it is the latest implementation.
It has frame pointer checking, kprobe_busy_begin/end and
return address fixup for user handlers.

[ mingo: Minor edits. ]

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/159870600138.1229682.3424065380448088833.stgit@devnote2
2020-09-08 11:52:31 +02:00
Linus Torvalds 32663c78c1 Tracing updates for 5.9
- The biggest news is that the tracing ring buffer can now time events that
    interrupted other ring buffer events. Before this change, if an interrupt
    came in while recording another event, and that interrupt also had an
    event, those events would all have the same time stamp as the event it
    interrupted. Now, with the new design, those events will have a unique time
    stamp and rightfully display the time for those events that were recorded
    while interrupting another event.
 
  - Bootconfig now has an "override" operator that lets the users have a
    default config, but then add options to override the default.
 
  - A fix was made to properly filter function graph tracing to the ftrace
    PIDs. This came in at the end of the -rc cycle, and needs to be backported.
 
  - Several clean ups, performance updates, and minor fixes as well.
 -----BEGIN PGP SIGNATURE-----
 
 iIoEABYIADIWIQRRSw7ePDh/lE+zeZMp5XQQmuv6qgUCXy3GOBQccm9zdGVkdEBn
 b29kbWlzLm9yZwAKCRAp5XQQmuv6qphsAP9ci1jtrC2+cMBMCNKb/AFpA/nDaKsD
 hpsDzvD0YPOmCAEA9QbZset8wUNG49R4FexP7egQ8Ad2S6Oa5f60jWleDQY=
 =lH+q
 -----END PGP SIGNATURE-----

Merge tag 'trace-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:

 - The biggest news is that the tracing ring buffer can now time events
   that interrupted other ring buffer events.

   Before this change, if an interrupt came in while recording another
   event, and that interrupt also had an event, those events would all
   have the same time stamp as the event it interrupted.

   Now, with the new design, those events will have a unique time stamp
   and rightfully display the time for those events that were recorded
   while interrupting another event.

 - Bootconfig now has an "override" operator that lets the users have a
   default config, but then add options to override the default.

 - A fix was made to properly filter function graph tracing to the
   ftrace PIDs. This came in at the end of the -rc cycle, and needs to
   be backported.

 - Several clean ups, performance updates, and minor fixes as well.

* tag 'trace-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (39 commits)
  tracing: Add trace_array_init_printk() to initialize instance trace_printk() buffers
  kprobes: Fix compiler warning for !CONFIG_KPROBES_ON_FTRACE
  tracing: Use trace_sched_process_free() instead of exit() for pid tracing
  bootconfig: Fix to find the initargs correctly
  Documentation: bootconfig: Add bootconfig override operator
  tools/bootconfig: Add testcases for value override operator
  lib/bootconfig: Add override operator support
  kprobes: Remove show_registers() function prototype
  tracing/uprobe: Remove dead code in trace_uprobe_register()
  kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler
  ftrace: Fix ftrace_trace_task return value
  tracepoint: Use __used attribute definitions from compiler_attributes.h
  tracepoint: Mark __tracepoint_string's __used
  trace : Have tracing buffer info use kvzalloc instead of kzalloc
  tracing: Remove outdated comment in stack handling
  ftrace: Do not let direct or IPMODIFY ftrace_ops be added to module and set trampolines
  ftrace: Setup correct FTRACE_FL_REGS flags for module
  tracing/hwlat: Honor the tracing_cpumask
  tracing/hwlat: Drop the duplicate assignment in start_kthread()
  tracing: Save one trace_event->type by using __TRACE_LAST_TYPE
  ...
2020-08-07 18:29:15 -07:00
Muchun Song 10de795a5a kprobes: Fix compiler warning for !CONFIG_KPROBES_ON_FTRACE
Fix compiler warning (as shown below) for !CONFIG_KPROBES_ON_FTRACE.

kernel/kprobes.c: In function 'kill_kprobe':
kernel/kprobes.c:1116:33: warning: statement with no effect
[-Wunused-value]
 1116 | #define disarm_kprobe_ftrace(p) (-ENODEV)
      |                                 ^
kernel/kprobes.c:2154:3: note: in expansion of macro
'disarm_kprobe_ftrace'
 2154 |   disarm_kprobe_ftrace(p);

Link: https://lore.kernel.org/r/20200805142136.0331f7ea@canb.auug.org.au
Link: https://lkml.kernel.org/r/20200805172046.19066-1-songmuchun@bytedance.com

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: 0cb2f1372b ("kprobes: Fix NULL pointer dereference at kprobe_ftrace_handler")
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-08-06 09:16:27 -04:00