Commit Graph

169 Commits

Denis Aleksandrov 7c3f326164 livepatch: Add stack_order sysfs attribute
JIRA: https://issues.redhat.com/browse/RHEL-85303

Add a "stack_order" sysfs attribute which holds the order in which a live
patch module was loaded into the system. A user can then determine which
live patch provides the active version of a patched function.

cat /sys/kernel/livepatch/livepatch_1/stack_order -> 1

means that livepatch_1 is the first live patch applied

cat /sys/kernel/livepatch/livepatch_module/stack_order -> N

means that livepatch_module is the Nth live patch applied

Suggested-by: Petr Mladek <pmladek@suse.com>
Suggested-by: Miroslav Benes <mbenes@suse.cz>
Suggested-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Wardenjohn <zhangwarden@gmail.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Tested-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lore.kernel.org/r/20241008014856.3729-2-zhangwarden@gmail.com
[pmladek@suse.com: Updated kernel version and date in the ABI documentation.]
Signed-off-by: Petr Mladek <pmladek@suse.com>
(cherry picked from commit 3dae09de406167123449d9ece1f51855d5bac01a)
Signed-off-by: Denis Aleksandrov <daleksan@redhat.com>
2025-04-03 13:23:15 -04:00
Denis Aleksandrov cab71a4b8d livepatch: Use kallsyms_on_each_match_symbol() to improve performance
JIRA: https://issues.redhat.com/browse/RHEL-85303

Based on test results comparing kallsyms_on_each_match_symbol() with
kallsyms_on_each_symbol(), the average performance improves by more than
a factor of 1500.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
(cherry picked from commit 9cb37357dfce1b596041ad68a20407c8b4e76635)
Signed-off-by: Denis Aleksandrov <daleksan@redhat.com>
2025-04-03 13:22:35 -04:00
Denis Aleksandrov 579531376b livepatch: Fix build failure on 32 bits processors
JIRA: https://issues.redhat.com/browse/RHEL-85303

Trying to build livepatch on powerpc/32 results in:

	kernel/livepatch/core.c: In function 'klp_resolve_symbols':
	kernel/livepatch/core.c:221:23: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
	  221 |                 sym = (Elf64_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
	      |                       ^
	kernel/livepatch/core.c:221:21: error: assignment to 'Elf32_Sym *' {aka 'struct elf32_sym *'} from incompatible pointer type 'Elf64_Sym *' {aka 'struct elf64_sym *'} [-Werror=incompatible-pointer-types]
	  221 |                 sym = (Elf64_Sym *)sechdrs[symndx].sh_addr + ELF_R_SYM(relas[i].r_info);
	      |                     ^
	kernel/livepatch/core.c: In function 'klp_apply_section_relocs':
	kernel/livepatch/core.c:312:35: error: passing argument 1 of 'klp_resolve_symbols' from incompatible pointer type [-Werror=incompatible-pointer-types]
	  312 |         ret = klp_resolve_symbols(sechdrs, strtab, symndx, sec, sec_objname);
	      |                                   ^~~~~~~
	      |                                   |
	      |                                   Elf32_Shdr * {aka struct elf32_shdr *}
	kernel/livepatch/core.c:193:44: note: expected 'Elf64_Shdr *' {aka 'struct elf64_shdr *'} but argument is of type 'Elf32_Shdr *' {aka 'struct elf32_shdr *'}
	  193 | static int klp_resolve_symbols(Elf64_Shdr *sechdrs, const char *strtab,
	      |                                ~~~~~~~~~~~~^~~~~~~

Fix it by using the right types instead of forcing 64 bits types.

Fixes: 7c8e2bdd5f ("livepatch: Apply vmlinux-specific KLP relocations early")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Petr Mladek <pmladek@suse.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5288e11b018a762ea3351cc8fb2d4f15093a4457.1640017960.git.christophe.leroy@csgroup.eu

(cherry picked from commit 2f293651eca3eacaeb56747dede31edace7329d2)
Signed-off-by: Denis Aleksandrov <daleksan@redhat.com>
2025-04-03 13:21:39 -04:00
Ryan Sullivan 67eaf09d47 livepatch: Add "replace" sysfs attribute
JIRA: https://issues.redhat.com/browse/RHEL-61781

There are situations when it might make sense to combine livepatches
with and without the atomic replace on the same system. For example,
the livepatch without the atomic replace might provide a hotfix
or extra tuning.

Managing livepatches on such systems might be challenging, and it would
be useful to know which of the installed livepatches do not use the
atomic replace.

Add new sysfs interface 'replace'. It works as follows:

   $ cat /sys/kernel/livepatch/livepatch-non_replace/replace
   0

   $ cat /sys/kernel/livepatch/livepatch-replace/replace
   1

[ commit log improved by Petr ]

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lore.kernel.org/r/20240625151123.2750-2-laoar.shao@gmail.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
(cherry picked from commit adb68ed26a3e92224e04502c768f1bb03b7c7aeb)
Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2024-10-15 09:54:10 -04:00
Ryan Sullivan da7ba890e8 livepatch: Replace snprintf() with sysfs_emit()
JIRA: https://issues.redhat.com/browse/RHEL-61781

Let's use sysfs_emit() instead of snprintf().

Suggested-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lore.kernel.org/r/20240625151123.2750-4-laoar.shao@gmail.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
(cherry picked from commit 920526928089b00be7881c7112a463fe8a63371b)
Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2024-10-15 09:54:10 -04:00
Ryan Sullivan fe4b765d6d livepatch: Rename KLP_* to KLP_TRANSITION_*
JIRA: https://issues.redhat.com/browse/RHEL-61781

The original KLP_* macros describe the state of the transition. Rename
them to KLP_TRANSITION_* to clear up the confusing description of the
klp transition state.

Signed-off-by: Wardenjohn <zhangwarden@gmail.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Tested-by: Petr Mladek <pmladek@suse.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lore.kernel.org/r/20240507050111.38195-2-zhangwarden@gmail.com
Signed-off-by: Petr Mladek <pmladek@suse.com>
(cherry picked from commit d927752f287fe10965612541593468ffcfa9231f)
Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2024-10-15 09:54:10 -04:00
Donald Dutile 9d720c5a56 kallsyms: Delete an unused parameter related to {module_}kallsyms_on_each_symbol()
JIRA: https://issues.redhat.com/browse/RHEL-28063

commit 3703bd54cd37e7875f51ece8df8c85c184e40bba
Author: Zhen Lei <thunder.leizhen@huawei.com>
Date:   Wed Mar 8 15:38:46 2023 +0800

    kallsyms: Delete an unused parameter related to {module_}kallsyms_on_each_symbol()

    The parameter 'struct module *' in the hook function associated with
    {module_}kallsyms_on_each_symbol() is no longer used. Delete it.

    Suggested-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Reviewed-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
    Acked-by: Jiri Olsa <jolsa@kernel.org>
    Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>

Signed-off-by: Donald Dutile <ddutile@redhat.com>
2024-06-17 14:17:23 -04:00
Donald Dutile b0a329a194 livepatch: Improve the search performance of module_kallsyms_on_each_symbol()
JIRA: https://issues.redhat.com/browse/RHEL-28063

Conflicts: upstream 73feb8d5fa3b was already backported to RHEL9, which
           results in a missing hunk for RHEL9 in module.h.
           Removed the bpf_trace.c hunk since bpf was updated closer to
           upstream and no longer uses the kallsyms function.

commit 07cc2c931e8e1083a31f4c51d2244fe264af63bf
Author: Zhen Lei <thunder.leizhen@huawei.com>
Date:   Mon Jan 16 11:10:07 2023 +0100

    livepatch: Improve the search performance of module_kallsyms_on_each_symbol()

    Currently we traverse all symbols of all modules to find the specified
    function for the specified module. But in reality, we just need to find
    the given module and then traverse all the symbols in it.

    Let's add a new parameter 'const char *modname' to function
    module_kallsyms_on_each_symbol(), then we can compare the module names
    directly in this function and call hook 'fn' after matching. If 'modname'
    is NULL, the symbols of all modules are still traversed for compatibility
    with other usage cases.

    Phase1: mod1-->mod2..(subsequent modules do not need to be compared)
                    |
    Phase2:          -->f1-->f2-->f3

    Assuming that there are m modules, each module has n symbols on average,
    then the time complexity is reduced from O(m * n) to O(m) + O(n).

    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Song Liu <song@kernel.org>
    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    Signed-off-by: Jiri Olsa <jolsa@kernel.org>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
    Link: https://lore.kernel.org/r/20230116101009.23694-2-jolsa@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Donald Dutile <ddutile@redhat.com>
2024-06-17 14:17:22 -04:00
Donald Dutile c67a86d17a kallsyms: increase maximum kernel symbol length to 512
JIRA: https://issues.redhat.com/browse/RHEL-28063

commit b8a94bfb33952bb17fbc65f8903d242a721c533d
Author: Miguel Ojeda <ojeda@kernel.org>
Date:   Mon Apr 5 05:03:50 2021 +0200

    kallsyms: increase maximum kernel symbol length to 512

    Rust symbols can become quite long due to namespacing introduced
    by modules, types, traits, generics, etc. For instance,
    the following code:

        pub mod my_module {
            pub struct MyType;
            pub struct MyGenericType<T>(T);

            pub trait MyTrait {
                fn my_method() -> u32;
            }

            impl MyTrait for MyGenericType<MyType> {
                fn my_method() -> u32 {
                    42
                }
            }
        }

    generates a symbol of length 96 when using the upcoming v0 mangling scheme:

        _RNvXNtCshGpAVYOtgW1_7example9my_moduleINtB2_13MyGenericTypeNtB2_6MyTypeENtB2_7MyTrait9my_method

    At the moment, Rust symbols may reach lengths of up to 300 characters.
    Setting 512 as the maximum seems like a reasonable choice to leave
    some headroom.

    Reviewed-by: Kees Cook <keescook@chromium.org>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Co-developed-by: Alex Gaynor <alex.gaynor@gmail.com>
    Signed-off-by: Alex Gaynor <alex.gaynor@gmail.com>
    Co-developed-by: Wedson Almeida Filho <wedsonaf@google.com>
    Signed-off-by: Wedson Almeida Filho <wedsonaf@google.com>
    Co-developed-by: Gary Guo <gary@garyguo.net>
    Signed-off-by: Gary Guo <gary@garyguo.net>
    Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
    Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
    Signed-off-by: Miguel Ojeda <ojeda@kernel.org>

Signed-off-by: Donald Dutile <ddutile@redhat.com>
2024-06-17 14:17:21 -04:00
Ryan Sullivan 42272ec587 livepatch: Fix missing newline character in klp_resolve_symbols()
JIRA: https://issues.redhat.com/browse/RHEL-31518

Without the newline character, the log may not be printed immediately
after the error occurs.

Fixes: ca376a9374 ("livepatch: Prevent module-specific KLP rela sections from referencing vmlinux symbols")
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20230914072644.4098857-1-zhengyejian1@huawei.com
(cherry picked from commit 67e18e132f0fd738f8c8cac3aa1420312073f795)
Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2024-04-17 16:41:33 -04:00
Jan Stancek 52ab1aed59 Merge: livepatch: selected fixes for rhel-9.4
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/3107

JIRA: https://issues.redhat.com/browse/RHEL-2768

Small bug fixes and documentation updates for livepatch
subsystem as requested by client

Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>

Approved-by: David Arcari <darcari@redhat.com>
Approved-by: Julia Denham <jdenham@redhat.com>

Signed-off-by: Jan Stancek <jstancek@redhat.com>
2023-11-13 10:15:36 +01:00
Ryan Sullivan 98889bdd81 livepatch: Make 'klp_stack_entries' static
JIRA: https://issues.redhat.com/browse/RHEL-2768

commit 42cffe980ce383893660d78e33340763ca1dadae
Author: Josh Poimboeuf <jpoimboe@kernel.org>
Date:   Tue May 30 16:15:58 2023 -0700

    livepatch: Make 'klp_stack_entries' static

    The 'klp_stack_entries' percpu array is only used in transition.c.  Make
    it static.

    Fixes: e92606fa172f ("livepatch: Convert stack entries array to percpu")
    Reported-by: kernel test robot <lkp@intel.com>
    Closes: https://lore.kernel.org/oe-kbuild-all/202305171329.i0UQ4TJa-lkp@intel.com/
    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Link: https://lore.kernel.org/r/5115752fca6537720700f4bf5b178959dfbca41a.1685488550.git.jpoimboe@kernel.org

Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2023-09-25 16:16:30 -04:00
Ryan Sullivan eefa4e141b livepatch: Convert stack entries array to percpu
JIRA: https://issues.redhat.com/browse/RHEL-2768

commit e92606fa172f63a26054885b9715be86c643229d
Author: Josh Poimboeuf <jpoimboe@kernel.org>
Date:   Mon Mar 13 16:33:46 2023 -0700

    livepatch: Convert stack entries array to percpu

    The entries array in klp_check_stack() is static local because it's too
    big to be reasonably allocated on the stack.  Serialized access is
    enforced by the klp_mutex.

    In preparation for calling klp_check_stack() without the mutex (from
    cond_resched), convert it to a percpu variable.

    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20230313233346.kayh4t2lpicjkpsv@treble

Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2023-09-25 16:16:30 -04:00
Ryan Sullivan de5372ab4a livepatch: Make kobj_type structures constant
JIRA: https://issues.redhat.com/browse/RHEL-2768

commit 1b47b80e2fa777b37a9486512f4636d7e6aaa353
Author: Thomas Weißschuh <linux@weissschuh.net>
Date:   Fri Feb 17 03:14:41 2023 +0000

    livepatch: Make kobj_type structures constant

    Since commit ee6d3dd4ed48 ("driver core: make kobj_type constant.")
    the driver core allows the usage of const struct kobj_type.

    Take advantage of this to constify the structure definitions to prevent
    modification at runtime.

    Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Link: https://lore.kernel.org/r/20230217-kobj_type-livepatch-v1-1-06ded292e897@weissschuh.net

Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2023-09-25 11:49:59 -04:00
Ryan Sullivan 3fbdd27563 livepatch,x86: Clear relocation targets on a module removal
JIRA: https://issues.redhat.com/browse/RHEL-2768

commit 0c05e7bd2d017a3a9a0f4e9a19ad4acf1f616f12
Author: Song Liu <song@kernel.org>
Date:   Wed Jan 25 10:54:01 2023 -0800

    livepatch,x86: Clear relocation targets on a module removal

    Josh reported a bug:

      When the object to be patched is a module, and that module is
      rmmod'ed and reloaded, it fails to load with:

      module: x86/modules: Skipping invalid relocation target, existing value is nonzero for type 2, loc 00000000ba0302e9, val ffffffffa03e293c
      livepatch: failed to initialize patch 'livepatch_nfsd' for module 'nfsd' (-8)
      livepatch: patch 'livepatch_nfsd' failed for module 'nfsd', refusing to load module 'nfsd'

      The livepatch module has a relocation which references a symbol
      in the _previous_ loading of nfsd. When apply_relocate_add()
      tries to replace the old relocation with a new one, it sees that
      the previous one is nonzero and it errors out.

    He also proposed three different solutions. We could remove the error
    check in apply_relocate_add() introduced by commit eda9cec4c9
    ("x86/module: Detect and skip invalid relocations"). However the check
    is useful for detecting corrupted modules.

    We could also refuse to let patched modules be removed. If that proved
    to be a major drawback for users, we could still implement a different
    approach, but that solution would also complicate the existing code a lot.

    We thus decided to reverse the relocation patching (clear all relocation
    targets on x86_64). The solution is not universal and is quite
    arch-specific, but it may prove to be simpler in the end.

    Reported-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Originally-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Song Liu <song@kernel.org>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Reviewed-by: Joe Lawrence <joe.lawrence@redhat.com>
    Tested-by: Joe Lawrence <joe.lawrence@redhat.com>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Link: https://lore.kernel.org/r/20230125185401.279042-2-song@kernel.org

Signed-off-by: Ryan Sullivan <rysulliv@redhat.com>
2023-09-25 11:47:26 -04:00
Phil Auld 8142c03a19 livepatch,sched: Add livepatch task switching to cond_resched()
JIRA: https://issues.redhat.com/browse/RHEL-1536
Conflicts: Minor fixup due to already having 8df1947c71 ("livepatch:
    Replace the fake signal sending with TIF_NOTIFY_SIGNAL infrastructure")

commit e3ff7c609f39671d1aaff4fb4a8594e14f3e03f8
Author: Josh Poimboeuf <jpoimboe@kernel.org>
Date:   Fri Feb 24 08:50:00 2023 -0800

    livepatch,sched: Add livepatch task switching to cond_resched()

    There have been reports [1][2] of live patches failing to complete
    within a reasonable amount of time due to CPU-bound kthreads.

    Fix it by patching tasks in cond_resched().

    There are four different flavors of cond_resched(), depending on the
    kernel configuration.  Hook into all of them.

    A more elegant solution might be to use a preempt notifier.  However,
    non-ORC unwinders can't unwind a preempted task reliably.

    [1] https://lore.kernel.org/lkml/20220507174628.2086373-1-song@kernel.org/
    [2] https://lkml.kernel.org/lkml/20230120-vhost-klp-switching-v1-0-7c2b65519c43@kernel.org

    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Tested-by: Seth Forshee (DigitalOcean) <sforshee@kernel.org>
    Link: https://lore.kernel.org/r/4ae981466b7814ec221014fc2554b2f86f3fb70b.1677257135.git.jpoimboe@kernel.org

Signed-off-by: Phil Auld <pauld@redhat.com>
2023-09-07 14:26:06 -04:00
Phil Auld b251ef1a82 livepatch: Skip task_call_func() for current task
JIRA: https://issues.redhat.com/browse/RHEL-1536

commit 383439d3d400d4c5a7ffb4495124adc6be6a05e2
Author: Josh Poimboeuf <jpoimboe@kernel.org>
Date:   Fri Feb 24 08:49:59 2023 -0800

    livepatch: Skip task_call_func() for current task

    The current task doesn't need the scheduler's protection to unwind its
    own stack.

    Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Tested-by: Seth Forshee (DigitalOcean) <sforshee@kernel.org>
    Link: https://lore.kernel.org/r/4b92e793462d532a05f03767151fa29db3e68e13.1677257135.git.jpoimboe@kernel.org

Signed-off-by: Phil Auld <pauld@redhat.com>
2023-09-07 14:26:06 -04:00
Julia Denham 905a26fb64 livepatch: Move the result-invariant calculation out of the loop
JIRA: https://issues.redhat.com/browse/RHEL-257

commit 53910ef7ba04fbf1ea74037fa997d3aa1ae3e0bd
Author: Zhen Lei <thunder.leizhen@huawei.com>
Date:   Fri Sep 30 09:54:46 2022 +0800

livepatch: Move the result-invariant calculation out of the loop

The values of the variables 'func_addr' and 'func_size' are not affected
by the for loop and do not change as entries[i] changes. Performance can
be improved by moving the calculation outside the loop.

No functional change.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
(cherry picked from commit 53910ef7ba04fbf1ea74037fa997d3aa1ae3e0bd)

Signed-off-by: Julia Denham <jdenham@redhat.com>
2023-04-10 11:55:14 -04:00
Julia Denham d223ec7f2b livepatch: add sysfs entry "patched" for each klp_object
JIRA: https://issues.redhat.com/browse/RHEL-257

commit bb26cfd9e77e8dadd4be2ca154017bde9326cd4b
Author: Song Liu <song@kernel.org>
Date:   Fri Sep 2 13:52:07 2022 -0700

livepatch: add sysfs entry "patched" for each klp_object

Add per klp_object sysfs entry "patched". It makes it easier to debug
typos in the module name.

Signed-off-by: Song Liu <song@kernel.org>
Reviewed-by: Joe Lawrence <joe.lawrence@redhat.com>
[pmladek@suse.com: Updated kernel version when the sysfs file will be introduced]
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20220902205208.3117798-2-song@kernel.org
(cherry picked from commit bb26cfd9e77e8dadd4be2ca154017bde9326cd4b)

Signed-off-by: Julia Denham <jdenham@redhat.com>
2023-04-10 11:55:00 -04:00
Julia Denham e7e95b6859 livepatch: Add a missing newline character in klp_module_coming()
JIRA: https://issues.redhat.com/browse/RHEL-257

commit 66d8529d0f0423bc0fc249a5620c342c122981fb
Author: Zhen Lei <thunder.leizhen@huawei.com>
Date:   Tue Aug 30 19:28:55 2022 +0800

livepatch: Add a missing newline character in klp_module_coming()

The error message is not printed immediately because it does not end with
a newline character.

Before:
root@localhost:~# insmod vmlinux.ko
insmod: ERROR: could not insert module vmlinux.ko: Invalid parameters

After:
root@localhost:~# insmod vmlinux.ko
[   43.982558] livepatch: vmlinux.ko: invalid module name
insmod: ERROR: could not insert module vmlinux.ko: Invalid parameters

Fixes: dcf550e52f ("livepatch: Disallow vmlinux.ko")
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20220830112855.749-1-thunder.leizhen@huawei.com
(cherry picked from commit 66d8529d0f0423bc0fc249a5620c342c122981fb)

Signed-off-by: Julia Denham <jdenham@redhat.com>
2023-04-10 11:54:04 -04:00
Julia Denham 04205aca17 livepatch: fix race between fork and KLP transition
JIRA: https://issues.redhat.com/browse/RHEL-257

commit 747f7a2901174c9afa805dddfb7b24db6f65e985
Author: Rik van Riel <riel@surriel.com>
Date:   Mon Aug 8 15:00:19 2022 -0400

livepatch: fix race between fork and KLP transition

The KLP transition code depends on the TIF_PATCH_PENDING and
the task->patch_state to stay in sync. On a normal (forward)
transition, TIF_PATCH_PENDING will be set on every task in
the system, while on a reverse transition (after a failed
forward one) first TIF_PATCH_PENDING will be cleared from
every task, followed by it being set on tasks that need to
be transitioned back to the original code.

However, the fork code copies over the TIF_PATCH_PENDING flag
from the parent to the child early on, in dup_task_struct and
setup_thread_stack. Much later, klp_copy_process will set
child->patch_state to match that of the parent.

Meanwhile, the parent's patch_state may have been changed by KLP loading
or unloading since it was initially copied over into the child.

This results in the KLP code occasionally hitting this warning in
klp_complete_transition:

        for_each_process_thread(g, task) {
                WARN_ON_ONCE(test_tsk_thread_flag(task, TIF_PATCH_PENDING));
                task->patch_state = KLP_UNDEFINED;
        }

Set, or clear, the TIF_PATCH_PENDING flag in the child task
depending on whether or not it is needed at the time
klp_copy_process is called, at a point in copy_process where the
tasklist_lock is held exclusively, preventing races with the KLP
code.

The KLP code does have a few places where the state is changed
without the tasklist_lock held, but those should not cause
problems: klp_update_patch_state(current) cannot be called while
the current task is in the middle of fork;
klp_check_and_switch_task() is called under the pi_lock, which
prevents rescheduling; and the idle tasks whose patch state is
manipulated do not fork.

This should prevent this warning from triggering again in the
future, and close the race for both normal and reverse transitions.

Signed-off-by: Rik van Riel <riel@surriel.com>
Reported-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Fixes: d83a7cb375 ("livepatch: change to a per-task consistency model")
Cc: stable@kernel.org
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20220808150019.03d6a67b@imladris.surriel.com
(cherry picked from commit 747f7a2901174c9afa805dddfb7b24db6f65e985)

Signed-off-by: Julia Denham <jdenham@redhat.com>
2023-04-10 11:53:51 -04:00
Julia Denham 410900c9b5 livepatch: Don't block removal of patches that are safe to unload
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2121205

commit 2957308343fa7c621df9f342fab88cb970b8d5f3
Author: Chengming Zhou <zhouchengming@bytedance.com>
Date:   Sat Mar 12 23:22:20 2022 +0800

livepatch: Don't block removal of patches that are safe to unload

module_put() is not called for a patch with the "forced" flag. This is
meant to block the removal of the livepatch module when the code might
still be in use after a forced transition.

klp_force_transition() currently sets "forced" flag for all patches on
the list.

In fact, any patch can be safely unloaded when it passed through
the consistency model in KLP_UNPATCHED transition.

In other words, the "forced" flag must be set only for livepatches
that are being removed. In particular, set the "forced" flag:

  + only for klp_transition_patch when the transition to KLP_UNPATCHED
    state was forced.

  + all replaced patches when the transition to KLP_PATCHED state was
    forced and the patch was replacing the existing patches.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Tested-by: Petr Mladek <pmladek@suse.com>
[mbenes@suse.cz: wording improvements]
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20220312152220.88127-1-zhouchengming@bytedance.com
(cherry picked from commit 2957308343fa7c621df9f342fab88cb970b8d5f3)

Signed-off-by: Julia Denham <jdenham@redhat.com>
2022-12-05 12:18:16 -05:00
Joe Lawrence f73fb4730e x86/livepatch: Validate __fentry__ location
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2121207

commit d15cb3dab1e4f00e29599a4f5e1f6678a530d270
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Mar 8 16:30:30 2022 +0100

    x86/livepatch: Validate __fentry__ location

    Currently livepatch assumes __fentry__ lives at func+0, which is most
    likely untrue with IBT on. Instead make it use ftrace_location() by
    default which both validates and finds the actual ip if there is any
    in the same symbol.

    Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Link: https://lore.kernel.org/r/20220308154318.285971256@infradead.org

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
2022-10-27 14:27:57 -04:00
C. Erastus Toe 1d0078b34a livepatch: Fix missing unlock on error in klp_enable_patch()
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069362

commit 50a0f3f55e382b313e7cbebdf8ccf1593296e16f
Author: Yang Yingliang <yangyingliang@huawei.com>
Date:   Sat Dec 25 10:51:15 2021 +0800

    livepatch: Fix missing unlock on error in klp_enable_patch()

    Add missing unlock when try_module_get() fails in klp_enable_patch().

    Fixes: 5ef3dd20555e8e8 ("livepatch: Fix kobject refcount bug on klp_init_patch_early failure path")
    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Acked-by: David Vernet <void@manifault.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Link: https://lore.kernel.org/r/20211225025115.475348-1-yangyingliang@huawei.com

Signed-off-by: C. Erastus Toe <ctoe@redhat.com>
2022-05-19 16:20:16 -04:00
C. Erastus Toe 24c8fe29ae livepatch: Fix kobject refcount bug on klp_init_patch_early failure path
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069362

commit 5ef3dd20555e8e878ac390a71e658db5fd02845c
Author: David Vernet <void@manifault.com>
Date:   Tue Dec 21 07:39:31 2021 -0800

    livepatch: Fix kobject refcount bug on klp_init_patch_early failure path

    When enabling a klp patch with klp_enable_patch(), klp_init_patch_early()
    is invoked to initialize the kobjects for the patch itself, as well as the
    'struct klp_object' and 'struct klp_func' objects that comprise it.
    However, there are some error paths in klp_enable_patch() where some
    kobjects may have been initialized with kobject_init(), but an error code
    is still returned due to e.g. a 'struct klp_object' having a NULL funcs
    pointer.

    In these paths, the initial reference of the kobject of the 'struct
    klp_patch' may never be released, along with one or more of its objects and
    their functions, as kobject_put() is not invoked on the cleanup path if
    klp_init_patch_early() returns an error code.

    For example, if an object entry such as the following were added to the
    sample livepatch module's klp patch, it would cause the vmlinux klp_object,
    and its klp_func which updates 'cmdline_proc_show', to never be released:

    static struct klp_object objs[] = {
            {
                    /* name being NULL means vmlinux */
                    .funcs = funcs,
            },
            {
                    /* NULL funcs -- would cause reference leak */
                    .name = "kvm",
            }, { }
    };

    Without this change, if CONFIG_DEBUG_KOBJECT is enabled, and the sample klp
    patch is loaded, the kobjects (the patch, the vmlinux 'struct klp_object',
    and its func) are observed as initialized, but never released, in the dmesg
    log output.  With the change, these kobject references no longer fail to be
    released as the error case is properly handled before they are initialized.

    Signed-off-by: David Vernet <void@manifault.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Signed-off-by: Petr Mladek <pmladek@suse.com>

Signed-off-by: C. Erastus Toe <ctoe@redhat.com>
2022-05-19 16:17:19 -04:00
C. Erastus Toe 61fc814350 Documentation: livepatch: Add livepatch API page
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2069362

commit e368cd72880360ffe9b298349ae96286dd121499
Author: David Vernet <void@manifault.com>
Date:   Tue Dec 21 06:57:45 2021 -0800

    Documentation: livepatch: Add livepatch API page

    The livepatch subsystem has several exported functions and objects with
    kerneldoc comments. Though the livepatch documentation contains handwritten
    descriptions of all of these exported functions, they are currently not
    pulled into the docs build using the kernel-doc directive.

    In order to allow readers of the documentation to see the full kerneldoc
    comments in the generated documentation files, this change adds a new
    Documentation/livepatch/api.rst page which contains kernel-doc directives
    to link the kerneldoc comments directly in the documentation.  With this,
    all of the hand-written descriptions of the APIs now cross-reference the
    kerneldoc comments on the new Livepatching APIs page, and running
    ./scripts/find-unused-docs.sh on kernel/livepatch no longer shows any files
    as missing documentation.

    Note that all of the handwritten API descriptions were left alone with the
    exception of Documentation/livepatch/system-state.rst, which was updated to
    allow the cross-referencing to work correctly. The file now follows the
    cross-referencing formatting guidance specified in
    Documentation/doc-guide/kernel-doc.rst. Furthermore, some comments around
    klp_shadow_free_all() were updated to say <_, id> rather than <*, id> to
    match the rest of the file, and to prevent the docs build from emitting an
    "Inline emphasis start-string without end string" error.

    Signed-off-by: David Vernet <void@manifault.com>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Petr Mladek <pmladek@suse.com>
    Link: https://lore.kernel.org/r/20211221145743.4098360-1-void@manifault.com

Signed-off-by: C. Erastus Toe <ctoe@redhat.com>
2022-05-19 16:10:58 -04:00
Herton R. Krzesinski 49764f5e39 Merge: ftrace: do CPU checking after preemption disabled
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/225

Bugzilla: http://bugzilla.redhat.com/1938117

commit d33cc657372366a8959f099c619a208b4c5dc664
Author: 王贇 <yun.wang@linux.alibaba.com>
Date:   Wed Oct 27 11:15:11 2021 +0800

    ftrace: do CPU checking after preemption disabled

    With CONFIG_DEBUG_PREEMPT we observed reports like:

      BUG: using smp_processor_id() in preemptible
      caller is perf_ftrace_function_call+0x6f/0x2e0
      CPU: 1 PID: 680 Comm: a.out Not tainted
      Call Trace:
       <TASK>
       dump_stack_lvl+0x8d/0xcf
       check_preemption_disabled+0x104/0x110
       ? optimize_nops.isra.7+0x230/0x230
       ? text_poke_bp_batch+0x9f/0x310
       perf_ftrace_function_call+0x6f/0x2e0
       ...
       __text_poke+0x5/0x620
       text_poke_bp_batch+0x9f/0x310

    This tells us that the CPU can change after the task is preempted, and
    a CPU check done before preemption is disabled is therefore invalid.

    Since ftrace_test_recursion_trylock() now helps to disable preemption,
    this patch simply does the checking after trylock() to address the
    issue.

    Link: https://lkml.kernel.org/r/54880691-5fe2-33e7-d12f-1fa6136f5183@linux.alibaba.com

    CC: Steven Rostedt <rostedt@goodmis.org>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
    Cc: Helge Deller <deller@gmx.de>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Paul Walmsley <paul.walmsley@sifive.com>
    Cc: Palmer Dabbelt <palmer@dabbelt.com>
    Cc: Albert Ou <aou@eecs.berkeley.edu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Joe Lawrence <joe.lawrence@redhat.com>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Jisheng Zhang <jszhang@kernel.org>
    Reported-by: Abaci <abaci@linux.alibaba.com>
    Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Wander Lairson Costa <wander@redhat.com>

Approved-by: Prarit Bhargava <prarit@redhat.com>
Approved-by: Joe Lawrence <joe.lawrence@redhat.com>

Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
2022-01-12 16:59:34 +00:00
Herton R. Krzesinski adc818bf26 Merge: Replace deprecated CPU-hotplug functions for kernel-rt
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/134
Bugzilla: http://bugzilla.redhat.com/2023079

Depends: https://gitlab.com/redhat/rhel/src/kernel/rhel-8/-/merge_requests/99

The kernel-rt variant requires these changes in order to make future
changes to the RHEL9 kernel.  These changes were found by code inspection
and affect not only kernel-rt but the regular kernel variants as well.

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
RH-Acked-by: Rafael Aquini <aquini@redhat.com>
RH-Acked-by: John W. Linville <linville@redhat.com>
RH-Acked-by: David Arcari <darcari@redhat.com>
RH-Acked-by: Vladis Dronov <vdronov@redhat.com>
RH-Acked-by: Jiri Benc <jbenc@redhat.com>
RH-Acked-by: Jarod Wilson <jarod@redhat.com>
RH-Acked-by: Waiman Long <longman@redhat.com>
RH-Acked-by: Phil Auld <pauld@redhat.com>
RH-Acked-by: Wander Lairson Costa <wander@redhat.com>
Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
2022-01-10 11:46:27 -03:00
Wander Lairson Costa d6846b1c2e ftrace: disable preemption when recursion locked
Bugzilla: http://bugzilla.redhat.com/1938117

commit ce5e48036c9e76a2a5bd4d9079eac273087a533a
Author: 王贇 <yun.wang@linux.alibaba.com>
Date:   Wed Oct 27 11:14:44 2021 +0800

    ftrace: disable preemption when recursion locked

    As the documentation explains, ftrace_test_recursion_trylock()
    and ftrace_test_recursion_unlock() are supposed to disable and
    enable preemption properly; however, this work is currently done
    outside of the functions, which is easy to miss by mistake.

    And since the internal use of trace_test_and_set_recursion()
    and trace_clear_recursion() also requires preemption to be
    disabled, we can just merge the logic.

    This patch makes sure that preemption has been disabled when
    trace_test_and_set_recursion() returns bit >= 0, and
    trace_clear_recursion() will re-enable preemption if it was
    previously enabled.

    Link: https://lkml.kernel.org/r/13bde807-779c-aa4c-0672-20515ae365ea@linux.alibaba.com

    CC: Petr Mladek <pmladek@suse.com>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
    Cc: Helge Deller <deller@gmx.de>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Paul Walmsley <paul.walmsley@sifive.com>
    Cc: Palmer Dabbelt <palmer@dabbelt.com>
    Cc: Albert Ou <aou@eecs.berkeley.edu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Joe Lawrence <joe.lawrence@redhat.com>
    Cc: Masami Hiramatsu <mhiramat@kernel.org>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Jisheng Zhang <jszhang@kernel.org>
    CC: Steven Rostedt <rostedt@goodmis.org>
    CC: Miroslav Benes <mbenes@suse.cz>
    Reported-by: Abaci <abaci@linux.alibaba.com>
    Suggested-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
    [ Removed extra line in comment - SDR ]
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Signed-off-by: Wander Lairson Costa <wander@redhat.com>
2022-01-03 11:30:41 -03:00
Phil Auld ecc8a6a83e sched,livepatch: Use wake_up_if_idle()
Bugzilla: http://bugzilla.redhat.com/2020279

commit 5de62ea84abd732ded7c5569426fd71c0420f83e
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Sep 21 22:16:02 2021 +0200

    sched,livepatch: Use wake_up_if_idle()

    Make sure to prod idle CPUs so they call klp_update_patch_state().

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Vasily Gorbik <gor@linux.ibm.com>
    Tested-by: Petr Mladek <pmladek@suse.com>
    Tested-by: Vasily Gorbik <gor@linux.ibm.com> # on s390
    Link: https://lkml.kernel.org/r/20210929151723.162004989@infradead.org

Signed-off-by: Phil Auld <pauld@redhat.com>
2021-12-13 16:07:48 -05:00
Phil Auld 8898c24d50 sched,livepatch: Use task_call_func()
Bugzilla: http://bugzilla.redhat.com/2020279

commit 00619f7c650e4e46c650cb2e2fd5f438b32dc64b
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Sep 21 21:54:32 2021 +0200

    sched,livepatch: Use task_call_func()

    Instead of frobbing around with scheduler internals, use the shiny new
    task_call_func() interface.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Acked-by: Miroslav Benes <mbenes@suse.cz>
    Acked-by: Vasily Gorbik <gor@linux.ibm.com>
    Tested-by: Petr Mladek <pmladek@suse.com>
    Tested-by: Vasily Gorbik <gor@linux.ibm.com> # on s390
    Link: https://lkml.kernel.org/r/20210929152428.709906138@infradead.org

Signed-off-by: Phil Auld <pauld@redhat.com>
2021-12-13 16:07:48 -05:00
Prarit Bhargava 471669735d livepatch: Replace deprecated CPU-hotplug functions.
Bugzilla: http://bugzilla.redhat.com/2023079

commit 1daf08a066cfe500587affd3fa3be8c13b8ff007
Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date:   Tue Aug 3 16:16:09 2021 +0200

    livepatch: Replace deprecated CPU-hotplug functions.

    The functions get_online_cpus() and put_online_cpus() have been
    deprecated during the CPU hotplug rework. They map directly to
    cpus_read_lock() and cpus_read_unlock().

    Replace deprecated CPU-hotplug functions with the official version.
    The behavior remains unchanged.

    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Jiri Kosina <jikos@kernel.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Joe Lawrence <joe.lawrence@redhat.com>
    Cc: live-patching@vger.kernel.org
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Reviewed-by: Petr Mladek <pmladek@suse.com>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
2021-12-09 09:04:12 -05:00
Linus Torvalds eb6bbacc46 Livepatching changes for 5.13

Merge tag 'livepatching-for-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/livepatching/livepatching

Pull livepatching update from Petr Mladek:

 - Use TIF_NOTIFY_SIGNAL infrastructure instead of the fake signal

* tag 'livepatching-for-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/livepatching/livepatching:
  livepatch: Replace the fake signal sending with TIF_NOTIFY_SIGNAL infrastructure
2021-04-27 18:14:38 -07:00
Miroslav Benes 8df1947c71 livepatch: Replace the fake signal sending with TIF_NOTIFY_SIGNAL infrastructure
Livepatch sends a fake signal to all remaining blocking tasks of a
running transition after a set period of time. It uses the TIF_SIGPENDING
flag for that purpose. Commit 12db8b6900 ("entry: Add support for
TIF_NOTIFY_SIGNAL") added a generic infrastructure to achieve the same.
Replace our bespoke solution with the generic one.

Reviewed-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2021-03-30 09:40:21 +02:00
Christoph Hellwig 013c1667cf kallsyms: refactor {,module_}kallsyms_on_each_symbol
Require an explicit call to module_kallsyms_on_each_symbol to look
for symbols in modules instead of the call from kallsyms_on_each_symbol,
and acquire module_mutex inside of module_kallsyms_on_each_symbol instead
of leaving that up to the caller.  Note that this slightly changes the
behavior for the livepatch code in that the symbols from vmlinux are not
iterated anymore if objname is set, but that actually is the desired
behavior in this case.

Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
2021-02-08 12:22:08 +01:00
Christoph Hellwig a006050575 module: use RCU to synchronize find_module
Allow for a RCU-sched critical section around find_module, following
the lower level find_module_all helper, and switch the two callers
outside of module.c to use such a RCU-sched critical section instead
of module_mutex.

Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jessica Yu <jeyu@kernel.org>
2021-02-08 12:21:40 +01:00
Steven Rostedt (VMware) 2860cd8a23 livepatch: Use the default ftrace_ops instead of REGS when ARGS is available
When CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS is available, the ftrace call
will be able to set the ip of the calling function. This will improve the
performance of live kernel patching where it does not need all the regs to
be stored just to change the instruction pointer.

If all archs that support live kernel patching also support
HAVE_DYNAMIC_FTRACE_WITH_ARGS, then the architecture specific function
klp_arch_set_pc() could be made generic.

It is possible that an arch can support HAVE_DYNAMIC_FTRACE_WITH_ARGS but
not HAVE_DYNAMIC_FTRACE_WITH_REGS and then have access to live patching.

Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: live-patching@vger.kernel.org
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-11-13 12:15:28 -05:00
Steven Rostedt (VMware) d19ad0775d ftrace: Have the callbacks receive a struct ftrace_regs instead of pt_regs
In preparation to have arguments of a function passed to callbacks attached
to functions as default, change the default callback prototype to receive a
struct ftrace_regs as the fourth parameter instead of a pt_regs.

Callbacks that set the FL_SAVE_REGS flag in their ftrace_ops flags will
now need to get the pt_regs via a ftrace_get_regs() helper call. If this
helper is called from a callback whose ftrace_ops did not have the
FL_SAVE_REGS flag set, it will return NULL.

This will allow the ftrace_regs to hold enough just to get the parameters
and stack pointer, but without the worry that callbacks may have a pt_regs
that is not completely filled.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-11-13 12:14:55 -05:00
Steven Rostedt (VMware) 773c167050 ftrace: Add recording of functions that caused recursion
This adds CONFIG_FTRACE_RECORD_RECURSION that will record to a file
"recursed_functions" all the functions that caused recursion while a
callback to the function tracer was running.

Link: https://lkml.kernel.org/r/20201106023548.102375687@goodmis.org

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Guo Ren <guoren@kernel.org>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-csky@vger.kernel.org
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: live-patching@vger.kernel.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-11-06 08:42:26 -05:00
Steven Rostedt (VMware) 4b750b573c livepatch: Trigger WARNING if livepatch function fails due to recursion
If for some reason a function is called that triggers the recursion
detection of live patching, trigger a warning. By not executing the live
patch code, it is possible that the old unpatched function will be called,
placing the system into an unknown state.

Link: https://lore.kernel.org/r/20201029145709.GD16774@alley
Link: https://lkml.kernel.org/r/20201106023547.312639435@goodmis.org

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: live-patching@vger.kernel.org
Suggested-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-11-06 08:41:47 -05:00
Steven Rostedt (VMware) 13f3ea9a2c livepatch/ftrace: Add recursion protection to the ftrace callback
If a ftrace callback does not supply its own recursion protection and
does not set the RECURSION_SAFE flag in its ftrace_ops, then ftrace will
make a helper trampoline to do so before calling the callback instead of
just calling the callback directly.

The default for ftrace_ops is going to change. It will expect that handlers
provide their own recursion protection, unless its ftrace_ops states
otherwise.

Link: https://lkml.kernel.org/r/20201028115613.291169246@goodmis.org
Link: https://lkml.kernel.org/r/20201106023547.122802424@goodmis.org

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: live-patching@vger.kernel.org
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-11-06 08:36:23 -05:00
Randy Dunlap 7b7b8a2c95 kernel/: fix repeated words in comments
Fix multiple occurrences of duplicated words in kernel/.

Fix one typo/spello on the same line as a duplicate word.  Change one
instance of "the the" to "that the".  Otherwise just drop one of the
repeated words.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.kernel.org/r/98202fa6-8919-ef63-9efe-c0fad5ca7af1@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:19 -07:00
Samuel Zou a4ae16f65c livepatch: Make klp_apply_object_relocs static
Fix the following sparse warning:

kernel/livepatch/core.c:748:5: warning: symbol 'klp_apply_object_relocs' was
not declared.

klp_apply_object_relocs() has only one call site, within core.c; it
should therefore be static.

Fixes: 7c8e2bdd5f ("livepatch: Apply vmlinux-specific KLP relocations early")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Samuel Zou <zou_wei@huawei.com>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-05-11 00:31:38 +02:00
Josh Poimboeuf 5b384f9335 x86/module: Use text_mutex in apply_relocate_add()
Now that the livepatch code no longer needs the text_mutex for changing
module permissions, move its usage down to apply_relocate_add().

Note the s390 version of apply_relocate_add() doesn't need to use the
text_mutex because it already uses s390_kernel_write_lock, which
accomplishes the same task.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-05-08 00:12:43 +02:00
Josh Poimboeuf d556e1be33 livepatch: Remove module_disable_ro() usage
With arch_klp_init_object_loaded() gone, and apply_relocate_add() now
using text_poke(), livepatch no longer needs to use module_disable_ro().

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-05-08 00:12:43 +02:00
Josh Poimboeuf ca376a9374 livepatch: Prevent module-specific KLP rela sections from referencing vmlinux symbols
Prevent module-specific KLP rela sections from referencing vmlinux
symbols.  This helps prevent ordering issues with module special section
initializations.  Presumably such symbols are exported and normal relas
can be used instead.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-05-08 00:12:42 +02:00
Peter Zijlstra 1d05334d28 livepatch: Remove .klp.arch
After the previous patch, vmlinux-specific KLP relocations are now
applied early during KLP module load.  This means that .klp.arch
sections are no longer needed for *vmlinux-specific* KLP relocations.

One might think they're still needed for *module-specific* KLP
relocations.  If a to-be-patched module is loaded *after* its
corresponding KLP module is loaded, any corresponding KLP relocations
will be delayed until the to-be-patched module is loaded.  If any
special sections (.parainstructions, for example) rely on those
relocations, their initializations (apply_paravirt) need to be done
afterwards.  Thus the apparent need for arch_klp_init_object_loaded()
and its corresponding .klp.arch sections -- it allows some of the
special section initializations to be done at a later time.

But... if you look closer, that dependency between the special sections
and the module-specific KLP relocations doesn't actually exist in
reality.  Looking at the contents of the .altinstructions and
.parainstructions sections, there's not a realistic scenario in which a
KLP module's .altinstructions or .parainstructions section needs to
access a symbol in a to-be-patched module.  It might need to access a
local symbol or even a vmlinux symbol; but not another module's symbol.
When a special section needs to reference a local or vmlinux symbol, a
normal rela can be used instead of a KLP rela.

Since the special section initializations don't actually have any real
dependency on module-specific KLP relocations, .klp.arch and
arch_klp_init_object_loaded() no longer have a reason to exist.  So
remove them.

As Peter said much more succinctly:

  So the reason for .klp.arch was that .klp.rela.* stuff would overwrite
  paravirt instructions. If that happens you're doing it wrong. Those
  RELAs are core kernel, not module, and thus should've happened in
  .rela.* sections at patch-module loading time.

  Reverting this removes the two apply_{paravirt,alternatives}() calls
  from the late patching path, and means we don't have to worry about
  them when removing module_disable_ro().

[ jpoimboe: Rewrote patch description.  Tweaked klp_init_object_loaded()
	    error path. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-05-08 00:12:42 +02:00
Josh Poimboeuf 7c8e2bdd5f livepatch: Apply vmlinux-specific KLP relocations early
KLP relocations are livepatch-specific relocations which are applied to
a KLP module's text or data.  They exist for two reasons:

  1) Unexported symbols: replacement functions often need to access
     unexported symbols (e.g. static functions), which "normal"
     relocations don't allow.

  2) Late module patching: this is the ability for a KLP module to
     bypass normal module dependencies, such that the KLP module can be
     loaded *before* a to-be-patched module.  This means that
     relocations which need to access symbols in the to-be-patched
     module might need to be applied to the KLP module well after it has
     been loaded.

Non-late-patched KLP relocations are applied from the KLP module's init
function.  That usually works fine, unless the patched code wants to use
alternatives, paravirt patching, jump tables, or some other special
section which needs relocations.  Then we run into ordering issues and
crashes.

In order for those special sections to work properly, the KLP
relocations should be applied *before* the special section init code
runs, such as apply_paravirt(), apply_alternatives(), or
jump_label_apply_nops().

You might think the obvious solution would be to move the KLP relocation
initialization earlier, but it's not necessarily that simple.  The
problem is the above-mentioned late module patching, for which KLP
relocations can get applied well after the KLP module is loaded.

To "fix" this issue in the past, we created .klp.arch sections:

  .klp.arch.{module}..altinstructions
  .klp.arch.{module}..parainstructions

Those sections allow KLP late module patching code to call
apply_paravirt() and apply_alternatives() after the module-specific KLP
relocations (.klp.rela.{module}.{section}) have been applied.

But that has a lot of drawbacks, including code complexity, the need for
arch-specific code, and the (per-arch) danger that we missed some
special section -- for example the __jump_table section which is used
for jump labels.

It turns out there's a simpler and more functional approach.  There are
two kinds of KLP relocation sections:

  1) vmlinux-specific KLP relocation sections

     .klp.rela.vmlinux.{sec}

     These are relocations (applied to the KLP module) which reference
     unexported vmlinux symbols.

  2) module-specific KLP relocation sections

     .klp.rela.{module}.{sec}:

     These are relocations (applied to the KLP module) which reference
     unexported or exported module symbols.

Up until now, these have been treated the same.  However, they're
inherently different.

Because of late module patching, module-specific KLP relocations can be
applied very late, thus they can create the ordering headaches described
above.

But vmlinux-specific KLP relocations don't have that problem.  There's
nothing to prevent them from being applied earlier.  So apply them at
the same time as normal relocations, when the KLP module is being
loaded.

This means that for vmlinux-specific KLP relocations, we no longer have
any ordering issues.  vmlinux-referencing jump labels, alternatives, and
paravirt patching will work automatically, without the need for the
.klp.arch hacks.

All that said, for module-specific KLP relocations, the ordering
problems still exist and we *do* still need .klp.arch.  Or do we?  Stay
tuned.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-05-08 00:12:42 +02:00
Josh Poimboeuf dcf550e52f livepatch: Disallow vmlinux.ko
This is purely a theoretical issue, but if there were a module named
vmlinux.ko, the livepatch relocation code wouldn't be able to
distinguish between vmlinux-specific and vmlinux.o-specific KLP
relocations.

If CONFIG_LIVEPATCH is enabled, don't allow a module named vmlinux.ko.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2020-05-08 00:12:42 +02:00
Linus Torvalds 95f1fa9e34 New tracing features:
- PERMANENT flag to ftrace_ops when attaching a callback to a function
    As /proc/sys/kernel/ftrace_enabled when set to zero will disable all
    attached callbacks in ftrace, this has a detrimental impact on live
    kernel patching, as it disables all that it patched. If a ftrace_ops
    is registered to ftrace with the PERMANENT flag set, it will prevent
    ftrace_enabled from being disabled, and if ftrace_enabled is already
    disabled, it will prevent a ftrace_ops with the PERMANENT flag set from
    being registered.
 
  - New register_ftrace_direct(). As eBPF would like to register its own
    trampolines to be called by the ftrace nop locations directly,
    without going through the ftrace trampoline, this function has been
    added. This allows for eBPF trampolines to live along side of
    ftrace, perf, kprobe and live patching. It also utilizes the ftrace
    enabled_functions file that keeps track of functions that have been
    modified in the kernel, to allow for security auditing.
 
  - Allow for kernel internal use of ftrace instances. Subsystems in
    the kernel can now create and destroy their own tracing instances
    which allows them to have their own tracing buffer, and be able
    to record events without worrying about other users writing over
    their data.
 
  - New seq_buf_hex_dump() that lets users use the hex_dump() in their
    seq_buf usage.
 
  - Notifications now added to tracing_max_latency to allow user space
    to know when a new max latency is hit by one of the latency tracers.
 
  - More widespread use of generic compare operations for use by bsearch and
    friends.
 
  - More synthetic event fields may be defined (32 up from 16)
 
  - Use of xarray for architectures with sparse system calls, for the
    system call trace events.
 
 This along with small clean ups and fixes.

Merge tag 'trace-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "New tracing features:

   - New PERMANENT flag to ftrace_ops when attaching a callback to a
     function.

     As /proc/sys/kernel/ftrace_enabled when set to zero will disable
     all attached callbacks in ftrace, this has a detrimental impact on
     live kernel tracing, as it disables all that it patched. If a
     ftrace_ops is registered to ftrace with the PERMANENT flag set, it
     will prevent ftrace_enabled from being disabled, and if
     ftrace_enabled is already disabled, it will prevent a ftrace_ops
     with PERMANENT flag set from being registered.

   - New register_ftrace_direct().

     As eBPF would like to register its own trampolines to be called by
     the ftrace nop locations directly, without going through the ftrace
     trampoline, this function has been added. This allows eBPF
     trampolines to live alongside ftrace, perf, kprobe and live
     patching. It also utilizes the ftrace enabled_functions file that
     keeps track of functions that have been modified in the kernel, to
     allow for security auditing.

   - Allow for kernel internal use of ftrace instances.

     Subsystems in the kernel can now create and destroy their own
     tracing instances, which allows them to have their own tracing
     buffer and record events without worrying about other users
     writing over their data.

   - New seq_buf_hex_dump() that lets users use the hex_dump() in their
     seq_buf usage.

   - Notifications now added to tracing_max_latency to allow user space
     to know when a new max latency is hit by one of the latency
     tracers.

   - Wider spread use of generic compare operations for use of bsearch
     and friends.

   - More synthetic event fields may be defined (32 up from 16)

   - Use of xarray for architectures with sparse system calls, for the
     system call trace events.

  This along with small clean ups and fixes"
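The PERMANENT-flag rules described in the first bullet can be sketched as a small userspace model; ftrace_enabled, register_ops() and set_ftrace_enabled() below are hypothetical stand-ins for the kernel's internals, not the real ftrace API:

```c
/*
 * Illustrative userspace model of the PERMANENT-flag behaviour described
 * above. ftrace_enabled, register_ops() and set_ftrace_enabled() are
 * hypothetical stand-ins, not the kernel's real API.
 */
#include <stdbool.h>

static bool ftrace_enabled = true; /* models /proc/sys/kernel/ftrace_enabled */
static int permanent_ops_count;    /* ops currently registered with PERMANENT */

/* A PERMANENT ops may not be registered while tracing is globally off. */
static int register_ops(bool permanent)
{
	if (permanent && !ftrace_enabled)
		return -1;
	if (permanent)
		permanent_ops_count++;
	return 0;
}

/* ftrace_enabled may not be cleared while a PERMANENT ops is registered. */
static int set_ftrace_enabled(bool val)
{
	if (!val && permanent_ops_count > 0)
		return -1;
	ftrace_enabled = val;
	return 0;
}
```

The point of the two checks mirroring each other is that a live patch can never be silently disabled: whichever side (sysctl write or ops registration) comes second is the one that fails.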

* tag 'trace-v5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (51 commits)
  tracing: Enable syscall optimization for MIPS
  tracing: Use xarray for syscall trace events
  tracing: Sample module to demonstrate kernel access to Ftrace instances.
  tracing: Adding new functions for kernel access to Ftrace instances
  tracing: Fix Kconfig indentation
  ring-buffer: Fix typos in function ring_buffer_producer
  ftrace: Use BIT() macro
  ftrace: Return ENOTSUPP when DYNAMIC_FTRACE_WITH_DIRECT_CALLS is not configured
  ftrace: Rename ftrace_graph_stub to ftrace_stub_graph
  ftrace: Add a helper function to modify_ftrace_direct() to allow arch optimization
  ftrace: Add helper find_direct_entry() to consolidate code
  ftrace: Add another check for match in register_ftrace_direct()
  ftrace: Fix accounting bug with direct->count in register_ftrace_direct()
  ftrace/selftests: Fix spelling mistake "wakeing" -> "waking"
  tracing: Increase SYNTH_FIELDS_MAX for synthetic_events
  ftrace/samples: Add a sample module that implements modify_ftrace_direct()
  ftrace: Add modify_ftrace_direct()
  tracing: Add missing "inline" in stub function of latency_fsnotify()
  tracing: Remove stray tab in TRACE_EVAL_MAP_FILE's help text
  tracing: Use seq_buf_hex_dump() to dump buffers
  ...
2019-11-27 11:42:01 -08:00
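The seq_buf_hex_dump() addition in the shortlog writes print_hex_dump()-style rows into a seq_buf. As a rough userspace sketch of that row format only (hex_dump_row() is a hypothetical helper written for illustration, not the kernel function):

```c
/*
 * Userspace sketch of one print_hex_dump()-style row, the format that
 * seq_buf_hex_dump() emits into a seq_buf. hex_dump_row() is a
 * hypothetical helper for illustration, not the kernel API.
 */
#include <stdio.h>

/* Format up to 16 bytes as hex columns followed by an ASCII column. */
static void hex_dump_row(const unsigned char *buf, size_t len, char *out)
{
	char *p = out;
	size_t i;

	for (i = 0; i < 16; i++) {
		if (i < len)
			p += sprintf(p, "%02x ", buf[i]);
		else
			p += sprintf(p, "   "); /* pad so the ASCII column aligns */
	}
	p += sprintf(p, "|");
	for (i = 0; i < len && i < 16; i++)
		p += sprintf(p, "%c",
			     (buf[i] >= 0x20 && buf[i] < 0x7f) ? buf[i] : '.');
	sprintf(p, "|");
}
```

For the three bytes "ABC" this produces "41 42 43" padded to 16 hex columns followed by "|ABC|", the same general shape the kernel helper appends to a seq_buf.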