Commit Graph

302 Commits

Čestmír Kalina c4ea29f891 lockdep: Fix lockdep_set_notrack_class() for CONFIG_LOCK_STAT
JIRA: https://issues.redhat.com/browse/RHEL-60306

commit ff9bf4b34104955017822e9bc42aeeb526ee2a80
Author: Kent Overstreet <kent.overstreet@linux.dev>
Date: Tue, 30 Jul 2024 21:14:08 -0400

    We won't find a contended lock if it's not being tracked.

    Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

Signed-off-by: Čestmír Kalina <ckalina@redhat.com>
2024-12-18 17:06:50 +01:00
Čestmír Kalina e7b2c43fa4 lockdep: lockdep_set_notrack_class()
JIRA: https://issues.redhat.com/browse/RHEL-60306

Conflicts: omitted the bcachefs-specific part.

commit 1a616c2fe96b357894b74b41787d4ea6987f6199
Author: Kent Overstreet <kent.overstreet@linux.dev>
Date: Thu, 21 Dec 2023 20:34:17 -0500

    Add a new helper to disable lockdep tracking entirely for a given class.

    This is needed for bcachefs, which takes too many btree node locks for
    lockdep to track. Instead, we have a single lockdep_map for "btree_trans
    has any btree nodes locked", which makes more sense given that we have
    centralized lock management and a cycle detector.

    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Waiman Long <longman@redhat.com>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

Signed-off-by: Čestmír Kalina <ckalina@redhat.com>
2024-12-18 17:01:23 +01:00
Waiman Long 4b9e87be94 lockdep: fix static memory detection even more
JIRA: https://issues.redhat.com/browse/RHEL-35759

commit 0a6b58c5cd0dfd7961e725212f0fc8dfc5d96195
Author: Helge Deller <deller@gmx.de>
Date:   Tue, 15 Aug 2023 00:31:09 +0200

    lockdep: fix static memory detection even more

    On the parisc architecture, lockdep reports for all static objects which
    are in the __initdata section (e.g. "setup_done" in devtmpfs,
    "kthreadd_done" in init/main.c) this warning:

            INFO: trying to register non-static key.

    The warning itself is wrong, because those objects are in the __initdata
    section, but the section itself is on parisc outside of range from
    _stext to _end, which is why the static_obj() function returns a wrong
    answer.

    While fixing this issue, I noticed that the whole existing check can
    be simplified a lot.
    Instead of checking against the _stext and _end symbols (which include
    code areas too) just check for the .data and .bss segments (since we check a
    data object). This can be done with the existing is_kernel_core_data()
    macro.

    In addition objects in the __initdata section can be checked with
    init_section_contains(), and is_kernel_rodata() allows keys to be in the
    _ro_after_init section.

    This partly reverts and simplifies commit bac59d18c7 ("x86/setup: Fix static
    memory detection").

    Link: https://lkml.kernel.org/r/ZNqrLRaOi/3wPAdp@p100
    Fixes: bac59d18c7 ("x86/setup: Fix static memory detection")
    Signed-off-by: Helge Deller <deller@gmx.de>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Guenter Roeck <linux@roeck-us.net>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: "Rafael J. Wysocki" <rafael@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Waiman Long <longman@redhat.com>
2024-05-22 19:52:14 -04:00
Waiman Long 813b918a53 lockdep: Add lock_set_cmp_fn() annotation
JIRA: https://issues.redhat.com/browse/RHEL-35759

commit eb1cfd09f788e39948a82be8063e54e40dd018d9
Author: Kent Overstreet <kent.overstreet@linux.dev>
Date:   Tue, 9 May 2023 15:58:46 -0400

    lockdep: Add lock_set_cmp_fn() annotation

    This implements a new interface to lockdep, lock_set_cmp_fn(), for
    defining a custom ordering when taking multiple locks of the same
    class.

    This is an alternative to subclasses, but cannot fully replace them
    since subclasses allow lock hierarchies with other classes
    intertwined, while this relies on pure class nesting.

    Specifically, if A is our nesting class then:

      A/0 <- B <- A/1

    Would be a valid lock order with subclasses (each subclass really is a
    full class from the validation PoV) but not with this annotation,
    which requires all nesting to be consecutive.

    Example output:

    | ============================================
    | WARNING: possible recursive locking detected
    | 6.2.0-rc8-00003-g7d81e591ca6a-dirty #15 Not tainted
    | --------------------------------------------
    | kworker/14:3/938 is trying to acquire lock:
    | ffff8880143218c8 (&b->lock l=0 0:2803368){++++}-{3:3}, at: bch_btree_node_get.part.0+0x81/0x2b0
    |
    | but task is already holding lock:
    | ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0
    | and the lock comparison function returns 1:
    |
    | other info that might help us debug this:
    |  Possible unsafe locking scenario:
    |
    |        CPU0
    |        ----
    |   lock(&b->lock l=1 1048575:9223372036854775807);
    |   lock(&b->lock l=0 0:2803368);
    |
    |  *** DEADLOCK ***
    |
    |  May be due to missing lock nesting notation
    |
    | 3 locks held by kworker/14:3/938:
    |  #0: ffff888005ea9d38 ((wq_completion)bcache){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
    |  #1: ffff8880098c3e70 ((work_completion)(&cl->work)#3){+.+.}-{0:0}, at: process_one_work+0x1ec/0x530
    |  #2: ffff8880143de8c8 (&b->lock l=1 1048575:9223372036854775807){++++}-{3:3}, at: __bch_btree_map_nodes+0xea/0x1e0

    [peterz: extended changelog]
    Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20230509195847.1745548-1-kent.overstreet@linux.dev

Signed-off-by: Waiman Long <longman@redhat.com>
2024-05-22 19:52:13 -04:00
Andrew Halaney d5f68d4bb5 lockdep: Mark emergency section in lockdep splats
JIRA: https://issues.redhat.com/browse/RHEL-3987
Upstream Status: https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git/

commit d6d66f8f158bef221213adb096d969c3fbae02e3
Author: John Ogness <john.ogness@linutronix.de>
Date:   Mon Sep 18 20:27:41 2023 +0000

    lockdep: Mark emergency section in lockdep splats

    Mark an emergency section within print_usage_bug(), where
    lockdep bugs are printed. In this section, the CPU will not
    perform console output for the printk() calls. Instead, a
    flushing of the console output will be triggered when exiting
    the emergency section.

    Signed-off-by: John Ogness <john.ogness@linutronix.de>
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

Signed-off-by: Andrew Halaney <ahalaney@redhat.com>
2024-05-09 11:26:26 -04:00
Joel Savitz df99a9ef52 lockdep: Fix block chain corruption
JIRA: https://issues.redhat.com/browse/RHEL-5226

commit bca4104b00fec60be330cd32818dd5c70db3d469
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Nov 21 12:41:26 2023 +0100

    lockdep: Fix block chain corruption

    Kent reported an occasional KASAN splat in lockdep. Mark then noted:

    > I suspect the dodgy access is to chain_block_buckets[-1], which hits the last 4
    > bytes of the redzone and gets (incorrectly/misleadingly) attributed to
    > nr_large_chain_blocks.

    That would mean @size == 0, at which point size_to_bucket() returns -1
    and the above happens.

    alloc_chain_hlocks() has 'size - req', for the first with the
    precondition 'size >= req', which allows the 0.

    This code is trying to split a block, del_chain_block() takes what we
    need, and add_chain_block() puts back the remainder, except in the
    above case the remainder is 0 sized and things go sideways.

    Fixes: 810507fe6f ("locking/lockdep: Reuse freed chain_hlocks entries")
    Reported-by: Kent Overstreet <kent.overstreet@linux.dev>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Tested-by: Kent Overstreet <kent.overstreet@linux.dev>
    Link: https://lkml.kernel.org/r/20231121114126.GH8262@noisy.programming.kicks-ass.net

Signed-off-by: Joel Savitz <jsavitz@redhat.com>
2024-01-15 10:10:44 -05:00
Joel Savitz e6cce444fa debugobjects,locking: Annotate debug_object_fill_pool() wait type violation
JIRA: https://issues.redhat.com/browse/RHEL-5226

commit 0cce06ba859a515bd06224085d3addb870608b6d
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Apr 25 17:03:13 2023 +0200

    debugobjects,locking: Annotate debug_object_fill_pool() wait type violation

    There is an explicit wait-type violation in debug_object_fill_pool()
    for PREEMPT_RT=n kernels which allows them to more easily fill the
    object pool and reduce the chance of allocation failures.

    Lockdep's wait-type checks are designed to check the PREEMPT_RT
    locking rules even for PREEMPT_RT=n kernels and object to this, so
    create a lockdep annotation to allow this to stand.

    Specifically, create a 'lock' type that overrides the inner wait-type
    while it is held -- allowing one to temporarily raise it, such that
    the violation is hidden.

    Reported-by: Vlastimil Babka <vbabka@suse.cz>
    Reported-by: Qi Zheng <zhengqi.arch@bytedance.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Tested-by: Qi Zheng <zhengqi.arch@bytedance.com>
    Link: https://lkml.kernel.org/r/20230429100614.GA1489784@hirez.programming.kicks-ass.net

Signed-off-by: Joel Savitz <jsavitz@redhat.com>
2024-01-15 10:10:43 -05:00
Waiman Long f2c20bf128 locking/lockdep: Improve the deadlock scenario print for sync and read lock
JIRA: https://issues.redhat.com/browse/RHEL-5228

commit 0471db447cb7de56bbe2fedd9256b4d2b8ef642a
Author: Boqun Feng <boqun.feng@gmail.com>
Date:   Fri, 13 Jan 2023 15:57:22 -0800

    locking/lockdep: Improve the deadlock scenario print for sync and read lock

    Lock scenario print is always a weak spot of lockdep splats. Improvement
    can be made if we rework the dependency search and the error printing.

    However without touching the graph search, we can improve a little for
    the circular deadlock case, since we have the to-be-added lock
    dependency, and know whether these two locks are read/write/sync.

    In order to know whether a held_lock is sync or not, a bit was
    "stolen" from ->references, which reduces our limit for the same lock
    class nesting from 2^12 to 2^11, and it should still be good enough.

    Besides, since we now have a bit in held_lock for sync, we don't need
    the "hardirqoffs being 1" trick, and also we can avoid the
    __lock_release() if we jump out of __lock_acquire() before the
    held_lock is stored.

    With these changes, a deadlock case evolved with read lock and sync gets
    a better print-out from:

            [...]  Possible unsafe locking scenario:
            [...]
            [...]        CPU0                    CPU1
            [...]        ----                    ----
            [...]   lock(srcuA);
            [...]                                lock(srcuB);
            [...]                                lock(srcuA);
            [...]   lock(srcuB);

    to

            [...]  Possible unsafe locking scenario:
            [...]
            [...]        CPU0                    CPU1
            [...]        ----                    ----
            [...]   rlock(srcuA);
            [...]                                lock(srcuB);
            [...]                                lock(srcuA);
            [...]   sync(srcuB);

    Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

Signed-off-by: Waiman Long <longman@redhat.com>
2023-09-22 13:21:47 -04:00
Waiman Long 6330a4cf83 locking/lockdep: Introduce lock_sync()
JIRA: https://issues.redhat.com/browse/RHEL-5228

commit 2f1f043e7bea3fbf4c1869df2f7a0312bc8ca2bf
Author: Boqun Feng <boqun.feng@gmail.com>
Date:   Thu, 12 Jan 2023 22:59:53 -0800

    locking/lockdep: Introduce lock_sync()

    Currently, functions like synchronize_srcu() do not have lockdep
    annotations resembling those of other write-side locking primitives.
    Such annotations might look as follows:

            lock_acquire();
            lock_release();

    Such annotations would tell lockdep that synchronize_srcu() acts like
    an empty critical section that waits for other (read-side) critical
    sections to finish.  This would definitely catch some deadlock, but
    as pointed out by Paul McKenney [1], this could also introduce false
    positives because of irq-safe/unsafe detection.  Of course, there are
    tricks that could help with this:

            might_sleep(); // Existing statement in __synchronize_srcu().
            if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
                    local_irq_disable();
                    lock_acquire();
                    lock_release();
                    local_irq_enable();
            }

    But it would be better for lockdep to provide a separate annotation for
    functions like synchronize_srcu(), so that people won't need to repeat
    the ugly tricks above.

    Therefore introduce lock_sync(), which is simply a lock+unlock
    pair with no irq safe/unsafe deadlock check.  This works because the
    to-be-annotated functions do not create real critical sections, and
    there is therefore no way that irq can create extra dependencies.

    [1]: https://lore.kernel.org/lkml/20180412021233.ewncg5jjuzjw3x62@tardis/

    Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
    Acked-by: Waiman Long <longman@redhat.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    [ boqun: Fix typos reported by Davidlohr Bueso and Paul E. Mckenney ]
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Boqun Feng <boqun.feng@gmail.com>

Signed-off-by: Waiman Long <longman@redhat.com>
2023-09-22 13:21:45 -04:00
Jan Stancek c3b7a93516 Merge: Update locking code to upstream 6.1 + follow up fixes
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2146

```
Omitted-fix: 807ff7ed34d2 ("futex: add missing rtmutex.h include")
	part of patchset not being backported: https://patchwork.freedesktop.org/series/102340/
	tiny change to locking code is only incidental

Omitted-fix: f5d39b020809 ("freezer,sched: Rewrite core freezer logic")
	changes to futex subsystem are only incidental to larger rewrite of freezer logic

Omitted-fix: e67198cc05b8 ("context_tracking: Take idle eqs entrypoints over RCU")
	part of a larger patchset implementing RCU context tracking,
	http://kerneloscope.usersys.redhat.com/series/08ab707dfc83d6ab7829c1c0f39b0d4530fa42a8/

Omitted-fix: 79dbd006a6d6 ("kmsan: disable instrumentation of unsupported common kernel code")
	part of a larger patchset implementing KMSAN
	http://kerneloscope.usersys.redhat.com/series/4ca8cc8d1bbe582bfc7a4d80bd72cfd8d3d0e2e8/

Omitted-fix: 81895a65ec63 ("treewide: use prandom_u32_max() when possible, part 1")
	treewide changes to random number generation
	part of patchset: http://kerneloscope.usersys.redhat.com/series/8b3ccbc1f1f91847160951aa15dd27c22dddcb49/

Arnd Bergmann (4):
  futex: Remove futex_cmpxchg detection
  futex: Ensure futex_atomic_cmpxchg_inatomic() is present
  futex: Fix sparc32/m68k/nds32 build regression
  futex: Fix additional regressions

Guo Jin (1):
  locking: Fix qspinlock/x86 inline asm error

Joel Savitz (1):
  Revert "locking/rwsem: Conditionally wake waiters in reader/writer
    slowpaths"

Mathieu Desnoyers (1):
  futex: Fix futex_waitv() hrtimer debug object leak on kcalloc error

Namhyung Kim (2):
  locking: Apply contention tracepoints in the slow path
  locking: Add __lockfunc to slow path functions

Peter Zijlstra (1):
  locking/mutex: Make contention tracepoints more consistent wrt
    adaptive spinning

Sebastian Andrzej Siewior (1):
  futex: Remove a PREEMPT_RT_FULL reference.

Tetsuo Handa (1):
  locking/lockdep: Print more debug information - report name and key
    when look_up_lock_class() got confused

Waiman Long (9):
  locking/rwsem: Make handoff bit handling more consistent
  locking/rwsem: Conditionally wake waiters in reader/writer slowpaths
  locking/rwsem: No need to check for handoff bit if wait queue empty
  locking/rwsem: Always try to wake waiters in out_nolock path
  locking/qrwlock: Change "queue rwlock" to "queued rwlock"
  locking/rwsem: Allow slowpath writer to ignore handoff bit if not set
    by first waiter
  locking/rwsem: Prevent non-first waiter from spinning in down_write()
    slowpath
  locking/rwsem: Disable preemption in all down_read*() and up_read()
    code paths
  locking/rwsem: Disable preemption in all down_write*() and up_write()
    code paths

Wander Lairson Costa (1):
  rtmutex: Ensure that the top waiter is always woken up

Xiu Jianfeng (1):
  lockdep: Use memset_startat() helper in reinit_class()

tangmeng (1):
  kernel/lockdep: move lockdep sysctls to its own file

 arch/arc/Kconfig                          |   1 -
 arch/arm64/Kconfig                        |   1 -
 arch/csky/Kconfig                         |   1 -
 arch/m68k/Kconfig                         |   1 -
 arch/mips/include/asm/futex.h             |  27 +-
 arch/riscv/Kconfig                        |   1 -
 arch/s390/Kconfig                         |   1 -
 arch/sh/Kconfig                           |   1 -
 arch/um/Kconfig                           |   1 -
 arch/um/kernel/skas/uaccess.c             |   1 -
 arch/x86/include/asm/qspinlock_paravirt.h |  13 +-
 arch/xtensa/Kconfig                       |   1 -
 arch/xtensa/include/asm/futex.h           |   8 +-
 include/asm-generic/futex.h               |  31 +--
 include/asm-generic/qrwlock.h             |  28 +-
 include/asm-generic/qrwlock_types.h       |   2 +-
 include/linux/lockdep.h                   |   4 -
 include/trace/events/lock.h               |   4 +-
 init/Kconfig                              |   9 +-
 kernel/futex/core.c                       |  35 ---
 kernel/futex/futex.h                      |   6 -
 kernel/futex/pi.c                         |   2 +-
 kernel/futex/syscalls.c                   |  33 +--
 kernel/locking/lockdep.c                  |  46 +++-
 kernel/locking/mutex.c                    |  15 +-
 kernel/locking/percpu-rwsem.c             |   5 +
 kernel/locking/qrwlock.c                  |  21 +-
 kernel/locking/qspinlock.c                |   7 +-
 kernel/locking/qspinlock_paravirt.h       |   4 +-
 kernel/locking/rtmutex.c                  |  16 +-
 kernel/locking/rwbase_rt.c                |   7 +
 kernel/locking/rwsem.c                    | 311 +++++++++++++---------
 kernel/locking/semaphore.c                |  15 +-
 kernel/sysctl.c                           |  21 --
 34 files changed, 366 insertions(+), 314 deletions(-)

--
2.31.1
```

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2176147
Signed-off-by: Joel Savitz <jsavitz@redhat.com>

Approved-by: Waiman Long <longman@redhat.com>
Approved-by: Artem Savkov <asavkov@redhat.com>
Approved-by: Phil Auld <pauld@redhat.com>
Approved-by: Prarit Bhargava <prarit@redhat.com>

Signed-off-by: Jan Stancek <jstancek@redhat.com>
2023-04-16 14:59:54 +02:00
Waiman Long 33208366c8 cpuidle: lib/bug: Disable rcu_is_watching() during WARN/BUG
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2169516
Conflicts: A merge conflict in adding include file
	   <linux/context_tracking.h> to kernel/panic.c due to missing
	   upstream commit 23b36fec7e14 ("panic: use error_report_end
	   tracepoint on warnings") and commit 8b05aa263361 ("panic:
	   Expose "warn_count" to sysfs").

commit 5a5d7e9badd2cb8065db171961bd30bd3595e4b6
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Thu, 26 Jan 2023 16:08:31 +0100

    cpuidle: lib/bug: Disable rcu_is_watching() during WARN/BUG

    In order to avoid WARN/BUG from generating nested or even recursive
    warnings, force rcu_is_watching() true during
    WARN/lockdep_rcu_suspicious().

    Notably things like unwinding the stack can trigger rcu_dereference()
    warnings, which then triggers more unwinding which then triggers more
    warnings etc..

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lore.kernel.org/r/20230126151323.408156109@infradead.org

Signed-off-by: Waiman Long <longman@redhat.com>
2023-03-30 08:48:17 -04:00
Waiman Long 034dc8d70a context_tracking: Take idle eqs entrypoints over RCU
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2169516

commit e67198cc05b8ecbb7b8e2d8ef9fb5c8d26821873
Author: Frederic Weisbecker <frederic@kernel.org>
Date:   Wed, 8 Jun 2022 16:40:25 +0200

    context_tracking: Take idle eqs entrypoints over RCU

    The RCU dynticks counter is going to be merged into the context tracking
    subsystem. Start with moving the idle extended quiescent states
    entrypoints to context tracking. For now those are dumb redirections to
    existing RCU calls.

    [ paulmck: Apply kernel test robot feedback. ]

    Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
    Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
    Cc: Joel Fernandes <joel@joelfernandes.org>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Nicolas Saenz Julienne <nsaenz@kernel.org>
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com>
    Cc: Yu Liao <liaoyu15@huawei.com>
    Cc: Phil Auld <pauld@redhat.com>
    Cc: Paul Gortmaker<paul.gortmaker@windriver.com>
    Cc: Alex Belits <abelits@marvell.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
    Tested-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>

Signed-off-by: Waiman Long <longman@redhat.com>
2023-03-30 08:36:16 -04:00
Joel Savitz 10dd83e9a0 locking/lockdep: Print more debug information - report name and key when look_up_lock_class() got confused
commit 76e64c73db9542ff4bae8a60f4f32e38f3799b95
Author: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date:   Mon Sep 19 09:52:13 2022 +0900

    locking/lockdep: Print more debug information - report name and key when look_up_lock_class() got confused

    Printing this information will be helpful:

      ------------[ cut here ]------------
      Looking for class "l2tp_sock" with key l2tp_socket_class, but found a different class "slock-AF_INET6" with the same key
      WARNING: CPU: 1 PID: 14195 at kernel/locking/lockdep.c:940 look_up_lock_class+0xcc/0x140
      Modules linked in:
      CPU: 1 PID: 14195 Comm: a.out Not tainted 6.0.0-rc6-dirty #863
      Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
      RIP: 0010:look_up_lock_class+0xcc/0x140

    Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lore.kernel.org/r/bd99391e-f787-efe9-5ec6-3c6dc4c587b0@I-love.SAKURA.ne.jp

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2176147
Signed-off-by: Joel Savitz <jsavitz@redhat.com>
2023-03-07 15:26:28 -05:00
Joel Savitz 1ac1f27d39 kernel/lockdep: move lockdep sysctls to its own file
conflict in kernel/sysctl.c
	detail: some conditional includes remain in the context due to the commits
		removing them not being backported to c9s
	action: remove relevant section and maintain context

commit f79c9b8ae8bde10126586c1bb55b5fd027276d8e
Author: tangmeng <tangmeng@uniontech.com>
Date:   Fri Feb 18 18:58:57 2022 +0800

    kernel/lockdep: move lockdep sysctls to its own file

    kernel/sysctl.c is a kitchen sink where everyone leaves their dirty
    dishes, this makes it very difficult to maintain.

    To help with this maintenance let's start by moving sysctls to places
    where they actually belong.  The proc sysctl maintainers do not want to
    know what sysctl knobs you wish to add for your own piece of code, we
    just care about the core logic.

    All filesystem syctls now get reviewed by fs folks. This commit
    follows the commit of fs, move the prove_locking and lock_stat sysctls
    to its own file, kernel/lockdep.c.

    Signed-off-by: tangmeng <tangmeng@uniontech.com>
    Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2176147
Signed-off-by: Joel Savitz <jsavitz@redhat.com>
2023-03-07 15:26:28 -05:00
Joel Savitz cd8d597167 lockdep: Use memset_startat() helper in reinit_class()
commit e204193b138af347fbbbe026e68cb3385112f387
Author: Xiu Jianfeng <xiujianfeng@huawei.com>
Date:   Mon Dec 13 21:26:18 2021 +0800

    lockdep: Use memset_startat() helper in reinit_class()

    Use the memset_startat() helper to simplify the code; there is no
    functional change in this patch.

    Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20211213132618.105737-1-xiujianfeng@huawei.com

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2176147
Signed-off-by: Joel Savitz <jsavitz@redhat.com>
2023-03-07 15:26:27 -05:00
Waiman Long 93a7f45356 locking/lockdep: Fix lockdep_init_map_*() confusion
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141431

commit eae6d58d67d9739be5f7ae2dbead1d0ef6528243
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Fri, 17 Jun 2022 15:26:06 +0200

    locking/lockdep: Fix lockdep_init_map_*() confusion

    Commit dfd5e3f5fe ("locking/lockdep: Mark local_lock_t") added yet
    another lockdep_init_map_*() variant, but forgot to update all the
    existing users of the most complicated version.

    This could lead to a loss of lock_type and hence an incorrect report.
    Given the relative rarity of both local_lock and these annotations,
    this is unlikely to happen in practise, still, best fix things.

    Fixes: dfd5e3f5fe ("locking/lockdep: Mark local_lock_t")
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/YqyEDtoan20K0CVD@worktop.programming.kicks-ass.net

Signed-off-by: Waiman Long <longman@redhat.com>
2022-11-10 11:38:07 -05:00
Waiman Long 1d2e54472b locking/lockdep: Use sched_clock() for random numbers
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141431

commit 4051a81774d6d8e28192742c26999d6f29bc0e68
Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date:   Tue, 17 May 2022 11:16:14 +0200

    locking/lockdep: Use sched_clock() for random numbers

    Since the rewrite of prandom_u32() in the commit mentioned below, the
    function uses sleeping locks while extracting random numbers and
    filling the batch.
    This breaks lockdep on PREEMPT_RT because lock_pin_lock() disables
    interrupts while calling __lock_pin_lock(). This can't be moved earlier
    because the main user of the function (rq_pin_lock()) invokes that
    function after disabling interrupts in order to acquire the lock.

    The cookie does not require random numbers as its goal is to provide a
    random value in order to notice unexpected "unlock + lock" sites.

    Use sched_clock() to provide random numbers.

    Fixes: a0103f4d86f88 ("random32: use real rng for non-deterministic randomness")
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/YoNn3pTkm5+QzE5k@linutronix.de

Signed-off-by: Waiman Long <longman@redhat.com>
2022-11-10 11:38:07 -05:00
Waiman Long b54489e6b0 locking: Add lock contention tracepoints
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141431

commit 16edd9b511a13e7760ed4b92ba4e39bacda5c86f
Author: Namhyung Kim <namhyung@kernel.org>
Date:   Tue, 22 Mar 2022 11:57:08 -0700

    locking: Add lock contention tracepoints

    This adds two new lock contention tracepoints like below:

     * lock:contention_begin
     * lock:contention_end

    The lock:contention_begin takes a flags argument to classify locks.  I
    found it useful to identify what kind of locks it's tracing like if
    it's spinning or sleeping, reader-writer lock, real-time, and per-cpu.

    Move tracepoint definitions into mutex.c so that we can use them
    without lockdep.

    Signed-off-by: Namhyung Kim <namhyung@kernel.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
    Link: https://lkml.kernel.org/r/20220322185709.141236-2-namhyung@kernel.org

Signed-off-by: Waiman Long <longman@redhat.com>
2022-11-10 11:38:06 -05:00
Waiman Long fa072c44f8 lockdep: Fix -Wunused-parameter for _THIS_IP_
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141431

commit 8b023accc8df70e72f7704d29fead7ca914d6837
Author: Nick Desaulniers <ndesaulniers@google.com>
Date:   Mon, 14 Mar 2022 15:19:03 -0700

    lockdep: Fix -Wunused-parameter for _THIS_IP_

    While looking into a bug related to the compiler's handling of addresses
    of labels, I noticed some uses of _THIS_IP_ seemed unused in lockdep.
    Drive-by cleanup.

    -Wunused-parameter:
    kernel/locking/lockdep.c:1383:22: warning: unused parameter 'ip'
    kernel/locking/lockdep.c:4246:48: warning: unused parameter 'ip'
    kernel/locking/lockdep.c:4844:19: warning: unused parameter 'ip'

    Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Waiman Long <longman@redhat.com>
    Link: https://lore.kernel.org/r/20220314221909.2027027-1-ndesaulniers@google.com

Signed-off-by: Waiman Long <longman@redhat.com>
2022-11-10 11:38:05 -05:00
Waiman Long ed55911d2e locking/lockdep: Iterate lock_classes directly when reading lockdep files
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit fb7275acd6fb988313dddd8d3d19efa70d9015ad
Author: Waiman Long <longman@redhat.com>
Date:   Thu, 10 Feb 2022 22:55:26 -0500

    locking/lockdep: Iterate lock_classes directly when reading lockdep files

    When dumping lock_classes information via /proc/lockdep, we can't take
    the lockdep lock as the lock hold time is indeterminate. Iterating
    over all_lock_classes without holding the lock can be dangerous as
    there is a slight chance that it may branch off to other lists leading
    to an infinite loop or even access of invalid memory if changes are
    made to the all_lock_classes list in parallel.

    To avoid this problem, iteration of lock classes is now done directly
    on the lock_classes array itself. The lock_classes_in_use bitmap is
    checked to see if the lock class is being used. To avoid iterating
    the full array all the time, a new max_lock_class_idx value is added
    to track the maximum lock_class index that is currently being used.

    We can theoretically take the lockdep lock for iterating all_lock_classes
    when other lockdep files (lockdep_stats and lock_stat) are accessed as
    the lock hold time will be shorter for them. For consistency, they are
    also modified to iterate the lock_classes array directly.

    Signed-off-by: Waiman Long <longman@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20220211035526.1329503-2-longman@redhat.com

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:34:11 -04:00
Waiman Long e9c41c8c15 lockdep: Correct lock_classes index mapping
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit 28df029d53a2fd80c1b8674d47895648ad26dcfb
Author: Cheng Jui Wang <cheng-jui.wang@mediatek.com>
Date:   Thu, 10 Feb 2022 18:50:11 +0800

    lockdep: Correct lock_classes index mapping

    A kernel exception was hit when trying to dump /proc/lockdep_chains after
    lockdep report "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!":

    Unable to handle kernel paging request at virtual address 00054005450e05c3
    ...
    00054005450e05c3] address between user and kernel address ranges
    ...
    pc : [0xffffffece769b3a8] string+0x50/0x10c
    lr : [0xffffffece769ac88] vsnprintf+0x468/0x69c
    ...
     Call trace:
      string+0x50/0x10c
      vsnprintf+0x468/0x69c
      seq_printf+0x8c/0xd8
      print_name+0x64/0xf4
      lc_show+0xb8/0x128
      seq_read_iter+0x3cc/0x5fc
      proc_reg_read_iter+0xdc/0x1d4

    The cause of the problem is that lock_chain_get_class() shifts the
    lock_classes index by 1, but the index no longer needs to be shifted
    since commit 01bb6f0af9 ("locking/lockdep: Change the range
    of class_idx in held_lock struct") already changed the index to start
    from 0.

    lock_classes[-1] is located in the chain_hlocks array. When
    lock_classes[-1] is printed after the chain_hlocks entries are
    modified, the exception happens.

    The output of lockdep_chains is incorrect due to this problem too.

    Fixes: f611e8cf98 ("lockdep: Take read/write status in consideration when generate chainkey")
    Signed-off-by: Cheng Jui Wang <cheng-jui.wang@mediatek.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
    Link: https://lore.kernel.org/r/20220210105011.21712-1-cheng-jui.wang@mediatek.com

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:34:10 -04:00
Waiman Long 6fc1ca52c2 locking/lockdep: Avoid potential access of invalid memory in lock_class
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit 61cc4534b6550997c97a03759ab46b29d44c0017
Author: Waiman Long <longman@redhat.com>
Date:   Sun, 2 Jan 2022 21:35:58 -0500

    locking/lockdep: Avoid potential access of invalid memory in lock_class

    It was found that reading /proc/lockdep after a lockdep splat may
    potentially cause an access to freed memory if lockdep_unregister_key()
    is called after the splat but before access to /proc/lockdep [1]. This
    is due to the fact that graph_lock() call in lockdep_unregister_key()
    fails after the clearing of debug_locks by the splat process.

    After lockdep_unregister_key() is called, the lock_name may be freed
    but the corresponding lock_class structure still has a reference to
    it. That invalid memory pointer will then be accessed when /proc/lockdep
    is read by a user, and a use-after-free (UAF) error will be reported if
    KASAN is enabled.

    To fix this problem, lockdep_unregister_key() is now modified to always
    search for a matching key irrespective of the debug_locks state and
    zap the corresponding lock class if a matching one is found.

    [1] https://lore.kernel.org/lkml/77f05c15-81b6-bddd-9650-80d5f23fe330@i-love.sakura.ne.jp/

    Fixes: 8b39adbee8 ("locking/lockdep: Make lockdep_unregister_key() honor 'debug_locks' again")
    Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
    Signed-off-by: Waiman Long <longman@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Bart Van Assche <bvanassche@acm.org>
    Link: https://lkml.kernel.org/r/20220103023558.1377055-1-longman@redhat.com

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:34:05 -04:00
Waiman Long 322ceeb2a9 lockdep: Remove softirq accounting on PREEMPT_RT.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit 0c1d7a2c2d32fac7ff4a644724b2d52a64184645
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Mon, 29 Nov 2021 18:46:48 +0100

    lockdep: Remove softirq accounting on PREEMPT_RT.

    There is not really a softirq context on PREEMPT_RT.  Softirqs on
    PREEMPT_RT are always invoked within the context of a threaded
    interrupt handler or within ksoftirqd. The "in-softirq" context is
    preemptible and is protected by a per-CPU lock to ensure mutual
    exclusion.

    There is no difference on PREEMPT_RT between spin_lock_irq() and
    spin_lock() because the former does not disable interrupts. Therefore
    if a lock is used in_softirq() and locked once with spin_lock_irq()
    then lockdep will report this with "inconsistent {SOFTIRQ-ON-W} ->
    {IN-SOFTIRQ-W} usage".

    Teach lockdep that we don't really do softirqs on -RT.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20211129174654.668506-6-bigeasy@linutronix.de

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:33:54 -04:00
Waiman Long 7d55bcfd60 kallsyms: remove arch specific text and data check
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit 1b1ad288b8f1b11f83396e537003722897ecc12b
Author: Kefeng Wang <wangkefeng.wang@huawei.com>
Date:   Mon, 8 Nov 2021 18:33:43 -0800

    kallsyms: remove arch specific text and data check

    Patch series "sections: Unify kernel sections range check and use", v4.

    There are three header files (kallsyms.h, kernel.h and sections.h) that
    include kernel section range checks; let's clean them up and unify
    them.

    1. cleanup arch specific text/data check and fix address boundary check
       in kallsyms.h

    2. make all the basic/core kernel range check function into sections.h

    3. update all the callers, and use the helper in sections.h to simplify
       the code

    After this series, we have 5 APIs for kernel section range checks in
    sections.h:

     * is_kernel_rodata()           --- already in sections.h
     * is_kernel_core_data()        --- come from core_kernel_data() in kernel.h
     * is_kernel_inittext()         --- come from kernel.h and kallsyms.h
     * __is_kernel_text()           --- add new internal helper
     * __is_kernel()                --- add new internal helper

    Note: The last two helpers should not be used directly; consider
          using the corresponding functions in kallsyms.h.

    This patch (of 11):

    Remove the arch-specific text and data checks after commit 4ba66a9760
    ("arch: remove blackfin port"); they are no longer needed.

    Link: https://lkml.kernel.org/r/20210930071143.63410-1-wangkefeng.wang@huawei.com
    Link: https://lkml.kernel.org/r/20210930071143.63410-2-wangkefeng.wang@huawei.com
    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: David S. Miller <davem@davemloft.net>
    Cc: Alexei Starovoitov <ast@kernel.org>
    Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
    Cc: Alexander Potapenko <glider@google.com>
    Cc: Andrey Konovalov <andreyknvl@gmail.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Dmitry Vyukov <dvyukov@google.com>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Petr Mladek <pmladek@suse.com>
    Cc: Richard Henderson <rth@twiddle.net>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:33:14 -04:00
Waiman Long afc0c75818 mm: make generic arch_is_kernel_initmem_freed() do what it says
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit e5ae3728327fda5209454d738eebdb20443fdfac
Author: Christophe Leroy <christophe.leroy@csgroup.eu>
Date:   Fri, 5 Nov 2021 13:40:43 -0700

    mm: make generic arch_is_kernel_initmem_freed() do what it says

    Commit 7a5da02de8 ("locking/lockdep: check for freed initmem in
    static_obj()") added arch_is_kernel_initmem_freed() which is supposed to
    report whether an object is part of already freed init memory.

    For the time being, the generic version of
    arch_is_kernel_initmem_freed() always reports 'false', although
    free_initmem() is generically called on all architectures.

    Therefore, change the generic version of arch_is_kernel_initmem_freed()
    to check whether free_initmem() has been called.  If so, then check if a
    given address falls into init memory.

    To ease the use of system_state, move it out of line into its only
    caller, which is lockdep.c.

    Link: https://lkml.kernel.org/r/1d40783e676e07858be97d881f449ee7ea8adfb1.1633001016.git.christophe.leroy@csgroup.eu
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
    Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Heiko Carstens <hca@linux.ibm.com>
    Cc: Paul Mackerras <paulus@ozlabs.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:33:13 -04:00
Waiman Long 71ff543f2c lockdep: Let lock_is_held_type() detect recursive read as read
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit 2507003a1d10917c9158077bf6030719d02c941e
Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date:   Fri, 3 Sep 2021 10:40:01 +0200

    lockdep: Let lock_is_held_type() detect recursive read as read

    lock_is_held_type(, 1) detects acquired read locks, but it only
    recognized locks acquired with lock_acquire_shared(). Read locks
    acquired with lock_acquire_shared_recursive() are not recognized
    because a `2' is stored as the read value.

    Rework the check to additionally recognise read values of one and two
    as a held read lock.

    Fixes: e918188611 ("locking: More accurate annotations for read_lock()")
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Boqun Feng <boqun.feng@gmail.com>
    Acked-by: Waiman Long <longman@redhat.com>
    Link: https://lkml.kernel.org/r/20210903084001.lblecrvz4esl4mrr@linutronix.de

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:30:12 -04:00
Waiman Long f4a02df5b1 lockdep: Improve comments in wait-type checks
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit a2e05ddda11b0bd529f443df9089ab498b2c2642
Author: Zhouyi Zhou <zhouzhouyi@gmail.com>
Date:   Wed, 11 Aug 2021 10:59:20 +0800

    lockdep: Improve comments in wait-type checks

    Improve the comments in the wait-type checks by mentioning the
    PREEMPT_RT kernel configuration option.

    Signed-off-by: Zhouyi Zhou <zhouzhouyi@gmail.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Paul E. McKenney <paulmck@kernel.org>
    Link: https://lkml.kernel.org/r/20210811025920.20751-1-zhouzhouyi@gmail.com

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:30:06 -04:00
Waiman Long b5c87ef19c locking/lockdep: Avoid RCU-induced noinstr fail
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit ce0b9c805dd66d5e49fd53ec5415ae398f4c56e6
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Thu, 24 Jun 2021 11:41:10 +0200

    locking/lockdep: Avoid RCU-induced noinstr fail

    vmlinux.o: warning: objtool: look_up_lock_class()+0xc7: call to rcu_read_lock_any_held() leaves .noinstr.text section

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210624095148.311980536@infradead.org

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:30:06 -04:00
Linus Torvalds 28e92f9903 Merge branch 'core-rcu-2021.07.04' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu
Pull RCU updates from Paul McKenney:

 - Bitmap parsing support for "all" as an alias for all bits

 - Documentation updates

 - Miscellaneous fixes, including some that overlap into mm and lockdep

 - kvfree_rcu() updates

 - mem_dump_obj() updates, with acks from one of the slab-allocator
   maintainers

 - RCU NOCB CPU updates, including limited deoffloading

 - SRCU updates

 - Tasks-RCU updates

 - Torture-test updates

* 'core-rcu-2021.07.04' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (78 commits)
  tasks-rcu: Make show_rcu_tasks_gp_kthreads() be static inline
  rcu-tasks: Make ksoftirqd provide RCU Tasks quiescent states
  rcu: Add missing __releases() annotation
  rcu: Remove obsolete rcu_read_unlock() deadlock commentary
  rcu: Improve comments describing RCU read-side critical sections
  rcu: Create an unrcu_pointer() to remove __rcu from a pointer
  srcu: Early test SRCU polling start
  rcu: Fix various typos in comments
  rcu/nocb: Unify timers
  rcu/nocb: Prepare for fine-grained deferred wakeup
  rcu/nocb: Only cancel nocb timer if not polling
  rcu/nocb: Delete bypass_timer upon nocb_gp wakeup
  rcu/nocb: Cancel nocb_timer upon nocb_gp wakeup
  rcu/nocb: Allow de-offloading rdp leader
  rcu/nocb: Directly call __wake_nocb_gp() from bypass timer
  rcu: Don't penalize priority boosting when there is nothing to boost
  rcu: Point to documentation of ordering guarantees
  rcu: Make rcu_gp_cleanup() be noinline for tracing
  rcu: Restrict RCU_STRICT_GRACE_PERIOD to at most four CPUs
  rcu: Make show_rcu_gp_kthreads() dump rcu_node structures blocking GP
  ...
2021-07-04 12:58:33 -07:00
Linus Torvalds 54a728dc5e Scheduler updates for this cycle:
- Changes to core scheduling facilities:
 
     - Add "Core Scheduling" via CONFIG_SCHED_CORE=y, which enables
       coordinated scheduling across SMT siblings. This is a much
       requested feature for cloud computing platforms, to allow
       the flexible utilization of SMT siblings, without exposing
       untrusted domains to information leaks & side channels, plus
       to ensure more deterministic computing performance on SMT
       systems used by heterogeneous workloads.
 
       There's new prctls to set core scheduling groups, which
       allows more flexible management of workloads that can share
       siblings.
 
     - Fix task->state access anti-patterns that may result in missed
       wakeups and rename it to ->__state in the process to catch new
       abuses.
 
  - Load-balancing changes:
 
      - Tweak newidle_balance for fair-sched, to improve
        'memcache'-like workloads.
 
      - "Age" (decay) average idle time, to better track & improve workloads
        such as 'tbench'.
 
      - Fix & improve energy-aware (EAS) balancing logic & metrics.
 
      - Fix & improve the uclamp metrics.
 
      - Fix task migration (taskset) corner case on !CONFIG_CPUSET.
 
      - Fix RT and deadline utilization tracking across policy changes
 
      - Introduce a "burstable" CFS controller via cgroups, which allows
        bursty CPU-bound workloads to borrow a bit against their future
        quota to improve overall latencies & batching. Can be tweaked
        via /sys/fs/cgroup/cpu/<X>/cpu.cfs_burst_us.
 
  - Rework asymmetric topology/capacity detection & handling.
 
  - Scheduler statistics & tooling:
 
      - Disable delayacct by default, but add a sysctl to enable
        it at runtime if tooling needs it. Use static keys and
        other optimizations to make it more palatable.
 
      - Use sched_clock() in delayacct, instead of ktime_get_ns().
 
  - Misc cleanups and fixes.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmDZcPoRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1g3yw//WfhIqy7Psa9d/MBMjQDRGbTuO4+w22Dj
 vmWFU44Q4KJxQHWeIgUlrK+dzvYWvNmflUs2CUUOiDVzxFTHMIyBtL4qCBUbx4Ns
 vKAcB9wsWZge2o3WzZqpProRhdoRaSKw8egUr2q7rACVBkckY7eGP/OjWxXU8BdA
 b7D0LPWwuIBFfN4pFYeCDLn32Dqr9s6Chyj+ZecabdG7EE6Gu+f1diVcxy7JE/mc
 4WWL0D1RqdgpGrBEuMJIxPYekdrZiuy4jtEbztz5gbTBteN1cj3BLfqn0Pc/e6rO
 Vyuc5mXCAmzRVi18z6g6bsVl+IA/nrbErENB2OHOhOYtqiZxqGTd4GPWZszMyY17
 5AsEO5+5pcaBsy4gyp09qURggBu9zhJnMVmOI3rIHZkmkhwzc6uUJlyhDCTiFWOz
 3ZF3LjbZEyCKodMD8qMHbs3axIBpIfZqjzkvSKyFnvfXEGVytVse7NUuWtQ36u92
 GnURxVeYY1TDVXvE1Y8owNKMxknKQ6YRlypP7Dtbeo/qG6hShp0xmS7qDLDi0ybZ
 ZlK+bDECiVoDf3nvJo+8v5M82IJ3CBt4UYldeRJsa1YCK/FsbK8tp91fkEfnXVue
 +U6LPX0AmMpXacR5HaZfb3uBIKRw/QMdP/7RFtBPhpV6jqCrEmuqHnpPQiEVtxwO
 UmG7bt94Trk=
 =3VDr
 -----END PGP SIGNATURE-----

Merge tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler updates from Ingo Molnar:

 - Changes to core scheduling facilities:

    - Add "Core Scheduling" via CONFIG_SCHED_CORE=y, which enables
      coordinated scheduling across SMT siblings. This is a much
      requested feature for cloud computing platforms, to allow the
      flexible utilization of SMT siblings, without exposing untrusted
      domains to information leaks & side channels, plus to ensure more
      deterministic computing performance on SMT systems used by
      heterogeneous workloads.

      There are new prctls to set core scheduling groups, which allows
      more flexible management of workloads that can share siblings.

    - Fix task->state access anti-patterns that may result in missed
      wakeups and rename it to ->__state in the process to catch new
      abuses.

 - Load-balancing changes:

    - Tweak newidle_balance for fair-sched, to improve 'memcache'-like
      workloads.

    - "Age" (decay) average idle time, to better track & improve
      workloads such as 'tbench'.

    - Fix & improve energy-aware (EAS) balancing logic & metrics.

    - Fix & improve the uclamp metrics.

    - Fix task migration (taskset) corner case on !CONFIG_CPUSET.

    - Fix RT and deadline utilization tracking across policy changes

    - Introduce a "burstable" CFS controller via cgroups, which allows
      bursty CPU-bound workloads to borrow a bit against their future
      quota to improve overall latencies & batching. Can be tweaked via
      /sys/fs/cgroup/cpu/<X>/cpu.cfs_burst_us.

    - Rework asymmetric topology/capacity detection & handling.

 - Scheduler statistics & tooling:

    - Disable delayacct by default, but add a sysctl to enable it at
      runtime if tooling needs it. Use static keys and other
      optimizations to make it more palatable.

    - Use sched_clock() in delayacct, instead of ktime_get_ns().

 - Misc cleanups and fixes.

* tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
  sched/doc: Update the CPU capacity asymmetry bits
  sched/topology: Rework CPU capacity asymmetry detection
  sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
  psi: Fix race between psi_trigger_create/destroy
  sched/fair: Introduce the burstable CFS controller
  sched/uclamp: Fix uclamp_tg_restrict()
  sched/rt: Fix Deadline utilization tracking during policy change
  sched/rt: Fix RT utilization tracking during policy change
  sched: Change task_struct::state
  sched,arch: Remove unused TASK_STATE offsets
  sched,timer: Use __set_current_state()
  sched: Add get_current_state()
  sched,perf,kvm: Fix preemption condition
  sched: Introduce task_is_running()
  sched: Unbreak wakeups
  sched/fair: Age the average idle time
  sched/cpufreq: Consider reduced CPU capacity in energy calculation
  sched/fair: Take thermal pressure into account while estimating energy
  thermal/cpufreq_cooling: Update offline CPUs per-cpu thermal_pressure
  sched/fair: Return early from update_tg_cfs_load() if delta == 0
  ...
2021-06-28 12:14:19 -07:00
Linus Torvalds a15286c63d Locking changes for this cycle:
- Core locking & atomics:
 
      - Convert all architectures to ARCH_ATOMIC: move every
        architecture to ARCH_ATOMIC, then get rid of ARCH_ATOMIC
        and all the transitory facilities and #ifdefs.
 
        Much reduction in complexity from that series:
 
            63 files changed, 756 insertions(+), 4094 deletions(-)
 
      - Self-test enhancements
 
  - Futexes:
 
      - Add the new FUTEX_LOCK_PI2 ABI, which is a variant that
        doesn't set FLAGS_CLOCKRT (i.e. uses CLOCK_MONOTONIC).
 
        [ The temptation to repurpose FUTEX_LOCK_PI's implicit
          setting of FLAGS_CLOCKRT & invert the flag's meaning
          to avoid having to introduce a new variant was
          resisted successfully. ]
 
      - Enhance futex self-tests
 
  - Lockdep:
 
      - Fix dependency path printouts
      - Optimize trace saving
      - Broaden & fix wait-context checks
 
  - Misc cleanups and fixes.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmDZaEYRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1hPdxAAiNCsxL6X1cZ8zqbWsvLefT9Zqhzgs5u6
 gdZele7PNibvbYdON26b5RUzuKfOW/hgyX6LKqr+AiNYTT9PGhcY+tycUr2PGk5R
 LMyhJWmmX5cUVPU92ky+z5hEHB2gr4XPJcvgpKKUL0XB1tBaSvy2DtgwPuhXOoT1
 1sCQfy63t71snt2RfEnibVW6xovwaA2lsqL81lLHJN4iRFWvqO498/m4+PWkylsm
 ig/+VT1Oz7t4wqu3NhTqNNZv+4K4W2asniyo53Dg2BnRm/NjhJtgg4jRibrb0ssb
 67Xdq6y8+xNBmEAKj+Re8VpMcu4aj346Ctk7d4gst2ah/Rc0TvqfH6mezH7oq7RL
 hmOrMBWtwQfKhEE/fDkng30nrVxc/98YXP0n2rCCa0ySsaF6b6T185mTcYDRDxFs
 BVNS58ub+zxrF9Zd4nhIHKaEHiL2ZdDimqAicXN0RpywjIzTQ/y11uU7I1WBsKkq
 WkPYs+FPHnX7aBv1MsuxHhb8sUXjG924K4JeqnjF45jC3sC1crX+N0jv4wHw+89V
 h4k20s2Tw6m5XGXlgGwMJh0PCcD6X22Vd9Uyw8zb+IJfvNTGR9Rp1Ec+1gMRSll+
 xsn6G6Uy9bcNU0SqKlBSfelweGKn4ZxbEPn76Jc8KWLiepuZ6vv5PBoOuaujWht9
 KAeOC5XdjMk=
 =tH//
 -----END PGP SIGNATURE-----

Merge tag 'locking-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:

 - Core locking & atomics:

     - Convert all architectures to ARCH_ATOMIC: move every architecture
       to ARCH_ATOMIC, then get rid of ARCH_ATOMIC and all the
       transitory facilities and #ifdefs.

       Much reduction in complexity from that series:

           63 files changed, 756 insertions(+), 4094 deletions(-)

     - Self-test enhancements

 - Futexes:

     - Add the new FUTEX_LOCK_PI2 ABI, which is a variant that doesn't
       set FLAGS_CLOCKRT (i.e. uses CLOCK_MONOTONIC).

       [ The temptation to repurpose FUTEX_LOCK_PI's implicit setting of
         FLAGS_CLOCKRT & invert the flag's meaning to avoid having to
         introduce a new variant was resisted successfully. ]

     - Enhance futex self-tests

 - Lockdep:

     - Fix dependency path printouts

     - Optimize trace saving

     - Broaden & fix wait-context checks

 - Misc cleanups and fixes.

* tag 'locking-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (52 commits)
  locking/lockdep: Correct the description error for check_redundant()
  futex: Provide FUTEX_LOCK_PI2 to support clock selection
  futex: Prepare futex_lock_pi() for runtime clock selection
  lockdep/selftest: Remove wait-type RCU_CALLBACK tests
  lockdep/selftests: Fix selftests vs PROVE_RAW_LOCK_NESTING
  lockdep: Fix wait-type for empty stack
  locking/selftests: Add a selftest for check_irq_usage()
  locking/lockdep: Avoid to find wrong lock dep path in check_irq_usage()
  locking/lockdep: Remove the unnecessary trace saving
  locking/lockdep: Fix the dep path printing for backwards BFS
  selftests: futex: Add futex compare requeue test
  selftests: futex: Add futex wait test
  seqlock: Remove trailing semicolon in macros
  locking/lockdep: Reduce LOCKDEP dependency list
  locking/lockdep,doc: Improve readability of the block matrix
  locking/atomics: atomic-instrumented: simplify ifdeffery
  locking/atomic: delete !ARCH_ATOMIC remnants
  locking/atomic: xtensa: move to ARCH_ATOMIC
  locking/atomic: sparc: move to ARCH_ATOMIC
  locking/atomic: sh: move to ARCH_ATOMIC
  ...
2021-06-28 11:45:29 -07:00
Xiongwei Song 0e8a89d49d locking/lockdep: Correct the description error for check_redundant()
If there is no matched result, check_redundant() will return BFS_RNOMATCH.

Signed-off-by: Xiongwei Song <sxwjean@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lkml.kernel.org/r/20210618130230.123249-1-sxwjean@me.com
2021-06-22 16:42:09 +02:00
Peter Zijlstra f8b298cc39 lockdep: Fix wait-type for empty stack
Even the very first lock can violate the wait-context check; consider
the various IRQ contexts.

Fixes: de8f5e4f2d ("lockdep: Introduce wait-type checks")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Joerg Roedel <jroedel@suse.de>
Link: https://lore.kernel.org/r/20210617190313.256987481@infradead.org
2021-06-22 16:42:08 +02:00
Boqun Feng 7b1f8c6179 locking/lockdep: Avoid to find wrong lock dep path in check_irq_usage()
In step #3 of check_irq_usage(), we search backwards to find a lock
whose usage conflicts with the usage of @target_entry1 on safe/unsafe.
However, we should only take the irq-unsafe usage of @target_entry1 into
consideration, because there could be a case where a lock is
hardirq-unsafe but softirq-safe: check_irq_usage() finds it because its
hardirq-unsafe usage could result in a hardirq-safe-unsafe deadlock, but
since we currently don't filter out the other usage bits, we may find
a lock dependency path softirq-unsafe -> softirq-safe, which in fact
doesn't cause a deadlock. This may produce misleading lockdep splats.

Fix this by only keeping LOCKF_ENABLED_IRQ_ALL bits when we try the
backwards search.

Reported-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210618170110.3699115-4-boqun.feng@gmail.com
2021-06-22 16:42:07 +02:00
Boqun Feng d4c157c7b1 locking/lockdep: Remove the unnecessary trace saving
In print_bad_irq_dependency(), save_trace() is called to set the ->trace
of @prev_root to the current call trace. However, @prev_root corresponds
to the held lock, which may not have been acquired in the current call
trace, so it's wrong to use save_trace() to set the ->trace of
@prev_root. Moreover, with our adjustment to the printing of the
backwards dependency path, the ->trace of @prev_root is unnecessary, so
remove it.

Reported-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210618170110.3699115-3-boqun.feng@gmail.com
2021-06-22 16:42:07 +02:00
Boqun Feng 69c7a5fb24 locking/lockdep: Fix the dep path printing for backwards BFS
We use the same code to print the backwards lock dependency path as the
forwards lock dependency path, and this can result in incorrect
printing because, for a backwards lock_list, ->trace is not the call
trace where the lock of ->class is acquired.

Fix this by introducing a separate function for printing the backwards
dependency path. Also add a few comments about the printing while we are
at it.

Reported-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210618170110.3699115-2-boqun.feng@gmail.com
2021-06-22 16:42:06 +02:00
Peter Zijlstra 49faa77759 locking/lockdep: Improve noinstr vs errors
Better handle the failure paths.

  vmlinux.o: warning: objtool: debug_locks_off()+0x23: call to console_verbose() leaves .noinstr.text section
  vmlinux.o: warning: objtool: debug_locks_off()+0x19: call to __kasan_check_write() leaves .noinstr.text section

  debug_locks_off+0x19/0x40:
  instrument_atomic_write at include/linux/instrumented.h:86
  (inlined by) __debug_locks_off at include/linux/debug_locks.h:17
  (inlined by) debug_locks_off at lib/debug_locks.c:41

Fixes: 6eebad1ad3 ("lockdep: __always_inline more for noinstr")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210621120120.784404944@infradead.org
2021-06-22 13:56:43 +02:00
Peter Zijlstra b03fbd4ff2 sched: Introduce task_is_running()
Replace a bunch of 'p->state == TASK_RUNNING' with a new helper:
task_is_running(p).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org
2021-06-18 11:43:07 +02:00
Leo Yan 89e70d5c58 locking/lockdep: Correct calling tracepoints
Commit eb1f00237a ("lockdep,trace: Expose tracepoints") reverses the
tracepoints for lock_contended() and lock_acquired(), so the ftrace
log shows the wrong locking sequence, with the "acquired" event prior
to the "contended" event:

  <idle>-0       [001] d.s3 20803.501685: lock_acquire: 0000000008b91ab4 &sg_policy->update_lock
  <idle>-0       [001] d.s3 20803.501686: lock_acquired: 0000000008b91ab4 &sg_policy->update_lock
  <idle>-0       [001] d.s3 20803.501689: lock_contended: 0000000008b91ab4 &sg_policy->update_lock
  <idle>-0       [001] d.s3 20803.501690: lock_release: 0000000008b91ab4 &sg_policy->update_lock

This patch fixes calling tracepoints for lock_contended() and
lock_acquired().

Fixes: eb1f00237a ("lockdep,trace: Expose tracepoints")
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210512120937.90211-1-leo.yan@linaro.org
2021-05-18 12:53:50 +02:00
Paul E. McKenney 1feb2cc8db lockdep: Explicitly flag likely false-positive report
The reason that lockdep_rcu_suspicious() prints the value of debug_locks
is that a value of zero indicates a likely false positive.  This can
work, but is a bit obtuse.  This commit therefore explicitly calls out
the possibility of a false positive.

Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-10 16:22:54 -07:00
Linus Torvalds 0ff0edb550 Locking changes for this cycle were:
- rtmutex cleanup & spring cleaning pass that removes ~400 lines of code
  - Futex simplifications & cleanups
  - Add debugging to the CSD code, to help track down a tenacious race (or hw problem)
  - Add lockdep_assert_not_held(), to allow code to require a lock to not be held,
    and propagate this into the ath10k driver
  - Misc LKMM documentation updates
  - Misc KCSAN updates: cleanups & documentation updates
  - Misc fixes and cleanups
  - Fix locktorture bugs with ww_mutexes
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmCJDn0RHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1hPrRAAryS4zPnuDsfkVk0smxo7a0lK5ljbH2Xo
 28QUZXOl6upnEV8dzbjwG7eAjt5ZJVI5tKIeG0PV0NUJH2nsyHwESdtULGGYuPf/
 4YUzNwZJa+nI/jeBnVsXCimLVxxnNCRdR7yOVOHm4ukEwa+YTNt1pvlYRmUd4YyH
 Q5cCrpb3THvLka3AAamEbqnHnAdGxHKuuHYVRkODpMQ+zrQvtN8antYsuk8kJsqM
 m+GZg/dVCuLEPah5k+lOACtcq/w7HCmTlxS8t4XLvD52jywFZLcCPvi1rk0+JR+k
 Vd9TngC09GJ4jXuDpr42YKkU9/X6qy2Es39iA/ozCvc1Alrhspx/59XmaVSuWQGo
 XYuEPx38Yuo/6w16haSgp0k4WSay15A4uhCTQ75VF4vli8Bqgg9PaxLyQH1uG8e2
 xk8U90R7bDzLlhKYIx1Vu5Z0t7A1JtB5CJtgpcfg/zQLlzygo75fHzdAiU5fDBDm
 3QQXSU2Oqzt7c5ZypioHWazARk7tL6th38KGN1gZDTm5zwifpaCtHi7sml6hhZ/4
 ATH6zEPzIbXJL2UqumSli6H4ye5ORNjOu32r7YPqLI4IDbzpssfoSwfKYlQG4Tvn
 4H1Ukirzni0gz5+wbleItzf2aeo1rocs4YQTnaT02j8NmUHUz4AzOHGOQFr5Tvh0
 wk/P4MIoSb0=
 =cOOk
 -----END PGP SIGNATURE-----

Merge tag 'locking-core-2021-04-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:

 - rtmutex cleanup & spring cleaning pass that removes ~400 lines of
   code

 - Futex simplifications & cleanups

 - Add debugging to the CSD code, to help track down a tenacious race
   (or hw problem)

 - Add lockdep_assert_not_held(), to allow code to require a lock to not
   be held, and propagate this into the ath10k driver

 - Misc LKMM documentation updates

 - Misc KCSAN updates: cleanups & documentation updates

 - Misc fixes and cleanups

 - Fix locktorture bugs with ww_mutexes

* tag 'locking-core-2021-04-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (44 commits)
  kcsan: Fix printk format string
  static_call: Relax static_call_update() function argument type
  static_call: Fix unused variable warn w/o MODULE
  locking/rtmutex: Clean up signal handling in __rt_mutex_slowlock()
  locking/rtmutex: Restrict the trylock WARN_ON() to debug
  locking/rtmutex: Fix misleading comment in rt_mutex_postunlock()
  locking/rtmutex: Consolidate the fast/slowpath invocation
  locking/rtmutex: Make text section and inlining consistent
  locking/rtmutex: Move debug functions as inlines into common header
  locking/rtmutex: Decrapify __rt_mutex_init()
  locking/rtmutex: Remove pointless CONFIG_RT_MUTEXES=n stubs
  locking/rtmutex: Inline chainwalk depth check
  locking/rtmutex: Move rt_mutex_debug_task_free() to rtmutex.c
  locking/rtmutex: Remove empty and unused debug stubs
  locking/rtmutex: Consolidate rt_mutex_init()
  locking/rtmutex: Remove output from deadlock detector
  locking/rtmutex: Remove rtmutex deadlock tester leftovers
  locking/rtmutex: Remove rt_mutex_timed_lock()
  MAINTAINERS: Add myself as futex reviewer
  locking/mutex: Remove repeated declaration
  ...
2021-04-28 12:37:53 -07:00
Linus Torvalds ffc766b31e This is an irregular pull request for sending a lockdep patch.
Peter Zijlstra asked us to find the bad annotation that blows up the
 lockdep storage [1][2][3], but we could not find such an annotation
 [4][5], and Peter can no longer give us feedback [6]. Since we tested
 this patch on linux-next.git without problems, and since keeping this
 problem unresolved discourages kernel testing (which is more painful),
 I'm sending this patch without waiting forever for a response from Peter.
 
 [1] https://lkml.kernel.org/r/20200916115057.GO2674@hirez.programming.kicks-ass.net
 [2] https://lkml.kernel.org/r/20201118142357.GW3121392@hirez.programming.kicks-ass.net
 [3] https://lkml.kernel.org/r/20201118151038.GX3121392@hirez.programming.kicks-ass.net
 [4] https://lkml.kernel.org/r/CACT4Y+asqRbjaN9ras=P5DcxKgzsnV0fvV0tYb2VkT+P00pFvQ@mail.gmail.com
 [5] https://lkml.kernel.org/r/4b89985e-99f9-18bc-0bf1-c883127dc70c@i-love.sakura.ne.jp
 [6] https://lkml.kernel.org/r/CACT4Y+YnHFV1p5mbhby2nyOaNTy8c_yoVk86z5avo14KWs0s1A@mail.gmail.com
 
  kernel/locking/lockdep.c           |    2 -
  kernel/locking/lockdep_internals.h |    8 +++----
  lib/Kconfig.debug                  |   40 +++++++++++++++++++++++++++++++++++++
  3 files changed, 45 insertions(+), 5 deletions(-)

Merge tag 'tomoyo-pr-20210426' of git://git.osdn.net/gitroot/tomoyo/tomoyo-test1

Pull lockdep capacity limit updates from Tetsuo Handa:
 "syzbot is occasionally reporting that fuzz testing is terminated due
  to hitting the upper limits lockdep can track.

  Analysis via /proc/lockdep* did not show any obvious culprits, so
  allow tuning the tracing capacity constants"

* tag 'tomoyo-pr-20210426' of git://git.osdn.net/gitroot/tomoyo/tomoyo-test1:
  lockdep: Allow tuning tracing capacity constants.
2021-04-26 08:44:23 -07:00
Tetsuo Handa 5dc33592e9 lockdep: Allow tuning tracing capacity constants.
Since syzkaller continues various test cases until the kernel crashes,
syzkaller tends to examine more locking dependencies than normal systems.
As a result, syzbot is reporting that the fuzz testing was terminated
due to hitting the upper limits lockdep can track [1] [2] [3]. Since
analysis via /proc/lockdep* did not show any obvious culprit [4] [5],
we have no choice but to allow tuning the tracing capacity constants.

[1] https://syzkaller.appspot.com/bug?id=3d97ba93fb3566000c1c59691ea427370d33ea1b
[2] https://syzkaller.appspot.com/bug?id=381cb436fe60dc03d7fd2a092b46d7f09542a72a
[3] https://syzkaller.appspot.com/bug?id=a588183ac34c1437fc0785e8f220e88282e5a29f
[4] https://lkml.kernel.org/r/4b8f7a57-fa20-47bd-48a0-ae35d860f233@i-love.sakura.ne.jp
[5] https://lkml.kernel.org/r/1c351187-253b-2d49-acaf-4563c63ae7d2@i-love.sakura.ne.jp
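
The constants are exposed via lib/Kconfig.debug. A .config fragment with
the historical default values might look like the following (a sketch for
illustration; check lib/Kconfig.debug in your tree, since the available
options and their defaults can differ):

```
# Lockdep capacity tuning (defaults shown; raise to track more
# dependencies at the cost of static memory)
CONFIG_LOCKDEP_BITS=15
CONFIG_LOCKDEP_CHAINS_BITS=16
CONFIG_LOCKDEP_STACK_TRACE_BITS=19
CONFIG_LOCKDEP_STACK_TRACE_HASH_BITS=14
CONFIG_LOCKDEP_CIRCULAR_QUEUE_BITS=12
```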

References: https://lkml.kernel.org/r/1595640639-9310-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
2021-04-05 20:33:57 +09:00
Arnd Bergmann 6d48b7912c lockdep: Address clang -Wformat warning printing for %hd
Clang doesn't like format strings that truncate a 32-bit
value to something shorter:

  kernel/locking/lockdep.c:709:4: error: format specifies type 'short' but the argument has type 'int' [-Werror,-Wformat]

In this case, the warning is slightly questionable, as it could realize
that both class->wait_type_outer and class->wait_type_inner are in fact
8-bit struct members, even though the result of the ?: operator becomes an
'int'.

However, there is really no point in printing the number as a 16-bit
'short' rather than either an 8-bit or 32-bit number, so just change
it to a normal %d.

Fixes: de8f5e4f2d ("lockdep: Introduce wait-type checks")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210322115531.3987555-1-arnd@kernel.org
2021-03-22 22:07:09 +01:00
Ingo Molnar e2db7592be locking: Fix typos in comments
Fix ~16 single-word typos in locking code comments.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-03-22 02:45:52 +01:00
Tetsuo Handa 3a85969e9d lockdep: Add a missing initialization hint to the "INFO: Trying to register non-static key" message
Since this message is printed when dynamically allocated spinlocks (e.g.
via kzalloc()) are used without initialization (e.g. spin_lock_init()),
suggest that developers check whether the initialization functions for
their objects were called, before they start wondering what annotation
is missing.

[ mingo: Minor tweaks to the message. ]

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210321064913.4619-1-penguin-kernel@I-love.SAKURA.ne.jp
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-03-21 11:59:57 +01:00
Shuah Khan f8cfa46608 lockdep: Add lockdep lock state defines
Add defines for the lock state returns from lock_is_held_type(), based on
Johannes Berg's suggestions, as this makes the lock states easier to read
and maintain. These are defines rather than an enum, to avoid changing the
lock_is_held_type() and lockdep_is_held() return types.

Update lock_is_held_type() and __lock_is_held() to use the new
defines.

Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/linux-wireless/871rdmu9z9.fsf@codeaurora.org/
2021-03-06 12:51:10 +01:00
Shuah Khan 3e31f94752 lockdep: Add lockdep_assert_not_held()
Some kernel functions must be called without holding a specific lock.
Add lockdep_assert_not_held() to be used in these functions to detect
incorrect calls while holding a lock.

lockdep_assert_not_held() provides the opposite functionality of
lockdep_assert_held() which is used to assert calls that require
holding a specific lock.

Incorporates suggestions from Peter Zijlstra to avoid misfires when
lockdep_off() is employed.

The need for lockdep_assert_not_held() came up in a discussion on an
ath10k patch. ath10k_drain_tx() and i915_vma_pin_ww() are examples
of functions that can use lockdep_assert_not_held().

Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/linux-wireless/871rdmu9z9.fsf@codeaurora.org/
2021-03-06 12:51:05 +01:00
Ingo Molnar 62137364e3 Merge branch 'linus' into locking/core, to pick up upstream fixes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-02-12 12:54:58 +01:00
Peter Zijlstra 7f82e631d2 locking/lockdep: Avoid unmatched unlock
Commit f6f48e1804 ("lockdep: Teach lockdep about "USED" <- "IN-NMI"
inversions") overlooked that print_usage_bug() releases the graph_lock,
and called it without holding that lock.

Fixes: f6f48e1804 ("lockdep: Teach lockdep about "USED" <- "IN-NMI" inversions")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Link: https://lkml.kernel.org/r/YBfkuyIfB1+VRxXP@hirez.programming.kicks-ass.net
2021-02-05 17:20:15 +01:00