Commit Graph

36 Commits

Author SHA1 Message Date
Waiman Long 0ed4516be9 arch: Remove cmpxchg_double
JIRA: https://issues.redhat.com/browse/RHEL-68940
Conflicts:
 1) A context diff in the arch_try_cmpxchg128_local() hunk of
    arch/x86/include/asm/cmpxchg_64.h due to the presence of later
    upstream commit 5bef003538ae ("locking/atomic: x86: add preprocessor
    symbols").
 2) A context diff in the first hunk of arch/x86/include/asm/percpu.h
    due to the presence of later upstream commit 5f863897d964 ("x86/percpu:
    Define raw_cpu_try_cmpxchg and this_cpu_try_cmpxchg()").

commit febe950dbfb464799beb0339cc6fb10699f4a5da
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Wed, 31 May 2023 15:08:44 +0200

    arch: Remove cmpxchg_double

    No moar users, remove the monster.
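
    As a rough illustration (not part of this patch), former
    cmpxchg_double() users now operate on one 16-byte value through the
    cmpxchg128() family instead, e.g. by overlaying two longs with a
    u128; the helper below and its layout are purely illustrative:

        union pair {
                u128 full;
                struct {
                        unsigned long lo;
                        unsigned long hi;
                };
        };

        /* CAS both words at once, as cmpxchg_double() used to do */
        static bool pair_update(union pair *p, unsigned long lo, unsigned long hi)
        {
                union pair old, new;

                old.full = p->full;     /* racy read is fine; the CAS recheck catches changes */
                new.lo = lo;
                new.hi = hi;

                /* only valid where system_has_cmpxchg128() reports support */
                return try_cmpxchg128(&p->full, &old.full, new.full);
        }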

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Heiko Carstens <hca@linux.ibm.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Link: https://lore.kernel.org/r/20230531132323.991907085@infradead.org

Signed-off-by: Waiman Long <longman@redhat.com>
2025-04-20 18:40:07 -04:00
Waiman Long 102b7e44e5 locking/atomic: Correct (cmp)xchg() instrumentation
JIRA: https://issues.redhat.com/browse/RHEL-68940
Conflicts:
  2 merge conflicts in include/linux/atomic/atomic-instrumented.h due to
  the presence of a later upstream commit 8c8b096a23d1 ("instrumentation:
  Wire up cmpxchg128()").

commit ec570320b09f76d52819e60abdccf372658216b6
Author: Mark Rutland <mark.rutland@arm.com>
Date:   Thu, 13 Apr 2023 17:06:44 +0100

    locking/atomic: Correct (cmp)xchg() instrumentation

    All xchg() and cmpxchg() ops are atomic RMWs, but currently we
    instrument these with instrument_atomic_write() rather than
    instrument_atomic_read_write(), missing the read aspect.

    Similarly, all try_cmpxchg() ops are non-atomic RMWs on *oldp, but we
    instrument these accesses with instrument_atomic_write() rather than
    instrument_read_write(), missing the read aspect and erroneously marking
    these as atomic.

    Fix the instrumentation for both points.
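
    A hedged sketch of the corrected generated wrapper for try_cmpxchg()
    (kcsan_*() annotations elided): the RMW target is instrumented as an
    atomic read-write, while *oldp is instrumented as a plain,
    non-atomic read-write:

        #define try_cmpxchg(ptr, oldp, ...) \
        ({ \
                typeof(ptr) __ai_ptr = (ptr); \
                typeof(oldp) __ai_oldp = (oldp); \
                instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
                instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
                arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
        })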

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lkml.kernel.org/r/20230413160644.490976-1-mark.rutland@arm.com
    Cc: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Waiman Long <longman@redhat.com>
2025-04-20 18:39:58 -04:00
Rafael Aquini 1625227e20 instrumentation: Wire up cmpxchg128()
JIRA: https://issues.redhat.com/browse/RHEL-27742
Conflicts:
  * the last hunks for "include/linux/atomic/atomic-arch-fallback.h",
    "include/linux/atomic/atomic-instrumented.h", and
    "scripts/atomic/gen-atomic-instrumented.sh" are different because
    the RHEL-9 backport of commit e6ce9d741163 ("locking/atomic: Add
    generic try_cmpxchg{,64}_local() support") accidentally dropped
    those hunks. We're fixing that omission here.

This patch is a backport of the following upstream commit:
commit 8c8b096a23d12fedf3c0f50524f30113ef97aa8c
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Wed May 31 15:08:37 2023 +0200

    instrumentation: Wire up cmpxchg128()

    Wire up the cmpxchg128 family in the atomic wrapper scripts.

    These provide the generic cmpxchg128 family of functions from the
    arch_ prefixed version, adding explicit instrumentation where needed.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Link: https://lore.kernel.org/r/20230531132323.519237070@infradead.org

Signed-off-by: Rafael Aquini <raquini@redhat.com>
2024-09-05 20:36:20 -04:00
Prarit Bhargava 849583cf2c locking/atomic: Add generic try_cmpxchg{,64}_local() support
JIRA: https://issues.redhat.com/browse/RHEL-25415

commit e6ce9d741163af0b846637ce6550ae8a671b1588
Author: Uros Bizjak <ubizjak@gmail.com>
Date:   Wed Apr 5 16:17:06 2023 +0200

    locking/atomic: Add generic try_cmpxchg{,64}_local() support

    Add generic support for try_cmpxchg{,64}_local() and their fallbacks.

    These provide the generic try_cmpxchg_local family of functions atop
    the arch_ prefixed versions, also adding explicit instrumentation.
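
    As a hedged usage sketch (the helper below is hypothetical), the
    _local variants keep the "update *old on failure" contract but imply
    no SMP ordering, so they suit data touched only by the local CPU:

        /* claim a cpu-local slot iff it is currently free (0) */
        static bool claim_slot(unsigned long *slot, unsigned long token)
        {
                unsigned long old = 0;

                /* on failure, 'old' is updated with the current owner */
                return try_cmpxchg_local(slot, &old, token);
        }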

    Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Link: https://lore.kernel.org/r/20230405141710.3551-2-ubizjak@gmail.com
    Cc: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Prarit Bhargava <prarit@redhat.com>
2024-03-20 09:43:03 -04:00
Waiman Long 99540cc176 atomics: Provide atomic_add_negative() variants
JIRA: https://issues.redhat.com/browse/RHEL-5228

commit e5ab9eff46b04c5a04778e40d7092fed3fda52ca
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Thu, 23 Mar 2023 21:55:30 +0100

    atomics: Provide atomic_add_negative() variants

    atomic_add_negative() does not provide the relaxed/acquire/release
    variants.

    Provide them in preparation for a new scalable reference count algorithm.
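
    A hedged sketch of the added API surface, written as prototypes for
    clarity (the real definitions are generated inline wrappers), plus a
    hypothetical caller:

        bool atomic_add_negative_relaxed(int i, atomic_t *v);
        bool atomic_add_negative_acquire(int i, atomic_t *v);
        bool atomic_add_negative_release(int i, atomic_t *v);
        /* ... plus the matching atomic64_ and atomic_long_ forms */

        /* returns true iff the count went negative after the decrement */
        static bool put_ref(atomic_t *count)
        {
                return atomic_add_negative_release(-1, count);
        }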

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Mark Rutland <mark.rutland@arm.com>
    Link: https://lore.kernel.org/r/20230323102800.101763813@linutronix.de

Signed-off-by: Waiman Long <longman@redhat.com>
2023-09-22 13:21:49 -04:00
Waiman Long 00ebe154f7 locking/atomics, kcsan: Add instrumentation for barriers
JIRA: https://issues.redhat.com/browse/RHEL-5228
Conflicts: A merge conflict in include/linux/atomic/atomic-instrumented.h
	   due to the presence of a later upstream commit 0aa7be05d83c
	   ("locking/atomic: Add generic try_cmpxchg64 support"). Add
	   back the missing kcsan_release()/kcsan_mb() calls skipped
	   in that merged commit and drop the hash change at the end
	   of the file.

commit e87c4f6642f49627c3430cb3ee78c73fb51b48e4
Author: Marco Elver <elver@google.com>
Date:   Tue, 30 Nov 2021 12:44:24 +0100

    locking/atomics, kcsan: Add instrumentation for barriers

    Adds the required KCSAN instrumentation for barriers of atomics.
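
    Roughly, each fully ordered instrumented wrapper gains a kcsan_mb()
    annotation (and _release ops a kcsan_release()) so KCSAN can model
    the ordering the atomic implies; a hedged sketch of the generated
    code:

        static __always_inline int
        atomic_fetch_add(int i, atomic_t *v)
        {
                kcsan_mb();     /* annotate the implied full barrier */
                instrument_atomic_read_write(v, sizeof(*v));
                return arch_atomic_fetch_add(i, v);
        }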

    Signed-off-by: Marco Elver <elver@google.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

Signed-off-by: Waiman Long <longman@redhat.com>
2023-09-22 13:21:01 -04:00
Jaroslav Kysela 8bb54747f3 Fix up more non-executable files marked executable
Bugzilla: https://bugzilla.redhat.com/2179848

commit c96618275234ad03d44eafe9f8844305bb44fda4
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: Sat Jan 28 11:17:57 2023 -0800

    Fix up more non-executable files marked executable

    Joe found another DT file that shouldn't be executable, and that
    frustrated me enough that I went hunting with this script:

        git ls-files -s |
            grep '^100755' |
            cut -f2 |
            xargs grep -L '^#!'

    and that found another file that shouldn't have been marked executable
    either, despite being in the scripts directory.

    Maybe these two are the last ones at least for now.  But I'm sure we'll
    be back in a few years, fixing things up again.

    Fixes: 8c6789f4e2d4 ("ASoC: dt-bindings: Add Everest ES8326 audio CODEC")
    Fixes: 4d8e5cd233 ("locking/atomics: Fix scripts/atomic/ script permissions")
    Reported-by: Joe Perches <joe@perches.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Jaroslav Kysela <jkysela@redhat.com>
2023-06-21 16:22:06 +02:00
Vitaly Kuznetsov 5cae5d4bec locking/atomic: Add generic try_cmpxchg64 support
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2119111

commit 0aa7be05d83cc584da0782405e8007e351dfb6cc
Author: Uros Bizjak <ubizjak@gmail.com>
Date:   Sun May 15 20:42:03 2022 +0200

    locking/atomic: Add generic try_cmpxchg64 support

    Add generic support for try_cmpxchg64{,_acquire,_release,_relaxed}
    and their fallbacks involving cmpxchg64.
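
    A hypothetical usage sketch (helper name invented here): a lock-free
    update of a 64-bit counter, where a failed attempt refreshes 'old' so
    no explicit re-read is needed:

        static void counter_add(u64 *ctr, u64 delta)
        {
                u64 old = *ctr;

                while (!try_cmpxchg64(ctr, &old, old + delta))
                        cpu_relax();
        }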

    Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20220515184205.103089-2-ubizjak@gmail.com

Conflicts:
	include/linux/atomic/atomic-instrumented.h (skipping
	e87c4f6642f49, dropping kcsan_release()/kcsan_mb())

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
2022-10-25 13:39:57 +02:00
Waiman Long db81d84668 atomics: Fix atomic64_{read_acquire,set_release} fallbacks
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2076713

commit dc1b4df09acdca7a89806b28f235cd6d8dcd3d24
Author: Mark Rutland <mark.rutland@arm.com>
Date:   Mon, 7 Feb 2022 10:19:43 +0000

    atomics: Fix atomic64_{read_acquire,set_release} fallbacks

    Arnd reports that on 32-bit architectures, the fallbacks for
    atomic64_read_acquire() and atomic64_set_release() are broken as they
    use smp_load_acquire() and smp_store_release() respectively, which do
    not work on types larger than the native word size.

    Since those contain compiletime_assert_atomic_type(), any attempt to use
    those fallbacks will result in a build-time error. e.g. with the
    following added to arch/arm/kernel/setup.c:

    | void test_atomic64(atomic64_t *v)
    | {
    |        atomic64_set_release(v, 5);
    |        atomic64_read_acquire(v);
    | }

    The compiler will complain as follows:

    | In file included from <command-line>:
    | In function 'arch_atomic64_set_release',
    |     inlined from 'test_atomic64' at ./include/linux/atomic/atomic-instrumented.h:669:2:
    | ././include/linux/compiler_types.h:346:38: error: call to '__compiletime_assert_9' declared with attribute error: Need native word sized stores/loads for atomicity.
    |   346 |  _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
    |       |                                      ^
    | ././include/linux/compiler_types.h:327:4: note: in definition of macro '__compiletime_assert'
    |   327 |    prefix ## suffix();    \
    |       |    ^~~~~~
    | ././include/linux/compiler_types.h:346:2: note: in expansion of macro '_compiletime_assert'
    |   346 |  _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
    |       |  ^~~~~~~~~~~~~~~~~~~
    | ././include/linux/compiler_types.h:349:2: note: in expansion of macro 'compiletime_assert'
    |   349 |  compiletime_assert(__native_word(t),    \
    |       |  ^~~~~~~~~~~~~~~~~~
    | ./include/asm-generic/barrier.h:133:2: note: in expansion of macro 'compiletime_assert_atomic_type'
    |   133 |  compiletime_assert_atomic_type(*p);    \
    |       |  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    | ./include/asm-generic/barrier.h:164:55: note: in expansion of macro '__smp_store_release'
    |   164 | #define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
    |       |                                                       ^~~~~~~~~~~~~~~~~~~
    | ./include/linux/atomic/atomic-arch-fallback.h:1270:2: note: in expansion of macro 'smp_store_release'
    |  1270 |  smp_store_release(&(v)->counter, i);
    |       |  ^~~~~~~~~~~~~~~~~
    | make[2]: *** [scripts/Makefile.build:288: arch/arm/kernel/setup.o] Error 1
    | make[1]: *** [scripts/Makefile.build:550: arch/arm/kernel] Error 2
    | make: *** [Makefile:1831: arch/arm] Error 2

    Fix this by only using smp_load_acquire() and smp_store_release() for
    native atomic types, and otherwise falling back to the regular barriers
    necessary for acquire/release semantics, as we do in the more generic
    acquire and release fallbacks.

    Since the fallback templates are used to generate the atomic64_*() and
    atomic_*() operations, the __native_word() check is added to both. For
    the atomic_*() operations, which are always 32-bit, the __native_word()
    check is redundant but not harmful, as it is always true.

    For the example above this works as expected on 32-bit, e.g. for arm
    multi_v7_defconfig:

    | <test_atomic64>:
    |         push    {r4, r5}
    |         dmb     ish
    |         pldw    [r0]
    |         mov     r2, #5
    |         mov     r3, #0
    |         ldrexd  r4, [r0]
    |         strexd  r4, r2, [r0]
    |         teq     r4, #0
    |         bne     484 <test_atomic64+0x14>
    |         ldrexd  r2, [r0]
    |         dmb     ish
    |         pop     {r4, r5}
    |         bx      lr

    ... and also on 64-bit, e.g. for arm64 defconfig:

    | <test_atomic64>:
    |         bti     c
    |         paciasp
    |         mov     x1, #0x5
    |         stlr    x1, [x0]
    |         ldar    x0, [x0]
    |         autiasp
    |         ret
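
    For reference, a hedged sketch of the repaired fallbacks: use
    smp_load_acquire()/smp_store_release() only when the counter is a
    native word, and otherwise pair a relaxed access with an explicit
    fence:

        static __always_inline s64
        arch_atomic64_read_acquire(const atomic64_t *v)
        {
                s64 ret;

                if (__native_word(atomic64_t)) {
                        ret = smp_load_acquire(&(v)->counter);
                } else {
                        ret = arch_atomic64_read(v);
                        __atomic_acquire_fence();
                }

                return ret;
        }

        static __always_inline void
        arch_atomic64_set_release(atomic64_t *v, s64 i)
        {
                if (__native_word(atomic64_t)) {
                        smp_store_release(&(v)->counter, i);
                } else {
                        __atomic_release_fence();
                        arch_atomic64_set(v, i);
                }
        }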

    Reported-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
    Link: https://lore.kernel.org/r/20220207101943.439825-1-mark.rutland@arm.com

Signed-off-by: Waiman Long <longman@redhat.com>
2022-05-12 08:34:06 -04:00
Waiman Long bbc5329965 locking/atomic: add arch_atomic_long*()
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2022806

commit 67d1b0de258ad066e1fc85d0ceaa75e107fb45bb
Author: Mark Rutland <mark.rutland@arm.com>
Date:   Tue, 13 Jul 2021 11:52:52 +0100

    locking/atomic: add arch_atomic_long*()

    Now that all architectures provide arch_{atomic,atomic64}_*(), we can
    build arch_atomic_long_*() atop these, which can be safely used in
    noinstr code. The regular atomic_long_*() wrappers are built atop these,
    as we do for {atomic,atomic64}_*() atop arch_{atomic,atomic64}_*().
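
    As a hedged sketch of the generated header's shape, each
    arch_atomic_long_*() op simply routes to the atomic64 or atomic
    counterpart depending on CONFIG_64BIT:

        #ifdef CONFIG_64BIT
        static __always_inline long
        arch_atomic_long_read(const atomic_long_t *v)
        {
                return arch_atomic64_read(v);
        }
        #else
        static __always_inline long
        arch_atomic_long_read(const atomic_long_t *v)
        {
                return arch_atomic_read(v);
        }
        #endif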

    We don't provide arch_* versions of the cond_read*() variants, as we
    don't have arch_* versions of the underlying atomic/atomic64 functions
    (nor the smp_cond_load*() helpers these are typically based on).

    Note that the headers in this patch under include/linux/atomic/ are
    generated by the scripts in scripts/atomic/.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210713105253.7615-5-mark.rutland@arm.com

Signed-off-by: Waiman Long <longman@redhat.com>
2021-11-12 14:23:15 -05:00
Waiman Long eddd1cbdfd locking/atomic: centralize generated headers
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2022806

commit e3d18cee258b898017b298b5b93f8134dd62aee3
Author: Mark Rutland <mark.rutland@arm.com>
Date:   Tue, 13 Jul 2021 11:52:51 +0100

    locking/atomic: centralize generated headers

    The generated atomic headers are only intended to be included directly
    by <linux/atomic.h>, but are spread across include/linux/ and
    include/asm-generic/, where people may be encouraged to include them.

    This patch centralizes them under include/linux/atomic/.

    Other than the header guards and hashes, there is no change to any of
    the generated headers as a result of this patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210713105253.7615-4-mark.rutland@arm.com

Signed-off-by: Waiman Long <longman@redhat.com>
2021-11-12 14:23:14 -05:00
Waiman Long 364722ad11 locking/atomic: remove ARCH_ATOMIC remnants
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2022806

commit f3e615b4db1fb7034f1d76dc307b77cc848f040e
Author: Mark Rutland <mark.rutland@arm.com>
Date:   Tue, 13 Jul 2021 11:52:50 +0100

    locking/atomic: remove ARCH_ATOMIC remnants

    Now that gen-atomic-fallback.sh is only used to generate the arch_*
    fallbacks, we don't need to also generate the non-arch_* forms, and can
    remove the infrastructure this needed.

    There is no change to any of the generated headers as a result of this
    patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210713105253.7615-3-mark.rutland@arm.com

Signed-off-by: Waiman Long <longman@redhat.com>
2021-11-12 14:23:14 -05:00
Waiman Long d7f0d85097 locking/atomic: simplify ifdef generation
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2022806

commit 47401d94947d507ff9f33fccf490baf47638fb69
Author: Mark Rutland <mark.rutland@arm.com>
Date:   Tue, 13 Jul 2021 11:52:49 +0100

    locking/atomic: simplify ifdef generation

    In gen-atomic-fallback.sh's gen_proto_order_variants(), we generate some
    ifdeffery with:

    | local basename="${arch}${atomic}_${pfx}${name}${sfx}"
    | ...
    | printf "#ifdef ${basename}\n"
    | ...
    | printf "#endif /* ${arch}${atomic}_${pfx}${name}${sfx} */\n\n"

    For clarity, use ${basename} for both sides, rather than open-coding the
    string generation.

    There is no change to any of the generated headers as a result of this
    patch.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lore.kernel.org/r/20210713105253.7615-2-mark.rutland@arm.com

Signed-off-by: Waiman Long <longman@redhat.com>
2021-11-12 14:23:13 -05:00
Mark Rutland bccf1ec369 locking/atomics: atomic-instrumented: simplify ifdeffery
Now that all architectures implement ARCH_ATOMIC, the fallbacks are
generated before the instrumented wrappers are generated. Due to this,
in atomic-instrumented.h we can assume that the whole set of atomic
functions has been generated. Likewise, atomic-instrumented.h doesn't
need to provide a preprocessor definition for every atomic it wraps.

This patch removes the redundant ifdeffery.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210525140232.53872-34-mark.rutland@arm.com
2021-05-26 13:20:52 +02:00
Mark Rutland 3c1885187b locking/atomic: delete !ARCH_ATOMIC remnants
Now that all architectures implement ARCH_ATOMIC, we can make it
mandatory, removing the Kconfig symbol and logic for !ARCH_ATOMIC.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210525140232.53872-33-mark.rutland@arm.com
2021-05-26 13:20:52 +02:00
Ingo Molnar a70a04b384 locking/atomics: Regenerate the atomics-check SHA1's
The include/asm-generic/atomic-instrumented.h checksum got out
of sync, so regenerate it. (No change to actual code.)

Also make scripts/atomic/gen-atomics.sh executable, to make
it easier to use.

The auto-generated atomic header signatures are now fine:

  thule:~/tip> scripts/atomic/check-atomics.sh
  thule:~/tip>

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-11-07 13:20:41 +01:00
Ingo Molnar 666fab4a3e Merge branch 'linus' into perf/kprobes
Conflicts:
	include/asm-generic/atomic-instrumented.h
	kernel/kprobes.c

Use the upstream atomic-instrumented.h checksum, and pick
the kprobes version of kernel/kprobes.c, which effectively
reverts this upstream workaround:

  645f224e7ba2: ("kprobes: Tell lockdep about kprobe nesting")

Since the new code *should* be fine without nesting.

Knock on wood ...

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-11-07 13:20:17 +01:00
Peter Zijlstra 29f006fdef asm-generic/atomic: Add try_cmpxchg() fallbacks
Only x86 provides try_cmpxchg() outside of the atomic_t interfaces;
provide generic fallbacks to create this interface from the widely
available cmpxchg() function.
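
A rough sketch of the generated fallback (ignoring the arch_ prefixing and
instrumentation layering): perform the cmpxchg(), report success as a bool,
and write the observed value back through *_oldp on failure so callers do
not need an explicit re-read:

    #define try_cmpxchg(_ptr, _oldp, _new) \
    ({ \
            typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
            ___r = cmpxchg((_ptr), ___o, (_new)); \
            if (unlikely(___r != ___o)) \
                    *___op = ___r; \
            likely(___r == ___o); \
    })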

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/159870621515.1229682.15506193091065001742.stgit@devnote2
2020-10-12 18:27:27 +02:00
Ingo Molnar d6c4c11348 Merge branch 'kcsan' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into locking/core
Pull KCSAN updates for v5.10 from Paul E. McKenney:

 - Improve kernel messages.

 - Be more permissive with bitops races under KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.

 - Optimize debugfs stat counters.

 - Introduce the instrument_*read_write() annotations, to provide a
   finer description of certain ops - using KCSAN's compound instrumentation.
   Use them for atomic RNW and bitops, where appropriate.
   Doing this might find new races.
   (Depends on the compiler having tsan-compound-read-before-write=1 support.)

 - Support atomic built-ins, which will help certain architectures, such as s390.

 - Misc enhancements and smaller fixes.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-10-09 08:56:02 +02:00
Paul Bolle d89d5f855f locking/atomics: Check atomic-arch-fallback.h too
The sha1sum of include/linux/atomic-arch-fallback.h isn't checked by
check-atomics.sh. It's not clear why it's skipped, so let's check it too.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20201001202028.1048418-1-pebolle@tiscali.nl
2020-10-07 18:14:14 +02:00
Marco Elver 3570a1bcf4 locking/atomics: Use read-write instrumentation for atomic RMWs
Use instrument_atomic_read_write() for atomic RMW ops.

Cc: Will Deacon <will@kernel.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: <linux-arch@vger.kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2020-08-24 15:09:59 -07:00
Peter Zijlstra 5faafd5685 locking/atomics: Provide the arch_atomic_ interface to generic code
Architectures with instrumented (KASAN/KCSAN) atomic operations
natively provide arch_atomic_ variants that are not instrumented.

It turns out that some generic code also requires arch_atomic_ in
order to avoid instrumentation, so provide the arch_atomic_ interface
as a direct map into the regular atomic_ interface for
non-instrumented architectures.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2020-06-25 08:23:22 -07:00
Thomas Gleixner 37d1a04b13 Rebase locking/kcsan to locking/urgent
Merge the state of the locking kcsan branch before the read/write_once()
and the atomics modifications got merged.

Squash the fallout of the rebase on top of the read/write once and atomic
fallback work into the merge. The history of the original branch is
preserved in tag locking-kcsan-2020-06-02.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2020-06-11 20:02:46 +02:00
Peter Zijlstra 37f8173dd8 locking/atomics: Flip fallbacks and instrumentation
Currently instrumentation of atomic primitives is done at the architecture
level, while composites or fallbacks are provided at the generic level.

The result is that there are no uninstrumented variants of the
fallbacks. Since there is now need of such variants to isolate text poke
from any form of instrumentation invert this ordering.

Doing this means moving the instrumentation into the generic code as
well as having (for now) two variants of the fallbacks.
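
A rough sketch of the resulting layering, using atomic_read() as the
example (names follow the arch_atomic_ scheme described above):

    /* architecture (or generic fallback) code, never instrumented: */
    static __always_inline int arch_atomic_read(const atomic_t *v)
    {
            return READ_ONCE(v->counter);
    }

    /* the generic wrapper adds the KASAN/KCSAN instrumentation: */
    static __always_inline int atomic_read(const atomic_t *v)
    {
            instrument_atomic_read(v, sizeof(*v));
            return arch_atomic_read(v);
    }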

Notes:

 - the various *cond_read* primitives are not proper fallbacks
   and got moved into linux/atomic.c. No arch_ variants are
   generated because the base primitives smp_cond_load*()
   are instrumented.

 - once all architectures are moved over to arch_atomic_ one of the
   fallback variants can be removed and some 2300 lines reclaimed.

 - atomic_{read,set}*() are no longer double-instrumented

Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20200505134058.769149955@linutronix.de
2020-06-11 08:03:24 +02:00
Marco Elver 765dcd2099 asm-generic/atomic: Use __always_inline for fallback wrappers
Use __always_inline for atomic fallback wrappers. When building for size
(CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
inline even relatively small static inline functions that are assumed to
be inlinable such as atomic ops. This can cause problems, for example in
UACCESS regions.

While the fallback wrappers aren't pure wrappers, they are trivial
nonetheless, and the function they wrap should determine the final
inlining policy.

For x86 tinyconfig we observe:
- vmlinux baseline: 1315988
- vmlinux with patch: 1315928 (-60 bytes)

[ tglx: Cherry-picked from KCSAN ]

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2020-06-11 08:03:24 +02:00
Marco Elver ed8af2e4d2 asm-generic, atomic-instrumented: Use generic instrumented.h
This switches atomic-instrumented.h to use the generic instrumentation
wrappers provided by instrumented.h.

No functional change intended.

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-03-21 09:41:42 +01:00
Marco Elver 944bc9cca7 asm-generic/atomic: Use __always_inline for fallback wrappers
Use __always_inline for atomic fallback wrappers. When building for size
(CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
inline even relatively small static inline functions that are assumed to
be inlinable such as atomic ops. This can cause problems, for example in
UACCESS regions.

While the fallback wrappers aren't pure wrappers, they are trivial
nonetheless, and the function they wrap should determine the final
inlining policy.

For x86 tinyconfig we observe:
- vmlinux baseline: 1315988
- vmlinux with patch: 1315928 (-60 bytes)

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2020-01-07 07:47:23 -08:00
Marco Elver c020395b66 asm-generic/atomic: Use __always_inline for pure wrappers
Prefer __always_inline for atomic wrappers. When building for size
(CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
inline even relatively small static inline functions that are assumed to
be inlinable such as atomic ops. This can cause problems, for example in
UACCESS regions.

By using __always_inline, we let the real implementation and not the
wrapper determine the final inlining preference.

For x86 tinyconfig we observe:
- vmlinux baseline: 1316204
- vmlinux with patch: 1315988 (-216 bytes)

This came up when addressing UACCESS warnings with CC_OPTIMIZE_FOR_SIZE
in the KCSAN runtime:
http://lkml.kernel.org/r/58708908-84a0-0a81-a836-ad97e33dbb62@infradead.org

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2020-01-07 07:47:23 -08:00
Marco Elver e75a6795ed locking/atomics, kcsan: Add KCSAN instrumentation
This adds KCSAN instrumentation to atomic-instrumented.h.

Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2019-11-16 07:23:15 -08:00
Michael Forney ebf8d82bbb locking/atomics: Use sed(1) instead of non-standard head(1) option
POSIX says the -n option must be a positive decimal integer. Not all
implementations of head(1) support negative numbers meaning offset from
the end of the file.

Instead, the sed expression '$d' has the same effect of removing the
last line of the file.

Signed-off-by: Michael Forney <mforney@mforney.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190618053306.730-1-mforney@mforney.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-06-25 10:17:07 +02:00
Andrew Morton b50776ae01 locking/atomics: Don't assume that scripts are executable
patch(1) doesn't set the x bit on files.  So if someone downloads and
applies patch-4.21.xz, their kernel won't build.  Fix that by executing
/bin/sh.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-19 14:21:43 +02:00
Mark Rutland 0cf264b313 locking/atomics: Check atomic headers with sha1sum
We currently check the atomic headers at build-time to ensure they
haven't been modified directly, and these checks require regenerating
the headers in full. As this takes a few seconds, even when
parallelized, this is too slow to run for every kernel build.

Instead, we can generate a hash of each header as we generate them,
which we can cheaply check at build time (~0.16s for all headers).

This patch does so, updating headers with their hashes using the new
gen-atomics.sh script. As some users apparently build the kernel without
coreutils and thus lack sha1sum, the checks are skipped in that case.
Presumably, most developers have a working coreutils installation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: anders.roxell@linaro.org
Cc: linux-kernel@vger.kernel.org
Cc: naresh.kamboju@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-13 08:07:31 +01:00
Anders Roxell b14e77f89a locking/atomics: Change 'fold' to 'grep'
Some distributions and build systems don't include 'fold' from
coreutils by default.

.../scripts/atomic/atomic-tbl.sh: line 183: fold: command not found

Rework the script to use 'grep' instead of 'fold', relying on a
dependency that is already used widely in the kernel.

[Mark: rework commit message]

Suggested-by: Will Deacon <will.deacon@arm.com>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: boqun.feng@gmail.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-11 14:27:33 +01:00
Ingo Molnar 4d8e5cd233 locking/atomics: Fix scripts/atomic/ script permissions
Mark all these scripts executable.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linuxdrivers@attotech.com
Cc: dvyukov@google.com
Cc: boqun.feng@gmail.com
Cc: arnd@arndb.de
Cc: aryabinin@virtuozzo.com
Cc: glider@google.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-01 12:45:46 +01:00
Mark Rutland 8d32588077 locking/atomics: Check generated headers are up-to-date
Now that all the generated atomic headers are in place, it would be good
to ensure that:

a) the headers are up-to-date when scripting changes.

b) developers don't directly modify the generated headers.

To ensure both of these properties, let's add a Kbuild step to check
that the generated headers are up-to-date.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com
Cc: Will Deacon <will.deacon@arm.com>
Cc: linuxdrivers@attotech.com
Cc: dvyukov@google.com
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: arnd@arndb.de
Cc: aryabinin@virtuozzo.com
Cc: glider@google.com
Link: http://lkml.kernel.org/r/20180904104830.2975-6-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-01 11:01:10 +01:00
Mark Rutland ace9bad4df locking/atomics: Add common header generation files
To minimize repetition, to allow for future rework, and to ensure
regularity of the various atomic APIs, we'd like to automatically
generate (the bulk of) a number of headers related to atomics.

This patch adds the infrastructure to do so, leaving actual conversion
of headers to subsequent patches. This infrastructure consists of:

* atomics.tbl - a table describing the functions in the atomics API,
  with names, prototypes, and metadata describing the variants that
  exist (e.g fetch/return, acquire/release/relaxed). Note that the
  return type is dependent on the particular variant.

* atomic-tbl.sh - a library of routines useful for dealing with
  atomics.tbl (e.g. querying which variants exist, or generating
  argument/parameter lists for a given function variant).

* gen-atomic-fallback.sh - a script which generates a header of
  fallbacks, covering cases where architecture omit certain functions
  (e.g. omitting relaxed variants).

* gen-atomic-long.sh - a script which generates wrappers providing the
  atomic_long API atop of the relevant atomic or atomic64 API,
  ensuring the APIs are consistent.

* gen-atomic-instrumented.sh - a script which generates atomic* wrappers
  atop of arch_atomic* functions, with automatically generated KASAN
  instrumentation.

* fallbacks/* - a set of fallback implementations for atomics, which
  should be used when no implementation of a given atomic is provided.
  These are used by gen-atomic-fallback.sh to generate fallbacks, and
  these are also used by other scripts to determine the set of optional
  atomics (as required to generate preprocessor guards correctly).

  Fallbacks may use the following variables:

  ${atomic}     atomic prefix: atomic/atomic64/atomic_long, which can be
		used to derive the atomic type, and to prefix functions

  ${int}        integer type: int/s64/long

  ${pfx}        variant prefix, e.g. fetch_

  ${name}       base function name, e.g. add

  ${sfx}        variant suffix, e.g. _return

  ${order}      order suffix, e.g. _relaxed

  ${atomicname} full name, e.g. atomic64_fetch_add_relaxed

  ${ret}        return type of the function, e.g. void

  ${retstmt}    a return statement (with a trailing space), unless the
                variant returns void

  ${params}     parameter list for the function declaration, e.g.
                "int i, atomic_t *v"

  ${args}       argument list for invoking the function, e.g. "i, v"

  ... for clarity, ${ret}, ${retstmt}, ${params}, and ${args} are
  open-coded for fallbacks where these do not vary, or are critical to
  understanding the logic of the fallback.
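
As a worked example (sketch only; the real output comes from
gen-atomic-fallback.sh), the acquire fallback template expanded with
${atomic}=atomic, ${int}=int, ${pfx}=fetch_, ${name}=add, an empty ${sfx}
and ${order}=_acquire yields roughly:

    static inline int
    atomic_fetch_add_acquire(int i, atomic_t *v)
    {
            int ret = atomic_fetch_add_relaxed(i, v);
            __atomic_acquire_fence();
            return ret;
    }

(Later commits in this log switch these generated wrappers to
__always_inline.)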

The MAINTAINERS entry for the atomic infrastructure is updated to cover
the new scripts.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com
Cc: Will Deacon <will.deacon@arm.com>
Cc: linuxdrivers@attotech.com
Cc: dvyukov@google.com
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: arnd@arndb.de
Cc: aryabinin@virtuozzo.com
Cc: glider@google.com
Link: http://lkml.kernel.org/r/20180904104830.2975-2-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-01 11:00:36 +01:00