Commit Graph

35 Commits

Audra Mitchell 5de7aadd9a mm: use kstrtobool() instead of strtobool()
JIRA: https://issues.redhat.com/browse/RHEL-27739

This patch is a backport of the following upstream commit:
commit f15be1b8d449a8eebe82d77164bf760804753651
Author: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Date:   Tue Nov 1 22:14:09 2022 +0100

    mm: use kstrtobool() instead of strtobool()

    strtobool() is the same as kstrtobool().  However, the latter is more
    widely used within the kernel.

    In order to remove strtobool() and slightly simplify kstrtox.h, switch to
    the other function name.

    While at it, include the corresponding header file (<linux/kstrtox.h>).

    Link: https://lkml.kernel.org/r/03f9401a6c8b87a1c786a2138d16b048f8d0eb53.1667336095.git.christophe.jaillet@wanadoo.fr
    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Acked-by: Pasha Tatashin <pasha.tatashin@soleen.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Audra Mitchell <audra@redhat.com>
2024-04-09 09:42:56 -04:00
Waiman Long 506b2d281a mm: Fix copy_from_user_nofault().
JIRA: https://issues.redhat.com/browse/RHEL-18440

commit d319f344561de23e810515d109c7278919bff7b0
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Mon, 10 Apr 2023 19:43:44 +0200

    mm: Fix copy_from_user_nofault().

    There are several issues with copy_from_user_nofault():

    - access_ok() is designed for user context only and for that reason
    it has WARN_ON_IN_IRQ(), which triggers when bpf, kprobe, eprobe
    and perf on ppc call it from irq context.

    - it's missing nmi_uaccess_okay() which is a nop on all architectures
    except x86 where it's required.
    The comment in arch/x86/mm/tlb.c explains in detail why it's necessary.
    Calling copy_from_user_nofault() from bpf, [ke]probe without this check is not safe.

    - __copy_from_user_inatomic() under CONFIG_HARDENED_USERCOPY is calling
    check_object_size()->__check_object_size()->check_heap_object()->find_vmap_area()->spin_lock()
    which is not safe to do from bpf, [ke]probe and perf due to potential deadlock.

    Fix all three issues. At the end the copy_from_user_nofault() becomes
    equivalent to copy_from_user_nmi() from safety point of view with
    a difference in the return value.

    Reported-by: Hsin-Wei Hung <hsinweih@uci.edu>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Signed-off-by: Florian Lehner <dev@der-flo.net>
    Tested-by: Hsin-Wei Hung <hsinweih@uci.edu>
    Tested-by: Florian Lehner <dev@der-flo.net>
    Link: https://lore.kernel.org/r/20230410174345.4376-2-dev@der-flo.net
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Waiman Long <longman@redhat.com>
2023-12-07 12:50:09 -05:00
Chris von Recklinghausen 28df42cc2e usercopy: use unsigned long instead of uintptr_t
Bugzilla: https://bugzilla.redhat.com/2160210

commit 170b2c350cfcb6f74074e44dd9f916787546db0d
Author: Jason A. Donenfeld <Jason@zx2c4.com>
Date:   Thu Jun 16 16:36:17 2022 +0200

    usercopy: use unsigned long instead of uintptr_t

    A recent commit factored out a series of annoying (unsigned long) casts
    into a single variable declaration, but made the pointer type a
    `uintptr_t` rather than the usual `unsigned long`. This patch changes it
    to be the integer type more typically used by the kernel to represent
    addresses.

    Fixes: 35fb9ae4aa2e ("usercopy: Cast pointer to an integer once")
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Uladzislau Rezki <urezki@gmail.com>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Joe Perches <joe@perches.com>
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220616143617.449094-1-Jason@zx2c4.com

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:19:16 -04:00
Chris von Recklinghausen e4d1f06296 usercopy: Make usercopy resilient against ridiculously large copies
Bugzilla: https://bugzilla.redhat.com/2160210

commit 1dfbe9fcda4afc957f0e371e207ae3cb7e8f3b0e
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Sun Jun 12 22:32:27 2022 +0100

    usercopy: Make usercopy resilient against ridiculously large copies

    If 'n' is so large that it's negative, we might wrap around and mistakenly
    think that the copy is OK when it's not.  Such a copy would probably
    crash, but just doing the arithmetic in a more simple way lets us detect
    and refuse this case.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
    Tested-by: Zorro Lang <zlang@redhat.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220612213227.3881769-4-willy@infradead.org

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:19:13 -04:00
Chris von Recklinghausen 2b07bc9bdc usercopy: Cast pointer to an integer once
Bugzilla: https://bugzilla.redhat.com/2160210

commit 35fb9ae4aa2e838b234323e6f7cf6336ff019e5a
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Sun Jun 12 22:32:26 2022 +0100

    usercopy: Cast pointer to an integer once

    Get rid of a lot of annoying casts by setting 'addr' once at the top
    of the function.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
    Tested-by: Zorro Lang <zlang@redhat.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220612213227.3881769-3-willy@infradead.org

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:19:13 -04:00
Chris von Recklinghausen 51420b6878 usercopy: Handle vm_map_ram() areas
Bugzilla: https://bugzilla.redhat.com/2160210

commit 993d0b287e2ef7bee2e8b13b0ce4d2b5066f278e
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Sun Jun 12 22:32:25 2022 +0100

    usercopy: Handle vm_map_ram() areas

    vmalloc does not allocate a vm_struct for vm_map_ram() areas.  That causes
    us to deny usercopies from those areas.  This affects XFS which uses
    vm_map_ram() for its directories.

    Fix this by calling find_vmap_area() instead of find_vm_area().

    Fixes: 0aef499f3172 ("mm/usercopy: Detect vmalloc overruns")
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
    Tested-by: Zorro Lang <zlang@redhat.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220612213227.3881769-2-willy@infradead.org

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:19:13 -04:00
Chris von Recklinghausen b439169186 mm: usercopy: move the virt_addr_valid() below the is_vmalloc_addr()
Bugzilla: https://bugzilla.redhat.com/2160210

commit a5f4d9df1f7beaaebbaa5943ceb789c34f10b8d5
Author: Yuanzheng Song <songyuanzheng@huawei.com>
Date:   Thu May 5 07:10:37 2022 +0000

    mm: usercopy: move the virt_addr_valid() below the is_vmalloc_addr()

    The is_kmap_addr() and the is_vmalloc_addr() in the check_heap_object()
    will not work, because the virt_addr_valid() will exclude the kmap and
    vmalloc regions. So let's move the virt_addr_valid() below
    the is_vmalloc_addr().

    Signed-off-by: Yuanzheng Song <songyuanzheng@huawei.com>
    Fixes: 4e140f59d285 ("mm/usercopy: Check kmap addresses properly")
    Fixes: 0aef499f3172 ("mm/usercopy: Detect vmalloc overruns")
    Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220505071037.4121100-1-songyuanzheng@huawei.com

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:19:09 -04:00
Chris von Recklinghausen aa411b50fa usercopy: Remove HARDENED_USERCOPY_PAGESPAN
Bugzilla: https://bugzilla.redhat.com/2160210

commit 1109a5d907015005cdbe9eaa4fec40213e2f9010
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Mon Jan 10 23:15:30 2022 +0000

    usercopy: Remove HARDENED_USERCOPY_PAGESPAN

    There isn't enough information to make this a useful check any more;
    the useful parts of it were moved in earlier patches, so remove this
    set of checks now.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Acked-by: Kees Cook <keescook@chromium.org>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220110231530.665970-5-willy@infradead.org

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:18:51 -04:00
Chris von Recklinghausen 4b5941a35b mm/usercopy: Detect large folio overruns
Bugzilla: https://bugzilla.redhat.com/2160210

commit ab502103ae3ce4c0fc393e598455efede3e523c9
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Mon Jan 10 23:15:29 2022 +0000

    mm/usercopy: Detect large folio overruns

    Move the compound page overrun detection out of
    CONFIG_HARDENED_USERCOPY_PAGESPAN and convert it to use folios so it's
    enabled for more people.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Acked-by: Kees Cook <keescook@chromium.org>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220110231530.665970-4-willy@infradead.org

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:18:51 -04:00
Chris von Recklinghausen 0d9a8299dc mm/usercopy: Detect vmalloc overruns
Bugzilla: https://bugzilla.redhat.com/2160210

commit 0aef499f3172a60222ae7460d61b364c134d6e1a
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Mon Jan 10 23:15:28 2022 +0000

    mm/usercopy: Detect vmalloc overruns

    If you have a vmalloc() allocation, or an address from calling vmap(),
    you cannot overrun the vm_area which describes it, regardless of the
    size of the underlying allocation.  This probably doesn't do much for
    security because vmalloc comes with guard pages these days, but it
    prevents usercopy aborts when copying to a vmap() of smaller pages.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220110231530.665970-3-willy@infradead.org

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:18:51 -04:00
Chris von Recklinghausen 29c50d2739 mm/usercopy: Check kmap addresses properly
Bugzilla: https://bugzilla.redhat.com/2160210

commit 4e140f59d285c1ca1e5c81b4c13e27366865bd09
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Mon Jan 10 23:15:27 2022 +0000

    mm/usercopy: Check kmap addresses properly

    If you are copying to an address in the kmap region, you may not copy
    across a page boundary, no matter what the size of the underlying
    allocation.  You can't kmap() a slab page because slab pages always
    come from low memory.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Link: https://lore.kernel.org/r/20220110231530.665970-2-willy@infradead.org

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:18:51 -04:00
Chris von Recklinghausen 4ecd490024 mm: remove usercopy_warn()
Bugzilla: https://bugzilla.redhat.com/2120352

commit 6eada26ffc80bfe1f2db088be0c44ec82b5cd3dc
Author: Christophe Leroy <christophe.leroy@csgroup.eu>
Date:   Tue Mar 22 14:47:46 2022 -0700

    mm: remove usercopy_warn()

    Users of usercopy_warn() were removed by commit 53944f171a89 ("mm:
    remove HARDENED_USERCOPY_FALLBACK").

    Remove it.

    Link: https://lkml.kernel.org/r/5f26643fc70b05f8455b60b99c30c17d635fa640.1644231910.git.christophe.leroy@csgroup.eu
    Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
    Reviewed-by: Stephen Kitt <steve@sk2.org>
    Reviewed-by: Muchun Song <songmuchun@bytedance.com>
    Cc: Kees Cook <keescook@chromium.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2022-10-12 07:27:55 -04:00
Chris von Recklinghausen 79187aae93 usercopy: Check valid lifetime via stack depth
Bugzilla: https://bugzilla.redhat.com/2120352

commit 2792d84e6da5e0fd7d3b22fd70bc69b7ee263609
Author: Kees Cook <keescook@chromium.org>
Date:   Wed Feb 16 12:05:28 2022 -0800

    usercopy: Check valid lifetime via stack depth

    One of the things that CONFIG_HARDENED_USERCOPY sanity-checks is whether
    an object that is about to be copied to/from userspace is overlapping
    the stack at all. If it is, it performs a number of inexpensive
    bounds checks. One of the finer-grained checks is whether an object
    crosses stack frames within the stack region. Doing this on x86 with
    CONFIG_FRAME_POINTER was cheap/easy. Doing it with ORC was deemed too
    heavy, and was left out (a while ago), leaving the coarser whole-stack
    check.

    The LKDTM tests USERCOPY_STACK_FRAME_TO and USERCOPY_STACK_FRAME_FROM
    try to exercise these cross-frame cases to validate the defense is
    working. They have been failing ever since ORC was added (which was
    expected). While Muhammad was investigating various LKDTM failures[1],
    he asked me for additional details on them, and I realized that when
    exact stack frame boundary checking is not available (i.e. everything
    except x86 with FRAME_POINTER), it could check if a stack object is at
    least "current depth valid", in the sense that any object within the
    stack region but not between start-of-stack and current_stack_pointer
    should be considered unavailable (i.e. its lifetime is from a call no
    longer present on the stack).

    Introduce ARCH_HAS_CURRENT_STACK_POINTER to track which architectures
    have actually implemented the common global register alias.

    Additionally report usercopy bounds checking failures with an offset
    from current_stack_pointer, which may assist with diagnosing failures.

    The LKDTM USERCOPY_STACK_FRAME_TO and USERCOPY_STACK_FRAME_FROM tests
    (once slightly adjusted in a separate patch) pass again with this fixed.

    [1] https://github.com/kernelci/kernelci-project/issues/84

    Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: linux-mm@kvack.org
    Reported-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    ---
    v1: https://lore.kernel.org/lkml/20220216201449.2087956-1-keescook@chromium.org
    v2: https://lore.kernel.org/lkml/20220224060342.1855457-1-keescook@chromium.org
    v3: https://lore.kernel.org/lkml/20220225173345.3358109-1-keescook@chromium.org
    v4: - improve commit log (akpm)

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2022-10-12 07:27:45 -04:00
Aristeu Rozanski 529133f15f mm: Convert check_heap_object() to use struct slab
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2083861
Tested: by me with multiple test suites

commit 0b3eb091d5759479d44cb793fad2c51ea06bdcec
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Mon Oct 4 14:45:56 2021 +0100

    mm: Convert check_heap_object() to use struct slab

    Ensure that we're not seeing a tail page inside __check_heap_object() by
    converting to a slab instead of a page.  Take the opportunity to mark
    the slab as const since we're not modifying it.  Also move the
    declaration of __check_heap_object() to mm/slab.h so it's not available
    to the wider kernel.

    [ vbabka@suse.cz: in check_heap_object() only convert to struct slab for
      actual PageSlab pages; use folio as intermediate step instead of page ]

    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    Reviewed-by: Roman Gushchin <guro@fb.com>

Signed-off-by: Aristeu Rozanski <arozansk@redhat.com>
2022-07-10 10:44:09 -04:00
Rafael Aquini 8a54c5784d mm/usercopy: return 1 from hardened_usercopy __setup() handler
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2064990

This patch is a backport of the following upstream commit:
commit 05fe3c103f7e6b8b4fca8a7001dfc9ed4628085b
Author: Randy Dunlap <rdunlap@infradead.org>
Date:   Tue Mar 22 14:47:52 2022 -0700

    mm/usercopy: return 1 from hardened_usercopy __setup() handler

    __setup() handlers should return 1 if the command line option is handled
    and 0 if not (or maybe never return 0; it just pollutes init's
    environment).  This prevents:

      Unknown kernel command line parameters \
      "BOOT_IMAGE=/boot/bzImage-517rc5 hardened_usercopy=off", will be \
      passed to user space.

      Run /sbin/init as init process
       with arguments:
         /sbin/init
       with environment:
         HOME=/
         TERM=linux
         BOOT_IMAGE=/boot/bzImage-517rc5
         hardened_usercopy=off
    or
         hardened_usercopy=on
    but when "hardened_usercopy=foo" is used, there is no "Unknown kernel
    command line parameter" warning.

    Return 1 to indicate that the boot option has been handled.
    Print a warning if strtobool() returns an error on the option string,
    but do not mark this as an unknown command line option and do not
    cause init's environment to be polluted with this string.

    Link: https://lkml.kernel.org/r/20220222034249.14795-1-rdunlap@infradead.org
    Link: lore.kernel.org/r/64644a2f-4a20-bab3-1e15-3b2cdd0defe3@omprussia.ru
    Fixes: b5cb15d937 ("usercopy: Allow boot cmdline disabling of hardening")
    Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
    Reported-by: Igor Zhbanov <i.zhbanov@omprussia.ru>
    Acked-by: Chris von Recklinghausen <crecklin@redhat.com>
    Cc: Kees Cook <keescook@chromium.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2022-03-27 00:48:47 -04:00
Randy Dunlap 5ce1be0e40 mm/usercopy.c: delete duplicated word
Drop the repeated word "the".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Link: http://lkml.kernel.org/r/20200801173822.14973-13-rdunlap@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-08-12 10:57:58 -07:00
Kees Cook 314eed30ed usercopy: Avoid HIGHMEM pfn warning
When running on a system with >512MB RAM with a 32-bit kernel built with:

	CONFIG_DEBUG_VIRTUAL=y
	CONFIG_HIGHMEM=y
	CONFIG_HARDENED_USERCOPY=y

all execve()s will fail due to argv copying into kmap()ed pages, and
during usercopy checking the eventual calls to virt_to_page() will flag
"bad" kmap (highmem) pointers because of CONFIG_DEBUG_VIRTUAL=y:

 ------------[ cut here ]------------
 kernel BUG at ../arch/x86/mm/physaddr.c:83!
 invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
 CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.3.0-rc8 #6
 Hardware name: Dell Inc. Inspiron 1318/0C236D, BIOS A04 01/15/2009
 EIP: __phys_addr+0xaf/0x100
 ...
 Call Trace:
  __check_object_size+0xaf/0x3c0
  ? __might_sleep+0x80/0xa0
  copy_strings+0x1c2/0x370
  copy_strings_kernel+0x2b/0x40
  __do_execve_file+0x4ca/0x810
  ? kmem_cache_alloc+0x1c7/0x370
  do_execve+0x1b/0x20
  ...

The check is from arch/x86/mm/physaddr.c:

	VIRTUAL_BUG_ON((phys_addr >> PAGE_SHIFT) > max_low_pfn);

Due to the kmap() in fs/exec.c:

		kaddr = kmap(kmapped_page);
	...
	if (copy_from_user(kaddr+offset, str, bytes_to_copy)) ...

Now we can fetch the correct page to avoid the pfn check. In both cases,
hardened usercopy will need to walk the page-span checker (if enabled)
to do sanity checking.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Fixes: f5509cc18d ("mm: Hardened usercopy")
Cc: Matthew Wilcox <willy@infradead.org>
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Link: https://lore.kernel.org/r/201909171056.7F2FFD17@keescook
2019-09-17 15:20:17 -07:00
Isaac J. Manjarres 951531691c mm/usercopy: use memory range to be accessed for wraparound check
Currently, when checking to see if accessing n bytes starting at address
"ptr" will cause a wraparound in the memory addresses, the check in
check_bogus_address() adds an extra byte, which is incorrect, as the
range of addresses that will be accessed is [ptr, ptr + (n - 1)].

This can lead to incorrectly detecting a wraparound in the memory
address, when trying to read 4 KB from memory that is mapped to the
last possible page in the virtual address space, when in fact, accessing
that range of memory would not cause a wraparound to occur.

Use the memory range that will actually be accessed when considering if
accessing a certain amount of bytes will cause the memory address to
wrap around.

Link: http://lkml.kernel.org/r/1564509253-23287-1-git-send-email-isaacm@codeaurora.org
Fixes: f5509cc18d ("mm: Hardened usercopy")
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
Co-developed-by: Prasad Sodagudi <psodagud@codeaurora.org>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Trilok Soni <tsoni@codeaurora.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-08-13 16:06:52 -07:00
Thomas Gleixner d2912cb15b treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19 17:09:55 +02:00
Qian Cai 7bff3c0699 mm/usercopy.c: no check page span for stack objects
It is easy to trigger this with CONFIG_HARDENED_USERCOPY_PAGESPAN=y,

  usercopy: Kernel memory overwrite attempt detected to spans multiple pages (offset 0, size 23)!
  kernel BUG at mm/usercopy.c:102!

For example,

print_worker_info
char name[WQ_NAME_LEN] = { };
char desc[WORKER_DESC_LEN] = { };
  probe_kernel_read(name, wq->name, sizeof(name) - 1);
  probe_kernel_read(desc, worker->desc, sizeof(desc) - 1);
    __copy_from_user_inatomic
      check_object_size
        check_heap_object
          check_page_span

This is because on-stack variables could cross a PAGE_SIZE boundary
and fail this check:

if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
	   ((unsigned long)end & (unsigned long)PAGE_MASK)))

ptr = FFFF889007D7EFF8
end = FFFF889007D7F00E

Hence, fix it by checking if it is a stack object first.

[keescook@chromium.org: improve comments after reorder]
  Link: http://lkml.kernel.org/r/20190103165151.GA32845@beast
Link: http://lkml.kernel.org/r/20181231030254.99441-1-cai@lca.pw
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-08 17:15:11 -08:00
Chris von Recklinghausen b5cb15d937 usercopy: Allow boot cmdline disabling of hardening
Enabling HARDENED_USERCOPY may cause measurable regressions in networking
performance: up to 8% under UDP flood.

I ran a small packet UDP flood using pktgen vs. a host b2b connected. On
the receiver side the UDP packets are processed by a simple user space
process that just reads and drops them:

https://github.com/netoptimizer/network-testing/blob/master/src/udp_sink.c

Not very useful from a functional PoV, but it helps to pin-point
bottlenecks in the networking stack.

When running a kernel with CONFIG_HARDENED_USERCOPY=y, I see a 5-8%
regression in the receive tput, compared to the same kernel without this
option enabled.

With CONFIG_HARDENED_USERCOPY=y, perf shows ~6% of CPU time spent
cumulatively in __check_object_size (~4%) and __virt_addr_valid (~2%).

The call-chain is:

__GI___libc_recvfrom
entry_SYSCALL_64_after_hwframe
do_syscall_64
__x64_sys_recvfrom
__sys_recvfrom
inet_recvmsg
udp_recvmsg
__check_object_size

udp_recvmsg() actually calls copy_to_iter() (inlined) and the latter
calls check_copy_size() (again, inlined).

A generic distro may want to enable HARDENED_USERCOPY in their default
kernel config, but at the same time, such a distro may want to avoid the
performance penalties of the default configuration and disable the
stricter check on a per-boot basis.

This change adds a boot parameter that conditionally disables
HARDENED_USERCOPY via "hardened_usercopy=off".

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
2018-07-04 08:04:52 -07:00
Kees Cook afcc90f862 usercopy: WARN() on slab cache usercopy region violations
This patch adds checking of usercopy cache whitelisting, and is modified
from Brad Spengler/PaX Team's PAX_USERCOPY whitelisting code in the
last public patch of grsecurity/PaX based on my understanding of the
code. Changes or omissions from the original code are mine and don't
reflect the original grsecurity/PaX code.

The SLAB and SLUB allocators are modified to WARN() on all copy operations
in which the kernel heap memory being modified falls outside of the cache's
defined usercopy region.

Based on an earlier patch from David Windsor.

Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-mm@kvack.org
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
2018-01-15 12:07:48 -08:00
Kees Cook f4e6e289cb usercopy: Include offset in hardened usercopy report
This refactors the hardened usercopy code so that failure reporting can
happen within the checking functions instead of at the top level. This
simplifies the return value handling and allows more details and offsets
to be included in the report. Having the offset can be much more helpful
in understanding hardened usercopy bugs.

Signed-off-by: Kees Cook <keescook@chromium.org>
2018-01-15 12:07:45 -08:00
Kees Cook b394d468e7 usercopy: Enhance and rename report_usercopy()
In preparation for refactoring the usercopy checks to pass offset to
the hardened usercopy report, this renames report_usercopy() to the
more accurate usercopy_abort(), marks it as noreturn because it is,
adds a hopefully helpful comment for anyone investigating such reports,
makes the function available to the slab allocators, and adds new "detail"
and "offset" arguments.

Signed-off-by: Kees Cook <keescook@chromium.org>
2018-01-15 12:07:44 -08:00
Kees Cook 4f5e838605 usercopy: Remove pointer from overflow report
Using %p was already mostly useless in the usercopy overflow reports,
so this removes it entirely to avoid confusion now that %p-hashing
is enabled.

Fixes: ad67b74d24 ("printk: hash addresses printed with %p")
Signed-off-by: Kees Cook <keescook@chromium.org>
2018-01-15 12:07:44 -08:00
Laura Abbott 517e1fbeb6 mm/usercopy: Drop extra is_vmalloc_or_module() check
Previously virt_addr_valid() was insufficient to validate if virt_to_page()
could be called on an address on arm64. This has since been fixed up so
there is no need for the extra check. Drop it.

Signed-off-by: Laura Abbott <labbott@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
2017-04-05 12:30:18 -07:00
Sahara 96dc4f9fb6 usercopy: Move enum for arch_within_stack_frames()
This patch moves the arch_within_stack_frames() return value enum up in
the header files so that per-architecture implementations can reuse the
same return values.

Signed-off-by: Sahara <keun-o.park@darkmatter.ae>
Signed-off-by: James Morse <james.morse@arm.com>
[kees: adjusted naming and commit log]
Signed-off-by: Kees Cook <keescook@chromium.org>
2017-04-04 14:30:29 -07:00
Ingo Molnar 299300258d sched/headers: Prepare for new header dependencies before moving code to <linux/sched/task.h>
We are going to split <linux/sched/task.h> out of <linux/sched.h>, which
will have to be picked up from other headers and a couple of .c files.

Create a trivial placeholder <linux/sched/task.h> file that just
maps to <linux/sched.h> to make this patch obviously correct and
bisectable.

Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 08:42:35 +01:00
Ingo Molnar 5b825c3af1 sched/headers: Prepare to remove <linux/cred.h> inclusion from <linux/sched.h>
Add #include <linux/cred.h> dependencies to all .c files that rely on
sched.h doing that for them.

Note that even if the number of files that need extra headers seems high,
it's still a net win, because <linux/sched.h> is included in over
2,200 files ...

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-03-02 08:42:31 +01:00
Laura Abbott 46f6236aa1 mm/usercopy: Switch to using lm_alias
The usercopy checking code currently calls __va(__pa(...)) to check for
aliases on symbols. Switch to using lm_alias instead.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-11 13:56:50 +00:00
Laura Abbott aa4f060111 mm: usercopy: Check for module addresses
While running a compile on arm64, I hit a memory exposure

usercopy: kernel memory exposure attempt detected from fffffc0000f3b1a8 (buffer_head) (1 bytes)
------------[ cut here ]------------
kernel BUG at mm/usercopy.c:75!
Internal error: Oops - BUG: 0 [#1] SMP
Modules linked in: ip6t_rpfilter ip6t_REJECT
nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_broute bridge stp
llc ebtable_nat ip6table_security ip6table_raw ip6table_nat
nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle
iptable_security iptable_raw iptable_nat nf_conntrack_ipv4
nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle
ebtable_filter ebtables ip6table_filter ip6_tables vfat fat xgene_edac
xgene_enet edac_core i2c_xgene_slimpro i2c_core at803x realtek xgene_dma
mdio_xgene gpio_dwapb gpio_xgene_sb xgene_rng mailbox_xgene_slimpro nfsd
auth_rpcgss nfs_acl lockd grace sunrpc xfs libcrc32c sdhci_of_arasan
sdhci_pltfm sdhci mmc_core xhci_plat_hcd gpio_keys
CPU: 0 PID: 19744 Comm: updatedb Tainted: G        W 4.8.0-rc3-threadinfo+ #1
Hardware name: AppliedMicro X-Gene Mustang Board/X-Gene Mustang Board, BIOS 3.06.12 Aug 12 2016
task: fffffe03df944c00 task.stack: fffffe00d128c000
PC is at __check_object_size+0x70/0x3f0
LR is at __check_object_size+0x70/0x3f0
...
[<fffffc00082b4280>] __check_object_size+0x70/0x3f0
[<fffffc00082cdc30>] filldir64+0x158/0x1a0
[<fffffc0000f327e8>] __fat_readdir+0x4a0/0x558 [fat]
[<fffffc0000f328d4>] fat_readdir+0x34/0x40 [fat]
[<fffffc00082cd8f8>] iterate_dir+0x190/0x1e0
[<fffffc00082cde58>] SyS_getdents64+0x88/0x120
[<fffffc0008082c70>] el0_svc_naked+0x24/0x28

fffffc0000f3b1a8 is a module address. Modules may have compiled-in
strings which could get copied to userspace. In this instance, it
looks like "." which matches with a size of 1 byte. Extend the
is_vmalloc_addr check to be is_vmalloc_or_module_addr to cover
all possible cases.

Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
2016-09-20 16:07:39 -07:00
Kees Cook 8e1f74ea02 usercopy: remove page-spanning test for now
A custom allocator without __GFP_COMP that copies to userspace has been
found in vmw_execbuf_process [1]. This disables the page-span checker by
placing it behind a CONFIG option, deferring it as future work so that
such cases can be tracked down later.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1373326

Reported-by: Vinson Lee <vlee@freedesktop.org>
Fixes: f5509cc18d ("mm: Hardened usercopy")
Signed-off-by: Kees Cook <keescook@chromium.org>
2016-09-07 11:33:26 -07:00
Josh Poimboeuf 94cd97af69 usercopy: fix overlap check for kernel text
When running with a local patch which moves the '_stext' symbol to the
very beginning of the kernel text area, I got the following panic with
CONFIG_HARDENED_USERCOPY:

  usercopy: kernel memory exposure attempt detected from ffff88103dfff000 (<linear kernel text>) (4096 bytes)
  ------------[ cut here ]------------
  kernel BUG at mm/usercopy.c:79!
  invalid opcode: 0000 [#1] SMP
  ...
  CPU: 0 PID: 4800 Comm: cp Not tainted 4.8.0-rc3.after+ #1
  Hardware name: Dell Inc. PowerEdge R720/0X3D66, BIOS 2.5.4 01/22/2016
  task: ffff880817444140 task.stack: ffff880816274000
  RIP: 0010:[<ffffffff8121c796>] __check_object_size+0x76/0x413
  RSP: 0018:ffff880816277c40 EFLAGS: 00010246
  RAX: 000000000000006b RBX: ffff88103dfff000 RCX: 0000000000000000
  RDX: 0000000000000000 RSI: ffff88081f80dfa8 RDI: ffff88081f80dfa8
  RBP: ffff880816277c90 R08: 000000000000054c R09: 0000000000000000
  R10: 0000000000000005 R11: 0000000000000006 R12: 0000000000001000
  R13: ffff88103e000000 R14: ffff88103dffffff R15: 0000000000000001
  FS:  00007fb9d1750800(0000) GS:ffff88081f800000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00000000021d2000 CR3: 000000081a08f000 CR4: 00000000001406f0
  Stack:
   ffff880816277cc8 0000000000010000 000000043de07000 0000000000000000
   0000000000001000 ffff880816277e60 0000000000001000 ffff880816277e28
   000000000000c000 0000000000001000 ffff880816277ce8 ffffffff8136c3a6
  Call Trace:
   [<ffffffff8136c3a6>] copy_page_to_iter_iovec+0xa6/0x1c0
   [<ffffffff8136e766>] copy_page_to_iter+0x16/0x90
   [<ffffffff811970e3>] generic_file_read_iter+0x3e3/0x7c0
   [<ffffffffa06a738d>] ? xfs_file_buffered_aio_write+0xad/0x260 [xfs]
   [<ffffffff816e6262>] ? down_read+0x12/0x40
   [<ffffffffa06a61b1>] xfs_file_buffered_aio_read+0x51/0xc0 [xfs]
   [<ffffffffa06a6692>] xfs_file_read_iter+0x62/0xb0 [xfs]
   [<ffffffff812224cf>] __vfs_read+0xdf/0x130
   [<ffffffff81222c9e>] vfs_read+0x8e/0x140
   [<ffffffff81224195>] SyS_read+0x55/0xc0
   [<ffffffff81003a47>] do_syscall_64+0x67/0x160
   [<ffffffff816e8421>] entry_SYSCALL64_slow_path+0x25/0x25
  RIP: 0033:[<00007fb9d0c33c00>] 0x7fb9d0c33c00
  RSP: 002b:00007ffc9c262f28 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
  RAX: ffffffffffffffda RBX: fffffffffff8ffff RCX: 00007fb9d0c33c00
  RDX: 0000000000010000 RSI: 00000000021c3000 RDI: 0000000000000004
  RBP: 00000000021c3000 R08: 0000000000000000 R09: 00007ffc9c264d6c
  R10: 00007ffc9c262c50 R11: 0000000000000246 R12: 0000000000010000
  R13: 00007ffc9c2630b0 R14: 0000000000000004 R15: 0000000000010000
  Code: 81 48 0f 44 d0 48 c7 c6 90 4d a3 81 48 c7 c0 bb b3 a2 81 48 0f 44 f0 4d 89 e1 48 89 d9 48 c7 c7 68 16 a3 81 31 c0 e8 f4 57 f7 ff <0f> 0b 48 8d 90 00 40 00 00 48 39 d3 0f 83 22 01 00 00 48 39 c3
  RIP  [<ffffffff8121c796>] __check_object_size+0x76/0x413
   RSP <ffff880816277c40>

The checked object's range [ffff88103dfff000, ffff88103e000000) is
valid, so there shouldn't have been a BUG.  The hardened usercopy code
got confused because the range's ending address is the same as the
kernel's text starting address at 0xffff88103e000000.  The overlap check
is slightly off.

Fixes: f5509cc18d ("mm: Hardened usercopy")
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
2016-08-22 19:10:51 -07:00
Eric Biggers 7329a65587 usercopy: avoid potentially undefined behavior in pointer math
check_bogus_address() checked for pointer overflow using this expression,
where 'ptr' has type 'const void *':

	ptr + n < ptr

Since pointer wraparound is undefined behavior, gcc at -O2 by default
treats it like the following, which would not behave as intended:

	(long)n < 0

Fortunately, this doesn't currently happen for kernel code because kernel
code is compiled with -fno-strict-overflow.  But the expression should be
fixed anyway to use well-defined integer arithmetic, since it could be
treated differently by different compilers in the future or could be
reported by tools checking for undefined behavior.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
2016-08-22 19:07:55 -07:00
Kees Cook f5509cc18d mm: Hardened usercopy
This is the start of porting PAX_USERCOPY into the mainline kernel. This
is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
work is based on code by PaX Team and Brad Spengler, and an earlier port
from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.

This patch contains the logic for validating several conditions when
performing copy_to_user() and copy_from_user() on the kernel object
being copied to/from:
- address range doesn't wrap around
- address range isn't NULL or zero-allocated (with a non-zero copy size)
- if on the slab allocator:
  - copy size must be less than or equal to the object size (when the
    check is implemented in the allocator, which appears in subsequent
    patches)
- otherwise, object must not span page allocations (excepting Reserved
  and CMA ranges)
- if on the stack:
  - object must not extend before/after the current process stack
  - object must be contained by a valid stack frame (when there is
    arch/build support for identifying stack frames)
- object must not overlap with kernel text

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
2016-07-26 14:41:47 -07:00