Commit Graph

63 Commits

Rafael Aquini c40b6b19a2 mm/debug_vm_pgtable: drop RANDOM_ORVALUE trick
JIRA: https://issues.redhat.com/browse/RHEL-27745

This patch is a backport of the following upstream commit:
commit 0b1ef4fde7a24909ff2afacffd0d6afa28b73652
Author: Peter Xu <peterx@redhat.com>
Date:   Thu May 23 09:21:39 2024 -0400

    mm/debug_vm_pgtable: drop RANDOM_ORVALUE trick

    Macro RANDOM_ORVALUE was used to make sure the pgtable entry will be
    populated with !none data in clear tests.

    The RANDOM_ORVALUE tried to cover mostly all the bits in a pgtable entry,
    even if there's no discussion on whether all the bits will be valid.  Both
    S390 and PPC64 have their own masks to avoid touching some bits.  Now it's
    the turn for x86_64.

    The issue is that a recent report from Mikhail Gavrilov showed this can
    trigger a warning from the pte set check newly added in commit 8430557fc5
    on the writable vs. userfaultfd-wp bit: the check itself is valid, but the
    random pte is not.  We could choose to mask more bits out.

    However, the need to set up such random bits is questionable, as !none is
    now already guaranteed in the cases below:

      - For pte level, the pgtable entry will be installed with value from
        pfn_pte(), where pfn points to a valid page.  Hence the pte will be
        !none already if populated with pfn_pte().

      - For upper-than-pte level, the pgtable entry should contain a directory
        entry always, which is also !none.

    All the cases look good enough to test a pxx_clear() helper.  Instead
    of extending the bitmask, drop the "set random bits" trick completely.  Add
    some warning guards to make sure the entries will be !none before clear().

    Link: https://lkml.kernel.org/r/20240523132139.289719-1-peterx@redhat.com
    Fixes: 8430557fc584 ("mm/page_table_check: support userfault wr-protect entries")
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
    Link: https://lore.kernel.org/r/CABXGCsMB9A8-X+Np_Q+fWLURYL_0t3Y-MdoNabDM-Lzk58-DGA@mail.gmail.com
    Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
    Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
    Acked-by: David Hildenbrand <david@redhat.com>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Gavin Shan <gshan@redhat.com>
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <raquini@redhat.com>
2024-12-09 12:25:15 -05:00
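The !none guard this commit describes can be sketched as a small userspace model; `pte_t`, `pfn_pte()`, `pte_none()`, the bit layout, and the test function here are simplified stand-ins for the kernel helpers in mm/debug_vm_pgtable.c, not the real definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Userspace model of the clear test after the patch: no RANDOM_ORVALUE
 * is OR-ed in; instead the test warns if the entry is unexpectedly none
 * before clearing it.  pte_t and the helpers are simplified stand-ins. */
typedef struct { uint64_t val; } pte_t;

static pte_t pfn_pte(uint64_t pfn) { return (pte_t){ (pfn << 12) | 1 }; }
static int pte_none(pte_t pte) { return pte.val == 0; }
static void pte_clear(pte_t *ptep) { ptep->val = 0; }

static int pte_clear_test(pte_t *ptep)
{
    if (pte_none(*ptep)) {   /* WARN_ON(pte_none(pte)) upstream */
        fprintf(stderr, "warning: pte already none before clear\n");
        return -1;
    }
    pte_clear(ptep);
    return pte_none(*ptep) ? 0 : -1;
}
```

The point is the ordering: the entry is populated from pfn_pte() with a valid pfn first, so it is already !none and no random bits are needed.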
Rafael Aquini c8c9c0b259 mm, treewide: rename MAX_ORDER to MAX_PAGE_ORDER
JIRA: https://issues.redhat.com/browse/RHEL-27745
Conflicts:
  * arch/*/Kconfig: all hunks dropped as there were only text blurbs and comments
     being changed with no functional changes whatsoever, and RHEL9 is missing
     several (unrelated) commits to these arches that transform the text blurbs in
     the way these non-functional hunks were expecting;
  * drivers/accel/qaic/qaic_data.c: hunk dropped due to RHEL-only commit
     083c0cdce2 ("Merge DRM changes from upstream v6.8..v6.9");
  * drivers/gpu/drm/i915/gem/selftests/huge_pages.c: hunk dropped due to RHEL-only
     commit ca8b16c11b ("Merge DRM changes from upstream v6.7..v6.8");
  * drivers/gpu/drm/ttm/tests/ttm_pool_test.c: all hunks dropped due to RHEL-only
     commit ca8b16c11b ("Merge DRM changes from upstream v6.7..v6.8");
  * drivers/video/fbdev/vermilion/vermilion.c: hunk dropped as RHEL9 misses
     commit dbe7e429fe ("vmlfb: framebuffer driver for Intel Vermilion Range");
  * include/linux/pageblock-flags.h: differences due to out-of-order backport
    of upstream commits 72801513b2bf ("mm: set pageblock_order to HPAGE_PMD_ORDER
    in case with !CONFIG_HUGETLB_PAGE but THP enabled"), and 3a7e02c040b1
    ("minmax: avoid overly complicated constant expressions in VM code");
  * mm/mm_init.c: differences in the 3rd and 4th hunks are due to RHEL
     backport commit 1845b92dcf ("mm: move most of core MM initialization to
     mm/mm_init.c") ignoring the out-of-order backport of commit 3f6dac0fd1b8
     ("mm/page_alloc: make deferred page init free pages in MAX_ORDER blocks")
     thus partially reverting the changes introduced by the latter;

This patch is a backport of the following upstream commit:
commit 5e0a760b44417f7cadd79de2204d6247109558a0
Author: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Date:   Thu Dec 28 17:47:04 2023 +0300

    mm, treewide: rename MAX_ORDER to MAX_PAGE_ORDER

    commit 23baf831a32c ("mm, treewide: redefine MAX_ORDER sanely") has
    changed the definition of MAX_ORDER to be inclusive.  This has caused
    issues with code that was not yet upstream and depended on the previous
    definition.

    To draw attention to the altered meaning of the define, rename MAX_ORDER
    to MAX_PAGE_ORDER.

    Link: https://lkml.kernel.org/r/20231228144704.14033-2-kirill.shutemov@linux.intel.com
    Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <raquini@redhat.com>
2024-12-09 12:24:17 -05:00
Rafael Aquini dcc6021648 mm/debug_vm_pgtable: fix BUG_ON with pud advanced test
JIRA: https://issues.redhat.com/browse/RHEL-27743

This patch is a backport of the following upstream commit:
commit 720da1e593b85a550593b415bf1d79a053133451
Author: Aneesh Kumar K.V (IBM) <aneesh.kumar@kernel.org>
Date:   Mon Jan 29 11:30:22 2024 +0530

    mm/debug_vm_pgtable: fix BUG_ON with pud advanced test

    Architectures like powerpc add debug checks to ensure we find only devmap
    PUD pte entries.  These debug checks are only done with CONFIG_DEBUG_VM.
    This patch marks the ptes used for the PUD advanced test as devmap pte
    entries so that we don't hit the debug checks on architectures like ppc64,
    as below.

    WARNING: CPU: 2 PID: 1 at arch/powerpc/mm/book3s64/radix_pgtable.c:1382 radix__pud_hugepage_update+0x38/0x138
    ....
    NIP [c0000000000a7004] radix__pud_hugepage_update+0x38/0x138
    LR [c0000000000a77a8] radix__pudp_huge_get_and_clear+0x28/0x60
    Call Trace:
    [c000000004a2f950] [c000000004a2f9a0] 0xc000000004a2f9a0 (unreliable)
    [c000000004a2f980] [000d34c100000000] 0xd34c100000000
    [c000000004a2f9a0] [c00000000206ba98] pud_advanced_tests+0x118/0x334
    [c000000004a2fa40] [c00000000206db34] debug_vm_pgtable+0xcbc/0x1c48
    [c000000004a2fc10] [c00000000000fd28] do_one_initcall+0x60/0x388

    Also

     kernel BUG at arch/powerpc/mm/book3s64/pgtable.c:202!
     ....

     NIP [c000000000096510] pudp_huge_get_and_clear_full+0x98/0x174
     LR [c00000000206bb34] pud_advanced_tests+0x1b4/0x334
     Call Trace:
     [c000000004a2f950] [000d34c100000000] 0xd34c100000000 (unreliable)
     [c000000004a2f9a0] [c00000000206bb34] pud_advanced_tests+0x1b4/0x334
     [c000000004a2fa40] [c00000000206db34] debug_vm_pgtable+0xcbc/0x1c48
     [c000000004a2fc10] [c00000000000fd28] do_one_initcall+0x60/0x388

    Link: https://lkml.kernel.org/r/20240129060022.68044-1-aneesh.kumar@kernel.org
    Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
    Signed-off-by: Aneesh Kumar K.V (IBM) <aneesh.kumar@kernel.org>
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <raquini@redhat.com>
2024-10-01 11:22:40 -04:00
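The fix can be modeled in userspace: mark the test's PUD entry devmap before handing it to code paths that, on ppc64, debug-check for devmap-only PUD hugepages. The bit position and the check below are illustrative stand-ins, not the real ppc64 layout:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the fix: the PUD advanced test now marks its entry
 * devmap so the ppc64 debug check (which expects only devmap PUD
 * hugepages) passes.  Bit position and the check are illustrative. */
typedef struct { uint64_t val; } pud_t;
#define PUD_DEVMAP (1ULL << 5)

static pud_t pud_mkdevmap(pud_t pud) { pud.val |= PUD_DEVMAP; return pud; }
static int pud_devmap(pud_t pud) { return !!(pud.val & PUD_DEVMAP); }

/* Stand-in for the debug check that fired the WARN/BUG in the traces. */
static int hugepage_update_allowed(pud_t pud) { return pud_devmap(pud); }
```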
Rafael Aquini b4ddb6958f mm: change pudp_huge_get_and_clear_full take vm_area_struct as arg
JIRA: https://issues.redhat.com/browse/RHEL-27743

This patch is a backport of the following upstream commit:
commit f32928ab6fe5abac5a270b6c0bffc4ce77ee8c42
Author: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Date:   Tue Jul 25 00:37:48 2023 +0530

    mm: change pudp_huge_get_and_clear_full take vm_area_struct as arg

    We will use this in a later patch to do a tlb flush when clearing pud
    entries on powerpc.  This is similar to commit 93a98695f2 ("mm: change
    pmdp_huge_get_and_clear_full take vm_area_struct as arg").

    Link: https://lkml.kernel.org/r/20230724190759.483013-3-aneesh.kumar@linux.ibm.com
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Joao Martins <joao.m.martins@oracle.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Muchun Song <muchun.song@linux.dev>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Oscar Salvador <osalvador@suse.de>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <raquini@redhat.com>
2024-10-01 11:19:43 -04:00
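A sketch of the signature change with stand-in types: the helper now receives the VMA, from which the mm is still reachable via vm_mm, giving the arch a place to flush the TLB for the cleared range:

```c
#include <assert.h>

/* Stand-in types sketching the new signature: the VMA is passed so an
 * architecture can flush the TLB for the cleared PUD range; the old
 * mm_struct argument is still reachable as vma->vm_mm. */
typedef struct { unsigned long val; } pud_t;
struct mm_struct { int dummy; };
struct vm_area_struct { struct mm_struct *vm_mm; };

static pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
                                          unsigned long addr, pud_t *pudp,
                                          int full)
{
    (void)addr; (void)full;
    (void)vma->vm_mm;           /* arch code may use the mm / flush here */
    pud_t old = *pudp;
    pudp->val = 0;
    return old;
}
```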
Rafael Aquini 81543c1d8d mm/hugepage pud: allow arch-specific helper function to check huge page pud support
JIRA: https://issues.redhat.com/browse/RHEL-27743

This patch is a backport of the following upstream commit:
commit 348ad1606f4c09e3dc28092baac474e10a252471
Author: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Date:   Tue Jul 25 00:37:47 2023 +0530

    mm/hugepage pud: allow arch-specific helper function to check huge page pud support

    Patch series "Add support for DAX vmemmap optimization for ppc64", v6.

    This patch series implements changes required to support DAX vmemmap
    optimization for ppc64.  The vmemmap optimization is only enabled with
    radix MMU translation and 1GB PUD mapping with 64K page size.

    The patch series also splits the hugetlb vmemmap optimization as a
    separate Kconfig variable so that architectures can enable DAX vmemmap
    optimization without enabling hugetlb vmemmap optimization.  This should
    enable architectures like arm64 to enable DAX vmemmap optimization even
    when they cannot enable hugetlb vmemmap optimization.  More details are in
    the patch "mm/vmemmap optimization: Split hugetlb and devdax vmemmap
    optimization".

    With 64K page size for 16384 pages added (1G) we save 14 pages
    With 4K page size for 262144 pages added (1G) we save 4094 pages
    With 4K page size for 512 pages added (2M) we save 6 pages

    This patch (of 13):

    Architectures like powerpc would like to enable transparent huge page pud
    support only with radix translation.  To support that add
    has_transparent_pud_hugepage() helper that architectures can override.

    [aneesh.kumar@linux.ibm.com: use the new has_transparent_pud_hugepage()]
      Link: https://lkml.kernel.org/r/87tttrvtaj.fsf@linux.ibm.com
    Link: https://lkml.kernel.org/r/20230724190759.483013-1-aneesh.kumar@linux.ibm.com
    Link: https://lkml.kernel.org/r/20230724190759.483013-2-aneesh.kumar@linux.ibm.com
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Joao Martins <joao.m.martins@oracle.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Muchun Song <muchun.song@linux.dev>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Oscar Salvador <osalvador@suse.de>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Rafael Aquini <raquini@redhat.com>
2024-10-01 11:19:42 -04:00
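The override mechanism can be sketched with the usual kernel pattern: an arch header defines the macro, and a generic header supplies a fallback only when it is absent. This is a userspace model with a hypothetical name for the arch-side predicate:

```c
#include <assert.h>

/* Userspace model of the arch-override pattern: the "arch" header
 * defines the macro first; the "generic" #ifndef fallback then stays
 * unused.  arch_radix_enabled() is a hypothetical stand-in. */
static int radix_enabled_flag;
static int arch_radix_enabled(void) { return radix_enabled_flag; }

/* "arch" side: override the helper */
#define has_transparent_pud_hugepage() arch_radix_enabled()

/* "generic" side: default to no PUD THP support when not overridden */
#ifndef has_transparent_pud_hugepage
#define has_transparent_pud_hugepage() 0
#endif
```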
Chris von Recklinghausen db9db78026 mm: prefer xxx_page() alloc/free functions for order-0 pages
JIRA: https://issues.redhat.com/browse/RHEL-27741

commit dcc1be119071f034f3123d3c618d2ef70c80125e
Author: Lorenzo Stoakes <lstoakes@gmail.com>
Date:   Mon Mar 13 12:27:14 2023 +0000

    mm: prefer xxx_page() alloc/free functions for order-0 pages

    Update instances of alloc_pages(..., 0), __get_free_pages(..., 0) and
    __free_pages(..., 0) to use alloc_page(), __get_free_page() and
    __free_page() respectively in core code.

    Link: https://lkml.kernel.org/r/50c48ca4789f1da2a65795f2346f5ae3eff7d665.1678710232.git.lstoakes@gmail.com
    Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
    Acked-by: Mel Gorman <mgorman@techsingularity.net>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Christoph Hellwig <hch@infradead.org>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2024-04-30 07:00:13 -04:00
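The relationship between the wrappers can be modeled in userspace; the commit is a mechanical substitution at call sites where the order is always 0. These are malloc-backed stand-ins, not the kernel allocators:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace stand-ins for the order-0 convenience wrappers: alloc_page()
 * is alloc_pages() with order 0, and __free_page() likewise.  The commit
 * switches call sites that always pass order 0 to the short forms. */
struct page { unsigned long nr_pages; };

static struct page *alloc_pages(unsigned gfp, unsigned order)
{
    (void)gfp;
    struct page *p = malloc(sizeof(*p));
    if (p)
        p->nr_pages = 1UL << order;
    return p;
}
#define alloc_page(gfp) alloc_pages((gfp), 0)

static void __free_pages(struct page *p, unsigned order) { (void)order; free(p); }
#define __free_page(p) __free_pages((p), 0)
```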
Aristeu Rozanski e7030d52b7 mm: remove __HAVE_ARCH_PTE_SWP_EXCLUSIVE
JIRA: https://issues.redhat.com/browse/RHEL-27740
Tested: by me
Conflicts: dropped arches we don't support

commit 950fe885a89770619e315f9b46301eebf0aab7b3
Author: David Hildenbrand <david@redhat.com>
Date:   Fri Jan 13 18:10:26 2023 +0100

    mm: remove __HAVE_ARCH_PTE_SWP_EXCLUSIVE

    __HAVE_ARCH_PTE_SWP_EXCLUSIVE is now supported by all architectures that
    support swp PTEs, so let's drop it.

    Link: https://lkml.kernel.org/r/20230113171026.582290-27-david@redhat.com
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Aristeu Rozanski <arozansk@redhat.com>
2024-04-29 14:33:08 -04:00
Aristeu Rozanski 55e06c119d mm/debug_vm_pgtable: more pte_swp_exclusive() sanity checks
JIRA: https://issues.redhat.com/browse/RHEL-27740
Tested: by me

commit 2321ba3e3733f513e46e29b9c70512ecddbf1085
Author: David Hildenbrand <david@redhat.com>
Date:   Fri Jan 13 18:10:01 2023 +0100

    mm/debug_vm_pgtable: more pte_swp_exclusive() sanity checks

    Patch series "mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE on all
    architectures with swap PTEs".

    This is the follow-up on [1]:
            [PATCH v2 0/8] mm: COW fixes part 3: reliable GUP R/W FOLL_GET of
            anonymous pages

    After we implemented __HAVE_ARCH_PTE_SWP_EXCLUSIVE on most prominent
    enterprise architectures, implement __HAVE_ARCH_PTE_SWP_EXCLUSIVE on all
    remaining architectures that support swap PTEs.

    This makes sure that exclusive anonymous pages will stay exclusive, even
    after they were swapped out -- for example, making GUP R/W FOLL_GET of
    anonymous pages reliable.  Details can be found in [1].

    This primarily fixes remaining known O_DIRECT memory corruptions that can
    happen on concurrent swapout, whereby we can lose DMA reads to a page
    (modifying the user page by writing to it).

    To verify, there are two test cases (requiring swap space, obviously):
    (1) The O_DIRECT+swapout test case [2] from Andrea. This test case tries
        triggering a race condition.
    (2) My vmsplice() test case [3] that tries to detect if the exclusive
        marker was lost during swapout, not relying on a race condition.

    For example, on 32bit x86 (with and without PAE), my test case fails
    without these patches:
            $ ./test_swp_exclusive
            FAIL: page was replaced during COW
    But succeeds with these patches:
            $ ./test_swp_exclusive
            PASS: page was not replaced during COW

    Why implement __HAVE_ARCH_PTE_SWP_EXCLUSIVE for all architectures, even
    the ones where swap support might be in a questionable state?  This is the
    first step towards removing "readable_exclusive" migration entries, and
    instead using pte_swp_exclusive() also with (readable) migration entries
    instead (as suggested by Peter).  The only missing piece for that is
    supporting pmd_swp_exclusive() on relevant architectures with THP
    migration support.

    As all relevant architectures now implement __HAVE_ARCH_PTE_SWP_EXCLUSIVE,
    we can drop __HAVE_ARCH_PTE_SWP_EXCLUSIVE in the last patch.

    I tried cross-compiling all relevant setups and tested on x86 and sparc64
    so far.

    CCing arch maintainers only on this cover letter and on the respective
    patch(es).

    [1] https://lkml.kernel.org/r/20220329164329.208407-1-david@redhat.com
    [2] https://gitlab.com/aarcange/kernel-testcases-for-v5.11/-/blob/main/page_count_do_wp_page-swap.c
    [3] https://gitlab.com/davidhildenbrand/scratchspace/-/blob/main/test_swp_exclusive.c

    This patch (of 26):

    We want to implement __HAVE_ARCH_PTE_SWP_EXCLUSIVE on all architectures.
    Let's extend our sanity checks, especially testing that our PTE bit does
    not affect:

    * is_swap_pte() -> pte_present() and pte_none()
    * the swap entry + type
    * pte_swp_soft_dirty()

    Especially, the pfn_pte() is dodgy when the swap PTE layout differs
    heavily from ordinary PTEs.  Let's properly construct a swap PTE from swap
    type+offset.

    [david@redhat.com: fix build]
      Link: https://lkml.kernel.org/r/6aaad548-cf48-77fa-9d6c-db83d724b2eb@redhat.com
    Link: https://lkml.kernel.org/r/20230113171026.582290-1-david@redhat.com
    Link: https://lkml.kernel.org/r/20230113171026.582290-2-david@redhat.com
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
    Cc: <aou@eecs.berkeley.edu>
    Cc: Borislav Petkov (AMD) <bp@alien8.de>
    Cc: Brian Cain <bcain@quicinc.com>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: David S. Miller <davem@davemloft.net>
    Cc: Dinh Nguyen <dinguyen@kernel.org>
    Cc: Geert Uytterhoeven <geert@linux-m68k.org>
    Cc: Greg Ungerer <gerg@linux-m68k.org>
    Cc: Guo Ren <guoren@kernel.org>
    Cc: Helge Deller <deller@gmx.de>
    Cc: H. Peter Anvin (Intel) <hpa@zytor.com>
    Cc: Huacai Chen <chenhuacai@kernel.org>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Cc: Jason Gunthorpe <jgg@nvidia.com>
    Cc: Johannes Berg <johannes@sipsolutions.net>
    Cc: John Hubbard <jhubbard@nvidia.com>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Max Filippov <jcmvbkbc@gmail.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Mike Rapoport <rppt@linux.ibm.com>
    Cc: Nadav Amit <namit@vmware.com>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Palmer Dabbelt <palmer@dabbelt.com>
    Cc: Paul Walmsley <paul.walmsley@sifive.com>
    Cc: Peter Xu <peterx@redhat.com>
    Cc: Richard Henderson <richard.henderson@linaro.org>
    Cc: Richard Weinberger <richard@nod.at>
    Cc: Rich Felker <dalias@libc.org>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Stafford Horne <shorne@gmail.com>
    Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Vineet Gupta <vgupta@kernel.org>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Xuerui Wang <kernel@xen0n.name>
    Cc: Yang Shi <shy828301@gmail.com>
    Cc: Yoshinori Sato <ysato@users.osdn.me>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Aristeu Rozanski <arozansk@redhat.com>
2024-04-29 14:33:08 -04:00
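The sanity check's idea, building the swap PTE from type+offset and verifying that the exclusive bit leaves the rest of the entry intact, can be sketched with a hypothetical 64-bit layout. The field positions below are invented for the model and do not match any real architecture:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical swap-PTE layout for the model (not a real arch's):
 * bit 1 = swap exclusive, bits 2..6 = swap type, bits 7..63 = offset.
 * The check: setting the exclusive bit must not disturb type or offset. */
#define SWP_EXCL        (1ULL << 1)
#define SWP_TYPE_SHIFT  2
#define SWP_TYPE_MASK   0x1fULL
#define SWP_OFF_SHIFT   7

typedef struct { uint64_t val; } pte_t;

static pte_t swp_entry_to_pte(unsigned type, uint64_t offset)
{
    return (pte_t){ (((uint64_t)type & SWP_TYPE_MASK) << SWP_TYPE_SHIFT)
                    | (offset << SWP_OFF_SHIFT) };
}
static unsigned swp_type(pte_t pte) { return (pte.val >> SWP_TYPE_SHIFT) & SWP_TYPE_MASK; }
static uint64_t swp_offset(pte_t pte) { return pte.val >> SWP_OFF_SHIFT; }
static pte_t pte_swp_mkexclusive(pte_t pte) { pte.val |= SWP_EXCL; return pte; }
static int pte_swp_exclusive(pte_t pte) { return !!(pte.val & SWP_EXCL); }
```

Constructing the entry from type+offset, rather than from pfn_pte(), is exactly what the commit recommends when the swap PTE layout differs from ordinary PTEs.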
Audra Mitchell b5ecc6bed9 mm: debug_vm_pgtable: use VM_ACCESS_FLAGS
JIRA: https://issues.redhat.com/browse/RHEL-27739

This patch is a backport of the following upstream commit:
commit d7e679b6f9d9f7337f0fdc5011f2ecc9b16f821b
Author: Kefeng Wang <wangkefeng.wang@huawei.com>
Date:   Wed Oct 19 11:49:44 2022 +0800

    mm: debug_vm_pgtable: use VM_ACCESS_FLAGS

    Directly use VM_ACCESS_FLAGS instead of VMFLAGS.

    Link: https://lkml.kernel.org/r/20221019034945.93081-5-wangkefeng.wang@huawei.com
    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Cc: Alex Deucher <alexander.deucher@amd.com>
    Cc: "Christian König" <christian.koenig@amd.com>
    Cc: Daniel Vetter <daniel@ffwll.ch>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: David Airlie <airlied@gmail.com>
    Cc: Dinh Nguyen <dinguyen@kernel.org>
    Cc: Jarkko Sakkinen <jarkko@kernel.org>
    Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Audra Mitchell <audra@redhat.com>
2024-04-09 09:42:52 -04:00
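For reference, VM_ACCESS_FLAGS is simply the shared macro for the three access bits; the commit swaps an open-coded local mask for it. The bit values below are the conventional vm_flags ones:

```c
#include <assert.h>

/* The shared define the commit switches to, using the conventional
 * vm_flags access bits. */
#define VM_READ  0x00000001UL
#define VM_WRITE 0x00000002UL
#define VM_EXEC  0x00000004UL
#define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
```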
Prarit Bhargava 25cf7e4e50 mm: Make pte_mkwrite() take a VMA
JIRA: https://issues.redhat.com/browse/RHEL-25415

Conflicts: This is a rip-and-replace of the one-argument pte_mkwrite()
with the two-argument pte_mkwrite().  There are upstream uses that are
not yet in RHEL9.

commit 161e393c0f63592a3b95bdd8b55752653763fc6d
Author: Rick Edgecombe <rick.p.edgecombe@intel.com>
Date:   Mon Jun 12 17:10:29 2023 -0700

    mm: Make pte_mkwrite() take a VMA

    The x86 Shadow stack feature includes a new type of memory called shadow
    stack. This shadow stack memory has some unusual properties, which requires
    some core mm changes to function properly.

    One of these unusual properties is that shadow stack memory is writable,
    but only in limited ways. These limits are applied via a specific PTE
    bit combination. Nevertheless, the memory is writable, and core mm code
    will need to apply the writable permissions in the typical paths that
    call pte_mkwrite(). Future patches will make pte_mkwrite() take a VMA, so
    that the x86 implementation of it can know whether to create regular
    writable or shadow stack mappings.

    But there are a couple of challenges to this. Modifying the signatures of
    each arch pte_mkwrite() implementation would be error prone because some
    are generated with macros and would need to be re-implemented. Also, some
    pte_mkwrite() callers operate on kernel memory without a VMA.

    So this can be done in a three step process. First pte_mkwrite() can be
    renamed to pte_mkwrite_novma() in each arch, with a generic pte_mkwrite()
    added that just calls pte_mkwrite_novma(). Next callers without a VMA can
    be moved to pte_mkwrite_novma(). And lastly, pte_mkwrite() and all callers
    can be changed to take/pass a VMA.

    Previous work renamed pte_mkwrite() to pte_mkwrite_novma() and converted
    callers that don't have a VMA to use pte_mkwrite_novma().  So now change
    pte_mkwrite() to take a VMA and change the remaining callers to pass a
    VMA.  Apply the same changes for pmd_mkwrite().

    No functional change.

    Suggested-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
    Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
    Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
    Acked-by: David Hildenbrand <david@redhat.com>
    Link: https://lore.kernel.org/all/20230613001108.3040476-4-rick.p.edgecombe%40intel.com

Omitted-fix: f441ff73f1ec powerpc: Fix pud_mkwrite() definition after pte_mkwrite() API changes
	pud_mkwrite() not in RHEL9 code for powerpc (removed previously)
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
2024-03-20 09:43:13 -04:00
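The end state of the three-step migration can be sketched in userspace: pte_mkwrite() takes a VMA and can pick a shadow-stack encoding, while callers without a VMA keep using pte_mkwrite_novma(). The flag and bit values here are illustrative, not the x86 definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the finished migration.  Flag and bit values are
 * illustrative stand-ins. */
typedef struct { uint64_t val; } pte_t;
struct vm_area_struct { unsigned long vm_flags; };

#define PTE_WRITE       (1ULL << 1)
#define PTE_SHSTK       (1ULL << 2)   /* hypothetical shadow-stack encoding */
#define VM_SHADOW_STACK (1UL << 3)

/* Step 1: the old single-argument helper lives on as _novma() */
static pte_t pte_mkwrite_novma(pte_t pte) { pte.val |= PTE_WRITE; return pte; }

/* Step 3: pte_mkwrite() takes the VMA so the arch can choose the
 * limited-writability shadow-stack encoding for those mappings. */
static pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
{
    if (vma->vm_flags & VM_SHADOW_STACK) {
        pte.val |= PTE_SHSTK;
        return pte;
    }
    return pte_mkwrite_novma(pte);
}
```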
Paolo Bonzini 538bf6f332 mm, treewide: redefine MAX_ORDER sanely
JIRA: https://issues.redhat.com/browse/RHEL-10059

MAX_ORDER is currently defined as the number of orders the page allocator
supports: the user can ask the buddy allocator for page orders between 0 and
MAX_ORDER-1.

This definition is counter-intuitive and has led to a number of bugs all over
the kernel.

Change the definition of MAX_ORDER to be inclusive: the range of orders
user can ask from buddy allocator is 0..MAX_ORDER now.

[kirill@shutemov.name: fix min() warning]
  Link: https://lkml.kernel.org/r/20230315153800.32wib3n5rickolvh@box
[akpm@linux-foundation.org: fix another min_t warning]
[kirill@shutemov.name: fixups per Zi Yan]
  Link: https://lkml.kernel.org/r/20230316232144.b7ic4cif4kjiabws@box.shutemov.name
[akpm@linux-foundation.org: fix underlining in docs]
  Link: https://lore.kernel.org/oe-kbuild-all/202303191025.VRCTk6mP-lkp@intel.com/
Link: https://lkml.kernel.org/r/20230315113133.11326-11-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Michael Ellerman <mpe@ellerman.id.au>	[powerpc]
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 23baf831a32c04f9a968812511540b1b3e648bf5)

[RHEL: Fix conflicts by changing MAX_ORDER - 1 to MAX_ORDER,
       ">= MAX_ORDER" to "> MAX_ORDER", etc.]

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-10-30 09:12:37 +01:00
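The RHEL conflict-resolution note above is the mechanical consequence of the inclusive definition; a minimal model of the before/after bounds check, with illustrative order counts:

```c
#include <assert.h>

/* Both defines describe the same 11 orders (0..10); only the meaning of
 * the name changes, which is why the backport rewrites "MAX_ORDER - 1"
 * to "MAX_ORDER" and ">= MAX_ORDER" to "> MAX_ORDER". */
#define OLD_MAX_ORDER 11   /* exclusive upper bound */
#define NEW_MAX_ORDER 10   /* inclusive upper bound */

static int old_order_valid(int order) { return order < OLD_MAX_ORDER; }
static int new_order_valid(int order) { return order <= NEW_MAX_ORDER; }
```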
Chris von Recklinghausen 414cb8f393 mm/debug_vm_pgtable,page_table_check: warn pte map fails
JIRA: https://issues.redhat.com/browse/RHEL-1848

commit 9f2bad096d2f84751fd4559fcd4cdda1a2af1976
Author: Hugh Dickins <hughd@google.com>
Date:   Thu Jun 8 18:27:52 2023 -0700

    mm/debug_vm_pgtable,page_table_check: warn pte map fails

    Failures here would be surprising: pte_advanced_tests(),
    pte_clear_tests(), and __page_table_check_pte_clear_range() each issue a
    warning if pte_offset_map() or pte_offset_map_lock() fails.

    Link: https://lkml.kernel.org/r/3ea9e4f-e5cf-d7d9-4c2-291b3c5a3636@google.com
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Cc: Alistair Popple <apopple@nvidia.com>
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Axel Rasmussen <axelrasmussen@google.com>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Christoph Hellwig <hch@infradead.org>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: "Huang, Ying" <ying.huang@intel.com>
    Cc: Ira Weiny <ira.weiny@intel.com>
    Cc: Jason Gunthorpe <jgg@ziepe.ca>
    Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Lorenzo Stoakes <lstoakes@gmail.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Miaohe Lin <linmiaohe@huawei.com>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Mike Rapoport (IBM) <rppt@kernel.org>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
    Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
    Cc: Peter Xu <peterx@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Qi Zheng <zhengqi.arch@bytedance.com>
    Cc: Ralph Campbell <rcampbell@nvidia.com>
    Cc: Ryan Roberts <ryan.roberts@arm.com>
    Cc: SeongJae Park <sj@kernel.org>
    Cc: Song Liu <song@kernel.org>
    Cc: Steven Price <steven.price@arm.com>
    Cc: Suren Baghdasaryan <surenb@google.com>
    Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Yang Shi <shy828301@gmail.com>
    Cc: Yu Zhao <yuzhao@google.com>
    Cc: Zack Rusin <zackr@vmware.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-10-20 06:16:16 -04:00
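The pattern the commit adds can be sketched as: pte_offset_map() may now fail, so the test warns and bails out instead of dereferencing a NULL pte pointer. Stand-in types, userspace model:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Userspace model of the added pattern.  Types and helpers are
 * simplified stand-ins for the kernel's. */
typedef struct { unsigned long val; } pte_t;

static pte_t *pte_offset_map(pte_t *backing) { return backing; /* may be NULL */ }

static int pte_map_and_clear(pte_t *backing)
{
    pte_t *ptep = pte_offset_map(backing);
    if (!ptep) {                     /* WARN_ON(!ptep) upstream */
        fprintf(stderr, "warning: pte_offset_map failed\n");
        return -1;
    }
    ptep->val = 0;
    return 0;
}
```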
Nico Pache 600fecd2a8 mm/debug_vm_pgtable: replace pte_mkhuge() with arch_make_huge_pte()
commit 9dabf6e1374519f89d9fc326a129b5cc35088479
Author: Anshuman Khandual <anshuman.khandual@arm.com>
Date:   Thu Mar 2 17:18:45 2023 +0530

    mm/debug_vm_pgtable: replace pte_mkhuge() with arch_make_huge_pte()

    Since commit 16785bd77431 ("mm: merge pte_mkhuge() call into
    arch_make_huge_pte()"), arch_make_huge_pte() should be used directly in
    the generic memory subsystem as a platform provided page table helper,
    instead of pte_mkhuge().  Change hugetlb_basic_tests() to call
    arch_make_huge_pte() directly, and update its relevant documentation
    entry as required.

    Link: https://lkml.kernel.org/r/20230302114845.421674-1-anshuman.khandual@arm.com
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Reported-by: Christophe Leroy <christophe.leroy@csgroup.eu>
      Link: https://lore.kernel.org/all/1ea45095-0926-a56a-a273-816709e9075e@csgroup.eu/
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Mike Rapoport <rppt@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2168372
Signed-off-by: Nico Pache <npache@redhat.com>
2023-06-14 15:11:04 -06:00
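A sketch of the substitution with stand-in types: the generic fallback of the platform hook behaves like pte_mkhuge(), which is why call sites can switch to the hook unconditionally. The bit value is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in sketch: the generic fallback of the platform hook behaves
 * like pte_mkhuge(); an arch (e.g. arm64) overrides it with
 * page-size-aware logic.  Bit value is illustrative. */
typedef struct { uint64_t val; } pte_t;
#define PTE_HUGE (1ULL << 7)

static pte_t pte_mkhuge(pte_t pte) { pte.val |= PTE_HUGE; return pte; }

static pte_t arch_make_huge_pte(pte_t entry, unsigned int shift,
                                unsigned long vm_flags)
{
    (void)shift; (void)vm_flags;   /* an arch may consult these */
    return pte_mkhuge(entry);
}
```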
Jan Stancek f53a416d2f Merge: mm/debug: use valid physical memory for pmd/pud tests
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2136

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2095767

commit c4876ff68716e5372224d17045b47610d667a0ee
Author: Frank van der Linden <fvdl@google.com>
Date:   Mon, 9 Jan 2023 17:43:32 +0000

    mm/debug: use valid physical memory for pmd/pud tests

    The page table debug tests need a physical address to validate low-level
    page table manipulation with.  The memory at this address is not actually
    touched; it is just encoded in the page table entries at various levels
    during the tests.

    Since the memory is not used, the code just picks the physical address of
    the start_kernel symbol.  This value is then truncated to get a properly
    aligned address that is to be used for various tests.  Because of the
    truncation, the address might not actually exist, or might not describe a
    complete huge page.  That's not a problem for most tests, but the
    arch-specific code may check for attribute validity and consistency.  The
    x86 version of {pud,pmd}_set_huge actually validates the MTRRs for the
    PMD/PUD range.  This may fail with an address derived from start_kernel,
    depending on where the kernel was loaded and what the physical memory
    layout of the system is.  This then leads to false negatives for the
    {pud,pmd}_set_huge tests.

    Avoid this by finding a properly aligned memory range that exists and is
    usable.  If such a range is not found, skip the tests that needed it.

    [fvdl@google.com: v3]
      Link: https://lkml.kernel.org/r/20230110181208.1633879-1-fvdl@google.com
    Link: https://lkml.kernel.org/r/20230109174332.329366-1-fvdl@google.com
    Fixes: 399145f9eb ("mm/debug: add tests validating architecture page table helpers")
    Signed-off-by: Frank van der Linden <fvdl@google.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Waiman Long <longman@redhat.com>

Approved-by: Donald Dutile <ddutile@redhat.com>
Approved-by: Rafael Aquini <aquini@redhat.com>

Signed-off-by: Jan Stancek <jstancek@redhat.com>
2023-04-02 11:49:01 +02:00
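The fix's search can be sketched over a memblock-style range list: align up within each usable range and only return an address whose whole huge mapping is backed; on failure the caller skips the {pud,pmd}_set_huge tests. The function name and range list below are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the approach: scan usable physical ranges for an address
 * aligned to the mapping size whose whole extent is backed; return 0
 * if none is found.  The range list is illustrative, not memblock data. */
struct range { uint64_t start, end; };

static uint64_t find_aligned_phys(const struct range *r, int n, uint64_t size)
{
    for (int i = 0; i < n; i++) {
        uint64_t aligned = (r[i].start + size - 1) & ~(size - 1);
        if (aligned + size <= r[i].end)
            return aligned;          /* whole huge mapping is backed */
    }
    return 0;                        /* caller skips the pud/pmd tests */
}
```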
Chris von Recklinghausen dde236aa73 mm: remove unused savedwrite infrastructure
Bugzilla: https://bugzilla.redhat.com/2160210

commit d6379159f47630813f06f97535cc82ce7b9eed49
Author: David Hildenbrand <david@redhat.com>
Date:   Tue Nov 8 18:46:51 2022 +0100

    mm: remove unused savedwrite infrastructure

    NUMA hinting no longer uses savedwrite, let's rip it out.

    ... and while at it, drop __pte_write() and __pmd_write() on ppc64.

    Link: https://lkml.kernel.org/r/20221108174652.198904-7-david@redhat.com
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Dave Chinner <david@fromorbit.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Mike Rapoport <rppt@kernel.org>
    Cc: Nadav Amit <namit@vmware.com>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Peter Xu <peterx@redhat.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:19:34 -04:00
Chris von Recklinghausen 267a7a9b62 docs: rename Documentation/vm to Documentation/mm
Conflicts: drop changes to arch/loongarch/Kconfig - unsupported config

Bugzilla: https://bugzilla.redhat.com/2160210

commit ee65728e103bb7dd99d8604bf6c7aa89c7d7e446
Author: Mike Rapoport <rppt@kernel.org>
Date:   Mon Jun 27 09:00:26 2022 +0300

    docs: rename Documentation/vm to Documentation/mm

    so it will be consistent with the mm code directory and with
    Documentation/admin-guide/mm, and won't be confused with virtual machines.

    Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
    Suggested-by: Matthew Wilcox <willy@infradead.org>
    Tested-by: Ira Weiny <ira.weiny@intel.com>
    Acked-by: Jonathan Corbet <corbet@lwn.net>
    Acked-by: Wu XiangCheng <bobwxc@email.cn>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:19:15 -04:00
Chris von Recklinghausen 9e9a4dcea6 mm/debug_vm_pgtable: drop protection_map[] usage
Bugzilla: https://bugzilla.redhat.com/2160210

commit 31d17076b07c8ed212b1d865b36b0c7313885ef2
Author: Anshuman Khandual <anshuman.khandual@arm.com>
Date:   Thu Apr 28 23:16:12 2022 -0700

    mm/debug_vm_pgtable: drop protection_map[] usage

    Patch series "mm: protection_map[] cleanups".

    This patch (of 2):

    Although protection_map[] contains the platform defined page protection
    map for a given vm_flags combination, vm_get_page_prot() is the right
    interface to use.  This will also reduce dependency on protection_map[]
    which is going to be dropped off completely later on.

    Link: https://lkml.kernel.org/r/20220404031840.588321-1-anshuman.khandual@arm.com
    Link: https://lkml.kernel.org/r/20220404031840.588321-2-anshuman.khandual@arm.com
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2023-03-24 11:18:55 -04:00
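The cleanup above can be sketched in plain C. A toy vm_get_page_prot() over a runtime-initialized protection map shows why the accessor, rather than the array (or the __P000-style macros behind it), is the right interface; all names and bit values here are illustrative stand-ins, not the kernel's real definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Toy vm_flags bits and a toy pgprot_t; both are stand-ins. */
#define VM_READ  0x1UL
#define VM_WRITE 0x2UL
#define VM_EXEC  0x4UL
typedef struct { uint64_t val; } pgprot_t;

/* Platforms may re-initialize this table at runtime (x86 memory
 * encryption, m68k-motorola, mips, arm, sparc all do), so callers must
 * not assume its initial contents. */
static pgprot_t protection_map[8];

static void arch_init_protection_map(uint64_t arch_bit)
{
	for (unsigned int i = 0; i < 8; i++)
		protection_map[i] = (pgprot_t){ (uint64_t)i | arch_bit };
}

/* The sanctioned interface: always goes through whatever table the
 * platform installed, instead of reading protection_map[] (or a
 * compile-time __P000-style macro) directly. */
static pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
	return protection_map[vm_flags & (VM_READ | VM_WRITE | VM_EXEC)];
}
```

Because the architecture bit is only known after the runtime re-init, a caller that cached the table's compile-time values would read stale protections; the accessor always sees the installed table.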
Waiman Long 6258419ec4 mm/debug: use valid physical memory for pmd/pud tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2095767

commit c4876ff68716e5372224d17045b47610d667a0ee
Author: Frank van der Linden <fvdl@google.com>
Date:   Mon, 9 Jan 2023 17:43:32 +0000

    mm/debug: use valid physical memory for pmd/pud tests

    The page table debug tests need a physical address to validate low-level
    page table manipulation with.  The memory at this address is not actually
    touched; it is only encoded in the page table entries at various levels
    during the tests.

    Since the memory is not used, the code just picks the physical address of
    the start_kernel symbol.  This value is then truncated to get a properly
    aligned address that is to be used for various tests.  Because of the
    truncation, the address might not actually exist, or might not describe a
    complete huge page.  That's not a problem for most tests, but the
    arch-specific code may check for attribute validity and consistency.  The
    x86 version of {pud,pmd}_set_huge actually validates the MTRRs for the
    PMD/PUD range.  This may fail with an address derived from start_kernel,
    depending on where the kernel was loaded and what the physical memory
    layout of the system is.  This then leads to false negatives for the
    {pud,pmd}_set_huge tests.

    Avoid this by finding a properly aligned memory range that exists and is
    usable.  If such a range is not found, skip the tests that needed it.

    [fvdl@google.com: v3]
      Link: https://lkml.kernel.org/r/20230110181208.1633879-1-fvdl@google.com
    Link: https://lkml.kernel.org/r/20230109174332.329366-1-fvdl@google.com
    Fixes: 399145f9eb ("mm/debug: add tests validating architecture page table helpers")
    Signed-off-by: Frank van der Linden <fvdl@google.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Waiman Long <longman@redhat.com>
2023-03-06 15:12:45 -05:00
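The truncation problem the patch fixes can be reproduced with a few lines of userspace C; the RAM layout and PMD size below are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

#define PMD_SIZE (2UL * 1024 * 1024)	/* 2 MiB: a typical x86_64 PMD span */

/* What the old code did: truncate the physical address of start_kernel
 * down to a PMD boundary. */
static uint64_t pmd_align_down(uint64_t phys)
{
	return phys & ~(uint64_t)(PMD_SIZE - 1);
}

/* Toy stand-in for the fix, which searches for an existing, usable and
 * properly aligned System RAM range instead of trusting the truncation. */
static int range_is_usable(uint64_t start, uint64_t len,
			   uint64_t ram_base, uint64_t ram_size)
{
	return start >= ram_base && start + len <= ram_base + ram_size;
}
```

With RAM starting at 1 MiB and the kernel loaded there, pmd_align_down(0x100000) yields 0x0: a 2 MiB huge page at that address does not describe existing memory, so an attribute check such as x86's MTRR validation in {pud,pmd}_set_huge() may legitimately refuse it.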
Chris von Recklinghausen ad9725161a mm/debug_vm_pgtable: add tests for __HAVE_ARCH_PTE_SWP_EXCLUSIVE
Bugzilla: https://bugzilla.redhat.com/2120352

commit 210d1e8af42df0fc35aee953124798f743432fab
Author: David Hildenbrand <david@redhat.com>
Date:   Mon May 9 18:20:45 2022 -0700

    mm/debug_vm_pgtable: add tests for __HAVE_ARCH_PTE_SWP_EXCLUSIVE

    Let's test that __HAVE_ARCH_PTE_SWP_EXCLUSIVE works as expected.

    Link: https://lkml.kernel.org/r/20220329164329.208407-3-david@redhat.com
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Don Dutile <ddutile@redhat.com>
    Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
    Cc: Heiko Carstens <hca@linux.ibm.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Jan Kara <jack@suse.cz>
    Cc: Jann Horn <jannh@google.com>
    Cc: Jason Gunthorpe <jgg@nvidia.com>
    Cc: John Hubbard <jhubbard@nvidia.com>
    Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: Liang Zhang <zhangliang5@huawei.com>
    Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: Mike Rapoport <rppt@linux.ibm.com>
    Cc: Nadav Amit <namit@vmware.com>
    Cc: Oded Gabbay <oded.gabbay@gmail.com>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Pedro Demarchi Gomes <pedrodemargomes@gmail.com>
    Cc: Peter Xu <peterx@redhat.com>
    Cc: Rik van Riel <riel@surriel.com>
    Cc: Roman Gushchin <guro@fb.com>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Vasily Gorbik <gor@linux.ibm.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2022-10-12 07:28:11 -04:00
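A minimal sketch of what the new test validates, with a made-up software bit (bit 58 here) standing in for the per-architecture swap-exclusive bit:

```c
#include <assert.h>
#include <stdint.h>

/* Toy software PTE; the real bit position and helpers are per-arch. */
typedef struct { uint64_t val; } pte_t;
#define _PAGE_SWP_EXCLUSIVE (1UL << 58)

static pte_t pte_swp_mkexclusive(pte_t pte)
{
	pte.val |= _PAGE_SWP_EXCLUSIVE;
	return pte;
}

static pte_t pte_swp_clear_exclusive(pte_t pte)
{
	pte.val &= ~_PAGE_SWP_EXCLUSIVE;
	return pte;
}

static int pte_swp_exclusive(pte_t pte)
{
	return !!(pte.val & _PAGE_SWP_EXCLUSIVE);
}

/* What the test asserts: marking and clearing the bit round-trips and
 * leaves every other bit of the swap entry untouched. */
static int swp_exclusive_roundtrip_ok(pte_t pte)
{
	pte_t marked  = pte_swp_mkexclusive(pte);
	pte_t cleared = pte_swp_clear_exclusive(marked);

	return pte_swp_exclusive(marked) &&
	       !pte_swp_exclusive(cleared) &&
	       cleared.val == pte_swp_clear_exclusive(pte).val;
}
```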
Chris von Recklinghausen b5ad25df0b mm/debug_vm_pgtable: update comments regarding migration swap entries
Bugzilla: https://bugzilla.redhat.com/2120352

commit 236476180c0f5d308fb313d5570d0b067307884c
Author: Anshuman Khandual <anshuman.khandual@arm.com>
Date:   Fri Jan 14 14:05:07 2022 -0800

    mm/debug_vm_pgtable: update comments regarding migration swap entries

    Commit 4dd845b5a3 ("mm/swapops: rework swap entry manipulation code")
    changed the migration entry related helpers.  Just update the
    debug_vm_pgtable() synced documentation to reflect those changes.

    Link: https://lkml.kernel.org/r/1641880417-24848-1-git-send-email-anshuman.khandual@arm.com
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2022-10-12 07:27:38 -04:00
Chris von Recklinghausen 1e19113a7a mm: debug_vm_pgtable: don't use __P000 directly
Bugzilla: https://bugzilla.redhat.com/2120352

commit 8772716f96704c67b1e2a6ba175605b4fce2a252
Author: Guo Ren <guoren@linux.alibaba.com>
Date:   Fri Nov 5 13:36:09 2021 -0700

    mm: debug_vm_pgtable: don't use __P000 directly

    The __Pxxx/__Sxxx macros are only for protection_map[] init.  All usage
    of them in Linux should come from the protection_map[] array.

    Because a lot of architectures re-initialize the protection_map[]
    array, e.g. x86-mem_encrypt, m68k-motorola, mips, arm, sparc, using
    __P000 directly is not rigorous.

    Link: https://lkml.kernel.org/r/20210924060821.1138281-1-guoren@kernel.org
    Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Gavin Shan <gshan@redhat.com>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2022-10-12 07:27:26 -04:00
Chris von Recklinghausen 64bd05d9b5 mm: ptep_clear() page table helper
Bugzilla: https://bugzilla.redhat.com/2120352

commit 08d5b29eac7dd5e6c79b66d390ecbb9219e05931
Author: Pasha Tatashin <pasha.tatashin@soleen.com>
Date:   Fri Jan 14 14:06:33 2022 -0800

    mm: ptep_clear() page table helper

    We have the ptep_get_and_clear() and ptep_get_and_clear_full() helpers
    to clear a PTE from user page tables, but there is no variant for a
    simple clear of a present PTE from user page tables that avoids the
    low-level pte_clear(), which can be either native or para-virtualised.

    Add a new ptep_clear() that can be used in common code to clear PTEs
    from page tables.  We will need this call later in order to add a hook
    for page table check.

    Link: https://lkml.kernel.org/r/20211221154650.1047963-3-pasha.tatashin@soleen.com
    Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Frederic Weisbecker <frederic@kernel.org>
    Cc: Greg Thelen <gthelen@google.com>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Jiri Slaby <jirislaby@kernel.org>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Masahiro Yamada <masahiroy@kernel.org>
    Cc: Mike Rapoport <rppt@kernel.org>
    Cc: Muchun Song <songmuchun@bytedance.com>
    Cc: Paul Turner <pjt@google.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Sami Tolvanen <samitolvanen@google.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Wei Xu <weixugc@google.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
2022-10-12 07:27:13 -04:00
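A rough userspace sketch of the helper's shape: the kernel's ptep_clear() forwards to the (possibly para-virtualised) pte_clear(), and the single named entry point is where the later page table check hook attaches. Everything below is a toy model:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t val; } pte_t;

/* Low-level clear; in the kernel this may be native or para-virtualised. */
static void pte_clear(pte_t *ptep)
{
	ptep->val = 0;
}

/* Sketch of the new common-code helper: today it simply forwards to
 * pte_clear(), but one named entry point gives a later patch a single
 * place to hook the page table checker. */
static void ptep_clear(pte_t *ptep)
{
	/* page_table_check hook lands here in a follow-up patch */
	pte_clear(ptep);
}

/* Small wrapper so the behaviour is easy to assert on. */
static uint64_t clear_value(uint64_t v)
{
	pte_t pte = { v };

	ptep_clear(&pte);
	return pte.val;
}
```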
Rafael Aquini 99ae502b1a mm/debug_vm_pgtable: remove pte entry from the page table
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2064990

This patch is a backport of the following upstream commit:
commit fb5222aae64fe25e5f3ebefde8214dcf3ba33ca5
Author: Pasha Tatashin <pasha.tatashin@soleen.com>
Date:   Thu Feb 3 20:49:10 2022 -0800

    mm/debug_vm_pgtable: remove pte entry from the page table

    Patch series "page table check fixes and cleanups", v5.

    This patch (of 4):

    The pte entry that is used in pte_advanced_tests() is never removed from
    the page table at the end of the test.

    The issue is detected by page_table_check, to repro compile kernel with
    the following configs:

    CONFIG_DEBUG_VM_PGTABLE=y
    CONFIG_PAGE_TABLE_CHECK=y
    CONFIG_PAGE_TABLE_CHECK_ENFORCED=y

    During the boot the following BUG is printed:

      debug_vm_pgtable: [debug_vm_pgtable         ]: Validating architecture page table helpers
      ------------[ cut here ]------------
      kernel BUG at mm/page_table_check.c:162!
      invalid opcode: 0000 [#1] PREEMPT SMP PTI
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.16.0-11413-g2c271fe77d52 #3
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
      ...

    The entry should be properly removed from the page table before the page
    is released to the free list.

    Link: https://lkml.kernel.org/r/20220131203249.2832273-1-pasha.tatashin@soleen.com
    Link: https://lkml.kernel.org/r/20220131203249.2832273-2-pasha.tatashin@soleen.com
    Fixes: a5c3b9ffb0 ("mm/debug_vm_pgtable: add tests validating advanced arch page table helpers")
    Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
    Reviewed-by: Zi Yan <ziy@nvidia.com>
    Tested-by: Zi Yan <ziy@nvidia.com>
    Acked-by: David Rientjes <rientjes@google.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Paul Turner <pjt@google.com>
    Cc: Wei Xu <weixugc@google.com>
    Cc: Greg Thelen <gthelen@google.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Mike Rapoport <rppt@kernel.org>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Jiri Slaby <jirislaby@kernel.org>
    Cc: Muchun Song <songmuchun@bytedance.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: <stable@vger.kernel.org>    [5.9+]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2022-03-27 00:48:31 -04:00
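The bug and the fix can be modelled with a toy mapcount in the style of page_table_check; all helpers here are illustrative:

```c
#include <assert.h>

/* Toy per-page mapcount; the real checker tracks counts per physical
 * page and BUGs on inconsistency at free time. */
static void toy_set_pte_at(int *mapcount)  { ++*mapcount; }
static void toy_ptep_clear(int *mapcount)  { --*mapcount; }
static int  toy_free_page_ok(int mapcount) { return mapcount == 0; }

/* Shape of pte_advanced_tests(): an entry is installed for the test and
 * must be removed again before the page returns to the free list. */
static int advanced_test_flow(int do_teardown)
{
	int mapcount = 0;

	toy_set_pte_at(&mapcount);		/* populate the entry under test */
	/* ... exercise the advanced PTE helpers ... */
	if (do_teardown)
		toy_ptep_clear(&mapcount);	/* the fix: the missing teardown */

	return toy_free_page_ok(mapcount);	/* false -> kernel BUG */
}
```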
Rafael Aquini d5dbc59aec mm/debug_vm_pgtable: fix corrupted page flag
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 8c5b3a8adad2152d162fc0230c822f907c816be9
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:54 2021 -0700

    mm/debug_vm_pgtable: fix corrupted page flag

    In the page table entry modifying tests, set_xxx_at() is used to
    populate the page table entries.  On ARM64, the PG_arch_1
    (PG_dcache_clean) flag is set on the target page if execution
    permission is given.  The logic has existed since commit 4f04d8f005
    ("arm64: MMU definitions").  The page flag is kept when the page is
    freed to the buddy allocator's free area list.  However, it triggers a
    page state check failure when the page is pulled from the buddy's free
    area list again, as the following warning messages indicate.

       BUG: Bad page state in process memhog  pfn:08000
       page:0000000015c0a628 refcount:0 mapcount:0 \
            mapping:0000000000000000 index:0x1 pfn:0x8000
       flags: 0x7ffff8000000800(arch_1|node=0|zone=0|lastcpupid=0xfffff)
       raw: 07ffff8000000800 dead000000000100 dead000000000122 0000000000000000
       raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000
       page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag(s) set

    This fixes the issue by clearing PG_arch_1 through flush_dcache_page()
    after set_xxx_at() is called. For architectures other than ARM64, the
    unexpected overhead of cache flushing is acceptable.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-13-gshan@redhat.com
    Fixes: a5c3b9ffb0 ("mm/debug_vm_pgtable: add tests validating advanced arch page table helpers")
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:42 -05:00
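A toy model of the failure mode: a software flag set by the arm64 set_pte_at() path survives the free unless flush_dcache_page() clears it, and the allocator's prep-time check then complains. Flag positions and helpers are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag bit; the real PG_arch_1 position is per-arch. */
#define PG_arch_1		 (1UL << 9)
#define PAGE_FLAGS_CHECK_AT_PREP PG_arch_1	/* subset, for the sketch */

typedef struct { uint64_t flags; } page_t;

/* arm64's set_pte_at() path may set PG_arch_1 (PG_dcache_clean) when an
 * executable mapping is installed; modelled as a plain flag set. */
static void toy_set_pte_at(page_t *page)
{
	page->flags |= PG_arch_1;
}

/* The fix: flush_dcache_page() clears the flag again. */
static void toy_flush_dcache_page(page_t *page)
{
	page->flags &= ~PG_arch_1;
}

/* The allocator's "Bad page state" check when a page is handed out. */
static int page_prep_ok(const page_t *page)
{
	return (page->flags & PAGE_FLAGS_CHECK_AT_PREP) == 0;
}

/* Run the test's set/flush/free sequence and report whether the page
 * would pass the prep-time check afterwards. */
static int free_then_alloc_ok(int do_flush)
{
	page_t page = { 0 };

	toy_set_pte_at(&page);
	if (do_flush)
		toy_flush_dcache_page(&page);
	return page_prep_ok(&page);
}
```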
Rafael Aquini 729b02d737 mm/debug_vm_pgtable: remove unused code
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit fda88cfda1ab666ee6c136eeb401b4ead7ecd066
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:51 2021 -0700

    mm/debug_vm_pgtable: remove unused code

    The variables used by the old implementation aren't needed now that we
    have switched to "struct pgtable_debug_args".  Let's remove them and the
    related code in debug_vm_pgtable().

    Link: https://lkml.kernel.org/r/20210809092631.1888748-12-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:41 -05:00
Rafael Aquini 2de10d19ef mm/debug_vm_pgtable: use struct pgtable_debug_args in PGD and P4D modifying tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 2f87f8c39a91effbbfd88a4642bd245b9c2b7ad3
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:48 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in PGD and P4D modifying tests

    This uses struct pgtable_debug_args in the PGD/P4D modifying tests.  No
    allocated huge page is used in these tests.  Besides, the unused
    variables @saved_p4dp and @saved_pudp are dropped.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-11-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:40 -05:00
Rafael Aquini d0a9f6344f mm/debug_vm_pgtable: use struct pgtable_debug_args in PUD modifying tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 4cbde03bdb0b832503999387f5e86b006fa54674
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:45 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in PUD modifying tests

    This uses struct pgtable_debug_args in PUD modifying tests.  The allocated
    huge page is used when set_pud_at() is used.  The corresponding tests are
    skipped if the huge page doesn't exist.  Besides, the following unused
    variables in debug_vm_pgtable() are dropped: @prot, @paddr, @pud_aligned.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-10-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:40 -05:00
Rafael Aquini b0958afd45 mm/debug_vm_pgtable: use struct pgtable_debug_args in PMD modifying tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit c0fe07b0aa72b530d5da63a24eb10d503cae5a95
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:41 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in PMD modifying tests

    This uses struct pgtable_debug_args in PMD modifying tests.  The allocated
    huge page is used when set_pmd_at() is used.  The corresponding tests are
    skipped if the huge page doesn't exist.  Besides, the unused variable
    @pmd_aligned in debug_vm_pgtable() is dropped.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-9-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:39 -05:00
Rafael Aquini 8f144301f8 mm/debug_vm_pgtable: use struct pgtable_debug_args in PTE modifying tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 44966c4480f8024776c9ecc68f5589f023f19884
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:38 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in PTE modifying tests

    This uses struct pgtable_debug_args in the PTE modifying tests.  The
    allocated page is used there, as set_pte_at() is exercised.  The tests
    are skipped if the allocated page doesn't exist.  It's notable that
    args->ptep needs to be mapped before the tests.  The reason why we
    don't map args->ptep at the beginning is that the PTE entry is only
    mapped and accessible in atomic context when CONFIG_HIGHPTE is enabled,
    so the mapping is deferred and atomic context is only entered when
    needed.

    Besides, the unused variables @pte_aligned and @ptep in
    debug_vm_pgtable() are dropped.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-8-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:38 -05:00
Rafael Aquini 3c848adf25 mm/debug_vm_pgtable: use struct pgtable_debug_args in migration and thp tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 4878a888824bd69ad4fff18efa93901ba2ba24f3
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:35 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in migration and thp tests

    This uses struct pgtable_debug_args in the migration and thp test
    functions.  It's notable that the pre-allocated page is used in
    swap_migration_tests() as set_pte_at() is used there.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-7-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:38 -05:00
Rafael Aquini ff931e1117 mm/debug_vm_pgtable: use struct pgtable_debug_args in soft_dirty and swap tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 5f447e8067fd9f472bc418de219f3cb9a8c3fbe8
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:32 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in soft_dirty and swap tests

    This uses struct pgtable_debug_args in the soft_dirty and swap test
    functions.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-6-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:37 -05:00
Rafael Aquini 214c5e5352 mm/debug_vm_pgtable: use struct pgtable_debug_args in protnone and devmap tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 8cb183f2f2a014e818cf60de3afd5a06410fd5b9
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:28 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in protnone and devmap tests

    This uses struct pgtable_debug_args in protnone and devmap test functions.
    After that, the unused variable @protnone in debug_vm_pgtable() is
    dropped.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-5-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:36 -05:00
Rafael Aquini bc7868a778 mm/debug_vm_pgtable: use struct pgtable_debug_args in leaf and savewrite tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 8983d231c7cc1adaebed89153552da1e3fd55f61
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:25 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in leaf and savewrite tests

    This uses struct pgtable_debug_args in the leaf and savewrite test
    functions.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-4-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:35 -05:00
Rafael Aquini 9c05c46c1e mm/debug_vm_pgtable: use struct pgtable_debug_args in basic tests
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 36b77d1e159283da3c9414cbe6d9cb8e79a59c19
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:22 2021 -0700

    mm/debug_vm_pgtable: use struct pgtable_debug_args in basic tests

    This uses struct pgtable_debug_args in the basic test functions.  The
    unused variables @pgd_aligned and @p4d_aligned in debug_vm_pgtable() are
    dropped.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-3-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:35 -05:00
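The consolidation this series introduces can be sketched as follows; the field names are illustrative, not the exact upstream layout of struct pgtable_debug_args:

```c
#include <assert.h>

/* Gather the values previously scattered across locals in
 * debug_vm_pgtable() so every test function takes a single argument. */
struct pgtable_debug_args {
	unsigned long page_prot;	/* stand-in for a pgprot_t */
	unsigned long fixed_pte_pfn;	/* pfn derived from a kernel symbol */
	unsigned long pte_pfn;		/* pfn of a page allocated from buddy */
	int have_page;			/* allocation can fail: tests then skip */
};

/* Basic tests only build entries, so the fixed pfn is enough. */
static int pte_basic_tests(const struct pgtable_debug_args *args)
{
	return args->fixed_pte_pfn != 0;
}

/* Modifying tests touch the page, so they skip cleanly (returning 0)
 * when no buddy page could be allocated, instead of crashing. */
static int pte_advanced_tests(const struct pgtable_debug_args *args)
{
	if (!args->have_page)
		return 0;
	return args->pte_pfn != 0;
}
```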
Rafael Aquini 529fdf7165 mm/debug_vm_pgtable: introduce struct pgtable_debug_args
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2023396

This patch is a backport of the following upstream commit:
commit 3c9b84f044a9e54cf56d1b2c9b80a2d2ce56d70a
Author: Gavin Shan <gshan@redhat.com>
Date:   Thu Sep 2 14:52:19 2021 -0700

    mm/debug_vm_pgtable: introduce struct pgtable_debug_args

    Patch series "mm/debug_vm_pgtable: Enhancements", v6.

    There are a couple of issues with current implementations and this series
    tries to resolve the issues:

      (a) All the needed information is scattered across variables passed to
          the various test functions.  The code is organized in a fairly
          relaxed fashion.

      (b) The page isn't allocated from the buddy allocator during the page
          table entry modifying tests.  The page can be invalid, conflicting
          with the implementations of set_xxx_at() on ARM64.  The target
          page is accessed so that the iCache can be flushed when execution
          permission is given on ARM64.  Besides, the target page can be
          unmapped, and accessing it causes a kernel crash.

    "struct pgtable_debug_args" is introduced to address issue (a).  For
    issue (b), the used page is allocated from the buddy allocator in the
    page table entry modifying tests.  The corresponding tests will be
    skipped if we fail to allocate the (huge) page.  For the other test
    cases, the original page around the kernel symbol (@start_kernel) is
    still used.

    The patches are organized as below.  PATCH[2-10] could be combined into
    one patch, but that would make the review harder:

      PATCH[1] introduces "struct pgtable_debug_args" as place holder of all
               needed information. With it, the old and new implementation
               can coexist.
      PATCH[2-10] uses "struct pgtable_debug_args" in various test functions.
      PATCH[11] removes the unused code for old implementation.
      PATCH[12] fixes the issue of corrupted page flag for ARM64

    This patch (of 6):

    In debug_vm_pgtable(), there are many local variables introduced to track
    the needed information, and they are passed to the functions for the
    various test cases.  It's better to introduce a struct as a placeholder
    for this information.  With it, all the test functions need is the
    struct.  In this way, the code is simplified and easier to maintain.

    Besides, set_xxx_at() could access the data on the corresponding pages in
    the page table modifying tests.  So the accessed pages in the tests
    should have been allocated from buddy.  Otherwise, we're accessing pages
    that aren't owned by us.  This causes issues like page flag corruption or
    a kernel crash on accessing an unmapped page when CONFIG_DEBUG_PAGEALLOC
    is enabled.

    This introduces "struct pgtable_debug_args".  The struct is initialized
    and destroyed, but the information in the struct isn't used yet.  It will
    be used in subsequent patches.

    Link: https://lkml.kernel.org/r/20210809092631.1888748-1-gshan@redhat.com
    Link: https://lkml.kernel.org/r/20210809092631.1888748-2-gshan@redhat.com
    Signed-off-by: Gavin Shan <gshan@redhat.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Tested-by: Christophe Leroy <christophe.leroy@csgroup.eu>       [powerpc 8xx]
    Tested-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>      [s390]
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Qian Cai <cai@lca.pw>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Chunyu Hu <chuhu@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Signed-off-by: Rafael Aquini <aquini@redhat.com>
2021-11-29 11:40:34 -05:00
Alistair Popple 4dd845b5a3 mm/swapops: rework swap entry manipulation code
Both migration and device private pages use special swap entries that are
manipulated by a range of inline functions.  The arguments to these are
somewhat inconsistent, so rework them to remove flag-type arguments and to
make the arguments similar for both read and write entry creation.

Link: https://lkml.kernel.org/r/20210616105937.23201-3-apopple@nvidia.com
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: Ben Skeggs <bskeggs@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-07-01 11:06:03 -07:00
Shixin Liu b593b90dc9 mm/debug_vm_pgtable: remove redundant pfn_{pmd/pte}() and fix one comment mistake
Remove the redundant pfn_{pmd/pte}() in {pmd/pte}_advanced_tests() and
adjust pfn_pud() in pud_advanced_tests() to make it similar to the other
two functions.

In addition, the branch condition should be CONFIG_TRANSPARENT_HUGEPAGE
instead of CONFIG_ARCH_HAS_PTE_DEVMAP.

Link: https://lkml.kernel.org/r/20210419071820.750217-2-liushixin2@huawei.com
Signed-off-by: Shixin Liu <liushixin2@huawei.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-30 20:47:26 -07:00
Shixin Liu 5fe77be6bf mm/debug_vm_pgtable: move {pmd/pud}_huge_tests out of CONFIG_TRANSPARENT_HUGEPAGE
The functions {pmd/pud}_set_huge and {pmd/pud}_clear_huge are not
dependent on THP.  Hence move {pmd/pud}_huge_tests out of
CONFIG_TRANSPARENT_HUGEPAGE.

Link: https://lkml.kernel.org/r/20210419071820.750217-1-liushixin2@huawei.com
Signed-off-by: Shixin Liu <liushixin2@huawei.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-30 20:47:25 -07:00
Anshuman Khandual 65ac1a60a5 mm/debug_vm_pgtable: ensure THP availability via has_transparent_hugepage()
On certain platforms, THP support cannot be validated just via the build
option CONFIG_TRANSPARENT_HUGEPAGE.  Instead, has_transparent_hugepage()
also needs to be called to verify THP runtime support.  Otherwise the
debug test will just run into unusable THP helpers, as in the case of a
4K hash config on the powerpc platform [1].  This just moves all
pfn_pmd() and pfn_pud() calls after the THP runtime validation with
has_transparent_hugepage(), which prevents the mentioned problem.

[1] https://bugzilla.kernel.org/show_bug.cgi?id=213069

Link: https://lkml.kernel.org/r/1621397588-19211-1-git-send-email-anshuman.khandual@arm.com
Fixes: 787d563b86 ("mm/debug_vm_pgtable: fix kernel crash by checking for THP support")
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-29 10:53:47 -07:00
Gerald Schaefer 04f7ce3f07 mm/debug_vm_pgtable: fix alignment for pmd/pud_advanced_tests()
In pmd/pud_advanced_tests(), the vaddr is aligned up to the next pmd/pud
entry, so it no longer matches the given pmdp/pudp and the (aligned down)
pfn.

For s390, this results in memory corruption, because the IDTE
instruction used e.g.  in xxx_get_and_clear() will take the vaddr for
some calculations, in combination with the given pmdp.  It will then end
up with a wrong table origin, ending on ...ff8, and some of those
wrongly set low-order bits will also select a wrong pagetable level for
the index addition.  IDTE could therefore invalidate (or 0x20) something
outside of the page tables, depending on the wrongly picked index, which
in turn depends on the random vaddr.

As a result, we sometimes see "BUG task_struct (Not tainted): Padding
overwritten" on s390, where one 0x5a padding value got overwritten with
0x7a.

Fix this by aligning down, similar to how the pmd/pud_aligned pfns are
calculated.

Link: https://lkml.kernel.org/r/20210525130043.186290-2-gerald.schaefer@linux.ibm.com
Fixes: a5c3b9ffb0 ("mm/debug_vm_pgtable: add tests validating advanced arch page table helpers")
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: <stable@vger.kernel.org>	[5.9+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-06-05 08:58:11 -07:00
Nicholas Piggin bbc180a5ad mm: HUGE_VMAP arch support cleanup
This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings for,
to one where the arch is queried for each call.

This removes code and indirection, and allows constant-folding of dead
code for unsupported levels.

This also adds a prot argument to the arch query.  This is unused
currently but could help with some architectures (e.g., some powerpc
processors can't map uncacheable memory with large pages).

Link: https://lkml.kernel.org/r/20210317062402.533919-7-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Cc: Will Deacon <will@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-04-30 11:20:40 -07:00
Anshuman Khandual 2e326c07bb mm/debug_vm_pgtable/basic: iterate over entire protection_map[]
Currently the basic tests just validate various page table transformations
after starting with vm_get_page_prot(VM_READ|VM_WRITE|VM_EXEC) protection.
Instead, scan over the entire protection_map[] for better coverage.  This
also makes sure that all these basic page table transformation checks hold
true irrespective of the starting protection value for the page table
entry.  There is also a slight change in the debug print format for the
basic tests, to capture the protection value being tested.  The modified
output looks something like:

[pte_basic_tests          ]: Validating PTE basic ()
[pte_basic_tests          ]: Validating PTE basic (read)
[pte_basic_tests          ]: Validating PTE basic (write)
[pte_basic_tests          ]: Validating PTE basic (read|write)
[pte_basic_tests          ]: Validating PTE basic (exec)
[pte_basic_tests          ]: Validating PTE basic (read|exec)
[pte_basic_tests          ]: Validating PTE basic (write|exec)
[pte_basic_tests          ]: Validating PTE basic (read|write|exec)
[pte_basic_tests          ]: Validating PTE basic (shared)
[pte_basic_tests          ]: Validating PTE basic (read|shared)
[pte_basic_tests          ]: Validating PTE basic (write|shared)
[pte_basic_tests          ]: Validating PTE basic (read|write|shared)
[pte_basic_tests          ]: Validating PTE basic (exec|shared)
[pte_basic_tests          ]: Validating PTE basic (read|exec|shared)
[pte_basic_tests          ]: Validating PTE basic (write|exec|shared)
[pte_basic_tests          ]: Validating PTE basic (read|write|exec|shared)

This adds a missing 'struct mm_struct *' argument in the pud_basic_tests()
test.  This never got exposed before, as PUD-based THP is available only on
the x86 platform, where the mm_pmd_folded(mm) call gets macro-replaced
without requiring the mm_struct, i.e. __is_defined(__PAGETABLE_PMD_FOLDED).

Link: https://lkml.kernel.org/r/1611137241-26220-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Tested-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Reviewed-by: Steven Price <steven.price@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-02-24 13:38:27 -08:00
Anshuman Khandual bb5c47ced4 mm/debug_vm_pgtable/basic: add validation for dirtiness after write protect
Patch series "mm/debug_vm_pgtable: Some minor updates", v3.

This series contains some cleanups and new test suggestions from Catalin
from an earlier discussion.

https://lore.kernel.org/linux-mm/20201123142237.GF17833@gaia/

This patch (of 2):

This adds validation tests for dirtiness after write protect conversion
for each page table level.  There are two new separate test types involved
here.

The first test ensures that a given page table entry does not become dirty
after pxx_wrprotect().  This is important for platforms like arm64, which
transfer and drop the hardware dirty bit (!PTE_RDONLY) to the software
dirty bit while making the entry a write-protected one.  This test ensures
that no fresh page table entry can be created with the hardware dirty bit
set.  The second test ensures that a given page table entry always
preserves the dirty information across pxx_wrprotect().

This adds two previously missing PUD level basic tests and, while here,
fixes pxx_wrprotect() related typos in the documentation file.

Link: https://lkml.kernel.org/r/1611137241-26220-1-git-send-email-anshuman.khandual@arm.com
Link: https://lkml.kernel.org/r/1611137241-26220-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Steven Price <steven.price@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-02-24 13:38:27 -08:00
Aneesh Kumar K.V f14312e1ed mm/debug_vm_pgtable: avoid doing memory allocation with pgtable_t mapped.
With highmem, pte_alloc_map() keeps the level-4 page table mapped using
kmap_atomic().  Avoid doing a new memory allocation with the page table
mapped like above.

[    9.409233] BUG: sleeping function called from invalid context at mm/page_alloc.c:4822
[    9.410557] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper
[    9.411932] no locks held by swapper/1.
[    9.412595] CPU: 0 PID: 1 Comm: swapper Not tainted 5.9.0-rc3-00323-gc50eb1ed654b5 #2
[    9.413824] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[    9.415207] Call Trace:
[    9.415651]  ? ___might_sleep.cold+0xa7/0xcc
[    9.416367]  ? __alloc_pages_nodemask+0x14c/0x5b0
[    9.417055]  ? swap_migration_tests+0x50/0x293
[    9.417704]  ? debug_vm_pgtable+0x4bc/0x708
[    9.418287]  ? swap_migration_tests+0x293/0x293
[    9.418911]  ? do_one_initcall+0x82/0x3cb
[    9.419465]  ? parse_args+0x1bd/0x280
[    9.419983]  ? rcu_read_lock_sched_held+0x36/0x60
[    9.420673]  ? trace_initcall_level+0x1f/0xf3
[    9.421279]  ? trace_initcall_level+0xbd/0xf3
[    9.421881]  ? do_basic_setup+0x9d/0xdd
[    9.422410]  ? do_basic_setup+0xc3/0xdd
[    9.422938]  ? kernel_init_freeable+0x72/0xa3
[    9.423539]  ? rest_init+0x134/0x134
[    9.424055]  ? kernel_init+0x5/0x12c
[    9.424574]  ? ret_from_fork+0x19/0x30

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.kernel.org/r/20200913110327.645310-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:14 -07:00
Aneesh Kumar K.V 401035d5c4 mm/debug_vm_pgtable: avoid none pte in pte_clear_test
pte_clear_tests operates on an existing pte entry.  Make sure that it is
not a none pte entry.

[aneesh.kumar@linux.ibm.com: avoid kernel crash with riscv]
  Link: https://lkml.kernel.org/r/20201015033206.140550-1-aneesh.kumar@linux.ibm.com

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Chancellor <natechancellor@gmail.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Link: https://lkml.kernel.org/r/20200902114222.181353-14-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:14 -07:00
Aneesh Kumar K.V 2b1dd67a78 mm/debug_vm_pgtable/hugetlb: disable hugetlb test on ppc64
The test seems to be missing quite a lot of details w.r.t. allocating the
correct pgtable_t page (huge_pte_alloc()), holding the right lock
(huge_pte_lock()), etc.  The vma used is also not a hugetlb VMA.

ppc64 does have runtime checks within CONFIG_DEBUG_VM for most of these.
Hence disable the test on ppc64.

[anshuman.khandual@arm.com: drop hugetlb_advanced_tests()]
  Link: https://lore.kernel.org/lkml/289c3fdb-1394-c1af-bdc4-5542907089dc@linux.ibm.com/#t
  Link: https://lkml.kernel.org/r/1600914446-21890-1-git-send-email-anshuman.khandual@arm.com

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-13-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:14 -07:00
Aneesh Kumar K.V 13af050630 mm/debug_vm_pgtable/pmd_clear: don't use pmd/pud_clear on pte entries
pmd_clear() should not be used to clear pmd level pte entries.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-12-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:14 -07:00
Aneesh Kumar K.V 87f34986de mm/debug_vm_pgtable/thp: use page table depost/withdraw with THP
Architectures like ppc64 use a deposited page table while updating huge
pte entries.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-11-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:14 -07:00
Aneesh Kumar K.V 6f302e270c mm/debug_vm_pgtable/locks: take correct page table lock
Make sure we call pte accessors with the correct lock held.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-10-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:14 -07:00
Aneesh Kumar K.V e8edf0adb9 mm/debug_vm_pgtable/locks: move non page table modifying test together
This will help in adding proper locks in a later patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200902114222.181353-9-aneesh.kumar@linux.ibm.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-10-16 11:11:14 -07:00