x86/mm: Use PAGE_ALIGNED(x) instead of IS_ALIGNED(x, PAGE_SIZE)

Bugzilla: https://bugzilla.redhat.com/2120352

commit e19d11267f0e6c8aff2d15d2dfed12365b4c9184
Author: Fanjun Kong <bh1scw@gmail.com>
Date:   Thu May 26 22:20:39 2022 +0800

    x86/mm: Use PAGE_ALIGNED(x) instead of IS_ALIGNED(x, PAGE_SIZE)

    The <linux/mm.h> header already provides the PAGE_ALIGNED() macro. Let's
    use this macro instead of calling IS_ALIGNED() with PAGE_SIZE directly.

    No change in functionality.

    [ mingo: Tweak changelog. ]

    Signed-off-by: Fanjun Kong <bh1scw@gmail.com>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lore.kernel.org/r/20220526142038.1582839-1-bh1scw@gmail.com

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>
commit 86674c7059 (parent a0affca890)
Author: Chris von Recklinghausen <crecklin@redhat.com>
Date:   2022-10-12 06:42:11 -04:00

 1 file changed, 4 insertions(+), 4 deletions(-)
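
For reference, PAGE_ALIGNED(x) is a convenience wrapper that expands to the same
IS_ALIGNED() check with PAGE_SIZE filled in, so the assertions below keep their
behaviour. A minimal sketch of the relevant macros (paraphrased from <linux/mm.h>
and <linux/align.h>; their exact location varies by kernel version):

    /* IS_ALIGNED(): true if x is a multiple of a (a must be a power of two) */
    #define IS_ALIGNED(x, a)        (((x) & ((typeof(x))(a) - 1)) == 0)

    /* PAGE_ALIGNED(): test whether an address (unsigned long or pointer) is page-aligned */
    #define PAGE_ALIGNED(addr)      IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)

    /* Hence the two forms are equivalent: */
    VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));   /* old */
    VM_BUG_ON(!PAGE_ALIGNED(start));            /* new */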


diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1241,8 +1241,8 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct,
 void __ref vmemmap_free(unsigned long start, unsigned long end,
 		struct vmem_altmap *altmap)
 {
-	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON(!PAGE_ALIGNED(start));
+	VM_BUG_ON(!PAGE_ALIGNED(end));
 
 	remove_pagetable(start, end, false, altmap);
 }
@@ -1606,8 +1606,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON(!PAGE_ALIGNED(start));
+	VM_BUG_ON(!PAGE_ALIGNED(end));
 
 	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);