x86/sev: Make sure pages are not skipped during kdump

JIRA: https://issues.redhat.com/browse/RHEL-10019

commit 82b7f88f2316c5442708daeb0b5ec5aa54c8ff7f
Author: Ashish Kalra <ashish.kalra@amd.com>
Date:   Tue May 6 18:35:29 2025 +0000

    x86/sev: Make sure pages are not skipped during kdump

    When shared pages are being converted to private during kdump, additional
    checks are performed. They include handling the case of a GHCB page being
    contained within a huge page.

    Currently, this check incorrectly skips a page just below the GHCB page from
    being transitioned back to private during kdump preparation.

    This skipped page causes a 0x404 #VC exception when it is accessed later while
    dumping guest memory for vmcore generation.

    Correct the range to be checked for GHCB contained in a huge page.  Also,
    ensure that the skipped huge page containing the GHCB page is transitioned
    back to private by applying the correct address mask later when changing GHCBs
    to private at end of kdump preparation.

      [ bp: Massage commit message. ]

    Fixes: 3074152e56c9 ("x86/sev: Convert shared memory back to private on kexec")
    Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
    Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
    Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
    Tested-by: Srikanth Aithal <sraithal@amd.com>
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/20250506183529.289549-1-Ashish.Kalra@amd.com

Signed-off-by: Bandan Das <bsd@redhat.com>
This commit is contained in:
Bandan Das 2025-06-03 10:21:17 -04:00
parent 4ba21ea6a0
commit a777e41d92
1 changed file with 7 additions and 4 deletions


@@ -1152,7 +1152,8 @@ static void unshare_all_memory(void)
 			data = per_cpu(runtime_data, cpu);
 			ghcb = (unsigned long)&data->ghcb_page;
 
-			if (addr <= ghcb && ghcb <= addr + size) {
+			/* Handle the case of a huge page containing the GHCB page */
+			if (addr <= ghcb && ghcb < addr + size) {
 				skipped_addr = true;
 				break;
 			}
@@ -1264,8 +1265,8 @@ static void shutdown_all_aps(void)
 
 void snp_kexec_finish(void)
 {
 	struct sev_es_runtime_data *data;
+	unsigned long size, addr;
 	unsigned int level, cpu;
-	unsigned long size;
 	struct ghcb *ghcb;
 	pte_t *pte;
@@ -1293,8 +1294,10 @@ void snp_kexec_finish(void)
 		data = per_cpu(runtime_data, cpu);
 		ghcb = &data->ghcb_page;
 		pte = lookup_address((unsigned long)ghcb, &level);
 		size = page_level_size(level);
-		set_pte_enc(pte, level, (void *)ghcb);
-		snp_set_memory_private((unsigned long)ghcb, (size / PAGE_SIZE));
+		/* Handle the case of a huge page containing the GHCB page */
+		addr = (unsigned long)ghcb & page_level_mask(level);
+		set_pte_enc(pte, level, (void *)addr);
+		snp_set_memory_private(addr, (size / PAGE_SIZE));
 	}
 }