From ec56b1904a256418dc74ce9e59e583cde09d4521 Mon Sep 17 00:00:00 2001
From: Phil Auld
Date: Tue, 5 Sep 2023 14:57:33 -0400
Subject: [PATCH] sched: Change wait_task_inactive()s match_state

JIRA: https://issues.redhat.com/browse/RHEL-1536

Conflicts: This was applied out of order with f9fc8cad9728 ("sched: Add
TASK_ANY for wait_task_inactive()"), so the code was adjusted to match
what the result should have been.

commit 9204a97f7ae862fc8a3330ec8335917534c3fb63
Author: Peter Zijlstra
Date:   Mon Aug 22 13:18:19 2022 +0200

    sched: Change wait_task_inactive()s match_state

    Make wait_task_inactive()'s @match_state work like ttwu()'s @state.
    That is, instead of an equal comparison, use it as a mask. This
    allows matching multiple block conditions.

    (removes the unlikely; it doesn't make sense how it's only part of
    the condition)

    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lore.kernel.org/r/20220822114648.856734578@infradead.org

Signed-off-by: Phil Auld
---
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 993477b19492..62a907193ce9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3445,7 +3445,7 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
 	 * is actually now running somewhere else!
 	 */
 	while (task_on_cpu(rq, p)) {
-		if (match_state && unlikely(READ_ONCE(p->__state) != match_state))
+		if (!(READ_ONCE(p->__state) & match_state))
 			return 0;
 		cpu_relax();
 	}
@@ -3460,7 +3460,7 @@ unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state
 	running = task_on_cpu(rq, p);
 	queued = task_on_rq_queued(p);
 	ncsw = 0;
-	if (!match_state || READ_ONCE(p->__state) == match_state)
+	if (READ_ONCE(p->__state) & match_state)
 		ncsw = p->nvcsw | LONG_MIN;	/* sets MSB */
 	task_rq_unlock(rq, p, &rf);