systemd/backport-unit-add-jobs-that-were-skipped-because-of-ratelimit.patch
wangyuhang 5304d4b4cc backport: sync patches from systemd community; Fix compilation failure with -O0 option
(cherry picked from commit 112d69b7c80bafc341b94dbc98d8efd8c288f7a3)
2023-08-15 15:49:45 +08:00


From c29e6a9530316823b0455cd83eb6d0bb8dd664f4 Mon Sep 17 00:00:00 2001
From: Michal Sekletar <msekleta@redhat.com>
Date: Thu, 25 Nov 2021 18:28:25 +0100
Subject: [PATCH] unit: add jobs that were skipped because of ratelimit back to
run_queue
The assumption in edc027b was that a job we first skipped because of an
active ratelimit is still in the run_queue, hence we trigger the queue
and it gets dispatched in the next iteration. Actually, we remove jobs
from the run_queue in job_run_and_invalidate() before we call
unit_start(). Hence, if we want to attempt to run the job again in the
future, we need to add it back to the run_queue.
Fixes #21458
Conflict:NA
Reference:https://github.com/systemd/systemd-stable/commit/c29e6a9530316823b0455cd83eb6d0bb8dd664f4
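
For illustration only (this note is not part of the patch): below is a minimal, self-contained toy in C that sketches the behaviour described above, assuming a trivially simplified job queue. All identifiers here (ToyJob, toy_dispatch_run_queue, toy_on_ratelimit_expire, ...) are invented for the example; only the functions named in the comments (job_run_and_invalidate(), unit_start(), job_add_to_run_queue(), manager_trigger_run_queue(), mount_on_ratelimit_expire()) are real systemd symbols taken from the text above.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_JOBS 8

typedef struct ToyJob {
        const char *name;
        bool is_mount;
        bool queued;    /* currently on the run queue? */
        bool done;
} ToyJob;

static ToyJob jobs[MAX_JOBS];   /* stands in for m->jobs */
static size_t n_jobs = 0;
static bool ratelimit_active = true;

static void toy_add_to_run_queue(ToyJob *j) {
        /* Analogous to job_add_to_run_queue(). */
        j->queued = true;
}

static void toy_dispatch_run_queue(void) {
        for (size_t i = 0; i < n_jobs; i++) {
                ToyJob *j = &jobs[i];

                if (!j->queued || j->done)
                        continue;

                /* As the commit message notes for job_run_and_invalidate():
                 * the job leaves the run queue *before* the start attempt. */
                j->queued = false;

                if (j->is_mount && ratelimit_active) {
                        /* Skipped, like unit_start() under an active ratelimit.
                         * Without the fix, nothing re-queues this job. */
                        printf("skipped %s (ratelimited)\n", j->name);
                        continue;
                }

                j->done = true;
                printf("started %s\n", j->name);
        }
}

static void toy_on_ratelimit_expire(void) {
        /* Analogous to mount_on_ratelimit_expire() with the fix applied. */
        ratelimit_active = false;

        /* Re-queue every pending mount job (the HASHMAP_FOREACH loop) ... */
        for (size_t i = 0; i < n_jobs; i++)
                if (jobs[i].is_mount && !jobs[i].done)
                        toy_add_to_run_queue(&jobs[i]);

        /* ... and dispatch the queue again (manager_trigger_run_queue()). */
        toy_dispatch_run_queue();
}

int main(void) {
        jobs[n_jobs++] = (ToyJob) { .name = "tmp.mount", .is_mount = true, .queued = true };

        toy_dispatch_run_queue();   /* the mount job is skipped and dropped from the queue */
        toy_on_ratelimit_expire();  /* with the fix it is re-queued and finally started */
        return 0;
}

The toy only mirrors the ordering problem the message points out; it does not reproduce systemd's actual job, unit, or event-source machinery.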
---
src/core/mount.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/src/core/mount.c b/src/core/mount.c
index 90b11347f7..35368fe8e6 100644
--- a/src/core/mount.c
+++ b/src/core/mount.c
@@ -1840,9 +1840,18 @@ static bool mount_is_mounted(Mount *m) {
 
 static int mount_on_ratelimit_expire(sd_event_source *s, void *userdata) {
         Manager *m = userdata;
+        Job *j;
 
         assert(m);
 
+        /* Let's enqueue all start jobs that were previously skipped because of active ratelimit. */
+        HASHMAP_FOREACH(j, m->jobs) {
+                if (j->unit->type != UNIT_MOUNT)
+                        continue;
+
+                job_add_to_run_queue(j);
+        }
+
         /* By entering ratelimited state we made all mount start jobs not runnable, now rate limit is over so
          * let's make sure we dispatch them in the next iteration. */
         manager_trigger_run_queue(m);
--
2.33.0