Mailinglist Archive: opensuse-commit (1903 mails)

commit mdadm for openSUSE:Factory
Hello community,

here is the log from the commit of package mdadm for openSUSE:Factory checked
in at 2019-04-03 09:23:57
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/mdadm (Old)
and /work/SRC/openSUSE:Factory/.mdadm.new.25356 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "mdadm"

Wed Apr 3 09:23:57 2019 rev:118 rq:689523 version:4.1

Changes:
--------
--- /work/SRC/openSUSE:Factory/mdadm/mdadm.changes 2019-02-24 17:04:57.328635941 +0100
+++ /work/SRC/openSUSE:Factory/.mdadm.new.25356/mdadm.changes 2019-04-03 09:24:04.303703795 +0200
@@ -1,0 +2,9 @@
+Thu Mar 28 11:43:38 UTC 2019 - colyli@xxxxxxxx
+
+- imsm: finish recovery when drive with rebuild fails (bsc#1126975)
+ 0010-imsm-finish-recovery-when-drive-with-rebuild-fails.patch
+- mdmon: don't attempt to manage new arrays when terminating
+ (bsc#1127526)
+ 0011-mdmon-don-t-attempt-to-manage-new-arrays-when-termin.patch
+
+-------------------------------------------------------------------

New:
----
0010-imsm-finish-recovery-when-drive-with-rebuild-fails.patch
0011-mdmon-don-t-attempt-to-manage-new-arrays-when-termin.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ mdadm.spec ++++++
--- /var/tmp/diff_new_pack.OASyaa/_old 2019-04-03 09:24:07.711705400 +0200
+++ /var/tmp/diff_new_pack.OASyaa/_new 2019-04-03 09:24:07.747705417 +0200
@@ -51,6 +51,8 @@
Patch17: 0007-Grow-report-correct-new-chunk-size.patch
Patch18: 0008-policy.c-prevent-NULL-pointer-referencing.patch
Patch19: 0009-Detail.c-do-not-skip-first-character-when-calling-xs.patch
+Patch20: 0010-imsm-finish-recovery-when-drive-with-rebuild-fails.patch
+Patch21: 0011-mdmon-don-t-attempt-to-manage-new-arrays-when-termin.patch
Patch1001: 1001-display-timeout-status.patch
%define _udevdir %(pkg-config --variable=udevdir udev)
%define _systemdshutdowndir %{_unitdir}/../system-shutdown
@@ -70,6 +72,8 @@
%patch17 -p1
%patch18 -p1
%patch19 -p1
+%patch20 -p1
+%patch21 -p1
%patch1001 -p1

%build

++++++ 0010-imsm-finish-recovery-when-drive-with-rebuild-fails.patch ++++++
From a4e96fd8f3f0b5416783237c1cb6ee87e7eff23d Mon Sep 17 00:00:00 2001
From: Mariusz Tkaczyk <mariusz.tkaczyk@xxxxxxxxx>
Date: Fri, 8 Feb 2019 11:07:10 +0100
Subject: [PATCH] imsm: finish recovery when drive with rebuild fails
Git-commit: a4e96fd8f3f0b5416783237c1cb6ee87e7eff23d
Patch-mainline: mdadm-4.1-12
References: bsc#1126975

Commit d7a1fda2769b ("imsm: update metadata correctly while raid10 double
degradation") resolves the main IMSM double degradation problems, but it
omits one case: metadata now hangs in the rebuilding state if the drive
under rebuild is removed during recovery from double degradation.

The root cause of this problem is that the code compares the new map_state
with the current one and, if both are degraded, assumes that nothing new
has happened.

Don't rely on map states; just check whether the device has failed. If the
drive under rebuild fails, then finish the migration; in other cases update
the map state only (a second failure means that the destination map state
can't be normal).

To avoid problems with reassembly, move end_migration() (called after a
successful recovery from double degradation) to after the check of whether
recovery has really finished; for details see commit 7ce057018 ("imsm: fix:
rebuild does not continue after reboot").
Remove the redundant code responsible for finishing the rebuild process;
end_migration() does exactly the same. Set last_checkpoint to 0 to prepare
it for the next rebuild.

Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@xxxxxxxxx>
Signed-off-by: Jes Sorensen <jsorensen@xxxxxx>
Signed-off-by: Coly Li <colyli@xxxxxxx>
---
super-intel.c | 26 +++++++++++---------------
1 file changed, 11 insertions(+), 15 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index d2035cc..38a1b6c 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -8560,26 +8560,22 @@ static void imsm_set_disk(struct active_array *a, int n, int state)
}
if (is_rebuilding(dev)) {
dprintf_cont("while rebuilding ");
- if (map->map_state != map_state) {
- dprintf_cont("map state change ");
+ if (state & DS_FAULTY) {
+ dprintf_cont("removing failed drive ");
if (n == map->failed_disk_num) {
dprintf_cont("end migration");
end_migration(dev, super, map_state);
+ a->last_checkpoint = 0;
} else {
- dprintf_cont("raid10 double degradation, map state change");
+ dprintf_cont("fail detected during rebuild, changing map state");
map->map_state = map_state;
}
super->updates_pending++;
- } else if (!rebuild_done)
- break;
- else if (n == map->failed_disk_num) {
- /* r10 double degraded to degraded transition */
- dprintf_cont("raid10 double degradation end migration");
- end_migration(dev, super, map_state);
- a->last_checkpoint = 0;
- super->updates_pending++;
}

+ if (!rebuild_done)
+ break;
+
/* check if recovery is really finished */
for (mdi = a->info.devs; mdi ; mdi = mdi->next)
if (mdi->recovery_start != MaxSector) {
@@ -8588,7 +8584,7 @@ static void imsm_set_disk(struct active_array *a, int n, int state)
}
if (recovery_not_finished) {
dprintf_cont("\n");
- dprintf_cont("Rebuild has not finished yet, map state changes only if raid10 double degradation happens");
+ dprintf_cont("Rebuild has not finished yet");
if (a->last_checkpoint < mdi->recovery_start) {
a->last_checkpoint =
mdi->recovery_start;
@@ -8598,9 +8594,9 @@ static void imsm_set_disk(struct active_array *a, int n, int state)
}

dprintf_cont(" Rebuild done, still degraded");
- dev->vol.migr_state = 0;
- set_migr_type(dev, 0);
- dev->vol.curr_migr_unit = 0;
+ end_migration(dev, super, map_state);
+ a->last_checkpoint = 0;
+ super->updates_pending++;

for (i = 0; i < map->num_members; i++) {
int idx = get_imsm_ord_tbl_ent(dev, i, MAP_0);
--
2.16.4

++++++ 0011-mdmon-don-t-attempt-to-manage-new-arrays-when-termin.patch ++++++
From 69d084784de196acec8ab703cd1b379af211d624 Mon Sep 17 00:00:00 2001
From: Artur Paszkiewicz <artur.paszkiewicz@xxxxxxxxx>
Date: Fri, 22 Feb 2019 10:15:45 +0100
Subject: [PATCH] mdmon: don't attempt to manage new arrays when terminating
Git-commit: 69d084784de196acec8ab703cd1b379af211d624
Patch-mainline: mdadm-4.1-12
References: bsc#1127526

When mdmon gets a SIGTERM, it stops managing arrays that are clean. If
there is more than one array in the container, one of them is dirty, and
the clean one is still present in mdstat, mdmon will treat the clean one
as a new array and start managing it again. This leads to a cycle of
remove_old() / manage_new() calls for the clean array until the other
one also becomes clean.

Prevent this by not calling manage_new() if sigterm is set. Also, remove
the check for sigterm in manage_new(), because that condition can never
be true anymore.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@xxxxxxxxx>
Signed-off-by: Jes Sorensen <jsorensen@xxxxxx>
Signed-off-by: Coly Li <colyli@xxxxxxx>
---
managemon.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/managemon.c b/managemon.c
index 101231c..29b91ba 100644
--- a/managemon.c
+++ b/managemon.c
@@ -727,9 +727,7 @@ static void manage_new(struct mdstat_ent *mdstat,
dprintf("inst: %s action: %d state: %d\n", inst,
new->action_fd, new->info.state_fd);

- if (sigterm)
- new->info.safe_mode_delay = 1;
- else if (mdi->safe_mode_delay >= 50)
+ if (mdi->safe_mode_delay >= 50)
/* Normal start, mdadm set this. */
new->info.safe_mode_delay = mdi->safe_mode_delay;
else
@@ -803,7 +801,7 @@ void manage(struct mdstat_ent *mdstat, struct supertype *container)
break;
}
}
- if (a == NULL || !a->container)
+ if ((a == NULL || !a->container) && !sigterm)
manage_new(mdstat, container, a);
}
}
--
2.16.4

