
90mdraid: fix/adjust force-run script

1) mdadm -As --auto=yes --run 2>&1 | vinfo (removed)

Currently such auto assembly will not complete or force-run partially
assembled arrays. It might assemble a "concurrent", separate array and
force-run it if possible (though the chance of missing components
suddenly showing up in this scenario - a script run after the udev
timeout - is pretty slim). See [1] for details. Also see #3 below.

2) mdadm -Is --run 2>&1 (removed)

This will only force-run native arrays - arrays in containers will not
be affected. See [1] for details. Also see #3 below.

3) mdadm -R run loop (implicitly handles #1 & #2)

This loop does everything that #1 & #2 are expected to do. Thus, the
above invocations are simply redundant, and this is the safest and most
flexible option.

Also, it shouldn't be necessary to descend into the /dev/md/ directory,
as the entries there are just symlinks to the /dev/md[0-9]* nodes.
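
For illustration (the "root" name below is hypothetical), a named node
under /dev/md/ simply resolves to one of the numbered nodes the loop
already iterates over:

    # minimal illustration; md127 is whatever numbered node the name points to
    readlink -f /dev/md/root    # -> /dev/md127, already matched by /dev/md[0-9]*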

Certain checks (array state, degraded state) were changed to strict
ones read from sysfs, instead of relying on udev environment tricks.
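
A minimal sketch of the strict-check idea, assuming a typical sysfs
layout and a hypothetical array name:

    md=md127                                        # hypothetical array
    state=$(cat "/sys/block/$md/md/array_state")    # e.g. inactive, clean, active
    degraded=$(cat "/sys/block/$md/md/degraded")    # count of missing members, 0 if none
    if [ "$state" = "inactive" ]; then
        mdadm -R "/dev/$md" 2>&1 | vinfo            # try to force-run it
    fi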

'cat' was added explicitly to the installed programs (it was already
used implicitly by the shutdown script anyway).

4) mdmon bug

See [1] for details as well. In short: force-run arrays in containers
will not have mdmon started for them, so we do that manually.
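
A sketch of the workaround, under the same assumptions as the sketch
above ($md names a member array of an external-metadata container that
was just force-run):

    # mdmon normally gets started for container members during assembly; after a
    # manual force-run of a degraded array it may be missing, so take over by hand
    if [ "$(cat "/sys/block/$md/md/degraded")" -gt 0 ]; then
        mdmon --takeover "/dev/$md"
    fi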

5) stop/run queue magic

Also removed. mdadm -R will only cause change events on the array
itself, and those should not be an issue.

[1] http://article.gmane.org/gmane.linux.raid/35133

Signed-off-by: Michal Soltys <soltys@ziu.info>
Branch: master
Author: Michal Soltys, committed by Harald Hoyer (14 years ago)
Commit: 66426469d0
Changed files:
  1. modules.d/90mdraid/mdraid_start.sh  (51 lines changed)
  2. modules.d/90mdraid/module-setup.sh  (2 lines changed)
  3. modules.d/90mdraid/parse-md.sh      (1 line changed)

modules.d/90mdraid/mdraid_start.sh

@@ -3,23 +3,34 @@
 # ex: ts=8 sw=4 sts=4 et filetype=sh
 
 type getarg >/dev/null 2>&1 || . /lib/dracut-lib.sh
-# run mdadm if udev has settled
-info "Assembling MD RAID arrays"
-udevadm control --stop-exec-queue
-mdadm -As --auto=yes --run 2>&1 | vinfo
-mdadm -Is --run 2>&1 | vinfo
-
-# there could still be some leftover devices
-# which have had a container added
-for md in /dev/md[0-9]* /dev/md/*; do
-    [ -b "$md" ] || continue
-    udevinfo="$(udevadm info --query=env --name=$md)"
-    strstr "$udevinfo" "MD_UUID=" && continue
-    strstr "$udevinfo" "MD_LEVEL=container" && continue
-    strstr "$udevinfo" "DEVTYPE=partition" && continue
-    mdadm --run "$md" 2>&1 | vinfo
-done
-unset udevinfo
-
-ln -s $(command -v mdraid-cleanup) $hookdir/pre-pivot/31-mdraid-cleanup.sh 2>/dev/null
-udevadm control --start-exec-queue
+_md_force_run() {
+    local _udevinfo
+    local _path_s
+    local _path_d
+    # try to force-run anything not running yet
+    for md in /dev/md[0-9]*; do
+        [ -b "$md" ] || continue
+        _udevinfo="$(udevadm info --query=env --name="$md")"
+        strstr "$_udevinfo" "MD_LEVEL=container" && continue
+        strstr "$_udevinfo" "DEVTYPE=partition" && continue
+
+        _path_s="$(udevadm info -q path -n "$md")/md/array_state"
+        [ ! -r "$_path_s" ] && continue
+
+        # inactive ?
+        [ "$(cat "$_path_s")" != "inactive" ] && continue
+
+        mdadm -R "$md" 2>&1 | vinfo
+
+        # still inactive ?
+        [ "$(cat "$_path_s")" = "inactive" ] && continue
+
+        _path_d="${_path_s%/*}/degraded"
+        [ ! -r "$_path_d" ] && continue
+
+        # workaround for mdmon bug
+        [ "$(cat "$_path_d")" -gt "0" ] && mdmon --takeover "$md"
+    done
+}
+
+_md_force_run

modules.d/90mdraid/module-setup.sh

@@ -37,7 +37,7 @@ installkernel() {
 }
 
 install() {
-    dracut_install mdadm partx
+    dracut_install mdadm partx cat
 
 
     # XXX: mdmon really needs to run as non-root?

modules.d/90mdraid/parse-md.sh

@@ -34,6 +34,7 @@ fi
 
 if ! getargbool 1 rd.md.conf -n rd_NO_MDADMCONF; then
     rm -f /etc/mdadm/mdadm.conf /etc/mdadm.conf
+    ln -s $(command -v mdraid-cleanup) $hookdir/pre-pivot/31-mdraid-cleanup.sh 2>/dev/null
 fi
 
 # noiswmd nodmraid for anaconda / rc.sysinit compatibility
