mdadm-3.2.6+:
The incremental assembly rule now contains an "--offroot" argument.
Update the regexp to catch this variant.
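Roughly, these are the two rule variants the regexp has to match now
(an illustrative sketch, not the verbatim rule text mdadm ships):
RUN+="/sbin/mdadm -I $env{DEVNAME}"
RUN+="/sbin/mdadm --offroot -I $env{DEVNAME}"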
mdadm-3.3+:
Rules were split into two files: 63-md-raid-arrays.rules
and 64-md-raid-assembly.rules. Install them both and edit
the latter.
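A minimal sketch of the matching module-setup.sh change, assuming
dracut's inst_rules helper (the edit is then applied to the installed
copy):
inst_rules 63-md-raid-arrays.rules 64-md-raid-assembly.rules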
The 69-dm-lvm-metad.rules file sets some udev environment variables that
make it possible to detect the right time to activate LVM on MD. MD is
very similar to DM during activation - it's usable only after proper
device activation, the CHANGE event. We need to distinguish between a
CHANGE event that comes from this activation and a CHANGE event that is
the outcome of the WATCH udev rule (otherwise we'd end up with LVM
activation done on each CHANGE event - which is wrong).
So we need the udev database to be persistent during the pivot to the
root fs, even for MD devices.
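A rough sketch of the idea from 69-dm-lvm-metad.rules (abbreviated, not
the verbatim rule): the first, activation-triggered CHANGE event sets a
flag in the udev database, so later WATCH-induced CHANGE events can be
told apart and skipped:
KERNEL=="md[0-9]*", ACTION=="change", ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
This is exactly why the udev database has to survive the pivot: the
flag is stored nowhere else.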
This prints the kernel command line parameters for the current disk
layout.
$ dracut --print-cmdline
rd.luks.uuid=luks-e68c8906-6542-4a26-83c4-91b4dd9f0471
rd.lvm.lv=debian/root rd.lvm.lv=debian/usr root=/dev/mapper/debian-root
rootflags=rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered
rootfstype=ext4
No automatic assembly is done by default anymore. You will have to
specify exactly which devices to assemble
("rd.md.uuid=", "rd.luks.uuid=", ...)
or use "rd.auto=1" or "rd.auto" on the kernel command line.
For big servers with thousands of disks, we don't want to assemble
everything by default (error prone, slow).
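For example, a hypothetical command line (placeholder UUID) that
assembles only the one array the root fs sits on:
rd.md.uuid=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd root=/dev/mapper/debian-root
or, to get the old assemble-everything behavior back:
rd.auto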
Currently anaconda provides rd.md=0 on the kernel command line as a
boot-time optimization if root is not on an md device. But this leads to
kdump failure: we copy the command line from the first kernel, and if
the dump target is on an md device, the dump fails, as with rd.md=0 we
never try to assemble md devices.
We have already set rd.md.uuid in the /etc/cmdline.d/ dir, though,
providing dracut the info about which md devices to assemble. So this
patch overrides the rd.md setting if rd.md.uuid is provided.
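A rough sketch of that override in the cmdline hook, assuming
dracut-lib.sh's getargs/getargbool/udevproperty helpers:
MD_UUID=$(getargs rd.md.uuid)
if [ -n "$MD_UUID" ]; then
    # rd.md.uuid present: assemble those arrays even if rd.md=0 was given
    info "rd.md.uuid specified, ignoring rd.md=0"
elif ! getargbool 1 rd.md; then
    # no uuids and rd.md=0: disable MD assembly entirely
    udevproperty rd_NO_MD=1
fi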
This is a stopgap measure to get kdump working on software RAID
devices. Harald seems to have bigger cleanup plans for rd.md. Once
that happens, this patch will not be needed and things should
automatically be fixed.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
In kernel_only mode, we don't want to write to /etc/cmdline.d.
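A rough sketch of the guard, assuming a $kernel_only flag is visible at
that point (the variable being written and the file name are
hypothetical):
if [[ $kernel_only != yes ]]; then
    echo "rd.md.uuid=${_uuid}" >> "${initdir}/etc/cmdline.d/90mdraid.conf"
fi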
Correctly return from the check functions, so we have a valid return
value from for_each_host_dev_fs().
The mdraid and dmraid functions had incorrect checks for the filesystem
type.
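A minimal sketch of the corrected shape of such a check(), assuming
dracut's for_each_host_dev_fs helper; the is_mdraid callback and its
arguments are illustrative:
check() {
    # hypothetical callback: $1 = device, $2 = filesystem type
    is_mdraid() { [[ $2 =~ (linux|isw|ddf)_raid_member ]]; }
    [[ $hostonly ]] || [[ $mount_needs ]] && {
        # propagate the callback's verdict instead of swallowing it
        for_each_host_dev_fs is_mdraid || return 1
    }
    return 0
}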
Like -H, we need to poll every module to check whether it is needed
to mount a specific device with '--mount'.
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
ID_FS_TYPE can be much more than just a ddf/imsm/linux raid member, so
do the proper checks.
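In rules terms, a rough sketch of the stricter matching (the GOTO labels
are illustrative; the ID_FS_TYPE values are the strings blkid reports
for RAID members):
ENV{ID_FS_TYPE}=="linux_raid_member", GOTO="md_try"
ENV{ID_FS_TYPE}=="isw_raid_member", GOTO="md_try"
ENV{ID_FS_TYPE}=="ddf_raid_member", GOTO="md_try"
GOTO="md_end"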
This reverts certain changes from:
cf5891424e
Signed-off-by: Michal Soltys <soltys@ziu.info>
Reworked the flow of the rules file a bit and removed redundant tests;
it should also be easier to follow. It's much shorter now as well, and a
bit more similar to the 90lvm script - both revolve around the same
concepts, after all.
There's no reason to treat conf-assembled arrays differently from
incremental ones. Once we hit the timeout in init's udev loop, we can
use a common script (mdraid_start.sh) to try to force inactive arrays
into degraded mode.
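A minimal sketch of that forcing, abbreviated from the idea behind
mdraid_start.sh (mdadm -R is the short form of --run, which starts an
array even while it's degraded):
for md in /dev/md[0-9_]*; do
    [ -b "$md" ] || continue
    # last resort: run the array even with members still missing
    mdadm -R "$md" 2>/dev/null
done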
md-finished.sh was kind of out of place - it didn't really wait for any
particular device(s) to show up, it just watched whether the onetime
mdadm scripts were still in place. Furthermore, after moving
mdraid_start to the --timeout initqueue, it didn't really have much to
watch at all, besides mdadm_auto (and that served no purpose, as we do
wait for concrete devices).
Either way, with the stock 64-md fixes, the current version of the
65-md*.rules does the following (the timeout registration is sketched
after the list):
- limits assembly to certain uuids, if specified
- watches for no ddf/imsm
- if mdadm.conf => sets up a onetime -As script, without the forced
  --run option
- if !mdadm.conf => assembles incrementally
- for both cases, sets up a timeout script, run-forcing arrays as a
  last resort
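A rough sketch of how that last-resort timeout script gets queued from
the rules, assuming dracut's initqueue tool (the --name value is
illustrative):
RUN+="/sbin/initqueue --timeout --name 51-mdraid_start --onetime --unique /sbin/mdraid_start.sh"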
Signed-off-by: Michal Soltys <soltys@ziu.info>