ID_FS_TYPE can hold much more than just the ddf/imsm/linux raid member
values, so do the proper checks.
This reverts certain changes from:
cf5891424e
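For illustration, a check of that kind could look like the following shell
sketch ("$dev" and the exact rule wiring are assumptions, not the actual
rules file):

    # only these ID_FS_TYPE values belong to the md rules; any other
    # filesystem type (ext4, swap, ...) must be left alone
    case "$(blkid -o value -s TYPE "$dev")" in
        linux_raid_member|ddf_raid_member|isw_raid_member)
            : # raid member, continue with md handling
            ;;
        *)
            exit 0 # not ours
            ;;
    esac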
Signed-off-by: Michal Soltys <soltys@ziu.info>
Reworked the flow of the rules file a bit and removed redundant tests; it
should also be easier to follow. It's much shorter now as well, and a bit
more similar to the 90lvm script - both revolve around the same concepts
after all.
There's no reason to treat conf-assembled arrays differently from
incremental ones. Once we hit the timeout in init's udev loop, we can use a
common script (mdraid_start.sh) to try to force inactive arrays
into degraded mode.
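A minimal sketch of that fallback, assuming mdadm -R semantics and an
illustrative device glob (the real script's device selection differs):

    #!/bin/sh
    # run from the initqueue timeout hook as a last resort:
    # force-start anything that was assembled but is still inactive
    for md in /dev/md[0-9]* /dev/md/*; do
        [ -b "$md" ] || continue
        # mdadm -R starts an assembled-but-inactive array, degraded if
        # some members are missing; errors from arrays that are already
        # running are ignored
        mdadm -R "$md" 2>/dev/null
    done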
md-finished.sh was kind of out of place - it didn't really wait for any
particular device(s) to show up, it just watched whether the onetime mdadm
scripts were still in place. Furthermore, after moving mdraid_start to the
--timeout initqueue, it didn't have much left to watch at all, besides
mdadm_auto (and that served no purpose, as we already wait for concrete
devices).
Either way, with the stock 64-md fixes, the current version of 65-md*.rules
does the following:
- limits assembly to certain uuids, if specified
- watches that no ddf/imsm metadata is present
- if mdadm.conf => sets up a onetime -As script, without the forced --run option
- if !mdadm.conf => assembles incrementally
- in both cases, sets up a timeout script that run-forces arrays as a last
  resort (see the sketch after this list)
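A rough sketch of the two assembly paths those rules set up ("$dev" and the
exact initqueue wiring are illustrative, not the real script contents):

    # mdadm.conf present: onetime script assembles strictly from the
    # config file, without the forced --run option
    mdadm -As

    # no mdadm.conf: each raid member is added incrementally as udev
    # processes it ($dev is the member device of the current event)
    mdadm -I "$dev"

In both cases the run-forcing fallback from the earlier mdraid_start.sh
sketch is queued as the --timeout script.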
Signed-off-by: Michal Soltys <soltys@ziu.info>
Remove the whole "start a container" logic.
Containers, once assembled, always remain in the 'inactive' state.
Any attempt to run a container with mdadm -IR is a no-op, and any
attempt with just mdadm -R ends with an error.
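For reference, the behavior described above, with an illustrative container
device name:

    mdadm -IR /dev/md/imsm0   # no-op: the container stays 'inactive'
    mdadm -R  /dev/md/imsm0   # fails with an error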
Signed-off-by: Michal Soltys <soltys@ziu.info>
This is a saner solution than ignoring subsequent "change" events.
The only danger is that we could loop, if an lvm scan triggers a broken
md partition, which triggers a broken PV, and so on.
Better to fix the scanning tools not to emit change events for devices
if no action was taken.
Copy /etc/mdadm.conf to the initramfs (even for non-hostonly) if
mdadmconf="yes" is set in dracut.conf or --mdadmconf is specified on the
dracut command line.
This was done because there seems to be _no_ sane way to autoassemble md
raid arrays.
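Both ways of enabling this look roughly as follows (the image path and
kernel version are illustrative):

    # in dracut.conf
    mdadmconf="yes"

    # or on the dracut command line
    dracut --mdadmconf /boot/initramfs-$(uname -r).img $(uname -r)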
Also moved rd_NO_MD to a udev ENV.
LVM
    rd_NO_LVM
        disable LVM detection
    rd_LVM_VG=<volume group name>
        only activate the volume groups with the given name

crypto LUKS
    rd_NO_LUKS
        disable crypto LUKS detection
    rd_LUKS_UUID=<luks uuid>
        only activate the LUKS partitions with the given UUID

MD
    rd_NO_MD
        disable MD RAID detection
    rd_MD_UUID=<md uuid>
        only activate the raid sets with the given UUID

DMRAID
    rd_NO_DM
        disable DM RAID detection
    rd_DM_UUID=<dmraid uuid>
        only activate the raid sets with the given UUID
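For example, a kernel command line that disables dmraid and LUKS handling
while restricting LVM and MD activation might look like this (the volume
group name and UUID are placeholders):

    rd_NO_DM rd_NO_LUKS rd_LVM_VG=vg_root rd_MD_UUID=<uuid of the wanted raid set>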
Intel BIOS raid is being shifted from dmraid to mdraid because mdraid offers
more features. So if an imsm-metadata-capable mdadm is present, use mdraid
instead of dmraid for isw_raid_member devices.
This patch also adds code to mdraid_start.sh so that the raid sets
inside the imsm containers get started once udev is done probing
(doing this earlier leads to potentially degraded use of the sets and
an unwanted resync).
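One plausible shape of that hook, heavily simplified (the container glob and
the settle call are assumptions about where it fits; note that the "start a
container" commit above later drops this logic again):

    # wait until udev has finished probing all member disks, then let
    # mdadm assemble and run the raid sets inside each imsm container
    udevadm settle
    for container in /dev/md/imsm*; do
        [ -b "$container" ] || continue
        mdadm -IR "$container"
    done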