In kernel_only mode, we don't want to write /etc/cmdline.d
Correctly return from the check functions, so that
for_each_host_dev_fs() has a valid return value.
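A minimal sketch of the shape of the fix (callback name and fs test are
illustrative, not the actual patch): for_each_host_dev_fs() reports
whether its callback succeeded for any host device, so the callback must
return an explicit status instead of falling through:

    check_raid_member() {
        local _dev=$1 _fs=$2
        # non-matching devices must yield a non-zero status
        [ "$_fs" = "linux_raid_member" ] || return 1
        return 0
    }
    for_each_host_dev_fs check_raid_member || return 1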
mdraid and dmraid functions had wrong checks for the filesystem
type.
Like with -H, we need to poll every module to check whether it is needed
to mount a specific device given with '--mount'.
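A hedged sketch of the idea, not the actual dracut code (module path and
variable names are simplified):

    # ask every module's check script whether it is needed for the
    # devices we were asked to mount, just like the -H pass does
    for moddir in /usr/share/dracut/modules.d/[0-9][0-9]*; do
        [ -x "$moddir/check" ] || continue
        "$moddir/check" $hostonly && mods+=" ${moddir##*/[0-9][0-9]}"
    done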
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
ID_FS_TYPE can be much more than just a ddf/imsm/linux raid member, so
do the proper checks.
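For illustration, a hedged sketch of such a check ("isw" is blkid's name
for imsm members; the surrounding context is simplified):

    # match the exact raid-member types instead of substrings
    case $ID_FS_TYPE in
        linux_raid_member|isw_raid_member|ddf_raid_member) ;;
        *) return 1 ;;
    esac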
This reverts certain changes from:
cf5891424e
Signed-off-by: Michal Soltys <soltys@ziu.info>
Reworked the flow of the rules file a bit and removed redundant tests;
it should also be easier to follow. It's much shorter now as well, and a
bit more similar to the 90lvm script - both revolve around the same
concepts after all.
There's no reason to treat conf-assembled arrays differently from
incremental ones. Once we hit the timeout in init's udev loop, we can
use a common script (mdraid_start.sh) to try to force inactive arrays
into degraded mode.
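A minimal sketch of what such a common timeout fallback can look like
(simplified; the real script carries more checks):

    # mdraid_start.sh, conceptually: force-run whatever is still
    # inactive, accepting a degraded array as a last resort
    for md in /dev/md[0-9]*; do
        [ -b "$md" ] || continue
        mdadm -R "$md" 2>/dev/null
    done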
md-finished.sh was kind of out of place - it didn't really wait for any
particular device(s) to show up, it just watched whether the onetime
mdadm scripts were still in place. Furthermore, after moving
mdraid_start to the --timeout initqueue, it didn't really have much to
watch at all, besides mdadm_auto (and that served no purpose, as we do
wait for concrete devices).
Either way, with the stock 64-md fixes, the current version of
65-md*.rules does the following (see the sketch after this list):
- limits assembly to certain uuids, if specified
- watches for no ddf/imsm
- if mdadm.conf => sets up a onetime -As script, without the forced --run option
- if !mdadm.conf => assembles incrementally
- in both cases, sets up a timeout script, force-running arrays as a last resort
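A loose udev-rule sketch of that flow (file name, labels and the exact
matches are illustrative, not the verbatim rules):

    SUBSYSTEM!="block", GOTO="md_inc_end"
    ENV{ID_FS_TYPE}!="linux_raid_member", GOTO="md_inc_end"
    # mdadm.conf present => onetime -As script, no forced --run
    TEST=="/etc/mdadm.conf", RUN+="/sbin/initqueue --onetime /sbin/mdadm -As"
    # no mdadm.conf => incremental assembly
    TEST!="/etc/mdadm.conf", RUN+="/sbin/mdadm -I $env{DEVNAME}"
    # both cases: force-run leftovers as a last resort after timeout
    RUN+="/sbin/initqueue --timeout --onetime /sbin/mdraid_start.sh"
    LABEL="md_inc_end"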
Signed-off-by: Michal Soltys <soltys@ziu.info>
1) mdadm -As --auto=yes --run 2>&1 | vinfo (removed)
Currently such auto assembly will not complete or force-run partially
assembled arrays. It might assemble a "concurrent" separate array and
force-run it, if possible (though the chances of missing components
suddenly showing up in this scenario - a script run after udev timeout -
are pretty thin). See [1] for details. Also see #3 below.
2) mdadm -Is --run 2>&1 (removed)
This will only force-run native arrays - arrays in containers will not
be affected. See [1] for details. Also see #3 below.
3) mdadm -R run loop (implicitly handles #1 & #2)
This loop does everything that #1 & #2 are expected to do. Thus, the
above invocations are simply redundant, and this is the safest and most
flexible option.
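A condensed sketch of that loop (the sysfs attribute is real, the
surrounding logic is simplified):

    for md in /dev/md[0-9]*; do
        [ -b "$md" ] || continue
        # strict check: only nudge arrays that are still inactive
        state=$(cat "/sys/class/block/${md##*/}/md/array_state" 2>/dev/null)
        [ "$state" = "inactive" ] || continue
        mdadm -R "$md" 2>&1 | vinfo
    done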
Also, it shouldn't be necessary to go under the md/ directory, as those
are just symlinks to /dev/md[0-9]*.
Certain checks were changed to strict ones (array state, degraded state)
instead of relying on env tricks.
'cat' was added explicitly to the installed programs (it was already
used implicitly in the shutdown script anyway).
4) mdmon bug
See [1] for details as well. In short, force-run arrays in containers
will not have mdmon started, so we do that manually.
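A hedged sketch (the container path is illustrative; mdmon takes the
container device as its argument):

    # force-run members of a container get no mdmon automatically,
    # so start one for the container ourselves
    mdmon /dev/md/imsm0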
5) stop/run queue magic
Also removed. mdadm -R will only cause change events on the array
itself, and those should not be an issue.
[1] http://article.gmane.org/gmane.linux.raid/35133
Signed-off-by: Michal Soltys <soltys@ziu.info>
Stop both arrays (first pass) and containers (second pass); see the
sketch below.
Loop only over /dev/md[0-9]*.
Call the cleanup script only once, and make sure it runs after the crypt
cleanup.
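A minimal sketch of the two passes (simplified):

    # pass 1 stops regular arrays, releasing container members;
    # pass 2 can then stop the now-idle containers
    for _pass in 1 2; do
        for md in /dev/md[0-9]*; do
            [ -b "$md" ] || continue
            mdadm -S "$md" >/dev/null 2>&1
        done
    done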
Signed-off-by: Michal Soltys <soltys@ziu.info>
Remove the whole "start a container" logic.
Containers, once assembled, always remain in the 'inactive' state.
Any attempt to run a container with mdadm -IR is a no-op, and any
attempt with just mdadm -R ends with an error.
Signed-off-by: Michal Soltys <soltys@ziu.info>
The currently shipped mdadm rules incrementally assemble all imsm and
native raids, and do so unconditionally. This causes a few issues:
- fine-grained controls in 65-md* are shadowed - for example,
mdadm.conf's presence tests or uuid checks
- 90dmraid might also conflict with 90mdraid, if the user prefers the
former to handle containers
- possibly other subtle issues
This patch adjusts the behaviour.
Signed-off-by: Michal Soltys <soltys@ziu.info>
Given that we boot into a modern Linux distribution with the "/run" toplevel
directory, we can easily mount-move the whole /run directory to the real
root in the end and have the complete initramfs available later on in
/run/initramfs. All log files and /run state are still accessible, and
/run/initramfs can be removed later on to save space.
Because the kernel does not mount a tmpfs on /run prior to unpacking the
initramfs cpio image, we have to copy ourselves to a tmpfs very early
and mount it on /run.
Due to the lazy umount, the old initramfs binaries should be removed in
the end by switch_root.
This feature can be turned on with "--prefix".
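A conceptual sketch of both ends of the hand-over (simplified; /newrun
is an illustrative staging mount point, /sysroot is the real root):

    # very early: no tmpfs on /run yet, so create one, copy the
    # unpacked initramfs content over, and move it into place
    mount -t tmpfs -o mode=0755,nosuid,nodev tmpfs /newrun
    cp -a /run/* /newrun/ 2>/dev/null
    mount --move /newrun /run

    # at switch_root time: hand the whole /run to the real root
    mount --move /run /sysroot/run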
We want all "/var/run" information to live in /dev/.run until the real
root is mounted.
Therefore we mount a tmpfs on /dev/.run, which can/will be bind- or
move-mounted onto /var/run later on.
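A short sketch of the idea (simplified; $NEWROOT is where the real root
gets mounted):

    mount -t tmpfs -o mode=0755,nosuid,nodev tmpfs /dev/.run
    # later, once the real root is mounted:
    mount --move /dev/.run "$NEWROOT/var/run"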