Mailing List Archive

Can initrd and/or RAID be disabled at boot?
Hi,
This is related to my thread from a few days ago about the
disappointing speed of my RAID6 root partition. The goal here is to
get the machine booting from an SSD so that I can free up my five hard
drives to play with.

SHORT SUMMATION: I've tried noninitrd and noraid in the kernel line of
grub.conf but I keep booting from old RAID instead of the new SSD.
What am I doing wrong?

What I've done so far:

1) I've removed everything relatively non-essential from the HDD-based
RAID6. It's still a lot of data (40GB), but my Windows VMs have been
moved to an external USB drive, as has all the video content (on a
second USB drive), so the remaining size is pretty manageable.

2) In looking around for ways to get / copied to the SSD I ran across
this Arch Linux page called "Full System Backup with rsync":

https://wiki.archlinux.org/index.php/Full_System_Backup_with_rsync

Basically it boiled down to just a straightforward rsync command, but
what I liked about the description was that it can be done on a live
system. The command on the page is

rsync -aAXv /* /path/to/backup/folder --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}

which I have modified to

rsync -avx /* /path/to/backup/folder --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}

because (AFAICT) I don't use any of the ACL stuff, and with those
options the command simply wouldn't do anything.

I ran this command the first time while in KDE, getting 98% of
everything copied, but before moving forward I exited KDE, stopped X,
and ran it as root from the console. The second run didn't pick up any
new file changes, so I suspect the result is pretty close to what I'd
get from a Live CD boot. (COMMENTS?)

3) I added new boot options in grub.conf:

title fastVM 3.8.13-gentoo using LABEL (SSD, initramfs in kernel)
root (hd5,0)
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=LABEL=fastVM
video=vesafb vga=0x307

title fastVM 3.8.13-gentoo using LABEL (SSD, initramfs in kernel)
root (hd5,0)
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=/dev/sda1 video=vesafb vga=0x307

I am relatively confident that (hd5,0) is the SSD. I have 6 drives in
the system - the 5 HDDs and the SSD. The 5 hard drives all have
multiple partitions which is what grub tells me using tab completion
for the line

root(hdX,

Additionally, the SSD has a single partition: tab completion on
root (hd5 finishes with root (hd5,0). I used /dev/sda as that's how
it's identified when I boot using RAID.
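Rather than relying on tab completion alone, grub1's idea of which BIOS drive is which is recorded in its device map, which can be checked directly (the path is the standard grub1 location; the mapping shown below is only an illustration, not this machine's actual output):

```
c2RAID6 ~ # cat /boot/grub/device.map
(hd0)   /dev/sdb
(hd5)   /dev/sda
```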

Now, the kernel has the initrd built into it, so if it cannot be
turned off I guess I'll try building a new kernel without it. However,
I found a few web pages that said RAID could be disabled using a
'noraid' option, which I thought should stop the system from finding
the existing RAID6, but no luck.
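For reference, a single stanza combining the two ideas, the SSD root device plus the kernel's documented noinitrd parameter, might look like this (hypothetical and untested; names and paths copied from the entries above, and whether noinitrd even applies to an initramfs built into the kernel image is exactly the open question here):

```
title fastVM 3.8.13-gentoo (SSD, skip built-in initramfs)
root (hd5,0)
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=/dev/sda1 noinitrd video=vesafb vga=0x307
```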

Does anyone here have any ideas? fdisk info follows at the end. Ask
for anything else you want to see.

If I can get to booting off the SSD then for the next few days I
could build different RAIDs and do some performance testing.

Thanks,
Mark



c2RAID6 ~ # fdisk -l

Disk /dev/sda: 128.0 GB, 128035676160 bytes, 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfd2e963c

Device Boot Start End Blocks Id System
/dev/sda1 2048 250069679 125033816 83 Linux

Disk /dev/sdb: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8b45be24

Device Boot Start End Blocks Id System
/dev/sdb1 * 63 112454 56196 83 Linux
/dev/sdb2 112455 8514449 4200997+ 82 Linux swap / Solaris
/dev/sdb3 8594775 976773167 484089196+ fd Linux raid autodetect

Disk /dev/sdc: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x703d11ba

Device Boot Start End Blocks Id System
/dev/sdc1 * 63 112454 56196 83 Linux
/dev/sdc2 112455 8514449 4200997+ 82 Linux swap / Solaris
/dev/sdc3 8594775 976773167 484089196+ fd Linux raid autodetect

Disk /dev/sde: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xce92d9ff

Device Boot Start End Blocks Id System
/dev/sde1 2048 8594774 4296363+ 83 Linux
/dev/sde3 8594775 976773167 484089196+ fd Linux raid autodetect

Disk /dev/sdf: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x21141305

Device Boot Start End Blocks Id System
/dev/sdf1 2048 8594774 4296363+ 83 Linux
/dev/sdf3 8595456 976773167 484088856 fd Linux raid autodetect

Disk /dev/sdd: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfb3ad342

Device Boot Start End Blocks Id System
/dev/sdd1 * 63 112454 56196 83 Linux
/dev/sdd2 112455 8514449 4200997+ 82 Linux swap / Solaris
/dev/sdd3 8594775 976773167 484089196+ fd Linux raid autodetect

Disk /dev/md3: 1487.1 GB, 1487118827520 bytes, 2904528960 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 49152 bytes

c2RAID6 ~ #
Re: Can initrd and/or RAID be disabled at boot?
Mark Knecht mused, then expounded:
> Hi,
> This is related to my thread from a few days ago about the
> disappointing speed of my RAID6 root partition. The goal here is to
> get the machine booting from an SSD so that I can free up my five hard
> drives to play with.
>
> SHORT SUMMATION: I've tried noninitrd and noraid in the kernel line of
> grub.conf but I keep booting from old RAID instead of the new SSD.
> What am I doing wrong?
>

Can the boot order be changed in the bios?

Was grub-install run on the SSD and saved to the SSD's MBR?

If possible, can the SATA cables be moved such that the SSD is drive 0?
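On the grub-install point: with grub1, the boot code can also be put on the SSD's MBR from the grub shell, remapping the SSD to (hd0) for the install. A sketch, assuming the SSD is /dev/sda with the boot files on its first partition:

```
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
```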

And, it might be useful to mount by-id or by-uuid -

rsanders@conejo ~ $ ls /dev/disk/by-id/
ata-OCZ_VERTEX-TURBO_408M1550705G08FCFEQB ata-TSSTcorp_CDDVDW_SH-S223L_Q9896GAZ20018200
ata-OCZ_VERTEX-TURBO_408M1550705G08FCFEQB-part1 md-uuid-3899baf6:daee0dc9:cb201669:f728008a
ata-OCZ_VERTEX-TURBO_408M1550705G08FCFEQB-part2 md-uuid-dfeb347b:7e3b4c57:cb201669:f728008a
ata-OCZ_VERTEX-TURBO_408M1550705G08FCFEQB-part3 wwn-0x50000f00080a0008
ata-SAMSUNG_HE502IJ_S1MTJ1CQA00080 wwn-0x50000f00080a0008-part1
ata-SAMSUNG_HE502IJ_S1MTJ1CQA00080-part1 wwn-0x50000f00080a0008-part2
ata-SAMSUNG_HE502IJ_S1MTJ1CQA00080-part2 wwn-0x50000f00080a0018
ata-SAMSUNG_HE502IJ_S1MTJ1CQA00081 wwn-0x50000f00080a0018-part1
ata-SAMSUNG_HE502IJ_S1MTJ1CQA00081-part1 wwn-0x50000f00080a0018-part2
ata-SAMSUNG_HE502IJ_S1MTJ1CQA00081-part2
rsanders@conejo ~ $ ls /dev/disk/by-uuid/
1011bd2c-14bb-485e-8416-14f82835c4f6 7540a6d2-426d-4272-83f0-34ab7d1ffc83 a5ea4eb8-4797-482f-af80-a60f20a62915
2974c334-cffd-41d6-94c9-23a4d24980be a26b8632-dabf-450d-806f-330b71b91aeb


/etc/fstab would look like -

/dev/disk/by-id/ata-OCZ_VERTEX-TURBO_408M1550705G08FCFEQB-part1 /boot ext2 noauto,noatime 1 2
/dev/disk/by-id/ata-OCZ_VERTEX-TURBO_408M1550705G08FCFEQB-part3 / xfs noatime 0 1

And grub.conf would look something like this SLES example -

title SUSE Linux Enterprise Server 11 SP2 - 3.0.80-0.5
root (hd0,1)
kernel /boot/vmlinuz-3.0.80-0.5-default root=/dev/disk/by-id/ata-ST3500514NS_9WJ0KSKX-part2 resume=/dev/disk/by-id/ata-ST3500514NS_9WJ0KSKX-part1 splash=silent crashkernel=256M-:128M showopts vga=0x31d
initrd /boot/initrd-3.0.80-0.5-default


Re: Can initrd and/or RAID be disabled at boot?
On Wed, Jun 26, 2013 at 3:53 PM, Bob Sanders <rsanders@sgi.com> wrote:
> Mark Knecht, mused, then expounded:
>> Hi,
>> This is related to my thread from a few days ago about the
>> disappointing speed of my RAID6 root partition. The goal here is to
>> get the machine booting from an SSD so that I can free up my five hard
>> drives to play with.
>>
>> SHORT SUMMATION: I've tried noninitrd and noraid in the kernel line of
>> grub.conf but I keep booting from old RAID instead of the new SSD.
>> What am I doing wrong?
>>
>
> Can the boot order be changed in the bios?
>

Not sure. Will investigate.

> Was grub-install run on the SSD and saved to the SSD's MBR?
>

Not yet but planned.

> If, possible, can the SATA cables be moved such that the SSD is drive 0?
>

Would prefer not to. I want to keep the current boot as default as I
need to work during the day. Once I get the SSD booting I might be
open to rearranging things, but at that point I assume I won't have to.

> And, it might be useful to mount by-id or by-uuid -
>

Will keep in mind.

After writing the first note it dawned on me that if the live system
copy had worked as per the Arch Linux doc then I should have been able
to chroot into the SSD. Unfortunately it didn't work so I blew away
the SSD and started over from scratch with a clean install using the
Gentoo AMD64 Install Guide. I haven't tried chrooting into it yet.
Maybe this evening.
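For the record, the chroot check would follow the usual Gentoo handbook sequence, roughly like this (the /mnt/gentoo mountpoint is assumed, as is the SSD still being /dev/sda1):

```
c2RAID6 ~ # mount /dev/sda1 /mnt/gentoo
c2RAID6 ~ # mount -t proc proc /mnt/gentoo/proc
c2RAID6 ~ # mount --rbind /dev /mnt/gentoo/dev
c2RAID6 ~ # mount --rbind /sys /mnt/gentoo/sys
c2RAID6 ~ # chroot /mnt/gentoo /bin/bash
```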

Thanks for the ideas.

- Mark
Re: Can initrd and/or RAID be disabled at boot?
Mark Knecht posted on Tue, 25 Jun 2013 15:51:14 -0700 as excerpted:

> This is related to my thread from a few days ago about the
> disappointing speed of my RAID6 root partition. The goal here is to get
> the machine booting from an SSD so that I can free up my five hard
> drives to play with.

FWIW, this post covers a lot of ground, too much I think to really cover
in one post. Which is why I've delayed replying until now. I expect
I'll punt on some topics this first time thru, but we'll see how it
goes...

> SHORT SUMMATION: I've tried noninitrd and noraid in the kernel line of
> grub.conf but I keep booting from old RAID instead of the new SSD.
> What am I doing wrong?
>
> What I've done so far:
>
> 1) I've removed everything relatively non-essential from the HDD-based
> RAID6. It's still a lot of data (40GB) but my Windows VMs are moved to
> an external USB drive as is all the video content which is on a second
> USB drive so the remaining size is pretty manageable.

OK...

> 2) In looking around for ways to get / copied to the SDD I ran across
> this Arch Linux page called "Full System Backup with rsync":
>
> https://wiki.archlinux.org/index.php/Full_System_Backup_with_rsync

> Basically it boiled down to just a straightforward rsync command, but
> what I liked about the description was that it can be done on a live
> system. The command on the page is
>
> rsync -aAXv /* /path/to/backup/folder
> --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}
>
> which I have modified to
>
> rsync -avx /* /path/to/backup/folder
> --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/lost+found}
>
> because I don't use (AFAICT) any of the ACL stuff and the command simply
> wouldn't do anything.

For ACL you're probably correct. But you might be using xattrs
(extended attributes). Do you have any of the following USE flags
turned on: caps, xattr, filecaps? (Without going into the specific
distinction between those USE flags, particularly caps and filecaps.)
What filesystem do you use on / (and /usr if separate), and if
appropriate, are the extended-attribute and security-label kernel
options enabled for it?

For example, here I have ext4, reiserfs and btrfs enabled and use or have
used them on my various root filesystems, as well as tmpfs with the
appropriate options since I have PORTAGE_TMPDIR pointed at tmpfs (and
also devtmpfs needs some of the options):

zgrep 'REISER\|EXT4\|TMPFS\|BTRFS' /proc/config.gz

CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT23=y
# CONFIG_EXT4_FS_POSIX_ACL is not set
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_REISERFS_FS=y
# CONFIG_REISERFS_CHECK is not set
# CONFIG_REISERFS_PROC_INFO is not set
CONFIG_REISERFS_FS_XATTR=y
# CONFIG_REISERFS_FS_POSIX_ACL is not set
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_BTRFS_FS=y
# CONFIG_BTRFS_FS_POSIX_ACL is not set
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y

tmpfs only has ACL on for devtmpfs (and I'm not sure I need that, but to
avoid both security issues and broken device functionality...). The
others don't have that on, but where appropriate, they have XATTR on, as
well as FS_SECURITY. (Again, this is really only surface coverage,
here. TBH I don't fully understand the depths myself, certainly not well
enough to be comfortable discussing it in depth, tho I'm reasonably sure
I have the options I want enabled, here.)

The deal is that file capabilities in one form or another can be used
to avoid having to make various executables SETUID root when they would
otherwise need it, which is a good thing since it reduces the security
vulnerability window that SETUID root otherwise opens, often necessarily.

And these file capabilities are implemented using xattrs. So if your
system is set up to use them, a good thing from a security perspective
but somewhat complicated by the kernel config requirements in addition
to the USE flags, you'll probably want to use the -X option, tho you
should still be safe without -A (tho it shouldn't hurt).

However, the penalty for NOT using -X, provided you're not using xattrs
for anything else, should simply be that you'll need to become root to
run some commands that would otherwise be runnable without root (with the
corresponding open security window, should it be possible for a cracker
to get those commands running as root to do something unintended). So
the potential cost of getting it wrong is actually quite limited, unless
you happen to be the target of a cracker with both good timing and a
reasonable skill level, as well.

And of course if you have only part of the pieces above enabled, say the
appropriate filesystem options in the kernel but not the USE flags, or
the reverse, then you're not covered and the rsync options won't matter
in any case.

But the -AX options shouldn't do any harm in any case, so here I'd have
just left them on, making it -avxAX.
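A quick way to see whether anything on a given system actually carries file capabilities (and hence xattrs worth preserving) is getcap -r from libcap, if it's installed; the sample output line here is hypothetical:

```
# getcap -r / 2>/dev/null
/bin/ping = cap_net_raw+ep
```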


Meanwhile, while I always see people worried about copying a live
filesystem around, I've never had a problem here simply doing a
cp --archive, or the equivalent in mc (midnight commander, ncurses-based
commander-style dual-pane file manager).

What I do for root is use a root-bind script:

#!/bin/bash
# Dispatch on the name the script was invoked by (rootbind, or the
# rootbindu symlink), so one file handles both the mount and the umount.
me=${0##*/}

case $me in
rootbind)  mount --bind / /mnt/rootbind;;
rootbindu) umount /mnt/rootbind;;
*) echo "rootbind: bad call"; exit 1;;
esac

(That allows the script to be called rootbind, with a symlink to it
called rootbindu, that does the corresponding umount.)

What a bind-mount does is mount an already mounted filesystem at a
different mountpoint. In particular, it does NOT do recursive mounts
(tho there's another mount option that copies the full mount tree, it's
just not what I want here), so what I'm using it for here is to get a
"clean" copy of the rootfs, WITHOUT other filesystems such as /dev and
/home mounted on top.

Then I can do a nice clean cp --archive of my rootfs to a (normally
freshly formatted, so cp and rsync would accomplish the same thing)
backup root, and that's what I've used for backup, for years.
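Put together, the sequence described above amounts to the following sketch (the mountpoints are examples; /mnt/backup stands in for wherever the freshly formatted backup root is mounted):

```shell
mount --bind / /mnt/rootbind     # rootfs only; /dev, /home etc. not on top
cp --archive /mnt/rootbind/. /mnt/backup/
umount /mnt/rootbind
```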

And I test those backups too, and occasionally reboot to the backup and
do a clean mkfs and copy back from the backup to the normally working
copy too, just to take care of fragmentation and any possibility of
unresolved filesystem damage or bitrot that might have set in, as well as
ensuring that I can switch to the backups for operational use by actually
doing so. So I know the technique works for me.

Now if I was running some active database that kept updating as I did
my copy, yes, that would be a problem, and I'd want to take a snapshot
or otherwise "freeze" the live filesystem in order to get a good
self-consistent copy. But, for root anyway, unless I'm trying to do
an emerge --update in the background or something at the same time (and
why would I, both the copy and the update could be trying to access the
filesystem at once, slowing both down, and it needlessly complicates
things, so there's no purpose to doing so), a simple cp --archive of the
live filesystem, from the bind-mount so I get JUST the root filesystem,
no more no less, is sufficient.

For /home, there's a /bit/ more concern, say with the firefox sqlite
databases, if I'm browsing at the same time I'm trying to do the backup.
However, that's simple enough to avoid. Just don't do anything that's
going to be actively changing the filesystem at the same time I'm trying
to make an accurate backup of it.

Of course with your VMs it's a bit of a different story, rather like
the active database case. A snapshotting filesystem (like btrfs) or
sub-filesystem block-device layer (like lvm2) can be used here, taking
the snapshot and copying it while activity continues on the live
filesystem, or, likely a simpler solution where it's possible, just do
the copy when the database/VMs aren't active and in use.

But unless your VMs/databases are files on your rootfs, that shouldn't
be a problem for the rootfs backup in any case. And if they are, and
you can't shut down the VMs/databases long enough to do a backup, I'd
personally question the strategy that put them on rootfs to begin with.
But whatever; THAT is when you'd need to worry about taking an accurate
copy of the live rootfs. Ideally that's not a case you need to worry
about, and indeed, from what I've read it's not a problem in your case
at all. =:^)

> I ran this command the first time to get 98% of everything copied while
> in KDE, but before I moved forward I exited KDE, stopped X and ran it as
> root from the console. After the second run it didn't pick up any new
> file changes so I suspect it's pretty close to what I'd get dealing with
> a Live CD boot. (COMMENTS?)

As the above commentary should indicate, if anything I think you're being
overly cautious. In the vast majority of cases, a simple cp --archive,
or your equivalent rsync, should be fine. The caveat would be if you
were trying to back up the VMs while they were in operation, but you've
taken care of that separately, so (with the possible caveat about file
capabilities and xattrs) I believe you're good to go.

> 3) I added new boot options in grub.conf:

(Long) Note in passing: You should probably look into upgrading to
grub2 at some point. Now may not be a good time, as you've got a lot on
your plate as it is, but at some point. While there's a bit of a
learning curve to getting up and running on grub2, it's a lot more
flexible than grub1. It allows far more troubleshooting if you're not
getting the boot you expect, directly supports all sorts of fancy stuff
like mdadm, lvm2, btrfs, zfs, etc, and has an advanced command-line
shell much like sh/bash itself, so it's possible to browse your whole
filesystem directly from inside grub, making troubleshooting **MUCH**
easier. Plus its scripting (including if/then conditionals and variable
handling much like bash) and menu system make all sorts of advanced
boot configs possible.

And while I'm at it, I'd strongly recommend switching to gpt
partitioning from the old mbr-style partitions, either before switching
to grub2 or at the same time. GPT is more reliable (a checksummed
partition table with two copies, one at the beginning and one at the
end of the device, unlike mbr's single unchecked copy that, if it goes
bad...) and less complicated (no primary/secondary/logical partition
distinction, and up to 128 partitions handled by default, with the
possibility of even more if you set up a larger gpt). Plus, it allows
partition labels much like the filesystem labels people already use,
only on the partitions themselves, so they don't change with the
filesystem. That in itself makes things much easier, since with labels
it's much easier to keep track of what each partition is actually for.

The reason I recommend switching to gpt before switching to grub2, is
that gpt has a special BIOS-reserved partition type, that grub2 can make
use of to store its core (like grub1's stage-1.5 and 2), making the grub2
administration and updates less problematic than they might be
otherwise. (This of course assumes bios, not efi, but grub2 works with
efi as well, I'm just not familiar with its efi operation, and besides,
efi folks are likely to already be running grub2 or something else,
instead of legacy grub1, so it's likely a reasonably safe assumption.)

Actually, when I switched to gpt here, while still on grub1, I was
forward thinking enough to setup both a bios-reserved partition, and an
efi-reserved partition, even tho neither one was used at the time. They
were small (a couple MB for the BIOS partition, just under 128 MB for the
efi partition, so they both fit in 128 MB, an eighth of a gig). Then I
upgraded to grub2 and it found and used the gpt bios partition without
issue, instead of having to worry about fitting it in slack space before
the first partition or whatever. The efi-reserved partition is still
unused, but it's there in case I upgrade to efi on this machine (I doubt
I will as I have no reason to), or decide to fit the existing disk into a
new machine at some point, without full repartitioning.

(FWIW, I use gptfdisk, aka gdisk, as my gpt-partitioner analogous to
fdisk. However, gparted has supported gpt for a while, and even standard
fdisk, from the util-linux package, has (still experimental) gpt support
now. Tho the cfdisk variant (also from util-linux) doesn't have gpt
support yet, but cgdisk, from the gptfdisk package, does, and that's the
executable from the gptfdisk package I tend to use here. (I use gdisk -l
to spit out the partition list on the commandline, similar to cat-ing a
file. That's about it. I use cgdisk for actual gpt partition table
editing.))

It's just that reading your post, I'm translating to grub2 in my head,
and thinking how much simpler grub2 makes troubleshooting, when you can
effectively browse all hard drives in read-only mode directly from grub,
not only browsing around to know for sure that a particular partition is
the one you need, but paging thru various files in the kernel
Documentation dir, for instance, to get options to plug in on the kernel
commandline in grub, etc. It really does make troubleshooting early boot
problems MUCH easier, because grub2 simply gives you far more to work
with in terms of troubleshooting tools available to use at the grub
prompt.

The one caveat for gpt is for people multi-booting to other than Linux.
From what I've read, MS does support GPT, but with substantially less
flexibility (especially for XP, 7 is better) than Linux. I think it can
install to either, but switching from one to the other without
reinstalling is problematic, or something like that, whereas with Linux
it's simply ensuring the appropriate support is configured into (or
available as modules if you're running an initr*) the kernel. (I have
little idea how Apple or the BSDs work with GPT.)

But since AFAIK all your MS use is in VMs, that shouldn't be a
problem for you, so gpt should be fine.

And of course grub2 should be fine as well, gpt or not, but based on my
experience, gpt makes the grub2 upgrade far easier, at least as long as
there's a bios-reserved partition setup in gpt already, as there was here
when I did my grub2 upgrade, since I'd already done the gpt upgrade
previously.

But as I said, now may not be the best time to think about that as you
have enough on your plate ATM. Maybe something for later, tho... Or
maybe consider doing gpt now, since you're repartitioning now, and grub2
later...

(grub1 menu entries:)

> title fastVM 3.8.13-gentoo using LABEL (SSD, initramfs in kernel)
> root (hd5,0)
> kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=LABEL=fastVM video=vesafb
> vga=0x307title
>
> fastVM 3.8.13-gentoo using LABEL (SSD, initramfs in kernel)
> root (hd5,0)
> kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=/dev/sda1 video=vesafb
> vga=0x307

I'll assume that "vga=0x307title" is a typo, and that "title" starts the
second menu entry...

... Making the difference between the two entries the root=LABEL=fastVM,
vs root=/dev/sda1

> I am relatively confident that (hd5,0) is the SSD. I have 6 drives in
> the system - the 5 HDDs and the SSD. The 5 hard drives all have multiple
> partitions which is what grub tells me using tab completion for the line
>
> root(hdX,
>
> Additionally the SDD has a single partition to tab completion on
> root(hd5 finishes with root(hd5,0). I used /dev/sda as that's how it's
> identified when I boot using RAID.

This is actually what triggered the long grub2 note above. "Relatively
confident", vs. knowing, because with grub2's mdadm support, you can
(read-only) browse all the filesystems in the raid, etc (lvm2, etc, if
you're using that...), as well. So you know what's what, because you can
actually browse it, direct from the grub2 boot prompt.

However, while my grub1 knowledge is getting a bit rusty now, I think
you're mixing up grub's root(hdX,Y) notation, which can be thought of as
sort of like a cd in bash, simply changing the location you're starting
from if you don't type in the full path, with the kernel's root=
commandline option.

Once the kernel loads (from hd0,0 in both entries), its root= line may
have an entirely DIFFERENT device ordering, depending on the order in
which it loaded its (sata chipset, etc) drivers and the order the devices
came back in the device probes it did as it loaded them.

That's actually why kernel devs and udev folks, plus many distros, tend
to recommend the LABEL= (or alternatively UUID=) form for the kernel's
root= commandline option these days, instead of the old /dev/sdX style:
in theory at least, the numbering of /dev/sdX devices can change
arbitrarily. In practice, on most home systems with a consistent set of
devices appearing at boot, the order seldom changes, and it's *OFTEN*
the same as the order seen by grub, but that doesn't HAVE to be the case.
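For ext* filesystems, the label that root=LABEL= matches against is set
and read with e2label. The sketch below demonstrates it on a scratch
filesystem image so nothing real is touched; on the actual system you'd
point e2label at the SSD partition (e.g. /dev/sda1) instead, and the
fastVM label is just the one from this thread:

```shell
# Make a small scratch file and put an ext2 filesystem in it (no root needed):
dd if=/dev/zero of=/tmp/fakefs.img bs=1M count=4 2>/dev/null
mke2fs -F -q /tmp/fakefs.img

# Set the label that a root=LABEL=fastVM kernel parameter would match on:
e2label /tmp/fakefs.img fastVM

# Read it back to verify:
label=$(e2label /tmp/fakefs.img)
echo "label: $label"
```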

Of course the monkey wrench in all this is that, as far as I'm aware
anyway, the LABEL= and UUID= variants of the root= kernel commandline
option *REQUIRE* an initr* with working udev or similar (I'm not sure
whether busybox's mdev supports LABEL=/UUID= or not). That might well
be a given on binary-based distros that handle devices using kernel
modules instead of custom built-in kernel device support, and thus
require an initr* to handle the module loading anyway. But it's
definitely *NOT* a given on a distro like gentoo, which strongly
encourages building from source, and where many, perhaps most, users
run a custom-built kernel with the drivers necessary to boot built in,
and thus may well not require an initr* at all. For initr*-less boots,
AFAIK root=/dev/* is the only usable alternative, because the
/dev/disk/by-*/ subdirs that LABEL= and UUID= depend on are created by
udev in userspace, and those won't be available for the rootfs mount in
an initr*-less boot.
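Concretely, for the grub.conf entries quoted above, the two styles would
look something like this (a sketch only; device names, label, and kernel
image are the ones from this thread, and which form works depends on
whether the kernel has an initr* with udev, as just described):

```
# Works without any initr*: the kernel resolves /dev/sda1 itself.
title fastVM 3.8.13-gentoo (root by device node, no initr* needed)
root (hd5,0)
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=/dev/sda1

# Needs an initr* with working udev (or similar) to resolve the label.
title fastVM 3.8.13-gentoo (root by label, initr* required)
root (hd5,0)
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=LABEL=fastVM
```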

> Now, the kernel has the initrd built into it so if it cannot be
> turned off I guess I'll try building a new kernel without it. However I
> found a few web pages that also said RAID could be disabled using a
> 'noraid' option which I thought should stop the system from finding the
> existing RAID6 but no luck.

FWIW, the best reference for kernel commandline options is the kernel
documentation itself. Sometimes you need more, but that's always the
place to look first.

Specifically, $KERNELDIR/Documentation/kernel-parameters.txt , for the
big list in one place, with additional documentation often provided in
the various individual files documenting specific features.

kernel-parameters.txt lists noinitrd:

noinitrd [RAM] Tells the kernel not to load any configured
initial RAM disk.

So that should work. It doesn't say anything about not working with a
built-in initramfs either, so if it doesn't, that's arguably a bug:
either the documentation should mention the limitation, or the option
should work.
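So, assuming kernel-parameters.txt is accurate, adding it to the entry
quoted earlier would look like this (sketch only, same caveat about the
built-in initramfs; device and kernel names are from this thread):

```
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=/dev/sda1 noinitrd
```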

FWIW, depending on what initramfs creation script you're using and its
content, you should be able to tell whether the initramfs activated or
not.

Here, I /just/ /recently/ started using dracut, since it seems multi-
device btrfs as root doesn't work reliably otherwise, and that's what I'm
using as my rootfs now (btrfs raid1 mode on dual SSDs, I could only get
it to mount the dual-device btrfs raid1 in degraded mode, seeing only one
of the two devices, without the btrfs device scan in the initramfs, tho a
google says some people have it working, <shrug>).

But even booting to my more traditional reiserfs rootfs backups on the
"spinning rust", where booting from the initramfs isn't mandatory, I can
tell whether the initramfs was loaded or not by the boot-time console
output. Among other things, if the initramfs is loaded and run, then
/proc and /run are already mounted by the time the openrc service that
would normally mount them gets run, because the initramfs mounted them.
And apparently the initramfs mounts at least /run with different
permissions than openrc wants, so when openrc runs after the initramfs
it mentions that it's changing permissions on /run; when the initramfs
hasn't run, openrc simply mounts /run with the permissions it wants in
the first place.

But unfortunately, I've not actually tried the noinitrd kernel commandline
option, so I can't VERIFY that it works here, with my now builtin
initramfs. I'll have to reboot to try that, and will try to get back to
you on that. (Note to self. Test the root=LABEL with initramfs-less
boot too, while I'm at it.)

If you're using a dracut-created initr*, then there's several other
helpful kernel commandline options that it hooks. See the
dracut.cmdline manpage for the full list, but rd.break and its
rd.break=<brkpoint> variants allow dropping to the initr*'s builtin shell
(AFAIK dash by default for dracut, but bash is an option... which I've
enabled) at various points, say right before or right after the initr*'s
udev runs, right before mounting the real rootfs, or right before the
final switchroot and start of the init on the real rootfs. If you're
using some other initr* creator, obviously you'd check its documentation
for similar options.
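For reference, the rd.break variants described above look like this on
the kernel commandline (breakpoint names per the dracut.cmdline
manpage; this is a sketch, not an exhaustive list):

```
rd.break              # drop to the initr* shell at the end, before switch_root
rd.break=pre-udev     # ...before the initr*'s udev starts
rd.break=pre-mount    # ...before mounting the real rootfs
rd.break=pre-pivot    # ...before the final switch to the real rootfs
```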

I know rd.break works here, as I tested it while I was figuring out how
to work this new-to-me initramfs thing. And it's obvious that I'm in the
initr*'s bash, because its shell prompt isn't anything like my customized
shell prompt.

Meanwhile, I DO NOT see "noraid" listed in kernel-parameters.txt, altho
that doesn't mean it doesn't (or didn't at some point) exist. I DO see a
couple raid-options, md= and raid=, however, with references to
Documentation/md.txt.

Based on the md.txt file, it appears raid=noautodetect is the option
you're looking for. This also matches my now slightly rusty
recollection from when I ran mdraid before: noraid didn't look quite
right, but raid=noautodetect looks much closer to what I remember.
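A quick way to confirm which options actually took effect on a given
boot is to look at /proc/cmdline. The check below runs against a sample
string (with the label and options from this thread) so it's
self-contained; on a live system you'd substitute $(cat /proc/cmdline):

```shell
# Sample commandline; on a real system use: cmdline=$(cat /proc/cmdline)
cmdline="root=LABEL=fastVM raid=noautodetect video=vesafb vga=0x307"

# Pad with spaces so the match only hits the whole-word option:
case " $cmdline " in
  *" raid=noautodetect "*) autodetect=off ;;
  *)                       autodetect=on  ;;
esac
echo "md raid autodetect: $autodetect"
```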

(If you're using dracut-based initr*, there's a similar option for it,
rd.auto, rd.auto=(0|1), that defaults off with current versions of dracut,
according to the dracut.cmdline manpage. That governs autoassembly of
raid, lvm, etc. But since it already defaults off, unless you're running
an old version where that defaulted on, or have it as part of your
builtin commandline as configured in your kernel or something, that
shouldn't be your problem.)

> Does anyone here have any ideas? fdisk info follows at the end.
> Ask for anything else you want to see.
>
> If I can get to booting off the SSD then for the next few days I
> could build different RAIDs and do some performance testing.

Hmm... This didn't turn out to be so hard to reply to after all. Maybe
because I kept my initr* remarks to dracut-based, which is all I know
anyway...

Some other remarks...

FWIW, if you're running an md-raid rootfs, at least with gpt and a
dedicated bios partition, installing grub2 is easier than installing or
updating grub1, as well. I remember the pain it was to install grub1 to
each of the four drives composing my raid, back when I had that setup.
In particular, it was painful trying to be sure I was installing to the
physical drive I /thought/ I was installing to, while at the same time
ensuring it was actually pointed at the /boot on the same drive, not at
the /boot on a different drive, so that if that drive was the only one
I had left, I could still boot from it. The problem was that because I
was on mdraid, grub1 was detecting that, and I had to specify the
physical device one way to tell it where to install stage1, and a
different way to tell it where to put stage2 in /boot.

With grub2, things were so much easier that I had trouble believing I'd
actually installed it already. But I rebooted and it worked just fine,
so I had. Same thing when I switched to the pair of ssds with btrfs in
raid1 mode as my rootfs. I installed to the first one... and thought
surely there was another step I had missed, but there it was. After
reboot to test, I installed to the second one, and rebooted to it (using
the boot selector in the BIOS) to test. All fine. =:^)

Of course part of that, again, is due to using gpt with the reserved bios
partition for grub to put its stage2 core in, quite apart from what it
puts in /boot. I suppose I'd have had a similar problem as I did with
grub1, if I was still using mbr or didn't have a reserved bios partition
in my gpt layout, and grub had to stick the stage2 core either in slack
space before the first partition (if there was room to do so), or in
/boot itself, and hope the filesystem didn't move things around afterward
(which reiserfs did do a couple of times to me back with grub1, tho it
wasn't usually a problem).

I glossed over what to do with non-dracut-based initr*, as I've not used
anything other than dracut and direct no-initr*, and dracut's only very
recently. However, I'd be quite surprised if others didn't have
something similar to dracut's rd.break options, etc, and you certainly
should be able to tell whether the initr* is running or not, based on the
early-boot console output. Of course, whether you're /familiar/ enough
with that output or not to tell what's initr* and what's not, is an
entirely different question, but if you know well what one looks like,
the other should be enough different that you can tell, if you look
closely.

From the initrd, it should be possible to mount something else other than
the old raid as rootfs, and by that time, you'll have the kernel
populated /dev tree to work with as well as possibly the udev populated
disk/by-* trees, so finding the right one to mount shouldn't be an issue
-- no worries about kernel device order not matching grub device order,
because you're past grub and using the kernel already, by that point.
That was definitely one of the things I tested on my dracut-based initr*,
that from within the initr* (using for instance rd.break=pre-mount to
drop to a shell before the mount), I could find and mount backup root
filesystems, should it be necessary.

From within the initrd, you should be able to mount using label, uuid or
device, any of the three, provided of course that udev has populated the
disk/by-label and by-uuid trees, and I could certainly mount with either
label or device (I didn't try uuid), using my dracut-based initramfs,
here.
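As a concrete illustration of what that udev-populated tree gives you,
the sketch below simulates /dev/disk/by-label with a scratch directory
(the fastVM label and sda1 device are the ones from this thread; nothing
here touches real devices or actually mounts anything):

```shell
# Simulate the udev-populated by-label tree in a scratch directory;
# in a real initr* shell this would be /dev/disk/by-label.
bylabel=/tmp/by-label-demo
mkdir -p "$bylabel"
ln -sf ../../sda1 "$bylabel/fastVM"    # udev links label -> device node

# Resolve the label back to a device path, as mount-by-label would:
dev=$(readlink -f "$bylabel/fastVM")
echo "LABEL=fastVM -> $dev"
# In the initr* one could then run: mount "$dev" /sysroot   (not run here)
```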

So really, you shouldn't need the noinitrd option for that. Tho as long
as your selected rootfs doesn't /require/ an initr* to boot (as my multi-
device btrfs rootfs seems to here), you should be able to boot to it with
an appropriate kernel commandline root= option, with or without the
initr*.

> c2RAID6 ~ # fdisk -l
>
> Disk /dev/sda: 128.0 GB [snip]
>
> Device Boot Start End Blocks Id System
> /dev/sda1 2048 250069679 125033816 83 Linux
>
> Disk /dev/sdb: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 * 63 112454 56196 83 Linux
> /dev/sdb2 112455 8514449 4200997+ 82 Linux swap
> /dev/sdb3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/sdc: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdc1 * 63 112454 56196 83 Linux
> /dev/sdc2 112455 8514449 4200997+ 82 Linux swap
> /dev/sdc3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/sde: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sde1 2048 8594774 4296363+ 83 Linux
> /dev/sde3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/sdf: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdf1 2048 8594774 4296363+ 83 Linux
> /dev/sdf3 8595456 976773167 484088856 fd Linux raid
>
> Disk /dev/sdd: 500.1 GB
>
> Device Boot Start End Blocks Id System
> /dev/sdd1 * 63 112454 56196 83 Linux
> /dev/sdd2 112455 8514449 4200997+ 82 Linux swap
> /dev/sdd3 8594775 976773167 484089196+ fd Linux raid
>
> Disk /dev/md3: 1487.1 GB
>
> c2RAID6 ~ #

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Re: Re: Can initrd and/or RAID be disabled at boot?
On Thu, Jun 27, 2013 at 11:53 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> Mark Knecht posted on Tue, 25 Jun 2013 15:51:14 -0700 as excerpted:
>
>> This is related to my thread from a few days ago about the
>> disappointing speed of my RAID6 root partition. The goal here is to get
>> the machine booting from an SSD so that I can free up my five hard
>> drives to play with.
>
> FWIW, this post covers a lot of ground, too much I think to really cover
> in one post. Which is why I've delayed replying until now. I expect
> I'll punt on some topics this first time thru, but we'll see how it
> goes...

Agreed, and I've made some major course changes WRT this whole thing,
but there's a lot of great info in your response so I'm going to make
a very targeted response for now.

<SNIP>
>
> zgrep 'REISER\|EXT4\|TMPFS\|BTRFS' /proc/config.gz
<SNIP>

I use ext4 mostly. Some ext3 on older external USB drives. ext2 on boot.

Looking at caps, xattr & filecaps I don't appear to have them selected
on any packages. (equery hasuse ..., emerge -pv ...)

Similar results as yours for the zgrep:

mark@c2RAID6 ~ $ zgrep 'REISER\|EXT4\|TMPFS\|BTRFS' /proc/config.gz
CONFIG_DEVTMPFS=y
# CONFIG_DEVTMPFS_MOUNT is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_BTRFS_FS is not set
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
mark@c2RAID6 ~ $

With that in mind I may well have needed the -X on the rsync. However
as I didn't get a quick response I decided this was a background issue
for me in a sense. My HDD-based, low performance RAID6 is working so
for now I'm cool. As I have some time coming up over the weekend, and
because I have this SSD which is to date unused, I decided to simply
build a new Gentoo install from scratch on the SSD in a chroot. I
haven't even bothered with trying to boot it yet. I just copied all
the RAID6 config stuff, world file, /etc/portage/*, /etc/conf.d, blah
blah and let it start building all the binaries. If it works, great.
If not no big deal. It's just compute cycles because it's on the SSD
and isn't slowing me down much inside of the RAID6 environment today.

However, I think your comments about gpt & grub2 are VERY good points
and might work out in my favor long term. I only used 2 partitions on
the SSD - one for a new boot partition and one for /, my thought being
that if I installed grub on the SSD then in BIOS I could point at
/dev/sda to boot off the SSD instead of /dev/sdb. As I think about
your comments, I could consider backing up the SSD install using rsync
-aAvx, converting to gpt & grub2 on that device, do my learning and it
doesn't have to impact my current setup at all. That can all stay on
the hard drives until I'm ready to get rid of it. It's just a flip of
a switch in BIOS as to which one I'm using.

I'll go through your response later and continue the conversation as
appropriate but I wanted to say thanks more quickly for the above
points.

Cheers,
Mark
Re: Can initrd and/or RAID be disabled at boot?
Duncan posted on Thu, 27 Jun 2013 18:53:08 +0000 as excerpted:

> But unfortunately, I've not actually tried the noinitrd kernel
> commandline option, so I can't VERIFY that it works here, with my now
> builtin initramfs. I'll have to reboot to try that, and will try to get
> back to you on that. (Note to self. Test the root=LABEL with
> initramfs-less boot too, while I'm at it.)

I couldn't get the noinitrd option to work here either, on builtin
initramfs.

Not too big a deal tho because as I think you (Mark, grandparent poster)
suggested, it's always possible to rebuild a new kernel without the
initramfs built-in. And if a kernel fails with its builtin for some
reason, there's still the previous kernels with their known working
builtins. So just as I can always boot a backup kernel when a new kernel
fails, I can always boot a backup kernel when the builtin initramfs fails.

Which of course means I didn't try the root=LABEL without an initramfs.

But one other option I DID try... rdinit= . This parameter is similar to
the init= parameter, but for the initr*. It is thus possible to, for
instance, do something like rdinit=/bin/bash (assuming that's the shell
available in your initr*), and get a direct initrd shell, instead of the
script that /bin/init usually is in the initr*. Then in that shell you
can do whatever manual thing you want, and possibly finish up with an
exec /bin/init or whatever, to run the normal rdinit script.
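For instance (a sketch only; the grub entry style, kernel image, and
label are taken from earlier in the thread, and the /bin/bash and
/bin/init paths are only valid if your initr* actually contains them):

```
kernel (hd0,0)/boot/bzImage-3.8.13-gentoo root=LABEL=fastVM rdinit=/bin/bash

# ...then, from the resulting initr* shell, after any manual work:
exec /bin/init
```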

Of course another option would be to setup multiple scripts in the initr*,
each of which could be run as the rdinit replacement, but doing different
things. It would then obviously be possible to have one of those scripts
do something else entirely, whether that be mounting a different root, or
running a memory checker, or starting a game (either in the initr* itself
or mounting a different root to do it), or...

You could then set rdinit= in the kernel commandline to select the
replacement rdinit script you wanted.

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Re: Can initrd and/or RAID be disabled at boot?
Mark Knecht posted on Thu, 27 Jun 2013 13:52:18 -0700 as excerpted:

> Looking at caps, xattr & filecaps I don't appear to have them selected
> on any packages. (equery hasuse ..., emerge -pv ...)
>
> Similar results as yours for the zgrep:

> With that in mind I may well have needed the -X on the rsync.

Since you don't have any packages with caps/xattr/filecaps set in USE,
you're probably fine in that regard -- you were correct that you didn't
need it. However, as I said, the -X won't hurt.

> However as
> I didn't get a quick response I decided this was a background issue for
> me in a sense.

Makes sense.

> However, I think your comments about gpt & grub2 are VERY good points
> and might work out in my favor long term. I only used 2 partitions on
> the SSD - one for a new boot partition and one for /, my thought being
> that if I installed grub on the SSD then in BIOS I could point at
> /dev/sda to boot off the SSD instead of /dev/sdb. As I think about your
> comments, I could consider backing up the SSD install using rsync -aAvx,
> converting to gpt & grub2 on that device, do my learning and it doesn't
> have to impact my current setup at all. That can all stay on the hard
> drives until I'm ready to get rid of it. It's just a flip of a switch in
> BIOS as to which one I'm using.

That's a good idea.

Two other comments/suggestions regarding grub1 -> grub2 conversion.

1) I found it *EXTREMELY* useful when I was learning grub2, to have two
bootable devices setup, with just a BIOS selector switch to switch
between them, keeping my normal boot setup on grub1 until I was pretty
much done experimenting with grub2 on the other one and had it setup
the way I wanted. Only after I could boot to grub2, and from it to my
normal Linux setup, did I blow away the grub1 setup that I had been
using for backup while I experimented. Of course now I have all three
of my current bootable devices (the pair of SSDs and the spinning rust
as backup) configured with separate grub2 instances, so I can boot to
grub2 from any of the three. From that grub2 I can load the kernel and
boot to any of my five root filesystems: the primary and backup btrfs
raid1 mode roots on the pair of ssds, plus the primary, first backup,
and second backup roots on the spinning rust, which I've yet to
repartition now that I have the ssds up and running, tho I intend to at
some point, but there's no hurry.

That's FAR less nerve-racking than trying to be sure that you got it
correct enough to at least boot far enough to correct any remaining
issues, on the single working bootable device available, that can be
either grub1 or grub2, but can't have both installed and working at the
same time!

Of course for those without a second installed primary boot device, it's
possible to do the same thing with a USB stick, which is what I probably
would have done if I didn't have the extra installed boot devices around
to try. But either way, leave the existing grub1 alone while you
experiment with grub2 on a different device, only blowing away grub1 once
you have grub2 configured and booting to your satisfaction.

2) There's two methods available for configuring grub2, the automated/
scripted method, and the direct grub.cfg edit method. Certainly YMMV,
but here, the automated/scripted method just didn't cut it. I did use it
for the initial automated conversion from grub1 and the initial bare-
bones grub2 functionality test, to see that it was working at all, but
after that, I quickly switched to the far more powerful direct edit
method, as all I was getting from the automated/scripted method was
frustration.

Of course most of the documentation and howtos out there are based on the
scripted method. There's certainly info available for the direct edit
method, including the grub info pages installed with the package itself
and the manual on the grub web site if you prefer that to trying to
navigate info pages, but as they're designed more for experts and distro
maintainers than ordinary users (who are expected to use the scripted
interface), the documentation does tend to be a bit vague at various
points.

Which is why I ended up doing a lot of experimentation, because I knew
that there was a command to do basically what I wanted, but finding
either nice examples or command specifics wasn't always easy, so I had to
look stuff up, then reboot to grub to try it, then boot thru grub to the
main system to be able to either look more stuff up or write out to
permanent config the bit I'd just figured out in grub by trying it.

So I don't claim that my grub2 config is the be all and end all of grub
configs, but if/when it comes time for someone to upgrade, just ask, and
I can post it, to give people some nice real-world working grub config
code examples to follow, the same ones I had such a hard time finding and
that I often had to simply try it out until I figured out the bits that
the info pages didn't really describe in what I'd call a useful way.

Similarly, if you have grub2 config command questions, post them, as it's
reasonably likely I've asked myself the same sorts of questions, and
ultimately gave up googling for the answer and just tried it until I got
it working.

Finally, my initial "final" grub2 config was powerful, but brittle and
inflexible in some aspects. What I ended up with on the second go round
(well, we'll call it config version 2.1 now, as I just today modified it
just a bit further after my initr* experiments, which revealed a weakness
with version 2.0) is far more flexible, with far less brainlessly
duplicated menu option boilerplate for a dozen different menu options
doing very small variants of what's otherwise the exact same thing.

Basically, I originally had it setup in a menu tree like this:

current kernel primary root
fallback kernel primary root
backups menu
    primary root
        current kernel
        fallback kernel
        stable kernel
    backup root
        current kernel
        fallback kernel
        stable kernel
    backup2 root
        current kernel
        fallback kernel
        stable kernel
    init=/bin/bash
        primary root
            current kernel
            fallback kernel
            stable kernel
        backup root
            current kernel
            fallback kernel
            stable kernel
        backup2 root
            current kernel
            fallback kernel
            stable kernel
    single mode
        primary root
            current kernel
            fallback kernel
            stable kernel
        backup root
            current kernel
            fallback kernel
            stable kernel
        backup2 root
            current kernel
            fallback kernel
            stable kernel
utils menu
    reboot
    halt
    boothints
        cat boothints1
        cat boothints2
        cat boothints3


Now instead of that I have something more like this, much shorter but
also much more flexible!:

current kernel primary root
backups menu
    boot
    set kernel.current
    set kernel.fallback
    set kernel.stable
    set root.ssd.primary
    set root.ssd.secondary
    set root.rust.1
    set root.rust.2
    set root.rust.3
    set opts.initbash
    set opts.single
    set opts.other
    reset opts
utils menu
    reboot
    halt
    boothints
    browse

So now the initial menu only has three items on it, current boot,
backups, and utils. There's a timeout on the default current boot, so if
I do nothing, it'll boot that without interference.

The backups menu allows me to set different options for kernel, root, and
general options, or clear the existing general options so defaults are
used. When I'm satisfied with the changes I've selected, I choose the
boot option, which echoes the kernel command line it'll execute and
prompts for confirmation, then executes if I give it.

The kernel and root options overwrite any value previously set, while the
general options don't overwrite each other, so there's a reset option for
that. Note that this way, I have far more flexibility with far less
duplicated config, just by selecting the options I want, then hitting the
boot option to actually prompt for confirmation and then boot. Before, I
couldn't select BOTH single mode AND init=/bin/bash; now I can (tho why
one might want both remains an open question, since single mode with bash
as init does nothing). And now I have five root options instead of
three, with less code duplication to do it.

For the general options, initbash simply sets init=/bin/bash, and single
simply sets s, to boot into single user mode. The "other" option
prompts for manual entry of additional options, as desired. (opts.other
is the option I just added today, thus the "config version 2.1" comment
above. Now I don't need to switch to commandline mode just to
add another option to my kernel commandline, as I can simply select that
menu option and type it in there, before selecting the boot option,
confirming that I typed it correctly with the echoed to-be-run kernel
comandline, and booting.)

On the utils menu, reboot and halt allow me to do that directly from
grub. That makes it possible to simply hit the three-finger salute (aka
ctrl-alt-del) from in linux to reboot, then select halt in grub, if I'm
lazy and would rather do that than reach up to the power button to
trigger a system shutdown that way.

I initially setup grub2 before grub2's final release, while it was still
in beta. One of the betas screwed up the pager functionality, so I had
to split my boothints (kernel commandline options I might find useful)
notes into several pages, each of which could fit a single screen. Of
course that bug was fixed in the general grub2 release, so a single
boothints option allows me to page thru a combined longer boothints file,
now.

And the browse option... uses grub2's search by label command to
automatically set some variables appropriately, so instead of manually
figuring out which (gpt0,X) value to use to get say the first backup home
partition on spinning rust, I can simply use the browse option to set all
the vars, then switch to the grub2 commandline and do something like this:

set

(shows me a list of set vars, so I can pick the one to use)

cat $hmr1/user/.bashrc

(Prints out the selected file, automatically stopping at each screenfull
if the pager var is set, as mine is by default. If that partition/volume
happens to be on raid or lvm or whatever, or some exotic filesystem like
btrfs or zfs as long as there's a grub2 module for it (as there is for
those two), or if it's a compressed file, grub has modules for some types
of compression too, a load of the appropriate grub2 modules will let the
grub search by label and cat functions work just as they would on a
partition directly on the hardware itself, so the file can still be
catted. =:^)

Like I said it's not necessarily the be-all and end-all of fancy grub2
configurations, but it should give you a much more concrete idea of just
the sort of flexibility and power that grub2 has, and the type of setup
one can configure if desired.

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Re: Can initrd and/or RAID be disabled at boot?
On Tue, Jun 25, 2013 at 5:51 PM, Mark Knecht <markknecht@gmail.com> wrote:
> This is related to my thread from a few days ago about the
> disappointing speed of my RAID6 root partition. The goal here is to
> get the machine booting from an SSD so that I can free up my five hard
> drives to play with.
>
> SHORT SUMMATION: I've tried noninitrd and noraid in the kernel line of
> grub.conf but I keep booting from old RAID instead of the new SSD.
> What am I doing wrong?

Here are some things I would try to narrow it down:

Put raid=noautodetect on your kernel commandline to prevent the kernel
from auto-assembling the array.

It sounds like you are pretty sure it is at least using the boot
sector of the new drive, so I am thinking it is possible there is some
weird combination of using a boot sector from one drive to get you
into the boot partition of another drive. If the old boot drive is
still attached, you could try moving/renaming the grub config or whole
grub folder on the old boot drive to make sure it's not getting used.

If that doesn't give any clues, I would physically unplug the cable of
every drive other than the SSD (if that is realistic based on your
filesystem layout) and see how far it gets. Maybe if it fails you can
figure out what it's trying to access on the other disks.

As far as the RAID I think there are at least a few different ways an
mdraid array comes to be assembled:
- your initramfs does it
- your kernel does it (only for a RAID with v0.90 superblock)
- init script does it (/etc/init.d/mdraid)
- udev does it (/lib64/udev/rules.d/64-md-raid.rules)
- you manually do it later on using mdadm

Viewing dmesg output from around the point where boot begins and the
RAID is assembled might give you some clues about who's doing what.

I recently upgraded my machine and disks and am using UUID and labels
for everything, I can literally boot from either the old HDD or new
HDD from my BIOS boot menu, plugging them into the motherboard in any
order, and either one will work properly, even though the /dev/sdX
assignments might change from boot to boot. You can use the blkid
command (as root) to see the labels and UUIDs for all of your drives
and partitions.
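For instance, a blkid line for the SSD's root partition might look like
the sample below (the UUID is invented for illustration; the label is
the one from this thread), and either value can then go straight into
root=. The sketch parses a sample string rather than live blkid output
so it's self-contained:

```shell
# Hypothetical blkid output line; on a real system: blkid /dev/sda1
line='/dev/sda1: LABEL="fastVM" UUID="c1b9d5a2-f162-11cf-9ece-0020afc76f16" TYPE="ext4"'

# Extract the quoted LABEL and UUID values with parameter expansion:
label=${line#*LABEL=\"}; label=${label%%\"*}
uuid=${line#*UUID=\"};   uuid=${uuid%%\"*}

echo "kernel options: root=LABEL=$label  or  root=UUID=$uuid"
```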

Good luck,
Paul
Re: Can initrd and/or RAID be disabled at boot?
On Mon, Jul 1, 2013 at 2:10 PM, Paul Hartman
<paul.hartman+gentoo@gmail.com> wrote:
> On Tue, Jun 25, 2013 at 5:51 PM, Mark Knecht <markknecht@gmail.com> wrote:
>> This is related to my thread from a few days ago about the
>> disappointing speed of my RAID6 root partition. The goal here is to
>> get the machine booting from an SSD so that I can free up my five hard
>> drives to play with.
>>
>> SHORT SUMMATION: I've tried noninitrd and noraid in the kernel line of
>> grub.conf but I keep booting from old RAID instead of the new SSD.
>> What am I doing wrong?
>
> Here are some things I would try to narrow it down:
>
> Put raid=noautodetect on your kernel commandline to prevent the kernel
> from auto-assembling the array
>
> It sounds like you are pretty sure it is at least using the boot
> sector of the new drive, so I am thinking it is possible there is some
> weird combination of using a boot sector from one drive to get you
> into the boot partition of another drive. If the old boot drive is
> still attached, you could try moving/renaming the grub config or whole
> grub folder on the old boot drive to make sure it's not getting used.
>
> If that doesn't give any clues, I would physically unplug the cable of
> every drive other than the SSD (if that is realistic based on your
> filesystem layout) and see how far it gets. Maybe if it fails you can
> figure out what it's trying to access on the other disks.
>
> As far as the RAID I think there are at least a few different ways an
> mdraid array comes to be assembled:
> - your initramfs does it
> - your kernel does it (only for a RAID with v0.90 superblock)
> - init script does it (/etc/init.d/mdraid)
> - udev does it (/lib64/udev/rules.d/64-md-raid.rules)
> - you manually do it later on using mdadm
>
> Viewing dmesg output from around the point where boot begins and the
> RAID is assembled might give you some clues about who's doing what.
>
> I recently upgraded my machine and disks and am using UUID and labels
> for everything, I can literally boot from either the old HDD or new
> HDD from my BIOS boot menu, plugging them into the motherboard in any
> order, and either one will work properly, even though the /dev/sdX
> assignments might change from boot to boot. You can use the blkid
> command (as root) to see the labels and UUIDs for all of your drives
> and partitions.
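[blkid prints one line per filesystem, and the LABEL/UUID fields can be pulled out with standard tools. A minimal sketch; the sample line and the fastVM label are illustrative, since real output is system-specific and requires root:]

```shell
# blkid (as root) emits lines shaped like this; here we use a canned
# sample instead of live output:
line='/dev/sda2: LABEL="fastVM" UUID="0f7a9c2e-0000-0000-0000-000000000000" TYPE="ext4"'

# Extract just the LABEL value with sed:
label=$(printf '%s\n' "$line" | sed -n 's/.*LABEL="\([^"]*\)".*/\1/p')
echo "$label"
# prints fastVM
```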
>
> Good luck,
> Paul
>

Hi Paul,
Thanks for the interest and sorry for the delay in my response.
I've ended up going in a slightly different direction in the process
which as of yet hasn't yielded much except more work for me. No
response is necessary to this post, although I'm always interested in
what folks are doing and thinking.

This post is nothing more than a status report.

First, the purpose of this work: my existing machine has 5 HDDs
hooked to the internal SATA controller, an unused SSD and a number of
external USB drives holding video & Windows VMs. The RAID6 is slow (in
my opinion) and I want to investigate other RAID architectures on the
machine with the goal of eventually using one (RAID1, RAID5, RAID6,
RAID10) in a better optimized and tested setup.

The first path I went down was to rsync the existing Gentoo / to the
SSD. For whatever reason that never booted correctly.

To get past the issue above I just created a new Gentoo install on
the SSD from scratch. This was easy to do in a chroot while I do my
trading work throughout the day. As per other posts I decided to
create a new boot partition on the SSD with an eye toward possibly
using grub2 later. (The SSD is sda, at least in the RAID6 environment:
/dev/sda1 is the new boot, /dev/sda2 is the new SSD root.) However, in
the beginning my intention was only to place the SSD kernel on the new
boot partition. The HDD RAID6 kernel remains on the HDD boot partition
and I continue to use grub on that boot partition to get the machine
going. For the SSD boot I just point the root & kernel commands to the
first partition on the SSD.
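[Concretely, the stanza for booting the SSD's own kernel looks something like the sketch below; the (hd0,0) mapping is an assumption, and the fact that it shifts with BIOS drive ordering is exactly the fragility described next:]

```
# grub (legacy) stanza sketch: boot the SSD kernel from the SSD's
# boot partition. (hd0,0) and /dev/sda2 depend on enumeration order.
title Gentoo on SSD
root (hd0,0)
kernel /boot/bzImage-3.8.13-gentoo root=/dev/sda2
```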

This has worked and the SSD boots reasonably well, but it is
fragile. It appears that drive ordering changes from boot to boot
depending on which USB drives are attached, so I probably need to move
to something more like your UUID methods.

Lastly, because I am still running on the original RAID6 setup I do
not want to touch the internal SATA cabling at all. It's important to
me that the currently working setup continue to work, so I'll just
have to struggle along making booting more reliable with both the
HDD & SSD setups in place.

Due to everything above I've not yet done anything with testing of
new RAIDs. Maybe later this week. Who knows?

Cheers,
Mark
Re: Can initrd and/or RAID be disabled at boot?
Mark Knecht posted on Tue, 02 Jul 2013 10:06:31 -0700 as excerpted:

> On Mon, Jul 1, 2013 at 2:10 PM, Paul Hartman
> <paul.hartman+gentoo@gmail.com> wrote:

>> I recently upgraded my machine and disks and am using UUID and labels
>> for everything, I can literally boot from either the old HDD or new HDD
>> from my BIOS boot menu, plugging them into the motherboard in any
>> order, and either one will work properly, even though the /dev/sdX
>> assignments might change from boot to boot. You can use the blkid
>> command (as root) to see the labels and UUIDs for all of your drives
>> and partitions.

> Thanks for the interest and sorry for the delay in my response.
> I've ended up going in a slightly different direction in the process
> which as of yet hasn't yielded much except more work for me. No response
> is necessary to this post, although I'm always interested in what folks
> are doing and thinking.

Sounds like me. =:^)

> For the SSD boot I just point the root & kernel commands to the first
> partition on the SSD.
>
> This has worked and the SSD boots reasonably well, but it is
> fragile. It appears that drive ordering changes from boot to boot
> depending on which USB drives are attached so I probably need to move to
> something more like your UUID methods.
>
> Lastly, because I am still running on the original RAID6 setup I do
> not want to touch the internal SATA cabling at all. It's important to me
> that the currently working setup continue to work.

That sounds like me too. =:^) (I eventually changed the cabling/order,
but not until I had the new setup tested and working to my satisfaction.)

For the kernel commandline, if you're using an initr* (as you are, IIRC),
and if that initr* has udev on it (since that's the bit that allows this
to work), I know for sure you can use root=LABEL=rootfslabel (where
rootfslabel is the label of your "real" rootfs), and it should "just
work". Similarly with root=UUID=rootfsuuid based on the documentation,
though I've not actually tried that here, so I don't have the personal
experience with it that I have with LABEL.
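[In grub.conf terms, assuming the rootfs carries the fastVM label from the earlier stanza and the initramfs is built into the kernel image, the whole stanza reduces to a sketch like:]

```
# grub (legacy) stanza sketch: root located by label, so it survives
# /dev/sdX reordering. Requires an initr* with udev + mount, as above.
title Gentoo on SSD (root by label)
root (hd0,0)
kernel /boot/bzImage-3.8.13-gentoo root=LABEL=fastVM
```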

I've even been /told/ that you should be able to use anything available
in /dev/disk/by-*, including PARTLABEL and PARTUUID (as available on gpt
partitions), as well as ID (which should be based on the physical drive
model/serial number). However, I've only seen that from one source and
have not actually tried it myself, so the confidence level in that is
somewhat lower.

Also, apparently udev doesn't detect/create the usual by-part* symlinks
for mdraid and possibly lvm2 yet. Or at least it wasn't doing so a few
months ago when I was actually still running md/raid (not btrfs in raid1
mode as I'm doing now). So even if it's gpt partitioned md/raid and you
can see the partition-labels in your gdisk/cgdisk/parted/whatever, if
udev isn't creating the symlinks in /dev/disk/by-partlabel, mount can't
see them and they can't be used.

The way all this works is that mount reads udev's /dev/disk/by-*
symlinks, and in this case the mount that's really doing the rootfs
mount is the one on the initr*. (Caveat: I'm not sure busybox mount
handles that, nor do I know what its device-manager does in terms of
creating those links like udev does by default. So if your initr* is
based on that instead of standard udev plus mount from util-linux...)
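[The mechanism itself is nothing magic: udev maintains symlinks named after labels, pointing at the real device nodes, and mount resolves them. A minimal simulation in a throwaway directory; the by-label layout mirrors /dev/disk, and the fastVM label and sda2 target are illustrative:]

```shell
# Mimic udev's /dev/disk/by-label layout in a temp dir: a symlink
# named after the filesystem label, pointing at the device node.
tmp=$(mktemp -d)
mkdir -p "$tmp/disk/by-label"
ln -s ../../sda2 "$tmp/disk/by-label/fastVM"

# Mounting LABEL=fastVM effectively resolves through this link:
target=$(readlink "$tmp/disk/by-label/fastVM")
echo "$target"
# prints ../../sda2

rm -rf "$tmp"
```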

But a udev-and-util-linux-mount-based initr* should certainly handle at
least root=LABEL= just fine, as it's doing so here.

That handles the root= bit. There's also the grub bit itself. Grub
needs to be pointed at the correct kernel. But that should be easy
enough, as long as you're putting it on the same /boot that your grub is
pointing at.

Meanwhile, once real-root is mounted and you're doing the regular boot,
the regular fstab mounts (the localmount service for openrc) should "just
work" with labels, and in fact, I've been routinely using LABEL= in the
first fstab field for quite some years now.
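[For reference, fstab lines using LABEL= in the first field look like this sketch; labels, filesystems and options are illustrative:]

```
# /etc/fstab (sketch): mount by filesystem label instead of /dev/sdXN
LABEL=fastVM   /      ext4   noatime           0 1
LABEL=boot     /boot  ext4   noauto,noatime    1 2
```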

But AFAIK, the kernel itself can't handle root=LABEL= style commandline
options without an initr*, because AFAIK it doesn't have the necessary
mapping. udev and mount handle that in userspace, which means that in
order for it to work, userspace, either on the real rootfs or on the
initr*, must already be running.

Which leaves initr*-less folks using the old root=/dev/* or root=nnn:nnn
(major:minor device numbers) formats, which aren't as flexible in terms
of device reordering.
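[For completeness, the numeric form encodes the device's major and minor numbers. A small sketch computing the classic four-hex-digit spelling; 8:2 is the conventional major:minor for /dev/sda2, an assumption that only holds while device ordering stays stable, which is the whole problem:]

```shell
# Encode major:minor 8:2 (typically /dev/sda2) in the kernel's old
# hexadecimal root= form: two hex digits for each number.
major=8
minor=2
printf 'root=%02x%02x\n' "$major" "$minor"
# prints root=0802
```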

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman