Dale posted on Thu, 09 Sep 2010 21:16:49 -0500 as excerpted:
> Lindsay Haisley wrote:
>> I recently tried a badly needed kernel upgrade on my desktop system,
>> moving from kernel 2.6.23-gentoo-r3 to kernel 2.6.29-gentoo-r5.
I'll say! Even 2.6.29 is ancient. 2.6.35.4 is, I believe, the current
release, and I'm running 2.6.36-rc3. (I run Linus' git directly, but he
has been away at a conference and there haven't been any updates for some
days, so current git is still the rc3 he put out before he left... unless
he got back and did some commits today.)
But I'm curious; why 2.6.29? 2.6.27 was a long-term-stable-support
release, still currently supported tho likely not for a lot longer, unless
someone else picks it up, as the maintainer says he's about done with it,
while 2.6.29 was on a normal release support schedule, AFAIK, with stable
updates for it ending probably sometime after the +2 release (2.6.29 +2 =
2.6.31).
2.6.32 is the current long-term-stable-support release. If I were running
my kernels as long as you do, that's what I'd be upgrading to, because
it'll be supported (and get security and bug patch support in further
stable releases) for some time yet, not the 2.6.29 that's no longer
upstream supported.
I'd STRONGLY suggest you do whatever research you might wish to, to
confirm what I just stated, and then seriously consider 2.6.32.
Either that, or go back to 2.6.27, because altho its support is about to
end (again, unless someone else picks it up), it has been supported as a
long-term-stable for quite some time now and has a lot more of the bugs
worked out than 2.6.29 will ever get.
>> This
>> also required an upgrade of udev from 141 to 151-r4. When I rebooted
>> the box there was no /dev/hda4 which is normally the root filesystem,
>> and instead what was the root filesystem had a device name of, I
>> believe, "rootfs" in the kernel mount table which had the same files.
>> A number of other mounts were gone as well (there was no /dev/hda at
>> all, which has several partitions).
You were playing irresponsible gentoo sysadmin who wants to live on the
edge and risk breaking stuff because you didn't read your ewarn summaries,
weren't you? Especially with boot-critical packages like udev, and
*ESPECIALLY* when you're upgrading that far, you **REALLY** **NEED** to
read those messages.
Excerpt from udev-151-r4.ebuild:
    if use old-hd-rules; then
        ewarn
        ewarn "old-hd-rules use flag is enabled"
        ewarn "This adds the removed rules for /dev/hd* devices"
    else
        ewarn
        ewarn "This version of udev no longer has use flag old-hd-rules enabled"
        ewarn "So all special rules for /dev/hd* devices are missing"
    fi
    ewarn "Please migrate to the new libata if you need these rules."
    ewarn "They will be completely removed on the next udev update."
Neither did you check the USE flag changes that emerge --pretend or emerge
--ask would have shown you. From equery uses =udev-151-r4:
- - old-hd-rules : Install rules for /dev/hd* devices, removed upstream
at udev-148
>> The boot-up stumbled to a halt at a maintenance mode prompt with the
>> root filesystem mounted R/O and of course no gnome desktop. I could
>> use mount -o remount,rw / to make the root filesystem RW, which allowed
>> me to re-emerge an earlier version of udev and boot to the previous
>> kernel, but I'm stuck with an aging kernel, and other tools depend on a
>> kernel and udev upgrade so sooner or later I'm going to be just, plain,
>> stuck :(
What's it they say about naughty gentoo sysadmins who can't do responsible
upgrades, checking their USE flag changes or AT LEAST the ewarn
summaries?  Oh, yeah, "If it breaks, you get to keep the pieces."  Well,
you read neither one, then upgraded anyway, entirely heedless of any
warnings or USE flag changes, and, not surprisingly for something left
that long, it broke...
Of course, it /is/ fixable. You won't be left with pieces that you can't
put back together. =:^)
>> The drive setup is a bit complex. The actual hard drive mounting
>> (excluding things like proc, udev, devpts, etc.) look like:
>>
>> /dev/hda4 on /
>> The underlying drives are SATA drives which show up as
>> /dev/sd[a-d]1 in /dev.
>>
>> This setup must be maintained in a functional state across any kernel
>> and udev upgrades.
You said it, not me. It must be /maintained/. That's a responsibility.
Gentoo can't do that for you, neither does it even try to. If you can't
even read the warnings the gentoo devs spend their time on... <shrug>
You also said it yourself. The devices are /dev/sd*, not /dev/hd*. The
direct change to your fstab (and lvm and etc config, if necessary), then,
should be pretty simple, a pretty much direct substitution: s/hd/sd/g
(substitute sd for hd, globally).  It's possible the [a-d] bit will
change order, but I doubt it, as you've been depending on udev's hd rules
for some time now, and I'm almost certain (without actually checking, one
could check the udev ruleset if they wanted to be sure) they default to
using the same device letter if possible.
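As a concrete sketch of that substitution (the fstab contents here are hypothetical; anchoring the pattern on "/dev/hd" keeps sed from touching anything else on the line):

```shell
# A hypothetical two-line fstab excerpt, for illustration only:
fstab='/dev/hda4  /      ext3  noatime  0 1
/dev/hda3  /home  ext3  noatime  0 2'

# The s/hd/sd/ substitution suggested above, anchored on the /dev path:
echo "$fstab" | sed 's|/dev/hd|/dev/sd|g'
# -> /dev/sda4  /      ext3  noatime  0 1
# -> /dev/sda3  /home  ext3  noatime  0 2
```

On the real file you'd run sed -i (or edit a copy first) against /etc/fstab as root, after taking a backup.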
>> I've been careful to use the .config from the working kernel as the
>> start for configuring a kernel for the newer kernel, using make
>> oldconfig.
>>
>> Does anyone have any idea what's wrong here? Am I required in recent
>> kernels to identify all physical drives in /etc/fstab (and anywhere
>> else it matters) with a UUID instead of a /dev device name? I've
>> wasted an entire day on this problem, which I can ill afford, but I
>> have to get past this roadblock and get my kernel up-to-date.
If you couldn't afford it, why did you crassly upgrade a boot-critical
package and then reboot, without reading the ewarns, /especially/ when
upgrading that far at once? Do you regularly play Russian Roulette as
well? How many bullets do you load when you do?
Come on! This isn't rocket science! You're using a rolling distribution;
you let your packages get /years/ out of date, and then you upgrade boot-
critical packages without either doing any research on what has changed in
the mean time, or AT LEAST reading the warnings! What do you EXPECT to
happen? Under such circumstances, I know what I'd expect. I'd expect a
tough time getting back up and running, when the system broke because I'd
effectively played Russian Roulette with my computer, with five bullets in
a six-shooter!  Gentoo can put the warnings in the ebuild and cause them
to be displayed, or mailed to you, or whatever, depending on how you've
configured it, but just as you can lead a horse to water without making
him drink, Gentoo can put the warnings there but can't make you read them.
This is especially true when you already know that the drives are SATAs
and appear as /dev/sd*, not /dev/hd*.  There would be a /bit/ more excuse
if you'd just upgraded legacy PATA drives, as they are now handled by
libata by default as well, which switches them from /dev/hd* to /dev/sd*.
But you already knew your drives were /dev/sd*, so why /on/ /earth/ were
you using /dev/hd* in your fstab, anyway?  For the folks with PATA drives
who have been living under rocks and in caves for several years now,
there's at least /some/ explanation, since the switch to /dev/sd* could
take them by surprise (again, only if they've been living under a rock or
in a cave; I'd not expect a responsible sysadmin who has been around to
have missed it).  But you knew yours were /dev/sd* already.
> I ran into this a few weeks ago. I to have the old IDE hard drives. I
> had to switch to PATA which means my drives are now sd* instead of hd*.
As I said, at least there's /some/ excuse with PATA/IDE.
> I don't use LVM tho. I set LABELS on mine and use that to boot and
> mount the partitions where that are supposed to mount. It worked pretty
> well and since I'm using LABELS I can also boot the older kernels and
> hopefully any future kernels that come out.
Actually, that's the way I have mine set up now too, with labels.  That
way, it shouldn't matter if the partition is on /dev/hda, /dev/md3, /dev/
sdz, or something else, mount and the kernel should be able to figure it
out.
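A label-based setup is two steps: tag the filesystem, then refer to the tag. A sketch, with hypothetical device and label names (e2label works for ext2/3; other filesystems have their own labeling tools):

```
# Set a label on an existing ext2/3 filesystem (one-time, as root):
#   e2label /dev/sda4 rootfs
# Then use the label in /etc/fstab instead of the device node:
LABEL=rootfs   /      ext3  noatime  0 1
LABEL=home     /home  ext3  noatime  0 2
```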
> I *think* LVM can use LABELS to. If so, that would be my suggestion.
> That way you can move things around as needed and them still boot as
> they still be able to find your partitions.
Seconded.
The one thing that can be a bit confusing then, however, especially with
multiple layers such as mdraid, dmraid, lvm, etc, is keeping straight
which devices are which, when maintenance using the device nodes instead
of the labels is needed.
> So far, I have not been able to get Grub to see the LABELS. I just
> haven't been able to do much testing on it yet.
AFAIK (and this is why I replied here, instead of directly to the above
post), grub-1 (0.97-r*, called "legacy") can't use labels.  It's
/possible/ grub2 might, but I don't know for sure as I've not upgraded to
it yet.

Grub-legacy has been unsupported upstream for years, since well before
grub2's on-disk format stabilized.  In that regard they pulled a KDE,
leaving the current, actually-working version without support while the
new version was still badly broken.  Luckily, in grub's case the
community was big enough that community patches have continued to sustain
grub-legacy over the gap, adding new features like gpt partition support
and ext4 filesystem support well after upstream abandoned their stable
code, but years before the new version was even generally usable, let
alone production-ready.
> If LVM can work with this, it should be backward compatible
> with the kernels.
I don't believe lvm works with labels directly, because labels are a
file-system-level feature, and lvm is a below-file-system-level layer.
Neither does mdraid (mdadm), for much the same reason.
However, mdadm DOES allow device sequence change flexibility, because it
puts MORE than enough data in its metadata headers to easily identify the
mdraid devices on its own -- as long as you point it at the correct set of
devices.  (The DEVICE line configures where mdadm scans; if it's absent,
mdadm scans the devices listed in /proc/partitions, a sane default.)
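A minimal /etc/mdadm.conf along those lines might look like this (the device pattern and UUID are invented for illustration; mdadm --detail --scan prints the real ARRAY lines):

```
# Where mdadm scans for array members; if omitted, it falls back to
# the devices listed in /proc/partitions:
DEVICE /dev/sd[abcd]*

# The array is matched by the UUID stored in its own metadata, so it
# assembles correctly even if the member device names change:
ARRAY /dev/md0 UUID=a1b2c3d4:e5f60718:293a4b5c:6d7e8f90
```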
FWIW, I ran lvm for a while, but decided that now that mdraid handles
partitioned raid, there wasn't much need for it any longer, and what with
lvm on mdraid (partitions) on raw disk partitions, the complexity both of
keeping track of it all, AND of keeping enough of both the mdadm and lvm
administration commands in my head to properly fix things if something
went wrong... was getting dangerously high. As complex as it was, I was
thus more likely to make a critical data-loss mistake. As such, I decided
it was better to dump the lvm, and deal simply with partitioned RAID.
Unfortunately, I've forgotten enough about lvm since then that I don't
remember exactly what features it did have in this regard, but as I said,
I doubt it has label support because labels are a file-system-level
feature, and the filesystems go on lvm, not lvm on the filesystems. But I
expect it DOES have UUID and scanning support.
FWIW2, as disks grow beyond the 2 TB barrier, with legacy MBR style
partitioning reaching its limit at 2 TB, the newer GPT (GUID Partition
Table, originally developed as part of EFI (Extensible Firmware Interface,
think Apple and Intel), but backward compatible with BIOS based machines
as well) is likely to take over. GPT has a number of advantages,
including keeping redundant copies at each end of the disk and checksums
for reliability, and doing away with the primary/extended/logical
partition distinction, but ALSO including a partition-level label-type
feature. Given that, as GPT becomes more common, it's very likely the
various above-partition-below-filesystem level tools and systems,
including mdraid/mdadm and lvm/devicemapper/dmraid, will grow support for
partition labels as well. Until that point, however, the much less human
readable UUID/GUID system is the closest it can get.
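Until then, the UUID route works today. A sketch, with an invented UUID; on a real system blkid reports the actual values:

```
# blkid prints each filesystem's UUID, e.g. (output invented here):
#   /dev/sda4: UUID="3f6b9de8-1391-4d11-9106-a43f08d823a6" TYPE="ext3"
# That value then replaces the device node in /etc/fstab:
UUID=3f6b9de8-1391-4d11-9106-a43f08d823a6  /  ext3  noatime  0 1
```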
BTW (or FWIW3, if you prefer), the whole idea behind udev is that it's now
user customizable. You can make the devices show up either directly, or
via symlink, as whatever name you want. If you want to name your hard
drives after the planets, /dev/mercury instead of /dev/sda (or hda),
followed by venus, earth, mars... instead of sdb, sdc, sdd... that's
doable.
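As a sketch of such a rule (the serial number is a placeholder; udevadm info --query=all --name=/dev/sda shows the real one, and the file name is just an example), a file under /etc/udev/rules.d might contain:

```
# /etc/udev/rules.d/99-local.rules -- illustrative only.
# Add a /dev/mercury symlink for the whole disk with this (made-up)
# serial; the rule runs late so ID_SERIAL is already populated by the
# persistent-storage rules:
KERNEL=="sd[a-z]", ENV{ID_SERIAL}=="WDC_WD1234-EXAMPLE", SYMLINK+="mercury"
```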
It is of course also doable to find and save the old udev hd*
compatibility rules, if you want, keeping them around for continued use.
That's even easier than devising your own rules, since they're pre-made.
Just find and save somewhere the file(s) that take care of that, before
doing the upgrade. You can read about it in the udev manpage if you like,
but the default rules location is /lib(64)?/udev/rules.d, so you'll
probably find them there (tho there might be a helper script that you need
to save as well), while the custom rules location is /etc/udev/rules.d, so
that's where you should copy them to.
Again, Gentoo has documentation on this, the udev guide. You can get as
deeply into it, designing as complex a ruleset doing who knows what, as
you want. As is normally the case with Gentoo, it's all up to you. But
again, if you want to just do upgrades without reading any warnings or
documentation, even when it's spit out in ewarns during the upgrade itself,
well, I suggest you go looking for a different distribution, because
Gentoo wasn't designed as a babysitter, and really, there /are/ others
that better fit that bill.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman