Mailing List Archive

LVM and the /usr Logical Volume
My new laptop is set up to dual boot and has a clean Gentoo install as
the second operating system.  It looks like there may be an issue with
the /usr Logical Volume (LV) somewhere between LVM, initramfs and udev. 
Only the base system has been installed and updated (no desktop).

The issue is that the /usr logical volume is not mounted as expected. After
booting without the livecd:
  * The df -h command shows /usr on /dev/dm-1 and not on
/dev/mapper/vg0-usr as listed in the fstab.
  * My expectation is that it should follow the other LVs (home, var, opt,
vm) and appear in the vg0 Volume Group under /dev/mapper .
  * However, the mount /usr command indicates that it is mounted
correctly:  mount: /usr: /dev/mapper/vg0-usr already mounted or mount
point busy.

Is there something off here or is this correct behavior?

The laptop is a new HP Envy x360, 2-in-1 Flip Laptop, 15.6" Full HD
Touchscreen, AMD Ryzen 7 5700U Processor, 64GB RAM and 1TB PCIe SSD.

Below are the /etc/fstab and the output from lsblk, df -h, and the links in
the volume group, after booting to the livecd and after booting to the SSD.

Thank you

# *****************************************************************************
# /etc/fstab:  This is a dual boot system (Windows 11 & Gentoo); the
# same results occurred using straight mount points, LABEL and UUID.
# *****************************************************************************
# <fs>          <mountpoint>    <type> <opts>                            <dump/pass>
#/dev/nvme0n1p1 /efi            vfat noauto,noatime                    1 2
#/dev/nvme0n1p2 /
#/dev/nvme0n1p3 /Win11
#/dev/nvme0n1p4 /Win11Data
#/dev/nvme0n1p5 /Win11Recovery
/dev/nvme0n1p6  /boot           ext2 defaults,noatime                  0 2
/dev/nvme0n1p7  none            swap sw                                0 0
/dev/nvme0n1p8  /               ext4 defaults,noatime,discard          0 1
/dev/nvme0n1p9  /lib/modules    ext4 defaults,noatime,discard          0 1
/dev/nvme0n1p10 /tmp            ext4 defaults,noatime,discard          0 2

#/dev/mapper/vg0-usr     /usr    ext4 defaults,noatime,discard          0 0
#/dev/mapper/vg0-home    /home   ext4 defaults,noatime,discard          0 1
#/dev/mapper/vg0-opt     /opt    ext4 defaults,noatime,discard          0 1
#/dev/mapper/vg0-var     /var    ext4 defaults,noatime,discard          0 1
#/dev/mapper/vg1-vm      /vm     ext4 noauto,noatime,discard,user       0 1

#Use blkid /dev/mapper/* to get the LABEL and UUID (quotes cause errors).
LABEL=usr   /usr    ext4    defaults,noatime,discard          0 0
LABEL=home  /home   ext4    defaults,noatime,discard          0 1
LABEL=opt   /opt    ext4    defaults,noatime,discard          0 1
LABEL=var   /var    ext4    defaults,noatime,discard          0 1
LABEL=vm    /vm     ext4    noauto,noatime,discard,user       0 1

#UUID=d9237094-6589-4e90-989d-17bfe74082a4 /usr    ext4 defaults,noatime,discard          0 0
#UUID=53831f3e-6266-4186-a7e1-90ecd027b981 /home   ext4 defaults,noatime,discard          0 1
#UUID=cbdfcbb5-dff1-4b21-8eca-d1684b621fb2 /opt    ext4 defaults,noatime,discard          0 1
#UUID=d43c8c7a-1a83-42f7-958d-9402e7bcc48f /var    ext4 defaults,noatime,discard          0 1
#UUID=95ea1fcc-df9d-4c0b-bce4-a979f8430728 /vm     ext4 noauto,noatime,discard,user       0 1

/dev/cdrom      /mnt/cdrom      auto rw,exec,noauto,user               0 0


# *****************************************************************************
# Booting to the livecd and before chroot, all looks good.
# *****************************************************************************
livecd ~ # lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 385.7M  1 loop /mnt/livecd
sda            8:0    1     2G  0 disk
└─sda1         8:1    1     2G  0 part /mnt/cdrom
nvme0n1      259:0    0 931.5G  0 disk
├─nvme0n1p1  259:1    0   100M  0 part
├─nvme0n1p2  259:2    0    16M  0 part
├─nvme0n1p3  259:3    0  52.2G  0 part
├─nvme0n1p4  259:4    0  40.2G  0 part
├─nvme0n1p5  259:5    0 608.6M  0 part
├─nvme0n1p6  259:6    0   2.8G  0 part /mnt/gentoo/boot
├─nvme0n1p7  259:7    0   4.7G  0 part [SWAP]
├─nvme0n1p8  259:8    0   9.3G  0 part /mnt/gentoo
├─nvme0n1p9  259:9    0   3.7G  0 part /mnt/gentoo/lib/modules
├─nvme0n1p10 259:10   0   2.8G  0 part /mnt/gentoo/tmp
├─nvme0n1p11 259:11   0 186.3G  0 part
│ ├─vg0-usr  253:1    0    25G  0 lvm  /mnt/gentoo/usr
│ ├─vg0-var  253:2    0    20G  0 lvm  /mnt/gentoo/var
│ ├─vg0-home 253:3    0    80G  0 lvm  /mnt/gentoo/home
│ └─vg0-opt  253:4    0    20G  0 lvm  /mnt/gentoo/opt
├─nvme0n1p12 259:12   0 186.3G  0 part
│ └─vg1-vm   253:0    0   150G  0 lvm  /mnt/gentoo/vm
├─nvme0n1p13 259:13   0  93.1G  0 part
├─nvme0n1p14 259:14   0  93.1G  0 part
├─nvme0n1p15 259:15   0  46.6G  0 part
├─nvme0n1p16 259:16   0  46.6G  0 part
├─nvme0n1p17 259:17   0  46.6G  0 part
├─nvme0n1p18 259:18   0  46.6G  0 part
├─nvme0n1p19 259:19   0  46.6G  0 part
└─nvme0n1p20 259:20   0  23.5G  0 part

livecd ~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
none                   32G  704K   32G   1% /run
udev                   10M     0   10M   0% /dev
shm                    32G     0   32G   0% /dev/shm
tmpfs                  32G   60M   32G   1% /
/dev/sda1             2.0G  436M  1.6G  22% /mnt/cdrom
/dev/loop0            386M  386M     0 100% /mnt/livecd
cgroup_root            10M     0   10M   0% /sys/fs/cgroup
/dev/nvme0n1p8        9.1G  915M  7.7G  11% /mnt/gentoo
/dev/nvme0n1p6        2.8G  105M  2.6G   4% /mnt/gentoo/boot
/dev/nvme0n1p9        3.6G  112M  3.3G   4% /mnt/gentoo/lib/modules
/dev/nvme0n1p10       2.7G   32K  2.6G   1% /mnt/gentoo/tmp
/dev/mapper/vg0-usr    25G  3.7G   20G  16% /mnt/gentoo/usr
/dev/mapper/vg0-var    20G  2.4G   17G  13% /mnt/gentoo/var
/dev/mapper/vg0-home   79G   24K   75G   1% /mnt/gentoo/home
/dev/mapper/vg0-opt    20G   14M   19G   1% /mnt/gentoo/opt
/dev/mapper/vg1-vm    147G   28K  140G   1% /mnt/gentoo/vm
tmpfs                  32G     0   32G   0% /mnt/gentoo/dev/shm


# *****************************************************************************
# Booting to the livecd and after chroot, all looks good.
# *****************************************************************************
(chroot) livecd # lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 385.7M  1 loop
sda            8:0    1     2G  0 disk
└─sda1         8:1    1     2G  0 part
nvme0n1      259:0    0 931.5G  0 disk
├─nvme0n1p1  259:1    0   100M  0 part
├─nvme0n1p2  259:2    0    16M  0 part
├─nvme0n1p3  259:3    0  52.2G  0 part
├─nvme0n1p4  259:4    0  40.2G  0 part
├─nvme0n1p5  259:5    0 608.6M  0 part
├─nvme0n1p6  259:6    0   2.8G  0 part /boot
├─nvme0n1p7  259:7    0   4.7G  0 part [SWAP]
├─nvme0n1p8  259:8    0   9.3G  0 part /
├─nvme0n1p9  259:9    0   3.7G  0 part /lib/modules
├─nvme0n1p10 259:10   0   2.8G  0 part /tmp
├─nvme0n1p11 259:11   0 186.3G  0 part
│ ├─vg0-usr  253:1    0    25G  0 lvm  /usr
│ ├─vg0-var  253:2    0    20G  0 lvm  /var
│ ├─vg0-home 253:3    0    80G  0 lvm  /home
│ └─vg0-opt  253:4    0    20G  0 lvm  /opt
├─nvme0n1p12 259:12   0 186.3G  0 part
│ └─vg1-vm   253:0    0   150G  0 lvm  /vm
├─nvme0n1p13 259:13   0  93.1G  0 part
├─nvme0n1p14 259:14   0  93.1G  0 part
├─nvme0n1p15 259:15   0  46.6G  0 part
├─nvme0n1p16 259:16   0  46.6G  0 part
├─nvme0n1p17 259:17   0  46.6G  0 part
├─nvme0n1p18 259:18   0  46.6G  0 part
├─nvme0n1p19 259:19   0  46.6G  0 part
└─nvme0n1p20 259:20   0  23.5G  0 part

(chroot) livecd # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/nvme0n1p8        9.1G  915M  7.7G  11% /
/dev/nvme0n1p6        2.8G  105M  2.6G   4% /boot
/dev/nvme0n1p9        3.6G  112M  3.3G   4% /lib/modules
/dev/nvme0n1p10       2.7G   32K  2.6G   1% /tmp
/dev/mapper/vg0-usr    25G  3.7G   20G  16% /usr
/dev/mapper/vg0-var    20G  2.4G   17G  13% /var
/dev/mapper/vg0-home   79G   24K   75G   1% /home
/dev/mapper/vg0-opt    20G   14M   19G   1% /opt
/dev/mapper/vg1-vm    147G   28K  140G   1% /vm
cgroup_root            10M     0   10M   0% /sys/fs/cgroup
udev                   10M     0   10M   0% /dev
tmpfs                  32G     0   32G   0% /dev/shm
none                   32G  704K   32G   1% /run



# *****************************************************************************
# Booting to the new system, df -h does not show /usr in
# the vg0 volume group under /dev/mapper.
# *****************************************************************************
newhost / # df -h
Filesystem            Size  Used Avail Use% Mounted on
none                   32G  604K   32G   1% /run
udev                   10M     0   10M   0% /dev
tmpfs                  32G     0   32G   0% /dev/shm
/dev/nvme0n1p8        9.1G  916M  7.7G  11% /
/dev/dm-1              25G  3.9G   20G  17% /usr   # This looks wrong; the expectation is /dev/mapper/vg0-usr.
cgroup_root            10M     0   10M   0% /sys/fs/cgroup
/dev/nvme0n1p6        2.8G  105M  2.6G   4% /boot
/dev/nvme0n1p9        3.6G  112M  3.3G   4% /lib/modules
/dev/nvme0n1p10       2.7G   32K  2.6G   1% /tmp
/dev/mapper/vg0-home   79G   24K   75G   1% /home
/dev/mapper/vg0-opt    20G  7.3M   19G   1% /opt
/dev/mapper/vg0-var    20G  2.8G   16G  15% /var

newhost / # lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1      259:0    0 931.5G  0 disk
├─nvme0n1p1  259:1    0   100M  0 part
├─nvme0n1p2  259:2    0    16M  0 part
├─nvme0n1p3  259:3    0  52.2G  0 part
├─nvme0n1p4  259:4    0  40.2G  0 part
├─nvme0n1p5  259:5    0 608.6M  0 part
├─nvme0n1p6  259:6    0   2.8G  0 part /boot
├─nvme0n1p7  259:7    0   4.7G  0 part [SWAP]
├─nvme0n1p8  259:8    0   9.3G  0 part /
├─nvme0n1p9  259:9    0   3.7G  0 part /lib/modules
├─nvme0n1p10 259:10   0   2.8G  0 part /tmp
├─nvme0n1p11 259:11   0 186.3G  0 part
│ ├─vg0-usr  253:1    0    25G  0 lvm  /usr   # This looks right.
│ ├─vg0-var  253:2    0    20G  0 lvm  /var
│ ├─vg0-home 253:3    0    80G  0 lvm  /home
│ └─vg0-opt  253:4    0    20G  0 lvm  /opt
├─nvme0n1p12 259:12   0 186.3G  0 part
│ └─vg1-vm   253:0    0   150G  0 lvm
├─nvme0n1p13 259:13   0  93.1G  0 part
├─nvme0n1p14 259:14   0  93.1G  0 part
├─nvme0n1p15 259:15   0  46.6G  0 part
├─nvme0n1p16 259:16   0  46.6G  0 part
├─nvme0n1p17 259:17   0  46.6G  0 part
├─nvme0n1p18 259:18   0  46.6G  0 part
├─nvme0n1p19 259:19   0  46.6G  0 part
└─nvme0n1p20 259:20   0  23.5G  0 part

newhost / # ls -l /dev/vg0 /dev/vg1
/dev/vg0:
total 0
lrwxrwxrwx 1 root root 7 Apr  4 03:32 home -> ../dm-3
lrwxrwxrwx 1 root root 7 Apr  4 03:32 opt -> ../dm-4
lrwxrwxrwx 1 root root 7 Apr  4 03:32 usr -> ../dm-1   # This looks right.
lrwxrwxrwx 1 root root 7 Apr  4 03:32 var -> ../dm-2

/dev/vg1:
total 0
lrwxrwxrwx 1 root root 7 Apr  4 03:32 vm -> ../dm-0

# mount /usr
mount: /usr: /dev/mapper/vg0-usr already mounted or mount point busy.
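
A quick way to check that the two names resolve to the same node is to follow
the symlink. The sketch below does that without root by simulating the /dev
layout in a temp directory (paths taken from the output above; on the real
system the whole check is just `readlink -f /dev/mapper/vg0-usr`):

```shell
# Simulate the /dev layout shown above in a temp directory (no root needed).
tmp=$(mktemp -d)
mkdir -p "$tmp/dev/mapper"
: > "$tmp/dev/dm-1"                      # stand-in for the dm device node
ln -s ../dm-1 "$tmp/dev/mapper/vg0-usr"  # the symlink udev creates

# Both names resolve to the same canonical path, i.e. the same device.
resolved=$(readlink -f "$tmp/dev/mapper/vg0-usr")
[ "$resolved" = "$(readlink -f "$tmp/dev/dm-1")" ] && echo "same device"
```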
Re: LVM and the /usr Logical Volume
dhk wrote:
> My new laptop is set up to dual boot and has a clean Gentoo install as
> the second operating system.  It looks like there may be an issue with
> the /usr Logical Volume (LV) somewhere between LVM, initramfs and
> udev.  Only the base system has been installed and updated (no desktop).
>
> The issue is the /usr logical volume is not mounted as expected. 
> After booting without the livecd:
>   * The df -h command show /usr on /dev/dm-1 and not
> /dev/mapper/vg0-usr like the in the fstab.
>   * My expectation is it should follow the other LVs (home, var, opt,
> vm) and be in the vg0 Volume Group on /dev/mapper .
>   * However the mount /usr command indicates that it is mounted
> correctly:  mount: /usr: /dev/mapper/vg0-usr already mounted or mount
> point busy.
>
> Is there something off here or is this correct behavior?
>
> [snip: fstab, lsblk, and df output repeated from the original message]


Is it possible that something else has the usr label?  I don't see
anything in the info you provided but maybe it is elsewhere, somewhere. 

Another option, try using the UUID instead.  That would eliminate the
above if that is the problem. 

Grasping at straws. 

Dale

:-)  :-) 
Re: LVM and the /usr Logical Volume
On 06/04/2022 14:12, Dale wrote:
> Is it possible that something else has the usr label?  I don't see
> anything in the info you provided but maybe it is elsewhere, somewhere.
>
> Another option, try using the UUID instead.  That would eliminate the
> above if that is the problem.
>
> Grasping at straws.

Or is /usr mounted by the initramfs, and just as you switch-mount root,
you might have to switch-mount /usr?

Cheers,
Wol
Re: LVM and the /usr Logical Volume
So it sounds like /usr being mounted from /dev/dm-1 instead of
/dev/mapper/vg0-usr does not look right.

The UUID was tried in the fstab and the same results occurred, same as
with LABEL and mount points.

Since /usr is mounted temporarily at boot, it almost looks as if there is
something wrong with the way the initramfs is handling it. The tmpfs is
built into the kernel, and /etc/initramfs.mounts looks correct with
only /usr in it; /lib/modules was tried as well and did not make a
difference.

Could this be a bug with genkernel or udev?

Thanks
Re: Re: LVM and the /usr Logical Volume
On Wed, 06 Apr 2022 19:38:16 -0400,
dhk wrote:
>
> So it sounds like /usr being under /dev/dm-1 instead of
> /dev/mapper does not look right.
>
> The UUID was tried in the fstab and the same results occurred,
> same as with LABEL and mount points.
>
> Since /usr is mounted temporarily at boot it almost looks as if
> there is something wrong with the way the initramfs is handling
> it. The tmpfs is built into the kernel and the
> /etc/initramfs.mounts looks correct with only /usr in it, but
> /lib/modules was tried also and did not make a difference.
>
> Could this be a bug with genkernel or udev?

Are you using systemd or openrc? What are you using for your initrd,
dracut or something else? I also wonder if dm-1 is the same thing as
your /dev/mapper/... by another name -- check where the link points
to.

--
Your life is like a penny. You're going to lose it. The question is:
How do you spend it?

John Covici wb2una
covici@ccs.covici.com
Re: Re: LVM and the /usr Logical Volume
On 07/04/2022 05:00, John Covici wrote:
> Are you using systemd or openrc? What are you using for your initrd,
> dracut or something else? I also wonder if dm1 is the same thing as
> your/dev/mapper/... by another name -- check where the link points
> to.

If it isn't, then there's something wrong. You should be using
/dev/mapper/..., which should be a link to whatever device is underlying
it. /dev/dm-1 will be whatever device-mapper brought up as the first
device it found.

Cheers,
Wol
Re: LVM and the /usr Logical Volume
On Tue, 5 Apr 2022 20:25:09 -0400, dhk wrote:

> The issue is the /usr logical volume is not mounted as expected. After
> booting without the livecd:
>   * The df -h command show /usr on /dev/dm-1 and not
> /dev/mapper/vg0-usr like the in the fstab.
>   * My expectation is it should follow the other LVs (home, var, opt,
> vm) and be in the vg0 Volume Group on /dev/mapper .
>   * However the mount /usr command indicates that it is mounted
> correctly:  mount: /usr: /dev/mapper/vg0-usr already mounted or mount
> point busy.
>
> Is there something off here or is this correct behavior?

> newhost / # ls -l /dev/vg0 /dev/vg1
> /dev/vg0:
> total 0
> lrwxrwxrwx 1 root root 7 Apr  4 03:32 home -> ../dm-3
> lrwxrwxrwx 1 root root 7 Apr  4 03:32 opt -> ../dm-4
> lrwxrwxrwx 1 root root 7 Apr  4 03:32 *usr -> ../dm-1  # This looks
> right.* lrwxrwxrwx 1 root root 7 Apr  4 03:32 var -> ../dm-2

/dev/mapper/vg0-usr and /dev/dm-1 are the same device, so nothing is
actually wrong; this is more of a cosmetic issue. You are mounting the
correct device; it is just showing under a different name.
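
df takes those names from the kernel's mount table rather than from fstab,
which is why the name can vary with who mounted the filesystem. A sketch of
where the name comes from, using the field layout documented in proc(5)
(the separator-relative indexing is the subtle part):

```shell
# /proc/self/mountinfo is what modern df/findmnt read. Each line is:
#   ID parentID major:minor root mountpoint opts [optional...] - fstype source superopts
# The source device (the name df prints) is the 2nd field after the "-"
# separator. This prints it for the root mount:
awk '$5 == "/" {
        for (i = 7; i <= NF; i++)
            if ($i == "-") { print $(i + 2); exit }
     }' /proc/self/mountinfo
```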

I suspect the initramfs here; what does the fstab inside it look like?

How are you creating the initramfs? Genkernel, dracut, home brewed?


--
Neil Bothwick

The word 'Windows' is a word out of an old dialect of the Apaches.
It means: 'White man staring through glass-screen onto an hourglass...')
Re: LVM and the /usr Logical Volume
Having /dev/dm-1 mounted on /usr would not be an issue if it were
supposed to be that way; however, nothing in the handbook or anything
else I have read says that is correct.  In addition, every other system
I have set up or used always had /usr as the mount point in the fstab.

My primary questions are:
  * Why is it different this time?
  * What changed to make /usr mount the block device?
  * Why is the /usr record in the fstab being ignored and handled
differently than /var, /opt, /home and /vm ?

Even though everything seems to be working correctly, without a good and
authoritative explanation my confidence in the system's stability is not
too high, which is preventing me from relying on it as a primary host.

My concerns about not having a good explanation for why df -h shows
/dev/dm-1 on /usr instead of /dev/mapper/vg0-usr are:
* There could be problems interfacing directly with the block device
(/dev/dm-1) instead of through the link (/dev/mapper/vg0-usr).
* When it comes time to extend the /usr logical volume with commands
like lvextend, resize2fs and lvresize, it may cause problems.
* The documentation does not say this is correct; in fact, it
specifically says the opposite: that the fstab is used for the mount
points.
* It looks like the initramfs is not letting go of the temporary /usr
mount and re-mounting /usr from the vg0-usr logical volume correctly.

After reinstalling Gentoo with a new liveusb, my system still looks
similar to the way it was before.  I started with the existing partition
scheme, wiped everything, and performed a separate, independent
install.  I am still not sure why the /dev/dm-1 block device is mounted
on /usr, which is not what the fstab is instructing.

UUIDs are not being used because the handbook says:
*Important:*  UUIDs of the filesystem on a LVM volume and its LVM
snapshots are identical, therefore using UUIDs to mount LVM volumes
should be avoided.

/etc/fstab:
/dev/nvme0n1p6          /boot           ext2 defaults,noatime                    0 2
/dev/nvme0n1p7          none            swap sw                                  0 0
/dev/nvme0n1p8          /               ext4 defaults,noatime,discard            0 1
/dev/nvme0n1p9          /lib/modules    ext4 defaults,noatime,discard            0 1
/dev/nvme0n1p10         /tmp            ext4 defaults,noatime,discard            0 1
/dev/mapper/vg0-usr     /usr            ext4 defaults,noatime,discard            0 0
/dev/mapper/vg0-home    /home           ext4 defaults,noatime,discard            0 1
/dev/mapper/vg0-opt     /opt            ext4 defaults,noatime,discard            0 1
/dev/mapper/vg0-var     /var            ext4 defaults,noatime,discard            0 1
/dev/mapper/vg1-vm      /vm             ext4 noauto,noatime,discard              0 1
/dev/cdrom      /mnt/cdrom      auto rw,exec,noauto,user             0 0

/etc/initramfs.mounts has:
/usr

# ls -l /dev/mapper/vg0-usr
lrwxrwxrwx 1 root root 7 Apr 23 05:56 /dev/mapper/vg0-usr -> ../dm-1

# mount /usr
mount: /usr: /dev/mapper/vg0-usr already mounted or mount point busy.

# df -h /usr
Filesystem      Size  Used Avail Use% Mounted on
/dev/dm-1        25G  3.2G   20G  14% /usr

Thank you
Re: Re: LVM and the /usr Logical Volume
On 25/04/2022 14:36, dhk wrote:
> After reinstalling Gentoo with a new liveusb, my system still looks
> similar to the way it was before.  I started with the existing partition
> schema and wiped everything and performed a separate independent
> install.  I am still not sure why the /dev/dm-1 block device is mounted
> on /usr which is not what the fstab is instructing.

First of all, I notice you haven't said anything about /home, /opt etc.
Missing context is important ...

Secondly, vg0-usr is a symlink to dm-1, so I would not be surprised for
df to resolve it.

In fact, looking at the output of both mount and df on my system, they
are inconsistent: mount tells me /dev/mapper/vg-root-lv-gentoo is
mounted on /, while df tells me /dev/dm-1 is mounted on /.

My guess is that anything to do with initial boot may or may not link to
/dev/dm-x; anything after that links to the vg name as you expect.

Either way it doesn't really make any difference imho.

Cheers,
Wol