Mailing List Archive

Packages for Proxmox VE 6
Hi,

the first PVE 6 beta was just released and I'd like to test it with a new
drbd cluster.

Do you think the PVE 5 packages are fine, or is it better to wait for the
LINBIT PVE 6 repository?

Thank you for your work!
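
For reference, this is the sources entry I would carry over from a PVE 5
setup (the suite/component names here are my assumption, copied from the
PVE 5 repository layout):

# /etc/apt/sources.list.d/linbit.list
deb http://packages.linbit.com/proxmox/ proxmox-5 drbd-9.0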



Re: Packages for Proxmox VE 6
On Mon, Jul 08, 2019 at 12:31:15PM +0200, Denis wrote:
> Hi,
>
> the first PVE 6 beta was just released and I'd like to test it with a
> new drbd cluster.
>
> Do you think the PVE 5 packages are fine, or is it better to wait for
> the LINBIT PVE 6 repository?

every test/feedback is highly appreciated. I did not have time to follow
the betas (and IIRC it is stable now). I don't think the packages
themselves would break; it is just user-space Perl for the plugin and
Java. So you can totally give the PVE 5 packages a try. What I did not
have time to check is whether Proxmox changed the API. And I still have
to catch up with the old API changes as well (which should only print
some warnings).
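
Giving them a try should be nothing more than something like this (the
exact package set is an assumption, it depends on what you want on each
node):

apt update
apt install linstor-proxmox linstor-controller linstor-satellite drbd-dkms
dpkg -l | grep -E 'linstor|drbd'   # check which versions actually landed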

Regards, rck
Re: Packages for Proxmox VE 6
Hi Roland!

I just set up three VMs, installed PVE 6 beta and drbd/linstor from the
PVE 5 repository without problems!

I successfully created a VM with a disk managed by linstor-proxmox and I
see vm-100-disk-1 syncing... now in sync on two nodes!
I disabled KVM hardware virtualization to start it, of course.

But I get an error when I start the VM
(it starts without the drbd disk)


Before pasting logs, another strange thing: if I browse the content of
the drbd storage from Proxmox, I can't see the VM disk, but I do get
the status and the %usage.
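
The same is visible from the CLI (a quick check; assuming the storage is
named drbdstorage as in my config):

pvesm status               # totals and usage for drbdstorage show up fine
pvesm list drbdstorage     # returns nothing, although vm-100-disk-1 exists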



kvm: -drive
file=/dev/drbd/by-res/vm-100-disk-1/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap:
The device is not writable: Permission denied
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -name prononva
-chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait'
-mon 'chardev=qmp,mode=control' -chardev
'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon
'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/100.pid
-daemonize -smbios 'type=1,uuid=6996412f-0536-485e-87f2-f37904b68301'
-smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg'
-vnc unix:/var/run/qemu-server/100.vnc,password -cpu qemu64 -m 512
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device
'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device
'vmgenid,guid=4066aee8-8a9b-4c86-9326-e040c452d0d5' -device
'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device
'usb-tablet,id=tablet,bus=uhci.0,port=1' -device
'VGA,id=vga,bus=pci.0,addr=0x2' -chardev
'socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0' -device
'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device
'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device
'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi
'initiator-name=iqn.1993-08.org.debian:01:5e763b83f1b' -drive
'file=/var/lib/vz/template/iso/debian-10.0.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads'
-device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200'
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive
'file=/dev/drbd/by-res/vm-100-disk-1/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap'
-device
'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100'
-netdev
'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on'
-device
'virtio-net-pci,mac=F2:94:D5:E8:1A:32,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'
-machine 'accel=tcg,type=pc'' failed: exit code 1




If I try to back up:



INFO: starting new backup job: vzdump 100 --storage local --node p6t1
--remove 0 --compress lzo --mode snapshot
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2019-07-13 17:18:13
INFO: status = stopped
INFO: Plugin "PVE::Storage::Custom::LINSTORPlugin" is implementing an
older storage API, an upgrade is recommended
INFO: update VM 100: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: VM Name: prononva
INFO: include disk 'scsi0' 'drbdstorage:vm-100-disk-1' 20975192K
INFO: creating archive
'/var/lib/vz/dump/vzdump-qemu-100-2019_07_13-17_18_13.vma.lzo'
INFO: starting kvm to execute backup task
kvm: -drive
file=/dev/drbd/by-res/vm-100-disk-1/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap:
The device is not writable: Permission denied
INFO: Plugin "PVE::Storage::Custom::LINSTORPlugin" is implementing an
older storage API, an upgrade is recommended
ERROR: Backup of VM 100 failed - start failed: command '/usr/bin/kvm -id
100 -name prononva -chardev
'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon
'chardev=qmp,mode=control' -chardev
'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon
'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/100.pid
-daemonize -smbios 'type=1,uuid=6996412f-0536-485e-87f2-f37904b68301'
-smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg'
-vnc unix:/var/run/qemu-server/100.vnc,password -cpu qemu64 -m 512
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device
'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device
'vmgenid,guid=4066aee8-8a9b-4c86-9326-e040c452d0d5' -device
'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device
'usb-tablet,id=tablet,bus=uhci.0,port=1' -device
'VGA,id=vga,bus=pci.0,addr=0x2' -chardev
'socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0' -device
'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device
'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device
'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi
'initiator-name=iqn.1993-08.org.debian:01:5e763b83f1b' -drive
'file=/var/lib/vz/template/iso/debian-10.0.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads'
-device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200'
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive
'file=/dev/drbd/by-res/vm-100-disk-1/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap'
-device
'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100'
-netdev
'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on'
-device
'virtio-net-pci,mac=F2:94:D5:E8:1A:32,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'
-machine 'accel=tcg,type=pc' -S' failed: exit code 1
INFO: Failed at 2019-07-13 17:18:16
INFO: Backup job finished with errors
TASK ERROR: job errors



linstor-satellite shows nothing in its log.

There is something in the linstor-controller log:
10.7.96.3 - - [2019/Jul/13:17:28:23 +0200] "GET /v1/controller/version
HTTP/1.1" 200 142 "" "PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.3 - - [2019/Jul/13:17:28:23 +0200] "GET
/v1/view/resources?nodes=p6t1&resources=vm-100-disk-1 HTTP/1.1" 200 - ""
"PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.3 - - [2019/Jul/13:17:28:24 +0200] "GET /v1/controller/version
HTTP/1.1" 200 142 "" "PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.3 - - [2019/Jul/13:17:28:24 +0200] "GET
/v1/view/resources?nodes=p6t1&resources=vm-100-disk-1 HTTP/1.1" 200 - ""
"PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.3 - - [2019/Jul/13:17:28:24 +0200] "GET /v1/controller/version
HTTP/1.1" 200 142 "" "PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.3 - - [2019/Jul/13:17:28:24 +0200] "GET
/v1/view/resources?nodes=p6t1&resources=vm-100-disk-1 HTTP/1.1" 200 - ""
"PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.4 - - [2019/Jul/13:17:28:26 +0200] "GET /v1/controller/version
HTTP/1.1" 200 142 "" "PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.4 - - [2019/Jul/13:17:28:26 +0200] "GET
/v1/view/storage-pools?nodes=p6t3 HTTP/1.1" 200 317 ""
"PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.5 - - [2019/Jul/13:17:28:27 +0200] "GET /v1/controller/version
HTTP/1.1" 200 142 "" "PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.5 - - [2019/Jul/13:17:28:27 +0200] "GET
/v1/view/storage-pools?nodes=p6t2 HTTP/1.1" 200 317 ""
"PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.3 - - [2019/Jul/13:17:28:28 +0200] "GET /v1/controller/version
HTTP/1.1" 200 142 "" "PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.3 - - [2019/Jul/13:17:28:28 +0200] "GET
/v1/view/storage-pools?nodes=p6t1 HTTP/1.1" 200 317 ""
"PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.4 - - [2019/Jul/13:17:28:36 +0200] "GET /v1/controller/version
HTTP/1.1" 200 142 "" "PythonLinstor/0.9.8 (API1.0.4)"
10.7.96.4 - - [2019/Jul/13:17:28:36 +0200] "GET
/v1/view/storage-pools?nodes=p6t3 HTTP/1.1" 200 317 ""
"PythonLinstor/0.9.8 (API1.0.4)"


Am I missing something?

Thank you for your support!





Re: Packages for Proxmox VE 6
> I successfully created a VM with a disk managed by linstor-proxmox and I
> see vm-100-disk-1 syncing... now in sync on two nodes!



What about the 3rd node? Is it not syncing, or have you configured it as
a diskless node?
Have you set the redundancy number to 3 in storage.cfg?
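
For reference, a minimal storage.cfg entry of the kind I mean (a sketch;
the storage name and controller address are just examples):

drbd: drbdstorage
        content images,rootdir
        controller 10.7.96.3
        redundancy 3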


> But I get an error when I start the VM
> (it starts without the drbd disk)


Have you tried starting it on one of the diskful nodes?

> Before pasting logs, another strange thing: if I browse the content of
> the drbd storage from Proxmox, I can't see the VM disk, but I do get
> the status and the %usage.


I think this is broken on PVE5 as well. I cannot browse the content of the
drbd storage in the webgui but I can see the total amount and the space
used in the summary tab. This used to work in the past but I'm unsure at
which stage it got broken.

> kvm: -drive
> file=/dev/drbd/by-res/vm-100-disk-1/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap:
> The device is not writable: Permission denied
> TASK ERROR: start failed:...

Here. It complains that the drbd volume is not in a writable state for
some reason (read-only?). Confirm its status with drbdtop and take the
necessary actions to switch it to the correct (writable) state (Primary?).
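
Something along these lines (the resource name is taken from your logs):

drbdadm status vm-100-disk-1    # check the role/disk state on that node
drbdadm primary vm-100-disk-1   # promote manually if it stays Secondary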


> If I try to back up:
> [...]
> The device is not writable: Permission denied
> INFO: Plugin "PVE::Storage::Custom::LINSTORPlugin" is implementing an
> older storage API, an upgrade is recommended
> ERROR: Backup of VM 100 failed

This is failing for the same reason as well...


Gianni
Re: Packages for Proxmox VE 6
On 14/07/19 19:35, Gianni Milo wrote:
>> I successfully created a VM with a disk managed by linstor-proxmox and I
>> see vm-100-disk-1 syncing... now in sync on two nodes!
>
> What about the 3rd node? Is it not syncing, or have you configured it
> as a diskless node?
> Have you set the redundancy number to 3 in storage.cfg?

I usually set redundancy 2 on my cluster, so a third node is not used
for a single volume.

>> But I get an error when I start the VM
>> (it starts without the drbd disk)
>
> Have you tried starting it on one of the diskful nodes?

Do you mean a diskless node?

Interesting. I just tried, but with the same results; see the logs below.

>> Before pasting logs, another strange thing: if I browse the content of
>> the drbd storage from Proxmox, I can't see the VM disk, but I do get
>> the status and the %usage.
>
> I think this is broken on PVE5 as well. I cannot browse the content of
> the drbd storage in the webgui but I can see the total amount and the
> space used in the summary tab. This used to work in the past but I'm
> unsure at which stage it got broken.

I run a similar 3-node production cluster with PVE 5.4-3 and linstor
0.9.5-1 and 0.9.2-1, which shows the storage content.


> Here. It complains that the drbd volume is not in a writable state
> for some reason (read-only?). Confirm its status with drbdtop and take
> the necessary actions to switch it to the correct (writable) state
> (Primary?).

drbdtop looks normal:

│Resource: vm-100-disk-1: (Overall danger score: 0)
│ Local Disc(Secondary):
│  volume 0 (/dev/drbd1001): UpToDate(normal disk state)
│
│ Connection to p6t2(Secondary): Connected(connected to p6t2)
│  volume 0:
│   UpToDate(normal disk state)

I also tried to use a drbd volume from the host and it works well; I can
write, and the Primary promotion was automatic.
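
(For example, even an fdisk open of the device is enough to trigger the
auto-promotion; quitting with 'q' writes nothing:)

fdisk /dev/drbd/by-res/vm-100-disk-1/0   # the open promotes the node to Primary
drbdadm status vm-100-disk-1             # role drops back to Secondary after closing
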

I think there are some communication problem with the proxmox storage
plugin but I'm not sure, of course.

Anyway thank you for your reply!


Here are the logs of the VM start on a diskless node:

SUCCESS:
Description:
    New resource 'vm-101-disk-1' on node 'p6t2' registered.
Details:
    Resource 'vm-101-disk-1' on node 'p6t2' UUID is:
ba138d06-9584-48ff-9662-cf2ca0e15044
SUCCESS:
Description:
    Volume with number '0' on resource 'vm-101-disk-1' on node 'p6t2'
successfully registered
Details:
    Volume UUID is: f995230b-d95c-4745-bb6f-db974991a3f1
SUCCESS:
    Created resource 'vm-101-disk-1' on 'p6t2'
SUCCESS:
    Added peer(s) 'p6t2' to resource 'vm-101-disk-1' on 'p6t3'
SUCCESS:
    Added peer(s) 'p6t2' to resource 'vm-101-disk-1' on 'p6t1'
SUCCESS:
Description:
    Resource 'vm-101-disk-1' on 'p6t2' ready
Details:
    Node(s): 'p6t2', Resource: 'vm-101-disk-1'
kvm: -drive
file=/dev/drbd/by-res/vm-101-disk-1/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap:
The device is not writable: Permission denied
Use of uninitialized value in join or string at
/usr/share/perl5/PVE/Storage/Custom/LINSTORPlugin.pm line 152.

NOTICE
  Intentionally removing diskless assignment (vm-101-disk-1) on (p6t2).
  It will be re-created when the resource is actually used on this node.
SUCCESS:
Description:
    Node: p6t2, Resource: vm-101-disk-1 marked for deletion.
Details:
    Node: p6t2, Resource: vm-101-disk-1 UUID is:
ba138d06-9584-48ff-9662-cf2ca0e15044
SUCCESS:
    Deleted 'vm-101-disk-1' on 'p6t2'
SUCCESS:
    Notified 'p6t3' that 'vm-101-disk-1' is being deleted on 'p6t2'
SUCCESS:
    Notified 'p6t1' that 'vm-101-disk-1' is being deleted on 'p6t2'
SUCCESS:
Description:
    Node: p6t2, Resource: vm-101-disk-1 deletion complete.
Details:
    Node: p6t2, Resource: vm-101-disk-1 UUID was:
ba138d06-9584-48ff-9662-cf2ca0e15044
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -name prova2
-chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait'
-mon 'chardev=qmp,mode=control' -chardev
'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon
'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/101.pid
-daemonize -smbios 'type=1,uuid=f01631be-8d16-4eb6-acb8-d91957317262'
-smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot
'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg'
-vnc unix:/var/run/qemu-server/101.vnc,password -cpu qemu64 -m 512
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device
'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device
'vmgenid,guid=e2a1888f-adcb-42d3-98e7-b05b03277236' -device
'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device
'usb-tablet,id=tablet,bus=uhci.0,port=1' -device
'VGA,id=vga,bus=pci.0,addr=0x2' -device
'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi
'initiator-name=iqn.1993-08.org.debian:01:9504984c045' -drive
'if=none,id=drive-ide2,media=cdrom,aio=threads' -device
'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device
'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive
'file=/dev/drbd/by-res/vm-101-disk-1/0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=native,detect-zeroes=unmap'
-device
'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100'
-netdev
'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on'
-device
'virtio-net-pci,mac=BA:BF:04:AE:9E:3B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'
-machine 'accel=tcg,type=pc'' failed: exit code 1
Re: Packages for Proxmox VE 6
> I run a similar 3-node production cluster with PVE 5.4-3 and linstor
> 0.9.5-1 and 0.9.2-1, which shows the storage content.

I'm using PVE 5.4-3 with LINSTOR 0.9.12-1 and I can reproduce the same
problem you are seeing on PVE6. It must be something that changed in
linstor...
Re: Packages for Proxmox VE 6
On Mon, Jul 15, 2019 at 08:25:09AM +0100, Gianni Milo wrote:
> > I run a similar 3-node production cluster with PVE 5.4-3 and linstor
> > 0.9.5-1 and 0.9.2-1, which shows the storage content.
>
> I'm using PVE 5.4-3 with LINSTOR 0.9.12-1 and I can reproduce the same
> problem you are seeing on PVE6. It must be something that changed in
> linstor...

I hope this is fixed (and released soon):
https://github.com/LINBIT/linstor-proxmox/issues/20

Please test!
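
Testing should only need the updated plugin package and a restart of the
PVE services that have the Perl module loaded (a sketch; adjust to how
you installed the plugin):

apt update && apt install linstor-proxmox
systemctl restart pvedaemon pveproxy pvestatd   # reload the storage plugin code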

Regards, rck
Re: Packages for Proxmox VE 6
On 15/07/19 11:36, Roland Kammerer wrote:
> On Mon, Jul 15, 2019 at 08:25:09AM +0100, Gianni Milo wrote:
>>> I run a similar 3-node production cluster with PVE 5.4-3 and linstor
>>> 0.9.5-1 and 0.9.2-1, which shows the storage content.
>>>
>> I'm using PVE 5.4-3 with LINSTOR 0.9.12-1 and I can reproduce the same
>> problem you are seeing on PVE6. It must be something that changed in
>> linstor...
> I hope this is fixed (and released soon):
> https://github.com/LINBIT/linstor-proxmox/issues/20
>
> Please test!


Hi Roland,

now I can see the drbdstorage content but I can't start the VM on PVE6

I'm ready for testing; I can grant you access to my test environment too.


Denis


Re: Packages for Proxmox VE 6
Have you tried to force the disk to primary with

drbdadm primary <disk>

manually?

I also gave it a try with Proxmox 6. I can migrate disks while the
machine is stopped. If I try to migrate online I get the same error. And
if I try to start a VM with disks that are "Secondary" on that machine I
also get it. If I switch it manually to Primary, it starts. But
auto-promote is working if I, for example, run fdisk on the host on the
corresponding drbd device. It seems like it either takes too long for
some operations or doesn't get triggered.

Migrating VMs between hosts also requires setting the second host to
Primary first, otherwise the migration fails.
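
Roughly what I compared (the resource name is just an example; the
drbdsetup flag as in drbd-utils 9):

drbdsetup show vm-100-disk-1 --show-defaults | grep auto-promote   # confirm it is on
fdisk /dev/drbd/by-res/vm-100-disk-1/0   # this open gets promoted just fine
# ...while the same open via the plugin/KVM start path fails with 'Permission denied'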

Regards,
Daniel




Re: Packages for Proxmox VE 6
On Mon, Jul 15, 2019 at 11:55:10AM +0200, Denis wrote:
> On 15/07/19 11:36, Roland Kammerer wrote:
> > On Mon, Jul 15, 2019 at 08:25:09AM +0100, Gianni Milo wrote:
> > > > I run a similar 3-node production cluster with PVE 5.4-3 and linstor
> > > > 0.9.5-1 and 0.9.2-1, which shows the storage content.
> > > >
> > > I'm using PVE 5.4-3 with LINSTOR 0.9.12-1 and I can reproduce the same
> > > problem you are seeing on PVE6. It must be something that changed in
> > > linstor...
> > I hope this is fixed (and released soon):
> > https://github.com/LINBIT/linstor-proxmox/issues/20
> >
> > Please test!
>
>
> Hi Roland,
>
> now I can see the drbdstorage content

good, that is what it was supposed to fix.

> but I can't start the VM on PVE6

we will see, one step at a time:
- new Proxmox API
- LINSTOR's REST-API
- PVE 6

> I'm ready for testing; I can grant you access to my test environment
> too.

That is very kind, but at the moment I'm fine with some VMs on my side.

Regards, rck
Re: Packages for Proxmox VE 6
You are right, Daniel!
If I force it with drbdadm primary, it works!
I never tried to migrate; I will.

BTW: I'm now using the new plugin with the v2 API; I will test it more
thoroughly.

Denis



Re: Packages for Proxmox VE 6
Hello,

I tried again with the latest packages: PVE 6 final, fully updated, and
linstor 0.9.13.

Everything works, except that the linstor-proxmox plugin can't promote a
resource to Primary, so VMs don't start.

If I type drbdadm primary <disk> on the VM node, the VM starts.

I can live migrate the VM too, with the same command on the destination
node (dual Primary).

Please tell me how I can help!
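
In the meantime, the full manual workaround for anyone else hitting this
(resource, VMID and node names are from my test cluster):

drbdadm primary vm-100-disk-1    # on the node that should run the VM
qm start 100
drbdadm primary vm-100-disk-1    # on the destination node, before a live migration
qm migrate 100 p6t2 --online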




_______________________________________________
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user