Mailing List Archive

Linstor: confining failure domains to a single back-end device
Hello.

I'm new to LINSTOR and managed to create a test cluster to serve as a
back-end for virtual machines managed by OpenNebula.

In the documentation, there is a mention of “confining failure domains
to a single back-end device”[1], but I couldn't find an example of how
to do so.

Do I understand correctly that it means, on each server:

1 disk == 1 PV == 1 VG == 1 ThinPool == 1 storage-pool

?
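
For example (device and pool names are only made-up illustrations),
would that look like this on each node?

    # One LVM stack per physical disk:
    pvcreate /dev/sdb
    vgcreate vg_sdb /dev/sdb
    lvcreate -l 100%FREE -T vg_sdb/tp_sdb

    # One LINSTOR storage pool per thin pool:
    linstor storage-pool create lvmthin node1 pool_sdb vg_sdb/tp_sdb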

Regards.

Footnotes:
[1] https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-a_storage_pool_per_backend_device

--
Daniel Dehennin
Pôle Logiciels Libres du ministère de l’Éducation nationale
Retrieve my GPG key: gpg --recv-keys 0x5A380850F562870C
Fingerprint: EEB2 C6C8 EDFE 8364 8B2B A263 5A38 0850 F562 870C
Re: Linstor: confining failure domains to a single back-end device
Daniel Dehennin
<Daniel.Dehennin@region-academique-bourgogne-franche-comte.fr> writes:


Hello

> In the documentation, there is a mention of “confining failure domains
> to a single back-end device”[1], but I couldn't find an example of how
> to do so.
>
> Do I understand correctly that it means, on each server:
>
> 1 disk == 1 PV == 1 VG == 1 ThinPool == 1 storage-pool
>
> ?

OK, so I still don't understand how to create and use this setup. Does
anyone have any hints?

Regards.
--
Daniel Dehennin
Pôle Logiciels Libres du ministère de l’Éducation nationale
Retrieve my GPG key: gpg --recv-keys 0x5A380850F562870C
Fingerprint: EEB2 C6C8 EDFE 8364 8B2B A263 5A38 0850 F562 870C
Re: Linstor: confining failure domains to a single back-end device
On Wed, Jul 06, 2022 at 02:18:17PM +0200, Daniel Dehennin wrote:
> Daniel Dehennin
> <Daniel.Dehennin@region-academique-bourgogne-franche-comte.fr> writes:
>
> Hello
>
> > In the documentation, there is a mention of “confining failure domains
> > to a single back-end device”[1], but I couldn't find an example of how
> > to do so.
> >
> > Do I understand correctly that it means, on each server:
> >
> > 1 disk == 1 PV == 1 VG == 1 ThinPool == 1 storage-pool
> >
> > ?
>
> OK, so I still don't understand how to create and use this setup. Does
> anyone have any hints?

I'd say your assumption is correct. The "handle" for creating actual
resources in LINSTOR is then the resource group/volume group (LINSTOR
rg/vg, not to be confused with LVM's). And LINSTOR RGs can have
multiple LINSTOR storage pools. So you always use the same LINSTOR RG;
it will choose one of the SPs, and as there is a 1:1 mapping down to a
single physical disk, the failure domain for that resource is that
single disk.
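
An untested sketch with made-up names (passing a list of storage pools
to a resource group needs a reasonably recent LINSTOR):

    # One resource group that may place volumes in any of the
    # per-disk storage pools:
    linstor resource-group create vm_rg --place-count 2 \
        --storage-pool pool_sdb pool_sdc
    linstor volume-group create vm_rg

    # Each spawned resource lands in exactly one of the listed SPs
    # per node, i.e., on exactly one physical disk:
    linstor resource-group spawn-resources vm_rg vm_disk_0 20G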

Regards, rck
Re: Linstor: confining failure domains to a single back-end device
Roland Kammerer <roland.kammerer@linbit.com> writes:

>> > Do I understand correctly that it means, on each server:
>> >
>> > 1 disk == 1 PV == 1 VG == 1 ThinPool == 1 storage-pool
>> >
>> > ?
>>
>> Ok, so I don't understand how to create and use this setup. Does someone
>> have any hints?
>
> I'd say your assumption is correct. The "handle" for creating actual
> resources in LINSTOR is then the resource group/volume group (LINSTOR
> rg/vg, not to be confused with LVM's). And LINSTOR RGs can have
> multiple LINSTOR storage pools. So you always use the same LINSTOR RG;
> it will choose one of the SPs, and as there is a 1:1 mapping down to a
> single physical disk, the failure domain for that resource is that
> single disk.

Thanks a lot.

So, to see if I understood correctly:

1. when a VM disk is first created, one storage pool is chosen to
   store it

2. whenever a VM runs, the disk will be “cloned” on the same storage
   pool when possible, falling back to “copy” if the storage pool is
   full[1]

3. saving the running VM disk to long-term storage is always done in
   “copy” mode[2], so no storage is shared between them

Is that right?

I ask because I'm wondering how things work when one storage pool (==
one PV) is full.
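
I can at least see the remaining space per pool with:

    # FreeCapacity/TotalCapacity are reported per storage pool:
    linstor storage-pool list

but I don't know how placement behaves once one of the pools fills up.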

Regards.

Footnotes:
[1] https://github.com/OpenNebula/addon-linstor/blob/master/one/extender.py#L175-L182

[2] https://github.com/OpenNebula/addon-linstor/blob/master/tm/cpds#L48-L53

--
Daniel Dehennin
Pôle Logiciels Libres du ministère de l’Éducation nationale
Retrieve my GPG key: gpg --recv-keys 0x5A380850F562870C
Fingerprint: EEB2 C6C8 EDFE 8364 8B2B A263 5A38 0850 F562 870C