Mailing List Archive

DRBD version 9.2.5, but version 9.2.7 (or higher) is required
Hello,

I added a new node to the cluster today (fresh Ubuntu installation). After
setting it up successfully, I wanted to share a resource on it (extend the
linstor_db resource with one more node), but when I start it, I get the
error message:
Node: 'drbd-05' has DRBD version 9.2.5, but version 9.2.7 (or higher) is
required

But after some time the linstor_db resource is removed from one node (I
don't delete it; the controller does). I try to add the resource to this
node again, but I always get the identical error.

The problem is that I don't know what will happen if I upgrade the DRBD
kernel module to the new version and restart the node (all nodes). Does
LINSTOR remove other resources from the nodes? The problem is that not only
the resource config is removed, but the ZFS volume too.

Does anybody have information on what the procedure is in this situation?
Re: DRBD version 9.2.5, but version 9.2.7 (or higher) is required [ In reply to ]
Hello!

> I added a new node to the cluster today (fresh Ubuntu installation). After
> setting it up successfully, I wanted to share a resource on it (extend the
> linstor_db resource with one more node), but when I start it, I get the
> error message:
> Node: 'drbd-05' has DRBD version 9.2.5, but version 9.2.7 (or higher)
> is required
>

LINSTOR is usually fine with a DRBD version >= 9.0.0. The only feature that
requires the most current version (9.2.7 or 9.1.18) is when you want to mix
storage pools. Most commonly that is achieved by mixing ZFS and LVM storage
pools.
If you want to do that, you will need to upgrade DRBD.
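To see what a node is actually running before touching anything, a small check like the following works (a sketch; the `version_ge` helper is my own, and `modinfo` is assumed to be available, as on any stock Ubuntu install):

```shell
# Read the DRBD kernel module version known to this node (empty if the
# module/package is not installed).
drbd_ver=$(modinfo -F version drbd 2>/dev/null || true)

# version_ge A B: succeed if version A >= version B (relies on GNU sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "${drbd_ver:-0}" "9.2.7"; then
    echo "DRBD ${drbd_ver} is new enough"
else
    echo "DRBD ${drbd_ver:-unknown} is older than 9.2.7"
fi
```

`sort -V` does a proper version comparison, so e.g. 9.10.0 correctly sorts after 9.2.7.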


> But after some time the linstor_db resource is removed from one node (I
> don't delete it; the controller does). I try to add the resource to this
> node again, but I always get the identical error.
>

There is a "BalanceTask" where the controller tries to adjust the
deployed-resource-count to the configured place-count from the
corresponding resource-group.
That means if your resource-group (RG) has a place-count of 2 configured,
and you have deployed your 3rd linstor_db resource, LINSTOR will
automatically (after a while) delete one "unnecessary" resource.
If you do want to have your linstor_db resource 3 (or more) times
replicated, you could create a new RG, move the linstor_db
resource-definition into the new RG and configure the new RG with a
--place-count 3 for example.
I suggest using a new RG: simply modifying the --place-count of the
existing RG would trigger lots of resource creations, since many
resource-definitions would suddenly "lack" a resource.
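As a sketch (the RG name `linstor_db_rg` is made up, and option spellings may differ between client versions, so check `linstor resource-group create --help` first), that could look like:

```shell
# Create a new resource-group that places 3 replicas:
linstor resource-group create linstor_db_rg --place-count 3

# Move the linstor_db resource-definition into the new RG:
linstor resource-definition modify linstor_db --resource-group linstor_db_rg
```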


> The problem is that I don't know what will happen if I upgrade the DRBD
> kernel module to the new version and restart the node (all nodes). Does
> LINSTOR remove other resources from the nodes? The problem is that not
> only the resource config is removed, but the ZFS volume too.
>

Although LINSTOR will indeed delete the .res files (briefly, before they get
regenerated), LINSTOR will NOT `drbdadm down` the DRBD devices. So
restarting LINSTOR will not affect any running DRBD device.
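So a rolling, per-node upgrade could look roughly like this (a sketch assuming Ubuntu with the drbd-dkms package, which is an assumption about your setup; a reboot is the simplest way to load the new module, since it cannot be reloaded while DRBD devices are up):

```shell
systemctl stop linstor-satellite     # stop the satellite on this node only
apt update && apt install drbd-dkms  # pull the newer DRBD kernel module
reboot                               # load the new module

# After the node is back up:
cat /proc/drbd                       # verify the new module version
linstor node list                    # the satellite should reconnect as Online
```

Doing this one node at a time keeps the remaining replicas serving data throughout.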


Hope that helps

Best regards,
Gabor
Re: DRBD version 9.2.5, but version 9.2.7 (or higher) is required [ In reply to ]
Hello Gabor, thank you for the answer.

You mention mixing of storage pools, but I only use ZFS (thin and thick
provisioned). Here is an example:
+----------------------+---------+----------+----------+--------------+---------------+--------------+-------+------------------------------+
| StoragePool          | Node    | Driver   | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName                   |
|===========================================================================================================================================|
| DfltDisklessStorPool | drbd-03 | DISKLESS |          |              |               | False        | Ok    | drbd-03;DfltDisklessStorPool |
| DfltDisklessStorPool | drbd-04 | DISKLESS |          |              |               | False        | Ok    | drbd-04;DfltDisklessStorPool |
| DfltDisklessStorPool | drbd-05 | DISKLESS |          |              |               | False        | Ok    | drbd-05;DfltDisklessStorPool |
| DfltDisklessStorPool | drbd-06 | DISKLESS |          |              |               | False        | Ok    | drbd-06;DfltDisklessStorPool |
| HDDPool              | drbd-03 | ZFS_THIN | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-03;HDDPool              |
| HDDPool              | drbd-04 | ZFS_THIN | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-04;HDDPool              |
| HDDPool              | drbd-05 | ZFS      | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-05;HDDPool              |
| HDDPool              | drbd-06 | ZFS      | HDDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-06;HDDPool              |
| SSDPool              | drbd-03 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-03;SSDPool              |
| SSDPool              | drbd-04 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-04;SSDPool              |
| SSDPool              | drbd-05 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-05;SSDPool              |
| SSDPool              | drbd-06 | ZFS_THIN | SSDPool  | XXX.XX XiB   | XXX.XX XiB    | True         | Ok    | drbd-06;SSDPool              |
+----------------------+---------+----------+----------+--------------+---------------+--------------+-------+------------------------------+



Isn't this kind of mixing allowed either?

On Tue, Jan 30, 2024 at 4:15 PM Gábor Hernádi <gabor.hernadi@linbit.com>
wrote:
Re: DRBD version 9.2.5, but version 9.2.7 (or higher) is required [ In reply to ]
Hello Zsolt,

LINSTOR's storage pool mixing has 2 criteria. If one is fulfilled, you have
a mixed storage pool setup:
1) Different extent sizes (this is what I mentioned in the previous email
with "mixing LVM with ZFS", since LVM by default has a 4 MB extent size and
ZFS an 8 KB one).
2) Mixing thin and thick storage pools.

Thin and thick storage pools have different initial DRBD-sync behavior
(full sync vs day0-based partial sync). Mixing thin and thick forces all
thin volumes to act as thick volumes (forever, even if you delete all the
thick-resources of the resource-definition and are only left with thin
resources).

So to answer your question: Mixing ZFS and ZFS_THIN is indeed considered
storage pool mixing.
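In your listing, HDDPool is ZFS (thick) on drbd-05/drbd-06 but ZFS_THIN on drbd-03/drbd-04, so criterion 2 applies. A tiny helper (my own sketch, not LINSTOR tooling; thin provider names other than ZFS_THIN are assumptions on my side) makes the check mechanical from the Driver column of `linstor storage-pool list`:

```shell
# is_thin DRIVER: succeed if the provider kind is thin-provisioned.
# ZFS/ZFS_THIN match the listing above; the other names are assumptions.
is_thin() {
    case "$1" in
        LVM_THIN|ZFS_THIN|FILE_THIN) return 0 ;;
        *) return 1 ;;
    esac
}

# mixed_thin_thick DRIVER...: succeed if the given pools mix thin and
# thick providers (criterion 2 above).
mixed_thin_thick() {
    thin=0; thick=0
    for drv in "$@"; do
        if is_thin "$drv"; then thin=1; else thick=1; fi
    done
    [ "$thin" -eq 1 ] && [ "$thick" -eq 1 ]
}

mixed_thin_thick ZFS_THIN ZFS && echo "mixed" || echo "not mixed"
```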

--
Best regards,
Gabor Hernadi
Re: DRBD version 9.2.5, but version 9.2.7 (or higher) is required [ In reply to ]
Thanks for your detailed answer. But before the new LINSTOR version this
mixing worked without error; I could successfully migrate and sync volumes
between these servers. Can we turn off this feature temporarily for one or
more volumes/resources?

On Wed, Jan 31, 2024, 5:16 PM Gábor Hernádi <gabor.hernadi@linbit.com>
wrote:
Re: DRBD version 9.2.5, but version 9.2.7 (or higher) is required [ In reply to ]
You are correct, this is a bug that was introduced recently. We will fix
that in the next release.

On Wed, Jan 31, 2024 at 6:38 PM Zsolt Baji <bajizs@gmail.com> wrote:

--
Best regards,
Gabor Hernadi