Mailing List Archive

Definitive answer on >4TB DRBD volumes
Hi,

I've googled around for a while, and I can't find anything definitive
for or against - what is the maximum volume size supported by DRBD?

I'm running an 11TB DRBD 8.2.6 volume between two nodes, connected by
10GE. I've hit some odd issues (OOPSes, continually resyncing data)
and I'd like to eliminate the volume size as a cause of the issue.

Cheers,

Patrick
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user

Re: Definitive answer on >4TB DRBD volumes
On Tue, Jul 22, 2008 at 05:03:23PM +0800, Patrick Coleman wrote:
> Hi,
>
> I've googled around for a while, and I can't find anything definitive
> for or against - what is the maximum volume size supported by DRBD?
>
> I'm running an 11TB DRBD 8.2.6 volume between two nodes, connected by
> 10GE. I've hit some odd issues (OOPSes, continually resyncing data)
> and I'd like to eliminate the volume size as a cause of the issue.


DRBD 8.0.x, 8.2.6:
32bit kernel:
4 TB hard limit per device
you can have several of them, but you probably run into some
other limit pretty fast.
64bit kernel:
4 TB "supported".
(unsupported theoretically) 16 TB hard limit per device,
you can have several of them, but you probably run into some
other limit pretty fast.

you need 32 MB in core RAM per TB storage,
which is pinned by DRBD's bitmap.
that is 512 MB RAM for 16 TB total storage,
1 GB RAM for 32 TB total storage.
and yes, that bitmap memory just sits there, pinned,
doing nothing during normal operation. sorry for that.
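(For reference, the 32 MB per TB figure implies a bitmap granularity of one bit per 4 KiB block; a quick sanity check of the numbers above, under that assumption:)

```python
# DRBD bitmap RAM rule of thumb: one bit tracks one 4 KiB block,
# so each TiB of storage pins 2**40 / 4096 bits = 32 MiB of RAM.

def bitmap_mib(total_tib):
    """Pinned bitmap RAM in MiB for total_tib TiB of replicated storage."""
    bits = total_tib * (2**40 // 4096)   # one bit per 4 KiB block
    return bits // 8 // 2**20            # bits -> bytes -> MiB

print(bitmap_mib(1), bitmap_mib(16), bitmap_mib(32))   # 32 512 1024
```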

the difference between "supported" 4 TB
and the (unsupported) 16TB is that we have had several reports
of instability when going beyond the 4 TB (e.g. it appears to
work on a freshly booted box, but fails if you try again on a
busy box after some uptime). and that we don't have too much
experience with such large devices on "plain" DRBD ourselves,
since we tend to use "DRBD+" then.

DRBD+
both 32bit and 64bit
(we recommend 64bit for large storage anyways).
16 TB per device (currently)
you can have several of them, you should not hit any other limit
but installed physical ram.
you need a support contract with LINBIT for DRBD+.

we plan to increase the limit on DRBD+ up to 128 TB (or even
"arbitrary") per device "soonish".
once that is done, the current DRBD+ code will be migrated,
which would make 16 TB per device supported in 64bit "plain"
DRBD 8.2.y, where y is several steps larger than 6.


Did that help?


--
: Lars Ellenberg http://www.linbit.com :
: DRBD/HA support and consulting sales at linbit.com :
: LINBIT Information Technologies GmbH Tel +43-1-8178292-0 :
: Vivenotgasse 48, A-1120 Vienna/Europe Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed

Re: Definitive answer on >4TB DRBD volumes
On Tue, Jul 22, 2008 at 12:01:48PM +0200, Lars Ellenberg wrote:
> you need 32 MB in core RAM per TB storage,
> which is pinned by DRBDs bitmap.
> that is 512 MB RAM for 16 TB total storage,
> 1 GB RAM for 32 TB total storage.

Oh, this is very nice information, thank you. I'm curious if the memory
requirements are the same for:
- one 4TB device and
- eight 512GB devices?

I guess yes, since you mention "total storage", but I wouldn't mind a
confirmation :)

thanks in advance,
iustin

Re: Definitive answer on >4TB DRBD volumes
On Tue, Jul 22, 2008 at 12:09:39PM +0200, Iustin Pop wrote:
> On Tue, Jul 22, 2008 at 12:01:48PM +0200, Lars Ellenberg wrote:
> > you need 32 MB in core RAM per TB storage,
> > which is pinned by DRBDs bitmap.
> > that is 512 MB RAM for 16 TB total storage,
> > 1 GB RAM for 32 TB total storage.
>
> Oh, this is very nice information, thank you. I'm curious if the memory
> requirements are the same for:
> - one 4TB device and
> - eight 512GB devices?
>
> I guess yes, since you mention "total storage", but I wouldn't mind a
> confirmation :)

yes.
well, there is some additional, mostly negligible (compared to the bitmap
memory consumption) overhead per device.

but, yes.
one 4 TB device needs a 128 MB bitmap;
eight 512 GB devices need a 16 MB bitmap each.
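(The per-device arithmetic works out the same either way; a quick check, assuming the 32 MB per TB rule quoted earlier in the thread:)

```python
MIB_PER_TIB = 32   # DRBD pins roughly 32 MiB of bitmap RAM per TiB of storage

def device_bitmap_mib(size_gib):
    """Bitmap RAM in MiB for one device of size_gib GiB."""
    return size_gib * MIB_PER_TIB // 1024

print(device_bitmap_mib(4096))      # one 4 TiB device  -> 128
print(8 * device_bitmap_mib(512))   # eight 512 GiB devices -> 128 total
```

Total bitmap RAM depends only on total replicated storage, not on how it is split across devices.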

--
: Lars Ellenberg http://www.linbit.com :
: DRBD/HA support and consulting sales at linbit.com :
: LINBIT Information Technologies GmbH Tel +43-1-8178292-0 :
: Vivenotgasse 48, A-1120 Vienna/Europe Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed

Re: Definitive answer on >4TB DRBD volumes
On Tue, Jul 22, 2008 at 6:01 PM, Lars Ellenberg
<lars.ellenberg@linbit.com> wrote:
> On Tue, Jul 22, 2008 at 05:03:23PM +0800, Patrick Coleman wrote:
>> Hi,
>>
>> I've googled around for a while, and I can't find anything definitive
>> for or against - what is the maximum volume size supported by DRBD?
>>
>> I'm running an 11TB DRBD 8.2.6 volume between two nodes, connected by
>> 10GE. I've hit some odd issues (OOPSes, continually resyncing data)
>> and I'd like to eliminate the volume size as a cause of the issue.
>
>
> DRBD 8.0.x, 8.2.6:
> 32bit kernel:
> 4 TB hard limit per device
> you can have several of them, but you probably run into some
> other limit pretty fast.
> 64bit kernel:
> 4 TB "supported".
> (unsupported theoretically) 16 TB hard limit per device,
> you can have several of them, but you probably run into some
> other limit pretty fast.
<snip>
> Did that help?

mm, thanks for making that clear.

The boxes are both Dual-Quad-Core Xeons with 8GB of RAM, running a
Debian 2.6.22-amd64 kernel, so memory shouldn't be a problem. I'll
describe my current problems in more detail, and perhaps you'll be
able to tell me whether it seems to be related at all to the size of
the device (though it does sound likely, given you've had reports of
instability).

Firstly, DRBD seems to think it's permanently out of sync. I installed
8.0.12 (Debian testing) and ran the initial sync, and everything went
fine. Then I rebooted the secondary. After each reboot, it says about
3.9TB is out of sync and resyncs it. During the resync, the oos field
in /proc/drbd drops to zero. This completes, but then if I check
/proc/drbd the oos field is static at about 3.9TB, though the states
are UpToDate/UpToDate. Connecting and reconnecting makes it resync,
but has the same effect as for a reboot. Invalidating and resyncing
the secondary had no effect.

I then upgraded to 8.2.6, compiled from the Debian source package.
This all worked ok, and a resync happened as expected.

I tried blowing away the secondary and rebuilding it from scratch.
This seemed to work ok, and started doing the initial sync, but
crashed the secondary towards the end. After rebooting, it went back
to its resyncing 3.9TB thing.

I didn't trust the data on the secondary at this point, so I tried the
new verify feature from the primary. This ran through to the end and
found the 3.9TB out of sync but, judging by the logs on the secondary,
crashed the primary just after it had finished.

The primary then decided its own 3.9TB was out of sync, and resynced
from the secondary. It's currently doing the same thing it was doing
before, with the large oos value in /proc/drbd:

version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by
phil@fat-tyre, 2008-05-30 12:59:17
0: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
ns:0 nr:1000728252 dw:2122151308 dr:2008 al:1 bm:259638 lo:0 pe:0
ua:0 ap:0 oos:3128329444

I was considering moving to 4x3TB DRBD volumes this weekend, and see
if that helps, but from what you say it might not make much
difference. If you think this is worth trying then I'll give it a go
anyway.

The issue is that I don't know whether the instability is caused by
DRBD or something else in the system (they're both mostly identical).
By the time I get to the box the terminal has blanked itself, so I
can't see the backtrace. There's nothing in the logs. It may be worth
connecting up a serial console, but I've only had two crashes in as
many months so it's going to be a while before I get anything.

One other thing I've noticed is that the machines started crashing
when I upgraded - would downgrading help?

Any suggestions you have at all would be most welcome.

Cheers,

Patrick



--
http://www.labyrinthdata.net.au - WA Backup, Web and VPS Hosting

Re: Definitive answer on >4TB DRBD volumes
On Tue, Jul 22, 2008 at 10:14 PM, Patrick Coleman <blinken@gmail.com> wrote:
> On Tue, Jul 22, 2008 at 6:01 PM, Lars Ellenberg
> <lars.ellenberg@linbit.com> wrote:
>> On Tue, Jul 22, 2008 at 05:03:23PM +0800, Patrick Coleman wrote:
<snip>

Replying to my own post, sorry. I've just noticed the following
interesting thing in the kernel log:

Taking the log from the resync that just finished:

drbd0: receiver (re)started
drbd0: conn( Unconnected -> WFConnection )
drbd0: Handshake successful: Agreed network protocol version 88
drbd0: conn( WFConnection -> WFReportParams )
drbd0: Starting asender thread (from drbd0_receiver [9371])
drbd0: data-integrity-alg: <not-used>
drbd0: Becoming sync target due to disk states.
drbd0: bm_set was 781246677, corrected to 782082361.
/usr/src/modules/drbd8/drbd/drbd_receiver.c:2144
drbd0: peer( Unknown -> Primary ) conn( WFReportParams -> WFBitMapT )
pdsk( DUnknown -> UpToDate )
drbd0: Writing meta data super block now.
drbd0: conn( WFBitMapT -> WFSyncUUID )
drbd0: helper command: /sbin/drbdadm before-resync-target
drbd0: conn( WFSyncUUID -> SyncTarget )
drbd0: Began resync as SyncTarget (will sync 3128329444 KB [782082361
bits set]).
drbd0: Writing meta data super block now.
drbd0: Resync done (total 27906 sec; paused 0 sec; 112100 K/sec)
drbd0: conn( SyncTarget -> Connected ) disk( Inconsistent -> UpToDate )
drbd0: bm_set was 0, corrected to 782082361.
/usr/src/modules/drbd8/drbd/drbd_worker.c:675
drbd0: helper command: /sbin/drbdadm after-resync-target
drbd0: cs:Connected rs_left=782082361 > rs_total=0 (rs_failed 0)
drbd0: Writing meta data super block now.

and /proc/drbd:

version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by
phil@fat-tyre, 2008-05-30 12:59:17
0: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
ns:0 nr:1001628124 dw:2123051180 dr:2008 al:1 bm:259638 lo:0 pe:0
ua:0 ap:0 oos:3128329444

I was looking at the line in the log where it says "bm_set was 0,
corrected to 782082361". 782082361*4K = 3128329444K, which is the oos
value.
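(That arithmetic checks out: one bitmap bit covers a 4 KiB block, so the bits-set count from the log times 4 gives the out-of-sync figure in KiB:)

```python
# Each DRBD bitmap bit covers one 4 KiB block, so bits set * 4 = KiB out of sync.
bits_set = 782082361    # "bm_set was 0, corrected to 782082361" in the kernel log
oos_kib = bits_set * 4
print(oos_kib)          # 3128329444 -- matches the oos: field in /proc/drbd
```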

-Patrick



--
http://www.labyrinthdata.net.au - WA Backup, Web and VPS Hosting

Re: Update: Definitive answer on >4TB DRBD volumes
On Tue, Jul 22, 2008 at 12:01:48PM +0200, Lars Ellenberg wrote:
> On Tue, Jul 22, 2008 at 05:03:23PM +0800, Patrick Coleman wrote:
> > Hi,
> >
> > I've googled around for a while, and I can't find anything definitive
> > for or against - what is the maximum volume size supported by DRBD?
> >
> > I'm running an 11TB DRBD 8.2.6 volume between two nodes, connected by
> > 10GE. I've hit some odd issues (OOPSes, continually resyncing data)
> > and I'd like to eliminate the volume size as a cause of the issue.
>
>
> DRBD 8.0.x, 8.2.6:
> 32bit kernel:
> 4 TB hard limit per device
> you can have several of them, but you probably run into some
> other limit pretty fast.
> 64bit kernel:
> 4 TB "supported".
> (unsupported theoretically) 16 TB hard limit per device,

I just tried that myself out of curiosity.
it works up to "only" 8 TB per device on 64bit kernel.
anything bigger will oops sooner or later.

again: maximum 8 TB per device -- or crash is imminent.

therefore we will restrict drbd-8.2.7 (8.0.13)
to refuse anything bigger than that.

> you can have several of them, but you probably run into some
> other limit pretty fast.
>
> you need 32 MB in core RAM per TB storage,
> which is pinned by DRBDs bitmap.
> that is 512 MB RAM for 16 TB total storage,
> 1 GB RAM for 32 TB total storage.
> and yes, that bitmap memory just sits there, pinned,
> doing nothing during normal operation. sorry for that.

as mentioned in the other mail, read that "total storage" as:
8 devices @ 512 GB --> 128 MB RAM usage for drbd bitmap
1 device @ 4 TB --> 128 MB RAM usage for drbd bitmap

> the difference between "supported" 4 TB
> and the (unsupported) 16TB is that we have had several reports
> of instability when going beyond the 4 TB (e.g. it appears to
> work on a freshly booted box, but fails if you try again on a
> busy box after some uptime). and that we don't have too much
> experience with that large devices on "plain" DRBD ourselves,
> since we tend to use "DRBD+" then.
>
> DRBD+
> both 32bit and 64bit
> (we recommend 64bit for large storage anyways).
> 16 TB per device (currently)
> you can have several of them, you should not hit any other limit
> but installed physical ram.
> you need a support contract with LINBIT for DRBD+.
>
> we plan to increase the limit on DRBD+ up to 128 TB (or even
> "arbitrary") per device "soonish".
> once that is done, the current DRBD+ code will be migrated,
> which would make 16 TB per device supported in 64bit "plain"
> DRBD 8.2.y, where y is several steps larger than 6.

this remains true, however.

--
: Lars Ellenberg http://www.linbit.com :
: DRBD/HA support and consulting sales at linbit.com :
: LINBIT Information Technologies GmbH Tel +43-1-8178292-0 :
: Vivenotgasse 48, A-1120 Vienna/Europe Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed

RE: Update: Definitive answer on >4TB DRBD volumes
> > DRBD 8.0.x, 8.2.6:
> > 32bit kernel:
> > 4 TB hard limit per device
> > you can have several of them, but you probably run into some
> > other limit pretty fast.
> > 64bit kernel:
> > 4 TB "supported".
> > (unsupported theoretically) 16 TB hard limit per device,
>
> I just tried that myself out of curiosity.
> it works up to "only" 8 TB per device on 64bit kernel.
> anything bigger will oops sooner or later.
>
> again: maximum 8 TB per device -- or crash is imminent.
>
> therefore we will restrict drbd-8.2.7 (8.0.13)
> to refuse anything bigger than that.

I might be naive here, but surely with a 64-bit kernel it should be
possible to go over 8 TB - and if you cannot, then there must be a bug
somewhere?


Re: Update: Definitive answer on >4TB DRBD volumes
On Thu, Jul 24, 2008 at 4:34 PM, Lars Ellenberg
<lars.ellenberg@linbit.com> wrote:
> On Tue, Jul 22, 2008 at 12:01:48PM +0200, Lars Ellenberg wrote:
>> On Tue, Jul 22, 2008 at 05:03:23PM +0800, Patrick Coleman wrote:
>> > Hi,
>> >
>> > I've googled around for a while, and I can't find anything definitive
>> > for or against - what is the maximum volume size supported by DRBD?
>> >
>> > I'm running an 11TB DRBD 8.2.6 volume between two nodes, connected by
>> > 10GE. I've hit some odd issues (OOPSes, continually resyncing data)
>> > and I'd like to eliminate the volume size as a cause of the issue.
>>
>>
>> DRBD 8.0.x, 8.2.6:
>> 32bit kernel:
>> 4 TB hard limit per device
>> you can have several of them, but you probably run into some
>> other limit pretty fast.
>> 64bit kernel:
>> 4 TB "supported".
>> (unsupported theoretically) 16 TB hard limit per device,
>
> I just tried that myself out of curiosity.
> it works up to "only" 8 TB per device on 64bit kernel.
> anything bigger will oops sooner or later.
>
> again: maximum 8 TB per device -- or crash is imminent.
>
> therefore we will restrict drbd-8.2.7 (8.0.13)
> to refuse anything bigger than that.

Cool. Just to clarify - the 8TB limit is global or per resource? That
is, can I safely have two 6TB devices (assuming I have sufficient
memory to handle the bitmaps)?

Cheers,

Patrick


--
http://www.labyrinthdata.net.au - WA Backup, Web and VPS Hosting

Re: Update: Definitive answer on >4TB DRBD volumes
On Thu, Jul 24, 2008 at 10:12:47AM +0100, Lee Christie wrote:
> > > DRBD 8.0.x, 8.2.6:
> > > 32bit kernel:
> > > 4 TB hard limit per device
> > > you can have several of them, but you probably run into some
> > > other limit pretty fast.
> > > 64bit kernel:
> > > 4 TB "supported".
> > > (unsupported theoretically) 16 TB hard limit per device,
> >
> > I just tried that myself out of curiosity.
> > it works up to "only" 8 TB per device on 64bit kernel.
> > anything bigger will oops sooner or later.
> >
> > again: maximum 8 TB per device -- or crash is imminent.
> >
> > therefore we will restrict drbd-8.2.7 (8.0.13)
> > to refuse anything bigger than that.
>
> I might be naive here but surely with a 64bit kernel it should be
> possible to go over 8TB and if you cannot then there must be a bug
> somewhere ?

no, not a bug. but a
"shortcoming of the current implementation."

as I said:

> we plan to increase the limit on DRBD+ up to 128 TB (or even
> "arbitrary") per device "soonish".
> once that is done, the current DRBD+ code will be migrated,
> which would make 16 TB per device supported in 64bit "plain"
> DRBD 8.2.y, where y is several steps larger than 6.

this remains true, however.

--
: Lars Ellenberg http://www.linbit.com :
: DRBD/HA support and consulting sales at linbit.com :
: LINBIT Information Technologies GmbH Tel +43-1-8178292-0 :
: Vivenotgasse 48, A-1120 Vienna/Europe Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed

Re: Update: Definitive answer on >4TB DRBD volumes
On Thu, Jul 24, 2008 at 05:43:28PM +0800, Patrick Coleman wrote:
> On Thu, Jul 24, 2008 at 4:34 PM, Lars Ellenberg
> <lars.ellenberg@linbit.com> wrote:
> > On Tue, Jul 22, 2008 at 12:01:48PM +0200, Lars Ellenberg wrote:
> >> On Tue, Jul 22, 2008 at 05:03:23PM +0800, Patrick Coleman wrote:
> >> > Hi,
> >> >
> >> > I've googled around for a while, and I can't find anything definitive
> >> > for or against - what is the maximum volume size supported by DRBD?
> >> >
> >> > I'm running an 11TB DRBD 8.2.6 volume between two nodes, connected by
> >> > 10GE. I've hit some odd issues (OOPSes, continually resyncing data)
> >> > and I'd like to eliminate the volume size as a cause of the issue.
> >>
> >>
> >> DRBD 8.0.x, 8.2.6:
> >> 32bit kernel:
> >> 4 TB hard limit per device
> >> you can have several of them, but you probably run into some
> >> other limit pretty fast.
> >> 64bit kernel:
> >> 4 TB "supported".
> >> (unsupported theoretically) 16 TB hard limit per device,
> >
> > I just tried that myself out of curiosity.
> > it works up to "only" 8 TB per device on 64bit kernel.
> > anything bigger will oops sooner or later.
> >
> > again: maximum 8 TB per device -- or crash is imminent.
> >
> > therefore we will restrict drbd-8.2.7 (8.0.13)
> > to refuse anything bigger than that.
>
> Cool. Just to clarify - the 8TB limit is global or per resource?

yes.
:)

I think I was pretty clear in that post.
where I wrote "per device" I meant "per device".
where I wrote
> you can have several of them, but you probably run into some
> other limit pretty fast.
I meant exactly that.

> That is, can I safely have two 6TB devices (assuming I have sufficient
> memory to handle the bitmaps)?

you can try. it might work.

--
: Lars Ellenberg http://www.linbit.com :
: DRBD/HA support and consulting sales at linbit.com :
: LINBIT Information Technologies GmbH Tel +43-1-8178292-0 :
: Vivenotgasse 48, A-1120 Vienna/Europe Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed

RE: Update: Definitive answer on >4TB DRBD volumes
On Thu, 24 Jul 2008, Lee Christie wrote:

> I might be naive here but surely with a 64bit kernel it should be
> possible to go over 8TB and if you cannot then there must be a bug
> somewhere ?

I tried a 10.5 TB device and it crashed on sync over and over. I ended up
going with two 4 TB devices and one 2.5 TB device. It hurt to cut things
up like that, and it took a long time to move everything around, but
open-source DRBD will NOT do 10.5 TB, that is for sure.

-Nathan

Re: Update: Definitive answer on >4TB DRBD volumes
On Thu, 24 Jul 2008, Lars Ellenberg wrote:

>> That is, can I safely have two 6TB devices (assuming I have sufficient
>> memory to handle the bitmaps)?
>
> you can try. it might work.

Well, crossing my fingers with two 4 TB and one 2.5 TB, so far so good. I
can't wait till DRBD+ has the max-bio-bvecs workaround....

><>
Nathan Stratton CTO, BlinkMind, Inc.
nathan at robotics.net nathan at blinkmind.com
http://www.robotics.net http://www.blinkmind.com

Re: Definitive answer on >4TB DRBD volumes
>DRBD+
> both 32bit and 64bit
> (we recommend 64bit for large storage anyways).
> 16 TB per device (currently)

And now in 8.3.0rc1...

>* bitmap in unmapped pages = support for devices > 4TByte (was DRBD+)

So does this mean 8.3.0rc1 supports 16 TB? What if we want more than
that? Is it a hard-coded limit simply because it hasn't been tested
beyond that or is it known to cause mayhem?
--

Maurice Volaski, mvolaski@aecom.yu.edu
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University

Re: Definitive answer on >4TB DRBD volumes
On Tue, Dec 02, 2008 at 11:29:38AM -0500, Maurice Volaski wrote:
>> DRBD+
>> both 32bit and 64bit
>> (we recommend 64bit for large storage anyways).
>> 16 TB per device (currently)
>
> And now in 8.3.0rc1...
>
>> * bitmap in unmapped pages = support for devices > 4TByte (was DRBD+)
>
> So does this mean 8.3.0rc1 supports 16 TB?

yes, on both 32bit and 64bit.

> What if we want more than that?

then go fix it.

or wait until someone (maybe LINBIT?) fixes that first.

> Is it a hard-coded limit simply because it hasn't been tested
> beyond that or is it known to cause mayhem?

I am certain that the current code base cannot support more than 32 TB
without major changes.

I am currently unsure whether it might support up to 32TB (though
probably on 64bit kernel only) with only minor code changes.
we are looking into that.


--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed