Mailing List Archive

btrfs Was: Soliciting new RAID ideas
thegeezer posted on Wed, 28 May 2014 00:38:03 +0100 as excerpted:

> depending on your budget a pair of large sata drives + mdadm will be
> ideal, if you had lvm already you could simply 'move' then 'enlarge'
> your existing stuff (tm) : i'd like to know how btrfs would do the same
> for anyone who can let me know.
> you have raid6 because you probably know that raid5 is just waiting for
> trouble, so i'd probably start looking at btrfs for your financial data
> to be checksummed.

Given that I'm a regular on the btrfs list as well as running it myself,
I'm likely to know more about it than most. Here's a whirlwind rundown
with a strong emphasis on practical points a lot of people miss (IOW, I'm
skipping a lot of the commonly covered and obvious stuff). Point 6 below
directly answers your move/enlarge question. Meanwhile, points 1, 7 and
8 are critically important, as we see a lot of people on the btrfs list
getting them wrong.

1) Since there's raid5/6 discussion on the thread... Don't use btrfs
raid56 modes at this time, except purely for playing around with trashable
or fully backed up data. The implementation as introduced isn't code-
complete: the operational runtime side works, but recovery from dropped
devices does not. Thus, in terms of data safety you're effectively
running a slow raid0 with lots of extra overhead -- data that can be
considered trash if a device drops. The sole benefit is that when the
raid56 recovery code gets merged (and has been tested for a kernel cycle
or two to work out the initial bugs), you'll get what amounts to a
"free" upgrade to the raid5 or raid6 mode you had originally configured,
since it was doing the operational parity calculation and writes all
along; the code to use that parity for actual recovery simply wasn't
there yet.

2) Btrfs raid0, raid1 and raid10 modes, along with single mode (on one
or more devices) and dup mode (on a single device, where metadata is by
default duplicated -- two copies, except on ssd, where the default is
only a single copy since some ssds dedup anyway) are reasonably mature
and stable, to the same point as btrfs in general, anyway, which is to
say it's "mostly stable, keep your backups fresh but you're not /too/
likely to have to use them." There are still enough bugs being fixed in
each kernel release, however, that running latest stable series is
/strongly/ recommended, as your data is at risk to known-fixed bugs (even
if at this point they only tend to hit the corner-cases) if you're not
doing so.

3) It's worth noting that btrfs treats data and metadata separately --
when you do a mkfs.btrfs, you can configure redundancy modes separately
for each. The single-device default is (as above) dup metadata (except
on ssd) and single data; the multi-device default is raid1 metadata and
single data.
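A quick sketch of overriding those defaults at mkfs time with the -m
(metadata) and -d (data) options -- device names here are of course
placeholders for whatever you actually have:

```shell
# Single device: duplicated metadata, single-copy data
# (the non-ssd single-device default, shown explicitly)
mkfs.btrfs -m dup -d single /dev/sdb1

# Two devices: raid1 for both metadata and data
mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1
```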

4) FWIW, most of my btrfs formatted partitions are dual-device raid1 mode
for both data and metadata, on ssd. (The second backup is reiserfs on
spinning rust, just in case some Armageddon bug eats all the btrfs --
working copy and first backup -- at the same time. Btrfs is stable
enough now that that's extremely unlikely, but I didn't consider it so
when I set things up nearly a year ago.)

The reason for my raid1 mode choice isn't that of ordinary raid1, it's
specifically due to btrfs' checksumming and data integrity features -- if
one copy fails its checksum, btrfs will, IF IT HAS ANOTHER COPY TO TRY,
check the second copy and if it's good, will use it and rewrite the bad
copy. Btrfs scrub allows checking the entire filesystem for checksum
errors and restoring any errors it finds from good copies where possible.
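The scrub itself is a two-command affair; the mount point below is just
an example path:

```shell
# Start a scrub of the mounted filesystem (runs in the background,
# verifying checksums and repairing from a good copy where one exists)
btrfs scrub start /mnt/data

# Check progress and error counts while (or after) it runs
btrfs scrub status /mnt/data
```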

Obviously, the default single data mode (or raid0) won't have a second
copy to check and rewrite from, while raid1 (and raid10) modes will, as
will dup-mode metadata on a single device. Note that dup mode isn't
allowed for data, only metadata, with one exception: the mixed-
blockgroup mode that mixes data and metadata together. Mixed mode is the
default on filesystems under 1 GiB but isn't recommended on larger
filesystems for performance reasons.

So I wanted a second copy of both data and metadata to take advantage of
btrfs' data integrity and scrub features, and with btrfs raid1 mode, I
get both that and the traditional raid1 device-loss protection as well.
=:^)

5) It's worth noting that as of now, btrfs raid1 mode is only two-way-
mirrored, no matter how many devices are configured into the filesystem.
N-way-mirrored is the next feature on the roadmap after the raid56 work
is completed, but given how nearly every btrfs feature has taken much
longer to complete than originally planned, I'm not expecting it until
sometime next year, now.

Which is unfortunate, as my risk vs. cost sweet spot would be 3-way-
mirroring, covering the case where *TWO* copies of a block fail
checksum. Oh well, it's coming, even if at this point it seems like the
proverbial carrot dangling off a stick held in front of the donkey.

6) Btrfs handles moving then enlarging (parallel to LVM) using btrfs
add/delete, to add or delete a device to/from a filesystem (moving the
content from a to-be-deleted device in the process), plus btrfs balance,
to restripe/convert/rebalance between devices as well as to free
allocated but empty data and metadata chunks back to unallocated.
There's also btrfs resize, but that's more like the conventional
filesystem resize command, resizing the part of the filesystem on an
individual device (partitioned/virtual or whole physical device).

So to add a device, you'd btrfs device add, then btrfs balance, with an
optional conversion to a different redundancy mode if desired, to
rebalance the existing data and metadata onto that device. (Without the
rebalance it would be used for new chunks, but existing data and metadata
chunks would stay where they were. I'll omit the "chunk definition"
discussion in the interest of brevity.)

To delete a device, you'd btrfs device delete, which would move all the
data on that device onto other existing devices in the filesystem, after
which it could be removed.
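Putting those together, a sketch of the whole add/balance/delete/resize
toolbox (device names and mount point are placeholders):

```shell
# Add a new device, then rebalance existing chunks across all devices
btrfs device add /dev/sdd1 /mnt/data
btrfs balance start /mnt/data

# Optionally convert to a different redundancy mode during the balance
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data

# Remove a device; btrfs migrates its chunks to the remaining
# devices before letting go of it
btrfs device delete /dev/sdb1 /mnt/data

# Conventional-style resize of the filesystem on one device
# (here: shrink by 10 GiB)
btrfs filesystem resize -10g /mnt/data
```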

7) Given the thread, I'd be remiss to omit this one. VM images and other
large "internal-rewrite-pattern" files (large database files, etc) need
special treatment on btrfs, at least currently. As such, btrfs may not
be the greatest solution for Mark (tho it would work fine with special
procedures), given the several VMs he runs. This one unfortunately hits
a lot of people. =:^( But here's a heads-up, so it doesn't have to hit
anyone reading this! =:^)

As a property of the technology, any copy-on-write-based filesystem is
going to find files where various bits of existing data within the file
are repeatedly rewritten (as opposed to new data simply being appended,
think a log file or live-stored audio/video stream) extremely challenging
to deal with. The problem is that unlike ordinary filesystems, which
rewrite data in place so that a file continues to occupy the same
extents as it did before, copy-on-write filesystems write a changed
block to a different location. COW does mean atomic updates and thus
more reliability, since either the new data or the old data should
exist, never an unpredictable mixture of the two. But as a result of the
above rewrite pattern, this type of internally-rewritten file gets
**HEAVILY** fragmented over time.

We've had filefrag reports of files several GiB in size with over 100K
extents! Obviously, that isn't going to be the most efficient file in
the world to access!
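You can check any suspect file yourself; the path below is a made-up
example:

```shell
# Report the number of extents a file occupies; a multi-GiB VM image
# with tens of thousands of extents is badly fragmented
filefrag /var/lib/libvirt/images/guest.img
```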

For smaller files, up to a couple hundred MiB or perhaps a bit more,
btrfs has the autodefrag mount option, which can help a lot. With this
option enabled, whenever a block of a file is changed and rewritten, thus
written elsewhere, btrfs queues up a rewrite of the entire file to happen
in the background. The rewrite will be done sequentially, thus defragging
the file. This works quite well for firefox's sqlite database files, for
instance, as they're internal-rewrite-pattern, but they're small enough
that autodefrag handles them reasonably nicely.
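Enabling it is just a mount option (device, mount point and UUID below
are placeholders):

```shell
# One-off mount with autodefrag enabled
mount -o autodefrag /dev/sdb1 /mnt/data

# Or make it permanent via an /etc/fstab line along these lines:
# UUID=<your-fs-uuid>  /mnt/data  btrfs  autodefrag,noatime  0 0
```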

But this solution doesn't scale so well as the file size increases toward
and past a GiB, particularly for files with a continuous stream of
internal rewrites such as can happen with an operating VM writing to its
virtual storage device. At some point, the stream of writes comes in
faster than the file can be rewritten, and things start to back up!

To deal with this case, there's the NOCOW file attribute, set with chattr
+C. However, to be effective, this attribute must be set when the file
is empty, before it has existing content. The easiest way to do that is
to set the attribute on the directory which will contain the files.
While it doesn't affect the directory itself, newly created files
within that directory inherit the NOCOW attribute before they have data,
allowing it to work without much further thought. For
existing files, create a new directory, set its NOCOW attribute, and COPY
(don't move, and don't use cp --reflink) the existing files into it.

Once you have your large internal-rewrite-pattern files set NOCOW, btrfs
will rewrite them in-place as an ordinary filesystem would, thus avoiding
the problem.
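The whole NOCOW workflow, sketched out (paths are examples only):

```shell
# Create a directory for the VM images and mark it NOCOW
mkdir /mnt/data/vm-images
chattr +C /mnt/data/vm-images

# Files created inside inherit the attribute while still empty...
touch /mnt/data/vm-images/new-guest.img
lsattr /mnt/data/vm-images/new-guest.img   # the 'C' flag should appear

# ...existing files must be COPIED in (no mv, no cp --reflink),
# so the new copy is written under the NOCOW attribute
cp /mnt/old-location/guest.img /mnt/data/vm-images/
```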

Except for one thing. I haven't mentioned btrfs snapshots yet as that
feature, but for this caveat, is covered well enough elsewhere. But
here's the problem. A snapshot locks the existing file data in place.
As a result, the first write to a block within a file after a snapshot
MUST be COW, even if the file is otherwise set NOCOW.

If only the occasional one-off snapshot is done it's not /too/ bad, as
all the internal file writes between snapshots are NOCOW; it's only the
first write to each file block after a snapshot that must be COW. But
many people and distros are script-automating their snapshots in order
to have rollback capabilities, and on btrfs, snapshots are (ordinarily)
light enough that people are sometimes configuring a snapshot a minute!
If only a minute's changes can be written to the existing location
before there's a snapshot and changes must be written to a new location,
then another snapshot and yet another location... basically, the NOCOW
we set on that file isn't doing us any good!

8) I'm making this a separate point as it's important and a lot of
people get it wrong: NOCOW and snapshots don't mix!

There is, however, a (partial) workaround. Snapshots stop at btrfs
subvolume boundaries. So if you put your large VM images and similar
large internal-rewrite-pattern files (databases, etc) in subvolumes --
making that directory I suggested above a full subvolume, not just a
NOCOW directory -- snapshots of the parent subvolume will not include
the VM images subvolume, thus leaving the VM images alone. This solves
the snapshot-broken-NOCOW and thus the fragmentation issue, but it DOES
mean that those VM images must be backed up using more conventional
methods, since snapshotting won't work for them.
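In sketch form (paths again hypothetical):

```shell
# Make the VM image location a subvolume rather than a plain directory,
# and mark it NOCOW as before
btrfs subvolume create /mnt/data/vm-images
chattr +C /mnt/data/vm-images

# Snapshots of the parent stop at the subvolume boundary,
# so vm-images is simply not included in this snapshot
btrfs subvolume snapshot /mnt/data /mnt/data/snapshots/2014-05-29
```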

9) Some other still partially broken bits of btrfs include:

9a) Quotas: Just don't use them on btrfs at this point. Performance
doesn't scale (although a rewrite is in progress), and they are buggy.
Additionally, quotas interact very badly with snapshots: users with
enough quota-groups and enough snapshots have seen the filesystem
require 64 GiB of RAM or more, and come to a near standstill at that.
If you need quotas, use a more traditional filesystem with stable quota
support. Hopefully by this time next year...

9b) Snapshot-aware-defrag: This was enabled at one point but simply
didn't scale, when it turned out people were doing things like per-minute
snapshots and thus had thousands and thousands of snapshots. So this has
been disabled for the time being. Btrfs defrag will defrag the working
copy it is run on, but currently doesn't account for snapshots, so data
that was fragmented at snapshot time gets duplicated as it is
defragmented. However, they plan to re-enable the feature once they have
rewritten various bits to scale far better than they do at present.

9c) Send and receive. Btrfs send and receive are very nice features
that can make backups far faster, with far less data transferred.
They're great when they work. Unfortunately, there are still various
corner-cases where they don't. (As an example, a recent fix was for the
case where subdir B was nested inside subdir A for the first, full send/
receive, but later, the relationship was reversed, with subdir B made the
parent of subdir A. Until the recent fix, send/receive couldn't handle
that sort of corner-case.) You can go ahead and use it if it's working
for you, as if it finishes without error, the copy should be 100%
reliable. However, have an alternate plan for backups if you suddenly
hit one of those corner-cases and send/receive quits working.
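For anyone who hasn't used it, the basic full-then-incremental pattern
looks like this (paths are examples; the backup target is assumed to be
another btrfs filesystem):

```shell
# Initial full transfer: send requires a read-only snapshot
btrfs subvolume snapshot -r /mnt/data /mnt/data/snap-full
btrfs send /mnt/data/snap-full | btrfs receive /mnt/backup

# Later, an incremental transfer relative to the earlier snapshot,
# sending only the differences
btrfs subvolume snapshot -r /mnt/data /mnt/data/snap-new
btrfs send -p /mnt/data/snap-full /mnt/data/snap-new | btrfs receive /mnt/backup
```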

Of course it's worth mentioning that b and c deal with features that most
filesystems don't have at all, so with the exception of quotas, it's not
like something's broken on btrfs that works on other filesystems.
Instead, these features are (nearly) unique to btrfs, so even if they
come with certain limitations, that's still better than not having the
option of using the feature at all, because it simply doesn't exist on
the other filesystem!

10) Btrfs in general is headed toward stable now, and a lot of people,
including me, have used it for a significant amount of time without
problems. But it's still new enough that you're strongly urged to make
and test your backups; by not doing so, you're stating by your actions,
if not your words, that you simply don't care if some as-yet
undiscovered and unfixed bug in the filesystem eats your data.

For similar reasons (already mentioned above), run, at the oldest, the
latest stable kernel from the latest stable kernel series, and consider
running rc kernels from at least rc2 or so (by which time any real data-
eating bugs, in btrfs or elsewhere, should be found and fixed, or at
least published). With anything older, you are literally risking your
data to known and already-fixed bugs.

As is said, take reasonable care and you're much less likely to be a
statistic!

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Re: btrfs Was: Soliciting new RAID ideas
top man, thanks for detail and the tips !
Re: btrfs Was: Soliciting new RAID ideas
(Dammit, it seems that I've developed a habit of writing somewhat long-winded
emails :-/ . Sorry!)

Am Wed, 28 May 2014 08:29:07 +0100
schrieb thegeezer <thegeezer@thegeezer.net>:

> top man, thanks for detail and the tips !

I second this :) . In fact, I think I'll link to it in my btrfs thread on
gentoo-user.

I do have a question for Duncan (or anybody else who knows, but I know that
Duncan is fairly active on the BTRFS ML), though:

How does btrfs handle checksum errors on a single drive (or when self-healing
fails)?

That is, does it return a hard error, rendering the file unreadable, or is it
possible to read from a corrupted file? Sadly, I don't remember finding the
answer to this from my own research into BTRFS before I made the switch (my
thread is here: [0]), and searching online now hasn't revealed anything; all I
can find are mentions of its self-healing capability.

I *think* BTRFS treats this as a hard error? But I'm just not sure.

(I feel kind of stupid, because I'm sure I saw the answer in some of the emails
on linux-btrfs that I read through via GMANE.)

I ask because I'm considering converting the 2TB data partition on my 3TB
external hard drive from NTFS to BTRFS [1] . It primarily contains media
files, where random corruption is decidedly *not* the end of the world.
However, it also contains ISOs and other large files where corruption matters
more, but which are not important enough to land on my BTRFS RAID (on the other
hand, my music collection is ;-) ).

In any case, reconstructing a corrupted file can be fairly difficult: It might
involve re-ripping a (game) disk, or it might be something I got from a friend,
delaying file recovery until I can get it again, or the file might be a youtube
download (or a conference video, or something from archive.org, or ...) and I
have to track it down online again. However, I might want to *know* that a file
is corrupt, so that I *can* reconstruct it if I want to.

The obvious answer, retrieving from backup, is difficult to implement, since I
would need an additional external drive for that. Also, the files are not
*that* important, e.g., in the case of a youtube download, where most of the
time I delete the file afterwards anyway.

(It seems to me that the optimal solution would be to use some sort of NAS, with
a multi-device ZFS or BTRFS file system, in place of an external hard drive; I
expect to go that route in the future, when I can afford it.)

[0] http://thread.gmane.org/gmane.linux.gentoo.user/274236

[1] I used NTFS under the assumption that I might want to keep the drive Windows
compatible (for family), but have decided that I don't really care, since the drive is
pretty much permanently attached to my desktop (it also has an EXT4 partition
for automatic local backups, so removing it would be less than optimal ;-) ).

--
Marc Joliet
--
"People who think they know everything really annoy those of us who know we
don't" - Bjarne Stroustrup
Re: btrfs Was: Soliciting new RAID ideas
Marc Joliet posted on Wed, 28 May 2014 22:32:47 +0200 as excerpted:

> (Dammit, it seems that I've developed a habit of writing somewhat
> long-winded emails :-/ . Sorry!)

You? <looking this way and that> What does that make mine? =:^)

> Am Wed, 28 May 2014 08:29:07 +0100 schrieb thegeezer
> <thegeezer@thegeezer.net>:
>
>> top man, thanks for detail and the tips !
>
> I second this :) . In fact, I think I'll link to it in my btrfs thread
> on gentoo-user.

Thanks. I was on the user list for a short time back in 2004 when I
first started with gentoo, but back then it was mostly x86, while my
interest was amd64, and the amd64 list was active enough back then that I
didn't really feel the need for the mostly x86 user list, so I
unsubscribed, and never got around to subscribing again even after the
amd64 list traffic mostly dried up. But if it'll help people there... go right
ahead and link or repost. (Also, anyone who wants to put it up on the
gentoo wiki, go ahead. I work best on newsgroups and mailing lists, and
find wikis, like most of the web, in practice read-only for my usage.
I'll read up on them, but somehow never get around to actually writing
anything on them, even if it would in theory save me a bunch of time
since I could write stuff once and link it instead of repeating on the
lists.)

> I do have a question for Duncan (or anybody else who knows, but I know
> that Duncan is fairly active on the BTRFS ML), though:
>
> How does btrfs handle checksum errors on a single drive (or when
> self-healing fails)?
>
> That is, does it return a hard error, rendering the file unreadable, or
> is it possible to read from a corrupted file?

As you suspect, it's a hard error.

There has been developer discussion on the btrfs list of some sort of
mount option or the like that would allow retrieval even with bad
checksums, presumably with dmesg then being the only indication something
was wrong, in case it's a simple single bit-flip or the like in something
like text where it should be obvious, or media, where it'll likely not
even be noticed, but I've not seen an actual patch for it. Presumably
it'll eventually happen, but to now there's a lot more potential features
and bug fixes to code up than developers and time in their days to code
them, so no idea when. I guess when the right person gets that itch to
scratch.

Which is yet another reason I have chosen the raid1 mode for both data
and metadata and am eagerly awaiting the N-way-mirroring code in order
to let me do 3-way as well, because I'd really /hate/ to think it's just
a bitflip, yet not have any way at all to get to it.

Which of course makes it that much more critical to keep your backups as
current as you're willing to risk losing, *AND* test that they're
actually recoverable, as well.

(FWIW here, while I do have backups, they aren't always current. Still,
for my purposes the *REAL* backups are the experiences and knowledge in
my head. As long as I have that, I can recreate the real valuable stuff,
and to the extent that I can't, I don't consider it /that/ valuable. And
if I lose those REAL backups... well I won't have enough left then to
realize what I've lost, will I? That's ultimately the attitude I take,
appreciating the real important stuff for what it is, and the rest, well,
if it comes to it, I lose what I lose, but yes, I do still keep backups,
actually multiple levels deep, tho as I said they aren't always current.)

However, one trick that I alluded to, that actually turned out to be an
accidental side effect feature of fixing an entirely different problem,
is setting mixed-blockgroup mode at mkfs.btrfs and selecting dup mode for
both data and metadata at that time as well. (In mixed-mode, data and
metadata must be set the same, and the default except on ssd is then dup,
but the point here is to ensure dup, not single.) As I said, the reason
mixed-mode is there is to deal with really small filesystems and it's the
default for under a gig. And there's definitely a performance cost as
well as the double-space cost when using dup. But it *DOES* allow one to
run dup mode for both data and metadata, and some users are willing to
pay its performance costs for the additional data integrity it offers.
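The mixed-bg dup setup is a one-liner at mkfs time (the device is, as
always, a placeholder):

```shell
# Single-device filesystem with mixed block groups and dup for both
# data and metadata (in mixed mode the two profiles must match),
# trading space and some performance for a second copy of everything
mkfs.btrfs --mixed -m dup -d dup /dev/sda2
```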

Certainly, if you can possibly do two devices, the paired device raid1
mode is preferable, but for instance my netbook has only a single SATA
port, so either mixed-bg and dup mode, or partitioning up and using two
partitions to fake two devices for raid1 mode, are what I'm likely to do.

(I actually don't know which I'll do as I haven't messed with the netbook
in awhile, but I have an SSD already laying around to throw in it and I
keep thinking about it, and with its single SATA port, it's a perfect
example of sometimes not being /able/ to run two devices. OTOH, I might
just throw some money at it and buy a full 64-bit replacement machine,
thus allowing me to use the 64-bit packages I build for my main machine
on the (new) little one too, and thus to do away with the 32-bit chroot
on my main machine that I use as a built image for the netbook.)

(I snipped it there to reply to this bit first as it was a
straightforward answer. I'll go back and read the rest now, to see if
there's anything else I want to reply to.)

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
Re: Re: btrfs Was: Soliciting new RAID ideas
Am Thu, 29 May 2014 06:41:14 +0000 (UTC)
schrieb Duncan <1i5t5.duncan@cox.net>:

> Marc Joliet posted on Wed, 28 May 2014 22:32:47 +0200 as excerpted:
>
> > (Dammit, it seems that I've developed a habit of writing somewhat
> > long-winded emails :-/ . Sorry!)
>
> You? <looking this way and that> What does that make mine? =:^)

Novels, duh ;-) .

> > Am Wed, 28 May 2014 08:29:07 +0100 schrieb thegeezer
> > <thegeezer@thegeezer.net>:
> >
> >> top man, thanks for detail and the tips !
> >
> > I second this :) . In fact, I think I'll link to it in my btrfs thread
> > on gentoo-user.
>
> Thanks. I was on the user list for a short time back in 2004 when I
> first started with gentoo, but back then it was mostly x86, while my
> interest was amd64, and the amd64 list was active enough back then that I
> didn't really feel the need for the mostly x86 user list, so I
> unsubscribed and never got around to subscribing again, when the amd64
> list traffic mostly dried up. But if it'll help people there... go right
> ahead and link or repost.

I ended up simply forwarding it, as opposed to bumping my inactive thread.

> (Also, anyone who wants to put it up on the
> gentoo wiki, go ahead. I work best on newsgroups and mailing lists, and
> find wikis, like most of the web, in practice read-only for my usage.
> I'll read up on them, but somehow never get around to actually writing
> anything on them, even if it would in theory save me a bunch of time
> since I could write stuff once and link it instead of repeating on the
> lists.)

Heh, the only Wiki I ever edited was at my old student job. But yeah, I don't
feel comfortable enough in my BTRFS knowledge to write a Wiki entry myself.

> > I do have a question for Duncan (or anybody else who knows, but I know
> > that Duncan is fairly active on the BTRFS ML), though:
> >
> > How does btrfs handle checksum errors on a single drive (or when
> > self-healing fails)?
> >
> > That is, does it return a hard error, rendering the file unreadable, or
> > is it possible to read from a corrupted file?
>
> As you suspect, it's a hard error.

Damn >:-( .

> There has been developer discussion on the btrfs list of some sort of
> mount option or the like that would allow retrieval even with bad
> checksums, presumably with dmesg then being the only indication something
> was wrong, in case it's a simple single bit-flip or the like in something
> like text where it should be obvious, or media, where it'll likely not
> even be noticed, but I've not seen an actual patch for it. Presumably
> it'll eventually happen, but to now there's a lot more potential features
> and bug fixes to code up than developers and time in their days to code
> them, so no idea when. I guess when the right person gets that itch to
> scratch.

That's really too bad; I guess this isn't a situation that arises often
for BTRFS users.

> Which is yet another reason I have chosen the raid1 mode for both data
> and metadata and am eagerly awaiting the N-way-mirroring code in order
> to let me do 3-way as well, because I'd really /hate/ to think it's just
> a bitflip, yet not have any way at all to get to it.
>
> Which of course makes it that much more critical to keep your backups as
> current as you're willing to risk losing, *AND* test that they're
> actually recoverable, as well.

Of course, but like I said, I can't back up this one data partition. I do have
backups for everything on my desktop computer, though, which are on the other
partition of this external drive.

> (FWIW here, while I do have backups, they aren't always current. Still,
> for my purposes the *REAL* backups are the experiences and knowledge in
> my head. As long as I have that, I can recreate the real valuable stuff,
> and to the extent that I can't, I don't consider it /that/ valuable. And
> if I lose those REAL backups... well I won't have enough left then to
> realize what I've lost, will I? That's ultimately the attitude I take,
> appreciating the real important stuff for what it is, and the rest, well,
> if it comes to it, I lose what I lose, but yes, I do still keep backups,
> actually multiple levels deep, tho as I said they aren't always current.)

Hehe, good philosophy :-) .

> However, one trick that I alluded to, that actually turned out to be an
> accidental side effect feature of fixing an entirely different problem,
> is setting mixed-blockgroup mode at mkfs.btrfs and selecting dup mode for
> both data and metadata at that time as well. (In mixed-mode, data and
> metadata must be set the same, and the default except on ssd is then dup,
> but the point here is to ensure dup, not single.) As I said, the reason
> mixed-mode is there is to deal with really small filesystems and it's the
> default for under a gig. And there's definitely a performance cost as
> well as the double-space cost when using dup. But it *DOES* allow one to
> run dup mode for both data and metadata, and some users are willing to
> pay its performance costs for the additional data integrity it offers.

That is an interesting idea. I might consider that. Or I might just create a
third partition and make a RAID 1 out of those, once I know how much space my
backups will ultimately take.

But really, why is there no dup for data?

(I only set up my backups about a month ago just before my migration to BTRFS,
using rsnapshot, and the backups aren't fully there yet; the one monthly backup
is still missing, and I wanted to wait a bit after that to see how much space
the backups ultimately require. Plus, I might back up (parts of) my laptop to
there, too, although there isn't that much stuff on it that isn't already
synchronised in some other fashion, so it's not decided yet.)

> Certainly, if you can possibly do two devices, the paired device raid1
> mode is preferable, but for instance my netbook has only a single SATA
> port, so either mixed-bg and dup mode, or partitioning up and using two
> partitions to fake two devices for raid1 mode, are what I'm likely to do.
[...]

Ah, you mentioned the RAID 1 idea already :-) .

--
Marc Joliet
--
"People who think they know everything really annoy those of us who know we
don't" - Bjarne Stroustrup
Re: Re: btrfs Was: Soliciting new RAID ideas
On Thu, May 29, 2014 at 1:57 PM, Marc Joliet <marcec@gmx.de> wrote:
> Am Thu, 29 May 2014 06:41:14 +0000 (UTC)
> schrieb Duncan <1i5t5.duncan@cox.net>:
>> Thanks. I was on the user list for a short time back in 2004 when I
>> first started with gentoo, but back then it was mostly x86, while my
>> interest was amd64, and the amd64 list was active enough back then that I
>> didn't really feel the need for the mostly x86 user list, so I
>> unsubscribed and never got around to subscribing again, when the amd64
>> list traffic mostly dried up. But if it'll help people there... go right
>> ahead and link or repost.
>
> I ended up simply forwarding it, as opposed to bumping my inactive thread.

When was the last time we actually had an amd64-specific discussion on
this list? Part of me wonders if the list ought to be retired. It
made a lot more sense back when amd64 was fairly experimental and
prone to fairly unique issues. I deleted my 32-bit chroot some time
ago.

Rich
Re: Re: btrfs Was: Soliciting new RAID ideas
On Thu, May 29, 2014 at 10:59 AM, Rich Freeman <rich0@gentoo.org> wrote:
> On Thu, May 29, 2014 at 1:57 PM, Marc Joliet <marcec@gmx.de> wrote:
>> Am Thu, 29 May 2014 06:41:14 +0000 (UTC)
>> schrieb Duncan <1i5t5.duncan@cox.net>:
>>> Thanks. I was on the user list for a short time back in 2004 when I
>>> first started with gentoo, but back then it was mostly x86, while my
>>> interest was amd64, and the amd64 list was active enough back then that I
>>> didn't really feel the need for the mostly x86 user list, so I
>>> unsubscribed and never got around to subscribing again, when the amd64
>>> list traffic mostly dried up. But if it'll help people there... go right
>>> ahead and link or repost.
>>
>> I ended up simply forwarding it, as opposed to bumping my inactive thread.
>
> When was the last time we actually had an amd64-specific discussion on
> this list? Part of me wonders if the list ought to be retired. It
> made a lot more sense back when amd64 was fairly experimental and
> prone to fairly unique issues. I deleted my 32-bit chroot some time
> ago.
>
> Rich
>

I completely understand your point, but in my case, after about a
decade on gentoo-user, I quit posting there completely due to the
attitudes of some folks, the flame posts, put-downs, etc. I have no
idea how it is now, but I have no real desire to go back.

The two things I really value about this list are the quality of posts
as well as the very civil way folks treat each other.

Just my 2 cents

Cheers,
Mark
Re: Re: btrfs Was: Soliciting new RAID ideas
On Thu, 29 May 2014 13:59:25 -0400
Rich Freeman <rich0@gentoo.org> wrote:

>
> When was the last time we actually had an amd64-specific discussion on
> this list? Part of me wonders if the list ought to be retired. It
> made a lot more sense back when amd64 was fairly experimental and
> prone to fairly unique issues.
>

There may not be any amd64 issues, but there certainly are a lot of
gripes.

For those who operate a pure 64-bit system (no multi-lib), there is
a fair amount of highly useful software that has not yet been updated
to be 64-bit clean. For example, Adobe PDF Reader, Foxit PDF Reader,
and the Intel ICC compiler are still 32-bit. I wish these folks would
get with the modern trends.

Frank Peters