Mailing List Archive

Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On Sun, Aug 01, 2021 at 11:41:48AM +0800, William Kenworthy wrote:
>
> On 1/8/21 8:50 am, Frank Steinmetzger wrote:
> > On Sat, Jul 31, 2021 at 12:58:29PM +0800, William Kenworthy wrote:
> >
> > ...
> > And thanks to the cache, a new snapshot is usually done very fast. But for
> > a yet unknown reason, sometimes Borg re-hashes all files, even though I
> > didn’t touch the cache. In that case it takes 2½ hours to go through my
> > video directory.
> >
> Borg will do that as an extra method of ensuring it's not missed any
> changes.  I think the default is every 26 times it visits a file, so it's
> a big hit the first time it starts but semi-randomises over time. It can
> be set or disabled via an environment variable.

Ah, you’re right. I recently lowered the TTL in my wrapper script. This
might have been the trigger. I did that because I was dismayed by the size
of the cache (1 GiB), which made my root partition rather cramped. But then
I had the epiphany of simply moving the cache to the big data partition.
Problem solved (but forgot to revert the TTL).
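For reference, the knobs involved are plain environment variables, so the
whole fix in my wrapper boils down to something like this (a minimal sketch
with made-up paths and values, not my actual script):

   # relocate Borg's cache away from the cramped root partition
   export BORG_CACHE_DIR=/mnt/data/borg-cache          # example path
   # how many runs a file may go unseen before its files-cache entry expires
   export BORG_FILES_CACHE_TTL=30                      # example value; I had set this too low
   borg create "$REPO::${HOSTNAME}_$(date +%F)" /home  # $REPO is set elsewhere in the wrapper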

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

What was that disease called again that makes you forget everything?
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On Sun, Aug 01, 2021 at 11:36:36AM +0800, William Kenworthy wrote:

> >> It's not raid, just a btrfs single on disk (no partition).  Contains a
> >> single borgbackup repo for an offline backup of all the online
> >> borgbackup repos I have for a 3-times-a-day backup rota of individual
> >> machines/data stores
> > So you are borg’ing a repo into a repo? I am planning on simply rsync’ing
> > the borg directory from one external HDD to another. Hopefully SMR can cope
> > with this adequately.
> >
> > And you are storing several machines into a single repo? The docs say this
> > is not supported officially. But I have one repo each for /, /home and data
> > for both my PC and laptop. Using a wrapper script, I create snapshots that
> > are named $HOSTNAME_$DATE in each repo.
>
> Basically yes: I use a once-per-hour snapshot of approximately 500 GiB of
> data on moosefs, plus borgbackups 3 times a day to individual repos on
> moosefs for each host.

So you have:
Host A --[hourly]--> Online-Borg A --+
                                     +--[3/day]--> Offline-Borg
Host B --[hourly]--> Online-Borg B --+

> 3 times a day, the latest snapshot is stuffed into a borg repo on moosefs
> and the old  snapshots are deleted.

How do you stuff just the latest snapshot of a repo into another repo?

> 3. borgbackup often to keep changes between updates small - time to
> backup will stay short.

Apparently you are dealing with some kind of production, high-availability
or high-throughput system. I only have my personal stuff, so I don’t mind it
taking a few minutes. That allows me to watch progress bars and numbers
going up. :)

> 4. borg'ing a repo into a repo works extremely well - however there are
> catches based around backup set names and the file change tests used.
> (ping me if you want the details)

But what is the point of that? Why not simply keep the last x hourly/
daily/weekly snapshots? The only thing I can think of is to have a
small(ish) short-term repo and keep the larger archival repo separate.

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“If it’s true that our species is alone in the universe, then I’d have to say
the universe aimed rather low and settled for very little.” – George Carlin
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On 2/8/21 5:38 am, Frank Steinmetzger wrote:
> On Sun, Aug 01, 2021 at 11:46:02AM +0800, William Kenworthy wrote:
>
>>>> And you are storing several machines into a single repo? The docs say this
>>>> is not supported officially. But I have one repo each for /, /home and data
>>>> for both my PC and laptop. Using a wrapper script, I create snapshots that
>>>> are named $HOSTNAME_$DATE in each repo.
>>> Basicly yes: I use a once per hour snapshot of approximately 500Gib of
>>> data on moosefs, plus borgbackups 3 times a day to individual repos on
>>> moosefs for each host.  3 times a day, the latest snapshot is stuffed
>>> into a borg repo on moosefs and the old  snapshots are deleted.  I
>>> currently manually push all the repos into a borg repo on the USB3 SMR
>>> drive once a day or so.
>>>
>>> 1. rsync (and cp etc.) are dismally slow on SMR - use where you have to,
>>> avoid otherwise.
>>>
>>> forgot to mention
>> 1a. borgbackup repos are not easily copyable - each repo has a unique
>> ID, and copying via rsync creates a duplicate, not a new repo with a new
>> cache and metadata, which depending on how you use it can cause
>> corruption/data loss.  Google it.
> Yup. Today I did my (not so) weekly backup and rsynced the repo to the new
> drive. After that I wanted to compare performance of my old 3 TB drive and
> the new SMR one by deleting a snapshot from the repo on each drive. But Borg
> objected on the second deletion, because “the cache was newer”. But that’s
> okay. I actually like this, as this will prevent me from changing two repos
> in parallel which would make them incompatible.
>
Keep in mind that both repos have the same ID - you should also rsync
the cache and security directories, as they are now out of sync
(hence the warning).  Be very careful about how you do this - you are one
step away from losing the whole repo if the cache gets out of sync.  The
docs warn against rsyncing two repos and then using them at the same
time for a good reason.
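
If you do mirror everything, it amounts to something along these lines
(untested sketch, paths are examples - borg keeps the cache under
~/.cache/borg/<repo-id>/ and the security data under
~/.config/borg/security/<repo-id>/, where <repo-id> is the directory named
after the repo's ID):

   # one direction only: the repo plus the matching cache/security state
   rsync -aH --delete /mnt/main/borg-repo/               /mnt/second/borg-repo/
   rsync -aH --delete ~/.cache/borg/<repo-id>/           /mnt/second/borg-meta/cache/
   rsync -aH --delete ~/.config/borg/security/<repo-id>/ /mnt/second/borg-meta/security/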

BillK
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On 2/8/21 5:55 am, Frank Steinmetzger wrote:
> On Sun, Aug 01, 2021 at 11:36:36AM +0800, William Kenworthy wrote:
>
>>>> Its not raid, just a btrfs single on disk (no partition).  Contains a
>>>> single borgbackup repo for an offline backup of all the online
>>>> borgbackup repo's I have for a 3 times a day backup rota of individual
>>>> machines/data stores
>>> So you are borg’ing a repo into a repo? I am planning on simply rsync’ing
>>> the borg directory from one external HDD to another. Hopefully SMR can cope
>>> with this adequatly.
>>>
>>> And you are storing several machines into a single repo? The docs say this
>>> is not supported officially. But I have one repo each for /, /home and data
>>> for both my PC and laptop. Using a wrapper script, I create snapshots that
>>> are named $HOSTNAME_$DATE in each repo.
>> Basicly yes: I use a once per hour snapshot of approximately 500Gib of
>> data on moosefs, plus borgbackups 3 times a day to individual repos on
>> moosefs for each host.
> So you have:
> Host A --[hourly]--> Online-Borg A --+
>                                      +--[3/day]--> Offline-Borg
> Host B --[hourly]--> Online-Borg B --+
>
>> 3 times a day, the latest snapshot is stuffed into a borg repo on moosefs
>> and the old  snapshots are deleted.
> How do you stuff just the latest snapshot of a repo into another repo?
>
>> 3. borgbackup often to keep changes between updates small - time to
>> backup will stay short.
> Apparently you are dealing with some kind of productive, high-availability
> or high-throughput system. I only have my personal stuff, so I don’t mind it
> taking a few minutes. That allows me to watch progress bars and numbers
> going up. :)
>
>> 4. borg'ing a repo into a repo works extreemly well - however there are
>> catches based around backup set names and the file change tests used.
>> (ping me if you want the details)
> But what is the point of that? Why not simply keep the last x hourly/
> daily/weekly snapshots? The only thing I can think of is to have a
> small(ish) short-term repo and keep the larger archival repo separate.
>
Hi Frank,

Not quite - I see I could have been clearer.  I "experiment" a lot -
which means things break, so I need to get back up and running quickly.  So
the purpose of the online repos and snapshots is just that - quick
recovery.  Longer term I want an offline copy for a real disaster (yes,
three months ago I needed to restore almost 20 TiB, including the online
backups on the moosefs system - self caused :), as well as often needing
to step back a few days/weeks for a historical copy of a file, or less
often a system restore on a host.  By the way, this discussion has been
useful: when looking closer at it, I found some failed backups due to a
missing dependency in the borgbackup ebuild - it has a bug.  I need to
write some watcher scripts to detect that.


stage 1: online, immediately available

Hosts (those with actual attached storage - a mixture of intel, arm32
and arm64 devices) are backed up to their own borg repo 3 times a day via
push.  One repo per machine on moosefs.

A separate script does an hourly backup of VM, LXC images, and various
data stores via a moosefs snapshot.
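
The snapshot step itself is just a one-liner along these lines (illustrative
only, paths made up):

   # lazy copy of the data stores - near-instant regardless of size
   mfsmakesnapshot /mnt/mfs/vmstore /mnt/mfs/snapshots/vmstore_$(date +%Y%m%d_%H00)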

stage 2: resource management for the snapshots

3 times a day, a script does a borg create on the latest snapshot at the
time, and when complete deletes all previous snapshots (-1), so at that
point I have two older snapshots available plus a couple created during the
borg run - note that large multi-GiB snapshots can quickly use up all
memory (32 GiB) on the moosefs master unless culled regularly.
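
In shell terms it's roughly this (heavily simplified sketch, not the actual
script - paths are invented, and it assumes the hourly snapshots land in
timestamp-named directories so a plain sort finds the newest):

   #!/bin/sh
   # 3 times a day: borg the newest snapshot, then cull the older ones
   SNAPDIR=/mnt/mfs/snapshots      # where the hourly mfsmakesnapshot copies land
   REPO=/mnt/mfs/backups/snaps     # borg repo for the snapshot data

   latest=$(ls -1d "$SNAPDIR"/*/ | sort | tail -n 1)
   borg create "${REPO}::snaps_$(date +%Y-%m-%d_%H%M)" "$latest" || exit 1

   # delete all previous snapshots except one (keeps the borg'ed one plus a spare)
   ls -1d "$SNAPDIR"/*/ | sort | head -n -2 | xargs -r rm -rf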

stage 3: offline because disasters happen :)

All borg repos are on moosefs with a single root directory
(/mnt/mfs/backups), so once every day or so I manually mount the offline
disk and do a borg create on the backup directory.  I was doing this
once a week, but operationally it's easier to do 5-10 minutes every day
than an hour once a week, due to the scale of changes over the longer
time period.
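
The manual step then is essentially just (sketch - mount point and repo name
invented):

   mount /mnt/offline     # the USB3 SMR disk
   borg create --stats /mnt/offline/offline-repo::backups_$(date +%F) /mnt/mfs/backups
   umount /mnt/offline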

So, it looks like:

Host A    --[3/day]--> Online-Borg A       --+
Host ...  --[3/day]--> Online-Borg ...     --+--[common directory]--[manual, 1/day]--> Offline-Borg
Snapshots --[hourly]--> Online-Borg "snaps" -+   (latest snapshot borg'ed in 3/day)


BillK
Re: cryptsetup close and device in use when it is not
Ramon Fischer wrote:
> OK, if it could be "udev", you might want to try to check the following:
>
>    $ grep -rF "<part_of_uuid>" /etc/udev/rules.d/
>    $ grep -rF "<part_of_uuid>" /lib/udev/rules.d/
>    $ grep -rF "<part_of_uuid>" /etc
>
> You could also try to search for the partition device, maybe there
> will be some interesting configuration files.
>
> If you are using "systemd", you might want to check every service unit
> file as well:
>
>    $ systemctl
>
> Recently, I had a similar issue with "cryptsetup" on Raspbian, where
> the "/etc/crypttab" was faulty, which may be applicable here. It had
> the following entry:
>
>    # <accident_paste_with_uuid> # <target name> <source device> [...]
>    <entry1>
>    <entry2>
>
> Therefore, the systemd service unit
> "systemd-cryptsetup@dev-disk-by\x2duuid-#<accident_paste_with_uuid> #
> <target name> <source device> [...]" - if I remember correctly - failed.
> It seems that "systemd-cryptsetup-generator" only searches for
> matching UUIDs in "/etc/crypttab", even if they are commented out, and
> creates service units for each match in "/run/systemd/generator/".
> I remember that I had issues accessing the hard drive. Nevertheless,
> I was able to mount it normally, due to the other correct entry(?).
>
> By removing the accidentally pasted UUID from "/etc/crypttab" and
> rebooting, I was able to use the hard drive without issues again.
>
> Maybe this is something, where you could poke around? :)
>
> -Ramon

I'm running openrc here.  I don't recall making any udev rules
recently.  This is a list of what I have.


root@fireball / # ls -al /etc/udev/rules.d/
total 20
drwxr-xr-x 2 root root 4096 Apr 27 15:07 .
drwxr-xr-x 3 root root 4096 Jul 27 03:17 ..
-rw-r--r-- 1 root root 2064 Apr 27 15:07 69-libmtp.rules
-rw-r--r-- 1 root root 1903 Apr  4  2012 70-persistent-cd.rules
-rw-r--r-- 1 root root  814 Jan  1  2008 70-persistent-net.rules
-rw-r--r-- 1 root root    0 Mar 22  2015 80-net-name-slot.rules
root@fireball / #


One is for CD/DVD stuff.  I wonder if I can remove that now.  The second is
for network cards and the top one is something to do with my old Motorola
cell phone, rest in peace.

All this said, it did it again last night.  I tried a few things and
went to bed while my updates were compiling.  When I got up a bit ago,
it closed just fine.  So, something says it is busy but eventually
releases it if left alone for a while.  I'd like to know what it is and
if it is really in use or not.  Thing is, I can't find a way to know
what it is that is using it.  The dmsetup command shows it is in use but
no way to know what is using it. 

Dale

:-)  :-) 
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On Mon, Aug 02, 2021 at 01:38:31PM +0800, William Kenworthy wrote:

> > Yup. Today I did my (not so) weekly backup and rsynced the repo to the new
> > drive. After that I wanted to compare performance of my old 3 TB drive and
> > the new SMR one by deleting a snapshot from the repo on each drive. But Borg
> > objected on the second deletion, because “the cache was newer”. But that’s
> > okay. I actually like this, as this will prevent me from chaning two repos
> > in parallel which would make them incompatible.
> >
> Keep in  mind that both repos have the same ID - you should also rsync
> the cache and security directories as well as they are now out of sync
> (hence the warning).

That thought crossed my mind recently but I was unsure how to store the
cache. But since the repo is a monolith, it should suffice to rsync
the whole cache directory to the backup drive (or do it as a tar).

The only problem is the temporal sequence:
1. Host A runs borg and gets a current cache.
2. Host B runs borg on the same repo and gets a current cache.
2a. Host A now has an outdated cache.

Usually, Host B uses Host A via ssh as remote location of the repository.
So I could simply run a borg command on Host A to update the cache somehow.

> Be very careful on how you do this - you are one step away from losing the
> while repo if the cache gets out of sync.  The docs warn against rsyncing
> two repos and then using them at the same time for a good reason.

I won’t use them at the same time. It will always be one direction:
Hosts --[borg]--> Main backup drive --[rsync]--> secondary backup drive

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

My conscience is clean! After all, I’ve never used it.
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On Mon, Aug 02, 2021 at 02:12:24PM +0800, William Kenworthy wrote:
> >>> And you are storing several machines into a single repo? The docs say this
> >>> is not supported officially. But I have one repo each for /, /home and data
> >>> for both my PC and laptop. Using a wrapper script, I create snapshots that
> >>> are named $HOSTNAME_$DATE in each repo.
> >> Basicly yes: I use a once per hour snapshot of approximately 500Gib of
> >> data on moosefs, plus borgbackups 3 times a day to individual repos on
> >> moosefs for each host.
> > So you have:
> > Host A --[hourly]--> Online-Borg A --+
> >                                      +--[3/day]--> Offline-Borg
> > Host B --[hourly]--> Online-Borg B --+
> > […]
>
> Hi Frank,
>
> Not quite - I see I could have been clearer.  I "experiment" a lot -
> which means things break so I need to get back running quickly.  So the
> purpose of the online repos and snapshots is just for that - quick
> recovery.

Whenever you say snapshot, you mean moosefs snapshots, right? Up until this
thread I’ve never heard of that FS.

I would love to play more with storage systems, moving stuff around, backing
it up, assigning space and so on (basically play admin for a few people),
but apart from my ZFS-based NAS, I have nothing that would need this. I run
a nextcloud instance on my shared internet host and one on my raspi. That’s
as far as it gets. :D

> stage 1: online, immediately available
>
> Hosts (those with actual attached storage - a mixture of intel, arm32
> and arm64 devices are backed up to their own borg repo 3 times a day via
> push.  One repo per machine on moosefs.
>
> A separate script does an hourly backup of VM, LXC images, and various
> data stores via a moosefs snapshot.
>
> stage 2: resource management for the snapshots
>
> 3 times a day, a script does a borg create on the latest snapshop at the
> time

So you mount the latest snapshot or access it in some other way and borg
*its* content, not the live data, right?

> and when complete deletes all previous snapshots (-1) so at that
> point I have two older snapshots available + a couple created during the
> borg run - note that large multi GiB snapshots can quickly use up all
> memory (32GiB) on the moosefs master unless culled regularly.

Sounds a bit delicate to me. If one link in the chain fails undetected for
some reason, you risk things clogging up.

> stage 3: offline because disasters happen :)
>
> All borg repos are on moosefs with a single root directory
> (/mnt/mfs/backups) so once every day or so I manually mount the offline
> disk and do a borg create on the backup directory.

What happens if that daily borg runs while the repos are being written to?

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Even a Bonsai dreams of greatness.
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On 3/8/21 5:52 am, Frank Steinmetzger wrote:
> On Mon, Aug 02, 2021 at 01:38:31PM +0800, William Kenworthy wrote:
>
>>> Yup. Today I did my (not so) weekly backup and rsynced the repo to the new
>>> drive. After that I wanted to compare performance of my old 3 TB drive and
>>> the new SMR one by deleting a snapshot from the repo on each drive. But Borg
>>> objected on the second deletion, because “the cache was newer”. But that’s
>>> okay. I actually like this, as this will prevent me from chaning two repos
>>> in parallel which would make them incompatible.
>>>
>> Keep in  mind that both repos have the same ID - you should also rsync
>> the cache and security directories as well as they are now out of sync
>> (hence the warning).
> That thought crossed my mind recently but I was unsure how to store the
> cache. But since the repo is a monolith, it should suffice to rsync
> the whole cache directory to the backup drive (or do it as a tar).
>
> The only problem is the temporal sequence:
> 1. Host A runs borg and gets a current cache.
> 2. Host B runs borg on the same repo and gets a current cache.
> 2a. Host A now has an outdated cache.
>
> Usually, Host B uses Host A via ssh as remote location of the repository.
> So I could simply run a borg command on Host A to update the cache somehow.
>
>> Be very careful on how you do this - you are one step away from losing the
>> while repo if the cache gets out of sync.  The docs warn against rsyncing
>> two repos and then using them at the same time for a good reason.
> I won’t use them at the same time. It will always be one direction:
> Hosts --[borg]--> Main backup drive --[rsync]--> secondary backup drive
>
You could delete and rebuild the cache each time (or I think there is a
way to do without it).  There are quite a few threads on the borg lists
about this in the past (usually people trying to recover trashed repos)
- you might ask there if there is a way to deal with changing the ID now?

In any case, I think doing it the way you are has a fairly high chance
of irretrievably trashing both repos.

BillK
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On 3/8/21 6:03 am, Frank Steinmetzger wrote:
> On Mon, Aug 02, 2021 at 02:12:24PM +0800, William Kenworthy wrote:
>>>>> And you are storing several machines into a single repo? The docs say this
>>>>> is not supported officially. But I have one repo each for /, /home and data
>>>>> for both my PC and laptop. Using a wrapper script, I create snapshots that
>>>>> are named $HOSTNAME_$DATE in each repo.
>>>> Basicly yes: I use a once per hour snapshot of approximately 500Gib of
>>>> data on moosefs, plus borgbackups 3 times a day to individual repos on
>>>> moosefs for each host.
>>> So you have:
>>> Host A --[hourly]--> Online-Borg A --+
>>>                                      +--[3/day]--> Offline-Borg
>>> Host B --[hourly]--> Online-Borg B --+
>>> […]
>> Hi Frank,
>>
>> Not quite - I see I could have been clearer.  I "experiment" a lot -
>> which means things break so I need to get back running quickly.  So the
>> purpose of the online repos and snapshots is just for that - quick
>> recovery.
> Whenever you say snapshot, you meen moosefs snapshots, right? Up until this
> thread I’ve never heard of that FS.
>
> I would love to play more with storage systems, moving stuff around, backing
> it up, assigning space and so on (basically play admin for a few people),
> but apart from my ZFS-based NAS, I have nothing that would need this. I run
> a nextcloud instance on my shared internet host and one on my raspi. That’s
> as far as it gets. :D
>
>> stage 1: online, immediately available
>>
>> Hosts (those with actual attached storage - a mixture of intel, arm32
>> and arm64 devices are backed up to their own borg repo 3 times a day via
>> push.  One repo per machine on moosefs.
>>
>> A separate script does an hourly backup of VM, LXC images, and various
>> data stores via a moosefs snapshot.
>>
>> stage 2: resource management for the snapshots
>>
>> 3 times a day, a script does a borg create on the latest snapshop at the
>> time
> So you mount the latest snapshot or access it in some other way and borg
> *its* content, not the live data, right?
Yes, though a moosefs snapshot is really a lazy copy of the data to a
new location - issue the mfsmakesnapshot command and a few seconds later
you have an identical copy of possibly terabytes of data in a new
location with almost no extra disk space needed - though moosefs needs
memory allocated to track the contents.  I.e. think of it like a
symlink/hardlink to the original data until a file is changed, whereupon
its links are broken and it becomes new data - it's a little more complex
than that, but that's the gist of it.  If you need data from a snapshot, you
just copy it out or use it in place, which breaks the link if written to.
>
>> and when complete deletes all previous snapshots (-1) so at that
>> point I have two older snapshots available + a couple created during the
>> borg run - note that large multi GiB snapshots can quickly use up all
>> memory (32GiB) on the moosefs master unless culled regularly.
> Sounds a bit delicate to me. If one link fails for some reason undetectedly,
> you risk clog-up.
Problems so far relate to borg failing to run for some reason - I notice
it and fix it; no problems overall, as the rest keeps working.
>
>> stage 3: offline because disasters happen :)
>>
>> All borg repos are on moosefs with a single root directory
>> (/mnt/mfs/backups) so once every day or so I manually mount the offline
>> disk and do a borg create on the backup directory.
> What happens if that daily borg runs while the repos are being written to?

To avoid this I mostly use fcron, which has serialisation features, so as
long as I coordinate start times and take run times into account, it's good.
The manual copy is of course a bit tricky, but again, it's a timing
issue.  If I make a mistake, I would expect to include a repo that might
need a check/repair before use.  To borg, it's just backing up files - it
doesn't care that it's another borg repo, in use or not.  It still treats
open files the same way as any other files - try to read them, but skip
them if unable to.  Two borg instances can't work in the same repo, but one
can work in a repo while another backs that repo up at the same time,
because the second one just sees it as files, if that makes sense.  In
reality, I have not needed to deal with it yet.  The great thing about borg
and the way this rota is structured is that I have a history I can go back
to if necessary.  In my experimenting to get this right, I pay attention to
the warnings borg issues, and I often delete the cache and security
directories to make sure everything stays sane.

Getting way off-topic here so we can take this off list if you are
interested, or maybe others are interested here?

BillK


Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On Tue, Aug 03, 2021 at 07:10:03AM +0800, William Kenworthy wrote:

> >> Keep in  mind that both repos have the same ID - you should also rsync
> >> the cache and security directories as well as they are now out of sync
> >> (hence the warning).
> > That thought crossed my mind recently but I was unsure how to store the
> > cache. But since the repo is a monolith, it should suffice to rsync
> > the whole cache directory to the backup drive (or do it as a tar).
> >
> > The only problem is the temporal sequence:
> > 1. Host A runs borg and gets a current cache.
> > 2. Host B runs borg on the same repo and gets a current cache.
> > 2a. Host A now has an outdated cache.
> >
> > Usually, Host B uses Host A via ssh as remote location of the repository.
> > So I could simply run a borg command on Host A to update the cache somehow.
> >
> >> Be very careful on how you do this - you are one step away from losing the
> >> while repo if the cache gets out of sync.  The docs warn against rsyncing
> >> two repos and then using them at the same time for a good reason.
> > I won’t use them at the same time. It will always be one direction:
> > Hosts --[borg]--> Main backup drive --[rsync]--> secondary backup drive
> >
> You could delete and rebuild the cache each time (or I think there is a
> way to do without it).

If the cache can be easily rebuilt, then there’d be no need to store it at
all. At least that’s what I hoped, but was shown otherwise; I deleted the
whole cache, wanting to clear out cruft, and then the next run took
hours due to a complete re-hash.

> There are quite a few threads on the borg lists about this in the past
> (usually people trying to recover trashed repos)

I’ll give them a read in a quiet hour.

> - you might ask there if there is a way to deal with changing the ID now?

Why would I need to change the ID? As already explained, I will only ever
borg to the primary backup disk and mirror that to another disk with rsync.
And when the primary fails, I use the secondary as a drop-in replacement.

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Sometimes the fingers are faster then grammar.
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On Tue, Aug 03, 2021 at 10:18:06AM +0200, Frank Steinmetzger wrote:

> > You could delete and rebuild the cache each time (or I think there is a
> > way to do without it).
>
> If the cache can be easily rebuilt, then there’d be no need to store it at
> all.

Here’s an afterthought that just hit me:
there should actually be no point in archiving the cache at all. If you had
a disaster and do a full restore from borg, the old cache data becomes
invalid anyway, because the files’ inodes will now be different. AFAIK,
inodes are one way of detecting file changes. Different inode → file must be
different → rehash.

(Unless `borg extract` updates the borg cache for files it restores, which I
doubt because the destination path is arbitrary.)

--
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Normal people believe that if it ain't broke, don't fix it. Engineers
believe that if it ain't broke, it doesn't have enough features yet.
Re: [OT] SMR drives (WAS: cryptsetup close and device in use when it is not)
On 6/8/21 4:40 am, Frank Steinmetzger wrote:
> On Tue, Aug 03, 2021 at 10:18:06AM +0200, Frank Steinmetzger wrote:
>
>>> You could delete and rebuild the cache each time (or I think there is a
>>> way to do without it).
>> If the cache can be easily rebuilt, then there’d be no need to store it at
>> all.
> Here’s an afterthought that just hit me:
> there should actually be no point in archiving the cache at all. If you had
> a disaster and do a full restore from borg, the old cache data becomes
> invalid anyway, because the files’ inodes will now be different. AFAIK,
> inodes are one way of detecting file changes. Different inode → file must be
> different → rehash.
>
> (Unless `borg extract` updates the borg cache for files it restores, which I
> doubt because the destination path is arbitrary.)
>
Agreed - I do get a warning on restore, and my first choice is always to
delete the cache AND the security directory - I should just go ahead and
do it anyway, I guess.

Also, it would be a good time to read the borg create documentation
(https://borgbackup.readthedocs.io/en/stable/usage/create.html) for the
file change detection parameters - moosefs and snapshots required
non-default options to get it right.
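
For example (illustrative only - not necessarily the right flags for your
setup, so check the docs), dropping the inode from the comparison is the
usual adjustment when the backed-up paths get fresh inodes on every run, as
snapshot copies do:

   # compare files by mtime+size only, ignoring inode numbers
   borg create --files-cache=mtime,size /mnt/mfs/backups/snaps::snaps_$(date +%F) /path/to/latest/snapshot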

BillK
Re: cryptsetup close and device in use when it is not
Hi Dale,

> So, something says it is busy but eventually
> releases it if left alone for a while.  I'd like to know what it is and
> if it is really in use or not.  Thing is, I can't find a way to know
> what it is that is using it.  The dmsetup command shows it is in use but
> no way to know what is using it.
I could reproduce this issue by killing my desktop process, unmounting
the home partition and playing some "kill process" bingo. I could trace
it back to one unkillable process, "kcryptd":

1. Kill "awesomewm": <CTRL + ALT> + Backspace
2. Kill other processes accessing "/home/"
3. umount /home
4. cryptsetup close crypthome
Device crypthome is still in use
5. dmsetup info /dev/mapper/crypthome
Name:              crypthome
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: CRYPT-LUKS2-<some_uuid>-crypthome
6. Kill any unnecessary process and try "cryptsetup close crypthome"
7. Search for major, minor: ps aux | grep "253:1"
root       150  0.2  0.0      0     0 ?        I    15:21   0:02
[kworker/u16:5-kcryptd/253:1]
8. Does not work: kill 150
9. Does not work and could be dangerous: kill -9 150

So, there was still one "kcryptd" process left, accessing the hard
drive, but I found no way to kill it.

Maybe this could be helpful?

-Ramon


On 02/08/2021 15:33, Dale wrote:
> Ramon Fischer wrote:
>> OK, if it could be "udev", you might want to try to check the following:
>>
>>    $ grep -rF "<part_of_uuid>" /etc/udev/rules.d/
>>    $ grep -rF "<part_of_uuid>" /lib/udev/rules.d/
>>    $ grep -rF "<part_of_uuid>" /etc
>>
>> You could also try to search for the partition device, maybe there
>> will be some interesting configuration files.
>>
>> If you are using "systemd", you might want to check every service unit
>> file as well:
>>
>>    $ systemctl
>>
>> Recently, I had a similar issue with "cryptsetup" on Raspbian, where
>> the "/etc/crypttab" was faulty, which may be applicable here. It had
>> the following entry:
>>
>>    # <accident_paste_with_uuid> # <target name> <source device> [...]
>>    <entry1>
>>    <entry2>
>>
>> Therefore, the systemd service unit
>> "systemd-cryptsetup@dev-disk-by\x2duuid-#<accident_paste_with_uuid> #
>> <target name> <source device> [...]" - if I remember correctly - failed.
>> It seems, that "systemd-cryptsetup-generator" only searches for
>> matching UUIDs in "/etc/crypttab", even, if they are commented and
>> creates service units for each match in "/run/systemd/generator/".
>> I remember, that I had issues to access the hard drive. Nevertheless,
>> I was able to mount it normally, due to the other correct entry(?).
>>
>> By removing the accidentally pasted UUID from "/etc/crypttab" and
>> rebooting, I was able to use the hard drive without issues again.
>>
>> Maybe this is something, where you could poke around? :)
>>
>> -Ramon
> I'm running openrc here.  I don't recall making any udev rules
> recently.  This is a list of what I have.
>
>
> root@fireball / # ls -al /etc/udev/rules.d/
> total 20
> drwxr-xr-x 2 root root 4096 Apr 27 15:07 .
> drwxr-xr-x 3 root root 4096 Jul 27 03:17 ..
> -rw-r--r-- 1 root root 2064 Apr 27 15:07 69-libmtp.rules
> -rw-r--r-- 1 root root 1903 Apr  4  2012 70-persistent-cd.rules
> -rw-r--r-- 1 root root  814 Jan  1  2008 70-persistent-net.rules
> -rw-r--r-- 1 root root    0 Mar 22  2015 80-net-name-slot.rules
> root@fireball / #
>
>
> One is for CD/DVD stuff.  I wonder if I can remove that now.  Two is for
> network cards and top one is something to do with my old Motorola cell
> phone, rest in peace.
>
> All this said, it did it again last night.  I tried a few things and
> went to bed while my updates were compiling.  When I got up a bit ago,
> it closed just fine.  So, something says it is busy but eventually
> releases it if left alone for a while.  I'd like to know what it is and
> if it is really in use or not.  Thing is, I can't find a way to know
> what it is that is using it.  The dmsetup command shows it is in use but
> no way to know what is using it.
>
> Dale
>
> :-)  :-)

--
GPG public key: 5983 98DA 5F4D A464 38FD CF87 155B E264 13E6 99BF
Re: cryptsetup close and device in use when it is not
Ramon Fischer wrote:
> Hi Dale,
>
>>   So, something says it is busy but eventually
>> releases it if left alone for a while.  I'd like to know what it is and
>> if it is really in use or not.  Thing is, I can't find a way to know
>> what it is that is using it.  The dmsetup command shows it is in use but
>> no way to know what is using it.
> I could reproduce this issue by killing my desktop process, unmounting
> the home partition and playing some "kill process" bingo. I could
> backtrace it to one unkillable process "kcryptd":
>
>    1. Kill "awesomewm": <CTRL + ALT> + Backspace
>    2. Kill other processes accessing "/home/"
>    3. umount /home
>    4. cryptsetup close crypthome
>    Device crypthome is still in use
>    5. dmsetup info /dev/mapper/crypthome
>    Name:              crypthome
>    State:             ACTIVE
>    Read Ahead:        256
>    Tables present:    LIVE
>    Open count:        1
>    Event number:      0
>    Major, minor:      253, 1
>    Number of targets: 1
>    UUID: CRYPT-LUKS2-<some_uuid>-crypthome
>    6. Kill any unnecessary process and try "cryptsetup close crypthome"
>    7. Search for major, minor: ps aux | grep "253:1"
>    root       150  0.2  0.0      0     0 ?        I    15:21   0:02
>    [kworker/u16:5-kcryptd/253:1]
>    8. Does not work: kill 150
>    9. Does not work and could be dangerous: kill -9 150
>
> So, there was still one "kcryptd" process left, accessing the hard
> drive, but I found no way to kill it.
>
> Maybe this could be helpful?
>
> -Ramon
>


Well, it still does it, but there is no rhyme or reason to when it says
in use and when it closes when asked to.  I too saw a kcryptd process,
but I'm not sure what triggers it.  I didn't try to kill it since I'm pretty
sure it is a kernel process.  I build everything into my kernel, no
modules.  Sort of scared to mess with it.

So, sometimes it works as it should, sometimes not.  When it doesn't, if
I leave it alone for a while then try again, it works.  I also checked
to be sure SMART wasn't doing something but if it is, I couldn't see
it.  Since it is a removable drive, I don't have it set to do anything
either. 

I guess I'll just have to wait on it to finish whatever it is doing to
close it when it gets stubborn.  Maybe this is a bug, maybe it has a
really good reason for not closing.  Who knows.  :/

Thanks to all for the help.  We gave it a good try.

Dale

:-)  :-)
