Mailing List Archive

[cinder] Pruning Old Volume Backups with Ceph Backend
I back up my volumes daily, using incremental backups to minimize
network traffic and storage consumption. I want to periodically remove
old backups, and during this pruning operation, avoid entering a state
where a volume has no recent backups. Ceph RBD appears to support this
workflow, but unfortunately, Cinder does not. I can only delete the
*latest* backup of a given volume, and this precludes any reasonable
way to prune backups. Here, I'll show you.
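
For context, the daily job amounts to little more than the following (a
rough sketch; the volume name and date-based naming scheme are
placeholders, not our actual tooling):

```
# One backup per volume per night. The first run produces a full backup;
# later runs come out incremental, as described below.
openstack volume backup create \
  --name "volume-foo-$(date +%Y%m%d)" \
  --force \
  volume-foo
```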

Let's make three backups of the same volume:
```
openstack volume backup create --name backup-1 --force volume-foo
openstack volume backup create --name backup-2 --force volume-foo
openstack volume backup create --name backup-3 --force volume-foo
```

Cinder reports the following via `volume backup show`:
- backup-1 is not an incremental backup, but backup-2 and backup-3 are (`is_incremental`).
- All but the latest backup have dependent backups (`has_dependent_backups`); a quick way to query just these two fields is sketched below.
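
For instance (`-c` simply limits the `show` output to the named columns):

```
# Print just the two fields of interest for each backup.
for b in backup-1 backup-2 backup-3; do
  echo "== $b"
  openstack volume backup show "$b" -c is_incremental -c has_dependent_backups
done
```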

We take a backup every day, and after a week we're on backup-7. We
want to start deleting older backups so that we don't keep
accumulating backups forever! What happens when we try?

```
# openstack volume backup delete backup-1
Failed to delete backup with name or ID 'backup-1': Invalid backup:
Incremental backups exist for this backup. (HTTP 400)
```

We can't delete backup-1 because Cinder considers it a "base" backup
which `has_dependent_backups`. What about backup-2? Same story. Adding
the `--force` flag just gives a slightly different error message. The
*only* backup that Cinder will delete is backup-7 -- the very latest
one. This means that if we want to remove the oldest backups of a
volume, *we must first remove all newer backups of the same volume*,
i.e. delete literally all of our backups.
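
Put differently, the only "pruning" Cinder will accept is something like
the following sketch, which throws away every restore point we have:

```
# Deletion only works newest-first, so removing backup-1 means removing
# every backup of the volume (names as in the example above).
for b in backup-7 backup-6 backup-5 backup-4 backup-3 backup-2 backup-1; do
  openstack volume backup delete "$b"
done
```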

Also, we cannot force creation of another *full* (non-incremental)
backup in order to free all of the earlier backups for removal.
(Omitting the `--incremental` flag has no effect; you still get an
incremental backup.)
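
(A quick way to check this on your own deployment; `backup-8` is just an
illustrative name:)

```
# Create one more backup without --incremental and ask Cinder what it is.
# Here, is_incremental still comes back True.
openstack volume backup create --name backup-8 --force volume-foo
openstack volume backup show backup-8 -c is_incremental
```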

Can we hope for better? Let's reach behind Cinder to the Ceph backend.
Volume backups are represented as a "base" RBD image with a snapshot
for each incremental backup:

```
# rbd snap ls volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base
SNAPID NAME                                                            SIZE     TIMESTAMP
   577 backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43 10240 MB Thu Aug 23 10:57:48 2018
   578 backup.93fbd83b-f34d-45bc-a378-18268c8c0a25.snap.1535047520.44 10240 MB Thu Aug 23 11:05:43 2018
   579 backup.b6bed35a-45e7-4df1-bc09-257aa01efe9b.snap.1535047564.46 10240 MB Thu Aug 23 11:06:47 2018
   580 backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71 10240 MB Thu Aug 23 11:22:23 2018
   581 backup.8cd035b9-63bf-4920-a8ec-c07ba370fb94.snap.1535048538.72 10240 MB Thu Aug 23 11:22:47 2018
   582 backup.cb7b6920-a79e-408e-b84f-5269d80235b2.snap.1535048559.82 10240 MB Thu Aug 23 11:23:04 2018
   583 backup.a7871768-1863-435f-be9d-b50af47c905a.snap.1535048588.26 10240 MB Thu Aug 23 11:23:31 2018
   584 backup.b18522e4-d237-4ee5-8786-78eac3d590de.snap.1535052729.52 10240 MB Thu Aug 23 12:32:43 2018
```

It seems that each snapshot stands alone and doesn't depend on others.
Ceph lets me delete the older snapshots:

```
# rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.e3c1bcff-c1a4-450f-a2a5-a5061c8e3733.snap.1535046973.43
Removing snap: 100% complete...done.
# rbd snap rm volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@backup.10128aba-0e18-40f1-acfb-11d7bb6cb487.snap.1535048513.71
Removing snap: 100% complete...done.
```

Now that we've nuked the snapshots behind backup-1 and backup-4, can we still restore from
backup-7 and launch an instance with it?

```
openstack volume create --size 10 --bootable volume-foo-restored
openstack volume backup restore backup-7 volume-foo-restored
openstack server create --volume volume-foo-restored --flavor medium1 \
  instance-restored-from-backup-7
```

Yes! We can SSH to the instance and it appears intact.
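
(For a stricter check than "it boots and looks fine", one could compare the
restored volume against the backup snapshot directly in Ceph. A sketch,
assuming the restore is a straight copy of that snapshot; the pool names
and the restored volume's UUID and snapshot name are placeholders:)

```
# Checksum the restored volume and the corresponding snapshot of the
# backup base image; if the restore is byte-for-byte, these should match.
rbd export volumes/volume-<restored-volume-uuid> - | sha256sum
rbd export backups/volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@<backup-7-snapshot-name> - | sha256sum
```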

Perhaps each snapshot in Ceph captures a complete point-in-time state of the
base RBD image (rather than each successive snapshot depending on the last). If
this is true, then Cinder is unnecessarily protective of older
backups. Cinder represents these as "with dependents" and doesn't let
us touch them, even though Ceph will let us delete older RBD
snapshots, apparently without disrupting newer snapshots of the same
volume. If we could remove this limitation, Cinder backups would be
significantly more useful for us. We mostly host servers with
non-cloud-native workloads (IaaS for research scientists). For these,
full-disk backups at the infrastructure level are an important
supplement to file-level or application-level backups.
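
(Two low-effort ways to poke at this hypothesis, sketched below; the pool
name and snapshot name are placeholders:)

```
# Per-snapshot space usage on the backup base image.
rbd du backups/volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base

# Export any single snapshot to a file; it should come out as a complete
# point-in-time image, independent of which neighbouring snapshots remain.
rbd export backups/volume-e742c4e2-e331-4297-a7df-c25e729fdd83.backup.base@<any-snapshot-name> /tmp/point-in-time.img
```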

It would be great if someone else could confirm or disprove what I'm
seeing here. I'd also love to hear from anyone else using Cinder
backups this way.

Regards,

Chris Martin at CyVerse

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Re: [cinder] Pruning Old Volume Backups with Ceph Backend
Hi Chris,

Unless I overlooked something, I don't see Cinder or Ceph versions posted.

Feel free to just post the codenames but give us some inkling.

Re: [cinder] Pruning Old Volume Backups with Ceph Backend
Apologies -- I'm running the Pike release of Cinder and the Luminous
release of Ceph, deployed with OpenStack-Ansible and Ceph-Ansible
respectively.

Re: [cinder] Pruning Old Volume Backups with Ceph Backend
Hi Chris,

I can't seem to reproduce your issue. What OpenStack release are you using?

> ```
> openstack volume backup create --name backup-1 --force volume-foo
> openstack volume backup create --name backup-2 --force volume-foo
> openstack volume backup create --name backup-3 --force volume-foo
> ```
> Cinder reports the following via `volume backup show`:
> - backup-1 is not an incremental backup, but backup-2 and backup-3 are
> (`is_incremental`).
> - All but the latest backup have dependent backups (`has_dependent_backups`).

If I don't create the backups with the --incremental flag, they're all
independent and don't have dependent backups:

---cut here---
(openstack) volume backup create --name backup1 --force
51c18b65-db03-485e-98fd-ccb0f0c2422d
(openstack) volume backup create --name backup2 --force
51c18b65-db03-485e-98fd-ccb0f0c2422d

(openstack) volume backup show backup1
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | nova |
| container | images |
| created_at | 2018-08-29T09:33:42.000000 |
| data_timestamp | 2018-08-29T09:33:42.000000 |
| description | None |
| fail_reason | None |
| has_dependent_backups | False |
| id | 8c9b20a5-bf31-4771-b8db-b828664bb810 |
| is_incremental | False |
| name | backup1 |
| object_count | 0 |
| size | 2 |
| snapshot_id | None |
| status | available |
| updated_at | 2018-08-29T09:34:14.000000 |
| volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d |
+-----------------------+--------------------------------------+

(openstack) volume backup show backup2
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | nova |
| container | images |
| created_at | 2018-08-29T09:34:20.000000 |
| data_timestamp | 2018-08-29T09:34:20.000000 |
| description | None |
| fail_reason | None |
| has_dependent_backups | False |
| id | 9de60042-b4b6-478a-ac4d-49bf1b00d297 |
| is_incremental | False |
| name | backup2 |
| object_count | 0 |
| size | 2 |
| snapshot_id | None |
| status | available |
| updated_at | 2018-08-29T09:34:52.000000 |
| volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d |
+-----------------------+--------------------------------------+

(openstack) volume backup delete backup1
(openstack) volume backup list
+--------------------------------------+---------+-------------+-----------+------+
| ID                                   | Name    | Description | Status    | Size |
+--------------------------------------+---------+-------------+-----------+------+
| 9de60042-b4b6-478a-ac4d-49bf1b00d297 | backup2 | None        | available |    2 |
+--------------------------------------+---------+-------------+-----------+------+

(openstack) volume backup create --name backup-inc1 --incremental
--force 51c18b65-db03-485e-98fd-ccb0f0c2422d
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 79e2f71b-1c3b-42d1-8582-4934568fea80 |
| name | backup-inc1 |
+-------+--------------------------------------+

(openstack) volume backup create --name backup-inc2 --incremental
--force 51c18b65-db03-485e-98fd-ccb0f0c2422d
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | e1033d2a-f2c2-409a-880a-9630e45f1312 |
| name | backup-inc2 |
+-------+--------------------------------------+

# Now backup2 is the base backup and has dependents
(openstack) volume backup show 9de60042-b4b6-478a-ac4d-49bf1b00d297
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| has_dependent_backups | True |
| id | 9de60042-b4b6-478a-ac4d-49bf1b00d297 |
| is_incremental | False |
| name | backup2 |
| volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d |
+-----------------------+--------------------------------------+

# backup-inc1 is incremental and has dependent backups
(openstack) volume backup show 79e2f71b-1c3b-42d1-8582-4934568fea80
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| has_dependent_backups | True |
| id | 79e2f71b-1c3b-42d1-8582-4934568fea80 |
| is_incremental | True |
| name | backup-inc1 |
| volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d |
+-----------------------+--------------------------------------+

# backup-inc2 has no dependent backups
(openstack) volume backup show e1033d2a-f2c2-409a-880a-9630e45f1312
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| has_dependent_backups | False |
| id | e1033d2a-f2c2-409a-880a-9630e45f1312 |
| is_incremental | True |
| name | backup-inc2 |
| volume_id | 51c18b65-db03-485e-98fd-ccb0f0c2422d |
+-----------------------+--------------------------------------+

# But I don't see a base backup like your output shows
control:~ # rbd -p images ls | grep 51c18b65-db03-485e-98fd-ccb0f0c2422d
volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.79e2f71b-1c3b-42d1-8582-4934568fea80
volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.9de60042-b4b6-478a-ac4d-49bf1b00d297
volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.e1033d2a-f2c2-409a-880a-9630e45f1312

# The snapshots in Ceph are only visible during incremental backup:
control:~ # for vol in $(rbd -p images ls | grep 51c18b65-db03-485e-98fd-ccb0f0c2422d); do echo "VOLUME: $vol"; rbd snap ls images/$vol; done
VOLUME: volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.79e2f71b-1c3b-42d1-8582-4934568fea80
VOLUME: volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.9de60042-b4b6-478a-ac4d-49bf1b00d297
VOLUME: volume-51c18b65-db03-485e-98fd-ccb0f0c2422d.backup.e1033d2a-f2c2-409a-880a-9630e45f1312

# Of course, I can't delete the first two
(openstack) volume backup delete 9de60042-b4b6-478a-ac4d-49bf1b00d297
Failed to delete backup with name or ID
'9de60042-b4b6-478a-ac4d-49bf1b00d297': Invalid backup: Incremental
backups exist for this backup. (HTTP 400) (Request-ID:
req-7720fd51-3f63-484d-b421-d5b957d5fa83)
1 of 1 backups failed to delete.

(openstack) volume backup delete 79e2f71b-1c3b-42d1-8582-4934568fea80
Failed to delete backup with name or ID
'79e2f71b-1c3b-42d1-8582-4934568fea80': Invalid backup: Incremental
backups exist for this backup. (HTTP 400) (Request-ID:
req-a0ea1e8f-13b8-43cb-9f71-8b395d181438)
1 of 1 backups failed to delete.
---cut here---

This is Ocata we're running on. Do you have any setting configured to
create incremental backups automatically? But that would fail when there
is no previous full backup. If I delete all existing backups and start
over with an incremental backup, it fails:

(openstack) volume backup create --name backup-inc1 --incremental
--force 51c18b65-db03-485e-98fd-ccb0f0c2422d
Invalid backup: No backups available to do an incremental backup.

We back up our volumes at the RBD level, not with Cinder, so they aren't
incremental.
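
Roughly like this, if it helps (a simplified sketch; the pool, image and
destination names are placeholders, not our actual scripts):

---cut here---
# Full (non-incremental) RBD-level backup of one volume.
rbd snap create volumes/volume-<uuid>@backup-$(date +%Y%m%d)
rbd export volumes/volume-<uuid>@backup-$(date +%Y%m%d) \
    /backup/volume-<uuid>-$(date +%Y%m%d).img
rbd snap rm volumes/volume-<uuid>@backup-$(date +%Y%m%d)
---cut here---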

Maybe someone else is able to reproduce your issue.

Regards,
Eugen

