Mailing List Archive

[nova][cinder] Migrate instances between regions or between clusters?
Hi,

So here's a possibly stupid question - or rather, a series of such :)
Let's say a company has two (or five, or a hundred) datacenters in
geographically different locations and wants to deploy OpenStack in both.
What would be a deployment scenario that would allow relatively easy
migration (cold, not live) of instances from one datacenter to another?

My understanding is that for servers located far away from one another,
regions would be a better metaphor than availability zones, if only
because it would be faster for the various storage, compute, etc.
services to communicate with each other in the common case of doing
actions within the same datacenter. Is this understanding wrong - is it
considered all right for groups of servers located in far-away places to
be treated as different availability zones in the same cluster?

If the groups of servers are put in different regions, though, this
brings me to the real question: how can an instance be migrated across
regions? Note that the instance will almost certainly have some
shared-storage volume attached, and assume (not quite the common case,
but still) that the underlying shared storage technology can be taught
about another storage cluster in another location and can transfer
volumes and snapshots to remote clusters. From what I've found, there
are three basic ways:

- do it pretty much by hand: create snapshots of the volumes used in
the underlying storage system, transfer them to the other storage
cluster, then tell the Cinder volume driver to manage them, and spawn
an instance with the newly managed, newly transferred volumes (see
the sketch after this list)

- use Cinder to backup the volumes from one region, then restore them to
the other; if this is combined with a storage-specific Cinder backup
driver that knows that "backing up" is "creating a snapshot" and
"restoring to the other region" is "transferring that snapshot to the
remote storage cluster", it seems to be the easiest way forward (once
the Cinder backup driver has been written)

- use Nova's "server image create" command, transfer the resulting
Glance image somehow (possibly by downloading it from the Glance
storage in one region and simultaneously uploading it to the Glance
instance in the other), then spawn an instance off that image
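
For concreteness, a minimal sketch of the first ("by hand") option with
the CLI; the storage-level snapshot transfer happens outside OpenStack,
and the volume type, host string, and names below are made-up
placeholders:

    # Source region: transfer the volume's snapshot to the remote
    # storage cluster with the storage system's own tools (not shown).

    # Destination region: adopt the transferred volume into Cinder...
    cinder manage --name migrated-root --volume-type storpool \
        --bootable <destination-host>@<backend>#<pool> <backend-volume-name>

    # ...and spawn a new instance from the now-managed volume.
    openstack server create --flavor m1.small --network private \
        --volume migrated-root migrated-instance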

The "server image create" approach seems to be the simplest one,
although it is a bit hard to imagine how it would work without
transferring data unnecessarily (the online articles I've seen
advocating it seem to imply that a Nova instance in a region cannot be
spawned off a Glance image in another region, so there will need to be
at least one set of "download the image and upload it to the other
side", even if the volume-to-image and image-to-volume transfers are
instantaneous, e.g. using glance-cinderclient). However, when I tried
it with a Nova instance backed by a StorPool volume (no ephemeral image
at all), the Glance image was zero bytes in length and only its metadata
contained some information about a volume snapshot created at that
point, so this seems once again to go back to options 1 and 2 for the
different ways to transfer a Cinder volume or snapshot to the other
region. Or have I missed something, is there a way to get the "server
image create / image download / image create" route to handle volumes
attached to the instance?
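
For reference, the CLI-level flow for the third option - a sketch only,
and as just described it does not seem to help when the instance is
fully volume-backed; image and file names are placeholders, and the
disk format depends on the hypervisor:

    # Source region: snapshot the instance into a Glance image and
    # download it.
    openstack server image create --name my-instance-export my-instance
    openstack image save --file my-instance-export.img my-instance-export

    # Destination region: upload the image and spawn an instance off it.
    openstack image create --disk-format qcow2 --container-format bare \
        --file my-instance-export.img my-instance-export
    openstack server create --flavor m1.small --network private \
        --image my-instance-export new-instance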

So... have I missed something else, too, or are these the options for
transferring a Nova instance between two distant locations?

Thanks for reading this far, and thanks in advance for your help!

Best regards,
Peter

--
Peter Penchev openstack-dev@storpool.com https://storpool.com/
Re: [nova][cinder] Migrate instances between regions or between clusters? [ In reply to ]
On 09/17/2018 09:39 AM, Peter Penchev wrote:
> [snip: question intro quoted in full above]
>
> - do it pretty much by hand: create snapshots of the volumes used in
> the underlying storage system, transfer them to the other storage
> cluster, then tell the Cinder volume driver to manage them, and spawn
> an instance with the newly managed, newly transferred volumes

Yes, this is a perfectly reasonable solution. In fact, when I was at
AT&T, this was basically how we allowed tenants to spin up instances in
multiple regions: snapshot the instance, it gets stored in the Swift
storage for the region, tenant starts the instance in a different
region, and Nova pulls the image from the Swift storage in the other
region. It's slow the first time it's launched in the new region, of
course, since the bits need to be pulled from the other region's Swift
storage, but after that, local image caching speeds things up quite a bit.

This isn't migration, though. Namely, the tenant doesn't keep their
instance ID, their instance's IP addresses, or anything like that.

I've heard some users care about that stuff, unfortunately, which is why
we have shelve [offload]. There's absolutely no way to perform a
cross-region migration that keeps the instance ID and instance IP addresses.

> - use Cinder to backup the volumes from one region, then restore them to
> the other; if this is combined with a storage-specific Cinder backup
> driver that knows that "backing up" is "creating a snapshot" and
> "restoring to the other region" is "transferring that snapshot to the
> remote storage cluster", it seems to be the easiest way forward (once
> the Cinder backup driver has been written)

Still won't have the same instance ID and IP address, which is what
certain users tend to complain about needing with move operations.

> - use Nova's "server image create" command, transfer the resulting
> Glance image somehow (possibly by downloading it from the Glance
> storage in one region and simultaneously uploading it to the Glance
> instance in the other), then spawn an instance off that image

Still won't have the same instance ID and IP address :)

Best,
-jay

> The "server image create" approach seems to be the simplest one,
> although it is a bit hard to imagine how it would work without
> transferring data unnecessarily (the online articles I've seen
> advocating it seem to imply that a Nova instance in a region cannot be
> spawned off a Glance image in another region, so there will need to be
> at least one set of "download the image and upload it to the other
> side", even if the volume-to-image and image-to-volume transfers are
> instantaneous, e.g. using glance-cinderclient). However, when I tried
> it with a Nova instance backed by a StorPool volume (no ephemeral image
> at all), the Glance image was zero bytes in length and only its metadata
> contained some information about a volume snapshot created at that
> point, so this seems once again to go back to options 1 and 2 for the
> different ways to transfer a Cinder volume or snapshot to the other
> region. Or have I missed something, is there a way to get the "server
> image create / image download / image create" route to handle volumes
> attached to the instance?
>
> So... have I missed something else, too, or are these the options for
> transferring a Nova instance between two distant locations?
>
> Thanks for reading this far, and thanks in advance for your help!
>
> Best regards,
> Peter
>
>
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>

_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Re: [nova][cinder] Migrate instances between regions or between clusters? [ In reply to ]
Create a volume-transfer VM/machine in each region.
Attach the volume -> dd -> compress -> internet -> decompress -> new
volume; attach (/boot with) the volume to the final machine.
If you have frequent transfers you may keep the machines up for the
next one. A rough sketch of such a pipeline is below.
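
Something along these lines, assuming the volumes show up as /dev/vdb
on both transfer VMs and that there is SSH connectivity between them
(the hostname is a placeholder):

    # On the source transfer VM: read the attached volume, compress the
    # stream, and send it over SSH to be written out on the other side.
    dd if=/dev/vdb bs=4M status=progress | gzip -c \
        | ssh transfer-vm.region-two.example.com 'gunzip -c | dd of=/dev/vdb bs=4M'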

In case the storage is just on the compute node: snapshot -> glance
download -> glance upload.

It would be nice if Cinder/Glance could take the credentials for another
OpenStack and move the volume/image to the other Cinder/Glance.

If you want the same IP, specify the IP at instance boot time (port
create), but you cannot be sure the same IP is always available or
actually routable to a different region... unless a VPN-like solution
is in place...

The UUID is not expected to be changed by users or admins (that would
be unsafe), but you can keep your own identifier in other metadata or
the description.
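
For instance, the old instance's ID could be recorded as a property on
the new instance - a convention you would define yourselves, not
something OpenStack itself interprets (names are made up):

    openstack server set --property source_instance_uuid=<old-uuid> new-instance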



On Mon, Sep 17, 2018 at 11:43 PM, Jay Pipes <jaypipes@gmail.com> wrote:

> [snip: full quote of Jay's reply and the original message]
Re: [nova][cinder] Migrate instances between regions or between clusters? [ In reply to ]
On Tue, Sep 18, 2018 at 11:32:37AM +0200, Attila Fazekas wrote:
[format recovered; top-posting after an inline reply looks confusing]
> [snip: quoted thread trimmed]
> >
> > This isn't migration, though. Namely, the tenant doesn't keep their
> > instance ID, their instance's IP addresses, or anything like that.

Right, sorry, I should have clarified that what we're interested in is
technically creating a new instance with the same disk contents, so
that's fine. Thanks for confirming that there is not a simpler way that
I've missed, I guess :)

> [snip: quoted thread trimmed]
>
> Create a volume-transfer VM/machine in each region.
> Attach the volume -> dd -> compress -> internet -> decompress -> new
> volume; attach (/boot with) the volume to the final machine.
> If you have frequent transfers you may keep the machines up for the
> next one.

Thanks for the advice, but this would involve transferring *a lot* more
data than if we leave it to the underlying storage :) As I mentioned,
the underlying storage can be taught about remote clusters and can be told
to create a remote snapshot of a volume; this will be the base on which
we will write our Cinder backup driver. So both my options 1 (do it "by
hand" with the underlying storage) and 2 (Cinder volume backup/restore)
would be preferable. A rough sketch of the backup/restore flow follows.
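
The user-visible flow would then be just the standard backup and
restore commands, with the cross-region part hidden inside the backup
driver (how the backup record becomes visible in the other region is
exactly what such a driver would have to solve; names are placeholders):

    # Source region: back up the (possibly in-use) volume.
    openstack volume backup create --name inst-root-bk --force inst-root

    # Destination region: restore the backup into a new volume.
    openstack volume backup restore <backup-id> <new-volume-id>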

> In case the storage is just on the compute node: snapshot -> glance
> download -> glance upload.

Right, as I mentioned in my description of the third option, this does
not really work with attached volumes (thus your "just on the compute node")
and as I mentioned before listing the options, the instances will almost
certainly have attached volumes.


Best regards,
Peter

--
Peter Penchev openstack-dev@storpool.com https://storpool.com/

Re: [nova][cinder] Migrate instances between regions or between clusters? [ In reply to ]
On Tue, Sep 18, 2018 at 2:09 PM, Peter Penchev <openstack-dev@storpool.com>
wrote:

> [snip: quoted thread trimmed]
>
> Thanks for the advice, but this would involve transferring *a lot* more
> data than if we leave it to the underlying storage :) As I mentioned,
> the underlying storage can be taught about remote clusters and can be told
> to create a remote snapshot of a volume; this will be the base on which
> we will write our Cinder backup driver. So both my options 1 (do it "by
> hand" with the underlying storage) and 2 (Cinder volume backup/restore)
> would be preferable.
>

Cinder might get a feature to `rescue` a volume in case someone
accidentally deleted the DB record or some other bad thing happened.
This would need to be an admin-only operation where you specify where
the volume is. If a new volume just `shows up` on the storage without
Cinder's knowledge, it could be rescued as well.

Between identical storage types, Cinder could probably also have an
admin-only API for transfers.

I am not sure volume backup/restore is really better across regions
than the above steps properly piped; it is very infrastructure
dependent, and bandwidth and latency across regions matter.

Direct storage usage is likely better than the pipe/dd in close
proximity, but with good internal networking over a longer distance
(external net) the difference will not be big; you might wait longer
on the OpenStack API/client than on the actual data transfer if the
size is small. The total internal network bandwidth nowadays is huge;
a little internal data move (storage->vm->internet->vm->storage) might
not even show up on the internal monitors ;-)
The internet is the high-latency part even if you have the best
internet connection in the world.

The light is not getting faster. ;-)


>
> > In case the storage is just on the compute node: snapshot -> glance
> > download -> glance upload.
>
> Right, as I mentioned in my description of the third option, this does
> not really work with attached volumes (thus your "just on the compute
> node") and as I mentioned before listing the options, the instances
> will almost certainly have attached volumes.

Yes, you would need to use both ways.

Re: [nova][cinder] Migrate instances between regions or between clusters? [ In reply to ]
On Tue, Sep 18, 2018 at 3:07 PM, Attila Fazekas <afazekas@redhat.com> wrote:

> [snip: full quote of the previous messages trimmed]
>
One other thing I forgot to mention: depending on your
compression/encryption/hashing method, you might hit a bottleneck on a
single CPU core. Splitting the image/volume into as many parts as you
have CPU cores might help - or use a parallel compressor, as sketched
below.
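
For example, replacing the single-threaded gzip in the pipeline
sketched earlier with pigz (or any other parallel compressor):

    # pigz spreads the compression work across all CPU cores, so the
    # transfer is no longer throttled by a single core.
    dd if=/dev/vdb bs=4M | pigz -c -p "$(nproc)" \
        | ssh transfer-vm.region-two.example.com 'pigz -dc | dd of=/dev/vdb bs=4M'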


Re: [nova][cinder] Migrate instances between regions or between clusters? [ In reply to ]
On Tue, Sep 18, 2018 at 03:07:45PM +0200, Attila Fazekas wrote:
> [snip: quoted thread trimmed]
>
> Cinder might get a feature to `rescue` a volume in case someone
> accidentally deleted the DB record or some other bad thing happened.
> This would need to be an admin-only operation where you specify where
> the volume is. If a new volume just `shows up` on the storage without
> Cinder's knowledge, it could be rescued as well.

Hmm, is this not what the Cinder "manage" command does?
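
That is, something like the following, where the host string and the
storage-side volume name are whatever the specific driver expects
(placeholders here):

    cinder manage --name rescued-volume --volume-type <type> \
        <host>@<backend>#<pool> <storage-side-volume-name>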

> Between identical storage types, Cinder could probably also have an
> admin-only API for transfers.
>
> I am not sure volume backup/restore is really better across regions
> than the above steps properly piped; it is very infrastructure
> dependent, and bandwidth and latency across regions matter.
[snip discussion]

Well, the reason my initial message said "assume the underlying storage
can do that" was that I did not want to go into marketing/advertisement
territory and say flat out that the StorPool storage system can do that :)

Best regards,
Peter

--
Peter Penchev openstack-dev@storpool.com https://storpool.com/

Re: [nova][cinder] Migrate instances between regions or between clusters? [ In reply to ]
On Tue, Sep 18, 2018 at 4:31 PM, Peter Penchev <openstack-dev@storpool.com>
wrote:

> [snip: quoted thread trimmed]
>
> Hmm, is this not what the Cinder "manage" command does?

Sounds like it does:
https://blueprints.launchpad.net/horizon/+spec/add-manage-unmanage-volume

