Mailing List Archive

[Nova][Glance] Nova imports flat images from base file despite ceph backend
Hi list,

this week I noticed something strange in our cloud (Ocata).

We use Ceph as the backend for nova, glance and cinder, and
everything really works like a charm. But from time to time we've
noticed that some instances take much longer to launch than others.
So I wanted to take a look at what's happening, turned on debug logs
and found that in some cases (I have no idea how to reproduce it yet)
an image is downloaded to /var/lib/nova/instances/_base and then
imported back into Ceph, which is obviously the reason for the delay.
The problem is that this new instance is not CoW, it's a flat rbd
image. Here are some relevant logs (instance_id:
65567fc1-017f-45dc-b0ee-570c44146119, image_id:
0da1ba0f-c504-45ea-b138-16026aec022b):

---cut here---
[...]
2018-10-04 11:46:39.189 10293 DEBUG nova.compute.manager
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:
65567fc1-017f-45dc-b0ee-570c44146119] Start spawning the instance on
the hypervisor. _build_and_run_instance
/usr/lib/python2.7/site-packages/nova/compute/manager.py:1929
[...]
2018-10-04 11:46:39.192 10293 INFO nova.virt.libvirt.driver
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:
65567fc1-017f-45dc-b0ee-570c44146119] Creating image
2018-10-04 11:46:39.220 10293 DEBUG
nova.virt.libvirt.storage.rbd_utils
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.241 10293 DEBUG
nova.virt.libvirt.storage.rbd_utils
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.245 10293 DEBUG oslo_concurrency.lockutils
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] Lock
"3c60237fbf59101d3411c4f795d0a72b82752e0b" acquired by
"nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.001s
inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2018-10-04 11:46:39.246 10293 DEBUG oslo_concurrency.lockutils
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] Lock
"3c60237fbf59101d3411c4f795d0a72b82752e0b" released by
"nova.virt.libvirt.imagebackend.fetch_func_sync" :: held 0.001s inner
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:282
2018-10-04 11:46:39.266 10293 DEBUG
nova.virt.libvirt.storage.rbd_utils
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image
65567fc1-017f-45dc-b0ee-570c44146119_disk does not exist __init__
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 11:46:39.269 10293 DEBUG oslo_concurrency.processutils
[req-85d728c3-5da1-4b37-add7-b956b4b2bb3d
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] Running cmd (subprocess): rbd
import --pool images
/var/lib/nova/instances/_base/3c60237fbf59101d3411c4f795d0a72b82752e0b
65567fc1-017f-45dc-b0ee-570c44146119_disk --image-format=2 --id
openstack --conf /etc/ceph/ceph.conf execute
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:355
[...]

# no parent data
control:~ # rbd info images/65567fc1-017f-45dc-b0ee-570c44146119_disk
rbd image '65567fc1-017f-45dc-b0ee-570c44146119_disk':
size 6144 MB in 1536 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.c8f88a6b8b4567
format: 2
features: layering, exclusive-lock, object-map
flags:
create_timestamp: Thu Oct 4 11:46:39 2018

---cut here---

###################################################################

For comparison, I moved the respective base file aside and tried to
spawn a new instance from the same glance image:

---cut here---
[...]
2018-10-04 10:30:29.412 2336 DEBUG nova.compute.manager
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:
91d0b930-97b0-4dd0-81b4-929599b7c997] Start spawning the instance on
the hypervisor. _build_and_run_instance
/usr/lib/python2.7/site-packages/nova/compute/manager.py:1929
[...]
2018-10-04 10:30:29.415 2336 INFO nova.virt.libvirt.driver
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] [instance:
91d0b930-97b0-4dd0-81b4-929599b7c997] Creating image
2018-10-04 10:30:29.435 2336 DEBUG nova.virt.libvirt.storage.rbd_utils
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image
91d0b930-97b0-4dd0-81b4-929599b7c997_disk does not exist __init__
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 10:30:29.455 2336 DEBUG nova.virt.libvirt.storage.rbd_utils
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] rbd image
91d0b930-97b0-4dd0-81b4-929599b7c997_disk does not exist __init__
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:77
2018-10-04 10:30:29.492 2336 DEBUG oslo_concurrency.lockutils
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] Lock
"3c60237fbf59101d3411c4f795d0a72b82752e0b" acquired by
"nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.035s
inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:270
2018-10-04 10:30:29.527 2336 DEBUG nova.virt.libvirt.imagebackend
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] Image locations are: [{u'url':
u'rbd://655cb05a-435a-41ba-83d9-8549f7c36167/images/0da1ba0f-c504-45ea-b138-16026aec022b/snap', u'metadata': {}}, {'url': u'rbd://655cb05a-435a-41ba-83d9-8549f7c36167/images/0da1ba0f-c504-45ea-b138-16026aec022b/snap', 'metadata': {}}] clone
/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py:903
2018-10-04 10:30:29.681 2336 DEBUG nova.virt.libvirt.imagebackend
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] Selected location: {u'url':
u'rbd://655cb05a-435a-41ba-83d9-8549f7c36167/images/0da1ba0f-c504-45ea-b138-16026aec022b/snap', u'metadata': {}} clone
/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py:912
2018-10-04 10:30:29.682 2336 DEBUG nova.virt.libvirt.storage.rbd_utils
[req-942ba103-1932-4adf-b9b2-670e1a2fc126
df7b63e69da3b1ee2be3d79342e7992f3620beddbdac7768dcb738105e74301e
2e3c3f3822124a3fa9fd905164f519ae - - -] cloning
images/0da1ba0f-c504-45ea-b138-16026aec022b@snap to
None/91d0b930-97b0-4dd0-81b4-929599b7c997_disk clone
/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py:236
[...]


# instance has parent data
control:~ # rbd info images/91d0b930-97b0-4dd0-81b4-929599b7c997_disk
rbd image '91d0b930-97b0-4dd0-81b4-929599b7c997_disk':
size 8192 MB in 1024 objects
order 23 (8192 kB objects)
block_name_prefix: rbd_data.c8abcb6b8b4567
format: 2
features: layering, exclusive-lock, object-map
flags:
create_timestamp: Thu Oct 4 10:30:29 2018
parent: images/0da1ba0f-c504-45ea-b138-16026aec022b@snap
overlap: 6144 MB

---cut here---
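
As a quick way to spot affected instances, something like this should
work (pool name and cephx user are taken from the logs above, adjust
to your environment; instance disks end in "_disk"):

for img in $(rbd ls images --id openstack --conf /etc/ceph/ceph.conf \
             | grep '_disk$'); do
  # a CoW clone shows a "parent:" line in rbd info; flat imports don't
  rbd info images/"$img" --id openstack --conf /etc/ceph/ceph.conf \
    | grep -q 'parent:' || echo "flat (no parent): $img"
done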

Why is there a local copy of the image in the first place? Obviously
nova doesn't need it and creates clones successfully without it.
I would be thankful for any explanation of this behaviour.

Regards,
Eugen


Re: [Nova][Glance] Nova imports flat images from base file despite ceph backend
Hi,

I just wanted to follow up on this for documentation purposes.
Although I still don't have all the answers, there's something I can
explain.

When I upload a new image (ISO) to create a new base image for
glance and use "--disk-format iso", this leads to the described
behavior; nova will report something like:

> rbd image clone requires image format to be 'raw' but image
> rbd://<Cluster_ID>/images/<IMAGE_ID>/snap is 'iso' is_cloneable

If I launch a new instance from that ISO, nova will download it to
the filesystem (/var/lib/nova/instances/_base), which takes some
time. Then I attach an empty volume, finish the installation, destroy
the instance and upload the volume to glance; that new glance image
gets the default "disk-format = raw".

Now when I launch an instance from this new image (raw) I usually get
a CoW clone on the RBD layer. The _base file of the ISO will
eventually be removed by nova once it is old enough. This is how I
always created new instances, not knowing that the ISOs should have
the "raw" disk-format. Despite the wrong format of the ISOs my
procedure usually leads to CoW clones anyway, since I upload volumes
to glance.
I tried to reproduce it with the exact same workflow, but everything
worked as expected (including the download of the ISO image to the
local filesystem, which I now know is expected).

So it's still unclear why nova downloaded a raw glance image to the
local filesystem during the previous attempt.

I always knew that with Ceph as backend it's recommended to use raw
images but I always assumed the "disk-format" was not more than a
display option. Well, now I know that, but this still doesn't explain
the downloaded base image.
If anyone has an idea how to reproduce such behavior or an
explanation, I'd love to hear it.

Regards,
Eugen


Zitat von Eugen Block <eblock@nde.ag>:

> Hi list,
>
> this week I noticed something strange in our cloud (Ocata).
>
> We use Ceph as the backend for nova, glance and cinder, and
> everything really works like a charm. But from time to time we've
> noticed that some instances take much longer to launch than others.
> So I wanted to take a look at what's happening, turned on debug
> logs and found that in some cases (I have no idea how to reproduce
> it yet) an image is downloaded to /var/lib/nova/instances/_base and
> then imported back into Ceph, which is obviously the reason for the
> delay. The problem is that this new instance is not CoW, it's a
> flat rbd image. Here are some relevant logs (instance_id:
> 65567fc1-017f-45dc-b0ee-570c44146119, image_id:
> 0da1ba0f-c504-45ea-b138-16026aec022b):
>
> [...]
>
> Why is there a local copy of the image in the first place? Obviously
> nova doesn't need it and creates clones successfully without it.
> I would be thankful for any explanation of this behaviour.
>
> Regards,
> Eugen




Re: [Nova][Glance] Nova imports flat images from base file despite ceph backend
On Tue, 09 Oct 2018 08:01:01 +0000, Eugen Block wrote:
> So it's still unclear why nova downloaded a raw glance image to the
> local filesystem during the previous attempt.
>
> I always knew that with Ceph as backend it's recommended to use raw
> images but I always assumed the "disk-format" was not more than a
> display option. Well, now I know that, but this still doesn't explain
> the downloaded base image.
> If anyone has an idea how to reproduce such behavior or an
> explanation, I'd love to hear it.

Right, in order to get the ceph CoW behavior, you must use the RAW
glance image format [1]. You also need to have created the image as
'raw' before you upload it to glance (qemu-img info should show the
file format as 'raw').

If you have done this, then you shouldn't see any image downloaded to
the local filesystem. Is it possible that your image did not have file
format 'raw' before you uploaded it to glance?
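
A quick sanity check before uploading could look like this (the file
name is just an example):

$ qemu-img info cloudimage.img | grep 'file format'
file format: raw
$ openstack image create --disk-format raw --container-format bare \
      --file cloudimage.img cloudimage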

Cheers,
-melanie

[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack




Re: [Nova][Glance] Nova imports flat images from base file despite ceph backend
Hi Melanie,

thanks for your response.

I would consider this thread as closed. After I learned that nova
imports flat images if the image's disk-format is not "raw" I tested
some different scenarios to understand more about this topic. I still
couldn't explain why nova did that in the specific case as I was sure
the image's format was raw (the image has been deleted in the meantime
but I can see in the database that the image's disk-format was indeed
"raw").

But a couple of days ago I encountered the same problem. I tried to
upload an image from volume via Horizon dashboard (where a dropdown
exists to choose disk-format) and the preselected value was indeed
"qcow2" instead of "raw". I have no idea *why* it was different from
the default "raw", but this could have happened last week, too, although
according to the database entry it should not have happened. Anyway,
now I know the reason why some ephemeral disks are flat, I just can't
tell how Horizon selects the format, but that's enough research for
now. If I find out more I'll report back.

Thanks again!
Regards,
Eugen


Zitat von melanie witt <melwittt@gmail.com>:

> On Tue, 09 Oct 2018 08:01:01 +0000, Eugen Block wrote:
>> So it's still unclear why nova downloaded a raw glance image to the
>> local filesystem during the previous attempt.
>>
>> I always knew that with Ceph as backend it's recommended to use raw
>> images but I always assumed the "disk-format" was not more than a
>> display option. Well, now I know that, but this still doesn't explain
>> the downloaded base image.
>> If anyone has an idea how to reproduce such behavior or an
>> explanation, I'd love to hear it.
>
> Right, in order to get the ceph CoW behavior, you must use RAW
> glance image format [1]. You also need to have created the image as
> 'raw' before you upload to glance (qemu-img info should show the
> file format as 'raw').
>
> If you have done this, then you shouldn't see any image downloaded
> to the local filesystem. Is it possible that your image did not have
> file format 'raw' before you uploaded it to glance?
>
> Cheers,
> -melanie
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack
>
>
>
>




Re: [Nova][Glance] Nova imports flat images from base file despite ceph backend
On Fri, 12 Oct 2018 21:50:27 +0000, Eugen Block wrote:
> I would consider this thread as closed. After I learned that nova
> imports flat images if the image's disk-format is not "raw" I tested
> some different scenarios to understand more about this topic. I still
> couldn't explain why nova did that in the specific case as I was sure
> the image's format was raw (the image has been deleted in the meantime
> but I can see in the database that the image's disk-format was indeed
> "raw").

Thanks for sharing that info. The reason I suggested that maybe the
original image wasn't created with 'raw' format (for example, with
'qemu-img create -f raw ...') was because I have made that mistake
in the past. :P It is possible to upload an image to glance that is
not format 'raw' and also specify to glance that it _is_ indeed 'raw'
(I used the openstack CLI when I did the image upload). In that case,
you will see nova download the image instead of doing a ceph CoW
clone.
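
Sketched with placeholder names, the mistake looks like this:

$ qemu-img create -f qcow2 mislabeled.img 8G
$ openstack image create --disk-format raw --container-format bare \
      --file mislabeled.img mislabeled
# glance now records disk_format=raw although the file is really
# qcow2, so nova downloads and imports instead of CoW cloning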

> But a couple of days ago I encountered the same problem. I tried to
> upload an image from volume via Horizon dashboard (where a dropdown
> exists to choose disk-format) and the preselected value was indeed
> "qcow2" instead of "raw". I have no idea*why* it was different from
> the default "raw" but this could have happened last week, too, althoug
> according to the database entry it should not have happened. Anyway,
> now I know the reason why some ephemeral disks are flat, I just can't
> tell how Horizon selects the format, but that's enough research for
> now. If I find out more I'll report back.

Sure, let us know if you notice anything else unusual.

Cheers,
-melanie




Re: [Nova][Glance] Nova imports flat images from base file despite ceph backend
Hi all,
I do not have it handy now, but you can verify whether the image is indeed raw or qcow2.

As soon as I get home I will dig up the command and pass it on. I have seen cases where an image's extension suggests it is raw when it is not.

Remo

> On 12 Oct 2018, at 17:17, melanie witt <melwittt@gmail.com> wrote:
>
>> On Fri, 12 Oct 2018 21:50:27 +0000, Eugen Block wrote:
>> I would consider this thread as closed. After I learned that nova
>> imports flat images if the image's disk-format is not "raw" I tested
>> some different scenarios to understand more about this topic. I still
>> couldn't explain why nova did that in the specific case as I was sure
>> the image's format was raw (the image has been deleted in the meantime
>> but I can see in the database that the image's disk-format was indeed
>> "raw").
>
> Thanks for sharing that info. The reason I suggested that maybe the
> original image wasn't created with 'raw' format (for example, with
> 'qemu-img create -f raw ...') was because I have made that mistake
> in the past. :P It is possible to upload an image to glance that is
> not format 'raw' and also specify to glance that it _is_ indeed
> 'raw' (I used the openstack CLI when I did the image upload). In
> that case, you will see nova download the image instead of doing a
> ceph CoW clone.
>
>> But a couple of days ago I encountered the same problem. I tried to
>> upload an image from volume via Horizon dashboard (where a dropdown
>> exists to choose disk-format) and the preselected value was indeed
>> "qcow2" instead of "raw". I have no idea*why* it was different from
>> the default "raw" but this could have happened last week, too, althoug
>> according to the database entry it should not have happened. Anyway,
>> now I know the reason why some ephemeral disks are flat, I just can't
>> tell how Horizon selects the format, but that's enough research for
>> now. If I find out more I'll report back.
>
> Sure, let us know if you notice anything else unusual.
>
> Cheers,
> -melanie
>
>
>
>


Re: [Nova][Glance] Nova imports flat images from base file despite ceph backend
On Fri, 12 Oct 2018 20:06:04 -0700, Remo Mattei wrote:
> I do not have it handy now, but you can verify whether the image is indeed raw or qcow2.
>
> As soon as I get home I will dig up the command and pass it on. I have seen cases where an image's extension suggests it is raw when it is not.

You could try 'qemu-img info <image>' [1] and get output like this;
note the "file format" line:

$ qemu-img info test.vmdk
(VMDK) image open: flags=0x2 filename=test.vmdk
image: test.vmdk
file format: vmdk
virtual size: 20M (20971520 bytes)
disk size: 17M
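
If the file format turns out not to be 'raw', it can be converted
before uploading, for example:

$ qemu-img convert -f vmdk -O raw test.vmdk test.raw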

[1] https://en.wikibooks.org/wiki/QEMU/Images#Getting_information

-melanie



