Mailing List Archive

boot order with multiple attachments
Hi colleagues,

is there any mechanism to ensure the boot disk when attaching more than
one volume to a server? At the moment, I can't find a way to make it
predictable.

I have two bootable images with the following properties:
1) hw_boot_menu='true', hw_disk_bus='scsi', hw_qemu_guest_agent='yes',
hw_scsi_model='virtio-scsi', img_hide_hypervisor_id='true',
locations='[{u'url': u'swift+config:...', u'metadata': {}}]'

which corresponds to the following volume:

- attachments: [{u'server_id': u'...', u'attachment_id': u'...',
u'attached_at': u'...', u'host_name': u'...', u'volume_id':
u'<VOLUME1>', u'device': u'/dev/sda', u'id': u'...'}]
- volume_image_metadata: {u'checksum': u'...', u'hw_qemu_guest_agent':
u'yes', u'disk_format': u'raw', u'image_name': u'bionic-Qpub',
u'hw_scsi_model': u'virtio-scsi', u'image_id': u'...', u'hw_boot_menu':
u'true', u'min_ram': u'0', u'container_format': u'bare', u'min_disk':
u'0', u'img_hide_hypervisor_id': u'true', u'hw_disk_bus': u'scsi',
u'size': u'...'}

and the second image:
2) hw_disk_bus='scsi', hw_qemu_guest_agent='yes',
hw_scsi_model='virtio-scsi', img_hide_hypervisor_id='true',
locations='[{u'url': u'cinder://...', u'metadata': {}}]'

which corresponds to the following volume:

- attachments: [{u'server_id': u'...', u'attachment_id': u'...',
u'attached_at': u'...', u'host_name': u'...', u'volume_id':
u'<VOLUME2>', u'device': u'/dev/sdb', u'id': u'...'}]
- volume_image_metadata: {u'checksum': u'...', u'hw_qemu_guest_agent':
u'yes', u'disk_format': u'raw', u'image_name': u'xenial',
u'hw_scsi_model': u'virtio-scsi', u'image_id': u'...', u'min_ram': u'0',
u'container_format': u'bare', u'min_disk': u'0',
u'img_hide_hypervisor_id': u'true', u'hw_disk_bus': u'scsi', u'size':
u'...'}

Using Heat, I'm creating the following block_device_mapping_v2 scheme:

block_device_mapping_v2:
        - volume_id: <VOLUME1>
          delete_on_termination: false
          device_type: disk
          disk_bus: scsi
          boot_index: 0
        - volume_id: <VOLUME2>
          delete_on_termination: false
          device_type: disk
          disk_bus: scsi
          boot_index: -1

which maps to the following nova-api debug log:

Action: 'create', calling method: <bound method ServersController.create
of <nova.api.openstack.compute.servers.ServersController object at
0x7f6b08dd4890>>, body: {"server": {"name": "jex-n1", "imageRef": "",
"block_device_mapping_v2": [{"boot_index": 0, "uuid": "<VOLUME1>",
"disk_bus": "scsi", "source_type": "volume", "device_type": "disk",
"destination_type": "volume", "delete_on_termination": false},
{"boot_index": -1, "uuid": "<VOLUME2>", "disk_bus": "scsi",
"source_type": "volume", "device_type": "disk", "destination_type":
"volume", "delete_on_termination": false}], "flavorRef":
"4b3da838-3d81-461a-b946-d3613fb6f4b3", "user_data": "...", "max_count":
1, "min_count": 1, "networks": [{"port":
"9044f884-1a3d-4dc6-981e-f585f5e45dd1"}], "config_drive": true}}
_process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:604
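
For completeness, the surrounding server resource looks roughly like this
(just a sketch; only the properties visible in the request above, with the
resource name "server" used as a placeholder and everything else trimmed):

server:
  type: OS::Nova::Server
  properties:
    name: jex-n1
    # no image property - booting from volume (imageRef is empty above)
    flavor: 4b3da838-3d81-461a-b946-d3613fb6f4b3    # flavorRef from the request
    config_drive: true
    user_data: ...                                  # elided, as in the request
    networks:
      - port: 9044f884-1a3d-4dc6-981e-f585f5e45dd1
    block_device_mapping_v2:
      # the two entries shown above: <VOLUME1> with boot_index 0,
      # <VOLUME2> with boot_index -1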

Regardless of the boot_index values, the server boots from VOLUME2
(/dev/sdb), even though VOLUME1 is also attached as /dev/sda.

I'm using Queens. Where am I going wrong?

Thank you.

--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Re: boot order with multiple attachments
Hi again,

there is a similar case - https://bugs.launchpad.net/nova/+bug/1570107 -
but I get the same result (booting from VOLUME2) regardless of whether I
include the device_type/disk_bus properties in the BDM description or not.
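
In other words, even the stripped-down variant behaves the same (a sketch,
with the same <VOLUME1>/<VOLUME2> placeholders as above):

block_device_mapping_v2:
        - volume_id: <VOLUME1>            # intended boot disk
          delete_on_termination: false
          boot_index: 0
        - volume_id: <VOLUME2>            # data disk, not meant to be booted
          delete_on_termination: false
          boot_index: -1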

Any ideas on how to solve this issue?

Thanks.

On 9/11/18 10:58 AM, Volodymyr Litovka wrote:
> Hi colleagues,
>
> is there any mechanism to ensure the boot disk when attaching more than
> one volume to a server? At the moment, I can't find a way to make it
> predictable.
>
> [...]

--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison


Re: boot order with multiple attachments
Hi Volodymyr,

I didn't really try to reproduce this, but here's an excerpt from a
template we have been using successfully:

---cut here---
[...]
  vm-vda:
    type: OS::Cinder::Volume
    properties:
      description: VM vda
      image: image-vda
      name: disk-vda
      size: 100
  vm-vdb:
    type: OS::Cinder::Volume
    properties:
      description: VM vdb
      image: image-vdb
      name: disk-vdb
      size: 120
  vm:
    type: OS::Nova::Server
    depends_on: [vm_subnet, vm_floating_port, vm-vda, vm-vdb, service]
    properties:
      flavor: big-flavor
      block_device_mapping:
        - { device_name: "vda", volume_id: { get_resource: vm-vda },
            delete_on_termination: "true" }
        - { device_name: "vdb", volume_id: { get_resource: vm-vdb },
            delete_on_termination: "true" }
      networks:
        [...]
---cut here---

So basically, this way you tell the instance which volume becomes
/dev/vda, /dev/vdb, etc. We don't use boot_index for this.
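
If you prefer to stay with block_device_mapping_v2, the same idea should
translate roughly like this (untested sketch; the device_name and
boot_index keys are taken from the OS::Nova::Server property reference,
and the volume resources are just placeholders from the template above):

      block_device_mapping_v2:
        - volume_id: { get_resource: vm-vda }   # or a literal volume UUID
          device_name: vda                      # pin the device name, as above
          boot_index: 0                         # mark this entry as the boot disk
          delete_on_termination: false
        - volume_id: { get_resource: vm-vdb }
          device_name: vdb
          boot_index: -1                        # exclude from boot ordering
          delete_on_termination: false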

Hope this helps!

Regards,
Eugen


Quoting Volodymyr Litovka <doka.ua@gmx.com>:

> Hi again,
>
> there is a similar case - https://bugs.launchpad.net/nova/+bug/1570107 -
> but I get the same result (booting from VOLUME2) regardless of whether I
> include the device_type/disk_bus properties in the BDM description or not.
>
> Any ideas on how to solve this issue?
>
> [...]



