Mailing List Archive

unexpected distribution of compute instances in queens
Hi,

I am deploying OpenStack with 3 compute nodes, but I am seeing an abnormal
distribution of instances: instances are only deployed on a specific
compute node and are not distributed among the other compute nodes.

This is my nova.conf from the compute node (Jinja2-based template).

[DEFAULT]
osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=true
memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
[cells]
[cinder]
os_region_name = RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://{{ vip }}:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://{{ vip }}:5000/v3
memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {{ nova_pw }}
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://{{ vip }}:9696
auth_url = http://{{ vip }}:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {{ neutron_pw }}
service_metadata_proxy = true
metadata_proxy_shared_secret = {{ metadata_secret }}
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://{{ vip }}:5000/v3
username = placement
password = {{ placement_pw }}
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
keymap=en-us
novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
[workarounds]
[wsgi]
[xenserver]
[xvp]
[placement_database]
connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement

What is the problem? I have checked openstack-nova-scheduler on the
controller node, but it is running well with only a warning:

nova-scheduler[19255]:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported

The result I want is that instances are distributed across all compute nodes.
Thank you.

--

Regards,
Zufar Dhiyaulhaq
Re: unexpected distribution of compute instances in queens [ In reply to ]
On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote:
> Hi,
>
> I am deploying OpenStack with 3 compute nodes, but I am seeing an abnormal distribution of instances: instances are
> only deployed on a specific compute node and not distributed among the other compute nodes.
>
> this is my nova.conf from the compute node. (template jinja2 based)

hi, the default behavior of nova used to be spread not pack and i believe it still is.
the default behavior with placement however is closer to a packing behavior, as
allocation candidates are returned in an undefined but deterministic order.

on a busy cloud this does not strictly pack instances, but on a quiet cloud it effectively does.
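To illustrate why a deterministic candidate order effectively packs a quiet cloud while shuffling spreads it, here is a toy Python sketch (the host names and the `pick` helper are hypothetical stand-ins, not nova code):

```python
import random

hosts = ["compute1", "compute2", "compute3"]

def pick(candidates, randomize=False, rng=None):
    # Stand-in for placement: candidates come back in an undefined
    # but deterministic order (modelled here as sorted order).
    ordered = sorted(candidates)
    if randomize:
        # Models randomize_allocation_candidates shuffling that order.
        (rng or random).shuffle(ordered)
    return ordered[0]

# Quiet cloud, no randomization: every boot sees the same order,
# so every instance lands on the same host.
print({pick(hosts) for _ in range(10)})  # {'compute1'}

# With randomization the first candidate varies between boots.
rng = random.Random(0)
print({pick(hosts, randomize=True, rng=rng) for _ in range(10)})
```

On a busy cloud the picture differs because filters and weighers see changing resource usage, which is why packing only shows up clearly when the cloud is idle.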

you can try enabling randomisation of the allocation candidates by setting this config option in
the nova.conf of the scheduler to true:
https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates

on that note, can you provide the nova.conf the scheduler uses, instead of the compute node nova.conf?
if you have not overridden any of the nova defaults, the ram and cpu weighers should spread instances within
the allocation candidates returned by placement.
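A minimal sketch of where that option goes (the anchor in the linked docs places it in the `[placement]` section of the scheduler's nova.conf):

```ini
# nova.conf on the controller/scheduler node
[placement]
randomize_allocation_candidates = true
```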

>
> [DEFAULT]
> osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> enabled_apis = osapi_compute,metadata
> transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
> controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
> my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> use_neutron = True
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
> [api]
> auth_strategy = keystone
> [api_database]
> connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
> [barbican]
> [cache]
> backend=oslo_cache.memcache_pool
> enabled=true
> memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
> [cells]
> [cinder]
> os_region_name = RegionOne
> [compute]
> [conductor]
> [console]
> [consoleauth]
> [cors]
> [crypto]
> [database]
> connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
> [devices]
> [ephemeral_storage_encryption]
> [filter_scheduler]
> [glance]
> api_servers = http://{{ vip }}:9292
> [guestfs]
> [healthcheck]
> [hyperv]
> [ironic]
> [key_manager]
> [keystone]
> [keystone_authtoken]
> auth_url = http://{{ vip }}:5000/v3
> memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = nova
> password = {{ nova_pw }}
> [libvirt]
> [matchmaker_redis]
> [metrics]
> [mks]
> [neutron]
> url = http://{{ vip }}:9696
> auth_url = http://{{ vip }}:35357
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> region_name = RegionOne
> project_name = service
> username = neutron
> password = {{ neutron_pw }}
> service_metadata_proxy = true
> metadata_proxy_shared_secret = {{ metadata_secret }}
> [notifications]
> [osapi_v21]
> [oslo_concurrency]
> lock_path = /var/lib/nova/tmp
> [oslo_messaging_amqp]
> [oslo_messaging_kafka]
> [oslo_messaging_notifications]
> [oslo_messaging_rabbit]
> [oslo_messaging_zmq]
> [oslo_middleware]
> [oslo_policy]
> [pci]
> [placement]
> os_region_name = RegionOne
> project_domain_name = Default
> project_name = service
> auth_type = password
> user_domain_name = Default
> auth_url = http://{{ vip }}:5000/v3
> username = placement
> password = {{ placement_pw }}
> [quota]
> [rdp]
> [remote_debug]
> [scheduler]
> discover_hosts_in_cells_interval = 300
> [serial_console]
> [service_user]
> [spice]
> [upgrade_levels]
> [vault]
> [vendordata_dynamic_auth]
> [vmware]
> [vnc]
> enabled = true
> keymap=en-us
> novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
> novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> [workarounds]
> [wsgi]
> [xenserver]
> [xvp]
> [placement_database]
> connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement
>
> What is the problem? I have checked openstack-nova-scheduler on the controller node, but it is running well with only
> a warning:
>
> nova-scheduler[19255]: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning:
> Configuration option(s) ['use_tpool'] not supported
>
> The result I want is that instances are distributed across all compute nodes.
> Thank you.
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: unexpected distribution of compute instances in queens [ In reply to ]
Hi Smooney,

Thank you for your help. I tried enabling randomization, but it is not
working: the instance I created is still scheduled on the same node. Below is my
nova configuration (with randomization added per your suggestion) from the
master node (Jinja2-based template).

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=true
memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://{{ vip }}:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://{{ vip }}:5000/v3
memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {{ nova_pw }}
[libvirt]
virt_type = kvm
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://{{ vip }}:9696
auth_url = http://{{ vip }}:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {{ neutron_pw }}
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://{{ vip }}:5000/v3
username = placement
password = {{ placement_pw }}
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
keymap=en-us
server_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
server_proxyclient_address = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]

Thank you,

Best Regards,
Zufar Dhiyaulhaq

On Mon, Nov 26, 2018 at 11:13 PM Sean Mooney <smooney@redhat.com> wrote:

> On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote:
> > Hi,
> >
> > I am deploying OpenStack with 3 compute nodes, but I am seeing an
> > abnormal distribution of instances: instances are only deployed on a
> > specific compute node and not distributed among the other compute nodes.
> >
> > this is my nova.conf from the compute node. (template jinja2 based)
>
> hi, the default behavior of nova used to be spread not pack and i believe
> it still is.
> the default behavior with placement however is closer to a packing
> behavior, as
> allocation candidates are returned in an undefined but deterministic order.
>
> on a busy cloud this does not strictly pack instances, but on a quiet cloud
> it effectively does.
>
> you can try enabling randomisation of the allocation candidates by
> setting this config option in
> the nova.conf of the scheduler to true:
>
> https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates
>
> on that note, can you provide the nova.conf the scheduler uses,
> instead of the compute node nova.conf?
> if you have not overridden any of the nova defaults, the ram and cpu weighers
> should spread instances within
> the allocation candidates returned by placement.
>
> >
> > [DEFAULT]
> > osapi_compute_listen = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> > metadata_listen = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> > enabled_apis = osapi_compute,metadata
> > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{
> controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
> > controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
> controller3_ip_man }}:5672
> > my_ip = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> > use_neutron = True
> > firewall_driver = nova.virt.firewall.NoopFirewallDriver
> > [api]
> > auth_strategy = keystone
> > [api_database]
> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
> > [barbican]
> > [cache]
> > backend=oslo_cache.memcache_pool
> > enabled=true
> > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man
> }}:11211,{{ controller3_ip_man }}:11211
> > [cells]
> > [cinder]
> > os_region_name = RegionOne
> > [compute]
> > [conductor]
> > [console]
> > [consoleauth]
> > [cors]
> > [crypto]
> > [database]
> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
> > [devices]
> > [ephemeral_storage_encryption]
> > [filter_scheduler]
> > [glance]
> > api_servers = http://{{ vip }}:9292
> > [guestfs]
> > [healthcheck]
> > [hyperv]
> > [ironic]
> > [key_manager]
> > [keystone]
> > [keystone_authtoken]
> > auth_url = http://{{ vip }}:5000/v3
> > memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man
> }}:11211,{{ controller3_ip_man }}:11211
> > auth_type = password
> > project_domain_name = default
> > user_domain_name = default
> > project_name = service
> > username = nova
> > password = {{ nova_pw }}
> > [libvirt]
> > [matchmaker_redis]
> > [metrics]
> > [mks]
> > [neutron]
> > url = http://{{ vip }}:9696
> > auth_url = http://{{ vip }}:35357
> > auth_type = password
> > project_domain_name = default
> > user_domain_name = default
> > region_name = RegionOne
> > project_name = service
> > username = neutron
> > password = {{ neutron_pw }}
> > service_metadata_proxy = true
> > metadata_proxy_shared_secret = {{ metadata_secret }}
> > [notifications]
> > [osapi_v21]
> > [oslo_concurrency]
> > lock_path = /var/lib/nova/tmp
> > [oslo_messaging_amqp]
> > [oslo_messaging_kafka]
> > [oslo_messaging_notifications]
> > [oslo_messaging_rabbit]
> > [oslo_messaging_zmq]
> > [oslo_middleware]
> > [oslo_policy]
> > [pci]
> > [placement]
> > os_region_name = RegionOne
> > project_domain_name = Default
> > project_name = service
> > auth_type = password
> > user_domain_name = Default
> > auth_url = http://{{ vip }}:5000/v3
> > username = placement
> > password = {{ placement_pw }}
> > [quota]
> > [rdp]
> > [remote_debug]
> > [scheduler]
> > discover_hosts_in_cells_interval = 300
> > [serial_console]
> > [service_user]
> > [spice]
> > [upgrade_levels]
> > [vault]
> > [vendordata_dynamic_auth]
> > [vmware]
> > [vnc]
> > enabled = true
> > keymap=en-us
> > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
> > novncproxy_host = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> > [workarounds]
> > [wsgi]
> > [xenserver]
> > [xvp]
> > [placement_database]
> > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement
> >
> > What is the problem? I have checked openstack-nova-scheduler on the
> > controller node, but it is running well with only a warning:
> >
> > nova-scheduler[19255]:
> /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
> NotSupportedWarning:
> > Configuration option(s) ['use_tpool'] not supported
> >
> > The result I want is that instances are distributed across all compute nodes.
> > Thank you.
> >
>
Re: unexpected distribution of compute instances in queens [ In reply to ]
Hi Smooney,
Sorry for the last reply; I attached the wrong configuration file. This is
my nova configuration (with randomization added per your suggestion) from the
master node (Jinja2-based template).

[DEFAULT]
osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=true
memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
[cells]
[cinder]
os_region_name = RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://{{ vip }}:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://{{ vip }}:5000/v3
memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {{ nova_pw }}
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://{{ vip }}:9696
auth_url = http://{{ vip }}:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {{ neutron_pw }}
service_metadata_proxy = true
metadata_proxy_shared_secret = {{ metadata_secret }}
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://{{ vip }}:5000/v3
username = placement
password = {{ placement_pw }}
randomize_allocation_candidates = true
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
keymap=en-us
novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
[workarounds]
[wsgi]
[xenserver]
[xvp]
[placement_database]
connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement

Thank you

Best Regards,
Zufar Dhiyaulhaq


On Tue, Nov 27, 2018 at 4:55 PM Zufar Dhiyaulhaq <zufardhiyaulhaq@gmail.com>
wrote:

> Hi Smooney,
>
> Thank you for your help. I tried enabling randomization, but it is not
> working: the instance I created is still scheduled on the same node. Below is my
> nova configuration (with randomization added per your suggestion) from the
> master node (Jinja2-based template).
>
> [DEFAULT]
> enabled_apis = osapi_compute,metadata
> transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man
> }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man
> }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
> my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][
> 'address'] }}
> use_neutron = True
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
> [api]
> auth_strategy = keystone
> [api_database]
> [barbican]
> [cache]
> backend=oslo_cache.memcache_pool
> enabled=true
> memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man
> }}:11211,{{ controller3_ip_man }}:11211
> [cells]
> [cinder]
> [compute]
> [conductor]
> [console]
> [consoleauth]
> [cors]
> [crypto]
> [database]
> [devices]
> [ephemeral_storage_encryption]
> [filter_scheduler]
> [glance]
> api_servers = http://{{ vip }}:9292
> [guestfs]
> [healthcheck]
> [hyperv]
> [ironic]
> [key_manager]
> [keystone]
> [keystone_authtoken]
> auth_url = http://{{ vip }}:5000/v3
> memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man
> }}:11211,{{ controller3_ip_man }}:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = nova
> password = {{ nova_pw }}
> [libvirt]
> virt_type = kvm
> [matchmaker_redis]
> [metrics]
> [mks]
> [neutron]
> url = http://{{ vip }}:9696
> auth_url = http://{{ vip }}:35357
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> region_name = RegionOne
> project_name = service
> username = neutron
> password = {{ neutron_pw }}
> [notifications]
> [osapi_v21]
> [oslo_concurrency]
> lock_path = /var/lib/nova/tmp
> [oslo_messaging_amqp]
> [oslo_messaging_kafka]
> [oslo_messaging_notifications]
> [oslo_messaging_rabbit]
> [oslo_messaging_zmq]
> [oslo_middleware]
> [oslo_policy]
> [pci]
> [placement]
> os_region_name = RegionOne
> project_domain_name = Default
> project_name = service
> auth_type = password
> user_domain_name = Default
> auth_url = http://{{ vip }}:5000/v3
> username = placement
> password = {{ placement_pw }}
> [quota]
> [rdp]
> [remote_debug]
> [scheduler]
> [serial_console]
> [service_user]
> [spice]
> [upgrade_levels]
> [vault]
> [vendordata_dynamic_auth]
> [vmware]
> [vnc]
> enabled = True
> keymap=en-us
> server_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][
> 'address'] }}
> server_proxyclient_address = {{ hostvars[inventory_hostname][
> 'ansible_ens3f1']['ipv4']['address'] }}
> novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
> [workarounds]
> [wsgi]
> [xenserver]
> [xvp]
>
> Thank you,
>
> Best Regards,
> Zufar Dhiyaulhaq
>
> On Mon, Nov 26, 2018 at 11:13 PM Sean Mooney <smooney@redhat.com> wrote:
>
>> On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote:
>> > Hi,
>> >
>> > I am deploying OpenStack with 3 compute nodes, but I am seeing an
>> > abnormal distribution of instances: instances are only deployed on a
>> > specific compute node and not distributed among the other compute nodes.
>> >
>> > this is my nova.conf from the compute node. (template jinja2 based)
>>
>> hi, the default behavior of nova used to be spread not pack and i believe
>> it still is.
>> the default behavior with placement however is closer to a packing
>> behavior, as
>> allocation candidates are returned in an undefined but deterministic
>> order.
>>
>> on a busy cloud this does not strictly pack instances, but on a quiet cloud
>> it effectively does.
>>
>> you can try enabling randomisation of the allocation candidates by
>> setting this config option in
>> the nova.conf of the scheduler to true:
>>
>> https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates
>>
>> on that note, can you provide the nova.conf the scheduler uses,
>> instead of the compute node nova.conf?
>> if you have not overridden any of the nova
>> defaults, the ram and cpu weighers should spread instances within
>> the allocation candidates returned by placement.
>>
>> >
>> > [DEFAULT]
>> > osapi_compute_listen = {{
>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>> > metadata_listen = {{
>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>> > enabled_apis = osapi_compute,metadata
>> > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{
>> controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
>> > controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
>> controller3_ip_man }}:5672
>> > my_ip = {{
>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>> > use_neutron = True
>> > firewall_driver = nova.virt.firewall.NoopFirewallDriver
>> > [api]
>> > auth_strategy = keystone
>> > [api_database]
>> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
>> > [barbican]
>> > [cache]
>> > backend=oslo_cache.memcache_pool
>> > enabled=true
>> > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man
>> }}:11211,{{ controller3_ip_man }}:11211
>> > [cells]
>> > [cinder]
>> > os_region_name = RegionOne
>> > [compute]
>> > [conductor]
>> > [console]
>> > [consoleauth]
>> > [cors]
>> > [crypto]
>> > [database]
>> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
>> > [devices]
>> > [ephemeral_storage_encryption]
>> > [filter_scheduler]
>> > [glance]
>> > api_servers = http://{{ vip }}:9292
>> > [guestfs]
>> > [healthcheck]
>> > [hyperv]
>> > [ironic]
>> > [key_manager]
>> > [keystone]
>> > [keystone_authtoken]
>> > auth_url = http://{{ vip }}:5000/v3
>> > memcached_servers = {{ controller1_ip_man }}:11211,{{
>> controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
>> > auth_type = password
>> > project_domain_name = default
>> > user_domain_name = default
>> > project_name = service
>> > username = nova
>> > password = {{ nova_pw }}
>> > [libvirt]
>> > [matchmaker_redis]
>> > [metrics]
>> > [mks]
>> > [neutron]
>> > url = http://{{ vip }}:9696
>> > auth_url = http://{{ vip }}:35357
>> > auth_type = password
>> > project_domain_name = default
>> > user_domain_name = default
>> > region_name = RegionOne
>> > project_name = service
>> > username = neutron
>> > password = {{ neutron_pw }}
>> > service_metadata_proxy = true
>> > metadata_proxy_shared_secret = {{ metadata_secret }}
>> > [notifications]
>> > [osapi_v21]
>> > [oslo_concurrency]
>> > lock_path = /var/lib/nova/tmp
>> > [oslo_messaging_amqp]
>> > [oslo_messaging_kafka]
>> > [oslo_messaging_notifications]
>> > [oslo_messaging_rabbit]
>> > [oslo_messaging_zmq]
>> > [oslo_middleware]
>> > [oslo_policy]
>> > [pci]
>> > [placement]
>> > os_region_name = RegionOne
>> > project_domain_name = Default
>> > project_name = service
>> > auth_type = password
>> > user_domain_name = Default
>> > auth_url = http://{{ vip }}:5000/v3
>> > username = placement
>> > password = {{ placement_pw }}
>> > [quota]
>> > [rdp]
>> > [remote_debug]
>> > [scheduler]
>> > discover_hosts_in_cells_interval = 300
>> > [serial_console]
>> > [service_user]
>> > [spice]
>> > [upgrade_levels]
>> > [vault]
>> > [vendordata_dynamic_auth]
>> > [vmware]
>> > [vnc]
>> > enabled = true
>> > keymap=en-us
>> > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
>> > novncproxy_host = {{
>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>> > [workarounds]
>> > [wsgi]
>> > [xenserver]
>> > [xvp]
>> > [placement_database]
>> > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement
>> >
>> > What is the problem? I have checked openstack-nova-scheduler on the
>> > controller node, but it is running well with only a warning:
>> >
>> > nova-scheduler[19255]:
>> /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
>> NotSupportedWarning:
>> > Configuration option(s) ['use_tpool'] not supported
>> >
>> > The result I want is that instances are distributed across all compute nodes.
>> > Thank you.
>> >
>>
Re: unexpected distribution of compute instances in queens [ In reply to ]
Hi,

Thank you. I was able to fix this issue by adding this configuration to the
nova configuration file on the controller node:

driver=filter_scheduler
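For readers following along: per the nova configuration reference, the `driver` option belongs in the `[scheduler]` section, so the controller-side fragment would look roughly like this (section placement is from the docs, not quoted from the thread):

```ini
# Sketch of the relevant controller-side nova.conf fragment
[scheduler]
driver = filter_scheduler
discover_hosts_in_cells_interval = 300

[placement]
randomize_allocation_candidates = true
```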

Best Regards
Zufar Dhiyaulhaq


On Tue, Nov 27, 2018 at 5:01 PM Zufar Dhiyaulhaq <zufardhiyaulhaq@gmail.com>
wrote:

> Hi Smooney,
> Sorry for the last reply; I attached the wrong configuration file. This
> is my nova configuration (with randomization added per your suggestion) from
> the master node (Jinja2-based template).
>
> [DEFAULT]
> osapi_compute_listen = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> metadata_listen = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> enabled_apis = osapi_compute,metadata
> transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man
> }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man
> }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
> my_ip = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> use_neutron = True
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
> [api]
> auth_strategy = keystone
> [api_database]
> connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
> [barbican]
> [cache]
> backend=oslo_cache.memcache_pool
> enabled=true
> memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man
> }}:11211,{{ controller3_ip_man }}:11211
> [cells]
> [cinder]
> os_region_name = RegionOne
> [compute]
> [conductor]
> [console]
> [consoleauth]
> [cors]
> [crypto]
> [database]
> connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
> [devices]
> [ephemeral_storage_encryption]
> [filter_scheduler]
> [glance]
> api_servers = http://{{ vip }}:9292
> [guestfs]
> [healthcheck]
> [hyperv]
> [ironic]
> [key_manager]
> [keystone]
> [keystone_authtoken]
> auth_url = http://{{ vip }}:5000/v3
> memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man
> }}:11211,{{ controller3_ip_man }}:11211
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> project_name = service
> username = nova
> password = {{ nova_pw }}
> [libvirt]
> [matchmaker_redis]
> [metrics]
> [mks]
> [neutron]
> url = http://{{ vip }}:9696
> auth_url = http://{{ vip }}:35357
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> region_name = RegionOne
> project_name = service
> username = neutron
> password = {{ neutron_pw }}
> service_metadata_proxy = true
> metadata_proxy_shared_secret = {{ metadata_secret }}
> [notifications]
> [osapi_v21]
> [oslo_concurrency]
> lock_path = /var/lib/nova/tmp
> [oslo_messaging_amqp]
> [oslo_messaging_kafka]
> [oslo_messaging_notifications]
> [oslo_messaging_rabbit]
> [oslo_messaging_zmq]
> [oslo_middleware]
> [oslo_policy]
> [pci]
> [placement]
> os_region_name = RegionOne
> project_domain_name = Default
> project_name = service
> auth_type = password
> user_domain_name = Default
> auth_url = http://{{ vip }}:5000/v3
> username = placement
> password = {{ placement_pw }}
> randomize_allocation_candidates = true
> [quota]
> [rdp]
> [remote_debug]
> [scheduler]
> discover_hosts_in_cells_interval = 300
> [serial_console]
> [service_user]
> [spice]
> [upgrade_levels]
> [vault]
> [vendordata_dynamic_auth]
> [vmware]
> [vnc]
> enabled = true
> keymap=en-us
> novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
> novncproxy_host = {{
> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
> [workarounds]
> [wsgi]
> [xenserver]
> [xvp]
> [placement_database]
> connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement
>
> Thank you
>
> Best Regards,
> Zufar Dhiyaulhaq
>
>
> On Tue, Nov 27, 2018 at 4:55 PM Zufar Dhiyaulhaq <
> zufardhiyaulhaq@gmail.com> wrote:
>
>> Hi Smooney,
>>
>> thank you for your help. I tried to enable randomization, but it is not
>> working; the instances I create still land on the same node. Below is my
>> nova configuration (with the randomization from your suggestion added)
>> from the master node (Jinja2-based template).
>>
>> [DEFAULT]
>> enabled_apis = osapi_compute,metadata
>> transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{
>> controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
>> controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
>> controller3_ip_man }}:5672
>> my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'][
>> 'address'] }}
>> use_neutron = True
>> firewall_driver = nova.virt.firewall.NoopFirewallDriver
>> [api]
>> auth_strategy = keystone
>> [api_database]
>> [barbican]
>> [cache]
>> backend=oslo_cache.memcache_pool
>> enabled=true
>> memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man
>> }}:11211,{{ controller3_ip_man }}:11211
>> [cells]
>> [cinder]
>> [compute]
>> [conductor]
>> [console]
>> [consoleauth]
>> [cors]
>> [crypto]
>> [database]
>> [devices]
>> [ephemeral_storage_encryption]
>> [filter_scheduler]
>> [glance]
>> api_servers = http://{{ vip }}:9292
>> [guestfs]
>> [healthcheck]
>> [hyperv]
>> [ironic]
>> [key_manager]
>> [keystone]
>> [keystone_authtoken]
>> auth_url = http://{{ vip }}:5000/v3
>> memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man
>> }}:11211,{{ controller3_ip_man }}:11211
>> auth_type = password
>> project_domain_name = default
>> user_domain_name = default
>> project_name = service
>> username = nova
>> password = {{ nova_pw }}
>> [libvirt]
>> virt_type = kvm
>> [matchmaker_redis]
>> [metrics]
>> [mks]
>> [neutron]
>> url = http://{{ vip }}:9696
>> auth_url = http://{{ vip }}:35357
>> auth_type = password
>> project_domain_name = default
>> user_domain_name = default
>> region_name = RegionOne
>> project_name = service
>> username = neutron
>> password = {{ neutron_pw }}
>> [notifications]
>> [osapi_v21]
>> [oslo_concurrency]
>> lock_path = /var/lib/nova/tmp
>> [oslo_messaging_amqp]
>> [oslo_messaging_kafka]
>> [oslo_messaging_notifications]
>> [oslo_messaging_rabbit]
>> [oslo_messaging_zmq]
>> [oslo_middleware]
>> [oslo_policy]
>> [pci]
>> [placement]
>> os_region_name = RegionOne
>> project_domain_name = Default
>> project_name = service
>> auth_type = password
>> user_domain_name = Default
>> auth_url = http://{{ vip }}:5000/v3
>> username = placement
>> password = {{ placement_pw }}
>> [quota]
>> [rdp]
>> [remote_debug]
>> [scheduler]
>> [serial_console]
>> [service_user]
>> [spice]
>> [upgrade_levels]
>> [vault]
>> [vendordata_dynamic_auth]
>> [vmware]
>> [vnc]
>> enabled = True
>> keymap=en-us
>> server_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4'
>> ]['address'] }}
>> server_proxyclient_address = {{ hostvars[inventory_hostname][
>> 'ansible_ens3f1']['ipv4']['address'] }}
>> novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
>> [workarounds]
>> [wsgi]
>> [xenserver]
>> [xvp]
>>
>> Thank you,
>>
>> Best Regards,
>> Zufar Dhiyaulhaq
>>
>> On Mon, Nov 26, 2018 at 11:13 PM Sean Mooney <smooney@redhat.com> wrote:
>>
>>> On Mon, 2018-11-26 at 17:45 +0700, Zufar Dhiyaulhaq wrote:
>>> > Hi,
>>> >
>>> > I am deploying OpenStack with 3 compute nodes, but I am seeing an
>>> > abnormal distribution of instances: instances are only deployed to one
>>> > specific compute node and are not distributed among the other compute
>>> > nodes.
>>> >
>>> > this is my nova.conf from the compute node (Jinja2-based template):
>>>
>>> hi, the default behavior of nova used to be spread, not pack, and I
>>> believe it still is. The default behavior with placement, however, is
>>> closer to packing, because allocation candidates are returned in an
>>> undefined but deterministic order.
>>>
>>> On a busy cloud this does not strictly pack instances, but on a quiet
>>> cloud it effectively does.
>>>
>>> You can try enabling randomization of the allocation candidates by
>>> setting this config option to true in the scheduler's nova.conf:
>>>
>>> https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates
>>>
>>> On that note, can you provide the nova.conf used by the scheduler
>>> instead of the compute node's nova.conf? If you have not overridden any
>>> of the nova defaults, the RAM and CPU weighers should spread instances
>>> within the allocation candidates returned by placement.
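Sean's point about deterministic candidate ordering can be illustrated with a toy simulation (a sketch only: it models a scheduler that simply takes the first candidate offered and deliberately ignores the RAM/CPU weighers, so it shows the ordering effect in isolation; the host names are made up):

```python
import random
from collections import Counter

def schedule(hosts, num_instances, randomize, seed=0):
    """Toy placement model: every request sees the same candidate list,
    and the 'scheduler' simply takes the first candidate offered."""
    rng = random.Random(seed)
    placements = Counter()
    for _ in range(num_instances):
        candidates = list(hosts)  # deterministic order on every request
        if randomize:
            # roughly what randomize_allocation_candidates emulates
            rng.shuffle(candidates)
        placements[candidates[0]] += 1
    return placements

hosts = ["compute-0", "compute-1", "compute-2"]
print(dict(schedule(hosts, 30, randomize=False)))  # {'compute-0': 30}
print(dict(schedule(hosts, 30, randomize=True)))   # spread across hosts
```

On a busy cloud the candidate list itself varies between requests, which is why the packing is only strict on a quiet cloud.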
>>>
>>> >
>>> > [DEFAULT]
>>> > osapi_compute_listen = {{
>>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>>> > metadata_listen = {{
>>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>>> > enabled_apis = osapi_compute,metadata
>>> > transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{
>>> controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
>>> > controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{
>>> controller3_ip_man }}:5672
>>> > my_ip = {{
>>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>>> > use_neutron = True
>>> > firewall_driver = nova.virt.firewall.NoopFirewallDriver
>>> > [api]
>>> > auth_strategy = keystone
>>> > [api_database]
>>> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
>>> > [barbican]
>>> > [cache]
>>> > backend=oslo_cache.memcache_pool
>>> > enabled=true
>>> > memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man
>>> }}:11211,{{ controller3_ip_man }}:11211
>>> > [cells]
>>> > [cinder]
>>> > os_region_name = RegionOne
>>> > [compute]
>>> > [conductor]
>>> > [console]
>>> > [consoleauth]
>>> > [cors]
>>> > [crypto]
>>> > [database]
>>> > connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
>>> > [devices]
>>> > [ephemeral_storage_encryption]
>>> > [filter_scheduler]
>>> > [glance]
>>> > api_servers = http://{{ vip }}:9292
>>> > [guestfs]
>>> > [healthcheck]
>>> > [hyperv]
>>> > [ironic]
>>> > [key_manager]
>>> > [keystone]
>>> > [keystone_authtoken]
>>> > auth_url = http://{{ vip }}:5000/v3
>>> > memcached_servers = {{ controller1_ip_man }}:11211,{{
>>> controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
>>> > auth_type = password
>>> > project_domain_name = default
>>> > user_domain_name = default
>>> > project_name = service
>>> > username = nova
>>> > password = {{ nova_pw }}
>>> > [libvirt]
>>> > [matchmaker_redis]
>>> > [metrics]
>>> > [mks]
>>> > [neutron]
>>> > url = http://{{ vip }}:9696
>>> > auth_url = http://{{ vip }}:35357
>>> > auth_type = password
>>> > project_domain_name = default
>>> > user_domain_name = default
>>> > region_name = RegionOne
>>> > project_name = service
>>> > username = neutron
>>> > password = {{ neutron_pw }}
>>> > service_metadata_proxy = true
>>> > metadata_proxy_shared_secret = {{ metadata_secret }}
>>> > [notifications]
>>> > [osapi_v21]
>>> > [oslo_concurrency]
>>> > lock_path = /var/lib/nova/tmp
>>> > [oslo_messaging_amqp]
>>> > [oslo_messaging_kafka]
>>> > [oslo_messaging_notifications]
>>> > [oslo_messaging_rabbit]
>>> > [oslo_messaging_zmq]
>>> > [oslo_middleware]
>>> > [oslo_policy]
>>> > [pci]
>>> > [placement]
>>> > os_region_name = RegionOne
>>> > project_domain_name = Default
>>> > project_name = service
>>> > auth_type = password
>>> > user_domain_name = Default
>>> > auth_url = http://{{ vip }}:5000/v3
>>> > username = placement
>>> > password = {{ placement_pw }}
>>> > [quota]
>>> > [rdp]
>>> > [remote_debug]
>>> > [scheduler]
>>> > discover_hosts_in_cells_interval = 300
>>> > [serial_console]
>>> > [service_user]
>>> > [spice]
>>> > [upgrade_levels]
>>> > [vault]
>>> > [vendordata_dynamic_auth]
>>> > [vmware]
>>> > [vnc]
>>> > enabled = true
>>> > keymap=en-us
>>> > novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
>>> > novncproxy_host = {{
>>> hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
>>> > [workarounds]
>>> > [wsgi]
>>> > [xenserver]
>>> > [xvp]
>>> > [placement_database]
>>> > connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip
>>> }}/nova_placement
>>> >
>>> > What is the problem? I have looked at openstack-nova-scheduler on the
>>> > controller node, but it is running well, with only a warning:
>>> >
>>> > nova-scheduler[19255]:
>>> > /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
>>> > NotSupportedWarning:
>>> > Configuration option(s) ['use_tpool'] not supported
>>> >
>>> > The result I want is for instances to be distributed across all
>>> > compute nodes. Thank you.
>>> >
>>> > _______________________________________________
>>> > Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> > Post to : openstack@lists.openstack.org
>>> > Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>>
>>
Re: unexpected distribution of compute instances in queens
On 11/28/2018 02:50 AM, Zufar Dhiyaulhaq wrote:
> Hi,
>
> Thank you. I was able to fix this issue by adding this configuration to
> the nova configuration file on the controller node.
>
> driver=filter_scheduler

That's the default:

https://docs.openstack.org/ocata/config-reference/compute/config-options.html

So that was definitely not the solution to your problem.

My guess is that Sean's suggestion to randomize the allocation
candidates fixed your issue.

Best,
-jay
Re: unexpected distribution of compute instances in queens
I'm seeing a similar issue in Queens deployed via tripleo.

Two x86 compute nodes and one ppc64le node, and host aggregates for virtual
instances and baremetal (x86) instances. Baremetal on x86 is working fine.

All VMs get deployed to compute-0. I can live migrate VMs to compute-1 and
all is well, but I tire of being the 'meatspace scheduler'.

I've looked at the nova.conf in the various nova-xxx containers on the
controllers, but I have failed to discern the root of this issue.

Anyone have a suggestion?

--
MC
Re: unexpected distribution of compute instances in queens
On 11/30/2018 02:53 AM, Mike Carden wrote:
> I'm seeing a similar issue in Queens deployed via tripleo.
>
> Two x86 compute nodes and one ppc64le node and host aggregates for
> virtual instances and baremetal (x86) instances. Baremetal on x86 is
> working fine.
>
> All VMs get deployed to compute-0. I can live migrate VMs to compute-1
> and all is well, but I tire of being the 'meatspace scheduler'.

LOL, I love that term and will have to remember to use it in the future.

> I've looked at the nova.conf in the various nova-xxx containers on the
> controllers, but I have failed to discern the root of this issue.

Have you set the placement_randomize_allocation_candidates CONF option
and are still seeing the packing behaviour?

Best,
-jay
Re: unexpected distribution of compute instances in queens
> Have you set the placement_randomize_allocation_candidates CONF option
> and are still seeing the packing behaviour?

No I haven't. Where would be the place to do that? In a nova.conf somewhere
that the nova-scheduler containers on the controller hosts could pick it up?

Just about to deploy for realz with about forty x86 compute nodes, so it
would be really nice to sort this first. :)

--
MC
Re: unexpected distribution of compute instances in queens
On 11/30/2018 05:52 PM, Mike Carden wrote:
>> Have you set the placement_randomize_allocation_candidates CONF option
>> and are still seeing the packing behaviour?
>
> No I haven't. Where would be the place to do that? In a nova.conf
> somewhere that the nova-scheduler containers on the controller hosts
> could pick it up?
>
> Just about to deploy for realz with about forty x86 compute nodes, so it
> would be really nice to sort this first. :)

Presuming you are deploying Rocky or Queens, it goes in the nova.conf
file under the [placement] section:

randomize_allocation_candidates = true

The nova.conf file should be the one used by nova-scheduler.

Best,
-jay
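Jay's instruction, written out as the full scheduler-side `[placement]` section that this thread has been quoting (a sketch: the `<...>` values stand in for the Jinja2 variables used in the configs above):

```ini
# nova.conf on the node running nova-scheduler, not on the computes.
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://<vip>:5000/v3
username = placement
password = <placement_pw>
# The line Jay is referring to:
randomize_allocation_candidates = true
```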
Re: unexpected distribution of compute instances in queens
> Presuming you are deploying Rocky or Queens,

Yep, it's Queens.

> It goes in the nova.conf file under the [placement] section:
>
> randomize_allocation_candidates = true

In TripleO land it seems like the config may need to go somewhere like
nova-scheduler.yaml and be laid down via a re-deploy.

Or something.

The nova_scheduler runs in a container on a 'controller' host.

--
MC
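One assumed way to lay that down in TripleO is through extra hieradata in a deploy-time environment file (a sketch only: the file name is made up, and the `ControllerExtraConfig` / `nova::config::nova_config` pattern should be verified against your TripleO release before a real re-deploy):

```yaml
# nova-scheduler-randomize.yaml (hypothetical file name)
# Applied with: openstack overcloud deploy ... -e nova-scheduler-randomize.yaml
parameter_defaults:
  ControllerExtraConfig:
    nova::config::nova_config:
      placement/randomize_allocation_candidates:
        value: true
```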