Mailing List Archive

[nova][ceph] Libvirt Error when add ceph as nova backend
Hi, I'm running my OpenStack environment on the Rocky release, and I want to
integrate Ceph as the nova-compute backend, so I followed the instructions here:
http://superuser.openstack.org/articles/ceph-as-storage-for-openstack/

and this is my nova.conf on the compute node:

[DEFAULT]
...
compute_driver=libvirt.LibvirtDriver

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = a93824e0-2d45-4196-8918-c8f7d7f35c5d
....
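
(For reference, the rbd_secret_uuid above corresponds to a libvirt secret on the
compute node; a rough sketch of how such a secret is usually defined, where the
XML file name is just an example:)

# describe the Ceph client.nova key as a libvirt secret
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>a93824e0-2d45-4196-8918-c8f7d7f35c5d</uuid>
  <usage type='ceph'>
    <name>client.nova secret</name>
  </usage>
</secret>
EOF
# register the secret and attach the client.nova key from the keyring to it
virsh secret-define --file secret.xml
virsh secret-set-value --secret a93824e0-2d45-4196-8918-c8f7d7f35c5d \
    --base64 "$(awk '/key/ {print $3}' /etc/ceph/ceph.client.nova.keyring)"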

and this is the log after I restarted the nova-compute service:

2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
[req-f4e2715a-c925-4c12-b8e6-aa550fc588b1 - - - - -] Exception
handling connection event: AttributeError: 'NoneType' object has no
attribute 'rfind'
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host Traceback
(most recent call last):
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line
148, in _dispatch_conn_event
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host handler()
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line
414, in handler
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host return
self._conn_event_handler(*args, **kwargs)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
470, in _handle_conn_event
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
self._set_host_enabled(enabled, reason)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
3780, in _set_host_enabled
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
mount.get_manager().host_up(self._host)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
line 134, in host_up
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
self.state = _HostMountState(host, self.generation)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/mount.py",
line 229, in __init__
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
mountpoint = os.path.dirname(disk.source_path)
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host File
"/usr/lib64/python2.7/posixpath.py", line 129, in dirname
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host i =
p.rfind('/') + 1
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
AttributeError: 'NoneType' object has no attribute 'rfind'
2018-10-11 01:59:57.123 5275 ERROR nova.virt.libvirt.host
2018-10-11 01:59:57.231 5275 WARNING nova.compute.monitors
[req-df2559f3-5a01-499a-9ac0-3dd9dc255f77 - - - - -] Excluding
nova.compute.monitors.cpu monitor virt_driver. Not in the list of
enabled monitors (CONF.compute_monitors).
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
[req-df2559f3-5a01-499a-9ac0-3dd9dc255f77 - - - - -] Error updating
resources for node cp2.os-srg.adhi.: TimedOut: [errno 110] error
connecting to the cluster
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager Traceback
(most recent call last):
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7722,
in _update_available_resource_for_node
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
rt.update_available_resource(context, nodename)
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
line 687, in update_available_resource
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager resources
= self.driver.get_available_resource(nodename)
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
6505, in get_available_resource
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
disk_info_dict = self._get_local_gb_info()
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line
5704, in _get_local_gb_info
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager info =
LibvirtDriver._get_rbd_driver().get_pool_info()
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
line 368, in get_pool_info
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager with
RADOSClient(self) as client:
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
line 102, in __init__
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
self.cluster, self.ioctx = driver._connect_to_rados(pool)
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/storage/rbd_utils.py",
line 133, in _connect_to_rados
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager client.connect()
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager File
"rados.pyx", line 875, in rados.Rados.connect
(/builddir/build/BUILD/ceph-12.2.5/build/src/pybind/rados/pyrex/rados.c:9764)
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager TimedOut:
[errno 110] error connecting to the cluster
2018-10-11 02:04:57.279 5275 ERROR nova.compute.manager
2018-10-11 02:04:57.316 5275 ERROR oslo.messaging._drivers.impl_rabbit
[-] [bc957cdf-01b6-4d9a-8cb2-87f880f67cf9] AMQP server on
ct.os-srg.adhi:5672 is unreachable: [Errno 104] Connection reset by
peer. Trying again in 1 seconds.: error: [Errno 104] Connection reset
by peer
2018-10-11 02:04:58.353 5275 INFO oslo.messaging._drivers.impl_rabbit
[-] [bc957cdf-01b6-4d9a-8cb2-87f880f67cf9] Reconnected to AMQP server
on ct.os-srg.adhi:5672 via [amqp] client with port 60704.
2018-10-11 02:05:02.347 5275 ERROR oslo.messaging._drivers.impl_rabbit
[-] [2dda91e7-c913-4203-a198-ca53f231dfdc] AMQP server on
ct.os-srg.adhi:5672 is unreachable: [Errno 104] Connection reset by
peer. Trying again in 1 seconds.: error: [Errno 104] Connection reset
by peer
2018-10-11 02:05:03.376 5275 INFO oslo.messaging._drivers.impl_rabbit
[-] [2dda91e7-c913-4203-a198-ca53f231dfdc] Reconnected to AMQP server
on ct.os-srg.adhi:5672 via [amqp] client with port 60706.

Can anyone help me with this problem?

--
Cheers,



Adhi Priharmanto
about.me/a_dhi
+62-812-82121584
Re: [nova][ceph] Libvirt Error when add ceph as nova backend
Hi,

your nova.conf [libvirt] section seems fine.

Can you paste the output of

ceph auth get client.nova

and does the keyring file exist in /etc/ceph/ (ceph.client.nova.keyring)?

Is the Ceph network reachable from your OpenStack nodes?
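
You could also run a quick manual check from the compute node, something like:

# this should print the cluster status; a hang or timeout here points to the
# same connectivity problem nova-compute is hitting
ceph -s --id nova --keyring /etc/ceph/ceph.client.nova.keyring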

Regards,
Eugen


Re: [nova][ceph] Libvirt Error when add ceph as nova backend
Hi,
This is from my Ceph node (a single-node Ceph cluster, for testing only):

> [cephdeploy@ceph2 ~]$ cat /etc/ceph/ceph.client.nova.keyring
> [client.nova]
> key = AQBLxr5bbhnGFxAAXAliVJwMU5w5YgFY6jGJIA==
> [cephdeploy@ceph2 ~]$ ceph auth get client.nova
> exported keyring for client.nova
> [client.nova]
> key = AQBLxr5bbhnGFxAAXAliVJwMU5w5YgFY6jGJIA==
> caps mon = "allow r"
> caps osd = "allow class-read object_prefix rbd_children, allow rwx
> pool=vms, allow rx pool=images"
> [cephdeploy@ceph2 ~]$


and this is on my compute node:

> [root@cp2 ~]# cat /etc/ceph/ceph.client.nova.keyring
> [client.nova]
> key = AQBLxr5bbhnGFxAAXAliVJwMU5w5YgFY6jGJIA==
> [root@cp2 ~]#

Yes, both nodes (Ceph and nova-compute) are on the same network,
192.168.26.xx/24. Do any special ports need to be allowed in firewalld?

--
Cheers,



Adhi Priharmanto
about.me/a_dhi
+62-812-82121584
Re: [nova][ceph] Libvirt Error when add ceph as nova backend
Hi,

the keyrings and caps seem correct to me.

> Yes, both nodes (Ceph and nova-compute) are on the same network,
> 192.168.26.xx/24. Do any special ports need to be allowed in firewalld?

Yes, the firewall has to allow the traffic between the nodes. If this
is just a test environment, you could try disabling the firewall. If
that is not an option, open the respective ports. An excerpt from the
docs:

> For iptables, add port 6789 for Ceph Monitors and ports 6800:7300
> for Ceph OSDs.
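
A rough firewalld equivalent on the Ceph node would be something like this
(untested here, adjust to your setup):

firewall-cmd --permanent --add-port=6789/tcp        # Ceph monitor
firewall-cmd --permanent --add-port=6800-7300/tcp   # Ceph OSDs
firewall-cmd --reload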

For more information, take a look at [1]. Can you see any blocked
requests in your firewall log?
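
From the compute node you can also test whether the monitor port is reachable
at all, for example (the monitor address below is a placeholder):

MON_IP=192.168.26.xx    # replace with your actual monitor IP
timeout 5 bash -c "exec 3<>/dev/tcp/$MON_IP/6789" \
    && echo "mon port 6789 reachable" || echo "mon port 6789 blocked or timed out"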

Regards,
Eugen

[1] http://docs.ceph.com/docs/master/start/quick-start-preflight/


_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack