Mailing List Archive

Queens metadata agent error 500
Hi All,
I manually upgraded my CentOS 7 OpenStack installation from Ocata to Pike.
All worked fine.
Then I upgraded from Pike to Queens, and instances stopped being able to reach
the metadata service at 169.254.169.254; requests fail with error 500.
I am using enable_isolated_metadata = true in my DHCP agent configuration, and
in the dhcp namespace port 80 is listening.
Can anyone help me?
Regards
Ignazio
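One way to double-check the "port 80 is listening" observation from the network node is to run the check inside the qdhcp namespace itself. A sketch: the helper name is mine, the namespace/network id is an example, and the `ss` column layout can vary slightly between versions:

```shell
# Helper: read `ss -tln` output on stdin and check for a listener on a port.
listening_on_port() {
    # matches e.g. "LISTEN 0 128 0.0.0.0:80 0.0.0.0:*"
    grep -Eq "LISTEN.*[.:]$1[[:space:]]"
}

# On the network node (network id below is an example):
#   ip netns list
#   ip netns exec qdhcp-<network-id> ss -tln | listening_on_port 80 \
#       && echo "metadata proxy listening" || echo "no listener on port 80"
```

A listener on port 80 only proves the proxy is up; the 500 can still come from the hop behind it (metadata agent or nova), which the proxy log should show.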
Re: Queens metadata agent error 500 [ In reply to ]
Hi,

Can you share the logs from your haproxy metadata proxy, which runs in the qdhcp namespace? They should contain some information about the reason for those 500 errors.

> On 12.11.2018, at 19:49, Ignazio Cassano <ignaziocassano@gmail.com> wrote:
>
> [...]


Slawek Kaplonski
Senior software engineer
Red Hat


_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Re: Queens metadata agent error 500 [ In reply to ]
Hello,
the log file is attached.

Connecting to an instance created before the upgrade, I also tried:
wget http://169.254.169.254/2009-04-04/meta-data/instance-id

The output is:

--2018-11-12 22:14:45--  http://169.254.169.254/2009-04-04/meta-data/instance-id
Connecting to 169.254.169.254:80... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2018-11-12 22:14:45 ERROR 500: Internal Server Error


On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:

> [...]
Re: Queens metadata agent error 500 [ In reply to ]
PS
Thanks for your help

On Mon, 12 Nov 2018 at 22:15, Ignazio Cassano <ignaziocassano@gmail.com> wrote:

> [...]
Re: Queens metadata agent error 500 [ In reply to ]
Hello again,
I have another installation running Ocata.
On Ocata, the metadata proxy for a network id shows up in ps -afe like this:
/usr/bin/python2 /bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
--state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
--metadata_proxy_group=993
--log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
--log-dir=/var/log/neutron

On queens like this:
haproxy -f
/var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf

Is this the correct behaviour?

Regards
Ignazio



On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:

> [...]
Re: Queens metadata agent error 500 [ In reply to ]
Hi,

From the logs you attached, it looks like your neutron-metadata-agent can’t connect to the nova-api service. Please check whether the nova metadata API is reachable from the node where your neutron-metadata-agent is running.

> On 12.11.2018, at 22:34, Ignazio Cassano <ignaziocassano@gmail.com> wrote:
>
> Hello again,
> I have another installation of ocata .
> On ocata the metadata for a network id is displayed by ps -afe like this:
> /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 --metadata_proxy_group=993 --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log --log-dir=/var/log/neutron
>
> On queens like this:
> haproxy -f /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>
> Is it the correct behaviour ?

Yes, that is correct. It was changed some time ago, see https://bugs.launchpad.net/neutron/+bug/1524916
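For reference, the generated per-network file under /var/lib/neutron/ns-metadata-proxy/ is a small haproxy configuration that forwards port 80 in the namespace to the metadata agent's unix socket, roughly of this shape (an illustrative sketch using the ids and paths from the ps output above; the exact directive set varies by Neutron version, and the global/defaults sections are omitted):

```
# /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf (sketch)
listen listener
    bind 0.0.0.0:80
    server metadata /var/lib/neutron/metadata_proxy
    http-request add-header X-Neutron-Network-ID e8ba8c09-a7dc-4a22-876e-b8d4187a23fe
```

So in Queens the neutron-ns-metadata-proxy Python process is replaced by haproxy, but the request path (namespace port 80 → unix socket → neutron-metadata-agent → nova) is the same.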

>
> [...]


Slawek Kaplonski
Senior software engineer
Red Hat


Re: Queens metadata agent error 500 [ In reply to ]
Hello,
the nova api is on the same controller, on port 8774, and it can be reached
from the metadata agent.
No firewall is present.
Regards

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:

> [...]
Re: Queens metadata agent error 500 [ In reply to ]
Hello again, at the same time I tried to create an instance; the nova-api log
reports:

2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines [req-e4799f40-eeab-482d-9717-cb41be8ffde2 89f76bc5de5545f381da2c10c7df7f15 59f1f232ce28409593d66d8f6495e434 - default default] Database connection was found disconnected; reconnecting: DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last):
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     connection.scalar(select([1]))
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880, in scalar
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     return self.execute(object, *multiparams, **params).scalar()
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     return meth(self, multiparams, params)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     return connection._execute_clauseelement(self, multiparams, params)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     compiled_sql, distilled_params
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     context)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     util.raise_from_cause(newraise, exc_info)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     reraise(type(exception), exception, tb=exc_tb, cause=cause)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     context)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     cursor.execute(statement, parameters)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines     result = self._query(query)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query




I never lost connections to the db before upgrading.
:-(
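"Lost connection to MySQL server during query" on the oslo.db ping query (SELECT 1) usually means something between nova-api and MySQL is dropping idle connections: for example a MySQL/Galera wait_timeout (or an haproxy sitting in front of the database with short client/server timeouts) that is shorter than the connection pool's recycle interval. It is worth comparing the two sides; the sketch below is illustrative, with example values rather than recommendations (on Queens-era oslo.db the option is connection_recycle_time; older releases call it idle_timeout):

```
# nova.conf, [database] section (oslo.db pool settings; example values)
[database]
connection_recycle_time = 280   # keep below MySQL's wait_timeout
max_pool_size = 10

# On the DB side, compare with:
#   mysql -e "SHOW VARIABLES LIKE 'wait_timeout';"
```

If haproxy fronts the database, its `timeout client`/`timeout server` values need the same comparison.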

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:

> [...]
Re: Queens metadata agent error 500 [ In reply to ]
Hi,

> On 12.11.2018, at 22:55, Ignazio Cassano <ignaziocassano@gmail.com> wrote:
>
> Hello,
> the nova api in on the same controller on port 8774 and it can be reached from the metadata agent

Nova-metadata-api is running on port 8775 IIRC.
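For a quick reachability check, the metadata API normally answers a version listing at its root URL, e.g. `curl http://127.0.0.1:8775/` from the controller. If it is reachable but requests still fail, another classic post-upgrade culprit is a metadata_proxy_shared_secret mismatch between neutron's metadata agent configuration and nova's [neutron] section: the agent authenticates each proxied request by HMAC-signing the instance id with that secret, along these lines (a sketch: the shell function name is mine, and it assumes openssl is installed):

```shell
# Compute the X-Instance-ID-Signature header the metadata agent sends:
# HMAC-SHA256 of the instance id, keyed with metadata_proxy_shared_secret.
sign_instance_id() {    # usage: sign_instance_id <shared_secret> <instance_id>
    printf '%s' "$2" | openssl dgst -sha256 -hmac "$1" | awk '{print $NF}'
}

# e.g.:
#   sign_instance_id "$SHARED_SECRET" "e8ba8c09-a7dc-4a22-876e-b8d4187a23fe"
```

If the two services disagree on the secret, nova treats the signature as invalid and the proxy surfaces an error instead of metadata.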

> [...]


Slawek Kaplonski
Senior software engineer
Red Hat


Re: Queens metadata agent error 500 [ In reply to ]
I tried a minute ago to create another instance.
The nova-api log reports the following:

2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines [req-cac96dee-d91b-48cb-831b-31f95cffa2f4 89f76bc5de5545f381da2c10c7df7f15 59f1f232ce28409593d66d8f6495e434 - default default] Database connection was found disconnected; reconnecting: DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last):
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     connection.scalar(select([1]))
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880, in scalar
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     return self.execute(object, *multiparams, **params).scalar()
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     return meth(self, multiparams, params)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     return connection._execute_clauseelement(self, multiparams, params)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     compiled_sql, distilled_params
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     context)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     util.raise_from_cause(newraise, exc_info)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     reraise(type(exception), exception, tb=exc_tb, cause=cause)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     context)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     cursor.execute(statement, parameters)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     result = self._query(query)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     conn.query(q)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     result.read()
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340, in read
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     first_packet = self.connection._read_packet()
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 987, in _read_packet
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     packet_header = self._read_bytes(4)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1033, in _read_bytes
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines     CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines [req-cac96dee-d91b-48cb-831b-31f95cffa2f4 89f76bc5de5545f381da2c10c7df7f15 59f1f232ce28409593d66d8f6495e434 - default default] Database connection was found disconnected; reconnecting: DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last):
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     connection.scalar(select([1]))
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880, in scalar
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     return self.execute(object, *multiparams, **params).scalar()
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     return meth(self, multiparams, params)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     return connection._execute_clauseelement(self, multiparams, params)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     compiled_sql, distilled_params
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     context)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in _handle_dbapi_exception
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     util.raise_from_cause(newraise, exc_info)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     reraise(type(exception), exception, tb=exc_tb, cause=cause)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     context)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     cursor.execute(statement, parameters)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     result = self._query(query)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     conn.query(q)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in _read_query_result
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     result.read()
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340, in read
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     first_packet = self.connection._read_packet()
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 987, in _read_packet
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     packet_header = self._read_bytes(4)
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1033, in _read_bytes
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines     CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
2018-11-12 23:07:28.831 4224 ERROR oslo_db.sqlalchemy.engines DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error at: http://sqlalche.me/e/e3q8)


Did anything go wrong in the nova database during the upgrade to Queens?

# nova-manage db online_data_migrations

/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Running batches of 50 until complete
1 rows matched query migrate_instances_add_request_spec, 0 migrated
+---------------------------------------------+--------------+-----------+
| Migration                                   | Total Needed | Completed |
+---------------------------------------------+--------------+-----------+
| delete_build_requests_with_no_instance_uuid | 0            | 0         |
| migrate_aggregate_reset_autoincrement       | 0            | 0         |
| migrate_aggregates                          | 0            | 0         |
| migrate_instance_groups_to_api_db           | 0            | 0         |
| migrate_instances_add_request_spec          | 1            | 0         |
| migrate_keypairs_to_api_db                  | 0            | 0         |
| migrate_quota_classes_to_api_db             | 0            | 0         |
| migrate_quota_limits_to_api_db              | 0            | 0         |
| migration_migrate_to_uuid                   | 0            | 0         |
| populate_missing_availability_zones         | 0            | 0         |
| populate_uuids                              | 0            | 0         |
| service_uuids_online_data_migration         | 0            | 0         |
+---------------------------------------------+--------------+-----------+
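
For reference, the `Running batches of 50 until complete` loop above re-runs every migration in batches and stops once a full pass migrates zero rows; a row that is matched but never migrated (like `migrate_instances_add_request_spec` here) is left with `Total Needed` greater than `Completed`. A simplified sketch of that loop (hypothetical migration callables, not nova's real signatures):

```python
def run_online_migrations(migrations, batch=50):
    """migrations: name -> fn(batch) returning (rows_matched, rows_migrated)."""
    totals = {name: [0, 0] for name in migrations}
    while True:
        migrated_this_pass = 0
        for name, fn in migrations.items():
            found, done = fn(batch)
            totals[name][0] += found
            totals[name][1] += done
            migrated_this_pass += done
        if migrated_this_pass == 0:   # a full pass made no progress: stop
            break
    return totals
```
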




On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:

> Hi,
>
> From logs which You attached it looks that Your neutron-metadata-agent
> can’t connect to nova-api service. Please check if nova-metadata-api is
> reachable from node where Your neutron-metadata-agent is running.
>
> > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 22:34:
> >
> > Hello again,
> > I have another installation of ocata .
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> > /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> > haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour ?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:
> > Hi,
> >
> > Can You share logs from Your haproxy-metadata-proxy service which is
> running in qdhcp namespace? There should be some info about reason of those
> errors 500.
> >
> > > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 19:49:
> > >
> > > Hi All,
> > > I upgraded manually my centos 7 openstack ocata to pike.
> > > All worked fine.
> > > Then I upgraded from pike to Queens and instances stopped to reach
> metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf and in dhcp
> namespace the port 80 is in listen.
> > > Please, anyone can help me?
> > > Regards
> > > Ignazio
> > >
> > > _______________________________________________
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
Re: Queens metadata agent error 500 [ In reply to ]
Yes, sorry.
Port 8775 is also reachable from the neutron metadata agent.
Regards
Ignazio
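
The reachability check being discussed (is nova-metadata-api listening on 8775 from the agent's node?) can be scripted. This is a generic TCP connect probe, not an OpenStack-specific tool; the example host name is an assumption:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (substitute your own controller address):
# port_open("controller", 8775)
```
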

On Mon, 12 Nov 2018 at 23:08, Slawomir Kaplonski <skaplons@redhat.com> wrote:

> Hi,
>
> > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 22:55:
> >
> > Hello,
> > the nova api in on the same controller on port 8774 and it can be
> reached from the metadata agent
>
> Nova-metadata-api is running on port 8775 IIRC.
>
> > No firewall is present
> > Regards
> >
> > On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:
> > Hi,
> >
> > From logs which You attached it looks that Your neutron-metadata-agent
> can’t connect to nova-api service. Please check if nova-metadata-api is
> reachable from node where Your neutron-metadata-agent is running.
> >
> > > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 22:34:
> > >
> > > Hello again,
> > > I have another installation of ocata .
> > > On ocata the metadata for a network id is displayed by ps -afe like
> this:
> > > /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> > >
> > > On queens like this:
> > > haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> > >
> > > Is it the correct behaviour ?
> >
> > Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
> >
> > >
> > > Regards
> > > Ignazio
> > >
> > >
> > >
> > > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:
> > > Hi,
> > >
> > > Can You share logs from Your haproxy-metadata-proxy service which is
> running in qdhcp namespace? There should be some info about reason of those
> errors 500.
> > >
> > > > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 19:49:
> > > >
> > > > Hi All,
> > > > I upgraded manually my centos 7 openstack ocata to pike.
> > > > All worked fine.
> > > > Then I upgraded from pike to Queens and instances stopped to reach
> metadata on 169.254.169.254 with error 500.
> > > > I am using isolated metadata true in my dhcp conf and in dhcp
> namespace the port 80 is in listen.
> > > > Please, anyone can help me?
> > > > Regards
> > > > Ignazio
> > > >
> > > > _______________________________________________
> > > > OpenStack-operators mailing list
> > > > OpenStack-operators@lists.openstack.org
> > > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > >
> > > —
> > > Slawek Kaplonski
> > > Senior software engineer
> > > Red Hat
> > >
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
Re: Queens metadata agent error 500 [ In reply to ]
Any other suggestions?
It still does not work.
Nova metadata is listening on port 8775, but there is no way to solve this issue.
Thanks
Ignazio

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:

> Hi,
>
> From logs which You attached it looks that Your neutron-metadata-agent
> can’t connect to nova-api service. Please check if nova-metadata-api is
> reachable from node where Your neutron-metadata-agent is running.
>
> > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 22:34:
> >
> > Hello again,
> > I have another installation of ocata .
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> > /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> > haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour ?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:
> > Hi,
> >
> > Can You share logs from Your haproxy-metadata-proxy service which is
> running in qdhcp namespace? There should be some info about reason of those
> errors 500.
> >
> > > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 19:49:
> > >
> > > Hi All,
> > > I upgraded manually my centos 7 openstack ocata to pike.
> > > All worked fine.
> > > Then I upgraded from pike to Queens and instances stopped to reach
> metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf and in dhcp
> namespace the port 80 is in listen.
> > > Please, anyone can help me?
> > > Regards
> > > Ignazio
> > >
> > > _______________________________________________
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
Re: Queens metadata agent error 500 [ In reply to ]
Did you change the nova_metadata_ip option to nova_metadata_host in
metadata_agent.ini? The former value was deprecated several releases ago
and now no longer functions as of pike. The metadata service will throw
500 errors if you don't change it.
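
The rename described above can be checked mechanically. Below is a minimal sketch that flags a `metadata_agent.ini` still using the deprecated key; the option names come from this thread, while the file contents and the 10.0.0.10/8775 values are placeholder assumptions:

```python
import configparser

DEPRECATED, CURRENT = "nova_metadata_ip", "nova_metadata_host"

def check_metadata_conf(text):
    """Warn if metadata_agent.ini still sets the deprecated option."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    section = cfg["DEFAULT"]
    if DEPRECATED in section and CURRENT not in section:
        return "rename %s to %s" % (DEPRECATED, CURRENT)
    return "ok"

# Placeholder configs; the address and port are assumed example values.
OLD = "[DEFAULT]\nnova_metadata_ip = 10.0.0.10\n"
NEW = "[DEFAULT]\nnova_metadata_host = 10.0.0.10\nnova_metadata_port = 8775\n"
```
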

On November 12, 2018 19:00:46 Ignazio Cassano <ignaziocassano@gmail.com> wrote:
> Any other suggestion ?
> It does not work.
> Nova metatada is on port 8775 in listen but no way to solve this issue.
> Thanks
> Ignazio
>
> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:
> Hi,
>
> From logs which You attached it looks that Your neutron-metadata-agent
> can’t connect to nova-api service. Please check if nova-metadata-api is
> reachable from node where Your neutron-metadata-agent is running.
>
>> Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 22:34:
>>
>> Hello again,
>> I have another installation of ocata .
>> On ocata the metadata for a network id is displayed by ps -afe like this:
>> /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>> --metadata_proxy_group=993
>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>> --log-dir=/var/log/neutron
>>
>> On queens like this:
>> haproxy -f
>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>
>> Is it the correct behaviour ?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
>>
>> Regards
>> Ignazio
>>
>>
>>
>> On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:
>> Hi,
>>
>> Can You share logs from Your haproxy-metadata-proxy service which is
>> running in qdhcp namespace? There should be some info about reason of those
>> errors 500.
>>
>> > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 19:49:
>> >
>> > Hi All,
>> > I upgraded manually my centos 7 openstack ocata to pike.
>> > All worked fine.
>> > Then I upgraded from pike to Queens and instances stopped to reach
>> metadata on 169.254.169.254 with error 500.
>> > I am using isolated metadata true in my dhcp conf and in dhcp namespace
>> the port 80 is in listen.
>> > Please, anyone can help me?
>> > Regards
>> > Ignazio
>> >
>> > _______________________________________________
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Re: Queens metadata agent error 500 [ In reply to ]
Hello, I am going to check it.
Thanks
Ignazio

On Tue, 13 Nov 2018 at 03:46, Chris Apsey <bitskrieg@bitskrieg.net> wrote:

> Did you change the nova_metadata_ip option to nova_metadata_host in
> metadata_agent.ini? The former value was deprecated several releases ago
> and now no longer functions as of pike. The metadata service will throw
> 500 errors if you don't change it.
>
> On November 12, 2018 19:00:46 Ignazio Cassano <ignaziocassano@gmail.com>
> wrote:
>
>> Any other suggestion ?
>> It does not work.
>> Nova metatada is on port 8775 in listen but no way to solve this issue.
>> Thanks
>> Ignazio
>>
>> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> From logs which You attached it looks that Your neutron-metadata-agent
>>> can’t connect to nova-api service. Please check if nova-metadata-api is
>>> reachable from node where Your neutron-metadata-agent is running.
>>>
>>> > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 22:34:
>>> >
>>> > Hello again,
>>> > I have another installation of ocata .
>>> > On ocata the metadata for a network id is displayed by ps -afe like
>>> this:
>>> > /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>>> --metadata_proxy_group=993
>>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>>> --log-dir=/var/log/neutron
>>> >
>>> > On queens like this:
>>> > haproxy -f
>>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>> >
>>> > Is it the correct behaviour ?
>>>
>>> Yes, that is correct. It was changed some time ago, see
>>> https://bugs.launchpad.net/neutron/+bug/1524916
>>>
>>> >
>>> > Regards
>>> > Ignazio
>>> >
>>> >
>>> >
>>> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:
>>> > Hi,
>>> >
>>> > Can You share logs from Your haproxy-metadata-proxy service which is
>>> running in qdhcp namespace? There should be some info about reason of those
>>> errors 500.
>>> >
>>> > > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 19:49:
>>> > >
>>> > > Hi All,
>>> > > I upgraded manually my centos 7 openstack ocata to pike.
>>> > > All worked fine.
>>> > > Then I upgraded from pike to Queens and instances stopped to reach
>>> metadata on 169.254.169.254 with error 500.
>>> > > I am using isolated metadata true in my dhcp conf and in dhcp
>>> namespace the port 80 is in listen.
>>> > > Please, anyone can help me?
>>> > > Regards
>>> > > Ignazio
>>> > >
>>> > > _______________________________________________
>>> > > OpenStack-operators mailing list
>>> > > OpenStack-operators@lists.openstack.org
>>> > >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>> > —
>>> > Slawek Kaplonski
>>> > Senior software engineer
>>> > Red Hat
>>> >
>>>
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>>
>>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
Re: Queens metadata agent error 500 [ In reply to ]
Hi Chris,
many thanks for your answer.
It solved the issue.
Regards
Ignazio

On Tue, 13 Nov 2018 at 03:46, Chris Apsey <bitskrieg@bitskrieg.net> wrote:

> Did you change the nova_metadata_ip option to nova_metadata_host in
> metadata_agent.ini? The former value was deprecated several releases ago
> and now no longer functions as of pike. The metadata service will throw
> 500 errors if you don't change it.
>
> On November 12, 2018 19:00:46 Ignazio Cassano <ignaziocassano@gmail.com>
> wrote:
>
>> Any other suggestion ?
>> It does not work.
>> Nova metatada is on port 8775 in listen but no way to solve this issue.
>> Thanks
>> Ignazio
>>
>> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <skaplons@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> From logs which You attached it looks that Your neutron-metadata-agent
>>> can’t connect to nova-api service. Please check if nova-metadata-api is
>>> reachable from node where Your neutron-metadata-agent is running.
>>>
>>> > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 22:34:
>>> >
>>> > Hello again,
>>> > I have another installation of ocata .
>>> > On ocata the metadata for a network id is displayed by ps -afe like
>>> this:
>>> > /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>>> --metadata_proxy_group=993
>>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>>> --log-dir=/var/log/neutron
>>> >
>>> > On queens like this:
>>> > haproxy -f
>>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>> >
>>> > Is it the correct behaviour ?
>>>
>>> Yes, that is correct. It was changed some time ago, see
>>> https://bugs.launchpad.net/neutron/+bug/1524916
>>>
>>> >
>>> > Regards
>>> > Ignazio
>>> >
>>> >
>>> >
>>> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <skaplons@redhat.com> wrote:
>>> > Hi,
>>> >
>>> > Can You share logs from Your haproxy-metadata-proxy service which is
>>> running in qdhcp namespace? There should be some info about reason of those
>>> errors 500.
>>> >
>>> > > Message written by Ignazio Cassano <ignaziocassano@gmail.com> on 12.11.2018, at 19:49:
>>> > >
>>> > > Hi All,
>>> > > I upgraded manually my centos 7 openstack ocata to pike.
>>> > > All worked fine.
>>> > > Then I upgraded from pike to Queens and instances stopped to reach
>>> metadata on 169.254.169.254 with error 500.
>>> > > I am using isolated metadata true in my dhcp conf and in dhcp
>>> namespace the port 80 is in listen.
>>> > > Please, anyone can help me?
>>> > > Regards
>>> > > Ignazio
>>> > >
>>> > > _______________________________________________
>>> > > OpenStack-operators mailing list
>>> > > OpenStack-operators@lists.openstack.org
>>> > >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>> > —
>>> > Slawek Kaplonski
>>> > Senior software engineer
>>> > Red Hat
>>> >
>>>
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>>
>>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>