Mailing List Archive

[OCTAVIA][KOLLA] - Amphora to control plane communication question.
Hi Folks,

I'm currently deploying the Octavia component into our testing environment,
which is based on Kolla.

So far I'm quite enjoying it as it is pretty much straightforward (except
for some documentation pitfalls), but I'm now facing a weird and
hard-to-debug situation.

I'm having a hard time understanding how the amphorae communicate
back and forth with the control plane components.

From my understanding, as soon as I create a new LB, the control plane
spawns an instance using the configured Octavia flavor and image type,
and attaches it to the LB-MGMT-NET and to the user-provided subnet.

What I think I'm misunderstanding is the exchange that follows between
the amphora and the different components such as the
HealthManager/Housekeeping, the API and the worker.

How is the amphora agent able to find my control plane? Does the
HealthManager or the Octavia worker initiate the communication to the
amphora on port 9443 and then give the agent the API/control plane internal URL?

If anyone has a diagram of the workflow I would be more than happy ^^

Thanks a lot in advance to anyone willing to help :D
Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Hi Flint,

We don't have a logical network diagram at this time (it's still on
the to-do list), but I can talk you through it.

The Octavia worker, health manager, and housekeeping need to be able
to reach the amphora (a service VM at this point) over the lb-mgmt-net
on TCP 9443. They know the amphora IP addresses on the lb-mgmt-net via
the database and the information we save from the compute driver (i.e.
what IP was assigned to the instance).

The Octavia API process does not need to be connected to the
lb-mgmt-net at this time. It only connects to the messaging bus and
the Octavia database. Provider drivers may have other connectivity
requirements for the Octavia API.

The amphorae also send UDP packets back to the health manager on port
5555. This is the heartbeat packet from the amphora. It contains the
health and statistics from that amphora. It knows its list of health
manager endpoints from the configuration file
"controller_ip_port_list"
(https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.controller_ip_port_list).
Each amphora will rotate through that list of endpoints to reduce the
chance of a network split impacting the heartbeat messages.
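
For illustration, that option is a plain comma-separated list in the
[health_manager] section, and the same value ends up in the
amphora-agent.conf that gets pushed to each amphora. A minimal sketch
(the addresses below are placeholders for your health manager
endpoints on the lb-mgmt-net):

[health_manager]
# Health managers the amphorae send their UDP heartbeats to (port 5555)
controller_ip_port_list = 192.0.2.10:5555, 192.0.2.11:5555
# Shared secret used to sign and validate the heartbeat packets
heartbeat_key = some-shared-secret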

This is the only traffic that passes over this network. All of it is
IP based and can be routed (it does not require L2 connectivity).
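
If you restrict the lb-mgmt-net with security groups, a rough sketch of
the only openings needed (the group names here are just examples, not
something Octavia creates for you):

# allow the controllers to reach the amphora agent
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
# allow the amphorae to reach the health manager listeners
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp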

Michael

Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Hi Michael, thanks a lot for that explanation, it’s actually how I
envisioned the flow.

I’ll have to produce a diagram for my peers’ understanding; maybe I can
share it with you.

There is still one point that seems to be a little bit odd to me.

How does the amphora agent know where to find the health manager and worker
services? Is that because the worker sends the agent some catalog
information, or because we set that at diskimage-create time?

If so, I think the CentOS-based amphora is missing the agent.conf, because
currently my VMs don’t have one.

Once again thanks for your help!
Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Hi Flint,

Happy to help.

Right now the list of controller endpoints is pushed at boot time and
loaded into the amphora via config drive/nova.
In the future we plan to be able to update this list via the amphora
API, but it has not been developed yet.

I am pretty sure CentOS is getting the config file, as our gate job
that runs with CentOS 7 amphorae has been passing. It should be in the
same /etc/octavia/amphora-agent.conf location as in the Ubuntu-based
amphora.
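
If you want to double-check on one of your amphorae, something along
these lines should show the pushed config (assuming you built the image
with an SSH key you control; "centos" is the default cloud-image user
and the key name is just an example):

ssh -i octavia_ssh_key centos@<amphora-lb-mgmt-ip>
sudo cat /etc/octavia/amphora-agent.conf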

Michael



Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Hi Michael,

Oh ok! That config-drive trick was the missing part! Thanks a lot! Is there
a release target for the API vs config-drive approach? I’ll have a look at
an instance as soon as I’m able to log into one of my amphorae.

By the way, three sub-questions remain:

1°/ - What is the best place to push some documentation improvements?
2°/ - Is the amphora-agent an auto-generated file at image build time, or do
I need to create one and give it to the diskimage-builder process?
3°/ - The amphora agent source code is available at
https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent
isn’t it?

Sorry for the volume of questions, but I prefer to really understand the
underlying mechanisms before we go live with the solution.

G.

Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
No worries, happy to share. Answers below.

Michael


On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS <gael.therond@gmail.com> wrote:
>
> Hi Michael,
>
> Oh ok! That config-drive trick was the missing part! Thanks a lot! Is there a release target for the API vs config-drive approach? I’ll have a look at an instance as soon as I’m able to log into one of my amphorae.

No, I have no timeline for the amphora-agent config update API. Either
way, the initial config will be installed via the config drive. The API is
intended for runtime updates.
>
> By the way, three sub-questions remain:
>
> 1°/ - What is the best place to push some documentation improvements?

Patches are welcome! All of our documentation is included in the
source code repository here:
https://github.com/openstack/octavia/tree/master/doc/source

Our patches follow the normal OpenStack Gerrit review process
(OpenStack does not use pull requests).
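
As a rough sketch of that workflow (assuming you already have an
OpenStack contributor account and the git-review tool installed, e.g.
via "pip install git-review"; the branch name and commit message are
just examples):

git clone https://github.com/openstack/octavia
cd octavia
# edit the files under doc/source/ ...
git checkout -b doc-clarifications
git commit -a -m "Clarify amphora to control plane communication"
git review   # pushes the change to Gerrit for review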

> 2°/ - Is the amphora-agent an auto-generated file at image build time or do I need to create one and give it to the diskimage-builder process?

The amphora-agent code itself is installed automatically by the
diskimage-builder process via the "amphora-agent" element.
The amphora-agent configuration file is only installed at amphora boot
time by Nova using the config drive capability, and it is
auto-generated by the controller.
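
For reference, a rough sketch of building a CentOS-based amphora image
with the in-tree script (run from a checkout of the octavia repository;
flags and defaults can vary between releases):

cd octavia/diskimage-create
./diskimage-create.sh -i centos
# should produce an amphora-x64-haproxy.qcow2 you can upload to Glance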

> 3°/ - The amphora agent source code is available at https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent isn’t it?

Yes, the agent code that runs in the amphora instance is all under
https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends
in the main octavia repository.

>
> Sorry for the volume of questions, but I prefer to really understand the underlying mechanisms before we go live with the solution.
>
> G.
Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Ok sweet! Many thanks! Awesome, I’ll be able to continue our deployment
with peace of mind.

Regarding the documentation patch, does it need to follow a specific format
or some guidelines? I’ll compile all my annotations and push a
patch for the points that need clarification and a little bit of
formatting (layout issues).

Thanks for this awesome support, Michael!
Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Hi Flint,

Yes, our documentation follows the OpenStack documentation rules. It
is in reStructuredText format.

The documentation team has some guides here:
https://docs.openstack.org/doc-contrib-guide/rst-conv.html

However we can also help with that during the review process.

Michael

Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Ok ok, I’ll have a look at the guidelines and make some documentation patch
proposals once our PoC is working fine.

Do I have to propose these patches through Storyboard or Launchpad?

Oh BTW, one last question: is there an official Octavia amphora pre-built
image?

Thanks for the link.
Re: [OCTAVIA][KOLLA] - Amphora to control plane communication question.
Hi there.

We track our bugs/RFEs in StoryBoard, but patches go through the OpenStack
Gerrit. There is a new contributor guide here if you have not contributed to
OpenStack before: https://docs.openstack.org/contributors/

As for images, no, we don't provide one as an OpenStack group. We have
nightly builds here: http://tarballs.openstack.org/octavia/test-images/ but
they are not configured for production use and are not always stable.

If you are using RDO or Red Hat OpenStack Platform (OSP), those projects
do provide production images.
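
If it helps, a rough sketch of loading an amphora image (a nightly build
or one you built yourself) into Glance; the file name below is the
default diskimage-create output, and the tag has to match the
amp_image_tag setting in your octavia.conf:

openstack image create amphora-x64-haproxy \
  --disk-format qcow2 --container-format bare \
  --file amphora-x64-haproxy.qcow2 \
  --tag amphora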

Michael

Re: [OCTAVIA][KOLLA] - Amphora to control plan communication question. [ In reply to ]
Ok perfect, I’ll have a look at the whole process, clean up all my notes,
and turn them into clear documentation.

For the image I just chose to go the CI/CD way with DIB and successfully
built a working CentOS image.

I’m now facing an SSL handshake issue, but I’ll try to fix it myself first,
as I suspect my certificates are incorrect, and I’ll open a new thread if I
can’t figure out what’s going on.
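
For anyone who lands on this thread with the same symptom, a quick way to
exercise the two-way TLS handshake against an amphora agent on TCP 9443 is
a small Python probe like the sketch below. The amphora IP and the
certificate paths are just placeholders, not canonical Kolla locations, so
adjust them to wherever your deployment keeps the client certificate and
the server CA:

    import socket
    import ssl

    AMPHORA_IP = "172.24.0.23"  # placeholder: the amphora's lb-mgmt-net address
    SERVER_CA = "/etc/octavia/certs/server_ca.cert.pem"         # assumed path
    CLIENT_CERT = "/etc/octavia/certs/client.cert-and-key.pem"  # assumed path

    # Trust the CA that signed the amphora's server certificate and present a
    # client certificate, mirroring what the controller processes do.
    ctx = ssl.create_default_context(cafile=SERVER_CA)
    ctx.load_cert_chain(CLIENT_CERT)
    ctx.check_hostname = False  # amphora certs are not issued for the mgmt IP

    with socket.create_connection((AMPHORA_IP, 9443), timeout=10) as raw:
        with ctx.wrap_socket(raw) as tls:
            # Reaching this point means the handshake itself succeeded.
            print("Handshake OK:", tls.version(), tls.getpeercert().get("subject"))

If this fails with a certificate verification error, the problem is most
likely in the CA/certificate material rather than in the network path.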

Thanks a lot for your help, and kudos to the Octavia team for building a
rock-solid solution and doing awesome work.

I especially love the HealthManager/Housekeeping duo; I wish Nova and other
OpenStack services had something similar (Tempest, I’m looking at you) to
make my life easier by properly managing resource waste.
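
Purely as an illustration of why that duo feels so robust: each amphora
keeps reporting in by rotating through its configured list of
health-manager endpoints. The sketch below is not the real amphora-agent
code (the real packets are binary, HMAC-signed, and carry statistics), and
the addresses, port, and interval are made-up placeholders:

    import itertools
    import socket
    import time

    # Made-up placeholders; on a real amphora these values come from the
    # agent configuration pushed at boot time.
    CONTROLLER_IP_PORT_LIST = ["192.168.0.5:5555", "192.168.0.6:5555"]
    HEARTBEAT_INTERVAL = 10  # seconds

    def send_heartbeats(payload: bytes) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # Round-robin through the endpoints so one unreachable controller
        # does not silence the heartbeats.
        for endpoint in itertools.cycle(CONTROLLER_IP_PORT_LIST):
            ip, port = endpoint.rsplit(":", 1)
            sock.sendto(payload, (ip, int(port)))
            time.sleep(HEARTBEAT_INTERVAL)

    # Runs forever by design; Ctrl-C to stop the illustration.
    send_heartbeats(b"example-heartbeat")
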
On Thu, Aug 2, 2018 at 4:04 PM, Michael Johnson <johnsomor@gmail.com> wrote:

> Hi there.
>
> We track our bugs/RFE in storyboard, but patches go into the OpenStack
> gerrit.
> There is a new contributor guide here if you have not contributed to
> OpenStack before: https://docs.openstack.org/contributors/
>
> As for images, no we don't as an OpenStack group. We have nightly
> builds here: http://tarballs.openstack.org/octavia/test-images/ but
> they are not configured for production use and are not always stable.
>
> If you are using RDO or RedHat OpenStack Platform (OSP) those projects
> do provide production images.
>
> Michael
>
> On Thu, Aug 2, 2018 at 12:32 AM Flint WALRUS <gael.therond@gmail.com>
> wrote:
> >
> > Ok ok, I’ll have a look at the guidelines and make some documentation
> patch proposal once our POC will be working fine.
> >
> > Do I have to propose these patch through storyboard or launchpad ?
> >
> > Oh BTW, one last question, is there an official Octavia Amphora
> pre-build iso?
> >
> > Thanks for the link.
> > On Wed, Aug 1, 2018 at 5:57 PM, Michael Johnson <johnsomor@gmail.com> wrote:
> >>
> >> Hi Flint,
> >>
> >> Yes, our documentation follows the OpenStack documentation rules. It
> >> is in RestructuredText format.
> >>
> >> The documentation team has some guides here:
> >> https://docs.openstack.org/doc-contrib-guide/rst-conv.html
> >>
> >> However we can also help with that during the review process.
> >>
> >> Michael
> >>
> >> On Tue, Jul 31, 2018 at 11:03 PM Flint WALRUS <gael.therond@gmail.com>
> wrote:
> >> >
> >> > Ok sweet! Many thanks ! Awesome, I’ll be able to continue our
> deployment with peace in mind.
> >> >
> >> > Regarding the documentation patch, does it need to get a specific
> format or following some guidelines? I’ll compulse all my annotations and
> push a patch for those points that would need clarification and a little
> bit of formatting (layout issue).
> >> >
> >> > Thanks for this awesome support Michael!
> >> > On Wed, Aug 1, 2018 at 7:57 AM, Michael Johnson <johnsomor@gmail.com> wrote:
> >> >>
> >> >> No worries, happy to share. Answers below.
> >> >>
> >> >> Michael
> >> >>
> >> >>
> >> >> On Tue, Jul 31, 2018 at 9:49 PM Flint WALRUS <gael.therond@gmail.com>
> wrote:
> >> >> >
> >> >> > Hi Michael,
> >> >> >
> >> >> > Oh ok! That config-drive trick was the missing part! Thanks a lot!
> Is there a release target for the API vs config-drive thing? I’ll have a
> look at an instance as soon as I’ll be able to log into one of my amphora.
> >> >>
> >> >> No I have no timeline for the amphora-agent config update API.
> Either
> >> >> way, the initial config will be installed via config drive. The API
> is
> >> >> intended for runtime updates.
> >> >> >
> >> >> > By the way, three sub-questions remains:
> >> >> >
> >> >> > 1°/ - What is the best place to push some documentation
> improvement ?
> >> >>
> >> >> Patches are welcome! All of our documentation is included in the
> >> >> source code repository here:
> >> >> https://github.com/openstack/octavia/tree/master/doc/source
> >> >>
> >> >> Our patches follow the normal OpenStack gerrit review process
> >> >> (OpenStack does not use pull requests).
> >> >>
> >> >> > 2°/ - Is the amphora-agent an auto-generated file at image build
> time or do I need to create one and give it to the diskimage-builder
> process?
> >> >>
> >> >> The amphora-agent code itself is installed automatically with the
> >> >> diskimage-builder process via the "amphora-agent" element.
> >> >> The amphora-agent configuration file is only installed at amphora
> boot
> >> >> time by nova using the config drive capability. It is also
> >> >> auto-generated by the controller.
> >> >>
> >> >> > 3°/ - The amphora agent source-code is available at
> https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends/agent
> isn’t?
> >> >>
> >> >> Yes, the agent code that runs in the amphora instance is all under
> >> >>
> https://github.com/openstack/octavia/tree/master/octavia/amphorae/backends
> >> >> in the main octavia repository.
> >> >>
> >> >> >
> >> >> > Sorry for the questions volume, but I prefer to really understand
> the underlying mechanisms before we goes live with the solution.
> >> >> >
> >> >> > G.
> >> >> >
> >> >> > On Wed, Aug 1, 2018 at 2:36 AM, Michael Johnson <johnsomor@gmail.com> wrote:
> >> >> >>
> >> >> >> Hi Flint,
> >> >> >>
> >> >> >> Happy to help.
> >> >> >>
> >> >> >> Right now the list of controller endpoints is pushed at boot time
> and
> >> >> >> loaded into the amphora via config drive/nova.
> >> >> >> In the future we plan to be able to update this list via the
> amphora
> >> >> >> API, but it has not been developed yet.
> >> >> >>
> >> >> >> I am pretty sure centos is getting the config file as our gate job
> >> >> >> that runs with centos 7 amphora has been passing. It should be in
> the
> >> >> >> same /etc/octavia/amphora-agent.conf location as the ubuntu based
> >> >> >> amphora.
> >> >> >>
> >> >> >> Michael
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> On Tue, Jul 31, 2018 at 10:05 AM Flint WALRUS <
> gael.therond@gmail.com> wrote:
> >> >> >> >
> >> >> >> > Hi Michael, thanks a lot for that explanation, it’s actually
> how I envisioned the flow.
> >> >> >> >
> >> >> >> > I’ll have to produce a diagram for my peers understanding, I
> maybe can share it with you.
> >> >> >> >
> >> >> >> > There is still one point that seems to be a little bit odd to
> me.
> >> >> >> >
> >> >> >> > How the amphora agent know where to find out the healthManagers
> and worker services? Is that because the worker is sending the agent some
> catalog informations or because we set that at diskimage-create time?
> >> >> >> >
> >> >> >> > If so, I think the Centos based amphora is missing the
> agent.conf because currently my vms doesn’t have any.
> >> >> >> >
> >> >> >> > Once again thanks for your help!