The two types of interoperability
Hi everyone,

As you all know, the DefCore effort is working on redefining "core" (and
associated permitted usage of the OpenStack trademark), with the end
goal of fostering interoperability.

Interoperability is good, and everyone agrees we need it. However,
"interoperability" is a single word describing two very different
outcomes, and using that word has so far prevented us from having a
necessary discussion on which type of interoperability we are actually
after.

There are two types of interoperability. I call the first one "loose
interoperability": the idea is to minimize the set of constraints and
maximize the ecosystem able to call itself "openstack" (or
"interoperable").

I call the second one "total interoperability": the idea is to maximize
the set of constraints and have fewer deployments able to call
themselves "openstack" (but have those lucky few share more common traits).

Loose interoperability enables differentiation and is business-friendly
(larger ecosystem). Total interoperability enables federation and is
end-user-friendly (easier to move workloads from one openstack to
another, as they share more).

We can continue defining the process, but now we need to choose between
those two types of interoperability, as it affects the implementation
details (how we designate "must-use" parts of the code and determine
"must-pass" tests).

I think it is high time we looked beyond the process and decided what
our end goal here is.

--
Thierry Carrez (ttx)

Re: The two types of interoperability
On Wed, 2014-02-05 at 12:21 +0100, Thierry Carrez wrote:
> Hi everyone,
>
> As you all know, the DefCore effort is working on redefining "core" (and
> associated permitted usage of the OpenStack trademark), with the end
> goal of fostering interoperability.
>
> Interoperability is good, and everyone agrees we need it. However,
> "interoperability" is a single word describing two very different
> outcomes, and using that word until now prevented having a necessary
> discussion on what type of interoperability we are actually after.
>
> There are two types of interoperability. I call the first one "loose
> interoperability". The idea is to minimize the set of constraints, and
> maximize the ecosystem able to call themselves "openstack" (or
> "interoperable").
>
> I call the second one "total interoperability". The idea is to maximize
> the set of constraints, and have less deployments be able to call
> themselves "openstack" (but have that lucky few share more common traits).
>
> Loose interoperability enables differentiation and is business-friendly
> (larger ecosystem). Total interoperability enables federation and is
> end-user-friendly (easier to move workloads from one openstack to
> another, as they share more).
>
> We can continue defining the process, but now we need to choose between
> those two types of interoperability, as it affects the implementation
> details (how we designate "must-use" parts of the code and determine
> "must-pass" tests).
>
> I think it's more than time we look beyond the process and decide what
> is our end goal here.

Very nicely put, Thierry. I agree we need to be crystal clear on our
interoperability goals, otherwise arguing over the implementation
details of our interoperability program is fairly futile.

I tried and failed to express that idea here:

http://blogs.gnome.org/markmc/2013/10/30/openstack-core-and-interoperability/

I think the focus should be on immediate baby-steps towards
kick-starting this marketplace. One simple question – if and when we
certify the first batch of interoperable clouds, would we rather have
a smaller number of big clouds included or a large number of smaller
clouds? In terms of resource capacity provided by this marketplace, I
guess it’s more-or-less the same thing.

Let’s assume we absolutely want (at least) a small number of the
bigger providers included in the program at launch. Let’s say “small
number” equals “Rackspace and HP”. And let’s assume both of these
providers are very keen to help get this program started. Isn’t the
obvious next baby-step to get representatives of those two providers
to figure out exactly what level of interoperability they already have
and also what improvements to that they can make in the short term?

If we had that report, we could next do a quick “sniff test” comparing
this to many of the other OpenStack clouds out there to make sure we
haven’t just picked two clouds with an unusual set of compatible APIs.
Then the board could make a call on whether this represents a
reasonable starting point for the requirements of OpenStack clouds.

No, this isn’t perfect. But it would be a genuine step forward towards
being able to certify some clouds. We would have some data on
commonalities and have made a policy decision on what commonalities
are required. It would be a starting point.


Basically, I'd love to be able to look at a chart which plots
interoperability against the size of the interoperable
marketplace/ecosystem. And then we could have a rational discussion
about whether we aim for higher interop, even if that means a tiny
interoperable marketplace, or a larger interoperable marketplace with a
tiny level of interoperability.

It's a difficult trade-off, but it's *the* important decision for the
board to make. I'd prefer to be making that decision with data rather
than in the abstract but, in the absence of data, I think the board
should be discussing it.

To put the question another way - which helps users more? The ability to
write interoperable code which uses a large number of OpenStack APIs,
but which is only actually interoperable between a small number of
OpenStack clouds? Or interoperability whereby you have a smaller set of
interoperable APIs, but code that uses only those APIs would be
interoperable with a much larger number of OpenStack clouds?

My instinct would be to start with the set of APIs which are already
interoperable between the highest profile OpenStack clouds and build a
campaign around enlarging the set of APIs every release without damaging
the size of the interoperable ecosystem.

Mark.


Re: The two types of interoperability
Mark McLoughlin wrote:
> To put the question another way - which helps users more? The ability to
> write interoperable code which uses a large number of OpenStack APIs,
> but which is only actually interoperable between a small number of
> OpenStack clouds? Or interoperability whereby you have a smaller set of
> interoperable APIs, but code that uses only those APIs would be
> interoperable with a much larger number of OpenStack clouds?

I like your way of framing the question.

Personally, I think going for loose interoperability is short-sighted.
Yes, you'll have a lot of providers, but you'll forever have a bad
experience moving workloads around.

With total interoperability you may have fewer providers at the start,
but at least they provide a great experience moving workloads around.
And as more join them, it only gets better. Total interoperability is
basically the only way to one day reach the nirvana of "lots of
providers + a great end-user experience".

The trick is to bootstrap it. If you have 0 or 1 "true openstack" cloud
available at first, it's hard to get any benefit from that hypothetical
federation. Total interop requires a bit of a leap of faith.

So yet another way to frame that discussion (at board level) is: are you
more interested in convergence and federation (and beating Amazon all
together), or are you more interested in competing (and being a set of
individual, loosely coupled small competitors to Amazon)?

--
Thierry Carrez (ttx)

Re: The two types of interoperability
On Wed, 2014-02-05 at 13:04 +0100, Thierry Carrez wrote:
> Mark McLoughlin wrote:
> > To put the question another way - which helps users more? The ability to
> > write interoperable code which uses a large number of OpenStack APIs,
> > but which is only actually interoperable between a small number of
> > OpenStack clouds? Or interoperability whereby you have a smaller set of
> > interoperable APIs, but code that uses only those APIs would be
> > interoperable with a much larger number of OpenStack clouds?
>
> I like your way of framing the question.
>
> Personally I think going for loose interoperability is short-sighted.
> Yes you'll have a lot of providers but you'll forever have a bad
> experience moving workloads around.
>
> With total interoperability, you may have less providers at start, but
> at least they provide a great experience moving workloads around. And as
> more join them, it only gets better. Total interoperability is basically
> the only way to potentially reach one day the nirvana of "lots of
> providers + great end user experience".
>
> The trick is to bootstrap it. If you have 0 or 1 "true openstack" cloud
> available at first, it's hard to get any benefit from that hypothetical
> federation. Total interop requires a bit of a leap of faith.

Right, it's about bootstrapping - I'm all for total interoperability,
but feel it will need to be "looser than total" in order to bootstrap
it.

> So yet another way to frame that discussion (at board level) is: are you
> more interested in convergence and federation (and beating Amazon all
> together), or are you more interested in competing (and be a set of
> individual loosely-coupled small competitors to Amazon).

I'd hope there would be consensus around convergence and, if so, I'd
like to see a discussion about bootstrapping tactics.

Mark.


Re: The two types of interoperability
Having been part of the DefCore committee for a bit, I must say that
the work being done there has very good intent. Defining what type of
interoperability we want to measure and offer is also key to providing
OpenStack users the valuable information they expect.

However, as part of this committee, I have failed to make myself
heard on what is, in my mind, a very key point. The notion of core, or
interoperability, means 3 different things, because of the nature of
what we do and the populations being addressed:

1/ Which projects are part of core and can call themselves
OpenStack?
-> Interesting to developers and distribution builders

2/ Which distributions are based on core OpenStack and can therefore
call themselves OpenStack?
-> Interesting to cloud operators that want to pick an OpenStack
distribution

3/ Which clouds are delivering an OpenStack experience and can call
themselves OpenStack?
-> Interesting to cloud users and enterprises wanting to pick the
right infrastructure to deploy their application on

The current bylaws of the foundation fail to make this distinction in
defining the term Core, and therefore the DefCore committee has failed
to distinguish those 3 aspects and is producing what seems to me to be
a set of rules which, by trying to satisfy 3 very different needs at
once, will at best yield some lowest-common-denominator set.

So, yes, we need to determine which type of interoperability we want
to achieve, but I do think that we need to define what Core is
independently for each of the 3 cases above. The perception of
what OpenStack is cannot be defined independently of the type of users
and the interactions they are going to have with OpenStack, and this
may very well have implications for which body has the right to
validate what Core is.
Re: The two types of interoperability
+1 markmc

To get close to total interop (which I think is the goal, or ideal at least) you have to start where you are (bootstrap).

If we were to, at this moment, define OpenStack as something no current cloud would qualify for, that wouldn't be very practical. I think we can bootstrap while encouraging the trend to move towards the ideal over time.

I also think just having the information (the good, the bad, and the ugly) about what current clouds (and the distributions used to build them) support will improve market efficiency, regardless of where any lines might be drawn about what the results mean. Users/customers can and will draw their own conclusions.


On Feb 5, 2014 6:14 AM, Mark McLoughlin <markmc@redhat.com> wrote:
>
> On Wed, 2014-02-05 at 13:04 +0100, Thierry Carrez wrote:
> > Mark McLoughlin wrote:
> > > To put the question another way - which helps users more? The ability to
> > > write interoperable code which uses a large number of OpenStack APIs,
> > > but which is only actually interoperable between a small number of
> > > OpenStack clouds? Or interoperability whereby you have a smaller set of
> > > interoperable APIs, but code that uses only those APIs would be
> > > interoperable with a much larger number of OpenStack clouds?
> >
> > I like your way of framing the question.
> >
> > Personally I think going for loose interoperability is short-sighted.
> > Yes you'll have a lot of providers but you'll forever have a bad
> > experience moving workloads around.
> >
> > With total interoperability, you may have less providers at start, but
> > at least they provide a great experience moving workloads around. And as
> > more join them, it only gets better. Total interoperability is basically
> > the only way to potentially reach one day the nirvana of "lots of
> > providers + great end user experience".
> >
> > The trick is to bootstrap it. If you have 0 or 1 "true openstack" cloud
> > available at first, it's hard to get any benefit from that hypothetical
> > federation. Total interop requires a bit of a leap of faith.
>
> Right, it's about bootstrapping - I'm all for total interoperability,
> but feel it will need to be "looser than total" in order to bootstrap
> it.
>
> > So yet another way to frame that discussion (at board level) is: are you
> > more interested in convergence and federation (and beating Amazon all
> > together), or are you more interested in competing (and be a set of
> > individual loosely-coupled small competitors to Amazon).
>
> I'd hope there would be consensus around convergence and, if so, I'd
> like to see a discussion about bootstrapping tactics.
>
> Mark.
>
>
Re: The two types of interoperability
Mark Collier wrote:
> To get close to total interop (which I think is the goal, or ideal at least) you have to start where you are (bootstrap).
>
> If we were to, at this moment, define OpenStack as something no current cloud would qualify for, that wouldn't be very practical. I think we can bootstrap while encouraging the trend to move towards the ideal over time.

As long as we clearly establish that "total interoperability" is the end
goal, I think it's OK to start with some compromises to bootstrap
the effort, then gradually (but constantly) increase the constraints.

That said, that only works if there is a bit of consensus among OpenStack
companies that this is the end goal; otherwise you won't be able to
increase the constraints. Does everyone agree on the end goal?

In all cases, there is a lot of value in having the board clearly
state it.

--
Thierry Carrez (ttx)

Re: The two types of interoperability
> On Feb 5, 2014, at 7:13 AM, Nicolas Barcet <nicolas@barcet.com> wrote:
>
> Having been part of the defcore committee for a bit, I must say that
> the work that is being done there has a very good intent. It is
> also great to define what type of interoperability we want to mesure
> and offer is key to providing OpenStack users valuable information
> they expect.
>
> However, as part of this committee, I have failed to make myself
> heard on a very key point in my mind. The notion of core, or
> interoperability, means 3 different things, because of the nature of
> what we do and the population they address:
>
> 1/ Which projects are part of core and can call themselves
> OpenStack?
> -> Interesting to developpers and distribution builders
>
> 2/ Which distributions are based on core OpenStack and can therefore
> call themselves OpenStack?
> -> Interesting to cloud operators that want to pick an OpenStack
> distribution
>
> 3/ Which clouds are delivering an OpenStack experience and can call
> themselves OpenStack?
> -> Interesting to cloud users and enterprises wanting to pick the
> right infrastructure to deploy their application on
>
> The current bylaws of the foundation fails to make this distinction in
> defining the term Core, and therefore the DefCore committee has failed
> to distinguish those 3 aspects and is producing what seems to me a
> bunch of rules which, by trying to satisfy 3 very different things with
> a single set, is going to, at best, produce some lowest common
> denominator set.

I think Nick begins the discussion around a very important point that I have also, so far, failed to articulate well. We keep trying to solve multiple problems with one solution.

(Here is where I started to articulate, in my own words, those problems and realized it was essentially a repeat of Nick's.)

"Core" can't solve all of these with one brand/test/process and be very good at it. I don't think every feature/capability/option in Nova should be required in an "OpenStack cloud", for instance. But, just branding a collection of capabilities as OpenStack without anyway to extend the brand to the major projects is not proving to be a satisfying approach either. We need to find the best approach for these specific problems vs. trying to stretch "core" to be everything.


>
> So, yes, we need to determine which type of interoperability we want
> to achieve, but I do think that we need to define what is Core
> independently for each one of the 3 cases above. The perception of
> what OpenStack is cannot be defined independently of the type of users
> and what interactions they are going to have with OpenStack. and this
> may very well have ties onto which body has the right to validate what
> Core is.
Re: The two types of interoperability
----- Original Message -----
From: "Troy Toman" <troy@tomanator.com>
To: "Nicolas Barcet" <nicolas@barcet.com>
Cc: foundation@lists.openstack.org
Sent: Wednesday, February 5, 2014 8:08:02 AM
Subject: Re: [OpenStack Foundation] The two types of interoperability

> On Feb 5, 2014, at 7:13 AM, Nicolas Barcet <nicolas@barcet.com> wrote:
>
> > Having been part of the defcore committee for a bit, I must say that
> > the work that is being done there has a very good intent. It is
> > also great to define what type of interoperability we want to mesure
> > and offer is key to providing OpenStack users valuable information
> > they expect.
> >
> > However, as part of this committee, I have failed to make myself
> > heard on a very key point in my mind. The notion of core, or
> > interoperability, means 3 different things, because of the nature of
> > what we do and the population they address:
> >
> > 1/ Which projects are part of core and can call themselves
> > OpenStack?
> > -> Interesting to developpers and distribution builders
> >
> > 2/ Which distributions are based on core OpenStack and can therefore
> > call themselves OpenStack?
> > -> Interesting to cloud operators that want to pick an OpenStack
> > distribution
> >
> > 3/ Which clouds are delivering an OpenStack experience and can call
> > themselves OpenStack?
> > -> Interesting to cloud users and enterprises wanting to pick the
> > right infrastructure to deploy their application on
> >
> > The current bylaws of the foundation fails to make this distinction in
> > defining the term Core, and therefore the DefCore committee has failed
> > to distinguish those 3 aspects and is producing what seems to me a
> > bunch of rules which, by trying to satisfy 3 very different things with
> > a single set, is going to, at best, produce some lowest common
> > denominator set.
>
> I think Nick begins the discussion around a very important point that I have
> also, so far, failed to articulate well. We keep tying to solve multiple
> problems with one solution.
>
> (Here is where I started articulate, in my words, those problems and realized
> It was essentially a repeat Nick's)
>
> "Core" can't solve all of these with one brand/test/process and be very good
> at it. I don't think every feature/capability/option in Nova should be
> required in an "OpenStack cloud", for instance. But, just branding a
> collection of capabilities as OpenStack without anyway to extend the brand
> to the major projects is not proving to be a satisfying approach either. We
> need to find the best approach for these specific problems vs. trying to
> stretch "core" to be everything.

Strongly agree :)

To me the key interoperability question is "can I as a user easily understand the portability of my workload across different clouds that use the OpenStack label?". While one approach to this is that all clouds labeled "OpenStack" have exactly the same "core" API/toolset surface, I doubt this is a very practical approach. For example, I know many people who are deploying OpenStack without Swift, and that makes perfect sense for them. What they would care about is whether the code they wrote against their cloud's APIs (Nova, Glance, etc.) would work against another cloud. So if we instead approach the problem from a sense of "how can we give people a mental framework and perhaps a set of tools for understanding interoperability", I think we will end up in a much better place.

And I definitely agree that using the term "core" for this is extra confusing, as the same term applies to a project's status. However, just because a project is "core" does not mean that all instances of it expose the same API/toolset surface, due to API extensions, different release versions, different configuration options, different flavors, etc.
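
To make that concrete, here is a minimal sketch (the endpoints and tokens are placeholders) that compares the extension surface two clouds expose via Nova's GET /extensions call; the intersection is all a portable application can rely on:

    import requests

    # Placeholder endpoints and tokens; in a real run these would come from
    # each cloud's Keystone service catalog after authenticating.
    CLOUDS = {
        "cloud-a": ("https://compute.cloud-a.example/v2/TENANT_A", "TOKEN_A"),
        "cloud-b": ("https://compute.cloud-b.example/v2/TENANT_B", "TOKEN_B"),
    }

    def extension_aliases(endpoint, token):
        """Return the set of Nova API extension aliases a cloud exposes."""
        resp = requests.get(endpoint + "/extensions",
                            headers={"X-Auth-Token": token})
        resp.raise_for_status()
        return set(ext["alias"] for ext in resp.json()["extensions"])

    surfaces = dict((name, extension_aliases(url, token))
                    for name, (url, token) in CLOUDS.items())
    common = set.intersection(*surfaces.values())

    print("extensions portable code can rely on: %s" % sorted(common))
    for name, aliases in sorted(surfaces.items()):
        print("%s only: %s" % (name, sorted(aliases - common)))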

Dan

> > So, yes, we need to determine which type of interoperability we want
> > to achieve, but I do think that we need to define what is Core
> > independently for each one of the 3 cases above. The perception of
> > what OpenStack is cannot be defined independently of the type of users
> > and what interactions they are going to have with OpenStack. and this
> > may very well have ties onto which body has the right to validate what
> > Core is.

Re: The two types of interoperability
> On Feb 5, 2014, at 6:05 AM, "Thierry Carrez" <thierry@openstack.org> wrote:
>
> Mark McLoughlin wrote:
>> To put the question another way - which helps users more? The ability to
>> write interoperable code which uses a large number of OpenStack APIs,
>> but which is only actually interoperable between a small number of
>> OpenStack clouds? Or interoperability whereby you have a smaller set of
>> interoperable APIs, but code that uses only those APIs would be
>> interoperable with a much larger number of OpenStack clouds?
>
> I like your way of framing the question.
>
> Personally I think going for loose interoperability is short-sighted.
> Yes you'll have a lot of providers but you'll forever have a bad
> experience moving workloads around.

+1

>
> With total interoperability, you may have less providers at start, but
> at least they provide a great experience moving workloads around. And as
> more join them, it only gets better. Total interoperability is basically
> the only way to potentially reach one day the nirvana of "lots of
> providers + great end user experience".
>
> The trick is to bootstrap it. If you have 0 or 1 "true openstack" cloud
> available at first, it's hard to get any benefit from that hypothetical
> federation. Total interop requires a bit of a leap of faith.
>
> So yet another way to frame that discussion (at board level) is: are you
> more interested in convergence and federation (and beating Amazon all
> together), or are you more interested in competing (and be a set of
> individual loosely-coupled small competitors to Amazon).

I think this is the right conversation to have around the goal of interoperability. I think it is important to note that it was decided to NOT focus on interoperability in the initial phases of DefCore but to just try and solve the branding issue. Based on the feedback, it feels like the most pressing issue really is around interop and we should wade into that sooner rather than later even if it is (or maybe because it is) messy. As I said earlier, I also don't think it is helpful to try and solve both with one label or brand.

Given all that, I am very supportive of taking simpler first steps and "learning" our way into this. I think it is more likely to not only yield progress sooner but get us to the proper endpoint faster too. Making tangible progress on federation/convergence is critical for OpenStack in 2014 IMHO. Or, if we can't line up on that goal, then let's be explicit and stop setting that expectation. I really hope we can do the former. I'm not sure what a loosely-coupled set of clouds does to change the landscape.
>
> --
> Thierry Carrez (ttx)
>
Re: The two types of interoperability
I think "the branding issue" is inseparable from interop (unless I misunderstand what you mean by that).

I think the initial phases of defcore make sense, inasmuch as the focus is on defining what is expected to be in commercial products and services to be called "openstack", and on starting with a modest definition and a reasonable set of tests (largely based on what already exists) so that progress can be made quickly. To me these things are the critical next steps on the road to interop.

At the moment, there are zero tests being used as a gate for the use of the mark commercially*, so going from zero to any number of required tests is an infinite improvement IMHO, and one that can't come soon enough. Note that the tests are not the sum total of the requirements today for the commercial use of the mark, nor should they be in the future, but as an additive step they are most welcome and helpful.

Mark

*To be clear, the use of the mark DOES currently require, contractually, that products/services pass tests... the tests just need to exist. This is why I will throw a party when we can implement the first tests, however modest
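
For illustration only, here is a minimal sketch of the shape a first must-pass test could take: a plain unittest capability check against the Compute API, with the endpoint and token read from the environment as assumed placeholders (in practice the tests are expected to come from Tempest):

    import os
    import unittest

    import requests

    # Assumed placeholders; a real harness would discover these via Keystone.
    COMPUTE_ENDPOINT = os.environ["COMPUTE_ENDPOINT"]  # e.g. https://.../v2/<tenant>
    AUTH_TOKEN = os.environ["AUTH_TOKEN"]

    class ComputeCapabilityTest(unittest.TestCase):
        """A candidate must-pass check: the cloud must list flavors."""

        def test_list_flavors(self):
            resp = requests.get(COMPUTE_ENDPOINT + "/flavors",
                                headers={"X-Auth-Token": AUTH_TOKEN})
            self.assertEqual(200, resp.status_code)
            flavors = resp.json()["flavors"]
            self.assertTrue(flavors, "cloud advertises no flavors")
            # Each flavor must carry the minimal fields portable code uses.
            for flavor in flavors:
                for field in ("id", "name", "links"):
                    self.assertIn(field, flavor)

    if __name__ == "__main__":
        unittest.main()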




On Wednesday, February 5, 2014 10:44am, "Troy Toman" <troy.toman@rackspace.com> said:



>
>
> > On Feb 5, 2014, at 6:05 AM, "Thierry Carrez" <thierry@openstack.org>
> wrote:
> >
> > Mark McLoughlin wrote:
> >> To put the question another way - which helps users more? The ability to
> >> write interoperable code which uses a large number of OpenStack APIs,
> >> but which is only actually interoperable between a small number of
> >> OpenStack clouds? Or interoperability whereby you have a smaller set of
> >> interoperable APIs, but code that uses only those APIs would be
> >> interoperable with a much larger number of OpenStack clouds?
> >
> > I like your way of framing the question.
> >
> > Personally I think going for loose interoperability is short-sighted.
> > Yes you'll have a lot of providers but you'll forever have a bad
> > experience moving workloads around.
>
> +1
>
> >
> > With total interoperability, you may have less providers at start, but
> > at least they provide a great experience moving workloads around. And as
> > more join them, it only gets better. Total interoperability is basically
> > the only way to potentially reach one day the nirvana of "lots of
> > providers + great end user experience".
> >
> > The trick is to bootstrap it. If you have 0 or 1 "true openstack" cloud
> > available at first, it's hard to get any benefit from that hypothetical
> > federation. Total interop requires a bit of a leap of faith.
> >
> > So yet another way to frame that discussion (at board level) is: are you
> > more interested in convergence and federation (and beating Amazon all
> > together), or are you more interested in competing (and be a set of
> > individual loosely-coupled small competitors to Amazon).
>
> I think this is the right conversation to have around the goal of
> interoperability. I think it is important to note that it was decided to NOT focus
> on interoperability in the initial phases of DefCore but to just try and solve the
> branding issue. Based on the feedback, it feels like the most pressing issue
> really is around interop and we should wade into that sooner rather than later
> even if it is (or maybe because it is) messy. As I said earlier, I also don't
> think it is helpful to try and solve both with one label or brand.
>
> Given all that, I am very supportive of taking simpler first steps and "learning"
> our way into this. I think it is more likely to not only yield progress sooner but
> get us to the proper endpoint faster too. Making tangible progress on making
> federation/convergence is critical for OpenStack in 2014 IMHO. Or, if we can't
> line up on that goal, then let's be explicit and stop setting that expectation. I
> really hope we can do the former. I'm not sure what a loosely-coupled set of
> clouds does to change the landscape.
> >
> > --
> > Thierry Carrez (ttx)
> >
Re: The two types of interoperability
>
>
> My instinct would be to start with the set of APIs which are already
> interoperable between the highest profile OpenStack clouds and build a
> campaign around enlarging the set of APIs every release without damaging
> the size of the interoperable ecosystem.
>

By focusing on the greatest common denominator, you're focusing
specifically on backing those with the deepest pockets and the greatest
ability to execute. While there are some benefits to this, and it is true
that those with the biggest pockets are most likely to contribute back to
the community, the biggest detriment is that it damages the ecosystem by
potentially setting the bar too high for the smaller companies.

I'd rather start with the set of APIs that are "necessary and vital".
That is to say, define "core" per its purest definition: those things
that are, in fact, core. Standardize the things that are necessary for
interoperability because, without them, the cloud just doesn't work.
Don't enforce behaviors that are "wants" rather than "needs".

Such a process would focus on the merit of the API functionality rather
than the merit of the money behind its sponsors.

Regards,
Eric Windisch
Re: The two types of interoperability
We also need to define the process for changing the tests:

- What is the lead time before an additional set of tests becomes the standard?

- For how long will a branded cloud be allowed to continue to be branded while failing the tests?

There could be two different styles of tests, which could have different parameters (a possible encoding is sketched after the list):

- Those that increase the quality of testing (such as testing an additional use case)

- Those that increase the scope of testing (such as a new component)
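
To sketch what encoding those parameters could look like (purely illustrative; the field names, dates and test names below are hypothetical):

    from datetime import date, timedelta

    # Hypothetical versioned definition of the must-pass test set. Each
    # revision records when it becomes the standard (lead time) and how
    # long an already-branded cloud may keep the mark while failing the
    # newly added tests (grace period).
    TEST_SET_REVISIONS = [
        {"version": "2014.1",
         "effective": date(2014, 7, 1),
         "grace": timedelta(days=180),
         "quality_tests": ["compute.flavors.list"],   # deeper testing
         "scope_tests": []},                          # no new components
        {"version": "2014.2",
         "effective": date(2015, 1, 1),
         "grace": timedelta(days=365),                # new scope: longer grace
         "quality_tests": ["compute.servers.metadata"],
         "scope_tests": ["orchestration.stacks.basic"]},
    ]

    def required_tests(on_date):
        """All tests a cloud must pass to carry the brand on a given date."""
        tests = set()
        for rev in TEST_SET_REVISIONS:
            if rev["effective"] <= on_date:
                tests.update(rev["quality_tests"])
                tests.update(rev["scope_tests"])
        return tests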

Tim

From: Mark Collier [mailto:mark@openstack.org]
Sent: 05 February 2014 18:23
To: Troy Toman
Cc: foundation@lists.openstack.org
Subject: Re: [OpenStack Foundation] The two types of interoperability


I think "the branding issue" is inseparable from interop (unless I misunderstand what you mean by that).



I think the initial phases of defcore make sense, in so much as the focus is on defining what is expected to be in commercial products and services to be called "openstack", and in starting with a modest definition and a reasonable set of tests (largely based on what already exists) so that progress can be made quickly. To me these things are the critical next steps on the road to interop.



At the moment, there are zero tests being used as a gate for the use of the mark commercially*, so as long as X is >0 it's an infinite improvement IMHO, and one that can't come soon enough. Note that the tests are not the sum total of the requirements today for the commercial use of the mark, nor should they be in the future, but as an additive step they are most welcome and helpful.



Mark



*To be clear, the use of the mark DOES currently require, contractually, that products/services pass tests... the tests just need to exist. This is why I will throw a party when we can implement the first tests, however modest









On Wednesday, February 5, 2014 10:44am, "Troy Toman" <troy.toman@rackspace.com<mailto:troy.toman@rackspace.com>> said:

>
>
> > On Feb 5, 2014, at 6:05 AM, "Thierry Carrez" <thierry@openstack.org>
> wrote:
> >
> > Mark McLoughlin wrote:
> >> To put the question another way - which helps users more? The ability to
> >> write interoperable code which uses a large number of OpenStack APIs,
> >> but which is only actually interoperable between a small number of
> >> OpenStack clouds? Or interoperability whereby you have a smaller set of
> >> interoperable APIs, but code that uses only those APIs would be
> >> interoperable with a much larger number of OpenStack clouds?
> >
> > I like your way of framing the question.
> >
> > Personally I think going for loose interoperability is short-sighted.
> > Yes you'll have a lot of providers but you'll forever have a bad
> > experience moving workloads around.
>
> +1
>
> >
> > With total interoperability, you may have less providers at start, but
> > at least they provide a great experience moving workloads around. And as
> > more join them, it only gets better. Total interoperability is basically
> > the only way to potentially reach one day the nirvana of "lots of
> > providers + great end user experience".
> >
> > The trick is to bootstrap it. If you have 0 or 1 "true openstack" cloud
> > available at first, it's hard to get any benefit from that hypothetical
> > federation. Total interop requires a bit of a leap of faith.
> >
> > So yet another way to frame that discussion (at board level) is: are you
> > more interested in convergence and federation (and beating Amazon all
> > together), or are you more interested in competing (and be a set of
> > individual loosely-coupled small competitors to Amazon).
>
> I think this is the right conversation to have around the goal of
> interoperability. I think it is important to note that it was decided to NOT focus
> on interoperability in the initial phases of DefCore but to just try and solve the
> branding issue. Based on the feedback, it feels like the most pressing issue
> really is around interop and we should wade into that sooner rather than later
> even if it is (or maybe because it is) messy. As I said earlier, I also don't
> think it is helpful to try and solve both with one label or brand.
>
> Given all that, I am very supportive of taking simpler first steps and "learning"
> our way into this. I think it is more likely to not only yield progress sooner but
> get us to the proper endpoint faster too. Making tangible progress on making
> federation/convergence is critical for OpenStack in 2014 IMHO. Or, if we can't
> line up on that goal, then let's be explicit and stop setting that expectation. I
> really hope we can do the former. I'm not sure what a loosely-coupled set of
> clouds does to change the landscape.
> >
> > --
> > Thierry Carrez (ttx)
> >
Re: The two types of interoperability
Interop is a good goal. However, I frankly don't see us ever achieving
"total interoperability" by exclusively relying on trademark enforcement as
leverage.

The only path I see currently towards the type of interop that would allow
for workload federation across providers is if there were an upstream
project whereby various providers would proactively write and maintain
connectors into a central auth system... kinda like an "openstack native
rightscale." Those deploying openstack on-prem would then be able to
leverage this module to federate their on-prem environment with all the
providers that have a functional driver. I.e. just like you have multiple
storage provider drivers for Cinder, we need service providers to write and
maintain drivers against something in OpenStack. I've seen discussions and
even blueprints on federated keystone in the past, but am not sure how much
progress has been made... Thierry maybe you know more?

I.e. there has to be a technology solution to back up administrative
action, not just trademarks and definitions.
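
No such upstream project exists today, so the following is purely a hypothetical sketch of the driver interface I have in mind, by analogy with Cinder's backend drivers; the class and method names are invented for illustration:

    class ProviderConnector(object):
        """Hypothetical per-provider driver that a public cloud would
        write and maintain upstream, so on-prem deployments can federate
        with any provider that keeps its driver functional."""

        def authenticate(self, credentials):
            """Exchange federated credentials for a provider-local token."""
            raise NotImplementedError

        def import_image(self, image_ref):
            """Make an on-prem image available in the provider's image store."""
            raise NotImplementedError

        def boot(self, image_ref, flavor, **kwargs):
            """Start a workload on the provider; return a server handle."""
            raise NotImplementedError

    # An on-prem deployment would iterate over registered connectors to
    # burst workloads to whichever providers maintain a working driver.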

Thoughts?

-Boris


On Wed, Feb 5, 2014 at 6:25 AM, Thierry Carrez <thierry@openstack.org> wrote:

> Mark Collier wrote:
> > To get close to total interop (which I think is the goal, or ideal at
> least) you have to start where you are (bootstrap).
> >
> > If we were to, at this moment, define OpenStack as something no current
> cloud would qualify for, that wouldn't be very practical. I think we can
> bootstrap while encouraging the trend to move towards the ideal over time.
>
> As far as we clearly establish that "total interoperability" is the end
> goal, then I think it's OK to start with some compromises to bootstrap
> the effort, then gradually (but constantly) increase constraints.
>
> That said, that only works if there is a bit of consensus in OpenStack
> companies that this is the end goal, otherwise you won't be able to
> increase the constraints. Does everyone agree on the end goal ?
>
> In all cases, there is a lot of value in having the board clearly
> stating it.
>
> --
> Thierry Carrez (ttx)
>
Re: The two types of interoperability
On 02/06/2014 05:21 PM, Boris Renski wrote:
> Interop is a good goal. However, I frankly don't see us ever achieving
> "total interoperability" by exclusively relying on trademark enforcement
> as leverage.
>
> The only path I see currently towards the type of interop that would
> allow for workload federation across providers is if there was an
> upstream project, whereby various providers would proactively write and
> maintain connectors into a central auth system... kinda like an
> "openstack native rightscale." Those deploying openstack on-prem would
> then be able to leverage this module to federate on-prem environment
> with all the providers that have a functional driver. I.e. just like you
> have multiple storage provider drivers to cinder, we need services
> providers to write and maintain drivers against something in OpenStack.
> I've seen discussions and even blueprints on federated keystone in the
> past, but am not sure how much progress has been made... Thierry maybe
> you know more?
>
> I.e. there has to be a technology solution to back up administrative
> action. Not just trademarks and definitions.
>
> Thoughts?

Federation is interesting, too. However, I think you can have what I
would consider total interop without federation.

As a user, I just want to be able to point my application at a different
cloud (with a different set of credentials) and have it work and behave
the same way. There's no federation needed to achieve that.

The OpenStack project itself is a huge consumer of OpenStack clouds via
the infrastructure project. They shouldn't have to do special casing
for which cloud they're talking to, yet they do.
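
To make the point concrete, here is a minimal sketch (Keystone v2, placeholder credentials) where the only thing that should differ between clouds is the settings block at the top; the rest of the code is identical everywhere:

    import json

    import requests

    # The only per-cloud difference should be this settings block.
    CLOUD = {"auth_url": "https://identity.example.com/v2.0",  # placeholder
             "username": "demo", "password": "secret", "tenant": "demo"}

    def compute_endpoint(cloud):
        """Authenticate against Keystone v2 and pull the compute endpoint
        out of the service catalog."""
        body = {"auth": {"passwordCredentials":
                             {"username": cloud["username"],
                              "password": cloud["password"]},
                         "tenantName": cloud["tenant"]}}
        resp = requests.post(cloud["auth_url"] + "/tokens",
                             data=json.dumps(body),
                             headers={"Content-Type": "application/json"})
        resp.raise_for_status()
        access = resp.json()["access"]
        token = access["token"]["id"]
        for service in access["serviceCatalog"]:
            if service["type"] == "compute":
                return token, service["endpoints"][0]["publicURL"]
        raise RuntimeError("no compute endpoint in catalog")

    token, nova_url = compute_endpoint(CLOUD)
    servers = requests.get(nova_url + "/servers",
                           headers={"X-Auth-Token": token}).json()["servers"]
    print([s["name"] for s in servers])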

--
Russell Bryant

Re: The two types of interoperability
Folks,


I still struggle with this entire conversation. Linux is not interoperable; how do you expect OpenStack to be? RHEL is interoperable with RHEL/Fedora/CentOS, SUSE with SUSE, and so on; Debian and Ubuntu perhaps more so than most Linux distros. Take an app you wrote on Cray Linux and run it on Android. Or recompile it for Android, even. It may run, but it’s certainly not guaranteed.

The simplest example I can think of: HP vs. Rackspace Public Clouds. HP has chosen a “flat” network scheme. Rackspace has chosen the “multi_host” option. On HP, each VM has one NIC. On Rackspace, they have two NICs. If you build your app around one assumption vs. the other, then portability becomes difficult, regardless of the APIs in use. That forces you as the app owner to figure out the differences between these two clouds, regardless of them both being OpenStack and having the OpenStack APIs.

This is only the simplest example. There are dozens of these kinds of situations where architectural design decisions directly impact application portability and the API is irrelevant.
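
As a sketch of what that special-casing looks like in practice (Linux-only; the interface layout is an example), this is exactly the kind of cloud-specific assumption that leaks into application code:

    import os

    def service_interface():
        """Pick the NIC that should carry service traffic. Trivial on a
        one-NIC cloud; on a two-NIC cloud (public plus private network)
        the app has to choose, and the right answer differs by provider."""
        nics = sorted(n for n in os.listdir("/sys/class/net") if n != "lo")
        if len(nics) == 1:
            return nics[0]
        # Provider-specific assumption: treat the second interface as the
        # private/service network. Correct on one cloud, wrong on another.
        return nics[1]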

I see it as a simple issue: we either stifle choice OR we facilitate choice and compromise on interoperability.

I don’t think compromising on interoperability means there is no interop, but rather that there is a baseline of interoperability that is a lowest common denominator (an SQL92 equivalent, if you will) and then additional interoperability between “flavors” of OpenStack.

It took years for IPSEC, which was a standard protocol, to actually have decent interoperability, and I would argue it’s still difficult. This is largely because of the amount of choice that was available, which caused complexity. Choice is the enemy of interoperability, and choice seems to have been a key defining attribute of OpenStack. How can we continue to enable choice while searching for interop? The only solution I see is the one I’m proposing (above). You can try to dictate to the community what they should build, but I can’t see a way that is successful in the long term. Instead you are likely to cause forks.

Linux is significantly more mature than OpenStack and the Linux Standard Base (LSB) still struggles to unify the various Linux distros in a way that guarantees application portability. We should be focusing more on driving DefCore to conclusion, building a lowest common denominator baseline, and then figuring out what the “flavors” of interop are above that.

Just my $0.02.



--Randy

Founder & CEO, Cloudscaling
Board of Directors, OpenStack Foundation
+1 (415) 787-2253 [SMS or voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
CALENDAR: doodle.com/randybias









On Feb 5, 2014, at 4:04 AM, Thierry Carrez <thierry@openstack.org> wrote:

> With total interoperability, you may have less providers at start, but
> at least they provide a great experience moving workloads around. And as
> more join them, it only gets better. Total interoperability is basically
> the only way to potentially reach one day the nirvana of "lots of
> providers + great end user experience".
Re: The two types of interoperability
On Feb 6, 2014, at 3:36 PM, Russell Bryant <rbryant@redhat.com> wrote:
> As a user, I just want to be able to point my application at a different
> cloud (with a different set of credentials) and have it work and behave
> the same way. There's no federation needed to achieve that.
>
> The OpenStack project itself is a huge consumer of OpenStack clouds via
> the infrastructure project. They shouldn't have to do special casing
> for which cloud they're talking to, yet they do.

How do you fix this without dictating architecture/design decisions? I’ve been building infrastructure for 23 years and deploying apps on top of infrastructure in an automated fashion for 13 years. I just don’t understand how you resolve this problem without either special casing OR dictating a reference architecture.

Am I missing something obvious? I sure would like to know if I am.


--Randy

Founder & CEO, Cloudscaling
Board of Directors, OpenStack Foundation
+1 (415) 787-2253 [SMS or voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
CALENDAR: doodle.com/randybias
Re: The two types of interoperability
Randy,

While I can find no fault in your argument, I would point out that
Linux is POSIX compliant. And I can expect any software I write in C for a
POSIX-standardized environment to work if I slap it, statically compiled,
into /opt. I guess that's along the same lines as your SQL 92 argument?

I think there is a need to standardize some open interface architectures
that are expected to work the same everywhere, and I think that is the
thrust of this conversation. Not the specific implementation of service
management, or where you throw your own Cloud OS binaries; just a uniform
target for anyone deploying images into OpenStack or integrating their
software with it. Even if it is as limiting as "statically compile all
your code and throw it in /opt".

I agree with your points, but I wanted to get some clarity on how far
you are willing to go on standardizing architecture to support open,
standardized interfaces for integrators and users. We probably agree, and I
think for the most part the arguments being raised here are heading in the
same general direction, so that's a good sign.

-Matt


On Thu, Feb 6, 2014 at 9:52 PM, Randy Bias <randyb@cloudscaling.com> wrote:

> Folks,
>
>
> I still struggle with this entire conversation. Linux is not
> interoperable, how do you expect OpenStack to be so? RHEL is interoperable
> with RHEL/Fedora/CentOS. SUSE with SUSE and so on. Debian and Ubuntu
> perhaps more so than most Linux distros. Take an app you wrote on Cray
> Linux and run it on Android. Or recompile it for Android even. It may
> run, but it's certainly not guaranteed.
>
> The simplest example I can think of: HP vs. Rackspace Public Clouds. HP
> has chosen a "flat" network scheme. Rackspace has chosen the "multi_host"
> option. On HP, each VM has one NIC. On Rackspace, they have two NICs. If
> you build your app around one assumption vs. the other, then portability
> becomes difficult, regardless of the APIs in use. That forces you as the
> app owner to figure out the differences between these two clouds,
> regardless of them both being OpenStack and having the OpenStack APIs.
>
> This is only the simplest example. There are dozens of these kinds of
> situations where architectural design decisions directly impact application
> portability and the API is irrelevant.
>
> I see it as a simple issue: we either stifle choice OR we facilitate
> choice and compromise on interoperability.
>
> I don't think compromising on interoperability means there is no interop,
> but more of a baseline of interoperability that is a lowest common
> denominator (an SQL92 equivalent if you will) and then additional
> interoperability between "flavors" of OpenStack.
>
> It took years for IPSEC, which was a standard protocol, to actually have
> decent interoperability and I would argue it's still difficult. This is
> largely because of the amount of choice that was available, which caused
> complexity. Choice is the enemy of interoperability. Choice seems like it
> has been a key defining attribute of OpenStack. How can we continue to
> enable choice, while searching for interop. The only solution I see is the
> one I'm proposing (above). You can try to dictate to the community what
> they should build, but I can't see a way that is successful in the long
> term. Instead you are likely to cause forks.
>
> Linux is significantly more mature than OpenStack and the Linux Standard
> Base (LSB) still struggles to unify the various Linux distros in a way that
> guarantees application portability. We should be focusing more on driving
> DefCore to conclusion, building a lowest common denominator baseline, and
> then figuring out what the "flavors" of interop are above that.
>
> Just my $0.02.
>
>
>
> --Randy
>
> Founder & CEO, Cloudscaling
> Board of Directors, OpenStack Foundation
> +1 (415) 787-2253 [SMS or voice]
> TWITTER: twitter.com/randybias
> LINKEDIN: linkedin.com/in/randybias
> CALENDAR: doodle.com/randybias
>
>
>
>
>
>
>
>
>
> On Feb 5, 2014, at 4:04 AM, Thierry Carrez <thierry@openstack.org> wrote:
>
> With total interoperability, you may have less providers at start, but
> at least they provide a great experience moving workloads around. And as
> more join them, it only gets better. Total interoperability is basically
> the only way to potentially reach one day the nirvana of "lots of
> providers + great end user experience".
>
>
>
Re: The two types of interoperability
Randy Bias wrote:
> [...]
> I see it as a simple issue: we either stifle choice OR we facilitate
> choice and compromise on interoperability.
> [...]

Yes, that's another way to frame the question. My point in the original
email is that I know there are, in OpenStack, strong proponents of both
end goals. We shouldn't delay any longer the necessary discussion on what
our common long-term goal should be.

That said, I like Nicolas's point about the various types of trademark
usage (project names, distributions, deployments) and that slightly
different rules could be used for each of those. I also agree with Boris
that trademark usage is not the only weapon we could use to encourage
portability of workloads. And I like your point that even in the case of
"total interoperability", specific deployment choices will affect
portability, so unless you build architectural clones (which I think
nobody actually wants), interoperability will never be perfect.

To come back to your point, IMHO there is value in maximizing the
*feature set* that is common between deployments, even if deployment
architectures differ. You might need to adjust your workload to go from
a flat to a multi_host deployment, or have your app support both cases.
But if you rely on the Heat API on the first one and the other one doesn't
offer Heat at all, you have to rewrite your workload completely.
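
At minimum, an application can discover whether a given cloud offers a service before depending on it. A small sketch (the deploy_* helpers are hypothetical) that reuses the serviceCatalog list returned with a Keystone v2 token:

    def has_service(service_catalog, service_type):
        """service_catalog is the access.serviceCatalog list from a
        Keystone v2 token response."""
        return any(s["type"] == service_type for s in service_catalog)

    def deploy(catalog):
        """Choose a deployment path based on what this cloud offers.
        The deploy_* helpers are hypothetical application code."""
        if has_service(catalog, "orchestration"):   # Heat advertises this type
            return deploy_with_heat_template()      # hypothetical helper
        return deploy_with_plain_nova_calls()       # hypothetical fallback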

Cheers,

--
Thierry Carrez (ttx)

Re: The two types of interoperability
This thread has been very interesting to read and I find merit in ALL of the viewpoints. Most of them are already part of the discussion record and have been factored into the DefCore plans (including the multiple uses of "core" and "brand").

The whole point of "spider" was that there are multiple correct yet divergent viewpoints around OpenStack. Alan and I called these "tensions" and felt they were healthy in keeping us balanced.

I am also very impatient to move faster; however, experience has shown that we build consensus when we resolve concrete issues and chip away at problems like "core", "interop" and "brand".

In this cycle, we're making progress on defining core capabilities that should help move towards both interop and brand usage. Some of the issues in this thread are addressed specifically in how we are defining the criteria used to select must-pass tests (criteria list: http://robhirschfeld.com/2014/01/07/defcore-critieria/). We could use help on this and support in explaining the approach.

Note: There is also conflict on commercial vs. project brand use that we're running into and will need further clarification. I'd been hoping this could hold off until after the must-pass test cycle.

Rob

Re: The two types of interoperability
Hi Matt,


Yes, the POSIX argument goes in the SQL92 bucket. I should start using that example alongside it, as I think it might be easier for some to understand. Even in the case of POSIX, though, portability isn’t guaranteed, certainly not without at least a recompile. Not all shared libraries of the right revision are in each place, or even available to be installed via packaging. It’s an imperfect world and interoperability is hard.

I’m not advocating any particular approach. I’m trying to highlight the inherent contention between choice and interoperability. I want to solve these problems like everyone else and there are a variety of ways to go about it. Dictating cloud architectures seems anathema to what OpenStack is trying to accomplish, which means that we need to find another way [1]. Thierry had some good ideas I want to expand on in a separate thread.

And yes, I think DefCore is headed in the right direction.


--Randy

Founder & CEO, Cloudscaling
Board of Directors, OpenStack Foundation
+1 (415) 787-2253 [SMS or voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
CALENDAR: doodle.com/randybias

[1] I think everyone has already heard a million times how I think AWS EC2 and GCE are pretty much the same architecture (“elastic cloud”), and that I’ve bet Cloudscaling’s future on a version of OpenStack that recreates that architecture, but it’s worth stating again as a footnote. There *are* reference architectures out there that OpenStack can, and probably should, just mimic. These are the “flavors” I have been talking about. The CERN/ANL architectures are similar too, and they beg for an HPC reference architecture.

On Feb 6, 2014, at 7:07 PM, matt <matt@nycresistor.com> wrote:

> Randy,
>
> While I can find no fault in your argument, I would point out that Linux is POSIX compliant, and I can expect any software I write in C against a POSIX-standardized environment to work if I slap it, statically compiled, into /opt. I guess that's along the same lines as your SQL92 argument?
>
> I think there is a need to standardize some open interface architectures that are expected to work the same everywhere, and I think that is the thrust of this conversation. Not the specific implementation of service management, or where you throw your own Cloud OS binaries; just a uniform target that anyone deploying images into OpenStack, or integrating their software with it, can build against. Even if it is as limiting as statically compiling all your code and throwing it in /opt.
>
> I agree with your points, but I wanted to get some clarity on how far you are willing to go in standardizing architecture to support open, standardized interfaces for integrators and users. We probably agree, and I think for the most part the arguments being raised here are heading in the same general direction, so that's a good sign.
>
> -Matt
>
>
> On Thu, Feb 6, 2014 at 9:52 PM, Randy Bias <randyb@cloudscaling.com> wrote:
> Folks,
>
>
> I still struggle with this entire conversation. Linux is not interoperable, so how do you expect OpenStack to be? RHEL is interoperable with RHEL/Fedora/CentOS, SUSE with SUSE, and so on. Debian and Ubuntu are perhaps more so than most Linux distros. Take an app you wrote on Cray Linux and run it on Android, or even recompile it for Android. It may run, but it’s certainly not guaranteed.
>
> The simplest example I can think of: HP vs. Rackspace public clouds. HP has chosen a “flat” network scheme; Rackspace has chosen the “multi_host” option. On HP, each VM has one NIC. On Rackspace, VMs have two NICs. If you build your app around one assumption vs. the other, then portability becomes difficult, regardless of the APIs in use. That forces you, as the app owner, to figure out the differences between these two clouds, even though both are OpenStack and expose the OpenStack APIs.
>
> This is only the simplest example. There are dozens of these kinds of situations where architectural design decisions directly impact application portability and the API is irrelevant.
>
> I see it as a simple issue: we either stifle choice OR we facilitate choice and compromise on interoperability.
>
> I don’t think compromising on interoperability means there is no interop, but rather that there is a baseline of interoperability that is a lowest common denominator (an SQL92 equivalent, if you will), plus additional interoperability between “flavors” of OpenStack.
>
> It took years for IPsec, which was a standard protocol, to actually achieve decent interoperability, and I would argue it’s still difficult. This is largely because of the amount of choice that was available, which caused complexity. Choice is the enemy of interoperability, yet choice has been a key defining attribute of OpenStack. How can we continue to enable choice while searching for interop? The only solution I see is the one I’m proposing (above). You can try to dictate to the community what they should build, but I can’t see a way that is successful in the long term. Instead you are likely to cause forks.
>
> Linux is significantly more mature than OpenStack, and the Linux Standard Base (LSB) still struggles to unify the various Linux distros in a way that guarantees application portability. We should focus on driving DefCore to a conclusion, building a lowest-common-denominator baseline, and then figuring out what the “flavors” of interop are above that.
>
> Just my $0.02.
>
>
>
> --Randy
>
> Founder & CEO, Cloudscaling
> Board of Directors, OpenStack Foundation
> +1 (415) 787-2253 [SMS or voice]
> TWITTER: twitter.com/randybias
> LINKEDIN: linkedin.com/in/randybias
> CALENDAR: doodle.com/randybias
>
>
> On Feb 5, 2014, at 4:04 AM, Thierry Carrez <thierry@openstack.org> wrote:
>
>> With total interoperability, you may have fewer providers at the start,
>> but at least they provide a great experience moving workloads around.
>> And as more join them, it only gets better. Total interoperability is
>> basically the only way to one day reach the nirvana of "lots of
>> providers + great end user experience".
>
>
> _______________________________________________
> Foundation mailing list
> Foundation@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation
>
>
Re: The two types of interoperability [ In reply to ]
Thierry,


I’m in violent agreement on this one. Interoperability can’t be perfect, and looking somewhere other than OpenStack’s infrastructure components has real value. I have felt for a while that Heat should become the lingua franca for how applications are bootstrapped onto a cloud. Can a tool like Heat help us create applications that “run anywhere”, by interrogating an OpenStack cloud for its capabilities (e.g. how many NICs does a VM get?) and passing that data on to tools like Chef/Puppet or RightScale/Dell Cloud Manager?

I think it can. I remember that Dell Cloud Manager (née enStratius) would interrogate a cloud for capabilities, but perhaps having the cloud itself present its capabilities (starting with the “POSIX”-type baseline that DefCore is driving towards, plus additions specific to that cloud) is a better approach. Better yet, having something like Heat push that capability information into whatever framework it is bootstrapping (Chef/Puppet; Cloud Foundry, Apprenda, Stackato, or a PaaS of whatever kind; etc.) would be ideal.
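
In rough terms, and purely as a sketch (the endpoint, payload, and file path below are invented for illustration; no such capability API exists today), I’m imagining something like:

    import json
    import requests  # assumes python-requests is installed

    def get_cloud_capabilities(endpoint, token):
        """Ask a (hypothetical) capability endpoint what this cloud offers."""
        resp = requests.get(endpoint + '/capabilities',
                            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        # e.g. {"nics_per_vm": 2, "heat": true, "floating_ips": false}
        return resp.json()

    caps = get_cloud_capabilities('https://cloud.example.com:8774/v2',
                                  'my-auth-token')

    # Write the traits where Chef/Puppet (or whatever framework Heat is
    # bootstrapping) can pick them up and adapt the application to the cloud.
    with open('/etc/cloud-capabilities.json', 'w') as f:
        json.dump(caps, f)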

There will still be challenges in terms of QoS/SLAs being different, but at least you could always load an app written in this manner. It might also allow automated continuous integration with a variety of OpenStack reference architectures.


Best,



--Randy

Founder & CEO, Cloudscaling
Board of Directors, OpenStack Foundation
+1 (415) 787-2253 [SMS or voice]
TWITTER: twitter.com/randybias
LINKEDIN: linkedin.com/in/randybias
CALENDAR: doodle.com/randybias


On Feb 7, 2014, at 1:27 AM, Thierry Carrez <thierry@openstack.org> wrote:

> Randy Bias wrote:
>> [...]
>> I see it as a simple issue: we either stifle choice OR we facilitate
>> choice and compromise on interoperability.
>> [...]
>
> Yes, that's another way to frame the question. My point in the original
> email is that I know there are, in OpenStack, strong proponents of both
> end goals. We shouldn't delay any longer the necessary discussion on
> what our common long-term goal should be.
>
> That said, I like Nicolas's point about the various types of trademark
> usage (project names, distributions, deployments) and that slightly
> different rules could be used for each of those. I also agree with Boris
> that trademark usage is not the only weapon we could use to encourage
> portability of workloads. And I like your point that even in the case of
> "total interoperability", specific deployment choices will affect
> portability, so unless you build architectural clones (which I think
> nobody actually wants), interoperability will never be perfect.
>
> To come back to your point, IMHO there is value in maximizing the
> *feature set* that is common between deployments, even if deployment
> architectures differ. You might need to adjust your workload to go from
> a flat to a multi_host deployment, or have your app support both cases.
> But if you rely on the Heat API on the first one and the other doesn't
> offer Heat at all, you have to rewrite your workload completely.
>
> Cheers,
>
> --
> Thierry Carrez (ttx)