CNF WG Meeting 2021-04-26

A

Hi, we will get started in about two minutes. If you've just come in, you can add your name and any agenda items to the meeting notes.

A

All right, are there any walk-in items that people have that haven't been added to the meeting notes?

B

Yeah, I added two of them, just as a follow-up on the discussion we had last week around the external network orchestration and the glossary terms to be added. I think you created a discussion there, so I added a comment with a couple of terms and their definitions to that.

A

Great, thank you.

A

Does anyone have anything else to add? Meeting notes are in the Zoom chat as well as the Slack chat.

A

All right.

A

I'll just add the review of PRs that were merged during the week, and any open PRs, in addition to these.

A

All right, that's at the end.

A

So if you would like to go over yours on the glossary, the lexicon would maybe be the place to start.

A

And I can share my screen or you could share yours if you'd like.

B

Yeah go ahead with the share. Maybe if you have it.

A

All right, so the first item is this glossary/lexicon.

A

Open that up.

B

Yeah, so I think you created this discussion for the terms that need to be added to the glossary, which we can agree upon here and then create a PR, as it says.

B

So I added a couple of them, like network attachment, which relates to primary as well as secondary networks, and then different types of networks like overlay, external networks, and tenant networks.

B

The intention is to bring everyone onto the same page when we are using these terms and around what we are basically trying to define for the external network orchestration discussion, so as to ease such discussions and make sure we are talking about the same thing.

B

Yeah, so maybe go through the definitions and add comments, and we can review it offline in the comment section.

D

Thank you for this. I think we need to spend some time reading it carefully. At least I do.

B

Sure.

A

Does anyone have any comments or questions or anything right now?

E

Yeah, I'll chime in on something like 'pod'. I think we should avoid overloaded terms. I actually use 'pod' internally kind of the way that Alok has it here, but obviously 'pod' means something very specific in the Kubernetes world as well, which is important, right?

B

That is true. Like you said, we use it heavily when talking about optimized data centers, which involve compute, switches, storage, and the physical infrastructure they belong to. But you are totally right, it's an overloaded term, at least in the Kubernetes ecosystem, so I'll try to replace it with some other, non-overloaded term.

E

I mean, it's fine; we all need to help you through this, right? Because, like I said, I've used 'pod' the same way. Same thing with the term availability zone: that's a common term in the networking world, it's also a product feature in AWS, and it's a metadata construct in OpenStack. So I would say the big thing is, even if we do keep a term like pod (I'm not telling you to get rid of it), we have to be careful and explicit in the glossary because of these duplications.

B

I try to make it as explicit as possible, but yeah, maybe we can go through it and then...

F

But do we need Pod, with a capital P, in this glossary at all? I mean, what is the dependency on that construct?

B

We might not even need it, that's right. Yeah, so we can even remove that.

F

Yeah, if there is no need, then I think we should not introduce it.

B

Yeah, I think we are not talking about the data center, or we can replace it with 'data center', and I think the 'data center' one...

F

...will cover all of them.

B

Yeah yeah right awesome.

D

Another quick comment: the network attachment. Talking about primary and secondary, that's very Multus-specific, and maybe it's even going into too many details, talking specifically about the relationship to pods.

D

I think we should talk about it in a more abstract way, perhaps, but I'm not sure.

E

I'm inclined to agree with Tal. I would say, if we're going to bring in anything that's product- or solution-specific, maybe we just put a prefix on it. If we're talking about 'network attachment' and we call it just that, it should be abstract, to Tal's point. If we're going to talk about something that's specific to Multus, then maybe you just put 'Multus secondary network attachment' or something. But really we don't want to do anything in the glossary...

E

...that's already starting to prescribe things or build something from a specific solution standpoint. We're trying to get a common understanding of concepts in a way that we can evaluate different solutions fairly and equally across the board.

D

So my suggestion, and I know it's not a great suggestion, is that I've been using the term 'networking' rather than 'network', as in networking attachment.

D

What we're really missing here is even a definition of what a network is. But part of the problem is that the Kubernetes network plumbing group already took over the terms 'network attachment' and 'network attachment definition'. So that's already...

D

And it's referenced here, so that's already something that's defined, and it's true, it is kind of defined in relation to Multus specifically. So those terms do exist, but in some of the discussions we've been having, I was using this, Ian was using it too, we were thinking of a higher-level kind of abstraction, and I was using the word 'networking' rather than 'network', and it's not great.

D

I don't love that term, but it's a term I've been using to try to differentiate from the kind of lower-level plumbing that is referenced specifically by Multus.

F

I mean, I would argue, yeah, I understand what you're after. I would still argue that the term 'network attachment' as coined by the Kubernetes Network Plumbing Working Group is more generic than Multus; Multus is just a reference implementation of that concept. So I think this deserves a place, and we can qualify it here to mean exactly that. And then we could have a more generic abstraction of a secondary network attachment that encompasses the Multus secondary network attachment, NSM secondary network attachments, and any other type of secondary network attachment.

D

Yeah, that's a good idea. To get a little bit technical here for people who aren't totally versed in this: there is something called a NetworkAttachmentDefinition, which is a standardized CRD within the kind of standard Kubernetes namespace.

D

Sorry, namespace is the wrong word; the naming convention. Multus specifically adds an annotation to connect a pod to that NetworkAttachmentDefinition, and I won't comment on how awkward those annotations are; I'm not a big fan of them. But yeah, you're right, the Multus way of using those is specific to Multus, whereas a NetworkAttachmentDefinition by itself could live by itself. The strange thing is, if it lives by itself, there's no definition of how it will be used. It doesn't have a lot of specific meaning. And I'll also point out that it's a very minor definition; the CRD is extremely simple. It simply encapsulates a CNI configuration in JSON, so there's not a lot there. It's very, very generic.
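
For readers not versed in it, the two pieces being described look roughly like this. This is a minimal sketch, assuming the k8s.cni.cncf.io/v1 group published by the Network Plumbing WG's reference CRD; the macvlan CNI config and all names are illustrative only, and the Go program simply assembles and prints the objects to show their shape.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A NetworkAttachmentDefinition is essentially a named wrapper around an
	// ordinary CNI configuration, carried as a JSON string in spec.config.
	nad := map[string]interface{}{
		"apiVersion": "k8s.cni.cncf.io/v1",
		"kind":       "NetworkAttachmentDefinition",
		"metadata":   map[string]interface{}{"name": "macvlan-net"},
		"spec": map[string]interface{}{
			"config": `{"cniVersion":"0.3.1","type":"macvlan","master":"eth1","ipam":{"type":"static"}}`,
		},
	}

	// The Multus-specific part: a pod opts in to that definition via an annotation.
	podAnnotations := map[string]string{
		"k8s.v1.cni.cncf.io/networks": "macvlan-net",
	}

	out, _ := json.MarshalIndent(map[string]interface{}{
		"networkAttachmentDefinition": nad,
		"podAnnotations":              podAnnotations,
	}, "", "  ")
	fmt.Println(string(out))
}
```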

E

So on this note, too, when we talk about being generic versus solution-specific: looking at the way 'primary network' is laid out, I think...

E

...we should be careful about being generic in some areas and specific in others, because if it's all just K8s-centric networking, it should be called the K8s primary network or something. This is the awkward place we arrive at when we come to CNFs: if you talk to a network operator about primary networks, they're not thinking about it from a K8s perspective, at least in most cases. So I'm hesitant to use terms like 'primary network' that have very specific connotations, just because we're trying to bridge two worlds here. If you talk to a Kubernetes person and say 'primary network', they're probably going to have their own biases. So I think we should be explicit with our terminology when it's important, or if we do do something that's vague, like 'network attachment', then it should be, you know, like what I was saying.

E

It should be abstract, covering all potential implementations, or at least accommodating the different ways you might attach a network that aren't 100% K8s-specific.

D

Right, I think that's a very good point. I'll add that we usually talk about planes in our networking world, so we would maybe call this the Kubernetes control plane, but then at the same time, sometimes the data plane piggybacks over the control plane, you know, the primary network. So other terminology that we use includes planes, and we also have fabrics.

D

I think this is a very good start to help our thinking, but there's a lot more stuff in the glossary we need to add and think about. But thank you for this; this is a good opening shot.

C

Perhaps a better term might be 'default Kubernetes network'. It makes it very explicit.

B

During the discussion I was thinking the same; yeah, that could be a way to say it.

C

And 'default' also gives the implication that there may be other networks attached to it as well, as opposed to just a unified primary or secondary. 'Secondary' even gives the connotation that there are only two networks, when there may be more than two.

D

So I have a preference for calling it a plane, because 'network' is so overloaded; it's more than just a network. This is the technicality of it: yes, it's a layer-3 IP network, and dual stack is now supported in the latest version of Kubernetes.

D

So we can talk about it that way, but it's often implemented using some sort of fabric, some sort of SDN controller. So I'm more inclined to call it a plane, and then that plane itself is implemented through various networking solutions, right?
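
As a concrete reference for the dual-stack mention, here is a rough sketch of what a dual-stack Service carries on a cluster where dual stack is enabled; ipFamilies and ipFamilyPolicy are the standard Service API fields, while the service name and selector are made up for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Sketch of a dual-stack Service: with ipFamilyPolicy PreferDualStack the
	// cluster allocates both an IPv4 and an IPv6 ClusterIP when it can.
	svc := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Service",
		"metadata":   map[string]interface{}{"name": "demo"},
		"spec": map[string]interface{}{
			"selector":       map[string]string{"app": "demo"},
			"ipFamilyPolicy": "PreferDualStack",
			"ipFamilies":     []string{"IPv4", "IPv6"},
			"ports":          []map[string]interface{}{{"port": 80, "protocol": "TCP"}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```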

E

Now, I'm with you, because we have something, if you scroll down a little bit, like 'control network'. In my mind that has always been the control plane, right?

E

And so we have data planes, and then data planes can be subdivided. So the thing that's going to be tough, too, is figuring out how 'networky' (I'm going to make that a word) we get, just because I know from dealing with the Eds of the world in the past, who call us the sneaky network people, that they tend to get a little queasy when we start talking about overlays and this and that.

E

But when we start talking about a default Kubernetes network, or a control-plane Kubernetes network, there's still an overlay involved, right? You're riding on top of the underlay, doing IP-in-IP or some other encapsulation method that the CNI is brokering for you. So it needs to be explicit enough that real networking people, who are going to have to plumb these CNFs into their networks...

E

...(once again, there's that overloaded term) can make sense of it, but at the same time it's accessible to maybe the more cloud side of the house, where everything has just been abstracted in a YAML file up until this point.

D

Right, we care very much about the implementation details. That's the difference between us, I think, and some of the other parts of the Kubernetes world. I'll just add that Cilium might not be using overlay networks; there are other solutions to implementing that Kubernetes control plane.

E

And sure, even any of the ones that have direct BGP attachment, right? There are ways to directly peer with the underlay with Cilium. But that's the thing, though, right? That's why it's important to acknowledge planes, like you said, and to acknowledge overlays, etc., because that drastically changes things even at the CNI layer, where we do have some of these corner-case constructs. Yes, that's a good one.

E

I've gone through this before, because we did the same thing of bridging these two worlds in the NSM space, where nobody agreed on what a network was, what an attachment was, or what an interface was; even the term 'interface' was this super-complicated thing for all of us to agree on. But I do like the idea of us collectively centering around the concepts of networking, networks, planes, and overlays, because it helps clarify those implementation details you were describing.

E

And the last part I'll say on this: it's important, right? Because if we go with something like Cilium, then some of the NAT assumptions we typically come in with when we're dealing with Kubernetes, when we start talking about that primary network, for instance, may be false in certain contexts. So we don't want our terms to lead us astray.

C

Yeah, and another really good example, and this was one of the early pieces of writing on the wall that it was much more complex, was Calico.

C

So in fact, when the plumbing group was being created, we all met in person in Austin, and it was at the time called the multi-interface group, and one of the things we agreed upon was to get rid of that specific name. Because with Calico, if you want to add something or change something, you weren't going to add a secondary network or a secondary interface to it.

C

Instead, you're going to render your thing into Calico, and then Calico could make the right changes, or whatever else you want to make, within its capabilities. So we do want to be very careful in that it may not be a secondary network.

C

It may just be a configuration that's flipped in a control plane that provides the functionality you want within a single interface, a single network, but from a production or operational perspective that still ends up with the separation you want.

D

I'll point out another thing: one of the things I hope will be a deliverable from this group is suggestions and recommendations for the plumbing group. As I said, the current NetworkAttachmentDefinition accepted by the plumbing group is extremely generic and simple, and it's obvious why: there are a lot of problems in reaching an agreement and alignment that would please everybody. But we're a group that, I think, is versed in these things.

D

So I don't know how much the NetworkAttachmentDefinition is already set in stone.

D

Also, of course, as you know, in CRDs you can version things. So maybe version one of the CRD that exists now, or maybe it's v1alpha1, is already set in stone, but we could potentially think about a version two of that NetworkAttachmentDefinition.

D

That would eventually encapsulate a lot of the new thinking we might introduce here. Anyway, my hope is to eventually get to that point; that might take a while.
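
On the versioning point: a CRD can serve multiple versions side by side, with one marked as the storage version. Below is a trimmed, hedged sketch of how a hypothetical v2 could sit next to the existing v1 (schemas omitted; on a real cluster each version needs an openAPIV3Schema, and a conversion strategy if the shapes differ).

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trimmed sketch of a CRD serving two versions at once; "v2" here is purely
	// hypothetical, standing in for a future richer NetworkAttachmentDefinition.
	crd := map[string]interface{}{
		"apiVersion": "apiextensions.k8s.io/v1",
		"kind":       "CustomResourceDefinition",
		"metadata": map[string]interface{}{
			"name": "network-attachment-definitions.k8s.cni.cncf.io",
		},
		"spec": map[string]interface{}{
			"group": "k8s.cni.cncf.io",
			"scope": "Namespaced",
			"names": map[string]interface{}{
				"kind":   "NetworkAttachmentDefinition",
				"plural": "network-attachment-definitions",
			},
			"versions": []map[string]interface{}{
				{"name": "v1", "served": true, "storage": true},  // existing shape
				{"name": "v2", "served": true, "storage": false}, // hypothetical new shape
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```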

C

Yeah, and we should not try to deconflict the entire world here; we should just deconflict and explicitly say what we mean ourselves, locally. Because even with something like 'data plane', we've had conversations where it's like, 'Oh, this is a data plane.' 'No, no, that's actually a control plane; the real data plane is here.' And then you look at the hardware and it's, 'No, that's the control plane on the hardware; the real data plane is here.' It's turtles all the way down. So we should draw a line somewhere and say, here's explicitly what we mean. We're okay that it doesn't cover 100% of the edge cases; it should be clear what we mean, and if it's not clear, then let's make sure we get that clarity, but without having to deconflict across the whole industry.

D

Right. I'll just point out that there can be many control planes and many data planes. It's a control plane, not the control plane, necessarily.

C

Exactly. Yeah, my data plane is someone else's control plane.

G

So do you think that definition is important to include in the list?

E

I think, long term, personally, it is, because when we start talking about overlays you need to have context for what you're riding on top of. At some point you need to understand that if you're pulling, say, an SR-IOV VF into a pod or something, then you're starting to get down into the weeds, and who knows, maybe the best practices eventually say...

E

...SR-IOV is a bad idea. But when you start getting into those low-level things, and you start doing direct peering into the underlay, or even something like Calico, where you peer with the underlay versus building an overlay on top, you need to have that concept of an underlay and an overlay in place. And then it's exactly like the planes: it's not the overlay, it's an overlay, right? So I would say that it's important.

C

Yeah, I tend to think of it as: if it's something you have to build before you can establish connectivity, then it's maybe not guaranteed, but it may be an underlay. So, for example, if I have two Kubernetes clusters and I want to hook two Istio-based overlays to each other, or two similar systems...

C

...I cannot just say, hey, here's the connection; I have to go build something else before I can start establishing those Istio connections. So in that scenario, the thing I have to build underneath is a candidate for being called an underlay.

C

So that tends to be how I try to position or think of it, but I know there are rough edges to that definition as well.

D

Well, another term that might need some definition is 'mesh', right? I think we keep inventing new terms because 'network' is already taken, so there's fabric, plane, mesh, and I was always curious why Network Service Mesh took that term. But I don't know if mesh even has a common definition.

C

Well, it is a mesh of network services, and that's why we chose that term. It does fit, in that we negotiate connections between each other and establish those connections. The other phrase would be to call it a DAG, or not even a DAG, it's also a graph. But yeah, it is a hard problem, picking names that don't completely...

D

All of these things are graphs, all of them exactly so yeah.

A

Tal, one of the, I think, very important things you pointed out was: ideally we can get recommendations accepted, or at least seriously considered, upstream into Kubernetes, and I think the important thing there would be to make sure that whatever we use, we can communicate clearly how it relates to existing terms.

A

So for the conversations earlier around pod and other things: if we feel we need to use a term, and we identify where there's a conflict in the meaning, then we need to be very clear, wherever we use it, about what we mean. If we can do that, then when we present use cases they'll be a lot easier to consume, because what we're asking is for people to take their time to read through, understand what we want and need, and then try to find solutions.

A

So if we're going to do that, we want to make the barrier to entry as low as possible, and I would suggest, whenever possible, we try to use the existing terms, whatever they are.

D

Yeah, and by the way, one of the things we can contribute upstream doesn't have to be something technical in terms of a new definition; it could be updating the documentation. Right now the documentation for Kubernetes networking is problematic, I think, for some of us; some of the language there won't fit some of the concepts we have here. It's not generic enough.

D

So that could be something we do upstream: help Kubernetes find better language. It's no mistake that it took so long for Kubernetes to finally get dual-stack IP support.

D

Some of the initial thinking was just not thinking far enough ahead, so one thing we can help with is really better conceptualizing how networking is described in Kubernetes upstream. But we'll see, I don't know; that's putting the cart before the horse, maybe. I think we have a lot of work to get there.

C

Yeah, I would even go so far as to suggest that early Kubernetes was not even concerned about things like IP or similar. It was primarily concerned with just connectivity: I have a name, it resolves to something, can I connect to it? There were basically three properties that, if you met them, it was happy: nodes can talk to nodes, nodes can talk to pods, and pods can talk to pods. And how that happened...

C

...it didn't care about. It didn't care whether it was one IP, multiple IPs, or something else. It was trying to detach as much as possible and stay aloof of the network as much as possible, which, it turns out, left more complexity there than anticipated.

D

Well, there was a basic assumption that it would be TCP version 4 with a specific subnet, so it was making certain assumptions, and that's part of the problem, right?

C

Yeah, but possibly IP is the one assumption that it made in that path.

D

Right, yeah. I should say not TCP specifically; IP is the assumption.

D

All right.

A

So is there anything else before we move on?

E

Can you go back to the discussion, Taylor?

A

Which one? Over here, or...?

E

That one, and if you scroll up. So I don't know if you remember, on the very first call or second call I said we needed to define CNF; people came at me with pitchforks, and then sure enough none of us agreed what it was when the first PR was put in. So I've made another attempt and pulled this one from the TUG.

E

I think this is a definition we should probably get done sooner rather than later, because it's specific to our domain here and to what we're trying to contribute versus modify, from an originality standpoint. And then, if there's any agreement on this, I'll put a PR in for it. I'd also like us to start the argument on what 'Kube-native' means. For the record, for Kube-native I'm okay with just saying that it's designed intrinsically to run in Kubernetes, but I'm sure people will want more than that.

H

Sorry I'm late, by the way; I had another meeting and only just got out of it. But I saw Gergely made a perfectly reasonable point that Kubernetes varies from version to version, yet applications still run on Kubernetes regardless of the version. So I think there's something we can do with Kube-native here.

E

Yeah, so first there's the cloud native part; Victor kind of talked about maybe just rephrasing it a little bit. I'm fine with whatever, but I would like us to just have a starting point: when we say we're working on CNF stuff, what does a CNF mean?

H

I find it horrifying that we have to define it, but it's the emperor with no clothes in many regards. We can't work without having a definition; we can't just assume everybody has the same definition in their heads.

E

Well, not only that, but we should assume this definition is really just a placeholder, because we just got done discussing for 30 minutes the fact that basically everything we're using to build the definition of a CNF is poorly defined. So you get into this weird chicken-and-egg scenario. But as people come and start checking us out, with KubeCon right around the corner and this and that, if people just want to go back to their boss and say 'this is what a CNF is, based on what I heard from the CNF working group or the TUG', they at least have something.

D

Yeah, I'll add that some definitions can be very specific and some can be very generic, so we could potentially work out something that is kind of a big definition that allows for a lot of specificities.

H

We are going to have to accept that someone is going to disagree with our definition, because, as things stand, it gets a bit vague as to what precisely is and is not a CNF.

H

So I think even with a fairly light-handed definition we'll catch somebody out. But other than that, we don't have to go too far in depth. It doesn't need to be 'it runs with a certain kind of networking', 'it requires CPU pinning', this kind of thing. In fact, that isn't part of the CNF definition, I think we all accept, but somebody will say it is, so we're going to have to find some middle ground there.

A

Jeffrey, there's already a dedicated discussion for this one, and I would suggest we keep it over there, because the comments here are pretty short, but if we go over to the original discussion you started, there's a lot more back and forth on that.

E

I think I'll just put the PR in later today, and I'll probably incorporate some of Victor's suggestions. And then, coming back to what was said earlier, that's the one I pulled from the TUG white paper; the first one that caused all the conflict was the one I pulled from the CNF principles, etc. At some point, too, the thing is: theoretically all of this is agile, theoretically all of this is open, right?

E

So if we do something here and we've modified something we've borrowed from another place, we can always attempt to put PRs into those adjoining repos to try to make sure there's not this definition sprawl going around.

E

I don't think it's any secret to anybody that it's probably going to be the same 12 people looking at the TUG repo as the ones on this call right now, so I doubt it'll be that big of a challenge.

A

Frederick, I think you put forward the term Kube-native.

A

Do you have any feedback on that, or is there something written out that you've seen or that you have.

C

So I didn't write anything down on it specifically. The thought process I had here was that one of the traps we ran into was trying to keep it too generic.

C

With cloud native, we don't want to make the assumption that it runs in any specific place in the cloud, but instead try to narrow down what we mean to a smaller thing. I would argue, for example, that if you create a buildpack which runs in Lambda or runs in Cloud Run, and you were to run it there...

C

...that would also be cloud native, but it's likely not what we're looking for in these conversations. So the term Kube-native was thrown out specifically to try to provide some initial direction, saying, hey, what are the things we need to do in order to get these things not only running in the Kubernetes environment but running well? But there's also a trap here, because it is possible to weld ourselves too much to Kubernetes and the way it currently does things, where, as Kubernetes itself evolves, or maybe another platform comes around in the future, we're stuck in a similar position as before, not being able to run well.

C

So I think it's a balance. We don't want to turn the crank too far, but I still think the term is useful. I'm okay with dropping it in favor of another term as well, if that's what the group would like.

H

The purpose, I think, was originally purely to say that calling something cloud native does not mean it runs on containers, and saying it runs in containers does not mean it runs on Kubernetes. So Kube-native was a shortcut through the whole discussion, so that we didn't use 'cloud native' to mean something it didn't mean.

H

So if we try and keep it that light and airy, so it's really just a shortcut for what about 70% of people actually mean when they say cloud native, because they're not being careful with their words, then we might get somewhere with that.

C

Yeah, because we're...

H

...not trying to build network functions that run on anything but Kubernetes here. We're trying to say, you know, not cloud native network functions or containerized network functions, but network functions that run on Kubernetes. We're not pretending that anything else is our goal, I don't think.

C

Yeah like if.

A

I already marked it go ahead.

C

Like, if I were to create an L2 network and expose it behind an API that you could then use to add into a cloud in order to connect with an L2 network, is that cloud native? Some of the definitions would say yes, because it's nicely accessible through an API, you can declaratively state what it is, and so on. In other scenarios people would argue no, because it uses primitives which don't lend themselves well to cloud environments, or to things that tend to work well across cloud environments.

C

So I was trying to be careful in not having to argue those particular types of things. We could say: here's the best way to run within Kubernetes and how to interact with it, and that separates out the question as to whether it's a good idea to expose these types of things.

C

What are our best practices towards this, and what is something that runs well in those environments? Separating the two out isolates the conversation specifically to things that are within Kubernetes, rather than trying to drag in a whole range of other things, which we may eventually have to jump into, but we don't have to do now.

D

I don't know if this is helpful or will complicate things even more, but for me, I never liked the term cloud native network function.

D

I don't refer to workloads as cloud native; cloud native to me implies a set of practices. So you could potentially take a network function that was not necessarily designed with cloud native practices but wrap it in some sort of orchestration system, connectivity layer, and set of operators that cloud-nativizes it, right? It makes it work much better within the Kubernetes environment, in a way that can make it seem cloud native. Whether the workload itself was designed that way is almost beside the point.

C

A little bit of historic context: what you're describing is exactly what we meant, as in it's not enough to do a lift and shift; you really should redesign it to work in a cloud native environment. So if you have something that was lift-and-shifted into containers, that is not the intention of cloud native network functions. It's literally: how do you design following twelve-factor apps, creating good metadata that you can then consume and reason about?

C

That way your scheduler can make decisions about your workload, like: oh, you're a workload that supports IP, I'm not going to join you to something that only speaks Ethernet frames; instead I'll make sure I connect you to something else that's IP. Or if you use SR-IOV, I'll connect you to an SR-IOV-capable thing and make sure you get all that in the scheduler.
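
One concrete mechanism that exists today for making this kind of information visible to the scheduler is resource requests plus a network annotation: the SR-IOV device plugin exposes VFs as an extended resource, so the pod only lands on nodes that have one free. The resource name, image, and attachment name below are illustrative and depend entirely on how a given cluster is configured.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative pod sketch: the VF is requested as an extended resource so the
	// scheduler places the pod accordingly, and the secondary attachment is picked
	// through the Multus annotation. Names here are hypothetical.
	pod := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata": map[string]interface{}{
			"name": "sriov-workload",
			"annotations": map[string]string{
				"k8s.v1.cni.cncf.io/networks": "sriov-net", // hypothetical NAD name
			},
		},
		"spec": map[string]interface{}{
			"containers": []map[string]interface{}{{
				"name":  "app",
				"image": "example.com/app:latest", // placeholder image
				"resources": map[string]interface{}{
					"requests": map[string]string{"intel.com/sriov_netdevice": "1"},
					"limits":   map[string]string{"intel.com/sriov_netdevice": "1"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```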

H

And we're going to go around and around on this. I want to contradict Tal, and I'm going to bite my tongue and not, because I don't think this is going to be a productive use of our time on this call; there are probably as many perspectives on what cloud native means or could mean as there are people on this call. So let's take this to the discussion if you really want to have it. But I think, as I was saying, it started with: is Kube-native a useful sub-definition of what cloud native means? And I think it could be here, because it's more specifically what we're trying to accomplish.

C

Yeah, and to add to that, we spent well over a year trying to get people to come to agreement on what it means to be a cloud native network function, and there's still no agreement. So that's the other reason for driving towards Kube-native: to avoid all of the discussion around that, because that's a trap that will lead us down a dark hall we may never emerge from.

H

It seems to me that what we're trying to do here is find best practices for building applications that usefully serve end-user needs and run on Kubernetes.

H

Because it isn't the cloud-nativeness of the application, it isn't the Kube-nativeness of the application, so much as: does it help?

D

Yeah, that was pretty much my point as well. It's kind of nice to idealize and think of these pure, excellent cloud native functions that are out there, but, for example, our whole conversation about networking orchestration is not CNF-specific. You could work with PNFs as well.

D

We're thinking about the environment in which these network functions are going to be running, which is Kubernetes-based.

H

But of those pure cloud native functions, name three.

D

Right. Anyway, I don't think I'm helping here; I'm just making it messier.

H

We're all saying there is a difficulty here; we all see it, and we're all using different words to try and solve it. But this isn't a meeting for solving things, because if it were, it would be a lot longer than an hour.

A

It would be all week, eight hours a day. Everyone, if you have thoughts on Kube-native, then please add them to the discussion thread, and if you feel we need a dedicated thread just for Kube-native, then create one. We do already have a dedicated thread for the CNF definition, so feel free to add in there. And then Ian, you...

A

Well, sorry, not Ian; Jeffrey, I think, dropped. Jeffrey is going to create a PR, so we'll see what that looks like for the CNF definition. Let's move on. Alok, are you still with us? Do you want to talk about the discussion?

B

Yeah, I think there were some comments that we collected last time, just at the end of this discussion; if you scroll down, I think Jeffrey had some points. Unfortunately, I think he has dropped already, but...

B

Are you still around, Jeffrey?

B

No. So I think there was a discussion about: should we consider external network orchestration to be part of the Kubernetes ecosystem, or should an operator sitting inside a Kubernetes cluster be accountable for that kind of orchestration role?

H

I wanted to ask: it seems to me that we've got DANM, Multus, NSM, and a bunch of theoretical things that could exist but don't. They're solutions to a problem. They could potentially be a best practice if you can make a strong argument that one of them does everything that could possibly be conceived of and could never be bettered, right?

H

There is a perfection here and you've reached it. And I presume you're not arguing that; you're saying it's better, not the best ever. That's the question.

F

I think that's not our point, Ian. It is not at all substituting or replacing any of those that you mentioned; it is complementing them, because it's filling a gap for which there is nothing there today, and that is to orchestrate the networks that Multus and the CNIs can then use in order to attach pods to them.

H

Right, so you're thinking more in terms of the connectivity that Multus doesn't address, as opposed to the presentation that Multus does address. But, I mean, all right, fine.

F

Multus addresses a very small gap or task, and that is to plumb pods to networks that already exist and are configured up to a certain level inside the cluster, on the worker node. That's what the CNI does, right? ENO is addressing all the rest that is there: setting up those networks in the fabric and inside the cluster, and maybe on the DC gateway, in order to prepare the infrastructure for Multus to do its job, or for DANM to do its job.

H

Yeah, or anything, that's fine. What I was trying to get at, and I may not have used the most elegant words to do it, is that you can take this two ways. Either you can say that ENO itself is the best practice, or should be a best practice, because it solves this problem either as well as anything does right now or as well as it ever will be solved. The second part of this is, rather than take ENO the implementation...

H

...you take the problem space it was trying to address, which I think is what you were just talking about. You were saying that connectivity is an issue, and you ask: what have we just learned about the problem space?

H

I mean, what's your end goal? If I left you to your own devices, if I got you to write the best practices, what best practices would you write based on what you know about ENO?

H

Hypothetically anything will do.

B

With ENO we basically bring in, like Jan said, the automation for the external networks, which will then eventually be consumed by the network managers like Multus or DANM and NSM. So yeah, we are bringing in that sense of automation for such networks, which will then later be consumed by the network functions, and which doesn't exist today in the ecosystem.

B

Yes.

H

But I'm saying best practices: what best practice would you write that either declares that ENO is the best practice, or points strongly towards ENO as a good solution that satisfies the best practice? How would you phrase that?

F

I'm not sure, Ian, that I understand what you're after at all, I must say; I'm completely puzzled. What do you mean by best practice? Well, we have a challenge today: if an operator deploys a Kubernetes cluster, they have to manually set up all the networking underneath and inside the cluster in order to prepare for these secondary network attachment managers, like Multus and the CNIs that they control, to do their work. So we don't have a best practice today.

F

What we are trying to do is create something that provides an API, a Kubernetes-style API, so it's meaningful to actually host it on the cluster itself as a CRD, to provide an interface, a declarative way for an orchestrator to create those networks automatically. That's the idea; that's what we're after.

H

Yeah, right. And the reason I bring it back to best practices is because that's what this group writes. I'm trying to work out how we use those best practices to argue that ENO is great, or it's bad, or it's as good as you're getting right now. I think what you're saying is that a best practice here that we're looking for is that you have a set of APIs. Well, an initial best practice is that you have a set of APIs that allow you to reconfigure the networks you can attach to and where you want to attach to, and a long-term best practice would be: you use precisely this API, because this API is standard; if you use it, you'll work on any Kubernetes deployment you find. And note that neither of those actually says ENO in it.

H

But the thing is, I'm not trying to say ENO is good or bad.

H

You've heard that I've thought about this, and I think there are other things we can do here, but that's not to say that I'm right in my choice of implementation. I'm just trying to work out what it tells us that we can use from a best-practice perspective. And I do absolutely accept that ENO lets you do something that you need to do, and, interestingly, also something that today you practically speaking can't do, right?

H

So if we were to write user stories: user stories and use cases are not altruistic. In my experience, you write them with a fairly pointed aim of saying there is a hole here that we need to fix. So what you're saying here is: I would like to connect to the network that sits next to my cloud, and I currently can't do that. Well, you don't even have to say 'I can't do that'.

H

You simply have to say: in order to do that, I am going to need these things. And that's where I think I would take what you have and phrase it as: this is going to be a necessary component of whatever solution we build, because you aren't going to have network functions if you can't actually attach them to the right bit of the network where you want them to be attached.

F

Yes, exactly.

H

Yeah, I think that's your proposal here.

F

That's the thing that we want to say. The aim is that we define a northbound API, a Kubernetes-style API that can be hosted on the cluster itself, to provide an interface that can be consumed by orchestrators running on top of the clusters to actually do all the necessary network plumbing inside the data center, to prepare these networks to be consumed by CNFs.

F

Yes. And that is very open. The example that we have shown, and that we are coding in the PoC, is focusing on the very simple first use case: providing bridge domains across the fabric to connect the secondary network interfaces to. But that's just a starting point, and there may be other, more interesting use cases that will require additional API constructs to model them successfully. And yeah, we are 100 percent open to that. The main thing we're after is that we believe there should be such an API, and an operator underneath it that automates this.
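
Purely as a hypothetical illustration of the kind of northbound object being described here, and not ENO's actual API, a cluster-hosted custom resource for the bridge-domain use case might carry something like the following; every field and name in this sketch is invented for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical sketch only: a declarative request for an external L2 bridge
	// domain that an operator would reconcile against the fabric and DC gateway,
	// so that secondary attachments (Multus, DANM, NSM) can later consume it.
	externalNet := map[string]interface{}{
		"apiVersion": "example.cnf.io/v1alpha1", // invented group/version
		"kind":       "ExternalNetwork",         // invented kind
		"metadata":   map[string]interface{}{"name": "bridge-domain-100"},
		"spec": map[string]interface{}{
			"type":     "l2-bridge-domain",
			"vlanID":   100,
			"attachTo": []string{"worker-node-uplinks"}, // illustrative placement hint
		},
	}
	out, _ := json.MarshalIndent(externalNet, "", "  ")
	fmt.Println(string(out))
}
```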

A

Yes. It sounds like there are some best practices that are at least part of the design; you're trying to get some type of practical solution that could be used, and at a minimum you're saying declarative APIs for configuring the network.

A

But I was hearing other words mixed into the communication about what ENO is doing and what you're trying to accomplish that I would say are at least ideas of best practices that should be used, and then some things that sounded more like the implementation side underneath, whether it's the best practice or not. When you're talking about the plumbing and the northbound side and everything else, there are some concepts in there that are maybe not best practice but something else. So take some time to go through what ENO is and identify: here is an area where we actually are trying to follow a best practice, and here is an area where there is no best practice, or we don't know about one, and we're trying to solve this.

A

And if those could be labeled or identified, then we could look at the items that maybe don't have best practices and think more about those.

H

Yeah, I think also there are some things about its current implementation, such as the fact that it does layer-two networking; that one, I think, and you've heard my opinion on this before, isn't to my mind necessarily a best practice, because there are other ways of doing networking. They may or may not be more valuable, so that one might be more of an implementation choice. But again, it sounds like, as we've just said, that's not really the focus of this.

H

The focus of this is that we have absolutely a blank wall here. We can't do anything with networking and that itself is the problem that we need to address.

B

And regarding the L2 network, I think that's just one of the network scenarios; the initial implementation of ENO went with the fairly straightforward use case that we have chosen today. So, yeah.

H

That's fine. I appreciate why we did that, because it's a simple thing to do: it's logical, it's basic, and actually it follows the history, so everybody understands it. Totally fine, and it may well have its uses, and that's completely good as well. But I think if we divorce the two, then you don't lose one argument because you're trying to win the other one. We need a decent networking API.

H

We need to figure out what that networking API would do. We can work through some use cases or user stories specifically to work out how it could be used, whether it's a matter of network admin problems versus CNF owner problems. Those are all valuable things to address.

F

Yeah, I totally agree.

F

But whatever we do, we must not lose this simple, basic use case, because, whether you like it or not, it plays a very prominent part today, and we will have to continue to support it for a long time for many of these containerized cloud native network functions that have been built now based on existing technology, including SR-IOV and all these things we don't like. They're not cloud native, but they're out there, and we need to support them and we need to automate them as well.

A

Well, can you work on adding the use cases? You just mentioned several; I heard at least three, I would say, that could come out of that quick comment, and I think those are important to keep. Could you work on PRs to create those use cases, or at least create discussions for that? So we can do that, yeah. All right, I want to quickly go through, we've got about a minute, what's been merged. We switched 'individuals' to 'interested parties'; I'm going to click the wrong link there... so, interested parties.

A

If anyone would like to add themselves, it's now just a long list of anyone interested, and then we tried to attach company names to everyone so we can see that. So this is backwards compatible with what we have already. But please, if you're not on here and you'd like to be added, then do a PR to add yourself, and it includes the GitHub username. We removed the tech leads from the governance items until we need them; we can add them back later.

A

If we decide that it's necessary. But to simplify things for now...

H

What we were trying to do there is: they'd kind of grown up as a concept without really having a purpose, so we thought it was better to remove the wording until we figure out what we want people to do, and then we will fold it back in. So it's not like they've gone forever. We're not trying to change the way things work; we're just trying to make sure that need drives change versus change for change's sake.

A

Yep, all right. And let's see, the acceptance process for delegation: this has been merged, so this is about the simpler items and will be based on the contributing guide and the pull request information. All of this is now merged in there and you can see it, and I think those are the top ones.

A

There are a few pull requests, some pretty minor ones, but if you want to review and give any feedback, that's there. We still want to get Luke's use case through, so please do some reviews on that and let's hopefully get it in by end of week. Thanks everyone for your time; we'll see you next week.

A

Thank you all.

D

Right thanks.

D

Bye.
