
A

Great, I see we have the meeting minutes shared, so thank you, whoever is sharing them.

B

Cool.

A

We have enough people here, so let's get started. Welcome to the next Network Service Mesh meeting. Every week we have this particular meeting. We have two others which are currently on hiatus — the docs/use case meetings — and we may reconvene them as needed. There's also the CNCF Telecom User Group, which we join in on, which occurs every first and third Monday at 8 a.m. Pacific, and of course this call, which occurs every Tuesday at 8 a.m.

A

So we have some major events coming up. We have ONS Europe, occurring in Antwerp, with four accepted talks. We have Open Source Summit in Lyon, with one accepted talk by Ivana and Radoslav. We have KubeCon + CloudNativeCon North America in San Diego, and we have announced Network Service Mesh Con as a day-zero event there. Please register — there's limited space — and there's also a call for proposals.

A

So please — the most important thing people can contribute here is content, so submit talk proposals.

C

And to that point, please note: the CFP closes this Friday. So get your talk proposals in.

A

We also have sponsorships available.

A

I believe the agenda has also been posted for KubeCon, and we have — will have — a maintainer track talk.

D

Yeah, that's the only one that I know of. I don't know if anyone else got anything else accepted.

C

No, for the main program — we just have the maintainer talk. It's getting harder and harder for things to get on the main program, yeah.

D

So.

C

I.

A

My guess is they have less than a 5% acceptance rate at this point.

C

So my little math brain tells me I need to get 20 talk submissions in next time.

D

Taylor, did you get anything accepted, by chance?

B

On.

B

For KubeCon, yeah — we're doing maintainer track talks; we didn't get accepted on the other ones. We were looking at doing it the wrong way, joking.

C

It might be good if you can add a link to it, because I suspect your talk will be of interest to the NSM community as well — you guys do a lot of stuff, and there's a lot of good interplay. If you could add it, that would be great.

B

Sure.

A

Yeah, which brings me to — at ONS Europe there is also the Telecom User Group meetup, which I believe is going to occur Thursday at 11:45 a.m. local time. So if you're still around on Thursday, that may be an interesting place to meet up.

A

Correct me if I'm wrong on that, Taylor — it might be a different meeting, but I believe that's what it was. And let's see, we have a social media community team. Lucina, I saw you on the call — can you give us a report?

E

Thank you. So this week — I'll start backwards — I was able to post about the OVS Orbit podcast episode. Everything is clickable, so if you have a Twitter account, please feel free to click through from these meeting notes and retweet, like, all that good stuff to promote it. There's also an announcement, a reminder, of the Network Service Mesh and CNF Testbed session at Open Networking Summit this month. I've also posted the CFP for Network Service Mesh Con at KubeCon North America.

E

There was also a post that I retweeted about the CNCF webinar "Intro to Network Service Mesh" that will be on October 2nd; details on how to RSVP are available in that tweet, which is linked in the meeting notes. And I created a thread for all of the KubeCon sessions that mention Network Service Mesh — that one got a lot of traction. I tagged 10 people in it, and didn't realize that when I created comments with each session listed, those 10 people would be tagged again.

E

So that was a learning experience, but it also got a lot of eyes on Network Service Mesh. If you're curious which sessions will be at KubeCon North America in November, that tweet is one place you can find them, and it's also a reference point if we want to copy/paste the URLs, dates, and times into these meeting notes — it's all together there. I also posted a reminder of today's working group session. And there was a really good comment from an account that said Network Service Mesh

E

has a winning architecture. So I put the link to that — really high praise — into the meeting notes. Congrats!

A

That's really cool. Thank you very much.

E

You're welcome.

A

So, on to announcements. For the CNF Testbed, there's an announcement from Nikolai and Michael.

D

So today we had another small breakthrough. Essentially, for the last ten days — Michael, or at least a week, yeah—

F

I think I've been working on it more extensively just in the last couple of days.

D

Okay — so Michael has been trying to inject an external interface and create the so-called gateway, which is not the final story that we want to show, but it's the beginning, the precursor, of the final solution, I believe.

D

So we have an end-to-end external host going through this gateway, over a physical interface, into NSM, and you can pass pings back and forth between the external world and the chain of services on the Packet machine.

D

VPP-based clients, kernel-based clients — all these things are chained together, and it's working. It's something we want to show in two weeks at Open Networking Summit, one of the examples we want to show. So we are kind of proud of it this time.

F

Yeah, it's a good step in the right direction at least. I guess it's last minute, but that's about usual for us.

C

Now we just have to get the hardware NIC stuff wired into Network Service Mesh, and we can make it all work.

F

Yeah, just to add: we're doing a few workarounds right now. In particular, we're loading the kernel driver from the host and pretty much having a privileged container which can then access it; using the DPDK plugin for VPP, we're able to attach VPP to the interface and do whatever we need to do there.

D

Okay — not quite the same as using Multus, mm-hmm.

F

It's an alternative, and it's working, yeah. We discussed briefly today as well that we probably need to look into some better way of doing it, and there are quite a few options — Multus being at least one of them — where we could do this in a bit of a prettier way than we're doing right now. My biggest concern is just that we need to have it ready for ONS as well.

C

One of the things that's coming, once we really get folks looking at doing the hardware NIC stuff — I'll probably go revisit that spec and pretty it up with a little more recent stuff, because that will handle not only how you get the NIC in there, but how you have something that will properly, in an orderly way, call the right APIs to set up the particular VLANs and whatnot correctly for you, so you actually get the network service that you want there, yeah.

G

It's.

C

So, basically, bringing proper dynamicity into making sure the top-of-rack switch is offering the right network service to you.

D

At least for me this step is valuable, because we stumbled into some unexpected problems I'd maybe never thought about in setting up DPDK: do you need a privileged container? How do you isolate a specific device? Because we currently are not really able to do that — it just maps whatever is out there. So it's the beginning of the learning curve, at least for me. Whatever we have in the higher-level specs will surely be able to take advantage of this work; that's my point.

A

Yeah, we'd love to have your involvement in the creation of this stuff as well, because you have experience setting up and managing these things, and that's invaluable.

A

Next we have the SDK evolution, so—

D

It was updated.

A

My apologies — let's roll back. Taylor, the CNF Testbed roadmap?

B

No worries. This is just building on what everyone's been talking about, and I can share this — I'll just send the slide via Zoom, I guess. Can someone click that, whoever's sharing? I can add it to my notes as a link as well.

D

Yes, please, please.

B

All right. Yeah — there's also one in the repo, but I need to get that one updated with what we did here.

B

It's a little bit off anyway. So, as you can see, most of the use cases that we're focused on are going to be around NSM for the next couple of months, and what Michael and Nikolai were just talking about is that second one in September — the NSM physical NIC gateway — which we're planning on using as the example we talked about, as a tutorial-type thing, as well as in the talk that Nikolai and I are doing.

B

That's, I guess, the big one we're trying to have ready for ONS. We're also in the middle of refactoring a lot of different things in the CNF Testbed, including what's listed about the use cases. We're also working on the provisioning of machines and clusters, as we're finishing some work on that we're also doing in the CNCF CI project, and switching over to Kubespray.

B

But this part is something you'll probably see for ONS: moving towards, at least on the Kubernetes side, splitting up the use cases into reusable components. Michael and a few of us have been working on this; it's in a different branch right now, but ideally we're going to at least have the NSM packet-filter use case ready by ONS in this new setup, and probably, if timing is right, we can also get this physical NIC gateway use case we're working on in there, depending on where things are.

B

But we think this is going to be a lot nicer for people to contribute to — they can come in and work on service chains or CNFs or whatever they want and add different pieces. Further down the line you can see DANM on there; we've had a request to get DANM into the CNF Testbed, as well as the Nokia CPU-Pooler. I've put those further out because I don't know when they'll have time, but I—

B

—think by the time we hit November we'll be able to focus on it with other people, even if Nokia's not available. And that hybrid Kubernetes/OpenStack use case is the one we were trying to get accepted with Nikolai and a bunch of other folks; ideally we can still target getting that up, if we can get some OpenStack help on some of the VPP tunnel issues we're running into. That'd be one of the main things, and it could be a talk—

B

—that would be good at NSM Con. We've got some others further out, including talking to some folks about switching to Kolla / OpenStack-Helm — there are some Packet projects and people wanting to help on that — and then the Multus/Intel stuff comes from Intel:

B

They have a Container Experience Kit that Michael's already tested on Packet, so we're hoping to pull some of that in as well. The last thing — if folks are interested in getting involved, let me know; I put it pretty far out, for January, kind of thinking about Mobile World Congress in Barcelona, which is in February next year — would be a GSM/5G type of use case.

B

Connecting this — Packet has facilities connected to Sprint's 5G network, and they're willing to work with us to get access to various things. So if you're interested and would like to talk about that, let's see if we can put something together.

H

Taylor — in that 2020 timeframe, when you're thinking about the UPF, what would the UPF look like?

B

I guess I'd say: to be determined by what we have available.

A

Yeah.

B

I've been talking with Packet a little to see some of that, and I think we're going to have some conversations with Sprint, probably post-ONS. But if you have some thoughts on something that'd be interesting to do, let me know — now, or on any of these.

H

Well,.

B

Go ahead.

H

Let me do some research first, and then I'll get back to the NSM call and we can chat about it.

A

That's fantastic. We did have a very minor presence at the last Mobile World Congress — basically at the Cisco booth they had a micro-booth with some people talking about upcoming things Cisco has been involved with. But getting to something where we can actually have something working there would, I think, be absolutely fantastic, so I'd love to make that happen.

A

So.

A

Is there anything else we want to talk about on the roadmap?

A

Okay. With that, let's move to the status of the projects. For this I will hand it off to Ed, and he can start poking the various people.

C

The SDK evolution work finally landed last week. This is not only going to make it easier to write fragments of your network service endpoints; it's also set up in a way that gives you internal tracing, so you can see the progression through the many pieces, which can be very helpful in figuring out what's going on — particularly if you have timing issues on requests. It's also set up so that any logs inside your SDK fragments show up in the spans.

C

So it should make things quite a bit easier to work through the SDK. Now, there is a need for better docs there, and there's also a matter we need to discuss, either now or further down the road, around multiple Go modules in the main repo. One of the problems folks are having with the SDK, which I think a number of people have hit,

C

is that the SDK, because it's in the main repo, pulls in a bunch of requirements you don't actually need via Go modules, which makes things harder than they have to be. So we're looking at solutions. The two that have come to mind: it's possible to have multiple Go modules in the same repo, or we could break the SDK into its own repo.

C

Do folks have thoughts or opinions? I know you were hitting some of this, Frederick.

A

Yeah — my preference would be eventually to do one of two things. The absolute best case would be to convince the Go team to do some analysis of what we actually need — in other words, download it and do something like a pre-compile step — but I don't think we're going to get that anytime soon. The problem is not the size of it, even though that will be a problem for some. The biggest problem we're going to run into

A

is that when you pull in something that has a very large number of dependencies, you're creating a burden on the integration of that library with others, and you're limiting the scope of what others can upgrade to — potentially unnecessarily, considering most of those dependencies are not being used. The biggest one I can think of on the SDK side is the Kubernetes dependency. I'm not a hundred percent positive yet, but I don't think

A

we have a dependency on the Kubernetes repo within the SDK, and that's by far the biggest one we need to jump over. The second thing is a problem of go mod tooling: because of the way Kubernetes is versioned and released,

A

it's not very go-mod-friendly, so you have to put in a list of like 15 different `replace` directives to actually pin exact versions. Once you've done that it'll work, but it basically turns into a magic incantation that feels fragile to me. I think this problem would go away if we were to split it off. So those are my primary concerns at this point.
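
The "magic incantation" being described looks roughly like the following go.mod fragment — the module path and versions here are placeholders for illustration, not the project's actual pins:

```
// go.mod (illustrative; module path and versions are hypothetical)
module github.com/example/networkservicemesh

go 1.12

require k8s.io/client-go v12.0.0+incompatible

// Kubernetes staging repos are not tagged as plain semver modules,
// so each one has to be pinned explicitly with a replace directive.
// Real lists run to a dozen or more entries.
replace (
	k8s.io/api => k8s.io/api v0.0.0-20190800000000-000000000000 // placeholder pseudo-version
	k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20190800000000-000000000000 // placeholder
	k8s.io/client-go => k8s.io/client-go v11.0.0+incompatible // placeholder
)
```

Once the list exists it works, but every repo that imports the SDK has to carry an identical copy of it, which is the fragility being described.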

C

I mean, it's also just good hygiene, reducing the scope of dependencies. That makes onboarding quite a bit easier for folks.

C

So, okay — do folks have other thoughts or opinions?

D

I mean, you know we have a lot of projects already in the same repo — for example, our wonderful testing framework written by Andrey. I think it should live its own life in a separate repo. We also have the AWS and various other public cloud SDKs in there, which are inherently there because we have scripts and whatever is needed to do our CI. And, as you said, when someone wants to utilize our SDK, they essentially end up depending on the AWS SDK and whatnot, which I—

C

Thinking about it — in the short term, let's start trying to move some of these towards multiple Go modules, because there are some complications with breaking things into separate repos at this particular moment: we're doing some things with the API refactoring that make it a little more complicated, particularly around multiple data planes, supporting the kernel forwarding plane, and moving to strings instead of enums. But if we get to multiple Go modules, then once the API settles down a little bit, it becomes relatively easy to break these things into separate repos, and I think that makes a lot of sense. Plus, separate Go modules will force us to think about the interdependencies between the pieces. For example, right now, if we were to break the SDK into its own repo,

C

the first thing we'd discover is that it's pulling the APIs from the main repo, and therefore pulling the whole thing in — we'd have exactly the same problem, which we would identify in the process of going to multiple Go modules.
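
For concreteness, the multiple-modules-in-one-repo option amounts to giving the SDK directory its own go.mod; a hypothetical layout (names are illustrative):

```
networkservicemesh/
├── go.mod        # module github.com/example/networkservicemesh
├── controlplane/ # heavy dependencies (Kubernetes, cloud SDKs) live here
└── sdk/
    ├── go.mod    # module github.com/example/networkservicemesh/sdk
    └── ...       # importers of this module pull only its dependency graph
```

A consumer importing only the sdk module downloads only the sdk's dependencies; the sdk's go.mod must then declare any dependency on the main repo's APIs explicitly, which surfaces exactly the interdependencies mentioned.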

C

Does that make sense, yeah.

D

So.

C

Maybe that's the way to go. Okay, cool. So, anything else on this before we move on?

A

There's one experiment I'd like to try, which is having a small "kubernetes" repo that we import. It's not really Kubernetes — all it is is all the `replace` directives stuck into a go.mod, and that's all it is. That way, if we try to include it, will that force the Go modules to load properly?

A

It won't solve the problem of downloading all of Kubernetes, but it might solve the problem of keeping the versions aligned across multiple repos. So, just something to try.

G

Okay, cool, okay,.

A

That's all I have. Cool.

C

Alright, moving on to in-progress stuff. We are tantalizingly close on security. So, Andrey — sorry, Ilya — I think you've got just a little bit of stuff to rebase, and then we're hopefully relatively good to go.

I

I already rebased and am waiting for the results.

C

So CI is running, and we've got some more things coming as we move along on the security stuff. Next, SRv6 — Artem, I think you have a bit of a blocker there on a bug in VPP. Is that correct?

C

Pardon — do we have Artem on the call? We don't have Artem on the call, okay. When I last spoke to Artem, he was saying there's apparently a bug in VPP on deleting SRv6 SIDs, and that's the last blocking piece on the SRv6 support. So we're working with the VPP guys to resolve it.

G

That'd be really good.

C

Don't get me wrong — we found some bugs in Ligato that got shaken out as well. The Ligato team has been wonderful, and the VPP team is being wonderful about engaging to sort these things out.

C

So we're just working through these relatively dynamic things. Okay.

H

I'm quite happy about that.

C

You'll be happier when we actually get the bugs fixed.

C

So — and then you can tell us all the things we've done wrong, Daniel, because I'm sure there will be a list.

H

Actually, I'm following it pretty closely, and we're working on this from other angles.

C

I understand that, but no one fully envisions what they need until they try to use it — and then we discover all the little things.

C

So we appreciate it. Okay — we do have a discussion that's still ongoing about moving some of the remote mechanism stuff, for VNI selection, out of the network service manager into the NSM forwarder. I think everybody agrees it's a good idea; it's just a matter of working out exactly what we want to do, when, and how, and there's some refactoring going on in the data plane that will hopefully make that simpler. So, do we have Radoslav?

C

Do you want to say a few words for Vladislav about the kernel forwarding plane? He's—

J

He's actually on PTO this week.

C

But.

J

I know a bit, because I'm using his metrics implementation. He still has a work-in-progress PR and I'm not sure if it's ready — he left it as WIP before he left, but he pushed some fixes and changes. It seems to me it's close to done.

C

You're up next on the list anyway.

C

Yes,.

J

I'm currently setting up a server cluster, and I wrote the other implementation for tracking metrics in Prometheus; now I'm setting up the server to test all of that. Regarding VPP, I think you have seen the issue that was opened in the Ligato repo.

J

They agreed with having metrics configurable — configurable periods, I mean — so from what they write, I think they are going to implement it. At least that's my impression of what they said.

C

Some of the initial confusion was that VPP is able to collect metrics at a speed almost no system can consume. What they thought we were asking was: every time you update your metrics, would you send us an RPC message? Sometimes people ask for things where you go, "I don't think you really want that" — they collect so many metrics, so fast. So we said, hey, how about just every—

C

—so often, you give us a summary. That made a lot more sense to them, because they can literally throw off an unbelievable amount of metrics — so much that there are actually innovations in VPP to make them more consumable. Rather than providing metric events, they will allow you to share memory where they update metrics, because that's the only way you can possibly keep up. But obviously that's not what we're going to want to do here; we're just going to get some gRPC messages.

C

So that's actually also very good news.

A

A shared-memory-to-gRPC method is what we need, yeah.

C

I'm not so sure about that, but okay.

C

Cool — so, refactoring to simplify. Andrey, do you want to say a few words? I know you're starting the chain refactor, the big refactor of the network service manager.

I

It's in progress, but I think I need more time. It's quite complicated — everything splits all Requests and Closes into two separate hierarchies, for local and for remote. So I'm just trying to make all of it easier.

C

The other one I want to make sure we capture here is that we have a pull request out for refactoring the agent data plane to a more chaining style as well. Let me go ahead and—

C

That's PR number 1569. So, essentially, it's doing a similar kind of refactor of the data plane, in the hope of making it much simpler to work with, and also hopefully making it easier for various people to build their own data plane, with or without the agent. I mean, you guys have done a great job with the kernel forwarding plane — that's a great step forward — but I suspect, particularly as we start looking at hardware NIC support, where the hardware NICs may have special features,

C

we're going to have a lot of people wanting to write their own NSM forwarders for a variety of reasons, and we want that to be as easy as it can be.

C

So — are there other things people are aware of that are in progress right now?

A

Well, I've got a question on the kernel forwarding plane: are we using iptables or any related machinery in that? No? Okay, cool. There was a tweet put out by Tim Hockin about the shift from — do you recall what the shift was?

C

It was from iptables to whatever was coming after iptables — I'm trying to remember which thing it was — as part of the progression where they keep thinking the next thing will solve their problems.

A

I.

C

Think.

A

Yeah, I just wanted to make sure that was on all of your radars, because there's an unstable API there at the moment that's breaking things.

I

If.

A

we're relying on any of it, it's just best to keep that in mind.

C

There were a couple of things I came across that I wanted to discuss past the in-progress things. One is, we just had someone open an issue saying, "hey, I went to try and get the latest of this thing, and you don't have it." What I realized when I went to look at it is that in our production repos we have a tag for the branch — like a tag for master.

C

We do not have a release version 0.1 tag anywhere that I could find. So did that not get taken care of when we did the 0.1 release?

C

On the release mechanics — I was on vacation that week, for 0.1.

D

There should be tags somewhere, though. I probably didn't push them, but—

C

Yeah, I went by and looked at some of them, for example, and they're not—

D

I'm not sure — when we did the release, maybe we didn't tag them. Okay, yeah.

C

So when I went to look at the tags for it — let me go ahead.

A

A question on this, just quickly: git tags or Docker tags? — Docker tags.

D

Okay,.

C

So we're missing that — that was the first thing I noticed. And the second thing I noticed is that we're missing latest tags for everything as well.

D

We agreed on dropping latest at some point, though—

C

I don't recall the full content of that conversation, but I thought we were going to have a branch tag, a release version tag, and latest updated to point at the most recent—

D

We must.

C

Right, right — for the master branch. My point is: if we've got a version 0.1 release, I thought we were going to have a latest that pointed to the most recent release version. Does that make sense?

A

I think for Docker we need a latest, because if you do a docker pull of nsmd or NSM and so on, it's going to default to latest, and that should be the latest release.

A

So I think we should change that particular policy and just make sure that we have a latest tag.

C

Because we want people to be able to get whatever is the most recent release — latest shouldn't point to the tip of master; that way lies madness. So, okay, the question is: who wants to pick up getting the 0.1 tags pushed, and the latest tags pushed to point at the 0.1 versions, for the stuff from the 0.1 release?
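
Mechanically, back-filling the tags is just re-tagging and re-pushing existing images — image names and tags below are placeholders, not the project's actual registry layout, and this assumes the 0.1 build is still available locally:

```shell
# Re-tag the image that shipped in the 0.1 release with an explicit version tag
docker tag networkservicemesh/nsmd:0.1-build networkservicemesh/nsmd:v0.1.0
# latest should track the most recent release, never the tip of master
docker tag networkservicemesh/nsmd:v0.1.0 networkservicemesh/nsmd:latest

docker push networkservicemesh/nsmd:v0.1.0
docker push networkservicemesh/nsmd:latest
```

With that in place, a bare `docker pull networkservicemesh/nsmd` (which defaults to `:latest`) resolves to the release rather than failing or pulling a development build.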

D

Give it to me, I guess.

C

I know it's not a fun job, but it is much appreciated.

G

Just.

D

I.

C

I don't understand exactly what happened there with the release mechanics, but I did want to bring attention to

C

The tagging stuff.

D

So I see that for some of the containers there is a tag, okay—

C

And for some of them there isn't.

D

Yeah, okay,.

C

So it's spotty — we should definitely get it fixed.

G

We.

A

We should also watch it on occasion, just to make sure we don't have an overly aggressive script somewhere that cleans things up.

D

Are you sure we even had that container in the 0.1 release? Because I'm not sure it existed then, I mean, yeah—

C

That may be true — maybe that rename took place later than I thought. That's also a possibility.

D

Because all the CI runs, when you download — and that's explicitly tested — they download images into a clean local Docker cache.

C

Okay, so here's the thing I'm seeing — sorry — on one that hasn't changed in a long time: it has the v0.1 tag and it has latest, yeah. Okay, so I may have misunderstood, because I thought that change was more recent.

C

This.

D

The latest tag is three months old — from when we stopped using latest in the main repo.

C

Got it, got it, yeah. I apologize — for some reason I created undue confusion.

D

No, I mean, it's always good to revisit. That's why I was a bit like — I remember pushing all these things and testing five times, right?

C

Well, this is why you ask the question. Okay, good. The other thing — which I'm actually periodically cleaning up a little bit — is that we have a few remaining CI tags running loose in some of these repos.

E

And it's.

C

Just from the switchover — things on old releases and so on — and I'm just deleting those as I come across them. There aren't many.

C

Okay, good. I'm delighted to find I was sounding a false alarm there. All right, cool.

C

So the other one I wanted to bring up — and this is purely for discussion at this moment — is that, as Andrey pointed out, with the switch of mechanism type from enum to string, we might be able to collapse the local and remote versions of the APIs into a single API. There are some things to work out there, like how to properly limit local and remote mechanisms to the proper context — for example, it makes no sense to allow a pod to ask its local network service manager for an SRv6 connection, because there's no way to represent that back to the pod; the pod can only be presented local things. But I think that's probably solvable. I take it from your comment, Nikolai,

C

that this sounds like a good idea to you in general?

D

Yeah, I mean, I think we already discussed that — unless anything changed fundamentally, which I didn't—

C

I don't remember discussing this before — I'm just having a senior moment; that happens to me sometimes. Okay, but does that sound like a good goal for people? I mean, there are some details to be worked out.

D

Yeah, yeah — I mean, yeah, it sounds good.

C

Yeah, it should hopefully simplify a lot of stuff.

D

Simplification is always good, yeah.

C

It also warms the cockles of my heart that we're likely to get simpler as we get more feature-rich. It just makes me happy.

D

All.

C

Right, cool. Anything else? I guess this is your line, Frederick.

A

Cool, yeah. Let's see — is there anything else anyone would like to discuss?

A

Okay. I'll remind people that we have until next week's NSM call to work out whether we're going to have a call during ONS week. For those of you that are going: I think next week we'll ask how many people will be around, and if we have enough people, I will leave it to Ed to organize. Does that sound like a good plan?

D

Actually, I just figured out that I have a collision. Do we want to just cancel those meetings that haven't happened for the last couple of months? They're just sitting there, and we keep repeating that they're on break — but it's September already.

C

I would probably go check with the folks who organize them and ask, you know, are you going to be bringing these back? If not, would you be willing to cancel them? That's the friendly way to do it.

G

I think.

A

That's a good idea. Part of it is — take Jeffrey as an example: Jeffrey was out for some other things he needed to take care of, and he's back now, so he may have intentions of doing some more things. So I'm not comfortable just outright canceling at this point without having a good discussion with them. But yeah, we definitely need to reach out — and it's okay, as well, to put them on hiatus.

A

But if they're going to be on hiatus for longer than some period of time, then we should probably remove them from the calendar. That doesn't mean they're gone forever — we can always start them up again as needed. Is Jeffrey on the call? I'm not seeing him. So let's reach out to Jeffrey, and let's reach out to Rocky — she's not on the call either. Yeah, let's reach out to them and ask.

D

Okay,.

A

Cool — anything else anyone would like to discuss?

A

With that, I'll give you all some time back. Thank you all for attending, and we will see you all again next week. Take care.

J

You.
From YouTube: CNCF Network Service Mesh Meeting - 2019-09-10
