A

Welcome to another edition of the Kubernetes Working Group LTS meeting. We follow the CNCF code of conduct. This meeting is recorded, so it is going to be posted on the internet later. Let's dive into today's agenda; I have the link to the meeting agenda in the chat. Please add yourself as an attendee.

A

And the last item is survey data. Last meeting, which we did on May 14th, I presented a set of slides on it.

A

Can you...

B

...guys hear me?

A

Can anybody confirm?

C

Yes, we can. Okay.

A

Thanks, Jordan. So, continuing: I presented those survey slides and whatever progress I had made there. I have added a couple more graphs to the slides, and Jordan asked if everybody could get the data, so I massaged the data, removing email IDs, respondent IDs, and IP addresses. The first link that you see over here, in the Google Drive, is the data itself.

A

Everybody in the Kubernetes community should be able to access it. If you can't, let me know. If someone on the call can access the link and confirm that they can see the data, that would be great.

A

The second link is the presentation I gave, with a couple of slides added to it. I can go through them, but I think they speak for themselves.

A

I haven't yet reached a summary with respect to the goals we had for the data, so I'm going to present that next time, when I have summarizable points from the data. That's it from my side of the agenda. The second item is the follow-up on API compatibility and skew tests. Jordan?

C

Yeah, I just wanted to follow up on one point. We talked about different things we wanted to have in place in order to start solidifying our support across multiple versions. One of those things, if you search way down in our meeting minutes, was tests that would catch if we broke compatibility around serialized data formats. At the end of the 1.15 release this got put in place for all the API types under k8s.io/api, so I've linked to the directory that contains those and a README that describes how it works.

C

But the short version is that we can now detect if we have made any changes in a release that would break round-tripping of data from previous releases. As we cut additional releases, we will grow this corpus of test data for all the formats we support: JSON, YAML, and proto. So this is part of our effort to avoid accidentally breaking release compatibility, and this benefits us whether we add long-term support or not; the goal is to make it so that upgrading from one version to another never breaks these guarantees.
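
(A minimal sketch of the round-trip check described above, not the actual test in kubernetes/kubernetes: decode a fixture serialized by a previous release, re-encode it, and fail on any difference. The fixture path and the use of plain encoding/json are illustrative assumptions; the real corpus covers JSON, YAML, and proto per API type.)

```go
package compat

import (
	"bytes"
	"encoding/json"
	"os"
	"testing"
)

// TestRoundTrip sketches a serialized-data compatibility check: decode a
// fixture written by a previous release, re-encode it, and fail if the
// output no longer matches. The real Kubernetes tests do this per API
// type, for JSON, YAML, and proto fixtures.
func TestRoundTrip(t *testing.T) {
	// Hypothetical fixture path; the real corpus is organized per release
	// and per group/version/kind.
	fixture, err := os.ReadFile("testdata/v1.14.0/apps.v1.Deployment.json")
	if err != nil {
		t.Fatalf("reading fixture: %v", err)
	}

	// Decode generically; the real tests decode into the concrete typed
	// object for each group/version/kind.
	var obj map[string]interface{}
	if err := json.Unmarshal(fixture, &obj); err != nil {
		t.Fatalf("decoding fixture: %v", err)
	}

	// Re-encode and compare. encoding/json emits map keys in sorted order,
	// so this assumes the fixture was generated the same way; any field or
	// default change that breaks round-tripping shows up as a diff here.
	reencoded, err := json.MarshalIndent(obj, "", "  ")
	if err != nil {
		t.Fatalf("re-encoding: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(fixture), bytes.TrimSpace(reencoded)) {
		t.Errorf("round-trip changed the serialized output; compatibility may be broken")
	}
}
```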

A

Do we also cover patch releases in this? I see we've got it only for 1.14.0, so I...

C

We only added it for the major releases.

C

One of the things this does is that it's always testing against the fixtures for the current branch. So if a change that would affect compatibility gets picked into the 1.14 release branch, it would break this unit test against the 1.14.0 fixtures, or against the HEAD directory in that branch.

C

So if we find ourselves making changes in patch releases that affect API serialization, which we really, really should not, then we can consider adding folders for patch releases. This test would detect if we had done that, and we can decide how we want to deal with it if we encounter it, but I don't expect we will.

D

Is this going to be on a release-blocking testgrid board?

C

This is part of the unit tests, so...

D

It's merge-blocking. Okay.

C

It blocks merge, yeah. It's very quick to run; it takes a couple of seconds.

D

Oh, that makes sense.

D

And have you seen it trigger? I guess, given that it runs there, if somebody encounters a problem and corrects it, we wouldn't necessarily have much visibility on that. But do you know if it's been triggered for anybody yet?

C

So I actually was running it on the API-changing PRs towards the end of the release. I was rebasing it on all of the API-changing PRs and verifying that none of them were breaking compatibility, so it hasn't saved us yet. But another thing that it will check, especially around the proto serialization, is when we change our proto libraries and bump to newer versions of those: all of the generated...

C

...serialization code gets changed in ways that cannot be code-reviewed, and so this lets us keep those dependencies up to date and know that we're not breaking backwards compatibility. So this also ties into the question of how we can get out of dependency gridlock: several of our dependencies, like etcd and some of the cloud providers, really want newer versions of the protobuf libraries, which is fine as long as they're backwards compatible, and now we know that they are backwards compatible.

C

So this ties into some of the code organization stuff that Dims was working on. In the beginning of 1.16 we will be upgrading a whole swath of our dependencies to tagged, more up-to-date releases, and so this was a prerequisite for doing that in a non-terrifying way, rather than squinting at the generated churn and hoping we didn't break something.

D

Very cool.

D

Thank you.

A

Any questions or comments for Jordan?

A

Going once, twice... okay, let's move on to the next topic. Jorge, you're up.

E

So, hey everyone, the next topic on the agenda is release post-mortems. It's actually an idea that was brought up by Tim in the SIG Architecture group. One of the things we have noticed is that most of the .0 releases of Kubernetes are not actually production-ready, and there is a lot of work that has to go in from the patch release teams.

E

A lot of cherry-picks have to be merged before anyone is really able to run the latest release of Kubernetes.

E

So the idea was to actually try to do some release post-mortems and gather some data on how many cherry-picks, and what type of cherry-picks, are merged into the latest release, in order to get a better perspective on what things we could do better earlier, during the normal release cycle. I wanted to bring up the idea in this group because it seemed like the right place.

E

I wanted to see if this is something that anyone here would be interested in doing, and whether this is actually the right place to bring it up. We're just fishing for some ideas on how we could possibly go about doing this.

E

Thank you.

D

I'm definitely interested. I have data from the last five, nearly six, months of being involved in the patch management side, and yeah, I'm interested to see how we can discern trends there and turn them into actions that break the cycle.

D

Although I will say that I'm not convinced the data backs the initial assertion, that being that the .0 releases are just not usable, for some definition of that. I think we need to dig into that part, but that's where the post-mortem will be valuable.

E

Yeah, that's a good point. The data you have, is it organized somewhere?

D

Yes and no. The patch release managers use a spreadsheet to track the state of incoming potential cherry-picks, and sometimes that has CVE data in it. So I can definitely snapshot the historical part, but that's also readily available in git, because everything that goes into a given release is there, right? And when I say the data, I'm talking about: okay, we put out .0; in .1, how many of those patches imply that something was critically broken in .0?

D

From my perspective, what I'm seeing is problems with various little point things, though what I'll call little may for you have been the critical feature that you were looking forward to in the release. There have been problems, yeah, especially as things mature from alpha to beta to stable. There are problems, but: are you unable to form a cluster? Are you unable to stand up a cluster at the .0 release? Is it just utterly unstable? Does it not scale? Does it not perform? I've not seen that to be the case.

E

Okay, so yeah, that's always a good thing to hear. So, a good action item: the spreadsheet the patch managers use, could it be made public, or is there some data in it that rather should not be?

D

Let's just say, hypothetically, today I know that there's a CVE coming in that would be in the spreadsheet, but that's not public data. So that's where I can copy stuff out to show 1.13 and 1.14, for example, where the first releases went. But the head of that is something that's potentially sensitive at any moment.

E

Okay, so I guess a good way of moving forward is to gather the data from git, and...

D

I'll go ahead and copy those over and share them, just the 1.13 and 1.14, just to show.
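
(For the "gather it from git" approach, a hedged sketch of counting merges between consecutive patch tags on a release branch; the tag names follow real Kubernetes conventions, but treating every merge commit on the branch as a cherry-pick PR is a simplifying assumption.)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// mergesBetween lists the merge commits between two tags; on Kubernetes
// release branches these roughly correspond to cherry-pick PRs (a
// simplifying assumption for this sketch).
func mergesBetween(from, to string) ([]string, error) {
	out, err := exec.Command("git", "log", "--merges", "--oneline",
		from+".."+to).Output()
	if err != nil {
		return nil, err
	}
	trimmed := strings.TrimSpace(string(out))
	if trimmed == "" {
		return nil, nil
	}
	return strings.Split(trimmed, "\n"), nil
}

func main() {
	// Run inside a kubernetes/kubernetes checkout with tags fetched.
	pairs := [][2]string{{"v1.14.0", "v1.14.1"}, {"v1.14.1", "v1.14.2"}}
	for _, p := range pairs {
		merges, err := mergesBetween(p[0], p[1])
		if err != nil {
			fmt.Println("git log failed:", err)
			return
		}
		fmt.Printf("%s -> %s: %d merges\n", p[0], p[1], len(merges))
	}
}
```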

E

Okay, yeah, that would be awesome. I just really want to get a broad overview, see where we can dig in, and go from there. Other than that, if anybody has any other opinions or comments, I want to hear them.

F

Could I...

A

Go ahead, please.

F

It occurred to me that something that could explain both of your statements would be that new releases have brand-new alpha features, and those are less stable, so it appears that the new release is unstable, when actually it's the new features appearing in that release. Maybe if you took the data and broke it up to ask, was the cherry-pick against an existing stable feature, or was the cherry-pick against a new alpha feature, that might answer the question.

A

I think what would help: do we know, whenever there is a .1 or .2 release, whether the patches that come in include tests that prevent regression and also test the patch itself? Is there a rule, or is there a behavior we have seen in the past?

D

There's definitely a guideline there. Everything coming in always should have that, but practically speaking, no, that doesn't happen. I think some of the SIGs are better than others, but again, that's all in the commit record. The cherry-pick is usually a fairly isolated thing. If you go up to the parent PR, and the associated issue if there is one, and look at the conversations there...

D

Quite often those will have discussion of "how did we get here?" and "okay, we've got this fix, but can you also write up a test that will prevent it from happening again?" The test existing usually isn't a merge-blocking criterion, though sometimes it is. So you'd have to go case by case:

D

look at all the commits, the parent PR, and the associated issue if they're there. There's quite a bit of fan-out to see which portion came with tests, either ahead of the PR merging into master or as a follow-on issue saying, hey, let's add some testing in this area.

A

Yeah, I just wish there were a mandatory label that said it requires tests, and then the reviewers could grant an exception, sort of reversing the process of exception versus rule. Right now, having a test has become the exception rather than the rule.

A

I think as part of the post-mortem that would help.

E

Yeah, makes sense. We will try to also look into that area.

A

Jorge, you have the next item. Does anybody else have any comments on the post-mortem?

D

Who's owning that? Like, in what form: are you asking this working group to step up to it, or SIG Release, or SIG Architecture? Or is it just an FYI that SIG Architecture is going to be running with this?

E

So the plan from the SIG Architecture side of things is to unify this as much as possible. I wanted to bring the idea to this group to see if anybody else is interested, and also to check whether there is existing work that could be related or useful to this endeavor. But I'll do my best to keep this moving forward until this group, SIG Architecture, and anyone that's interested gets a good enough idea of...

E

...you know, what things break releases, what things are stable, whether this actually helped; anything that might just help anyone, really.

D

Okay, I'll bounce you a link when I copy that spreadsheet stuff out, and I can describe how we are managing the flow of patches and what I see for trends, maybe just as a data point. And I'm happy to be involved on an ongoing basis.

E

Okay, thanks so much.

E

With that, any more questions? Or can I move on to the next thing? Going once, twice... alright. The next topic is also related to these things. I'm pretty sure everyone is aware of how the release team works, but the CI Signal team only really takes care of the sig-release-master-blocking and master-informing dashboards, not the release branch dashboards for 1.14, 1.13, 1.12.

E

All those tests keep going, and a lot of tests degrade, and at some point they're just going to go red because nobody is looking at them anymore; no one's really maintaining them. So I just wanted to bounce this idea: have more people, or...

E

...a group of people, essentially doing CI Signal for past releases as well, also with the goal of trying to help the patch managers, so they can actually get a signal, an end-to-end signal, of whether a cherry-pick doesn't break anything, that type of thing. What do you all think?

G

This is related to what Jordan brought up with the kubectl job and master-blocking, correct?

E

Yeah, correct me if I'm wrong, but the kubectl one Jordan mentioned was actually about moving a job or a test, I don't remember which, from a SIG CLI dashboard to the sig-release dashboard. So for that one, I think it actually just falls under the release team: how do you want to graduate a test from one dashboard to the other?

E

But one of the things that I really wanted to gauge, if there's interest, is actually maintaining the older release dashboards, for example 1.14, 1.13, and 1.12 right now. So, in a way, expand the CI Signal team beyond the release team to take care of all the releases that we are still maintaining.

A

I think we'd need a perspective from the release team on how the release team is structured.

D

Patch release: so historically, patch release has been a person who, for nine months, managed the incoming queue of cherry-picks, watched that testgrid board, triaged failures, and routed them for fixing as needed. So for the three supported branches it was always these three people, and they didn't necessarily communicate a whole lot.

D

So we're trying to shift it more into a team effort, and that's one of the things that's been useful about doing the spreadsheet-based workflow. If it's just one person, they have their workflow and it's all kind of anecdotal, but a Google Sheet we can share, and we're starting to build up a process for how we triage and manage the things that are coming in. And one of the immediate observations across that was that the majority of what comes by...

D

...is a cherry-pick against most or all branches. So having three people doing the same triage and decision making, "should I merge this? is it healthy?", was a waste of effort. We should be communicating across these branches, and we get a little better coverage if we do it as a team as well.

D

So, for the patch releases last week, we'd been communicating all together as a team, watching the things merge and CI on those test boards going green or staying green with the new commits. I started some stuff building, I went to bed, the teammate in Poland woke up and did some stuff.

D

They handed back off to me, and I handed off to a Googler. So instead of one person having to spend the better part of their day making a release, we spread the work out across a set of people. But to do that, we have to have shared knowledge of the state: testgrid is part of that, the cherry-pick queue, and the status of the master or release branch commits. And yeah, that makes this hard. So actually, tomorrow...

D

...I'm going to do a little screen share with folks of what this workflow for triage and merge management looks like in GitHub, because it's really difficult. Say somebody goes and flips something, like the milestone, or closes an issue in GitHub.

D

Now it no longer shows up in a query. So unless you knew that issue number 712345 was important and noticed that it's no longer in the query results, it just disappeared. There isn't a stateful queue management process, and so they were interested in understanding that and maybe helping build something. Otherwise, we might build our own Airtable to manage it better, but it's tricky.
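
(A sketch of the "stateful queue" idea Tim describes: snapshot the results of a GitHub search and diff against the previous snapshot, so an item that silently drops out of the query still gets flagged. The search endpoint is GitHub's real API; the query string and the on-disk snapshot are illustrative assumptions, and an unauthenticated client is rate-limited.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// searchResult holds the subset of GitHub's search response we need.
type searchResult struct {
	Items []struct {
		Number int    `json:"number"`
		Title  string `json:"title"`
	} `json:"items"`
}

// fetch returns the currently matching PR numbers and titles for a query.
func fetch(query string) (map[int]string, error) {
	resp, err := http.Get("https://api.github.com/search/issues?q=" +
		url.QueryEscape(query))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var res searchResult
	if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
		return nil, err
	}
	out := map[int]string{}
	for _, it := range res.Items {
		out[it.Number] = it.Title
	}
	return out, nil
}

func main() {
	// Illustrative query: open cherry-pick PRs against a release branch.
	current, err := fetch("repo:kubernetes/kubernetes is:pr is:open base:release-1.14")
	if err != nil {
		fmt.Println(err)
		return
	}
	// In a real workflow, `previous` is loaded from the last snapshot on
	// disk; anything present before but missing now "disappeared" from
	// the query and still needs triage.
	previous := map[int]string{ /* loaded from the previous run */ }
	for num, title := range previous {
		if _, ok := current[num]; !ok {
			fmt.Printf("#%d dropped out of the query: %s\n", num, title)
		}
	}
	fmt.Printf("%d items currently match\n", len(current))
}
```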

D

And definitely, one of the things driving this agenda of spreading this out across a team is that I don't want all of that state and knowledge tied up in one person. It's a risk to the project. And then you also get the hive mind of a team: one person could be like, oh, this is totally cool, but then Jordan will be like, no...

D

...you guys are totally missing this. So we benefit; I mean, everybody knows engineering works best in teams, right? Especially when there's a whole bunch of complicated state changing over time. So we're trying to improve that, and yeah, that's kind of where we are on the patch release side.

B

So.

A

You already do monitor these dashboards when we make the patch releases? There's no specific CI Signal role there, but the release managers monitor them for the...

D

...older releases, yeah. I basically always have a window open with the boards for 1.13 and 1.14, and soon to be 1.15.

E

Okay. Yeah, that's actually super helpful; I didn't know that's the way you work. So, expanding the size of that group: is it something that the patch managers are going to take on, or is it something where you think a new contributor could help and work?

D

I definitely want more people helping on that; again, going back to it, more eyes make the problems more shallow. But right now I'm only a couple of months into having exercised it myself and come to a sense of what we need. Part of that is declaring or establishing the process of doing this monitoring, triage, and patch merging, and having it documented, so somebody who's new can come in. Because before, it was...

D

...basically a series of Googlers who somehow anecdotally knew how to make this stuff happen. So we've done the discovery. Now we need to refine a collaborative process, because the way one person does it in isolation is different from how a team is going to do it. So we need to establish a process, get it documented, and that'll enable us to bring in new people.

D

But that's the goal: to make this more sustainable over time, so that it's not one person's burden for nine months, and you get the benefit of team collaboration on triage.

E

Okay, so for that issue, I can just close it and make a mention of everything, but at some point you all are going to make this a thing, right? So you guys have the ownership, I think.

D

That's a fair assessment, yeah. And I'm just trying to find the issue where we're tracking the to-dos on building that team out for patch management.

E

Okay, so...

D

I'll put a link to it in yours, and if you're closing yours, it's just to say: hey, these people and what they're doing will be the path forward.

A

Another point I wanted to make is that the 1.x.0 releases are not necessarily non-production-ready because nobody is watching them. I think we don't have enough tests that actually give us a signal on whether x.y.0 is good enough or not. It's that when it goes out into the wild is when we figure out, hey,

A

...there is a new issue. That's why I think the .1 and .2 releases deserve new tests: whenever a new issue is found outside the test suite, you should probably add those tests. So I think there should be a policy that says that whenever we add any patch, there should be a test that covers the patch and also the issue the patch is fixing.

C

I think that applies generally: we should not have the same problem twice.

C

Like Tim said, different areas are more insistent on this than others. Even for things going into master, bug fixes should be accompanied by tests, but definitely for things that were severe enough to warrant picking back to a release branch.

A

I think if there's a default label that automation adds when it's a bug fix, then the reviewer or someone else can go and say: okay, I see this, and this does not require a test. Removing the label would be much easier than adding it. Anyway, plus one to that from me.

D

This is something, Jorge, that I would really suggest you watch. Your role as CI Signal lead on the release team is about to end, but watch really closely over the next month and a half: what gets merged back into that branch? What does it tell you about 1.15? What did we miss, if you're declaring, effectively this week, that we're good to go to release?

D

When you see the patches that come after, do any of them make you go, oh, we should have caught that, or, I'm embarrassed that I gave a thumbs up, now that I see this patch? As release lead I...

D

...did that, and I wasn't overly worried by what I saw. I mean, I'm not happy with the state, the fullness and robustness of it, but I don't feel like we're having horrible escapes. You'll be able to judge that very clearly, because you've seen the flow over the last quarter, and then over the coming weeks you can compare and ask yourself: was I delusional during that?

E

Okay, makes sense. Yeah, I'll do that, and thank you so much for all the conversation around this.

A

Thanks for bringing that to our attention.

A

Yep, I think that's it; the next topic is mine. I attended the conformance meeting. I was not at KubeCon, and SIG Architecture has an extended conformance working group, or conformance work stream, I don't know what it's called. There's a separate meeting for conformance reviews, and during that, Hippie Hacker presented a follow-up discussion of what he presented at KubeCon EU.

A

I have pasted the minutes of the meeting, and in the notes there were interesting discussions around how APISnoop was used to track the API endpoints that are exposed versus untested. You can easily see from the link how many endpoints are not being tested, and how many endpoints which are stable are not being tested. This is just an example; they have covered different categories, and this can be queried.
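
(APISnoop derives this coverage data from apiserver audit logs. As a rough sketch of the underlying idea, not APISnoop's actual implementation, one can tally which verb and resource combinations appear in an audit log from a test run; the field names match the real audit event schema, but the input path is illustrative.)

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// auditEvent holds the fields we need from a Kubernetes audit log line
// (the real audit.k8s.io schema has many more fields).
type auditEvent struct {
	Verb      string `json:"verb"`
	ObjectRef *struct {
		Resource   string `json:"resource"`
		APIGroup   string `json:"apiGroup"`
		APIVersion string `json:"apiVersion"`
	} `json:"objectRef"`
}

func main() {
	// Illustrative path to a JSON-lines audit log captured during a test run.
	f, err := os.Open("audit.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	hits := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // audit lines can be long
	for sc.Scan() {
		var ev auditEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.ObjectRef == nil {
			continue // skip malformed lines and non-resource requests
		}
		key := fmt.Sprintf("%s %s/%s %s", ev.Verb,
			ev.ObjectRef.APIGroup, ev.ObjectRef.APIVersion, ev.ObjectRef.Resource)
		hits[key]++
	}

	// Anything the apiserver serves that never shows up in `hits` is an
	// untested endpoint for this run.
	for k, n := range hits {
		fmt.Printf("%6d  %s\n", n, k)
	}
}
```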

A

They have also taken action items to file umbrella issues and sub-issues for the different API endpoints to be tested, and also, with respect to conformance, to see which of the stable APIs that are supposed to be promoted to conformance level need to be promoted. So that's pretty interesting...

A

...data on how many stable APIs are actually getting tested today versus how we call them stable, because I found two issues in the last release where there was a stable API that was not tested; one of them was not used since 1.12 and was never working.

A

So it's good data, and I'm going to dwell more on this to see what the action items are and how we can connect it with: okay, how do we derive the stability of the APIs themselves from this data? When we say something is stable, what should be the process around it? Second: what is missing today across the system?

A

Apart from what is mentioned about the stable APIs, I have seen that in this release we had 14 alphas versus three betas, sorry, versus three stable. The number of progressions from alpha to stable is low. There should be a way to calculate the life of an API: how long is it alpha? That would be a very important metric to understand.

A

How long does it take, on average, for an alpha feature to go stable? And what do we do with alpha features that are rotting, where nobody is taking ownership and they live there for months and years? Those are some of the important things, from a data perspective, that I want to collect in the future.
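
(For the "life of an API" metric floated here, a small sketch of the computation once per-feature graduation data has been collected; the feature names and release numbers are made up for illustration, and in practice they would be mined from release notes or KEP metadata.)

```go
package main

import "fmt"

// graduation records the minor release in which a feature entered alpha
// and the one in which it reached stable (0 = not yet stable).
type graduation struct {
	name        string
	alphaMinor  int
	stableMinor int
}

func main() {
	// Hypothetical data for illustration only.
	features := []graduation{
		{"featureA", 7, 14},
		{"featureB", 10, 15},
		{"featureC", 5, 0}, // still pre-stable: a candidate for "rotting"
	}

	var total, graduated int
	for _, f := range features {
		if f.stableMinor == 0 {
			fmt.Printf("%s: alpha since 1.%d, not yet stable\n", f.name, f.alphaMinor)
			continue
		}
		releases := f.stableMinor - f.alphaMinor
		total += releases
		graduated++
		fmt.Printf("%s: %d releases from alpha to stable\n", f.name, releases)
	}
	if graduated > 0 {
		avg := float64(total) / float64(graduated)
		// Roughly quarterly releases, so ~3 months per release.
		fmt.Printf("average: %.1f releases (~%.0f months)\n", avg, avg*3)
	}
}
```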

C

A couple of things to call out around trying to tie that to some of the API tests, like the auto-collected data: a lot of the alpha features are modifications to existing types, so you may be calling a v1 type that has a new alpha feature in progress, and the auto-collected data isn't going to understand that you're calling v1 pods with alpha fields.

C

Another point is that a lot of the tests we have are unit- and integration-level tests, and the auto-collected conformance data only understands e2e tests.

C

That was kind of an issue raised from the beginning about how the conformance gate is set up, and so we actually have lots and lots of test coverage that is not captured by the scraped metrics or logs around which APIs are exercised in conformance. So just keep that in mind as you're trying to interpret some of the data that was gathered.

A

I think the link that I have shared is not covering just conformance but the whole of e2e testing, though not the integration tests or the unit tests. So it covers more than conformance, but I agree that it doesn't cover the entire breadth.

A

One thing I would say is that if you are exposing an endpoint, you probably should have an e2e test for it, for sure. And the link that I just showed, the one shared in the notes, is what I'm opening in parallel, just a sec.

A

It's not necessarily restricted to conformance; it's actually talking about e2e tests. So, right...

C

There's conflicting guidance there, right? On the one hand, this is only capturing e2e tests, but on the other hand, there's constant feedback that our e2e tests run too long.

C

Just from a developer perspective, it's a much better experience to have an integration test that I can run on a particular package and exercise the exhaustive set of APIs, like every operation on every API, in ten seconds, rather than spinning up a cluster to do a test loop. So if the statement is that for any API, any endpoint, any verb, any method, any field, there must be an e2e test exercising it, that basically explodes the number of e2e tests we have, and that goes against...

C

...the guidance we're getting from SIG Testing, and against the practical experience of: I need to fix something, and I want a test suite I can run against an API in ten seconds, not ten minutes. So I agree there needs to be some set of e2e tests, kind of high-level, like: does this object get persisted?

C

Does the happy path work? And there are some things that can only be tested in e2e tests, right, like upgrades, and end-to-end flows, API to controller to kubelet back to controller back to API; those sorts of things need e2e tests. But I disagree with the assertion that there must be an e2e test for every combination of endpoint, verb, version, and field.

C

So before people take that and run with it, and we see an explosion of e2e tests, it's worth figuring out what our desired direction there is.

A

I think there's going to be an umbrella issue that covers all these tests, so we should probably voice opinions there. I mean, if it passes through, I'm sure it will be done for each verb and each endpoint, ultimately, if nobody says no. I see that the number of endpoints not covered, by the data being reported, is not great. And to your point, if we are not actually capturing all the data, then there should be something that gives us information on our real coverage.

A

If this is incomplete data, then there should be something that tells us exactly what the complete data is. So that is something missing, which I will probably mention in a comment on that umbrella issue: okay, how do we account for the integration tests, the unit tests, and so on?

A

Yep, I think that's all that's on the agenda right now. Does anybody have any other issue to bring up, or any comments on this one?

A

Any updates from KubeCon, from anyone who attended, that would be relevant to this working group?

D

Yes. So, since we had a meeting two weeks ago... I forgot about that. I did a basic deep dive describing the general status of where we are, to a fairly small audience. It's had a few additional views on YouTube, but this isn't the sort of thing that draws users out in droves; I feel like the conference's growth has been more user-focused.

D

The discussion we had in Seattle at that KubeCon, where it was more core community members with strong opinions on how we might do this, was certainly a much more robust discussion. And I think, as we drift towards maybe having proposals in the fall for potential changes...

D

...that will again drive more folks to show up with pitchforks, or maybe celebratory cheers, I don't know; we'll see how that plays out. But I think the cadence of KubeCons has been exhausting, and at KubeCon EU there were fewer of the type of folks who, I think, would have been active in the discussion, versus early adopters, at this point. But so, yeah, I gave an update. There were a few questions, some expected things, some interesting things.

D

What I would call out from the discussion and Q&A, and I'm blanking on where they were from, but it's in the recording: somebody got up, asked a question, and made a few assertions, and I asked them, well, where do you work, you who are describing these experiences supporting your Kubernetes? They were a little shy to say which vendor they were from initially, but then in the audience Red Hat, SUSE, and Canonical all raised our hands and said, hey, we're...

D

...here too, we could all figure this out together, it's safe, it's okay. And the person mentioned one of the Chinese vendors. We had a bit of discussion across a number of the vendors. The second thing was that there were two or three people from Canonical there who are interested in being more involved in this type of stuff over time as well. I haven't seen them show up to our meetings yet, but I...

D

...let them know where the forums are in which we're discussing this, and that we welcome them if they're interested in participating.

A

Thanks, Tim. Are there minutes or notes from that?

D

About that...

A

Well...

D

I'll drop a link to the recording. I mean, it's 30 or 35 minutes or something, so folks who are curious can watch it, but there were no strong actions; we didn't decide how we're going to solve all the world's problems there. So you could watch it as an FYI, if folks want to.

A

Thank you.

A

Any questions for Tim? Okay, so I had a closing perspective. We have reached about six months of this working group, of which I think we had four months of official work after we got approved, and it is good to go back and say: okay, let's summarize what this working group's goals were and what the deliverables should be. We already have the survey data back, and...

A

...how should we structure the action items of this working group? I think in the coming meetings, once we look through the survey data and all the different work we have done around understanding how patch release management works, how the tests are working, the skew tests, API compatibility, and so on, we should narrow down on action items. I have a working doc on that, which is trying to create a...

A

...set of metrics or indicators that would give us some understanding of how to measure a good proposal or KEP; I know of two different proposals, and how to measure a proposal for changing the release cadence. I don't yet have it ready enough to share, but it's good to point out that these discussions will probably channel us into a list of action items that would...

A

...be a good deliverable from this working group. In other working groups in the past it has always been discussions, but I think some action items that can be distributed across SIGs to get some work done would be great. Any comments from anyone?

A

Okay, I'm going to be optimistic and say everybody likes it, everybody agrees with it, and everybody's excited about it. So yeah, I think I'll have some more data about this next meeting, and as the meetings go on, ultimately, after two or three of them, we'll have something concrete.

A

We don't have anything else, so the remaining 13 minutes can be given back. Calling out for any more topics: one, two, three. Okay, thank you, everyone.

A

Bye-bye, have a good day.
From YouTube: Kubernetes WG LTS 20190611

Description

The Long Term Support Working Group (WG LTS) is organized with the goal of providing a cross-SIG location for focused collection of stakeholder feedback regarding the support stance of the Kubernetes project, in order to inform proposals for support improvements.

"WG LTS" is simply shorter than "WG To LTS Or Not To LTS" or "WG What Are We Releasing And Why And How Is It Best Integrated, Validate, And Supported", but should NOT be read in that shortness to imply establishing a traditional LTS scheme (multi-year support; upgrade from LTS N to N+1, skipping intermediate versions) is the foregone conclusion of the WG.

Charter: https://git.k8s.io/community/wg-lts/charter.md
Meeting Minutes/Agenda: https://bit.ly/2HI8ppj
Maling List: https://groups.google.com/forum/#!aboutgroup/kubernetes-wg-lts