
A

Hello, greetings, folks. I will be leading this one today; my fellow co-chairs are not on.

A

If you can, drop any agenda items into the meeting notes, which I've posted in Zoom; they're also in the Slack channel.

B

Please.

A

Welcome, folks. Did anyone else join? Please add your name and any agenda items to the meeting notes.

A

Which I've posted again.

A

I see that you added one. Jim, welcome, and thank you.

A

How much time do you need for that, normally?

C

Yeah, so depending on the other items, about 10 minutes should be good for a quick demo. If you want to go deeper, we can certainly showcase more use cases and other items.

A

Are you familiar with the Telecom User Group?

C

I had a quick discussion with Victor when we were going over different use cases, and I have looked at some of the documents and other information. Not deeply familiar, but at a high level, yes.

A

All right. Typically for longer demos we suggest putting them within that user group, but a 10-minute demo sounds good, and then for sure discussion; that's the main focus here. Is this your first time in this group?

C

It is.

A

All right. And have you gone to the CNF working group repo? (No, no.)

B

Not yet; I just invited Taylor.

A

Because...?

B

I guess Kyverno is a great tool, and it would at least be nice to be aware of the benefits and all the efforts they're putting in there. It's the first time he's come here, so I guess he can present and share a few things they have, and eventually maybe we can do another, deeper demo in the future. For now, what Jim has - like 10 minutes showing the minimal or basic ideas - I guess is good enough.

B

What do you think?

A

Yeah, sounds good. Maybe, thinking about the focus - just a quick overview for you, Jim: this is the CNF working group; CNF stands for cloud native network function, and the focus is on identifying best practices, and the use cases around those, for networking applications, so that we can see how they can best run in a Kubernetes environment.

A

So we're specifically looking at things that might already be obvious in other areas, like enterprise applications: how do they get applied here?

A

So the run-as-non-root pod security part of your demo is relevant to a whole area on the security side that we've been looking at, specifically around the principle of least privilege, and one piece of that is running your processes as non-root. That's actually one of the things we're going to be talking about today, so that's a goal. So if there are use cases you know of that would usefully illustrate...

A

...'here's why you should run as non-root', or 'here's why you should do these other security things', those would be good contributions to the working group. Also areas that are problematic: say there's an area they're thinking about over in the Network Plumbing Working Group, or SIG Testing, or wherever, and you point out that they're trying to work on it but having some problems.

A

Those kinds of gaps are also areas for us. And then the end goal is to be able to share: here are best practices that we're trying to get everyone to adopt for the platform and applications. Another related initiative - I don't know enough about the project you're going to demo to say for sure - is the CNF Test Suite. It's focused on creating tests that check practices: how things are deployed, how they're running, how they work at runtime.

A

So this covers deployment and onboarding of new applications, as well as day-two, ongoing lifecycle management items; it goes across the board. That initiative actually uses various tools as part of the testing, like Falco, if you're familiar with that, and OPA, and other things. So this might be something that could even be used there, and we could talk about that. ('Perfect.')

D

Okay.

A

Does anyone have anything else to add to the agenda?

A

All right, I'm having some problems; this just came up, and it's doing this. So, Bill, are you available if my screen dies - or Lucina, somebody that could help if I can't load something? Right now I'm okay. So, this best practice: we've been working on a whole set of use cases and we have a bunch of write-ups around this. I'm not going to go back through it all, but if folks want to look, you can see it in the notes.

A

There are links to documents around least privilege and other things. We've had this best practice for a while, and then we've been going through and...

A

...resolving issues. We're now down to a point where everything has been resolved. Some of this is pulled in just because it's from the main branch - typos and such - so we can ignore that. Here's the main best practice.

A

This is for the non-root one.

A

With the recommendation that all processes should, by default, be run as a non-root user.
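
For reference, a minimal sketch of what that recommendation looks like in a pod spec; the pod name, UID, and image here are illustrative, not from the meeting:

```sh
# Illustrative only: a pod that satisfies the run-as-non-root best practice.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo          # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true        # kubelet refuses containers that resolve to UID 0
    runAsUser: 1000           # any non-zero UID works; 1000 is arbitrary
  containers:
  - name: app
    image: busybox:1.34       # hypothetical image
    command: ["sleep", "3600"]
EOF
```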

A

It's part of the defense-in-depth strategy against compromises: if something gets through, then what do we do? That's the idea. So here's the set of user stories; this is one of the last remaining things.

A

Writing these up: there's a whole set of user stories that can be used around, at a minimum, the least-privilege items, and probably other security things.

A

So if something does break through - which is likely at some point - how can we stop it? The non-root user is part of that. So: compromised updates. Maybe there's a central registry; maybe it's coming directly from a vendor; maybe there's some other centralized registry where everybody's pushing updates.

A

Maybe the registry itself was compromised and we actually pull down compromised code or images. Or, if someone has done more of a lift-and-shift and the application isn't deploying new images for updates but instead updating within itself - which wouldn't be a good practice in and of itself - then if it's doing that but isn't using root, it could limit some of those things. And there are several other user stories.

A

I'm not going to try to pull these up - or maybe I'll see if it loads; no, I don't think it's going to - so y'all can click through if you want to check them out. But we have a bunch of user stories written out and a bunch of references going to different places talking about why this is a good idea, including the CNCF TAG Security white paper and papers from various other folks.

A

And I think that's about it. We talk about some ways to check for these things, and it happens that the CNF Test Suite can also check for this one. So that's it.

B

Just one thing about that one: I noticed that you haven't rebased the latest changes, so at least for me it was a little bit hard to distinguish the previous version from what this PR is adding.

B

I don't know - right now it's reflecting 70 changes, and most of them are coming from other commits. I don't know if you could rebase it, or if we just need to focus on the document that you've...

A

...created. I'm going to try to pull it back up - it's having a hard time - but if someone else wants to pull it up and share, you can do that and I can walk through it. We were actually at a point where everything was resolved and ready to go, except for the user stories.

A

Everything else that's been updated has been typos or spelling, or - somehow, I think either in a rebase or a merge - we've pulled stuff in from main. I don't know what's happening with my DNS, but we've pulled stuff in from main...

A

That is...

A

...causing all of those commits to show. But those are on other files, like the GitHub Actions files and the spell-check file, and the README got updated for spelling.

A

So, here we go. This workflow - I don't know why it's showing here, pulling over here.

A

It should be a no-op on main; it shouldn't do anything when we merge these in. The .gitignore, ignoring the dictionary: that's probably already on main and the rebase pulled it in. The spell check: that also shouldn't matter. This README is a spelling update that someone did. This one is also a grammar fix on the existing process document. I'm going to skip this one for a minute; we can look at some of the others - again, spelling on the main README, the glossary.

A

This one is updated on main, so this is bringing this branch's process doc in line with what's on main. It added some new entries, but we already have these on the main branch.

A

This is one of the GitHub Actions things, I think. This is more spelling, on an existing use case.

A

Spelling on another use case. This is the newest - actually one of the older use cases, the onboarding use case. It's just a full add that's already been merged to main, so again the PR view is misleading: it's not actually going to be an update; it's already there. Supply chain attacks: this is new. This is the user stories.

A

It looks like we have some spelling errors there - there it is, I see it. Victor, would you mind doing a commit suggestion for those? ('Sure, yeah, I can do that right now.') It's where there's a typo for 'container'.

A

So this is the user stories: defense in depth, supply chain attacks, talking about what those are. I'm saying that, whether it's bugs or an actual malicious actor trying to get something in, there are a lot of places from development all the way through production...

A

...where a problem can happen. You can have a bug that makes it all the way to production - it wasn't intentional, but it could cause some type of security issue, and they can get in that way. And these are the actual stories about the different ways this can happen.

A

And that's the main thing: that, and this section right here, adding the user story section. The rest of the pieces in here were either spelling or specific changes requested by folks in the comments. I probably can't see it here, but if we look at the conversation, there was some stuff that Randy suggested, and those were accepted changes.

A

She suggested several things, and those have been accepted; Ponchai made suggestions, and those were included. There were very minor things, like on the central system - and actually that user story got deleted, so it doesn't really matter - but there were several things like that.

A

Like this: notice, here's a question that shouldn't be there. Yeah, this was a comment that somehow made it into the text, so we deleted that.

A

But those were the minor changes. Most of this was all done back in July and August, and then the user stories were what we were waiting on, and those came in.

B

If you go down, you will see my comments to change the 'container' typos.

B

All right.

A

Yeah, if I can load this - oh, it loaded. All right, and then I can do...

A

All right.

A

So I think that's it, Victor; if I just refresh, you can do a new review.

A

And it was pointed out that it's harder to review because of the rebase.

A

There's not a whole lot we could have done on this one, because it's been open for so long, but maybe we can figure something out next time. Any other comments before we merge this? We do have enough approvals.

B

Yeah, apparently there are a few words which are not in the dictionary, but I can just add those later. All right.

A

All right, I'm going to squash and merge.

A

Huh - I'm going to keep that one, because it's funny.

A

That's a comment about what should be tested.

A

That happened.

A

'Add a format version' - I don't want that sparkle.

A

I think this was already covered - it's checked. When... Richard - okay, let's delete that, just like that.

A

'Several additional terms' - that's multiplying, but...

A

Bill, do you want two different co-authored-by lines? Let's remove that one.

C

No, it's okay - you can just do one.

A

All right, let's get rid of that one.

A

Wow - so you can see there's a whole lot.

A

Okay, let's see... let's forget that one: typos, spelling fixes, glossary updates, alignment.

A

That's okay; there's a whole lot going on here.

A

Okay, another from Bill, another from Victor...

A

Wow.

A

Everybody was on this.

B

I'm just wondering, in favor of saving time - I don't know if Jim can start presenting in the meantime?

A

Yeah.


A

No problem - let's do that. Thanks.

A

Jim, go ahead, and I'll finish this up.

C

All right, that sounds good - thank you. Let me do a very quick introduction. In fact, this presentation I'll share a few slides from is something I did at OSS Summit just last week, on Kyverno.

C

A quick introduction to myself, since this is my first time in the working group - and thank you for having me and for being open to hearing about Kyverno and what we're building in the project. I'm one of the creators as well as a maintainer on the Kyverno project.

C

I also serve as a co-chair of the Kubernetes Policy Working Group and a track lead in the Multi-Tenancy Working Group, and of course participate in various other forums like TAG Security. I'm also a co-founder and the CEO at Nirmata.

C

Just a few things on Kyverno - I'll jump around and won't go through the whole architecture, because this was an hour-long presentation; there should be a recording up in a few weeks, and I can share that with the team if you're interested. But on the motivation for Kyverno: what are we trying to solve with this project? First off, in Kubernetes, policies are becoming critical.

C

As the complexity of Kubernetes - and not just Kubernetes, but the extensions being built on it - continues to grow, what we're trying to do with Kyverno is bring a very Kubernetes-native approach to policy management. Given that our tagline is 'Kubernetes native policy management', it begs the question: what does that even mean, and why does it matter? There are several tools which talk about being Kubernetes native.

C

What we mean by it is being fairly deeply plugged into the control plane: not just talking to the Kubernetes API server, but also understanding the Kubernetes API schema, understanding custom resources, and working with Kubernetes patterns and...

C

...idioms like pod controllers - knowing what pod admission control means, and how we complement and extend it to provide better security and automation. That's what we're trying to solve with Kyverno: making it really simple and very native to Kubernetes how policies are written, how policies are managed, and even how policy reports are made visible in Kubernetes itself. We'll see that in the quick demo. But just to quickly explain, I'll skip past...

C

...some of this; feel free to let me know if there are questions and we can go back. As for how Kyverno works: it runs as an admission-time webhook, so it integrates with the cluster - and I'll go through a quick install; it's very simple to bring up on test or production clusters.

C

It supports a full HA mode if you have larger clusters, but it's very simple to get started. It plugs in as admission control - mutating as well as validating webhooks - and then starts receiving admission requests; based on your configured set of policies, which are just Kubernetes resources themselves...

C

...it will validate (and block), or mutate, and it can also generate new resources on the fly. That opens up quite a lot of interesting use cases. For example, when you deploy a new workload you might want to mutate the pod - it doesn't have to be just for pod security - you could automatically inject or override a security context, or inject sidecars.

C

It could even be changing things like network settings on the fly, or creating completely new resources - for example, if you're running a service mesh and every time a service is created you want to generate certs and create an Istio resource. Those types of things are common use cases we're starting to see in the community. To explain how a policy works: there are different kinds of rules in Kyverno, and every policy has a set of rules.

C

Each rule has a match or exclude block, which lets you do fine-grained logic on which resources to match - by namespace, even by user roles or labels, things like that. Once you've matched a set of resources, you can run rules to mutate, or to verify images and image signatures.
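
As a sketch of that rule structure - this is a hypothetical policy, not one of the shipped samples, but it follows the ClusterPolicy shape described here:

```sh
# Hypothetical validate rule showing the match/validate structure described above.
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label            # illustrative name
spec:
  validationFailureAction: audit      # "audit" reports; "enforce" blocks
  rules:
  - name: check-team-label
    match:
      resources:
        kinds:
        - Pod
        namespaces:
        - production                  # fine-grained matching, as mentioned
    validate:
      message: "Pods in production must carry a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"                # wildcard: any non-empty value
EOF
```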

C

Earlier Hillary mentioned supply chain security - that's something that requires admission controls to complete the end-to-end security posture, and with some of the work going on in other communities like Sigstore, we're integrating cosign with Kyverno to verify image signatures from any OCI-compliant registry. And then you can of course validate, which can either block...

C

...so if something's non-compliant you can block it in production clusters, or report it in audit mode in dev/test clusters - and you can have a mix of those based on policies or even on namespaces, as you wish. And then, like I mentioned, another powerful use case is generating resources, which lets you automate a lot of things that previously required custom admission controllers. We're seeing more and more use cases.

C

Even simple things: if you want to deploy registry secrets or certificates, things like that, you can generate them, manage them on the fly, and make them available to every workload and every namespace.
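
A sketch of that generate pattern, modeled loosely on Kyverno's community "sync secrets" sample; the secret name and source namespace are assumptions:

```sh
# Clone an image pull secret into every newly created namespace.
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-registry-secret          # illustrative name
spec:
  rules:
  - name: clone-regcred
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: Secret
      name: regcred                   # assumed secret name
      namespace: "{{request.object.metadata.name}}"   # the new namespace
      synchronize: true               # keep the copy in sync with the source
      clone:
        namespace: default            # assumed source location
        name: regcred
EOF
```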

C

So with that, let me dive into the demo. On the kyverno.io website we have a whole bunch of sample policies; today we'll just look at the pod security policies, but there are several other best practices - for example, using immutable image tags rather than something like 'latest'. It seems harmless to use 'latest', and it gets used quite a bit in dev and test.

C

I do that, everyone does that - but if you're running in production you want to pin the software version, and you also want to do things like replacing your image tags with digests.

C

All of these are best practices, and there are 80-plus policies driven by the community - the list keeps growing with every release, so there are certainly several to look through. But focusing on pod security: we talked about running as non-root as one of the policies, but there are several others, and all of these follow the definitions in the Kubernetes pod security standards.

C

If you're not familiar with that, it's a very key document. PSPs were one implementation of the pod security standards, but now there are other implementations, like Kyverno and OPA Gatekeeper, as well as the upcoming pod admission controller, which will do label-based settings at namespace-level granularity - I believe that's targeted for version 1.25.

C

But if you're using a policy engine like Kyverno, you get a lot more flexibility in how you manage these profiles and apply them across your workloads and namespaces. And you can of course apply security beyond pods - other best practices, like running a read-only root filesystem, which isn't one of the PSP policies but is also considered a best practice and a good security standard to apply.

C

So anyway, what I'm going to do first is install Kyverno, to show how easy it is to get started. We'll jump back into the documentation and go to the installation.

C

There are several ways to install Kyverno; I'm just going to use the command-line option with the YAMLs, which pulls down a set of YAMLs and runs Kyverno. Just to show where we're starting from: I just brought up a new minikube cluster, and I have, I think, one namespace that I created - oops - let's say 'get namespace'.

C

I created a test namespace which is running an nginx pod, but that's all I have on this cluster. So the first thing I'll do is pull down these YAMLs and install Kyverno. It comes with a set of custom resources that let you define policies, plus a policy report resource - which, by the way, is now also being used by Falco, kube-bench, and a few other projects, so more and more projects are generating policy reports in the same manner, which allows for some standardization and reuse.
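
The quick-install flow being demoed looked roughly like this; the manifest URL follows the Kyverno docs of that period, so check the current docs for your version:

```sh
# Install Kyverno from the released YAMLs, then confirm the controller is up.
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/config/install.yaml

kubectl get pods -n kyverno     # wait for the kyverno pod to be Running/Ready
kubectl get clusterpolicy       # empty until policies are installed
```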

C

All right - if I do 'get namespace' now, we should see Kyverno running, and if we do 'get pods' with '-n kyverno' (I don't know why I keep saying 'get test') we have a pod which is up and running, so Kyverno should be ready at this point.

C

If we want to make sure, we can check the logs - we'll tail the logs of this... whoops.

C

We can just do it based on the deployment.

C

Get rid of that - okay, so everything's good. It says it's configured its webhooks, which is what it needs to start receiving policies and admission requests. But if I now do 'get clusterpolicy', I have no policies installed at the moment. This is the resource through which we'll now install some of these pod security policies.

C

At the moment there are no policies running on this cluster, so let's go back to the policy repo. Here - this is a command which will apply all of the pod security policies, and it applies a kustomization.

C

What the kustomization does is set these pod security policies to 'enforce' instead of 'audit', which is the default mode they're configured in. That means if I now try to create an insecure pod, it should get blocked.
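
A sketch of how such a kustomization can flip the sample policies from audit to enforce; the file layout here is illustrative, not the exact command used in the demo:

```sh
# Patch every ClusterPolicy in the sample set to enforce mode, then apply.
cat > kustomization.yaml <<'EOF'
resources:
- require-run-as-non-root.yaml   # one of the sample policies (illustrative path)
patches:
- patch: |-
    - op: add
      path: /spec/validationFailureAction
      value: enforce
  target:
    kind: ClusterPolicy
EOF
kustomize build . | kubectl apply -f -
```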

C

To test this, one site I use for some of these insecure pods is from a company called Bishop Fox, in the security space. They have a site called Bad Pods, which shows you pods running with the host namespaces, running as root - several things not configured correctly in the pod. You can grab a deployment or a daemonset or the like; I'll go with the deployment in this case. We'll go for the raw YAML, and I'll grab this one.

C

By the way, if we run that same command again, we should see several policies configured at this point - and I'll show what one of these policies looks like in a second - but now let's try to run this pod.

C

If I do 'kubectl create -f' and give it the YAML, I see a bunch of errors come up right away saying I can't run on the host namespaces - and here's the one we were interested in, checking not just the pod but the containers, and the init containers too.

C

One thing to notice: let me actually show what this policy looks like in Kyverno.

C

When you write this policy, it's written just against the Pod resource, but Kyverno automatically knows - since it's designed for Kubernetes - how to apply it to Deployments, DaemonSets, any pod controller you run. Even with something custom, like an Argo rollout with a custom pod controller, it will recognize it and apply the policy correctly. So this is what the policy looks like.

C

We're matching on the Pod resource, and then we're checking the security context for runAsNonRoot. This declaration means that if a securityContext is configured, runAsNonRoot must be true; similarly we check init containers, and we do the checks both on the pod spec and on the container spec. That's really how simple it becomes to configure and run these policies.
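
A simplified sketch of the check just described; the shipped sample policy is more elaborate, but the conditional-anchor idea is this ('=()' means the inner check applies only when the field is present):

```sh
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: enforce
  rules:
  - name: check-containers
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Running as root is not allowed."
      pattern:
        spec:
          =(securityContext):          # if a pod-level securityContext is set...
            =(runAsNonRoot): true      # ...runAsNonRoot must be true
          containers:
          - =(securityContext):
              =(runAsNonRoot): true
          =(initContainers):           # init containers are checked too
          - =(securityContext):
              =(runAsNonRoot): true
EOF
```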

C

One other thing I can quickly show: if we do 'get policyreport --all' - or let's try '-A' - I can see that for my existing pod, which was already running, it's now generated a policy report. If we look at it with '-o yaml', we'll see all the details of what passed and what failed. Actually, this is in the 'test' namespace, so I need to query there.

C

It shows me every workload and every rule that was applied - which ones passed, which ones failed - and of course all of this can be collected: there are other open source tools, and ways to get this into Prometheus, so there are several ways to report this information.
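
The report inspection steps amounted to roughly this, using the PolicyReport resources that Kyverno generates:

```sh
kubectl get policyreport -A                # per-namespace pass/fail summaries
kubectl get policyreport -n test -o yaml   # full detail: each workload and rule
```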

C

In fact, I think I have some slides on that here, showing the default dashboard, and there's a Policy Reporter project which can show this information graphically as well. Lots of interesting things, but let me pause there and see if there are any questions. Otherwise we can keep the demo short for today, and I'm certainly happy to follow up with more details.

D

I have a question. Very interesting stuff, and of course I'm always supportive of anything that brings us toward policy-oriented orchestration - I really think that's the future; we'll keep seeing more and more policies used in lots of areas. I'm thinking of the topology operator for Kubernetes as well, with policies for placement. But my question is: maybe you could talk more about how this relates directly to CNFs, and where you see it being especially important in telco.

C

Right, yeah. One quick thought is just making it easy to automate as you're doing testing and validation. This can certainly also be integrated into your CI/CD pipelines, with a simple set of policies that can be managed through GitOps or any other solution you wish.
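
For the CI/CD point, the Kyverno CLI can evaluate policies against manifests before anything reaches a cluster - a sketch, with illustrative paths:

```sh
# Run the policy set against a manifest in a pipeline; a non-zero exit on
# failures makes this usable as a CI gate.
kyverno apply ./policies/ --resource ./deploy/deployment.yaml
```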

C

Another thing that may come up - and we often get asked - is how this compares to OPA Gatekeeper, which performs a similar role. The main difference is how you author policies, but the other powerful use case Kyverno enables, which OPA Gatekeeper doesn't, is generating resources. That's another area where, if you want to create policies for workloads, you can in fact have policies that generate policies, or policies that distribute common elements and set up different things, which really helps with decoupling...

C

...creating that separation of concerns: decoupling what developers have to do from what operators have to do, which is a fundamental problem right now in Kubernetes and in scaling Kubernetes. In fact, I think I have a slide here that talks about using policies as a contract, helping decouple what developers care about, what security cares about, and what operations cares about. That's where I think Kyverno can help quite a bit.

E

So one area you could probably help with in this scenario: we have policy on networking and similar, and I know you can help with that if the networking is through a Kubernetes-aware CNI. But one of the things we see within the networking and telecom service provider space is that there are secondary networks that may not have the same set of policies, or may be unaware of Kubernetes, but end up as secondary interfaces within pods.

E

If you had a way to help there - where the existing policies can be rendered into the appropriate SDNs, so that the policy persists regardless of which direction the information is coming in from - or to help with the control of that, saying which systems should or should not be able to connect to each other based on a set of rules, that could be very valuable.

C

Yeah, in fact, Victor and I were discussing that use case. From my understanding, it seems like what's necessary depends on the cluster configuration: your developer, the author of the CNF, may not know how the cluster is configured to operate, so you probably want to inject some of these settings at admission control, and it could vary based on where the CNF actually ends up running. Victor, not sure if you want to add anything else to that?

B

No, no, you're correct. One use case we were talking about was the usage of NSM, but, for example, Multus annotations can also include a few modifications or validations in the declaration.

B

I like this particular project because it's part of the CNCF, and I guess it's proposing another cloud native - or more Kubernetes - way to do things. And you mentioned there are a few default policies that you can also take advantage of?

C

Yes - so there's this policy library: there are the pod security policies, but there are also policies for generate, mutate, and validate. For example, with every namespace it's always good to have a default network policy, and these can also be customized based on the deployment or whatever needs to be done - like we were talking about, injecting a secondary network and configuring the pod for that.
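
A sketch of that default-network-policy idea, modeled on the community "add-networkpolicy" sample: a default-deny NetworkPolicy generated for every new namespace.

```sh
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy
spec:
  rules:
  - name: default-deny
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: default-deny
      namespace: "{{request.object.metadata.name}}"
      synchronize: true
      data:
        spec:
          podSelector: {}      # all pods in the namespace
          policyTypes:         # deny all ingress and egress until overridden
          - Ingress
          - Egress
EOF
```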

C

So I think there's a use case there - let me see if I can find it - which is kind of similar to injecting a sidecar in some ways. It's fairly elaborate; in this case we're checking for certain things and creating a new container, as well as an init container, based on the policy settings themselves. But yeah, in terms of defaults I would highly recommend - and I know the team has already been looking at running as non-root - starting with pod security.

C

There are several other controls that are part of the pod security standards, so certainly enforce those, because typically most pods don't need to run with higher privileges: most pods shouldn't be using non-default volume types, shouldn't be using hostPath or privileged mode, shouldn't be requiring escalated privileges or host namespaces. All of that can be blocked by default.

C

So starting with this set of policies is always a good best practice, then auditing for it in your CI/CD pipeline, reporting, and of course enforcing in production.

B

And also, Freddie - the use case you were mentioning about people using Multus: maybe another possibility could be adding a validation which ensures you have predefined the additional network in Multus. So if someone is trying to use a network, I'm pretty sure you can catch all these things, because at the end it's just a single annotation - I'm pretty sure Kyverno can catch it and do some logic to ensure that the network exists.
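
Purely as a sketch of this suggestion - not an existing sample policy. It assumes the pod requests a single secondary network by bare name in the Multus annotation, and uses a Kyverno API-call context to list the namespace's NetworkAttachmentDefinitions; every name here is hypothetical:

```sh
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-predefined-network
spec:
  validationFailureAction: enforce
  rules:
  - name: network-must-exist
    match:
      resources:
        kinds:
        - Pod
    preconditions:
      any:
      # Only evaluate pods that actually request a secondary network.
      - key: "{{ request.object.metadata.annotations.\"k8s.v1.cni.cncf.io/networks\" || '' }}"
        operator: NotEquals
        value: ""
    context:
    # Look up the NetworkAttachmentDefinitions defined in the pod's namespace.
    - name: nads
      apiCall:
        urlPath: "/apis/k8s.cni.cncf.io/v1/namespaces/{{request.namespace}}/network-attachment-definitions"
        jmesPath: "items[].metadata.name"
    validate:
      message: "The requested secondary network is not defined in this namespace."
      deny:
        conditions:
          any:
          - key: "{{ request.object.metadata.annotations.\"k8s.v1.cni.cncf.io/networks\" }}"
            operator: NotIn
            value: "{{ nads }}"
EOF
```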

E

Yeah, and it's an issue not only in Multus. It's like: you land an interface - and I could even use Network Service Mesh as an example - you land an interface here, and both of them have some level of control as to who's allowed to put the interface there. There's a portion there that could be...

E

...that could be bound against. There's a little more flexibility in Network Service Mesh in terms of how the policy can get injected and enforced, but neither one of these has the component already built in that answers: how do you actually program the SDN itself? Maybe you have certain rules that need to be within the SDN once something is set up.

E

How do you ensure that those rules have been rendered into the SDN itself? Those particular types of things would be useful in both the NSM and Multus solutions, because then, if you could define what those rules look like here, you can render them into each environment and ensure they're getting applied consistently across the board.

C

Are those rules expressed as a Kubernetes resource, or through a custom resource or a config map, or something like that?

E

Oh, they're not expressed at all - that's the point. So, being able to... there was some literature - I don't know if it ended up in the CNTT documents - where, if you are adding a secondary interface, you have to make a decision: are you respecting the Kubernetes policy contract or are you not? In other words, are you exposing a faster, Kubernetes-compliant path, or is it a non-compliant path? And that distinction was made necessary...

E

...because if it's non-compliant, then you have to rely on the SDN and additional configuration, and you want to make it explicit to the person configuring it that they have to pay attention to this. If it's compliant with Kubernetes, you're just providing a faster path - like maybe I have a web application that needs faster access to a storage system; that storage system is exposed in Kubernetes, and you're basically providing a faster Kubernetes path.

E

Then what you're doing is saying the SDN has awareness of the cluster, is able to monitor the policies, and is able to render those policies regardless of whether you're taking the slower path or the accelerated path.

E

But that was a distinction added at that particular level. There's still the issue of what rules you want to apply to the secondary networks that are non-compliant with Kubernetes policy - being able to just express, "I have these particular things that I want to have these types of connections with," and then set something up where you could eventually interpret that into the appropriate SDN.

E

This would not provide the full path to get there, but it could be that first initial step: here's the first half of the problem, where we could at least express it. How we render that into the SDN is still an exercise that needs to be done, but we'd be a step closer.

C

Okay.

C

Yeah, so happy to help explore and write out some of these policies. We're fairly active within the Kyverno community, helping users with different use cases. For example, here you see policies for cert-manager - there are domain-specific and other policies - and Flux...

C

...one of the GitOps controllers, is also using Kyverno for multi-tenancy, and OpenEBS is using Kyverno as well for pod security. So several projects are starting to adopt it, and yeah, we'd love to work on CNF-specific policies, explore some use cases, and help advertise what Kyverno can do.

A

All right, thank you.

A

Any other comments, questions, or other topics?

A

I guess we only have two minutes, so probably no other topics.

A

Thank you, Jim. ('You're welcome.')

C

Thanks, everyone.

A

Yeah, please reach out. I'd like to talk to you about how you can get more engaged in writing up some of those best practices you were showing in the baseline. Maybe we could talk about use cases that would make it relevant to the folks in the networking and communication service provider space, and I can talk with you about the test suite. ('Sounds great.') You'll see...

C

My email?

A

In the Google doc, yes - contact me that way, or we're on Slack.

A

Thanks, everyone. See y'all next week.


A

Thanks.

A

Good day, thank you.
From YouTube: CNCF CNF WG Meeting - 2021-10-04