
A

Hello, everybody, welcome to the November 16th Cluster Ops SIG meeting. This is one more before KubeCon. Robert has asked for some time in this meeting as our primary agenda topic to discuss and get input on the Cluster API work that he's been doing. So with that preface, let's roll.

B

Thanks, Rob. I'm going to go through a couple of slides that give background and then do a quick demo. So, the Cluster API: what we started looking at from our perspective at Google is that the state of the world for cluster management has turned into a lot of fragmentation over the last couple of years. We started off where most people were using kube-up, and then a lot of people decided they didn't like kube-up and started rewriting their own solutions, so we've got a wide variety of things.

B

From open source tools like kops, kubo, or kubespray to commercial offerings like Tectonic or GKE, there's been a flourishing of different options. Along with that, each person chose their particular way of configuring certain components. We have a lack of consistency in things like admission controllers or how people have configured authentication and authorization. People implement upgrades in different ways, so you've got some people doing in-place upgrades,

B

some people doing delete-and-recreate upgrades, and even system components get upgraded in different orders.

B

We've got a lack of consistency in terms of what version of etcd folks are running, how it gets upgraded, and whether HA configurations get upgraded in different orders. Things like machine management have also fragmented: GKE introduced the notion of node pools, which has been widely copied, and there have even been efforts to upstream the notion of node pools into the core Kubernetes APIs, which were rejected. As a result of that, we started looking at building those APIs outside of the core of Kubernetes. We're calling it the Cluster API, and we envision it as a foundation for building higher-level cluster management or cluster ops tooling.

B

So the Cluster API is a declarative way to create, configure, and manage your cluster. It's declarative in the Kubernetes sense, where you specify your desired state, and then we have reconcilers that look at the actual state of the world and make it match the desired state.
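
To make that concrete, here is a rough sketch of what a declarative cluster object might look like. The group, version, and field names are illustrative guesses based on this description, not the exact schema shown in the meeting:

    apiVersion: cluster.k8s.io/v1alpha1   # group/version are illustrative
    kind: Cluster
    metadata:
      name: my-cluster
    spec:
      clusterNetwork:                     # desired cluster-wide networking
        services:
          cidrBlocks: ["10.96.0.0/12"]
        pods:
          cidrBlocks: ["192.168.0.0/16"]
      providerConfig: {}                  # cloud- or bare-metal-specific settings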

B

Those reconcilers could run inside or outside of the cluster. This is a little bit different from normal Kubernetes reconcilers, which pretty much always run inside the cluster, because we have interesting bootstrapping problems when you're talking about the lifecycle of the cluster itself. Then what we have is an interface layer between this Cluster API and the different underlying cloud or bare-metal providers, where we want higher-level tools to speak to the Cluster API and an intermediary controller to interface that down to the underlying infrastructure.

B

So the higher-level tools don't need to know anything about the underlying infrastructure, and we can port existing tools that target underlying infrastructure and are built for Kubernetes to instead target the Cluster API. This is essentially what's been built for GKE, for kops, and for others: there are a lot of systems that implicitly or explicitly define a cluster API, but there's no consistency across the community, and what we'd like to do is standardize it.

A

We'll probably have questions. Okay, so.

B

What we envision this looking like is: you still have a multitude of deployment tools. People are going to have different ways they like to deploy their cluster; if they really like Ansible, or if they like kops, they can still use those. But when the cluster gets created, the cluster will come with an API, so you have a consistent set of tools or automation, like the cluster autoscaler, node auto-provisioner, cluster upgrade, or cluster repair.

B

They can run against any cluster that is conformant to the Cluster API, and those tools can do things like add and remove nodes from the cluster, upgrade the cluster, and apply different upgrade strategies. Underneath the Cluster API you have the underlying cloud providers, and then a glue layer in between: your controllers that reconcile the desired state expressed through the Cluster API to the actual state of the world. And this won't be one controller to rule them all.

B

There will actually be multiple controllers running in a single cluster, and multiple controllers written for different environments. So you could imagine a Terraform controller that works against Google Cloud, and also a native Google Cloud controller that runs against Google Cloud as well. So, some example features that we expect to come out of this, which is what I was hoping to discuss with this group: you can do things like specify policies for cluster upgrades.

B

Today, if you have three clusters on-prem and three on GCE, you likely installed them using two different tools and you likely do upgrades using two different tools. What we'd like is to be able to write a single tool that knows how to upgrade all of your clusters, regardless of where they live. Once you have that single tool, you can start to express policies like doing gradual rollouts of your clusters, even when the clusters span different environments. And once you have a declarative definition of clusters,

B

you can diff clusters and try to keep them in sync. So you can do fun things like make sure that if you're running CI-type clusters for testing new changes, those have all of the same configuration flags for the API server and controller manager that you're using for your production clusters, so that your test environment matches your production environment as closely as possible, even if you have a different number of nodes in your cluster. You can also diff across environments:

B

if you have two different clusters in two different availability zones, or two different cloud environments,

B

you can diff those and make sure you're running with the same configuration in both places. Along the same lines, you can extend that to something called GitOps, which is being evangelized by Weaveworks, where you check your declarative configuration into a source control system, promote it, and have it be automatically applied to your running system.

B

Right now you can do this pretty well for apps, where you can take your Deployment files and your DaemonSet files, check those into git, and then basically just kubectl apply whatever is in git to your running clusters. What we'd like to do is have that apply to the cluster itself as well.

B

So if you want to change the size of your cluster, or change configuration flags for your cluster, you can also check those into source control and automatically apply them to change your production environment as well. That gives you change control: tracking of what's running in production, who changed it, and when, which makes it easier to track down when things went wrong and how to fix them.
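
As a sketch of what that could look like (the repository layout and pipeline step here are assumptions, not something shown in the meeting), the cluster objects live in git next to the app manifests, and a CI job applies whatever gets merged:

    cluster-config/
      apps/
        deployment.yaml        # existing GitOps flow for workloads
        daemonset.yaml
      cluster/
        cluster.yaml           # control plane configuration
        machines.yaml          # desired machines / node pools

    # CI step run on merge; the same command covers app and cluster objects
    kubectl apply -f cluster-config/apps/ -f cluster-config/cluster/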

B

So the end-user story here is: you might have different tools to turn up a cluster, as I showed on an earlier slide, where you might use one tool to create a cluster on vSphere and another tool to create a cluster on GCP.

B

We imagine that the cluster YAML files here can largely be the same, modulo the cloud provider differences when you want to specify specific types of machines. But once you have a cluster up, the things we do after that don't rely on cloud-specific tools. So we built a client-side tool called machineset, which allows you to get a grouping of machines, kind of like a node pool, and you can use that to scale. That would work across both of these clusters using exactly the same client-side code and exactly the same commands. Or you could

B

modify machines, and since machines are expressed using Kubernetes API semantics, you can actually just use kubectl to do this. You don't have to have any custom tools; you can use all the tools you're used to for modifying your apps to also modify the infrastructure underneath those apps. So let me dive a little bit into what the machines API looks like. This is one of the harder problems that we're trying to solve.

B

It's also a problem that a number of people have tried to solve in the community as well.

B

We had a Cluster API working group meeting yesterday, where some folks have been working on a project they call node sets. They have attacked a very similar problem and have come up with a different way to declaratively express what machines should look like in a cluster, so we're hoping to collaborate with them, fold them into our standardization process here, and come up with a definition that serves their purposes along with the goals we've set it up to serve as well.

B

A couple of tenets we're trying to follow in this API: the API should be additive functionality on top of Kubernetes. It's not part of core Kubernetes, and we shouldn't have to change any of the core types to make it work. That has a couple of ramifications you'll see in a minute. We'd also like to make a very clean separation between the pieces that are specific to where you're running, your environment or your provider,

B

versus those that are not. The important thing here is that the pieces that are agnostic are very easy to build higher-level automation on top of, whereas for the pieces that are specific, you can't build consistent automation nearly as easily. So, our goals for what the machine API should do: we should be able to create a node, meaning you declaratively specify what a machine should look like and then get a functional Kubernetes node

B

as a result of that definition. You should be able to delete specific nodes, and this is something that's needed by the cluster autoscaler: when it wants to scale down your nodes, it figures out which node it thinks should be deleted and wants to delete that specific node, as opposed to the alternative, which is "I have a set of nodes, just pick one to delete." If you think about this as analogous to pods and ReplicaSets, ReplicaSets have this third bullet, which is scaling down by one:

B

you don't pick a specific one to get rid of when you're scaling down, unless you're doing some fancy things with labels.
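
A minimal sketch of that difference, assuming the machine API is installed and using hypothetical resource names:

    # cluster-autoscaler style: remove one specific, least-utilized machine
    kubectl delete machine my-cluster-node-7

    # ReplicaSet style: only shrink the count; the controller picks the victim
    kubectl scale replicaset my-app --replicas=4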

B

Next, we'd like to be able to do individual OS updates on a per-node basis, so you can pick specific nodes to be updated, both for OS images and for Kubernetes versions, and both of those should be done in a declarative fashion.

B

A little bit lower priority is being able to update the container runtime, so we'd like to be able to declaratively say we should be upgrading from Docker 1.12 to Docker 1.13 on this node and have that be enacted by the controller. Much lower priority, and something we're thinking about but haven't really delved into or figured out what the API would look like, is specifying arbitrary packages that you'd like to have on your machines: things like what kernel version you want to have,

B

whether you want socat installed, or OpenSSL, or other system libraries. We have heard that there are a lot of people, especially those running on-prem in more customized enterprise environments, that have very specific ways they want to set up their base images, so I think there is some demand for this. But I don't think we know enough about what the requirements are to design an API that we think will actually work for everyone. And finally, we'd like to support auto-scaling.

B

One of the goals here is to rebase existing tools that work directly against cloud provider APIs on top of the Cluster API, and one of those tools is the autoscaler. We have an autoscaler in the Kubernetes ecosystem that works on GCE, and I think it's also ported to AWS, and at least at one time it was for Azure, although I think the Azure support fell out of maintenance and was removed.

B

But if we had the autoscaler pointed at the Cluster API, then any environment where you implemented the Cluster API would just get autoscaling for free, which I think would be a really big win for the whole ecosystem. So what does this look like? Normally in Kubernetes, we have a type that includes two top-level things:

B

one is a spec and one is a status. The spec is your desired state of the world and the status is your current state of the world, and then the reconciler that backs that type is trying to make the status equal to the spec.

B

If you go back to our initial tenets, which were to not touch the existing core Kubernetes API, you'll notice that for nodes we actually have a Node type in Kubernetes, and it has a node spec and a node status. But the existing Node type is really just a status: it reflects the current status of a node and has essentially no declarative fields where you can ask the kubelet to change the state of that node itself.

B

So what we've done is take a step up from there and create a concept called a machine, and a machine is basically the spec for a node. You can think of a virtualized supertype, where what's actually a node in Kubernetes contains a spec, which is the new type we've created, and a status, which is the existing type that's in the core. You put these two things together, and that is really what a node in Kubernetes is.
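
A rough sketch of the Machine half of that pairing; the exact field names are assumptions based on the description here rather than a published schema:

    apiVersion: cluster.k8s.io/v1alpha1   # illustrative group/version
    kind: Machine
    metadata:
      name: my-cluster-node-1
    spec:
      providerConfig: {}                  # provider-specific details: zone, machine type, image
      versions:
        kubelet: 1.7.4                    # desired kubelet version
        containerRuntime:
          name: docker
          version: 1.12.0                 # desired runtime version
    # the matching core Node object plays the role of the status half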

B

It's the desired state of the node, which we're expressing through our type called Machine, plus the status of the node, which is what already exists. I guess this slide is what I was just explaining. In the future,

B

if we refactor Kubernetes and decide that the APIs for the cluster should be part of the core, which is still TBD, we could make a Node type that actually had the spec and the status in the same object. Now let me do a quick demo. I ran the first command already, which was to create a cluster; you can see it here, it took about two and a half minutes. Now

B

I have a cluster. I'm running get nodes in a loop here, so you can see that I have two nodes in my cluster. I'm also running get machines. One of the nice things about the way we're doing this is that, again, you can just use your existing Kubernetes tools to look at resources, so you can get machines, you can describe a single machine.

B

Oops, I need to actually say what type it is. You get the YAML out of that, which tells you that this machine should be running 1.7.4; that's the declarative half. And if you look at get nodes, we see that in fact it is running version 1.7.4. The next thing we're going to do is scale. So, as I showed before, we have a client-side tool

B

that knows how to add new machines to your cluster. So we're going to say: I like this node that I've got, I want more of those, so I'm going to scale the number of machines. Let me redo our watch here, and scale the machines in my cluster where the type equals node up to five. Here before we just had the first two, which have been around for half an hour, and we scale to five.

B

We get four new machines added. This is expressing our intent that we would like new machines to be added to our cluster. Down here I'm watching the instances in Google Compute Engine, and you can see that as a result of saying I would like new machines to exist, we are provisioning new VMs in GCE. If we wait a minute or two, those will start to show up as nodes inside of our cluster, and you can scale up and down to your heart's content.
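
The exact commands in the demo aren't captured by the transcript, so the following is a reconstruction with hypothetical names and flags; the machineset invocation in particular is a guess at what the client-side tool might accept:

    kubectl get machines
    kubectl get machine my-cluster-node-1 -o yaml   # spec shows the desired kubelet version
    kubectl get nodes                               # status shows what is actually running

    # hypothetical client-side scaling tool from the demo
    machineset scale --selector=set=node --replicas=5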

B

We'll let that show up, and then the last thing we're going to do is show how upgrades work. Upgrades are fun because with upgrades, all we do is change, on each machine, what version we would like to be running, and then the controller underneath that actually enacts that change for us and modifies the underlying infrastructure to give us nodes running a different version.
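
In other words, an upgrade is just another declarative edit. A sketch, again with illustrative names and field paths:

    # interactively change the desired kubelet version on one machine
    kubectl edit machine my-cluster-node-1

    # or non-interactively
    kubectl patch machine my-cluster-node-1 --type merge \
      -p '{"spec":{"versions":{"kubelet":"1.8.1"}}}'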

B

So while we're waiting for the nodes to pop up, and before I run the next command, if you have questions or there are things you want to know more about, we can do that now.

A

Tons, yes.

B

What's that? Sorry.

A

I mean, you're issuing CLI commands; you had talked about it being YAML, declarative. Is this actually then updating YAML metadata of what the cluster spec is?

B

Yeah, so this is getting machines, right. So if you get a single machine and you look at it...

A

To a little bit of an extent, this feels sort of Terraform-like.

B

Yes, it is sort of Terraform-like, and one of the things that we've talked about is having a Terraform controller, where you can express what you want your infrastructure to look like using Terraform and then have something that effectively just runs terraform apply in a loop to enact that. Okay.
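
A naive sketch of such a controller loop, purely illustrative (a real reconciler would watch the API and render Terraform from the machine objects rather than sleeping on a timer, and the render-terraform step here is a hypothetical helper):

    while true; do
      kubectl get machines -o yaml > desired-machines.yaml
      ./render-terraform desired-machines.yaml > main.tf   # hypothetical rendering step
      terraform plan -out=tfplan
      terraform apply tfplan
      sleep 60
    done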

C

And your examples are about creating. If you want to do things like cordoning your node, draining the pods off it, doing the actual replace, and then waiting for it to come up before you uncordon it, right? This provides a way that I think is easier than, or at least more native than, Terraform for all that kind of thing.

A

I mean, from a kubeadm perspective, some of what we're describing are primitives that you would wrap to take those actions. Are you envisioning this as a standing service that you're interacting with through an API, or is this a Go binary that would exercise the existing APIs of different cloud providers?

B

So think about the layering with kubeadm: kubeadm assumes infrastructure exists, right? It's built to be run on top of machines that have been provisioned, and this is the next layer we're trying to standardize, which is provisioning machines. This is how I can declaratively say: create these machines for my cluster. In our implementation we're actually using kubeadm init on the master and kubeadm join on the nodes to do the clustering part of this.

B

So this is layered on top of kubeadm to give you more control over your cluster.
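
So under the hood the controller is roughly doing something like the following on the machines it provisions (flags vary by kubeadm version; the token and address are placeholders):

    # on the master, once the control plane machine exists
    kubeadm init

    # on each new node machine the controller provisions
    kubeadm join --token <token> <master-ip>:6443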

A

So to do this, does the Cluster API become a standing service that you're talking against? It would then supervise all the cluster-building activities, because it would have to provide a state machine to do that.

B

I mean, the Cluster API is a Kubernetes API, right? So in the same way that you have a Deployment API or a DaemonSet API, you have a cluster API. This is the machines half; there's another half, which is the control plane configuration half of the API as well.

A

So it's an API extension to the Kubernetes API, is what you're thinking?

B

Yes. Right now it's implemented as CRDs, but it will be an API extension.
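
For reference, a minimal sketch of registering such a type as a CRD, using the apiextensions/v1beta1 shape current at the time; the group name is illustrative:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: machines.cluster.k8s.io
    spec:
      group: cluster.k8s.io
      version: v1alpha1
      scope: Namespaced
      names:
        kind: Machine
        plural: machines
        singular: machine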

A

So the cluster state data gets stored in etcd. It's basically metadata for the cluster.

B

Yes, and since it's metadata for the cluster, there are interesting questions around disaster recovery: what happens if etcd gets corrupted, how do you recover from that? That's one of the reasons we want to migrate from CRDs to API extensions: we can put our cluster definition outside of the same etcd that's being used for the cluster itself. So if you create a job that causes that etcd to go boom, you can have the API server still hitting the API extension server

B

that's hosting your machine definitions, which is reading from a different etcd that is not down, and that could be stored outside of the cluster as well. I think this is what I was getting at at the beginning of the slides: in general in Kubernetes we run reconcilers inside the cluster they're operating on, and since this reconciler is operating on the cluster itself, I think there are cases where people might want to run this reconciler outside of the cluster.

A

What happens if you scale

B

to zero, right? Then where is it running?

A

That's what was getting me a little bit confused, so now I see why you had the inside/outside distinction. From the way you built up the slides, it wasn't clear to me whether you were talking about a Kubernetes extension or an additional service. Just as feedback: it would have been clearer to me if it had been very distinctly stated as an API extension to the cluster, because some of what you were describing, like reconciling multiple clusters, isn't something that needs your reconciler...

A

Wouldn't they have to be outside the cluster at that point?

B

Yeah. Like I said, there are two things I was trying to describe. One is how we define what a cluster looks like and how we might implement that, and so we have an implementation right now that I'm showing you, which works on GCE, along with our proposed API for what a cluster should look like. The other thing was, once you have that definition, what can you do with it? And I guess that's what I really wanted to discuss with cluster operators: what are the common things that people do on their clusters?

B

How bespoke are they? How much can we standardize those tasks, and how can we make sure that the Cluster API supports those tasks, so that we can build common tools and common automation on top of the Cluster API that actually achieve these common use cases?

A

One of the things that jumped out at me in your priorities list... I have a couple of comments; if somebody else wants to jump in, just jump in, or raise questions, or drop them in chat, and I'll shut up, because I have a tendency to ask enough questions to fill up all the air.

A

It strikes me that the pattern of destroying a node to upgrade it makes a lot of sense for cloud users. It's not as easy in physical ops, but our opinion has converged, and this is from when we talk to other operators: moving to the destroy-and-recreate pattern is a better pattern for managing a cluster like this than trying to figure out all the patch permutations. There's a part of me that says: be opinionated. Don't try to patch machines; just say it's

A

a destroy-and-replace operation, period.

B

So that's interesting. That's sort of in line with the push for immutable infrastructure, where you create it, it's there, and if you want something different, you delete it and create a new one. I think what we're trying to achieve with the Cluster API is not to be that opinionated; I think we actually want to support both types of upgrades, depending on what your controller does. So you kubectl edit your node and give it a new version, and what does that mean? I think we

B

have yet to say exactly what it means. One proposal is that it means you do an in-place upgrade, and if you want the immutable-infrastructure behavior, you should just delete the machine and create a new machine with the version you want, which expresses your intent of delete-and-create because you want immutable infrastructure. The other proposed alternative is that you kubectl edit a machine,

B

you change the desired kubelet version, and it's up to the controller to decide whether it implements that as an in-place or replacement upgrade. That's actually what we have today: right now you can kubectl edit a machine and it will just delete the VM and create a new VM. We were discussing yesterday whether that's the right semantics or not, but I think the intent either way is for the API to allow both implementations. So you can imagine somebody who's running on bare metal

B

who really wants to do in-place upgrades, and they can write their controller to do in-place upgrades, while I think most people running on clouds, especially for bigger version jumps, will do replacements.
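
The two proposals amount to different ways of expressing the same intent; a sketch with hypothetical names:

    # proposal 1: edits mean in-place, and replacement is expressed explicitly
    kubectl delete machine my-cluster-node-1
    kubectl apply -f my-cluster-node-1-v1.7.6.yaml

    # proposal 2: just edit the desired version and let the controller
    # choose between in-place and delete-and-recreate
    kubectl patch machine my-cluster-node-1 --type merge \
      -p '{"spec":{"versions":{"kubelet":"1.7.6"}}}'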

A

In the past I might have said to be flexible, but I think at this point, with my experience, the right answer is to be opinionated, because I think you're going to get lost in that edge case, and I think the edge case has very little value. I do think that, semver-wise, a kubelet patch release is just a patch, I guess that's fine. But for anything else I would say: look, this is the pattern, you should be doing this.

A

You should be doing this anyway, and if you can't figure out how to make your physical infrastructure do this, then maybe you should take a step back before you take on a lot of Kubernetes; it just gets super edge-casey. Yeah, go ahead.

C

So if I have a physical machine, I'm not actually going to take an axe to it and dump it in the shredder out back, right? It's going to be the same physical machine when I bring it back up. If I have, for example, a few terabytes, ten or twenty, of data on it, then when I recycle the machine maybe all I'm doing is nuking the boot disk or doing something like a CoreOS-style replacement. So is rebooting into a new version of my OS a delete-and-recreate? You can maybe argue semantics, but if it comes back with the same name and it's got the new version on it, check, mission accomplished. And so this is where saying,

C

"No, we're not going to let you do a CoreOS-style update," or if you do have the kubelet version differing, say 1.7.5 to 1.7.6, or even 1.7.5 plus a kernel upgrade to handle your fun latest zero-day, right?

A

I think you get into really serious issues, especially with a high-level API like this, if you say: oh, we're going to try to enable you to do a kernel patch of Ubuntu, because there's a new kernel out and I don't want to touch the rest of my install, I'm just going to start doing kernel patches. That, to me, is not a good pattern for a generalized API, right?

A

You want to say: look, if you're not just patching a kubelet, then that's really outside of the scope of cluster management. We expect you to either do that completely outside of this, or we'll give you a reset button and you can implement the reset however you want. If your reset is just patching the kernel, you're going to interrupt the flow anyway; while you're patching the kernel you're going to have to reboot, that's a minimum, right?

C

Ksplice.

A

It just... I would suggest those edge cases are not going to win, not going to help you with adoption, I think.

C

That makes sense per the P1, right? The point is not to mandate that you support all of these, but saying that we're never going to support any form of in-place update doesn't seem like a good... I don't get what that buys us in the near term. And, Rob,

B

you even mentioned that we probably would want to do in-place updates for patch releases of kubelets, right? So right there you've got a use case already where you'd think we should do in-place upgrades. So I think what we're trying to do with the API is not close the doors on the different types of upgrade scenarios people might want to build, rather than mandating this is how you have to do it.

A

I guess I'm thinking back to a year or so ago in Cluster Ops, when we identified three different ways to install Kubernetes: use the Go binaries directly, package the Go binaries in containers and manage the containers, and then self-hosted. It was very unproductive to try to maintain all these different alternatives.

A

We quickly dropped the use-the-Go-binaries option, whether it was logical or not. It made a ton of sense for operators to put the binaries in place and not containerize, but it was a variant that just wasn't helping, even if it was smart. So my suggestion is: the cloud pattern is the pattern you're going to see the most of anyway.

A

We do need to bring along the physical ops people; this is practical, considering where I used to stand on this. I've just been watching people who are moving into more immutable infrastructure patterns, and it fits better with the Kubernetes model. You might lose some edge cases, and that's fine, right? Yeah.

C

That's fair feedback, but that sounds like, call it a quibble or a nit. What's your take on the overall aim, if that's a fair thing to ask?

A

No, sorry, yeah, I don't want to get distracted on that. I like the concept; I think it's important and timely. The other thing that jumps out at me is that you're going to end up having to implement node create/destroy interfaces within this API. How do you not get into the "a million ways to create nodes depending on your infrastructure" problem?

B

Well, so the API is just declaratively expressing what the node should look like, not actually how it's created. The controller that watches that API and actually interfaces with the IaaS can be opinionated about how that gets created. It can say: I'm only going to create nodes using Terraform;

B

if you don't give me a Terraform blob, I'm not going to do anything. Or it could say: I'm going to use Docker Machine to create nodes. Both of those systems support a really wide variety of cloud platforms, and you can have a common-denominator controller that's pretty easy to plug in and gets pretty broad support without too much work.

B

The node sets folks who came to our meeting yesterday said they chose Docker Machine for that reason when they were implementing node sets, because they could add new cloud providers in about two days. So it gives a really broad reach. On the other hand, for the larger clouds, I could imagine Google is going to be interested in this, and maybe Amazon and Microsoft as well;

B

we might have more customized controllers that can eke a little bit more performance or features out of a mature cloud platform. So you could install the Docker Machine controller and use that to provision Google nodes, or you could install the native Google Cloud controller and use that to provision Google nodes, and from the API semantics it looks exactly the same to you, but the underlying implementation might be significantly faster or better in some way.

A

From that perspective, you could actually use Terraform providers without using Terraform.

B

Sure, right, and that's the other thing Chris and I were talking about: you could have the controller generate the Terraform. You don't have user-specified Terraform; the user could specify, "I want a node and I want it to have this machine type," and as long as the controller understands that, it could then generate Terraform. You can still use Terraform behind the scenes to actually apply those changes to your cloud. So there are lots of different implementation patterns behind the API,

B

as long as we can get the API right in terms of how users specify what the node should look like.

A

It's a mixed bag, but I would almost consider picking up the Terraform community through the providers without it being Terraform itself. I actually think

A

HashiCorp would be fine with that, because basically you're creating an additional market for the providers, which only helps Terraform. But Terraform itself is not adding value in this; you're using it for the providers, because you're feeding them atomic actions, which is what they're designed to do. And that means there'll be more gravity toward supporting the providers, because they're already doing it: Amazon, Google, Microsoft, whoever, has created Terraform providers for different types of infrastructure. If you can hone in on that, then that becomes the easy way to plug into this, and everybody shortcuts a whole bunch of angst.

B

We're starting to explore what that might look like. Chris gave a talk at HashiConf recently where she was doing something along those lines: basically running a pod that ran terraform apply in a loop, and talking about how we could extract some of the provider code out of that to build machine controllers.

A

I actually like that. The one thing is the metadata: you don't want to have two sources of truth, so the state metadata for Terraform could potentially become a problem for this when you're actually storing the metadata in the API. I mean, all of this makes a ton of sense. I like that you're basically creating a controller-like service that mimics the node controller or the other Kubernetes controllers, but would be a cluster-level controller and would run in the

B

Infrastructure.

A

somewhere. That makes a lot of sense. I'm looking at Greg to see if he has anything. Also, I felt the prep you did on the deck was good; it helped stage things through. My biggest thinking is that, while I understand the temptation to want to have all this pluggability in it,

A

I actually think that would undermine it. I think you have too much scope right now. I would be more willing to make compromises, to do things like say: use the Terraform plugins, not Terraform, just the plugins, and drive them via an API; only support a node create/delete model just to get things flowing through, and call it version one.

A

That way people don't freak out about limiting degrees of freedom. But I think the need is there, and the general outlines of the approach are good, just with narrower scope or narrower optionality.

B

Right, thanks. So on that note, I've got to run, and unfortunately I'm also recording, so I'm going to stop the recording.
From YouTube: Kubernetes SIG Cluster Ops 20171116
