From YouTube: Kubernetes SIG API Machinery 20200812
A
Computer... perfect, okay, I think we are on. Good morning, everybody. Thank you for joining us in this weekly meeting of SIG API Machinery. Today is August 12th, 2020. I hope everybody is good and healthy and safe.
A
We do have three items today on the agenda: two design discussions and then maybe a technical discussion with a small demo. So let's get right to it. Joe, you're going to be first; I'm going to open the document here in the agenda.
B
Great, everyone. So this has to do with server-side apply. One of the things that is currently missing from server-side apply, compared to something like strategic merge patch, is the ability to explicitly declare that some fields should be deleted, or should be absent, from the live object. It's pretty much the only parity gap between server-side apply and strategic merge patch, but if we had it, controllers could do some really nice things.
B
A good example that I think Daniel pointed out was that you could have a controller that said: hey, we've got some deprecated annotation, I'll go into objects and own the opinion that, you know, that annotation should not exist. If it exists, it could remove it; if it doesn't exist, it could own the opinion that it should not come into existence, and then you'd come into conflict with its field manager if you tried to set that field. So it's useful, and it completes the feature set.
B
It's a little tricky to implement. So in the proposed design section here, I've proposed one way that I think it could be possible to do, which is basically to introduce a tombstone identifier. In this case, I'm using a map with a single key and a single value that are very clearly identifiable, and then those can be placed anywhere that you could have something that you would want to state shouldn't exist.
B
So if it's a field, this would be the value of the field. If it was a map item with a simple value, like an annotation, it could be the value. If it's an associative list, you can actually just put this into the fields of that item, and so forth. The last example, the set, is a little less desirable in my mind compared to some of the other ones, but it's still possible, right? You'd have to say what item you don't want, and you'd have to say you don't want it.
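(For context, a minimal sketch of what such an inline tombstone marker could look like after decoding. This is purely illustrative: the key name below is invented, and the actual identifier in the proposed design may differ.)

```go
package main

// Hypothetical tombstone marker: a single-key, single-value map whose key
// and value are very unlikely to collide with real data. The real
// identifier in the design doc may differ; this is only an illustration.
const tombstoneKey = "x-kubernetes-tombstone"

// tombstone returns the marker value that an applied manifest would place
// wherever it asserts "this must not exist".
func tombstone() map[string]interface{} {
	return map[string]interface{}{tombstoneKey: true}
}

// isTombstone reports whether a decoded YAML/JSON value is the marker.
func isTombstone(v interface{}) bool {
	m, ok := v.(map[string]interface{})
	if !ok || len(m) != 1 {
		return false
	}
	val, ok := m[tombstoneKey]
	return ok && val == true
}

// exampleManifest shows two of the placements discussed: as a plain field
// value and as a map (annotation) value.
func exampleManifest() map[string]interface{} {
	return map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]interface{}{
				"deprecated.example.com/flag": tombstone(), // map item
			},
		},
		"spec": map[string]interface{}{
			"hostname": tombstone(), // plain field
		},
	}
}
```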
B
Is it worth putting this inline? Does it make sense to try and, you know, break it up and deal with the consequences of that? Would it be better to try and put it in some sideband data, like, you know, a set of field paths that's put somewhere, or something else? That would be kind of hard to notice, not right in the object where you'd expect it, but it would get it out of the way, things like that.
B
If you have any thoughts or questions on what I'm explaining... this might be a little subtle.
B
So the idea of the way I'm showing here is it could be directly inline, so it's right inside the object in the normal place, right? So if you previously had set hostname, and you decide you really want that not to be set, then you would substitute in this.
B
The applied state does not exist there, and so that would just be what's in your manifest at that point, and then you're taking a very strong opinion that that field should not be there, and you want to conflict with anybody that thinks it should.
C
I'm trying to understand, then. Wait, so that would mean that I could not have a... like, I don't know, if I had a description, I couldn't have a description set to this value?
B
If your description could be omitted, if it's like an optional field and one of the states is that it can be absent or can be deleted, then you could put that there.
B
Yes, yeah. So yeah, I think if you're asking about the schema, yeah, that's the big problem here: if you have, like, an editor or tooling that wants to check if the manifest conforms to the schema...
B
It's now broken, because we've added these tombstone identifiers as a supported value in a bunch of places that the schema doesn't know about, at least not for the live type.
D
If you're the only owner of a field, changing your mind and saying "I don't care anymore" gives the right behavior, which is deleting it, basically. This proposal adds the ability for you to have a negative opinion about a field, right? So that's right; I think that this is definitely the missing feature between server-side apply and strategic merge patch.
D
So I think it's a good thing for us to have. The thing that is in question is the syntax: like, how do you express this? It's the first non-... like, until now, everything has just been completely regular Kubernetes objects, with fields potentially omitted.
D
This makes it different, and I think we should be super careful about how we do this, because that's kind of where strategic merge patch went off the rails: when we started adding custom syntax. Right now I don't foresee any other feature of this nature, so like, this would be it, but I'm not 100% sure about that. It's difficult to be 100% sure about that. I was going to say one other thing...
E
One quick thought, and I think it's easily dealable with if we go over a couple of releases, but during something like an upgrade, what's going to happen to the older kube-apiservers when they see one of these apply states? We might...
D
Yeah, so that reminds me what I was going to say, actually, which is: is this a persistent opinion, like you say this and then we hold that for you in the managed fields list, or is it a transient opinion, right? You say this, we make sure it's true at the time you say it, but we don't give conflicts to other users.
B
Typically it really wouldn't kind of expand out, but, you know, in a pathological case you definitely could come up with those. You might have to find some way to, like, garbage collect field managers or something like that. I'd have to put some thought into how to do that; it's kind of a hairy problem.
E
How do we tell the difference between someone who explicitly wants to re-add the field, and a controller that just thinks it owns the field, was unaware that it had been removed, and is just sending all the data like normal?
B
So if we retain, in the managed fields, the field manager who expressed this opinion, then it's really clear what's going on. If you're not that field manager, then you're going to get a conflict if you try and do anything with this field, because the field manager that expressed this tombstone opinion owns it. So you have to resolve that conflict. So that's pretty clear.
B
If you do own it, then it's in the golden state and you owned that opinion. I think that's pretty clear.
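(A toy model of the conflict semantics just described, purely illustrative: real server-side apply tracks ownership in the managedFields encoding, not in a flat path map, and the names here are invented.)

```go
package main

// opinion is what a field manager asserts about a field path.
type opinion int

const (
	owns      opinion = iota // manager sets the field to a value
	tombstone                // manager asserts the field must not exist
)

type fieldState struct {
	manager string
	op      opinion
}

type object struct {
	fields map[string]fieldState // field path -> current owner and opinion
}

// apply records a manager's opinions and returns the field paths that
// conflict with a *different* manager's existing opinion, mirroring the
// "you'd come into conflict with its field manager" behavior above.
func (o *object) apply(manager string, opinions map[string]opinion) []string {
	var conflicts []string
	for path, op := range opinions {
		cur, exists := o.fields[path]
		if exists && cur.manager != manager {
			conflicts = append(conflicts, path)
			continue
		}
		o.fields[path] = fieldState{manager: manager, op: op}
	}
	return conflicts
}
```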
F
Just to be sure again: those tombstones, they only get persisted in the form of managed fields, but not in the object, because that's not possible after encoding, right?
F
Okay, so we might have to ensure that our current managed fields cleanup doesn't get rid of tombstones just because it doesn't know the field, but that's... yeah.
B
Yeah, that was the biggest thing I wanted to get clarity on from people: is this design eliminated because it doesn't match the schema, and should we be looking out for alternatives?
B
The main alternatives I can think of would be that you would have an additional... either a set of field paths, somewhere hidden away in the object, that told you everything that is a tombstone, or you would have, like, another object, kind of like the field managers, that just expressed these opinions about tombstones somewhere else. Why wouldn't the user have to...?
B
That's a possibility I have thought about: how do we feel about people directly editing the managed fields?
D
Yeah, you can do it, like, with get and put; you can't... you can't both apply and change the managed fields at the same time. But we could have a... yeah, so I can imagine we could have a special tombstone entry that we did permit you to set with an apply. It also occurred to me...
D
I had an idea while you were showing this, Joe: what if we accepted, like, a positive object and a negative object, right? So in your apply manifest, you have the object that you have positive opinions about, and you could, maybe with a YAML separator or something, put in the fields that you want to tombstone as if they were a positive object, if that makes sense. I haven't thought this through, so maybe it doesn't make sense, but yeah.
B
I think Antoine had a slightly related idea, a slightly similar formulation of that. I'd be happy to look at both of these two alternatives, whether we can get them into the field managers or have a negative representation. I'd be happy to go look at those designs and come back.
D
Yeah, I'm really eager to have this functionality in, but I think it's super important to get the syntax correct, so yeah, maybe you can come back with more alternatives. Okay.
C
Sorry, so I can certainly see, in usage, it's helpful to be able to say "I think this field should be empty." Do we think there's also a need to say "I think this particular thing should be exactly this"? So I'm thinking about a case where I have, like, a struct inside spec called, I don't know, tls, and in one release it has a field foo, and later on it has a field bar. Now, I think I should own everything in this tls struct...
D
If it's marked as atomic, then that's the behavior you get. But if it's not marked as an atomic struct... like, it sounds like you're asking for exclusive, and not shared, ownership of a particular...
B
Yeah, okay! Well, thanks everyone. The design's linked from the meeting notes if anybody has additional stuff that comes up later, and I will circle back.
H
Yeah, this one is related a little bit. This one's about using apply with client-go. Right now, the way to use apply with client-go is you send a patch with a special type, and you send an unstructured object, but it's not type safe. So for controllers to use apply, we wanted to have a type-safe way to send an apply request. But we can't just use the Go structs like the other request types do, because that doesn't...
H
...let you specify that you want a field to be absent from the object. You can set it to its zero value, but you can't differentiate between a zero value and not sending the value. So if you wanted to set replicas to zero, there'd be no way to say that versus saying that you don't care about replicas anymore. And so we have a design doc with a couple of different ways to represent an object with absent fields in a type-safe way.
H
So one is builder functions: you'd basically have to call a function for every field you wanted to set, and then it would keep track of which fields you set, and also probably have a struct backing object, or it could all be done with unstructured as well. And then the other option is generating new structs that have pointers for every field.
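(A minimal sketch of the "builder functions" option just described: a builder that records which fields were explicitly set, so a zero value like replicas: 0 is distinguishable from "no opinion about replicas". The type and method names here are invented for illustration; the real generated client-go API differs.)

```go
package main

// DeploymentApply is a hypothetical builder backing object: a plain struct
// plus a record of which field paths the caller explicitly set.
type DeploymentApply struct {
	replicas int32
	set      map[string]bool // tracks explicitly-set field paths
}

func NewDeploymentApply() *DeploymentApply {
	return &DeploymentApply{set: map[string]bool{}}
}

// SetReplicas records an explicit opinion, even if the value is zero.
func (d *DeploymentApply) SetReplicas(n int32) *DeploymentApply {
	d.replicas = n
	d.set["spec.replicas"] = true
	return d
}

// Fields returns only what the caller expressed an opinion about, ready to
// be serialized into an apply patch; untouched fields are simply absent.
func (d *DeploymentApply) Fields() map[string]interface{} {
	out := map[string]interface{}{}
	if d.set["spec.replicas"] {
		out["spec.replicas"] = d.replicas
	}
	return out
}
```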
H
Yeah, and this also kind of relates to the tombstone thing, because clients would need to send the tombstone values, so we'd have to work that into either one. I think it's possible to work it into both, but it might be...
H
So we wanted to bring this to an API Machinery meeting to talk about which of the two options seems like the way to go forward.
C
If I'm understanding the builder scenario correctly, I would actually use the existing structs that we have, and I would be calling essentially a series of set methods: I want to set this field, I want to set that field. And you would have a struct that had, like, the object I'm dealing with and then a record of what I actually explicitly set, and then... yeah.
H
That would be one way, yeah, to set the fields, and that's probably the easiest way, and it would make sure that everything was the right types.
H
And these would just be additional structs, or additional types, that would be generated for all the types in the system. It would probably be in its own package, and then it could be used by clients. And it's not just useful for apply: we think there's probably other uses where you might want to specify a manifest for an object that can have missing fields.
H
It's kind of going to depend on how we... because... but yeah, I don't know.
B
There's two different ways that we could actually back the builders. One would be to have the Go struct and a list of fields that you have an opinion about, and the other one would be to just back it with unstructured. I'm pretty sure that they're completely isomorphic; you could do whichever one worked out for implementation. But I was just going to mention that I was...
D
I was just gonna add... I don't know if this is... well, I see a chunk of YAML in this document, so it can't be completely unrelated. But I see a lot of code that defines objects by assembling Go structs, and I see a lot of non-code that defines objects by writing YAML files. And I don't know if I'm the only person that's ever done this, but when I have a YAML file and I want to assemble a Go object...
D
So I kind of feel like it would be better if the default way that you specified objects in Go was through YAML files or YAML chunks; then it would at least be the same thing as what other people do. I see Clayton doesn't like this idea.
D
I'll counter my own argument, though, so maybe this will save it for you: if you're in code and you don't have a static chunk of data but you're building it, then it is not good to make people build YAML, right? That would be really mean.
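(For what it's worth, a minimal sketch of the "static chunk, then tweak in code" pattern being debated here. The struct, field names, and chunk are invented for illustration; real code would typically decode Kubernetes YAML with sigs.k8s.io/yaml, while this stdlib-only sketch uses a JSON chunk, which is itself valid YAML.)

```go
package main

import "encoding/json"

// PodSpec is a hypothetical, much-reduced stand-in for a real API type.
type PodSpec struct {
	Hostname string            `json:"hostname,omitempty"`
	Labels   map[string]string `json:"labels,omitempty"`
}

// chunk is the static part of the object, kept as data rather than as a
// nested struct literal.
const chunk = `{"hostname": "web-0", "labels": {"app": "web"}}`

// loadAndTweak decodes the static chunk into a typed object, then applies
// the dynamic piece in code instead of string-substituting into the text.
func loadAndTweak(env string) (PodSpec, error) {
	var s PodSpec
	if err := json.Unmarshal([]byte(chunk), &s); err != nil {
		return s, err
	}
	s.Labels["env"] = env // the dynamic piece
	return s, nil
}
```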
C
So I actually... and Clayton is probably angry at me for this: I build up a whole lot of stuff by creating the YAML for every piece that is static, and using essentially sed for a lot of, like, pre-substitution, because if you want to substitute, like, an integer, you can't read it as a string. And then I read it into the Golang object, manipulate the remaining static pieces that are hard to do with sed using Golang, and then I read it back out. David, I mean this in the nicest possible...
G
Agreed. In most places we're doing... we want to start with something that's... so there's a couple of places where this really does come out, which is where the canonical representation of this is something in the YAML and you are just tweaking it. It is kind of unfortunate. You know, certainly stuff like TSX and all that, or JSX, like, you know, structural things... someone's done this in Go, actually, structural...
G
...typing. They're doing some weird stuff with go generate, though, so I'm not sure that's really the best option. But there's probably... so there's definitely better things than dropping in a chunk of YAML, and there's better things than dropping in our horrifying, you know, nested structs too.
D
I guess it's a fair question whether we want codegen to solve the world here, or if we just want a syntax that makes it easy for a controller to call apply. I kind of think that the latter is probably a more practical solution, I mean...
G
I'll be honest: in most of the places, most of the controllers are building JSON, right? It's not really our biggest problem, and we do have edge cases in there, like if someone obviously is constructing the JSON via a string. But we've been... like, even the kubelet, which has other ordering problems, because it's patching things that other people could patch, and it's happening at high frequency, and you get into controller stomp fests...
G
...but, you know, it's constructing the patch by doing an object reflect, or serializing the merge patch diff, and that's 99% effective. So other controllers probably have an even lower bar.
D
Yeah, so one thing that you can do with the JSON patches is provide the system two objects, and it'll do a diff and give you a patch. I don't know if controllers make heavy use of that, but especially if we get some sort of tombstone syntax, that's a thing that we could supply for controllers, yeah.
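(A minimal sketch of "give the system two objects, get a patch back": computing an RFC 7386-style JSON merge patch from a live and a desired map. Removed keys become explicit nulls, which is the merge-patch analogue of the tombstone being discussed; real code would handle lists and non-comparable values too.)

```go
package main

// mergePatch returns a JSON-merge-patch-style map that turns live into
// desired: changed/added keys carry the new value, nested maps recurse,
// and keys absent from desired become nil (serialized as JSON null).
func mergePatch(live, desired map[string]interface{}) map[string]interface{} {
	patch := map[string]interface{}{}
	for k, dv := range desired {
		lv, exists := live[k]
		if !exists {
			patch[k] = dv
			continue
		}
		lm, lok := lv.(map[string]interface{})
		dm, dok := dv.(map[string]interface{})
		if lok && dok {
			if sub := mergePatch(lm, dm); len(sub) > 0 {
				patch[k] = sub
			}
			continue
		}
		if lv != dv { // assumes comparable scalar values
			patch[k] = dv
		}
	}
	for k := range live {
		if _, exists := desired[k]; !exists {
			patch[k] = nil // "delete k": the merge-patch tombstone
		}
	}
	return patch
}
```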
G
So I mean, isn't that... so before we go and reinvent, like, Eclipse SDO or something insane like that, like EMF and all the old-school Java "detect all the changes that happen" machinery: is a static whitelist enough for someone to specify in many cases? I would probably guess yes. Like, how many of the tombstones are we really going to have in the average case?
B
We certainly discussed it, yeah. One of the alternatives is where you just explicitly, somehow in a type-safe way hopefully, provide the field paths of the things that you care about, separate from the object.
G
I would probably say, for anything that's not a truly generic controller... first off, nobody should be writing generic controllers without investing, you know, two person-years into it, because every generic controller ultimately takes that much to be successful. For the very specific controllers, I do feel like even something as brutally simple as that probably would be the most effective. And then the first time we have a generic controller that needs to do generic apply, we put those people under observation and figure out what they need from their use case, and probably... yeah, I wonder...
G
Do we have anything in-tree that's close to this, like a generic apply? So, like, in OpenShift we have a couple: we have, like, the template object, we have a controller that does a create but no apply. I know the Helm operator might have this problem, and other problems that are related.
D
Right, the garbage collector could actually use this, because sometimes it removes owner references; you could use the tombstone for that.
D
Yeah, I'm not sure we should spend a lot of time revamping our existing controllers.
B
Yeah, we did talk to some of the controller-runtime people, and they said that they would be willing victims for trying out new things if we gave them a prototype. So we do potentially have some people that are interested in hammering on something, if we can propose something or at least come up with a prototype.
G
I would definitely prefer feedback from someone who needs apply but doesn't need a generic mechanism, versus, you know, something like a Helm operator or the like, yeah.
B
That sounds like a good step. So we could do a prototype; we can try and find some community members somewhere that are building not a generic thing but a specific controller, and see what they like.
A
In the chat, someone was asking, just to be clear here... okay, I've seen it. I think we're good.
A
Okay, okay, so let's move to the next and last topic. Spotter, are you here? Do you need to present?
A
Yes... okay. We lost it; we saw it, and we lost it.
I
If, for instance, a client of ours has multiple operators running... our operator running in different namespaces, which is a use case that we support, it becomes difficult to set up admission control, both from, like, an administration point of view of actually getting it set up, because this is a cluster...
I
This is a cluster-level resource, and from a management point of view, of making sure you set things up correctly. Like, we can't just have a single admission endpoint, because our admission control works based on the state of the Redis cluster that we are managing, and while there are ways that you could set it up with label selectors...
I
And to do this, and to demonstrate the value, and in a sense to provide backwards compatibility for legacy Kubernetes systems, because we have to support a wide range of Kubernetes environments, we created an operator slash reverse proxy that works based on two custom resources. The two custom resources are what I call namespace validating types, which is a cluster-scoped rule that would be instantiated by the administrator of the cluster; these define which sort of webhook rules can be proxied.
I
Unless the namespace validating types allow something to be proxied, it doesn't matter what the end user creates in the namespace validating rule, which is namespace scoped and is what defines what Gesher would proxy for that namespace's resources. So if a namespace validating type does not define a rule, a namespace validating rule can't have any effect.
I
So as an example: the namespace validating type just defines the group, version, resource, and operations that we allow to be proxied, and the namespace validating rule is basically analogous to the existing validating webhook configuration, except it only works on its own namespace. Because this is a custom resource, we have the whole spec/status setup, to match the traditional way custom resources are done.
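(A minimal sketch of the gating just described: a cluster-scoped "type" says which group/resource/operation may be proxied at all, and a namespaced "rule" opts a namespace in. The Go types and field names are invented for illustration; the real Gesher CRDs differ.)

```go
package main

// webhookKey identifies what an admission request is about.
type webhookKey struct {
	Group, Resource, Operation string
}

type proxy struct {
	allowedTypes map[webhookKey]bool            // from cluster-scoped namespace validating types
	rules        map[string]map[webhookKey]bool // namespace -> namespaced validating rules
}

// shouldForward decides whether an admission request is proxied to the
// in-namespace admission controller. If either the cluster-scoped type or
// the namespaced rule is missing, the request is simply not forwarded
// (and, per the demo, ends up allowed).
func (p *proxy) shouldForward(ns string, k webhookKey) bool {
	if !p.allowedTypes[k] { // the admin never allowed this to be proxied
		return false
	}
	return p.rules[ns][k] // only namespaces with a matching rule participate
}
```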
I
So that's why it's not completely the same as a validating webhook, but it's pretty close. And to show how it all works together in pictures before I go into the demo: we start off with a Kubernetes cluster; it obviously has a kube-apiserver, and we have two namespaces. In one of the namespaces we are running our Gesher proxy admission controller. The admin creates a cluster-level resource of the namespace validating type; Gesher consumes this resource and uses that resource to emit a webhook...
I
...validating configuration that points to itself. And this is how only the resources defined by the namespace validating types can be proxied: only those are visible in the emitted webhook validating configuration. So then what happens? Let's say within the same namespace we create our own admission controller and a namespace validating rule that defines its use, and this custom resource is again consumed by Gesher. So let's say we create a pod that we'll call...
I
What would happen is the kube-apiserver would hit Gesher, because that's what the webhook validating configuration says, and Gesher would say: okay, there's a namespace validating rule for this; it talks to the admission controller.
I
But if you go to another namespace, which does not have a namespace validating rule, and you create the pod again: because there's a webhook validating configuration, it would still have to hit Gesher. In an idealized world, like if this was built into Kubernetes, these resources wouldn't have to exist, but we're working with what exists today. So when it hit Gesher, Gesher would see there's no namespace validating rule, nothing would happen, and it would return that the pod is good.
I
Similarly, the same exact bad pod YAML would hit Gesher, there's no namespace validating rule, and it would be allowed. So, to do this, we have our namespace, which has Gesher running, and a simple admission controller. I wrote this simple controller that works on essentially any Kubernetes object with a label: it looks for a particular label, and if the label exists, it allows the object; if the label does not exist, it denies it.
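(The demo's toy admission policy is simple enough to sketch in a few lines; the label key below is invented for illustration, and a real controller would of course wrap this in an AdmissionReview handler.)

```go
package main

// allowLabel is a hypothetical label key; the demo's actual key may differ.
const allowLabel = "admission.example.com/allow"

// admit implements the demo's rule: allow any object that carries the
// label, deny anything that lacks it.
func admit(labels map[string]string) bool {
	_, ok := labels[allowLabel]
	return ok
}
```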
I
This is actually, I found, useful for testing admission control in general for us. But if we look... okay, so we have our CRDs: our namespace validating types and our namespace validating rules.
I
Basically, through the proxied admission, the allow label is not set for this resource, so it's not allowed; but if I switch to another namespace and create the pod, it does work. So, from our perspective, having the ability to create webhook configurations that are namespaced, that only work on resources within that namespace, provides a lot of flexibility and ease of use to our customers, who want to run operators that are namespaced, and so they might end up running multiple operators of the same type on a single cluster. Simple examples:
I
Different groups are given their own operator; they might have, like, development and production operators on the same cluster; or the ability to test a new version of our operator in parallel with the existing operator, within its own namespace. And it's difficult to do this today with the cluster-level webhook.
I
So, hey, the way we implemented it is not optimal. It would be better for this to be a first-class feature in Kubernetes: just like you have validating webhooks, which are cluster-level, they could also be namespaced, and the function that gets the set of webhooks to operate on could get it from both the cluster and the set that belongs to the namespace.
I
And if it was a first-class feature, you wouldn't need the namespace validating type; that's only necessary in the proxy world, to make it sort of both secure and controllable. And we also have our own question in the proxy mode: what should the failure policy be if a namespaced webhook is not defined? Right now we treat it sort of as an ignore, but perhaps it should be configurable to fail, and this is something we want to get feedback on. In what you saw...
I
...it obviously ignores it, but in a sense that's analogous to how the cluster-level ones work: if you have a cluster-level validating webhook configuration with a label selector, anything that doesn't match that label selector will be automatically approved, or, like, it won't hit the webhook. But anyway, that's our demo, and sort of the motivation for the issue. We want to hear feedback on what people think: do they think we're going about this the wrong way?
D
So yeah, I just want to say this is really impressive. I'm impressed that you wired everything up like that; it's really cool, yeah, thanks for the demo. That was a lot of work. That's really cool, yeah! I don't know, David, do you wanna...?
C
There are side conversations about the mechanics of, like, if you allow someone to intercept an update, then that person can stop things like garbage collection. But aside from that, is the idea here that you want to have an API that exists at the cluster scope that has different validation rules per namespace? It sounded like that's what...
I
It was. So there are two issues: there's a security issue and there's a multiple-operator issue, right? Today, say I'm a cluster admin and I give a user a namespace... wait: I set up the CRD for them, but I give them a namespace to play in. The only way to set up a validating webhook is me being involved; they can't set up a validating webhook for resources in their namespace.
C
So thinking about it mechanically, though, right: you're saying a cluster admin installs a custom resource definition, and then they want to have a validating admission webhook. Well, normally that's created to provide consistent validation rules for all instances of that custom resource definition, for all instances of that CR in the entire cluster. And so when you install the type... yeah.
I
Right, like, not based on sort of the current state of the resource and the cluster. So, for instance, if you wanted to change sharding rules, we would validate that the change of sharding rules you are making is valid; but this depends on sort of the state of the operator and the cluster, or clusters, that it is managing. Like, you can't just have, in a sense, a single admission controller make these decisions.
I
Yes, but the admission plugin would check... So the way we do it today is: Redis Enterprise, in its HTTP interface, already has validation rules, so we basically submit a dry-run request to it. Like, I think when you have a CRD with a cluster-scoped operator, it makes sense to have cluster-scoped...
I
...admission control. But in many uses of sort of the operator paradigm, especially, say, in an OLM world, people are creating namespace-scoped operators, and hence they don't have, like, admission control; and the trick in the operator today is that you can build the admission control into the operator itself.
D
So I want to interrupt, I want to ask: I feel like there might be some missing constraint. Like, is it the case that the keys to make these requests you're talking about are located only within a namespace, and, like, a cluster-wide webhook doesn't have access to them, or something?
I
Or it would have to introspect into each namespace. Basically, anyone who created an operator that was meant to be namespaced, and meant for multiple copies to be run, would end up having to create essentially a very specific version of this reverse proxy that I wrote. And the question is why? One, it doesn't have... it shouldn't be like... they shouldn't...
I
No one... not everyone should have to reinvent the wheel. It seems like there's value in being able to say, like, "for resources in this namespace". It's sort of like a namespace selector, except that the namespace selector is both still a cluster-level resource and depends on namespace setup, which means that it's more fragile, because if somehow the labels on the namespace change, the webhook won't be selected anymore.
C
If this is good protection, it seems like I would want to have this protection everywhere, and I can appreciate you found a way to implement it. But it seems as though you could also have created an admission webhook that matched, you know, where the type system actually created this type.
I
So basically, like you said, you can set it up with namespace selectors: you can set up multiple webhooks with namespace selectors, they each point to the admission control in their respective namespaces, and that would work. But again, you have to make sure your namespaces are set up correctly. If, for some reason, the labels on your namespaces get modified without you knowing, admission control will now not be happening; it will be essentially broken, because Kubernetes won't be selecting that webhook, because the labels don't match anymore.
I
You still need admin involvement to set up each individual selected webhook, because it's a cluster-level resource. And what I tried to do with the proxy... I don't view the proxy as the end goal. The proxy is a motivation slash demonstration, and possibly backwards compatibility for older versions.
D
I vote that we take some discussion to the mailing list, and let's put, at least temporarily, a placeholder for next meeting to continue what's on the mailing list; try to find some specific topic, some aspect of it. Sure.
A
Yeah, I appreciate everybody coming to the meeting today, and I want to be mindful of everybody's time; you have other commitments. So we are going to finish the meeting here. I'm going to upload the recording later. Thank you, everybody, and we'll see you soon. Yup, thanks for the demo. Thank you so much. Thank you.