From YouTube: WG Component Standard Office Hours 20200915
A
Okay, good morning, folks. Welcome to the Tuesday, September 15th, 2020 Working Group Component Standard office hours. This is our hour for folks to drop in, ask questions, and get help with what they are working on. I'm Mike Taufen and I'll be in the channel today. Hey Ameem, how are you?
B
Yeah, I was just doing a follow-up in the comments of the KEP, to understand what the ideas in the comments there are.
A
I think, like, often what happens on these: there's all kinds of problems that need to be solved, right? And often what happens is you're solving a problem that's adjacent to somebody else's problem, and then it's easy to piggyback on that and be like, "oh, I also had this other idea, this other problem that I need solved. Maybe this could do that too." And so there's kind of a balance between saying, okay...
A
...let me think about that and add that, and, you know, "that's really not what this is for yet; let's stay focused on what the original problem we're trying to solve is," so that we don't create a bunch of extra work that prevents us from solving that problem in the first place.
A
Totally understand. Yeah, so, you know, Lubomir has some really interesting ideas around managing config maps in more advanced bootstrap scenarios, but a lot of that is also beyond what we're trying to do here, which is just to provide a file-based API.
A
Dynamic configuration: he described a scenario where you would have a component bootstrapping itself. kubeadm kind of does this, where there's a component that creates a config map and then uses that config later. So instead of just a consumer-side workflow, it's a producer that's also a consumer.
A
Although, I know kubeadm does create and manage config maps for node config when it bootstraps, so that's probably the closest real example we have.
C
Yeah, it did not need to be built this way, and it does not need to continue working that way, and I've tried to express this several times on the Kubernetes side. Kubernetes has always been touted, you know, as having a very restricted, node-local scope, and they added this cluster-wide configuration management API that's very incomplete. Somebody just needs to be able to pass their component config, you know, when the thing joins the cluster.
A
Okay, that's good context to have. It would be good for us to maybe discuss this in a meeting that Lubomir can make it to, just to talk it through. I don't want to, you know, rag on his proposal here without him here, and I'm missing some context from what you guys were talking about. But yeah, so: Ameem kindly rewrote the instance-specific config KEP to support a strategic merge patch approach, instead of a case-by-case merge, which works much better.
A
It's much easier to implement, it's much more flexible for users, so we're trying to get that reviewed. Very cool, yeah, just walking through some of the comments.
A
Yes, cool. So this was one of them. I mean, he even said "some random thoughts," so it's not like he's requesting this, but I really think the hypothetical is derived from kubeadm. The hypothetical is: you have some config map in the cluster, or a future config map type. You install the first instance of this component, it writes its own config to that config map, and then it de-privileges itself before others are created. But I think that's...
A
I think it's hypothetical, and I think there are better approaches to solving the same problem that more cleanly divide responsibility and don't require you to screw with RBAC in the middle of your deployment. So other approaches would be (and this is all out of scope for this KEP at this point, in my opinion): use an operator to manage the install, give the operator the privilege to write the config, and only give the thing that is installed the privilege to read the config.
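As a concrete sketch of that split (names and namespace are made up for illustration): the operator's ServiceAccount is bound to a write Role, while the installed component is bound only to a read Role on the same ConfigMap.

```yaml
# Hypothetical Role pair: only the operator may write the component's config.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: component-config-writer
  namespace: example-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["component-config"]
  verbs: ["get", "update", "patch"]
---
# The installed component gets read-only access.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: component-config-reader
  namespace: example-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["component-config"]
  verbs: ["get"]
```

Each Role would then be bound to the respective ServiceAccount with a RoleBinding, so writing the config never requires granting the running component anything beyond read.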
C
I mean, the config map that feeds the kubelet config, I believe it's stored in kube-system. Yeah, that one you can actually put anywhere, but in the kubeadm case, okay, it's stored in kube-system, and so you would need to have significant privileges to ever...
A
C
A
C
Yes,
yeah
yeah.
What
I
was
getting
at
is,
if
yeah,
if
a
user
has
that
kind
of
permission
in
coop
system,
they
likely
have
access
to
the
service
account
tokens
which
gives
you
the
access
to
impersonate.
A
It's just orthogonal to this proposal. We're just trying to provide a file-based API for components to consume their configuration off the file system, and anything beyond that: there are problems that make sense for a working group, but not for this specific KEP.
C
Yeah, one second, I'm pulling up the KEP so that I can skim through what has changed. I don't know if it's possible to succinctly explain what has changed; I can just pull it up.
A
So, in the previous approach, we were adding a new kind called, like, kubelet instance config, or, you know, instance config, and we were going to write a case-by-case merge, layering the instance of the kind over the normal one. Okay, that is really, really difficult to implement.
A
It sounds easy at the start, because you're like, "oh, I just copy stuff between two structs," but when you layer all the API machinery in, it's really hard, and it's really easy to clobber yourself, to clobber state by accident. It's really easy because automatic conversions can happen, and they'll clobber your perception of whether something was set or not by a user. Yeah.
C
I found a bug before I went to sleep this morning that is exactly a problem with this kind of implementation, between CLI flags and kubeadm config and client config. So yeah, totally. It sounds good in theory, and then you have the constraints of all of these Go structs, and it's just not fun at all. Cool.
A
Yeah, and just to keep it simple, you should only need two layers in the file system for most things. Yeah, so there's still a config flag and an instance config flag, so it's really clear which one goes over the other. So if you're reading a command line, you don't have to figure out what order flags get layered in.

A
All it does is a strategic merge patch, so we can determine the best strategy for each field. Kind of the only thing it maybe leaves to be desired is, you know, it would be nice if users could choose the strategy for the merge based on what they want, but I think that can be left to future work.
A
Interleave it by default. Yeah, you can choose, and I think interleave for almost every map is the correct answer. Otherwise, you should probably be using a list. Because if you think of, I mean, not Kubernetes, but lots of other command-line tools that have hierarchical configuration and only expose it via flags: you're always doing "--something.something-else" and setting a specific parameter in that structure, and so the interleave approach would most closely match that convention.

A
Per key, right. Yeah, it also minimizes duplication: if there's only one sub-sub-parameter of a map that needs to be instance-specific, then that's the only one you have to specify in your instance config.
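The interleave idea can be sketched in a few lines. This is a Python sketch with plain dicts standing in for parsed config (the actual KEP work is in Go, and the field names below are just illustrative): maps are merged key by key, so the instance config only needs to carry the one sub-key it overrides, while anything else (lists, scalars) is replaced wholesale by the patch value.

```python
def merge(base, patch):
    """Merge patch over base: maps interleave key by key; lists and
    scalars are replaced wholesale by the patch value."""
    if isinstance(base, dict) and isinstance(patch, dict):
        out = dict(base)
        for key, value in patch.items():
            out[key] = merge(base[key], value) if key in base else value
        return out
    return patch

# A shared base config and a tiny instance config: with interleave,
# the instance file only specifies the sub-parameter it changes.
base = {
    "evictionHard": {"memory.available": "100Mi", "nodefs.available": "10%"},
    "clusterDNS": ["10.96.0.10"],
}
instance = {"evictionHard": {"memory.available": "200Mi"}}

merged = merge(base, instance)
# nodefs.available and clusterDNS survive; only memory.available changes.
```

The list case shows why interleave is for maps only: there is no stable key to line two lists up by, so the least surprising behavior is for the instance list to replace the base list entirely.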
C
Well, I think that this is a great way to move forward on instance-specific config. Cool, this makes sense. I can see a path for this working in Kubernetes.
C
At that point in time it doesn't stay synchronized, and then, when the config is copied to the disk, you are lacking all the instance-specific information inside that copy of the config. So it's not patched in any way. And the machinery was difficult enough that they didn't want to import the kubelet configuration into kubeadm to allow it to, say, patch the instance-specific fields, and...
C
Yeah, so the unit file, then, is used locally on the node to pass flags via environment variables, to then do the flag overrides at the kubelet for those fields.
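For reference, the mechanism being described looks roughly like kubeadm's kubelet systemd drop-in. This is a simplified sketch from memory, not the exact shipped file; paths and variable names may differ by distribution:

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (simplified sketch)
[Service]
# kubeadm writes instance-specific flags here at init/join time
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# users can add their own flag overrides here
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

The kubelet reads its base config from the `--config` file, and the environment-variable flags layered on the command line carry the per-instance overrides.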
C
So the point being: even if we didn't have strategic merge patch, kubeadm was in a position where, if it had the kubelet config API available, or could use a hacky patch mechanism, it could have taken the global config from the cluster and extended upon it in a node-specific way.
C
Similarly, you know, we could have just had the user pass the kubelet config, you know, for the node, and not have any global config storage. There were all kinds of things that could have happened that would be a more minimal idea than this kind of half-config-management, half-API thing.
C
I wholeheartedly agree with this: the kubelet configuration should have all of the fields. It's recommended not to set certain ones in the base configuration; the instance-specific one can strategic-merge-patch those things in, and tooling can generate those patches per node if necessary.
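As a sketch of that layering (KubeletConfiguration and its `kubelet.config.k8s.io/v1beta1` version are real, but the field values here are invented for illustration): a complete shared base config, plus a small per-node patch that carries only the fields that differ.

```yaml
# Base config, shared by every node in the group.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
evictionHard:
  memory.available: "100Mi"
---
# Instance-specific patch: same apiVersion and kind, only the overrides.
# Strategic-merge-patched over the base by tooling, per node if necessary.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250
```

After the merge, the node runs with `maxPods: 250` while inheriting everything else, including the eviction settings, from the base.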
C
Yeah, exactly. The other thing as well is, well, yeah, there's not really a... when you're starting to talk about operators generating these configs, or other kinds of compute patterns...
C
There might be some interesting ways to... I'm not really sure, yeah. I think that...
C
Yeah. A lot of Kubernetes is built to encourage a lack of that, and the component config, the global config management piece, being the only way to pass those configs has been a mistake that we've propagated across too many releases.
C
We needed to start with the simpler thing, and honestly, I mean, I could have taken, you know, the time to write the patches to fix this several releases ago, but yeah.
C
But now we have this more complicated thing that users depend on, and it's locked into the API of Kubernetes. Honestly, I think we should just deprecate it as fast as possible. I have put in a couple of requests to do a design meeting about how it should work.
C
If we decide to support the idea of node groups in Kubernetes, oh, then...
C
Status of Cluster API: it doesn't really compose as something inside of Kubernetes, but it composes well around it, and if kubeadm supported doing per-node kubelet configurations, then you could use CAPK, the... whatever it is, I don't know. Yeah, anyway, the infrastructure provider for kubeadm, the control plane provider for kubeadm: you can use that to supply a kubelet config for a node group, or a MachineSet, something like that is the name, because you can treat the provisioning options like that for that group of machines. So at the Cluster API level, that's something that can be accomplished, if kubeadm has support to do more than just the singleton configs, which right now it doesn't. Got it.
C
Okay, you can hack around kubeadm's singleton config behavior if you use phases and then write to the file system yourself, as a user, right? Yeah, okay. And then I believe Cluster API has some hacks that have experimented with that kind of approach, but I need to play with it more intimately.
A
I haven't really stayed up on it. I just know they were re-architecting the whole thing, and I kind of lost track of it after that.
C
Yes, yeah, the v1alpha3 API is much different than the previous renditions, and I can chase this down as well with the Weaveworks folks who are participating in this process.
C
Not myself, but I join our sync calls every now and then to keep up to date with where things are. And then, yeah, ideally kubeadm as a tool should also first-class support just, like, adding a node group label.
C
It's a logical extension of what's already there. If you can, you know, have multiple base configs, then people can accomplish what they need.
A
Basically, I guess there are two things. One thing is that avoiding cyclic dependencies helps a lot. So these sort of self-bootstrapping things, or even dynamic kubelet config: it adds more directions to your data flow, and it just makes it really hard to reason about. Yeah, exactly, yeah.
A
I really like the idea of having, like, simple agents, bootstrapping agents. You know, there's one that still does a lot of it with shell and systemd unit files, but, you know, it's still like...
A
If
you
can
get
your
config
in
and
written
to
the
file
system.
The
way
you
want
and
then
you
get
your
thing
bootstrapped
and
then
you
just
treat
it
as
immutable
like
if
you
want
to
upgrade
the
node,
you
create
a
new
node
like
it's
much
simpler,
it's
heavier
weight
because
you
end
up
having
to
tear
things
down
and
create
new
ones,
and
so
it's
like
a
little
more
expensive,
but
it's
much
simpler.
I
think
you
like
save
a
lot
in
reliability.
C
Yeah, I think basically what you're getting at there is what I see as the value add of doing something like node groups in Kubernetes.
C
There already is this top-down approach, where kubeadm puts the component config at the global level, and then there is less coordination as the kubelets, the agents, are joining the cluster. Now they can be pointed to their configuration without having to have that data be supplied every single time, yeah, since it flows down from the control plane, and it's just a part of the privileged bootstrapping operation that lets it happen that way.
A
Yeah, you can make exceptions for that in key areas where it really matters. Credential management, for example, is a great example where it's much easier to centrally coordinate and push keys down from the top level, but it's much more secure to have hardware-localized keys, and then use TPM attestations to assign to keys that are created locally, so you're not sending keys over a network. So there are occasionally cases like that, where you actually want it to be bottom-up instead of top-down, but I think in most cases it's just easier to implement and verify top-down approaches.
C
Yeah, so where this was just incompletely implemented in kubeadm is that the top-down data flow is not indirectable anywhere. There's no way to branch based off of things that are very necessary in a Kubernetes cluster. Yeah.
C
If you decide to build it right, fine; but if you build it wrong, then your user needs those escape hatches, and there's no cost in this case, from a security or policy perspective, in allowing that to happen. Because if somebody has access to bootstrap the node, they're running code on the node as a superuser, and so they naturally have the permission to configure it the way that they want to. Yeah.
C
And so don't make that impossible from a UX standpoint for somebody who does not understand the intricacies of what's happening, right? Provide an easy path for that user to do exactly what they need, until the top-down experience that you have decided to own (this new configuration management that expands the scope of how Kubernetes clusters are deployed) is good, until it's serving the majority of people, until it's possible to do everything you need to do in that way.
A
Cool. If you wouldn't mind commenting on it with some of your thoughts, on the validity of the KEP and your approval for that direction, that would be helpful in moving the discussion forward.
C
Yeah, I can do that right after this call, before my next thing. Cool. Has this been a good conversation for you? I mean, do you have additional things you wanted to talk about?
B
Yeah, it was a good design call.
A
When are you getting married? Sunday? So it'll be on Sunday, or two weeks after that, and starting this Thursday I'm off. Nice.
A
We've got a lot to do.
C
My partner (her name is Tanya) has been watching this Netflix reality show where these people meet each other without seeing each other, then they decide to propose to each other without seeing each other, and then in a month they get married.
C
It's called Love Is Blind, and it's kind of sacrilegious, what they do to marriage.
C
Well, we did the KEP thing, and I'll comment on this with my full thoughts. Wow, there are 78 comments on this.
A
Yeah, it's been through several iterations. A saga, yeah, it is quite a saga. When I first heard it, I thought, "oh, this will be merged in like a week," because it's pretty simple, right? I'm glad we had the discussion, though, because that first approach was ridiculously hard to actually do. Yeah, I mean, as Ameem discovered after he offered to build a prototype.
C
Yeah, let's see.

B
It looked very easy, for sure, but...
C
This is a huge issue that we've gotten ourselves into with Kubernetes API machinery: the inability to understand whether or not a value is actually set. Yeah, it's...
C
Yeah, and some of this is adopted from gRPC, which has, for most of its history, not supported this kind of...
A
...fields, and generate... The thing that's cool about proto is that you do everything through accessor methods; you don't read stuff off the struct directly, and you get accessors that have conventions around fields that may or may not be set, which makes them easier to work with.
C
Yeah, there are some great rants by people on why proto is bad, and a lot of it is just about where it even came from.
C
I think, you know, after something's been so widely adopted, the damage is done, but then you get to a point where the protocol actually does get good. Yeah, like things are now.
A
Yeah, and that's what I've kind of noticed about a lot of things. Often software starts off solving an interesting problem: it's useful, people use it a lot, it's super ugly and doesn't work very well. Then, over time, you can never get it to be as good as it would have been if you'd nailed it in the first place. But you also never know what you actually need to do in the first place, so it's kind of an impossible perfection to achieve.
C
Yeah, but in this case we have a legacy of APIs that have no distinction between set and unset on a field, when it would have cost one bit.
A
And a lot of it just comes out of, you know, Go. It has a native map, but it's a statically typed language, and it just doesn't play nice with a lot of the ideas around not only dynamic types, but dynamic manipulation of heterogeneous containers. And not necessarily just manipulation; well, yeah, in terms of merges, manipulation, but also just the idea that you might get something and not actually know what's in it. And it's not like Go can't handle that: you have interface{} and you can do all these casts and stuff.

A
It's just super verbose because of interface{}, and nobody wants to deal with that code, so nobody writes that code.
C
Yeah, it would have been interesting to see how we could have created structs, or complementary structs, that allowed us to do this kind of thing. Yeah, and we...
A
...have a good in-between here, where we know what we're getting. It's not like we just have some random... it's not like downloading a web page, where you're just going to get some random HTML that might not even be formatted right, and there's all this crazy stuff you have to do to try and get that thing to render.
A
We have pretty well-defined APIs with a relatively small, fixed set of options. I think we could honestly have kept things as pointers in internal types, so that we could maintain set-versus-unset behavior across conversions. And it gets weird with defaulting.
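The pointer trick can be illustrated language-neutrally. This is a Python sketch (in the Go types, a nil pointer plays the role of None, and the field names are invented): if configs are defaulted before merging, every field looks "set," so a patch's defaults clobber values the user actually chose; if unset stays representable until defaulting runs last, the merge preserves user intent.

```python
# Hypothetical defaults for two made-up config fields.
DEFAULTS = {"maxPods": 110, "port": 10250}

def apply_defaults(cfg):
    """Fill unset (None) fields from DEFAULTS."""
    return {k: (DEFAULTS[k] if v is None else v) for k, v in cfg.items()}

def merge(base, patch):
    """Fields the patch actually set (non-None) win; unset fields pass through."""
    return {k: (patch[k] if patch[k] is not None else base[k]) for k in base}

base = {"maxPods": 250, "port": None}    # admin set maxPods; never touched port
patch = {"maxPods": None, "port": 9999}  # instance config only overrides port

# Wrong order: defaulting before merging makes every field look "set",
# so the patch's defaulted maxPods (110) clobbers the admin's 250.
wrong = merge(apply_defaults(base), apply_defaults(patch))

# Right order: merge while None still means "unset", default at the end.
right = apply_defaults(merge(base, patch))
```

The "wrong" order is exactly the clobbering described above: an automatic conversion or early defaulting erases the distinction between "user chose 110" and "user never said."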
A
One of the things is not wanting to make it too complicated, right? If you could merge across API versions, that would get really crazy, because, okay: I got this... what did the user actually mean? They didn't set these fields in this version; they want to merge it with this other version that also doesn't set these fields. Do they want me to take the default from version A and force that default into version B after the merge?
A
So
it's
like
easier,
like
the
reality,
is
like
probably,
nobody
actually
needs
to
merge
across
api
versions
for
something
like
component
config
and
then,
if
you
need
to
merge
across
api
versions,
I,
like
don't
do
defaulting
on
the
first
thing
you
that,
on
the
thing
you're
applying
over
the
other
thing,
like
server
side,
apply
like
it
like.
A
Oh,
it's
only
a
patch
right.
It's
not
a
like
fully
defaulted
thing
that
gets
applied
over
and
that's
probably
like
the
the
like
two,
like
simplifying
characteristics
that
like
make
it
tractable
for
people
to
work
with.
A
So that's what we do here: we just merge the YAML. We require your config and your instance config to be the same API version and the same kind, then we just merge the YAML and unmarshal it at the end. It's just way simpler, because we basically do it before it hits API machinery, and therefore we avoid all the problems that you get from trying to do it after it hits API machinery.
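A sketch of that flow (Python with plain dicts standing in for parsed YAML documents; the real implementation is Go, and the field values are illustrative): refuse to merge unless apiVersion and kind match, deep-merge the raw documents, and only then would the result be handed to typed decoding.

```python
def deep_merge(base, overlay):
    """Recursively merge overlay over base at the raw-document level."""
    if isinstance(base, dict) and isinstance(overlay, dict):
        out = dict(base)
        for key, value in overlay.items():
            out[key] = deep_merge(base[key], value) if key in base else value
        return out
    return overlay

def merge_configs(config, instance_config):
    """Merge instance_config over config before any typed decoding happens.

    Requiring identical apiVersion and kind sidesteps cross-version
    merging entirely, which is the simplification discussed above.
    """
    for field in ("apiVersion", "kind"):
        if config.get(field) != instance_config.get(field):
            raise ValueError(f"{field} mismatch: {config.get(field)!r} "
                             f"vs {instance_config.get(field)!r}")
    return deep_merge(config, instance_config)
```

Because the merge happens on the untyped documents, set versus unset is trivial (a key is either present in the file or it isn't), and none of the conversion or defaulting machinery gets a chance to clobber anything.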
A
Yeah, that's the trade-off, exactly. It's probably interesting how Kustomize deals with that, because it kind of sits in between. Here, we're managing resources that don't actually live in an API server; they're just config files. And because you're following a top-down approach, it's very unlikely that you have version-skewed clients writing the same configuration.
A
The
same
configuration
like
if,
if
the
like,
configs
the
endpoint
or
some
other
monitoring
thing
was
available
to
monitor
the
configuration
that
was
there
like,
you
may
have
version
skewed
clients,
reading
the
configuration,
but
that's
a
different
problem,
because
you
still
have
a
single
writer
and
that
kind
of
also
gets
you
around
the
problem
of
trying
to
like
write
over
old
api
versions.
A
You
might
still
have
to
do
a
conversion
to
get
it
to
a
version
that
the
thing
understands
before
you
do,
a
patch
or
like
whatever,
but
probably
that
thing
is
just
gonna
like
be
able
to.
If
it's
deploying
like
an
old
version
of
the
component,
it
can
use
an
old
api
version
of
the
configuration.
If
it's
deploying
a
new
version,
it
can
use
a
new
api
version.
A
It's not trying to patch the thing it just deployed, in most scenarios; there are a lot of ways to simplify it. But with Kustomize, it's in an in-between, where it is managing configuration that does live in an API, and it's likely that your configuration in your GitOps setup is going to lag in its understanding of available Kubernetes APIs versus what is available in the latest Kubernetes version that you just upgraded your cluster to.

A
And I'm pretty sure, at the end of the Kustomize flow, I think the way it handles this is: at the end of all the Kustomize merge flows, it just does a server-side apply, and the API server is storing in whatever the latest API version is, and it just does the right conversions.

A
Server-side apply definitely has to handle merges across versions.
A
So server-side apply must have something it's doing special to handle that case. I don't know if it's converting the stored v2 to the thing... What's cool about the stored config is that it's already all defaulted, too, so the stored config doesn't actually need to know set versus unset.
A
What I assume happens (this is one way it could work) is it takes the stored v2, and when you're trying to apply a v1 over it, it converts that v2 back to a v1, does the apply, and then converts it back to v2. Or I think it would go the other way... though you can't convert the patch, I think, unless... well, I believe...
A
Where it gets weird is... there's a couple of cases where it gets weird. One: v2 might have fields that don't exist in v1, or vice versa. Usually the conversion mechanism there is to write a custom conversion and store those fields in annotations on one side or the other. That's one way it gets weird. The other way it gets weird is that you might have a different representation of the same information in each API: something that was flat might be in a substructure, something that was like...
A
...between those. And what gets really hard with those scenarios is mapping the understanding of set-versus-unset fields in one version to the understanding in the other.
C
The problem there, though, is just that it might be...
A
...lossy. So, for example, if you're applying a v1 over a v2: you take your stored v2 and convert it to a v1.
A
Maybe your v2 had more fields, so your v1 has some annotations storing those, but the v1 patch can't override those fields, because it only has the old stuff, right? And then you patch it over the v1; if there was a mapping from some string in v1 to an actual structure in v2, the conversion back to v2 takes care of that.
C
No, the version... I would assume that it is supported. I think it has to be, yeah.
A
I think every time you upgrade, it automatically up-converts every stored object to the latest API version of that. So...
A
Your GET request can specify a version that you want, right, which...

C
...would normally be that, yeah.
A
Yeah, with kubectl you probably get the... it's probably more dynamic, but I don't know, because if you have an old kubectl, yeah, if you have an old kubectl, you might just not print the new fields out. I mean, in our case, usually what happens with Kubernetes APIs is that fields get added to existing API versions, and if you have an old client, you just don't see those fields.
A
Yeah, if you've made an actual API version transition, I assume you still need a new client, but I also assume that the client is requesting the version it understands when it makes the request.
C
If it supports the old version, then it gets transparently converted into the new version and stored as the new version, and then, on the next Kubernetes upgrade that eventually deprecates the old version, your apply will start failing if you don't upgrade the objects. Oh.
A
When you use server-side... when you use apply in general, right, you get an annotation that says your last applied configuration. Does that annotation get... if you apply a v1 to a v2, and it works (I mean, we don't actually know how it works, but maybe in theory it works by patching a v1 conversion of the v2 and then converting back), it also applies this annotation of the last applied configuration.
C
Yeah, it's like a higher-resolution, structured version of the last-applied-config JSON, but... yeah. What is the canonical example that we can use here? Probably something... not like Deployment; that would be a bad example, where...
A
But something that stayed in the same API group... Deployment is probably a good example, I would assume. Actually, I would guess that, like lots of things, there are lots of controllers throwing labels and annotations around on things, that need to operate on the same resource but on independent parts of that resource.
A
The other big use case of apply is GitOps flows, right, where I change something, commit it to git, and then that changed thing gets applied over whatever exists, and I might only be configuring a subset of that in git, with some other controller managing the rest.