From YouTube: Kubernetes SIG API Machinery 20190717
Description
Bi-weekly SIG meeting, July 2019
D: Don't know of anything right off. Just a note on implementation: I think there should be at least one release where it is off by default and can still be turned on with a feature gate, because that's the point at which people notice, right? No one notices until it goes away by default. Yeah, I agree with that.
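A minimal sketch of what that pattern looks like in practice: a Kubernetes feature gate registered with Default: false, so the behavior ships off for a release but can still be turned on with --feature-gates. The gate name here is hypothetical.

```go
package features

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

const (
	// MyNewBehavior is a hypothetical feature name used for illustration.
	MyNewBehavior featuregate.Feature = "MyNewBehavior"
)

func init() {
	// Default: false keeps the behavior off for at least one release;
	// operators can still opt in with --feature-gates=MyNewBehavior=true.
	utilruntime.Must(utilfeature.DefaultMutableFeatureGate.Add(
		map[featuregate.Feature]featuregate.FeatureSpec{
			MyNewBehavior: {Default: false, PreRelease: featuregate.Beta},
		}))
}
```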
F: I know at one point, when we had been talking about having different path prefixes, there was value in potentially knowing that, but the presence of discovery mostly covers that, right? Like, if I want to know where a kind goes, I do discovery. I think selfLink predated some of our current conceptions of discovery, so yeah.
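What "you do discovery" looks like from a client, as a sketch: build a RESTMapper from the discovery client and ask it where a kind goes. This assumes a standard kubeconfig; error handling is abbreviated.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Build a RESTMapper from the discovery documents.
	groups, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		panic(err)
	}
	mapper := restmapper.NewDiscoveryRESTMapper(groups)

	// Ask where a kind "goes": its group/version/resource and scope.
	m, err := mapper.RESTMapping(schema.GroupKind{Group: "apps", Kind: "Deployment"})
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Resource) // e.g. apps/v1, Resource=deployments
}
```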
F: Not sure that we've ever tried to unset a field, but the closest thing I can think of is the null/empty-array fiasco. That was more pervasive because that's typically a serialization-framework problem rather than an empty-but-not-set field, yeah. Okay, we decided to treat them all as empty.
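The fiasco in question is easy to reproduce with Go's encoding/json, which is the kind of serialization-framework limitation being described: a nil slice and an empty slice encode differently, and null versus absent collapse into the same thing on decode. A self-contained illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Spec struct {
	// Without omitempty, a nil slice serializes as JSON null,
	// while an allocated-but-empty slice serializes as [].
	Items []string `json:"items"`
}

func main() {
	nilSlice, _ := json.Marshal(Spec{Items: nil})
	emptySlice, _ := json.Marshal(Spec{Items: []string{}})
	fmt.Println(string(nilSlice))   // {"items":null}
	fmt.Println(string(emptySlice)) // {"items":[]}

	// On decode the distinction collapses the other way: null and a
	// missing key both come back as a nil slice, so "empty but set"
	// and "not set" are indistinguishable to the receiver.
	var a, b Spec
	json.Unmarshal([]byte(`{"items":null}`), &a)
	json.Unmarshal([]byte(`{}`), &b)
	fmt.Println(a.Items == nil, b.Items == nil) // true true
}
```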
G: Apply to the regular resource, but change the status: the request is accepted, but the change doesn't get persisted. The stage at which we calculate the field managers doesn't know that it's going to be wiped out, so we're just trying to expose that behavior without changing how it actually works, because it's pretty important, but it still allows those requests to go through. It doesn't ever change status on the regular endpoint, yeah.
D: I want to make sure I've understood correctly: so if I do an edit, which effectively does a get and an update, that means I will always be sending status for my objects, and, if I've understood the impact correctly, I am then going to be trying to claim those fields from the controller that actually set them.
F: So the measurements at the 5,000-node scale showed about a 25 to 30 percent regression on p99 at the very largest sizes, and that's just because of the way the 5,000-node clusters work: the workload is entirely in very close proximity, so gzip doesn't really benefit you if you're on effectively infinite-bandwidth connections. At the smaller cluster sizes, there's really no impact that we could measure, like tail latency being massively increased for everyone who's not a controller.
F: I think the only practical follow-up we have right now is that it's possible we want to consider disabling compression from the controllers in some deployments; you can opt them out, we just don't make it trivially easy today. I think that's the follow-up based on the data we've got so far, in addition to the KEP: potentially just recommending that people disable it for controllers, yeah.
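Current client-go does expose that opt-out as a field on rest.Config (DisableCompression); whether it was this easy at the time of the meeting is another matter. A sketch:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Opt this client out of response compression; useful for controllers
	// co-located with the apiserver, where gzip costs CPU but saves little.
	cfg.DisableCompression = true

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset // use as normal; large responses will no longer be gzipped
}
```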
B: Okay, yeah. The one thing that I said in this, and I'm not sure I was clear enough, the thought was: we have this watch, which is hanging there, and it writes a chunk and flushes, and my thought was that a chunk could be encoded once and gzipped once and then distributed to everybody who's watching the resource, and that would amortize the cost of...
F: ...doing the compression. Yeah, that's Wojtek's amortized-cost-of-encoding KEP; yeah, actually fine, but...
B: The difference is that the thing that we're sending is a wrapper that describes the event, and then the object is encoded in that event. Wojtek's change only caches that inner object, the serialization of that inner object; it doesn't cache the encoding of the entire thing. Part of the reason for that is that different watchers get slightly different events depending on the labels that you're watching: an add for somebody might be an update for somebody else, and so on.
F: I will say, the other approach that we considered for watch: you don't get a lot of benefit if you gzip each chunk independently; you get the real benefit when you share a dictionary. Unfortunately, that requires Transfer-Encoding rather than Content-Encoding, and Transfer-Encoding is definitely something you can add to an aware client, but it's work, and in theory it might be that the trade-off with transfer encoding...
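The dictionary point can be illustrated with Go's compress/flate, which supports preset dictionaries: compressing a watch event on its own saves little, while priming the compressor with the previous, similar event saves much more. A toy comparison; the event payloads are made up.

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
)

// compressedSize deflates data, optionally primed with a preset
// dictionary, the same mechanism a dictionary-sharing stream relies on.
func compressedSize(data, dict []byte) int {
	var buf bytes.Buffer
	var w *flate.Writer
	if dict == nil {
		w, _ = flate.NewWriter(&buf, flate.BestSpeed)
	} else {
		w, _ = flate.NewWriterDict(&buf, flate.BestSpeed, dict)
	}
	w.Write(data)
	w.Close()
	return buf.Len()
}

func main() {
	// A watch event looks a lot like the previous one for the same object.
	prev := []byte(`{"type":"MODIFIED","object":{"kind":"Pod","metadata":{"name":"web-1"}}}`)
	next := []byte(`{"type":"MODIFIED","object":{"kind":"Pod","metadata":{"name":"web-2"}}}`)

	fmt.Println("independent:", compressedSize(next, nil))
	fmt.Println("with dict:  ", compressedSize(next, prev))
}
```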
B: Where I want to be clear, because we might be talking about different things: if we're talking about a single client, then I absolutely agree, sharing a dictionary and not repeating it every time is right. But with multiple clients, the API server only has to pay the cost of compression once, and then each client pays that cost in bandwidth; we pay the cost of sending a dictionary each time.
F: So yeah, but I'll say this: compression is actually more expensive, just judging by the numbers at the 5,000-node scale; it's a pretty significant bump in terms of latency, just because of the extra time it takes to do the gzipping. So I'd say we would see a worse outcome: it would save bandwidth, but it would cost CPU, and most of our high-cardinality watches...
F: Well, high-cardinality, high-member, or highly identical watch traffic is actually mostly local: if you have 5,000 watchers, it's almost certainly nodes or things in the cluster. So my gut, just based on the data we have today, is that we would get less benefit from watch in-cluster; it would probably be a net cost, and...
F: And that is actually a good point to go investigate. A lot of this is dominated by small effects: we're down to a pretty efficient watch behavior, but gets had a lot of overhead, which is something I was trying to measure just to get a feel for where this is. We definitely know that this regressed latency at high scale; the suspicion is that it's more CPU-use related, due to the gzipping, than it is the other effects, but I'll double-check that bit, yeah.
F: So a lot of, like, the densest clusters have very high bandwidth between nodes and masters, but as you get out of the 0.01% of super-dense clusters, you start moving into: either you're running things out in the cluster, where there might be a bottleneck between node and master, or you're running something where you might be going over an SDN, or you're running through a proxy or some other intervening layer. And so the benefits tend to accrue at the edges.
F: You know, in the proposal we only compress things over 128 KB; that's a tunable, and we could bump it up even more. The vast majority of requests made by clients that aren't lists are below that limit, so it's really just very large lists, and most of the clients doing very large lists are in the control plane or are extensions built alongside the control plane.
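A sketch of the size gate being described: only bodies over a 128 KB threshold get gzipped, so everything except large LIST responses passes through untouched. This is illustrative, not the apiserver's actual handler; a real implementation would stream rather than buffer.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"net/http"
	"strings"
)

// minCompressSize mirrors the proposal's threshold: responses smaller
// than 128 KB are served uncompressed.
const minCompressSize = 128 * 1024

// maybeGzip compresses only when the client accepts gzip and the body
// crosses the threshold.
func maybeGzip(w http.ResponseWriter, r *http.Request, body []byte) {
	acceptsGzip := strings.Contains(r.Header.Get("Accept-Encoding"), "gzip")
	if !acceptsGzip || len(body) < minCompressSize {
		w.Write(body)
		return
	}
	w.Header().Set("Content-Encoding", "gzip")
	gz := gzip.NewWriter(w)
	gz.Write(body)
	gz.Close()
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// A fake 256 KB "list" body, large enough to trigger compression.
		maybeGzip(w, r, bytes.Repeat([]byte("x"), 256*1024))
	})
	http.ListenAndServe(":8080", nil)
}
```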
F: Please continue, yeah. So 1.16 adds this, just as described, and what we're converting is namespace, GC, and quota accounting. That's all merged; we didn't notice anything. We did uncover that, if someone in the future does go add a dynamic controller to the controller manager that wants to do operations on dynamic resources, then we basically are going to pay a cost where we would have to switch back to the dynamic informer. I don't know that that means we wouldn't, but I would say...
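The conversion being described matches what client-go's metadata-only informers provide: list and watch any resource by GVR but receive only PartialObjectMetadata, instead of paying for a full dynamic informer. A minimal sketch, assuming a standard kubeconfig:

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// The metadata client lists/watches only object metadata, which is
	// what lets controllers like GC and quota avoid full dynamic informers.
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)

	// Watch any resource generically by GVR; only PartialObjectMetadata flows.
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { /* metadata-only object */ },
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
}
```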
D: That's what it looked like was going on; I'm completely fine with that, and I don't think it affects this design. The line is only about trying to protect the ones already under API review, and I don't plan to, I see no reason to, expand it to include something like x-k8s.io, because that defeats the purpose of x-k8s.io.
B: Basically, server-side apply is a little too expensive, and the current plan has a few parts. We have this library that actually does the apply operations, and we have a serialization format in the kubernetes main repo, and right now, every time we do anything, there are actually a couple of conversion steps, and that ends up being very expensive.
B: So the current plan is, instead of actually filling out a big structure, we're going to encode it to an opaque string or opaque blob of some sort, and that will let us do this encode/decode step once. It will also let us avoid decoding it unless we want to manipulate it in some way; it's sort of like the RawExtension object we have right now: it requires a two-phase approach to actually decode the contents.
B: So the plan is we'll change to encoding it as a string or bytes or something, and then the separate library will actually make the compatible-API guarantees, with, like, backwards-compatibility golden-data tests, etc., so that that library can optimize its encode/decode, yeah. And we believe that part or all of the problem is that this structure is a bunch of little tiny things, which causes a lot of allocations; it makes a lot of garbage, yeah.
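The RawExtension-style two-phase decode can be sketched with json.RawMessage: the wrapper decodes cheaply, and the inner blob is only expanded when something actually needs to manipulate it. The types and field names here are hypothetical stand-ins:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ManagedEntry is a hypothetical stand-in for the idea described above:
// the expensive inner structure is carried as an opaque blob
// (json.RawMessage, much like runtime.RawExtension) and only decoded
// in a second phase, when someone actually needs to manipulate it.
type ManagedEntry struct {
	Manager string          `json:"manager"`
	Fields  json.RawMessage `json:"fields"` // phase 1: kept opaque
}

func main() {
	data := []byte(`{"manager":"kubectl","fields":{"f:spec":{"f:replicas":{}}}}`)

	// Phase 1: decode the wrapper; Fields stays as raw bytes, so no
	// allocation-heavy tree of little objects is built.
	var e ManagedEntry
	if err := json.Unmarshal(data, &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Manager, "fields bytes:", len(e.Fields))

	// Phase 2: only if we need to manipulate the set do we pay for the
	// full decode into a structured form.
	var fields map[string]interface{}
	if err := json.Unmarshal(e.Fields, &fields); err != nil {
		panic(err)
	}
	fmt.Println("decoded keys:", len(fields))
}
```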
B: So, with that said, I started yesterday making a change to the structured-merge-diff library to store these things in a format that's more amenable to serialization, and I think the next step, I haven't talked with Jenny yet, but probably one of us, will be to define a protobuf, or a set of protobufs, that encode the datatypes there, and then our step will be to encode as proto and either treat that as just an array of bytes or do a subsequent base64 encoding or something.
D: I had one concern about using proto, and it was serializing that inside of a CRD and then getting it back for that custom resource. There wants to be a way for an easy client to be able to read or manipulate that data, and, as I recall, that data is how you can explicitly claim ownership.
F: Also, just in terms of communicating who the owner is: that's a really important thing when you're editing an object. Yes, most people practically today are either editing YAML or writing very limited editors, and so, you know, I'm not saying that's the most important use case, but opaque proto is definitely novel.
F: Jordan and I were having a discussion on a side channel about, like, I know people have wanted a more efficient format than JSON, and we had just done some quick, hey, what if we used something that was a little bit more efficient than JSON but not quite as horribly opaque as proto for things like CRDs? It's certainly difficult to deal with proto if you're not, yeah, a huge proto organization. I'm not saying that's a reason; I'm just saying we were...
F: There are certainly some aspects of this where, even if the proto encoding is more efficient, in other languages we're going to run into issues. For it to be worth it for us, it needs to be more efficient than our fairly hacked-up and insanely complicated JSON marshalling paths, and we have to preserve some of those semantics too. But, you know, there might be some argument that one or other of these might be useful: more efficient than JSON, less efficient than proto. But that's a separate discussion; it's just something that came up, yeah.
F: Yeah, managed fields being viewable by direct editors: we could probably canvass some of the others. I don't know how many of the people who are doing, like, Eclipse UIs or web editors for YAML there are, but we could canvass a couple of them and see if any of them would want to use this in a context. It wouldn't be a 100% use case; it might be something they lean on, but not heavily, yeah.
B: I mean, I can imagine a number of ways of addressing it: you'd be able to use server-side dry run to figure out whether you have a conflict or not and who it would be with. I can also imagine adding another content type or something where the server expands this for you, or maybe it's good enough to just permit kubectl to both expand and compress this data.
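The server-side dry run mentioned first is expressible as a plain apply patch plus DryRun: the server evaluates managedFields conflicts without persisting anything. A sketch, with a hypothetical field-manager name:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	patch := []byte(`{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"demo"},"data":{"k":"v"}}`)

	// Server-side apply as a dry run: the server computes managedFields
	// conflicts but persists nothing, so we learn who owns the field.
	_, err = cs.CoreV1().ConfigMaps("default").Patch(
		context.TODO(), "demo", types.ApplyPatchType, patch,
		metav1.PatchOptions{
			FieldManager: "my-editor", // hypothetical manager name
			DryRun:       []string{metav1.DryRunAll},
		})
	if apierrors.IsConflict(err) {
		fmt.Println("conflict with another field manager:", err)
	}
}
```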
F: I'll note, alongside that, I had taken a quick look, just because I was triggered by this, at some of the efficiency of our update path, and I still think there are some decent algorithmic wins possible in the other parts of the update stack. About a fourth of our allocations during update come from validation, from something completely stupid because of the field path, and that has a factor, and I'm pretty sure that there are opportunities in both patch and update to go improve performance.
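The field-path cost presumably refers to apimachinery's field.Path, which validation threads through every nested field so errors can name their exact location; each Child/Index call allocates a new node whether or not an error is ever produced. A small illustration:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

func main() {
	// Validation code builds a *field.Path for every nested field it
	// visits. Each Child/Index call allocates eagerly, error or not,
	// which is the kind of allocation cost mentioned above.
	root := field.NewPath("spec")
	for i := 0; i < 3; i++ {
		p := root.Child("template").Child("spec").
			Child("containers").Index(i).Child("name")
		fmt.Println(p.String()) // spec.template.spec.containers[0].name ...
	}
}
```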
F: And so, if we could get close with this, we might want to just say we can go make update and patch a little bit better, yeah. Okay, I can definitely look at that. Encoding and decoding right now in update is less than 3% of allocations when you're not talking about patch, and it's a little bit more CPU-wise, but there are a lot of other things going on in update and patch that are probably way more inefficient than they need to be, yeah.
B: Yes, you can check it in validation, but that doesn't get you there: the user doesn't necessarily know about it unless you also state it in the documentation, and you might have bugs in your validation code. Declaring it as immutable in the field specification lets us automatically do the validation and automatically get it into the documentation; it's much more visible to users and much more consistent. Does that answer the question? Yes.
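For contrast, the hand-rolled check-it-in-validation approach looks like this; the type and field are hypothetical. The point above is that nothing about this check surfaces in the schema or the generated documentation:

```go
package validation

import (
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// Widget is a hypothetical API type used only for illustration.
type Widget struct {
	// StorageClass is intended to be immutable after creation.
	StorageClass string
}

// ValidateWidgetUpdate is the hand-rolled style of immutability check the
// discussion contrasts with declaring the field immutable in the schema:
// it works, but users only learn about it from docs, or from the error.
func ValidateWidgetUpdate(newObj, oldObj *Widget) field.ErrorList {
	var errs field.ErrorList
	if newObj.StorageClass != oldObj.StorageClass {
		errs = append(errs, field.Forbidden(
			field.NewPath("spec", "storageClass"),
			"field is immutable and may not be changed after creation"))
	}
	return errs
}
```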
B: This is the one where there was discussion about whether it's okay to consider the old object when trying to figure out what the user meant with the new object. I think, specifically, the concern is you might be able to send the same payload over and over again and the API server oscillates on something.
C: We are gathering numbers: we're doing the scale tests, category number four, for CRDs, to see if we can match the native resources' thresholds and SLO model, which is, like, for read latency, how many objects you can have such that, when you do a list request, the p90 would be within, like, five seconds, both for everything and when it's scoped to a namespace. And we're still gathering numbers; we have a public shared doc, not sure if it's linked here, and there are some existing numbers.
F: I'd say, you know, it's funny: that's true, like, with a hundred or so, and in lots of complex environments that's not the number-one problem; the number-one problem I've seen on the masters is OpenAPI, just in general, so these comments get at it. My practical take is we're going to hit the wall on CRDs and OpenAPI before we hit the wall on anything else, and we haven't even really seen crazy OpenAPI CRDs yet. So, like, that's maybe... it feels like...
F: I don't even know that that's a GA blocker; I was just kind of saying, you know, if we think about the things that are blocking: there's a factor here where we never set early CRD scale targets, so we're kind of coming in after the fact. I think one thing to keep in mind is, if we have to do a lot of work to take it to GA, I would almost argue that's a thing for just scoping down our numbers to fit within a goal, yeah.
B: Yeah, we're going to change our target here, not our code: redefine success, yeah. And just to be clear, there are two sorts of scalability: one is having a CRD with many, many, many CRs, and the other is having many CRDs with a few CRs each, and yes, I think different people will want different things.
B: Okay, good. Okay, let's... we only have a few minutes left, so...
B: The rough idea is he's adding to the serialization stuff a function you can implement to, like, override the serialization, and it takes the defaults, so that lets him, inside the watch cache, implement a special kind of object which intercepts the serialization path and caches it. It only involves a few changes in the serialization stack, and so, I guess, that's good.
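A minimal sketch of the shape such an interception could take: a wrapper object that runs the real serializer once and hands every subsequent watcher the cached bytes. This is illustrative only, not the actual Kubernetes hook or its signatures:

```go
package watchcache

import (
	"bytes"
	"io"
	"sync"
)

// Object stands in for runtime.Object; everything here is a hypothetical
// sketch of the idea described above.
type Object interface{}

// encodeFunc is the default serializer the wrapper falls back on.
type encodeFunc func(Object, io.Writer) error

// cachingObject wraps an object stored in the watch cache and intercepts
// serialization: the first encode runs the real serializer, later encodes
// for other watchers of the same resource reuse the cached bytes.
type cachingObject struct {
	inner Object

	mu    sync.Mutex
	cache []byte
}

func (c *cachingObject) Encode(encode encodeFunc, w io.Writer) error {
	c.mu.Lock()
	if c.cache == nil {
		var buf bytes.Buffer
		if err := encode(c.inner, &buf); err != nil {
			c.mu.Unlock()
			return err
		}
		c.cache = buf.Bytes()
	}
	data := c.cache
	c.mu.Unlock()
	_, err := w.Write(data)
	return err
}
```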
F: I can't say I'm super excited about it, reasonably excited. I have the same feeling, which is that it's probably going to help, but this is, like, watch-cache territory for me, which is: we depend on this being fast, but it's a source of concern and complexity. Maybe... I think this is less complexity than some of the other hard-trade-off components.
F: ...with the complexity it leaves behind, yeah. This one was like... I feel like there's a lot there: the fact that we got a forty percent win on protobuf just by looking at some aspects of the problem means, like, I know there's other low-hanging fruit in the stack that may not make as much of a big deal for the huge-scale clusters but would actually make a lot of difference for tail latency at smaller sizes, so...
F: Yeah, I mean, a lot of little optimizations are just going to add up. Like the gzipping: it makes the 5,000-node case worse, but I have people who are seeing, like, 3 KB/s download rates from their control plane in real-world production environments, because that's just the way the world is, and so the people outside the big supermassive dense clusters definitely do benefit from this.
F: And I'm not opposed to trying it. I think... I looked at this and I was thinking about what we keep: even with the watch cache today we still find things that cause problems, so we're still dealing with the debt from that, and I sometimes feel like that one is coloring how I view some of the other cross-cutting assumptions, because the risk is high that we make a mistake and it impacts everything, yeah.