From YouTube: Kubernetes SIG API Machinery 20200212
A: February 12, 2020. This is the SIG API Machinery bi-weekly meeting. We have a nice topic on the agenda today, so why don't we get started?
Thank you, everybody. For number one: I think there are two major things that we are trying to land in 1.18. One is server-side apply; the other one is rate limiting — priority and fairness. And then we have a second item.
C: We're currently trying to enable field management for every single object in Kubernetes. Right now we only track objects that have been applied — we left everything outside of apply untracked, initially because of performance issues. Joe has done a lot of work to improve the performance, and we're hoping it's good enough now, so we want to enable this for every single object created in Kubernetes. Yeah.
D: We've got a theory; we're testing it. Okay — on priority and fairness, or rate limiting: it's not technically a rate limit, it's actually a concurrency limit, so I've been trying to call it "priority and fairness". Though it's also not technically priorities and queues — it's more like, I don't know. Anyway, it will protect your control plane from clients that are in a hot loop, or doing too much, or who knows what — we don't control the clients; they can do many things that are inadvisable.
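The distinction being drawn — a concurrency limit rather than a rate limit — can be sketched with a toy semaphore in Go. This is only an illustration of the concept, not the apiserver's actual implementation; all names here are made up:

```go
package main

// limiter is a minimal sketch of a concurrency limit (as opposed to a
// rate limit): at most `capacity` requests may be in flight at once,
// but there is no cap on requests per second once slots free up.
type limiter struct{ slots chan struct{} }

func newLimiter(capacity int) *limiter {
	return &limiter{slots: make(chan struct{}, capacity)}
}

// tryAcquire reports whether a slot was free; a caller that gets false
// would be queued (or rejected) rather than served immediately.
func (l *limiter) tryAcquire() bool {
	select {
	case l.slots <- struct{}{}:
		return true
	default:
		return false
	}
}

// release frees a slot once the request finishes.
func (l *limiter) release() { <-l.slots }
```

A hot-looping client exhausts its slots and waits, while well-behaved clients in other priority levels keep their own capacity.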
D: You know — eyeball it, give it a try, and you can see if it helps. That'll be super exciting. The next thing we're doing on this, after we get the last sort of functional piece merged, is working on visibility. Because requests are now potentially going to wait in a queue for some amount of time, we want to make sure we give lots of visibility into why: why did my change wait an extra three seconds, or whatever? So I want to surface things like: you matched this particular FlowSchema, which dumped you into this priority level; and you'll be able to see every request in the cluster matching this FlowSchema, or see that a priority level seems full because of this — like, every kubelet is sending some request like that. Making it easy to diagnose is the next step.
E: My question might be about server-side apply. I did try it, and oh my gosh, it is so verbose. It seems like it ought to be something that could be summarized much more succinctly — what if we talk about that at some point? Yeah.
C: So here's what we tried: we tried to change the REST client. The way you do it today — since we don't have any changes in the client — is that you have to send a patch and you have to provide the patch type. The thesis is that this is not very discoverable; people don't know about it. We could fix that with documentation, obviously.
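For reference, the "send a patch and provide the patch type" dance being described looks roughly like this as a raw request: a PATCH whose Content-Type declares the apply patch type, plus a fieldManager identifying the applier. The server host, path, and manager name below are hypothetical placeholders:

```go
package main

import (
	"net/http"
	"strings"
)

// buildApplyRequest sketches the undiscoverable dance a client does
// today: a PATCH with the apply patch type declared in Content-Type,
// and a fieldManager query parameter naming the applier.
// The host, path, and manager name are placeholders.
func buildApplyRequest(yamlBody string) (*http.Request, error) {
	url := "https://example-apiserver/api/v1/namespaces/default/configmaps/demo" +
		"?fieldManager=my-controller"
	req, err := http.NewRequest(http.MethodPatch, url, strings.NewReader(yamlBody))
	if err != nil {
		return nil, err
	}
	// This content type is what selects server-side apply semantics.
	req.Header.Set("Content-Type", "application/apply-patch+yaml")
	return req, nil
}
```

Nothing in the request shape hints that this combination exists, which is the discoverability complaint.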
C: Yeah — our own client doesn't support it; that's what it is. Or not that it doesn't support it, but that you have to do these dances. So we tried to do it in the REST client. It sounds like the right client — you're supposed to work with REST — but apply is not really a REST verb, so maybe it's not the right place to do it. Stefan has argued that it's not the best place; I think he suggested we do it in a generic client. I don't know very much about the generated clients.
D: …I think, yeah. The main problem is: if I have an int and I want it to be 0, but it's not a pointer, the JSON encoding library — and I think the proto one too, in at least some cases — will refuse to actually say that on the wire. Which makes it hard to tell that I was actually asking for it.
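The encoding behavior described here is easy to reproduce with Go's encoding/json: with `omitempty`, a non-pointer int set to 0 is indistinguishable from unset and is dropped from the wire, while a pointer preserves the distinction. A minimal sketch (the field name is invented for illustration):

```go
package main

import "encoding/json"

// spec uses a plain int with omitempty: a zero value is
// indistinguishable from "unset" and is dropped from the wire.
type spec struct {
	Replicas int `json:"replicas,omitempty"`
}

// specPtr uses a pointer: nil means unset, &0 means "I asked for 0".
type specPtr struct {
	Replicas *int `json:"replicas,omitempty"`
}

// encode marshals v to JSON, ignoring errors for brevity.
func encode(v interface{}) string {
	b, _ := json.Marshal(v)
	return string(b)
}
```

With the plain-int form, an applier who explicitly wants `replicas: 0` sends nothing at all, so the server can't tell they intended to manage the field.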
D: Yeah — in some cases where it has made a difference, elsewhere, we have used pointers, because then it makes it very clear: if it's nil, it doesn't get sent; if it's got a value, then it does. But apply generally uses presence or absence of the field to determine whether or not you intended to manage that field.
D: And we make use of this for all fields everywhere. Making everything a pointer is, in the best case, a significant usability issue with Go. So yeah, we may have to — or, I guess I hadn't really thought too much about this, but I was thinking that you'd give this thing a lower-case [unclear].
D: Yeah, I mean, that would also be consistent with all the tooling that we expect users to write. I'm not super thrilled with what we have — defining Kubernetes objects with Go structs, in your Go file — because that's not compatible with the tooling, like kubectl or Kustomize or anything else users are using to manage their configuration.
D: Then you'd have a type-safe way of using it, but we could store it in, like, a map or whatever instead of the typed thing, and that retains the information about whether a field is present or not. I'm not saying that would be good for every single use case — it's probably not nearly as performant as the existing typed clients — but I think that would be pretty interesting, and more natural, and more compatible with the existing tooling for operating on JSON and YAML files.
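A rough sketch of the map-based idea, assuming a plain `map[string]interface{}` representation: key presence is preserved directly — exactly the information apply cares about — at the cost of the typed clients' compile-time safety. The field names are illustrative:

```go
package main

// replicasSet reads spec.replicas out of an untyped object and
// reports whether the field was present at all. With a map, an
// explicit 0 and an absent field are distinguishable, unlike a
// non-pointer struct field.
func replicasSet(obj map[string]interface{}) (int, bool) {
	spec, ok := obj["spec"].(map[string]interface{})
	if !ok {
		return 0, false
	}
	v, ok := spec["replicas"]
	if !ok {
		return 0, false
	}
	n, ok := v.(int)
	return n, ok
}
```

The trade-off mentioned in the discussion is visible here: every access needs a type assertion, and a typo in a key is only caught at runtime.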
F: Alright — so if you follow that link I've got there, which I think might help with this a little bit — yes, so scroll down to the one that lists that table, and you'll see over in the far-right column there's an "exact" entry; maybe zoom in on this a little bit. What we're talking about here is the way the resourceVersion parameter works when you do list requests — just lists. We get this case.
F: The third case is if you ask for a resourceVersion that you've gotten back in the past. Maybe you've been watching, and the latest resourceVersion you've seen is some particular version, and now you want a list — and you want a guarantee that you don't see anything older than what you've already seen through the watch. In that case, you can pass that resourceVersion in, and the server will give you something not older than that — possibly newer. But the semantics change in a surprising way.
F: If you then set a limit on that query, you get exactly that revision, or you get a 410 Gone — and that's not fair. I ran into this and it caused me some trouble, so I wanted to bring it up and see if we should change the semantics. One reason it causes problems is that you might not know that you set a limit on your request: it might be that you're using a ListWatcher or a Reflector that sets it for you.
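The surprising flip being described can be written down as a small decision function. This just encodes the table as discussed here — it is a sketch of the behavior under discussion, not an authoritative statement of current apiserver semantics:

```go
package main

// listSemantics encodes, as described in the discussion, how the
// meaning of a list's resourceVersion flips when a limit is added:
// without a limit it is a "not older than" floor; with a limit the
// server returns exactly that revision or fails with 410 Gone if
// that revision has been compacted away.
func listSemantics(resourceVersion string, limit int) string {
	if resourceVersion == "" {
		return "most recent"
	}
	if limit > 0 {
		return "exact (or 410 Gone if compacted)"
	}
	return "not older than"
}
```

The trap is that a client never calls anything like this explicitly: a Reflector adding a limit for paging silently moves the request from one row to the other.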
F: And this table used to be worse. There used to be another dimension to it — whether the watch cache was enabled — with more entries. We've collapsed that together, so at least we now get consistent behavior across those two configurations. But this case showed up as a historical artifact of the watch cache: we always go to the watch cache for this case, and so what we did is — it used to be that the column above was "exact" as well.
F: Yeah — I think what we'd like to do is propose some kind of change, maybe a backwards-compatible one. The question I had is: do we need to make a backwards-compatible change here, where we introduce some new way to get consistent semantics, or can we break this and — no, I know what we'll do then.
D: Yeah — I think it is actually kind of useful to be able to send both kinds of requests, but this is not a good API for expressing that. So I think it would be good to preserve the ability under a better API, and maybe we can do something where, if you introduce a parameter, the defaults can be opposite for a while between those two cases. Okay.
D: But at least, if you specify the parameter, you always get what you asked for, and you get the old behavior if you don't specify it — something like that. Okay. I would go further: this behavior should be deprecated, right? The one with this weird exact-versus-minimum resourceVersion semantics. — I mean, it's really hard to change this behavior without breaking clients. — Well, that's what I'm saying with "deprecate": at least I buy your argument that we should keep the equivalent behavior available.
D: As for warning people away from using it — we can warn people that they failed to set some parameter, maybe. Okay — but the limit parameter is what enables the paging behavior: you want to look at all of the resources, just maybe not see them all in one gigantic request. So we can't get rid of that.
D
No
but
I'm
saying
if
we
wanted
to
do,
we
could
deprecated
the
resource
version
and
go
to
an
exact
and
amend
version
instead,
all
to
change
the
resource
version
creditor
right
now,
unless
I'm
saying,
let's,
let's
deprecated
resource
version
put
in
a
min
version
and
an
exact
version
that
have
sensible,
semantics
I
guess:
that's
one
way
to
do
it
haven't
occurred
to
me
in
other
ways.
You
could
have
an
additional
time
where
you
second.
F: It's a little subtle. It's subtle enough that when we tried to fix some stale-read problems, we accidentally ran into it again, despite all our efforts — you know, me and the reviewers of this document, having seen this case, totally missed it again. We've reviewed code about it. It's really hard.
F: We did get at least one suggestion from somebody that they were deliberately using this to get things at an exact version, because they were building exact-version caches. So I don't know how to tease out whether anybody is treating this as a feature — I'm sure we don't know all the developers.
B: Yeah, your comments were super helpful on the draft PR. I threw up a very quick, minimal-change version. The main motivating factor I have for writing this is to basically scatter client behavior in the case of correlated failure. The most obvious case where a user can't really do anything is if they have a coordinated failure that kills a lot of pods at once, and those pods go into synchronized CrashLoopBackOffs. I'd really like something that scatters them over time.
D: Yeah — so on this topic, it's kind of the topic of the day. A few days before you raised this, somebody else raised a similar issue, but just for the reflector part of the client, and I think a change to add some backoff and jitter went in for that. So that'll help for watchers — but obviously watchers are not the entirety of clients running in the cluster. So I think your change is good and fine.
D: At the same time, the ideal case is that priority and fairness is configured to identify those clients, so they all fall into the same priority level. There we have a parameter that controls how we divvy up the concurrency at that level, and ideally they're configured into a level that matches them by some aspect they have in common — the same namespace, say, or the same user agent — and that forces them all into the same queue.
B: The specific bottleneck that I have seen take down systems isn't actually the API server. It's that if you have a coordinated failure — say, sending a packet of death or something to all your services — and you then fix it, you basically wait up to five minutes for the lock-step backoff to unwind, versus seeing gradual recovery of individual pods as soon as the actual problem is addressed.
D
You
maybe
yeah,
so
that's
a
good
reason
to
have
this
on
the
on
the
agenda
here.
So
we
can
take
a
step
back
and
see
if
maybe
there's
some
other
something
else
going
on
to
you,
because
Hewlett
has
its
own,
like
it
crash
loot
back
off.
So
if
all
the
pads,
if
all
the
pods
crash
at
the
same
time,
cubelet
without
doing
any
contact
with
API
server,
will
retry
them
in
an
exponentially
backing
off
thing
and
I,
don't
know
if
that's
jittered!
D: I wonder — because, let's say your service actually crash-loops for five minutes before you address whatever is causing it. Then, no matter what the jitter was, all those kubelets are going to be waiting for the maximum time. So I wonder if another option is to stagger the max time, so that at least some of those pods get a shorter maximum wait between retries.
B: The thought we had was that once you've done a couple of iterations of the backoff, even though they all cap out at the max time — with or without jitter, depending on the implementation — in practice they're staggered, because they've had different jitters up until hitting the cap: five minutes for one pod won't have the exact same start and stop as five minutes for another pod. So you basically see rolling recovery immediately, versus a delay of absolutely no recovery and then everything at once. Yeah.
D: I see — that makes sense. And then, one of the things I asked for in the PRs was a test that just sort of establishes the min and max wait times. I think it would be super useful to do that and make sure that the difference is actually significant — right, because I can imagine the math coming out such that all the random steps actually cancel out and it basically doesn't make a big difference, or I can imagine…
A: Lastly, a couple of reminders, unless anybody wants to discuss something else. Code freeze is approaching — we have two and a half weeks or less, and I think the craziness of pull requests and all of that starts next week. Related to that: remember that we have twice-weekly open triage meetings for all the pull requests that come in and the issues that are open against SIG API Machinery.
A: If you are worried about a particular pull request or an issue before the code freeze, it's a good idea to show up and discuss it there. Usually there is a bunch of people from the SIG in those meetings, so everybody's invited. And with that, I think we are done for today. Thanks, everyone. Thank you.