From YouTube: Kubernetes SIG Arch - KEP Reading Club 20220606
Description
KEPs Discussed:
1) Subresource Finalizers: https://github.com/kubernetes/enhancements/pull/3286
2) Graduate API List pagination to GA: https://github.com/kubernetes/enhancements/pull/3274
A
Okay, hi, welcome to this month's KEP Reading Club. Today is the 6th of June and, as always, since this is a community meeting, it follows the Kubernetes and CNCF code of conduct, which boils down to: be excellent to each other.
A
This meeting is also being recorded, which means please don't do or say anything that you don't want posted online. Okay. That being said, we have two KEPs with us today, the first of which is finalizers for subresources.
A
The agenda is in the chat. This KEP belongs to, I think, SIG API Machinery, if I'm not wrong.
B
If I had to put this in a very simple way, it's essentially a way to add role-based access control to the finalizers of a resource: the ability to change, update, and perform operations on those finalizers depends on the role-based access control that is defined at the resource level, whatever that resource is, storage, for example, or something like that.
B
To that end, the idea is, obviously, to add the API endpoint that enables performing that operation and, if I understood correctly, there is also some additional tooling in kubectl that would allow for the manipulation of said resources. That is the main takeaway that I got.
A
Yeah, I understood it the same way. The kubectl tooling, interestingly, got merged last release. There was a feature that a few of us worked on called kubectl subresource, and that got merged last release as alpha. In case that's of interest to you, go use it.
A
I also don't really have any questions as such. I think I get the intent of the KEP and the implementation of it, but I don't know, maybe I'll have to take a deeper look to see if I'm missing anything. At least from a high level, yeah, I think our understandings are similar.
B
Yeah
I
took
a
quick
there's,
a
code
snippet
that
is
linked
in
the
somewhere
in
there,
which
I
took
a
a
look
and
I
sort
of
get
it.
Although
there's
a
lot
of
boilerplate
go
codes,
that
is
certainly
immediately
understandable
to
anyone
who
has
developed
this
to
a
wider
extent
than
myself,
but
it
seems
aligned
with
what
we
just
discussed.
Yeah.
B
Right, pushing on this: yeah, I was trying to check if this was on the 1.25 enhancement tracking list. I don't think it was, last time I saw the release team presentation, but maybe it will be updated.
A
Yeah, if you're talking about the release retro, is that the one you're talking about?
B
I
know
so
in
the
release
team
meeting
there
was
a
a
kubernetes
1.25
enhancements
tracking
spreadsheet.
A
Yeah, that makes a lot of sense. I'm not too sure how API Machinery does it, but, for example, right now, the way SIG Node keeps track is: at the beginning of each release they typically have a call for enhancements, where whoever wants to land a KEP under SIG Node for that release joins that call and talks about their KEP for one or two minutes, and then they track it internally, first in a separate sheet. Access to the enhancement tracking sheet is given, if I recall correctly, only to the SIG leads and the release team, and then it's updated later on. I'm not sure how API Machinery does it; maybe once the enhancements team marks it as tracked, then it gets added.
A
Okay, so if there are no further questions or comments on this one, maybe we can do the next one. The next one is actually not a new feature; it's graduating a feature, which has been in beta for quite some time now, to GA, and this is also something that I am helping do, so I can maybe try and provide some context as and when we go through. It took me a while to even understand what the goal was for GA.
A
So maybe we can start off with that. This is the link, and I'm going to start the timer in three, two, one.
A
Okay, that's 10 minutes. Does anyone need a couple of minutes extra? I see one of you is done already. Do you need a couple of minutes more?
A
Now we can go ahead with the discussion. Okay, sure. Okay, any questions to start off with?
A
Okay, so. Assume that you're using kubectl and you say: kubectl get pods.
A
What that is ultimately going to translate into is a GET request to the pods endpoint, and that's going to get all the Pod objects from etcd, through the API server, and return them back to the client. Now, one of the problems with this approach: let's say that you open a watch. The watch, if you're not aware, is this mechanism in Kubernetes which gives you an incremental notification feed.
A
So, for example, if something changes, you get a notification saying this object changed, and you can react to that change. etcd provides this watch feature, which Kubernetes leverages for the functionality. Now, let's say you have multiple clients opening up multiple watch connections.
A
This would mean that the API server would then end up opening multiple watch connections to etcd as well. The problem with this is that you can only have a certain number of watch connections at a particular point in time, beyond which things are going to start degrading for you. So, to try and sort of...
A
...make this more efficient, what folks then ended up doing was they made a cache in the API server itself, called the watch cache. The purpose of the watch cache: you open up one watch connection to etcd, which then fills all the events and objects that that watch connection gets into the watch cache, which is ultimately just like a map, and then all clients open up connections to the API server and get everything through the watch cache itself.
A
When I explain it verbally it probably does not make a lot of sense, but there is a really good talk on this that I will link for you to watch.
A
And there is also a design proposal. In case you don't know: before KEPs existed in Kubernetes, there were these things called design proposals, after which they introduced Kubernetes Enhancement Proposals to bring more structure to the whole process. So there.
A
Yeah, so chunking and pagination has been there for quite a while now.
A
In the documentation, you will see it under "Efficient detection of changes".
A
Yeah, so efficient detection of changes, right. Pagination and chunking right now are in beta, and one of the goals for making them GA is to sort of unify the resource version semantics. If you read what the resource version semantics are right now, it's a little unintuitive to understand. So, for example, if you specify a limit and a continue in your request, the request will get routed directly to etcd.
A
Yeah, yeah, that's one of the reasons, and now what we are working on is evolving the watch cache in a way that we can support pagination through the watch cache itself, and the moment we are able to achieve that, we can sort of unify the behavior of these resource versions and get rid of these disparities.
B
Yeah, because you wouldn't be penalizing the use cases where things are requested with pagination but they go through to the watch cache or something else, right? Exactly, yeah. That makes it clearer. Thank you.
A
And so, okay, if you look at the code, there is a function which is literally called shouldDelegateList. It decides whether we should delegate the incoming list call to etcd or not, and one of the checks there is: is it a list call with a limit parameter, or is it a list call with a continue parameter?
A
Yeah, so this is actually really interesting, because we sort of try to mimic how etcd does things internally, but in a minimal way in the API server itself. So, instead of an indexer, we make use of a B-tree and sort of traverse that to get results for us, plus the continuation token, which we generate. Right now, the continuation token has a deadline, which is about five minutes.
A
I think. And once the continuation token expires, you need to reissue a list, get a new continuation token, and use that. But the interesting thing with the watch cache is that it gives us this expiration sort of implicitly.
A
So, if you watch the talk I've linked: the implementation of the watch cache has, like, two layers to it. One layer is a ring buffer of sorts, through which you actually serve events, and then there is an actual underlying store which asynchronously populates this ring buffer.
A
...popped out. So what we need to do to see if a continuation is valid or not is basically just check whether the resource version that we want is still in that ring buffer. If it's not, then you basically say that the continuation has expired, and the client...
A
Exactly, so you reissue a new one, and then, by default, that piece of code that I've linked, the shouldDelegateList function, is going to detect that, okay, we don't have this continuation in the watch cache, so we're going to delegate this to etcd, and then etcd serves it again for you. Yeah, the difficult part, though, is maintaining behavioral compatibility. It's easier if you return an error early on, but that'll break a few clients, so what you have to do...
A
Is
you
need
to
return
an
error
later
on
and
then
that
error
is
consistent
with
what
the
client
would
do
if
you
had
returned
it
earlier,
so
like
you're,
not
breaking
anything
but
like
you,
you
might
have
to
issue
a
new
call,
which
is
like
a
trade-off
which
kubernetes
often
makes
them
realize.
A
But
yeah,
that's
I
mean
is
like
sorry.
My
explanation
wasn't
very
clear.
Talk
should
explain
how
things
are
done
in
a
better
manner.
I.
A
The
design
proposal-
maybe
that
also
provides
more
detail.
A
Comments
disc,
like
oh,
there
is
a
tracking
issue
as
well
with
a
bunch
of
things
that
aren't
planned
for
this.
I
can't
keep
track
of
my
dads
anymore.
A
Yeah,
so
if
you
want
to
follow
along,
so
this
is
an
issue
which
is
sort
of
cutting
across
api
machinery
and
scalability.
So,
interestingly,
this
will
also
improve
performance
in
more
than
one
way.
A
But
what
this
will
allow
you
to
do
is
the
copy
that
you
make
right
now
is
going
to
be
the
copy
of
a
tree
which
is
used
in
implementation
and
lucky
for
us,
the
b
tree
that
is
being
used
has
copy
and
write
semantics
right.
So.
A
That
you
make
isn't
going
to
be
all
that
expensive.
It's
just
going
to
be
a
bunch
of
references
that
are
just
going
to
be
yeah.
A
The time you spend under a lock isn't going to be significant.
A
This is actually, like, the second part of minimizing lock contention there. Initially, what we did was, instead of making everything and copying it under a lock, we sort of introduced two pointers, made it an interval pointing into the cache, and then the interval sort of moved forward asynchronously.
A
There's a lot of work being done to optimize this part, and it's giving really nice scalability outcomes for us. This is, like, the next step of it, which is really exciting, to me at least. So, if you are interested, reach out to API Machinery or Scalability, or just comment on the issue if there are things that you want to pick up; I'm sure people would love an extra set of hands.
B
We will surely at least follow it and, well, if I have anything of interest to add, I will surely do so. Thank you.