From YouTube: Kubernetes SIG API Machinery 20230614
Description
- [Stefan Schimanski, Mike Spreitzer] KEP-4050: Add generic control plane staging repository https://github.com/kubernetes/enhancements/pull/4052
- [jefftree] Lazy OpenAPI Aggregation
Lazy OpenAPI Aggregation and CRD Building
- [mo] is there a desire to have something like StorageVersionMigrator built into KCM?
A
Very good. So hello — good morning, good evening, good afternoon, depending on where you are. Welcome to Kubernetes API Machinery, the weekly meeting for the Kubernetes open source project. Today is June 14th, 2023, and we are getting started with a number of items on our agenda. So let's begin with Stefan, I think representing Mike too, hopefully.
B
I haven't heard from him, so here — if you can click there on the KEP.
B
So it's a KEP I think we have been talking about for at least half a year, maybe even longer — actually much longer — because this is about the options/config/server pattern. Most of you will know this pattern from lots of our repositories, like apiextensions-apiserver and kube-aggregator; all of those are built on that pattern. This is a KEP to extend that, basically to finish the work we started years ago in kube-apiserver. kube-apiserver is one of those components which is not really modular in how its plumbing is implemented, and the consequence is that using parts of kube-apiserver — for example, for generic control planes, so control plane usages which are not container orchestration related —
B
is hard. It's possible — people do it — but it needs lots of plumbing code, and this KEP is about finishing that work: splitting up how a kube-apiserver is instantiated in code. Everything here is about code, that's very important. It's not about building or delivering a generic API server as part of Kubernetes. That's not the goal — that's a non-goal.
B
Actually, it's written somewhere in between: this is about enabling and refactoring the libraries we are building for consumption, for use cases which exist and which people work around today — just to make it easier, and to make it possible for those parties to participate in the project. If you move down to the picture, maybe that's the best way to illustrate what this is about.
B
So this is basically the dependency structure — the dependency topology — of the main staging repositories: api, apimachinery and apis, then client-go and apiserver, and at the bottom k/k itself. So kube-apiserver and kube-controller-manager live in k/k; the commands and packages live there and depend on the staging repositories. But consuming those is just tricky. It's messy, and the suggestion here — the proposal — is to add one level in between, called generic control plane.
B
There's a discussion — a small bikeshedding discussion — we started further down, but basically generic control plane would be one, maybe two, of those repositories which sit in between, and they will include everything which is necessary for a generic control plane. A control plane is always APIs plus controllers.
B
That's the idea: things like garbage collection, namespace deletion, maybe the CRD extension apiserver — those things belong to a control plane and are generic, because there's nothing Kubernetes-specific inside. There are no pods, no nodes — all those things stay in k/k — but we want to move those generic pieces into generic control plane.
B
As I said, we don't want to deliver a binary. There's no one-size-fits-all binary, like a generic control plane binary, that we want to ship — there's no one size which fits everybody. It's a little bit like embedded systems, where you also have to adapt things, and they differ here and there: some people want RBAC authorization, some people don't; some people want Secrets, some people don't. So those things should be configurable in this library, but in a very simple way — using options.
B
If you don't want Secrets, you just change a Boolean and it's gone. That's basically the goal of simplicity here. And if you move to the goals again — maybe to call out the first line here: this will change plumbing. We will change the options, for example, split them up, and do similar things. One goal is that this should not change the overall structure of kube-apiserver. I mean, the plumbing will change, but the PRs should be pretty obvious, in that they just move stuff around.
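The options/config/server pattern and the "flip a Boolean" idea can be sketched roughly like this in Go. All type and field names here are hypothetical illustrations, not the actual library API:

```go
package main

import "fmt"

// Options is the flat, user-facing configuration surface.
// Hypothetical fields illustrating the options/config/server pattern.
type Options struct {
	EnableSecrets bool // flip a single Boolean to drop an API group
	EnableRBAC    bool
}

// Config is the completed, validated wiring derived from Options.
type Config struct {
	APIGroups []string
}

// Complete turns Options into a Config (defaulting, validation, wiring).
func (o Options) Complete() Config {
	groups := []string{"namespaces", "garbage-collection"}
	if o.EnableSecrets {
		groups = append(groups, "secrets")
	}
	if o.EnableRBAC {
		groups = append(groups, "rbac")
	}
	return Config{APIGroups: groups}
}

// Server is the running instance built from a completed Config.
type Server struct {
	Groups []string
}

// New instantiates the server from the Config.
func (c Config) New() Server {
	return Server{Groups: c.APIGroups}
}

func main() {
	// A consumer who does not want Secrets just changes the Boolean.
	srv := Options{EnableSecrets: false, EnableRBAC: true}.Complete().New()
	fmt.Println(srv.Groups) // prints [namespaces garbage-collection rbac]
}
```

The point of the two-stage Options→Config→Server flow is that composition stays simple: each consumer fills in one flat struct instead of wiring plumbing by hand.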
B
So it should be obvious, and we have done first steps already — splitting up what is in kube-apiserver into a generic part and a non-generic part is possible. But it should be obvious that the result is basically the same, so we don't touch any behavior of kube-apiserver, and the same thing for controller-manager as well.
B
What else is important? I think most things I've talked about already. Maybe the last goal — the last line above the non-goals.
B
The refactoring should not be prescriptive about the binary topology. kube-apiserver obviously is just the API server, controller-manager is its own binary; we had hyperkube in the past and we didn't like it. But this library should just allow composition. So if somebody wants to compose an API server and controller manager into one, like some projects do — like MicroShift or k3s — we don't want to encourage it, we don't want to discourage it; it would just be agnostic to those things. The options/config/server pattern will just allow that.
B
As I said, we don't want to deliver something one-size-fits-all; that's not the goal. There's zero change in behavior of the existing binaries towards the user. We want to deliver a sample — maybe that's the most we do — a sample control plane that's just proving that composition works, that the plumbing works, and that shows people how to use it, but it's not for production use. So it's really just one configuration. And the last thing here in the non-goals: we don't want to make a staging repository for kube-apiserver.
B
I mean, maybe that's another step, but this KEP doesn't care about that. It's not about steps to get rid of k/k — that's not the goal.
D
So I think this is a really good idea. I've thought for a while that the basic API server is insufficient for what a lot of people actually want to do — they want to build KRM control planes for non-traditional Kubernetes uses. So this makes a lot of sense to me. I think this is a good split.
D
And we also — I think we did talk about this briefly, maybe two months ago, and I think the general consensus was that we did want to move forward with a KEP on this. So we've already kind of pre-agreed to the idea, and I think this implementation is pretty reasonable. I like that we're getting the CRDs, the namespace and garbage collection controllers, a bunch of stuff, all in one place.
D
I don't want to belabor it too much, but I do want to make sure that we don't have too much fan-in and fan-out on one component. But I think logically this is the model I want, so I'm supportive, yeah.
B
And maybe a word on the approach: we have the controlplane package and the kube-apiserver package, so we will try to move and split there in the beginning, until we have a structure which is separated enough, and then in one big step we just move the stuff into the staging repository — to avoid noise, basically.
B
Sounds good. Maybe a last word about the timeline: the work has started, so we are already splitting up stuff; nothing is published yet.
B
There's no need for pressure — it's finished when it's finished, I think. So there won't be a complete split for 1.28, probably 1.29. I'm not sure alpha/beta makes any sense here; functionally nothing changes.
D
Yeah, I think I agree that there's no feature gating — we're just done when we're done. And I don't think there's a way to say that a staging repo is beta while our GA features depend on it. So that model probably doesn't make sense.
A
Very good. Okay, so moving on — Jeffrey, it's your turn.
E
Cool. So today I'd like to talk about lazy OpenAPI aggregation and CRD building. To provide a bit more context: a while ago we introduced this idea of lazy marshalling to the OpenAPI, which drastically improved the performance of kube-apiserver's OpenAPI serving on cluster installations, especially for clusters with many CRDs. Since then we've refactored this lazy mechanism into a more generic caching library — thanks, Antoine — and this allows for reuse across different components in Kubernetes.
E
What that looks like more concretely — here's the before diagram. We have a controller that runs and checks both local and aggregated API services on a set interval and aggregates the OpenAPI there, and basically all of that OpenAPI is stored in memory. When a user fetches the OpenAPI, the only processing done lazily is the marshalling step — marshalling is done on demand; everything else is eager.
E
What we're proposing, in the after diagram, is that the entire process of building the OpenAPI, aggregating, and marshalling is all done lazily, such that all these computations are performed when a request to the OpenAPI endpoint is sent, instead of whenever, let's say, a CRD is applied, etc.
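The before/after distinction boils down to deferring the expensive build until request time and caching the result. A minimal sketch of that lazy-caching idea (hypothetical names, not the actual kube-openapi caching library API):

```go
package main

import (
	"fmt"
	"sync"
)

// Lazy caches the result of an expensive computation and recomputes it
// only when a request arrives after the inputs were invalidated.
// Hypothetical sketch of the lazy-aggregation idea.
type Lazy struct {
	mu      sync.Mutex
	stale   bool
	cached  []byte
	compute func() []byte
	builds  int // how many times compute ran (for illustration)
}

func NewLazy(compute func() []byte) *Lazy {
	return &Lazy{stale: true, compute: compute}
}

// Invalidate marks the cache stale, e.g. when a CRD is applied.
// No work happens here - that is the point.
func (l *Lazy) Invalidate() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.stale = true
}

// Get builds (and caches) the value only on demand.
func (l *Lazy) Get() []byte {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.stale {
		l.cached = l.compute()
		l.builds++
		l.stale = false
	}
	return l.cached
}

func main() {
	spec := NewLazy(func() []byte { return []byte(`{"openapi":"3.0"}`) })
	spec.Invalidate() // CRD applied: cheap
	spec.Invalidate() // another CRD applied: still cheap
	first := spec.Get()  // first request pays the build cost
	second := spec.Get() // served from cache
	fmt.Println(string(first) == string(second), spec.builds) // prints true 1
}
```

Applying many CRDs then only marks the cache stale, which is why the memory spike at install time flattens while the first request gets slower.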
E
Now, we note that for aggregated API services, since we don't really have much information about their availability or latency, we don't want that to be factored into our SLOs. So we'll continue with the current approach of having this controller run on a 60-second interval to fetch the aggregated API services.
E
Okay, going down to the results — going down a bit, yeah, I'm there. So I have a work-in-progress branch for this proposal, and I'm just sharing some numbers after installing a set of 200 CRDs, obtained from the KCC repo. In-use memory decreased by approximately 36 percent and memory allocations decreased by about 23 percent, and this is because it's lazy.
E
This memory measurement was taken after the CRDs were applied but before a request to the OpenAPI was sent. Obviously, after the request to the OpenAPI is sent, the memory is not that much different, because the operations are still the same — we're just trying to make everything more lazy.
E
So one drawback of this approach is that the initial request to the OpenAPI, because we're making everything more lazy, is going to get slower — in this case it increased by about 300 percent, from around 0.25 seconds to one second. And this is only on the initial request to the OpenAPI, because we do things lazily and we cache things.
E
Subsequent requests to the OpenAPI are still extremely fast, at less than 10 milliseconds. The main goal of this is to smooth out the memory spike from installing many CRDs at once. At least on the test cluster I don't really have any situations where the API server is starved for resources, but in things like GKE etc. there are a lot more instances where the memory spike that occurs from installing many CRDs can cause an out-of-memory and an API server restart.
E
Right now we're trying to trade a hit to the speed of the first OpenAPI request for better performance during this memory-spike phase when installing many CRDs. I have a bunch more notes afterwards, but I think I'll just stop here for now and see if there's any feedback, comments, etc.
E
Okay, so the main thing I wanted to get out of this meeting was whether it's okay to move forward with this approach — this idea of trading a performance hit on the first request of the OpenAPI for improved memory performance on installation of CRDs.
B
Stefan here — just a question: do we have any metrics on how many v2 requests actually exist nowadays, like with the clients which are out there or something? My gut feeling would be it's just not used and those 300 percent don't matter — I mean, it's worse than before in numbers, but we should go forward.
E
But yeah. After a couple more comments — thanks, everyone.
D
Sorry, I was muted there. I think it's probably okay. I'm a little curious if there's anything easy we could do to kind of pre-warm that system for the first request, but I don't know — it's not that slow; clients shouldn't time out, and if they do they'd be retrying, and then they're good. So I don't know, I don't feel super strongly about this.
E
So this would only be the case if there are changes. I actually made a note in the implementation section: if nothing has changed, we use ETags for everything. So we basically compare the ETag — we have a cached version and we serve directly; we don't need to go through that entire aggregation computation process.
F
I guess I'm curious whether it's been explored to have rate limiting around this computation — basically, instead of trading first-request latency, we could possibly trade time to establish. Or maybe that doesn't work, because then if the schema changes you've already established — so maybe I withdraw that.
E
I'm not sure I fully understand, but if you want to make a comment we can discuss it further in the doc.
A
Thank you, Jeffrey. Okay, so next one — I think it's mo.
C
So I wanted to bring this up because it kind of came back to my mind with the unknown version interoperability proxy thing that's, you know, in flight. We're starting to mature the storage version API, and inevitably it will have to graduate for the other things that depend on it to meaningfully make any progress.
D
I think it's needed. We're still at this point where you can have arbitrarily old objects in the system; right now there's no way to guarantee there's not some cluster with the oldest-ever-introduced version of anything in storage, and so we keep all of the alpha APIs around for storage internally in the API server. So it would be pretty amazing if we could actually start making real progress on this.
C
So to me, an MVP for something like this would be the ability to guarantee that, maybe per release, this happens — like a minimum — and maybe the ability to let the cluster admin just force it if they want to make it happen again. Those are actually the two things I had in mind offhand.
G
Yeah, I was actually thinking it might be nice if we formally introduced the concept of mixed-version-state awareness into the API server, so that we can detect version boundaries, and then we could possibly kick this off automatically.
C
So you're saying that instead of it being just on some kind of timer, effectively, that's based on things moving forward, there would be more like schema awareness per resource, so that the skew would be more obvious, and thus you could kind of run it.
G
Well, I was actually thinking that maybe in the process of graduating API server identity, we introduce versions, and then we could watch those objects to determine when we enter a mixed-version state, and then we could kick off storage version migration.
D
That's actually pretty hard to do, because you would have to have some kind of guarantee that you'd actually finished some migration before doing an upgrade to a particular version. That would be a pretty amazing long-term goal — if we could get something in place where we actually get a guarantee that some old version of storage is completely gone and we've moved on.
D
Yeah, I'm in no hurry to actually get rid of the storage version support in the API server yet, but still, getting something in place that we believe is actually forcing things forward and making it part of the upgrade process — like, you're not really upgraded to 1.30 until you've finished the storage migrations, and we don't really want you upgrading past that until that's done. Getting that working, and getting that working with skip upgrades and LTS, sounds hard. But that would be an actual win.
D
All right, short of that, I mean, it's nice to run it, and I think it is good to start moving it into the system. It would still be a win if it was just a default thing that everybody was running, because you would get performance benefits from not having to do this much conversion and all that kind of stuff. Stefan, do you remember — are we switching the CRD storage version now? I can't remember. No? We didn't.
C
Yeah, so the motivating factor for me is that I want to be able to do storage migration for key rotation for encryption at rest, and I don't want to tell people to run SVM, and I don't want to tell them to use etcdctl — it's incredibly manual. I'd be okay telling them: issue this one kubectl command that tells the controller to go
C
do all that for you, and then just check the status later, because then I can basically completely automate that and make a UI around it where people just press a button or something. So that's the angle I'm looking at it from. For me, it can't just be purely automated by schema changes, I guess, or API version changes, because key rotation can happen arbitrarily, internally, right?
C
So I do need a mechanism to at least just do it one-off.
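The "issue one command, check the status later" flow described here could look roughly like this — purely hypothetical shapes sketching the workflow, not the real StorageVersionMigration API:

```go
package main

import "fmt"

// Migration is a hypothetical record of a one-off migration request,
// whose status a user can poll later.
type Migration struct {
	Resource string
	Done     bool
}

// Controller stands in for an in-tree (e.g. KCM-hosted) migration
// controller that reconciles requested migrations.
type Controller struct {
	migrations []*Migration
}

// Trigger records a one-off migration request, as a single CLI
// command would - e.g. after rotating an encryption-at-rest key.
func (c *Controller) Trigger(resource string) *Migration {
	m := &Migration{Resource: resource}
	c.migrations = append(c.migrations, m)
	return m
}

// Reconcile is where the controller would rewrite every stored object
// of the resource; here it just marks the request complete.
func (c *Controller) Reconcile() {
	for _, m := range c.migrations {
		m.Done = true
	}
}

func main() {
	ctrl := &Controller{}
	m := ctrl.Trigger("secrets") // one command, no manual steps
	ctrl.Reconcile()             // controller does the work
	fmt.Println(m.Done)          // prints true - status checked later
}
```

The design point is that the user's only interaction is the trigger and the status check; everything in between is automatable, which is what makes a button-press UI possible.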
C
Okay, yeah. I guess let me ask this question I asked earlier: what do we feel is the bare-minimum MVP? Because I went through the list of the old efforts we had, and there's like five of them, right? So I'm really nervous about trying to solve all of the problems at once. I would much rather just move the current state forward and then have another KEP after that to — sure, yeah.
D
Forcing progress is a fair thing to do here. I feel like if you could run a storage migration and know that it was done — even if you had to manually kick it off, but it was part of the KCM and you could manage it in the same way — that seems like pretty good progress, right? Even if you still have to tell somebody: somehow you have to set up the migration you're running, but you get to know when it's done.
C
That's amazing. I forget exactly the state of all the APIs in SVM, but part of this, for me, to make this more viable, is actually moving it in-tree. Like, yes, the component exists, but if you moved RBAC out of tree, I don't think people would be particularly happy with you, because they think of it as a core part of the system, even though it's completely optional.
C
When you guys originally built this out, how happy were you with the REST APIs associated with it? They never made it to beta officially, so I'm kind of curious what prevented that.
C
Okay, so I guess we could technically move those things in-tree as alpha, basically as-is, and then bikeshed away on a KEP about whether we're happy with them before any promotion. Basically, we can pretend SVM didn't happen and just start it from scratch, but go through all the standard process and let everyone make sure it's all perfect, no? I think so.
C
Okay, I think I'll write down some notes here, but I think that's good for me. Okay, thanks.
A
Can it wait? When's the next meeting — two weeks?
A
As for the account problem — I had no idea; I was checking my email and there was no warning, so I will figure it out for next time for sure. That's okay! Well, thank you everybody for coming and participating. I will upload the meeting recording later today, and we'll see you in two weeks. Sounds good. Thank you.