From YouTube: Kubernetes SIG API Machinery 20211201
A
Welcome, everybody, to the SIG API Machinery meeting from Kubernetes. Today is December 1st, the last month of the year 2021, and we have a couple of topics on our agenda to discuss. The first one is one that we didn't make it to last time, so we made sure today we have the people to cover it, and then we will continue in order. So without further ado, let's begin with the first one. Stefan, do you want to take it?
C
Or, like services, you can have.
C
It would be really helpful, I think, to place that, because only someone who knows what goes into the resource name, and how it is related to the name of the type itself, is ever going to understand that based on your description here. And there's like ten of us.
D
Right, so let me just make sure I caught that correctly, because I didn't understand the issue you're telling me about. The problem here is that for a ResourceQuota object, its name is limited to 63 characters, but its name also has to be a specific function, which is a concatenation of some other object's name plus some fixed stuff, and...
C
No, no. The name of the ResourceQuota object itself is immaterial, but there is a field on resource quotas that is called the resource name, and a resource name represents the thing that you're counting and goes inside of a resource list.
C
One of the things you can represent as a resource name is: I want to count how many pods, or I want to count how many Foos. And if you specify a type where my really, really long CRD type name is greater than the maximum length allowed for a resource name, I could imagine how you would have trouble.
B
Yes, David, I sent you a Bugzilla; there's no real example. I had one in Slack, but I don't see it. This one is artificial, but it shows the error message. So yes: this ResourceQuota "test" is invalid, and then you have this combination of resource name and group name concatenated, and that must be below 63.
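The length constraint being discussed can be sketched roughly as follows. This is an illustrative Python sketch, not the actual Kubernetes validation code; the helper names and the exact key shape (`count/<resource>.<group>`, as used for object-count quotas) are assumptions for the example.

```python
# Illustrative sketch: object-count quota keys concatenate the resource
# plural and the API group, and the discussion above says the result must
# stay within 63 characters. Not the real apiserver validation code.

MAX_RESOURCE_NAME_LEN = 63  # limit mentioned in the discussion

def quota_key(resource: str, group: str) -> str:
    """Build an object-count quota key for a custom resource (assumed form)."""
    return f"count/{resource}.{group}"

def validate_quota_key(key: str) -> bool:
    """Return True if the key fits within the length limit."""
    return len(key) <= MAX_RESOURCE_NAME_LEN

# A short CRD name fits comfortably:
ok = quota_key("widgets", "example.com")
# A long CRD plural plus a long group blows past the limit:
too_long = quota_key(
    "verylongcustomresourcepluralnames",
    "some.transformed.cloud.api.with.long.names.example.com",
)
print(validate_quota_key(ok))        # True
print(validate_quota_key(too_long))  # False
```

This is the failure mode described for CRDs generated from cloud APIs: the type name itself is valid, but the concatenated quota key is not.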
B
Group, yeah. So, and this, I mean, you can imagine: 63 bytes are not many.
C
But it would also be a small change to show what would end up changing, and then I would suggest waiting until next year, so we have more people come in and think about what else would break. I don't see anything offhand, because it's an esoteric field.
B
Yeah, we have a Bugzilla here, and it's a real problem. I think it wasn't Knative; it was some transformation of cloud APIs, which have long names.
D
Right, yeah. This has come up before, and it kind of came up tangentially at the previous meeting, and there was some encouragement to follow up on it. So let's just do this, right? We have this idea that the API server can paginate responses, but we've never gotten around to implementing the pagination for responses that are coming from the watch cache, and we do have some problems with large responses. You know, in our work on priority and fairness, it's really just a matter of server self-protection.
D
I had a thought that it seems to me logical that the etcd cluster would have a similar problem. Someone started to say "nah," but we ran out of time, so that's where we ended on it.
D
So let me just understand that part there. What you're saying with "consistent", so in a particular...
D
Well, let me be clear; let me make sure I understand clearly. So I do understand the idea that different API servers will, you know, reflect different state in their watch caches.
D
Is it fair to say that for a given resource, its watch cache is consistent with some resource version at a given time?
B
Yeah, I mean, for this problem there are ways around that. An easy way, which is proposed already: you can ask etcd for the latest resource version before starting your watch, and then you wait until the watch cache has caught up, which might take milliseconds or maybe a second if nothing is wrong, and then you are guaranteed that the cache is fresh. This could be done, so we could move informers to this model.
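The "ask etcd for the latest revision, then wait for the cache" idea can be sketched like this. This is a toy model with invented names, not the apiserver's implementation; in the real system the "observe" step is the watch cache applying watch events from etcd.

```python
# Rough sketch of the technique described above: learn the latest revision
# from the authoritative store, then block until the local watch cache has
# caught up to it before serving. All names here are hypothetical.
import time

class WatchCache:
    """Toy cache that lags behind the authoritative store."""
    def __init__(self):
        self.resource_version = 0

    def observe(self, rv: int):
        # In the real system this happens as watch events arrive from etcd.
        self.resource_version = max(self.resource_version, rv)

def wait_until_fresh(cache: WatchCache, target_rv: int,
                     timeout_s: float = 1.0, poll_s: float = 0.001) -> bool:
    """Block until the cache has seen target_rv, or give up after timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if cache.resource_version >= target_rv:
            return True  # cache is at least as fresh as etcd was
        time.sleep(poll_s)
    return False

cache = WatchCache()
latest_from_etcd = 42   # pretend we asked etcd for its latest revision
cache.observe(42)       # the cache catches up (normally within milliseconds)
print(wait_until_fresh(cache, latest_from_etcd))  # True
```

Once `wait_until_fresh` returns true, a read served from the cache carries the same freshness guarantee as a quorum read at that revision, which is the point made in the discussion.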
B
There's still a fundamental issue with that, and paging doesn't change anything there: if you page, the client doesn't know object sizes. So it's very easy, if you for example have big secrets, two-megabyte secrets, it's super easy to kill an API server with a few requests, because each of them gets the data, puts it into memory, and...
D
Right, I was reviewing that code recently. Yes: if the request specifies pagination, it does not hit the watch cache.
G
I did a pass on the resource version semantics a while ago, and as part of that I documented it. I'm going to just paste that into chat, because it can be helpful in these discussions. This was actually part of working on a very closely related problem, but we actually did make some changes to it, and I think we did get it to the point where, when you say resourceVersion equals zero, you're saying any resource version newer than that is fine; and if you don't provide a resource version...
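The semantics being described can be summarized in a small sketch. The strategy names below are illustrative labels, not real apiserver symbols; the authoritative table is in the Kubernetes API concepts documentation on resourceVersion semantics.

```python
# Sketch of the list resourceVersion semantics described above:
# unset => consistent (quorum) read; "0" => any version is acceptable,
# so the watch cache may serve whatever it currently has; an exact
# version => data no older than that version. Labels are illustrative.
from typing import Optional

def serving_strategy(resource_version: Optional[str]) -> str:
    if resource_version is None:
        # Unset: the client asked for a consistent read.
        return "consistent-read-from-etcd"
    if resource_version == "0":
        # "0": any resource version at least that new is fine.
        return "any-version-from-watch-cache"
    # An exact version: serve data not older than the requested version.
    return "not-older-than-requested-version"

print(serving_strategy(None))  # consistent-read-from-etcd
print(serving_strategy("0"))   # any-version-from-watch-cache
print(serving_strategy("42"))  # not-older-than-requested-version
```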
B
The infrastructure is there. We even have that for legacy reasons: there's actually a way, when you do a watch... the detail was, exactly, I think, when you say "start from zero," you get a set, for legacy reasons, but it's not used anywhere. But we have that. Init events, if you know the code, that's a keyword there, and this gives you basically the same without any pagination implementation.
D
Right, and in general I like that for a lot of reasons, but I will say one thing that it leaves unsatisfying to me: in the course of the priority and fairness work we are, you know, taking on management of long list and watch requests. The thing about the watches is, you know, the bigger that initial burst...
B
That's a great requirement. We haven't thought about that, but that's a good point, so you have to sync this with Wojtek and the scalability folks, and probably see how this would fit P&F. I see the point.
G
Yeah, that's a good point. There's one thought I had, Mike, which was: right now, sometimes we skip the watch cache. You know, we don't always hit the watch cache, even though it's enabled; if you ask for something that you want consistency for, we have to hit etcd.
G
There are ways where we can actually check with etcd what the latest thing is and then go to the watch cache. I can't remember if we turned any of that on, but a naive way to turn this on would be like...
C
I did provide a code ref. It looks like, if resourceVersion equals zero, you skip the watch cache; it intentionally delegates (a code ref from the chat). Okay. So if you think it works differently, it might be worth building that test, because that code is consistent with the failure that we saw in OLM.
B
Okay, but wait, David: aren't there limits? How does this play into this situation?
C
So if you're expecting it, like, if you thought that in a later version OLM would be fixed, that's going to be a thing to... maybe I'm reading that code incorrectly, but that's going to be a thing you want to check.
C
It'll be buried inside the reflector. I don't need to derail the whole thing; it's just so that we understand where we are when we're making statements about "we can just do this." I want to make sure we are consistent in what we think this is.
G
I'm pretty sure the reflectors set it to zero, because we at one point were messing around to see if we could make that a strongly consistent thing, and at the time, for performance reasons, we couldn't switch it away. We had to hit the cache; otherwise it was causing serious performance problems.
G
Yeah, this is the one that I had worked on tackling before. I think there is a next step that could make progress on this problem, which, I realize, is separate from what Mike's asking for specifically. Even if something's in a cache, and you say you want the latest from etcd, and you don't know what it is, but you still want to serve it from the watch cache if you can, what you could do is basically ask etcd: hey,
G
what's your latest revision? And this, I think you mentioned this at the beginning, Stefan. And then what you need to do is wait briefly for your watch cache to catch up, and then only serve it once you've got that revision, and then you get the same guarantee as though you had served it directly from etcd, with its consistency guarantees. There is, I could find it maybe, a proposed way of doing that.
G
Or something like that, yeah. I remember we proposed a couple, and somebody had found a really elegant way to do it that was going to be really fast. So we do know how to do that, but I don't know if it ever got implemented.
G
If we did that, then we can serve consistent reads from the watch cache, which I think is part of the puzzle here. And then, if we had that working, then presumably we could stop setting zero there in the reflector, and you could actually say "I want consistent data," and that prevents any time travel.
G
But both of those will miss the watch cache, I think, because... I think it's possible to serve them from the watch cache, but we never implemented that, I think, for those rules, because they weren't common cases; we just skipped the watch cache. Related to what Mike was asking, another thing, and this is a little unrelated: on the back end, when we go to etcd, we do paginate at 10k sizes, even if you're asking for the whole thing.
D
So you're saying the storage layer in the API server paginates its traffic with etcd, but just for the purpose of accumulating the whole response in the API server? Yeah, that's right. Okay. I'm also looking to get at that code that was quoted from cacher.go. Damn, why do I keep losing the right window here? Reflector.go, I'm sorry. Right: lines 570-572 were quoted, but that's under the condition that the lastSyncResourceVersion is empty, right?
G
Mike, I'd be willing to spend a session with you, and maybe get Wojtek in, and see if we can list out the current state of affairs a little more crisply and try to surface any of the problems that kept us from doing this before.
D
So it's Joe and polynomial and Wojtek and me, yeah.
A
Okay, very good. I'm happy to help organize across calendars if you guys can't figure it out, but I hope you can. Very good. So I will try to capture this, and also put the links that were shared here in the agenda, for posterity. Should we move to the last topic?
A
We're good, okay. Mike, the last topic is also yours.
D
Right. Well, again, it kind of came up earlier and there were some positive echoes, so I thought I'd pursue it. This is part of a very broad theme: we like the API machinery for building systems, and in order to use it, anybody designing with it needs to understand what it's going to cost, right? We have a little bit of effort at SLOs.
D
It seems to me it would make sense to have a maintained characterization of the costs of running this stuff, right? And I'm not quite sure exactly what that would be, because the stuff you run it on can be so different, but maybe some kind of a characterization.
D
For example, you know: when we run it on this kind of machine, this kind of request rate for these kinds of requests costs this much CPU. Actually, memory is a really important one, and that's a little bit more manageable. It's still squishy and complicated, because at the end of the day it's up to the kernel how much memory a process really has resident, which is what counts. But you know, still, we can say things like, okay:
D
this is how much the watch cache is going to hold on to, you know, this is the stuff. It seems to me it would make sense to have maintained CPU, maybe networking, and certainly memory characterizations of the costs of the major control plane components; the API server and the informers seem like obvious candidates here.
D
I have trouble actually understanding what any of that actually refers to specifically, but I think it's pretty clear that these are results of big tests, big runs that do a lot of stuff. So it's not broken down and analyzed into pieces that the designer could use.
D
So yeah, I'm not entirely clear either, but let me give you some kind of a flavor of an idea. So, for example, for CPU, we could say: hey, okay, we're going to run this on a certain kind of machine, right? Pick an AWS or Google or whatever instance type you like, and say, okay, we're going to look at the CPU usage of the API server
D
when we do, say, a hundred QPS of, you know, fetching a ConfigMap, or updating a ConfigMap, or something, and just ask: what does it cost to sustain this workload? We can do reads of a lot of different things; we can do kind of trivial attribute updates of pretty much anything. So we can get at just what it costs to serve.
D
Similarly for network: again, easy enough to measure that. So CPU and network are easy to measure and pretty easy to attribute. Memory, again, is more difficult, because what you really care about is the resident memory, so you're...
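The kind of measurement being proposed could start as small as the following sketch: drive an operation many times and report CPU time per operation. The workload function here is a stand-in that I invented; in the real experiment it would be, say, 100 QPS of ConfigMap GETs against an API server on a fixed instance type.

```python
# Minimal sketch of the proposed cost characterization: run a workload
# repeatedly and report CPU seconds per operation. The workload below is a
# hypothetical stand-in, not a real apiserver request.
import time

def measure_cpu_per_op(workload, n_ops: int) -> float:
    """Run workload() n_ops times and return CPU seconds per operation."""
    start = time.process_time()  # process CPU time, not wall-clock
    for _ in range(n_ops):
        workload()
    return (time.process_time() - start) / n_ops

def fake_configmap_get():
    # Stand-in for an HTTP GET of a ConfigMap.
    return {"data": {"k": "v"}}

cost = measure_cpu_per_op(fake_configmap_get, 10_000)
print(cost >= 0.0)  # True: a per-op CPU figure you can compare across setups
```

Network bytes per request can be attributed the same way; resident memory is the hard case, for the kernel-accounting reasons the speaker gives.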
D
So I think, for the kinds of things I'm talking about, my suspicion is we don't really have a very clean test scenario. The test scenarios we have are pretty high-level, complex scenarios, so it's a little difficult to break them down into elements. Although I suppose you could look at those and get some aggregates, so you could say, for example: okay, during this scenario, we...
D
You could say that there's a certain... Then the next thing I'm going to want to know is: what's the background load? There's some amount of background maintenance, so I'd want to figure that out, then subtract the background from the test activity, and you start to get some analysis of the components of the cost.
D
I mean, I'd like to know things like: you submit an update through one API server, and then there are notifications that come back to all of them. As a designer, I'd like to have a handle on the costs of handling those update notifications.
D
Again, I don't have an exact, totally crisp ask here, but you get the idea of what I'm talking about.
D
I'm not going to disagree with your main point; just let me make sure I understand the first part of that "if" statement of yours. You said... are you asking about taking what we already have, or not? I'm not quite sure I follow that part of what you're saying. What do you mean by "take what we already have"?
C
I'm trying to figure out if there is some feature in the API server that is missing to allow you to... imagine you want to go and do this just in your lab at IBM, right? What I'm looking at is: is there something the API server needs to add to make that possible? Because that would be a thing that, if you identified it and then built it, as long as it's not invasive, that's probably something I think we can get behind, right?
D
Thank you, okay. Now I understand that point, yeah.
D
Yeah, I think what I'm looking for is stuff that can be measured from the outside. In fact, I would go a little bit further and just categorically say: I definitely want to be talking about stuff you'd measure from the outside.
D
I don't think we need to add anything to the server, and I'm perfectly fine with the answer that this falls under SIG Scalability. I mean, it's not exactly a scaling question, I think, but I guess practically, yeah, you don't really care unless you're going to push it, so yeah.
C
So I can't speak for how similar this is to the other tests. It sounds like you looked, and maybe they aren't doing a test of this nature yet, but I wouldn't expect API Machinery to pick that up as a side project. Unless, you know, you wanted to try to work on a joint project and you wanted to push on this; I'm obviously interested in the answer.
C
Just not interested enough in the answer to, you know, personally drive it, and I'm not sure it falls under the SIG to try to build that mechanism, because there are considerations other than just "can we build the thing." It's: can we build it? Do we have the maintainers over time? Do we have the funding to run it? And that's the sort of thing that SIG Scalability has already figured out.
C
Yeah, okay. And if there is some piece of information missing that we need to add to the API server, then bringing it back here and discussing that is valuable and makes sense.
A
Yes, okay. So thank you, everybody, for joining. I hope you have a wonderful rest of your Wednesday, wherever you are, and we'll see you in two weeks.