From YouTube: Kubernetes SIG API Machinery 20180425
For more information on this public meeting see this page: https://github.com/kubernetes/community/tree/master/sig-api-machinery
B: So we added paging to the API a few releases back. We have some experience with it now, and it seems to work pretty well. What we'd like to do is gather the requirements people have for taking it from the beta level to GA. We've seen improvement when you do large lists, and mutation and streaming through the list work very well.
B: One of the things that has come up as a potential requirement is exposing some of the paging options out through the client. (Do you mean out of the Go client interface or out of the API interface? Out of the API interface.) It's something that we're going to look at and consider, but we'd like to hear if other people have more requirements before we try to move.
B: Well, I don't know what kube sets it to, but the default etcd compaction window is five minutes, right, Jordan? Anyway, you can set a certain number of minutes, and within that time frame you can have a sort of cursor saying "I want the list as it appeared at this particular point," and then I'm going to page through to get some number of results. We exposed a way to do that through the API with a continue token, but it was very minimal.
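The limit/continue flow described above can be sketched as a toy simulation. The `PagedStore` class and the token encoding here are illustrative inventions, not the real API machinery: pages are served from a snapshot frozen at the revision where paging began, and the continue token carries the cursor.

```python
import base64
import json

class PagedStore:
    """Toy model of a list API with limit/continue paging.

    Pages are served from a snapshot taken when paging starts, mimicking
    a cursor that sees the list "as it appeared at this particular point".
    """

    def __init__(self):
        self.revision = 0
        self.items = []        # live data, mutated by writers
        self.snapshots = {}    # revision -> frozen copy used for paging

    def add(self, item):
        self.items.append(item)
        self.revision += 1

    def list(self, limit, continue_token=None):
        if continue_token is None:
            rev = self.revision
            self.snapshots[rev] = list(self.items)  # freeze a snapshot
            start = 0
        else:
            state = json.loads(base64.b64decode(continue_token))
            rev, start = state["rev"], state["start"]
        snapshot = self.snapshots[rev]
        page = snapshot[start:start + limit]
        next_start = start + limit
        if next_start < len(snapshot):
            token = base64.b64encode(
                json.dumps({"rev": rev, "start": next_start}).encode()
            ).decode()
        else:
            token = None  # no more pages in this snapshot
        return page, token

def list_all(store, limit):
    """Client loop: keep paging until the server stops returning a token."""
    out, token = [], None
    while True:
        page, token = store.list(limit, token)
        out.extend(page)
        if token is None:
            return out
```

Note that a mutation made after paging starts is not visible to later pages of the same list, which is the consistency property the cursor provides.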
A: So those paths are not thoroughly tested? Probably not, and we are planning on testing extensively. Let me back up a minute. We need some way to get schema information to the code that does the actual apply logic. I don't have a super strong opinion that we should use OpenAPI for that purpose, but it's the thing that is in our code base, and it's the most obvious way to do it: thread OpenAPI through. Previously there's been some hesitancy about embracing more use of OpenAPI.
A: To that I would say we only care about the data model for this purpose; we don't have any use for the operations here. And one compelling reason to use the data model from the API is that we have put it in CRDs to use as, what do we call it, the validation tag or the JSON Schema part of it. Yeah, the JSON Schema part of it.
B: You know, hang on, I'd be careful about characterizing it that way. It's not narrowly scoped, right? The current approach, at least what I remember seeing, is taking an OpenAPI doc generated for every API group in a single file, pulling in that dependency, and adding it to code that deals with internal versions, with a hope of using it somewhere.
B: So you have an externally versioned thing that is dealing with very broad sets of objects, unrelated to the ones that you're currently looking at, and you're plumbing it into a place that doesn't have the external version to deal with. While I wouldn't object to the idea (the purpose you had mentioned was validation via OpenAPI, and that doesn't seem like a bad purpose), the implementation is not a straightforward way of managing it.
A: There were comments in the PR, and I don't think we were suggesting that we take it as-is without addressing the comments. That's a little more detail than I was hoping to go into. The reason I put it on this agenda is that I don't want this to surprise people when we go to reintegrate the feature branch. I can't put a link.
C: Makes sense. And external, or per-version, validation has actually been a goal. Some of what we have today doesn't make a lot of sense, but if we want to make OpenAPI participate in validation, I think we should switch validation to operate on the external versions. Yeah.
C: With external validation it's also easier to give error messages that make sense. It is also easier to miss fixing bugs in validation and let invalid data slip through in one version but not another. So that's a downside that we'd want to figure out how to put tests around, or do automated tests or fuzz testing or something to flush that out. Yeah.
A: One thing David said that I don't think I agree with is that the request scope object only refers to the internal version. I know it's got some information about the external version in there too, because, for example, the patch handler, which I was just working with, does a bunch of internal-to-external-to-internal conversions in the course of the retry.
B: But you'll notice that where that happens is not the same place where you'd be able to inject a validator, because it happens pre-admission. The validators are threaded through that code and they're called, of course, on the internal version. They are invoked via a strategy that gets called from the storage layer, and that happens there because it has to be post-admission. If you're going to try to add external validation, I would like to see that.
A: I'm not sure how that can work, as we also run the admission controllers, which dial out to admission webhooks in some cases, and those are all converted to the external version which the webhook registered for. So that's interesting. Anyway, this is a rat hole; there's some stuff to be worked out. But if we're going to do validation on the external version, which it seems like everybody agrees is a good idea, then we have to thread information about these external versions through.
B: Sure. One of the difficulties in working with an aggregated API server is figuring out where to store your data, so that you have a consistent place that's easy to back up. And we have decided that we are not going to introduce a new storage layer in the code; we aren't going to accept a pull request that adds a new storage layer for managing aggregated API servers or anything like that. We have a storage layer that works against etcd, and we aren't going to change that.
B: So the thing that is possible to do is this: there is actually an etcd proxy, a gRPC proxy, that already exists today, and you can scope the key ranges that you're able to actually write data into and read from. You can scope it down so that you don't leak outside.
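As a rough illustration of the key-scoping idea (this is a toy in-memory model, not the actual etcd gRPC proxy), confining every read and write under a tenant prefix keeps an aggregated server from leaking outside its slice of a shared key space:

```python
class ScopedKV:
    """Toy model of scoping an API server to a key prefix, in the spirit
    of the etcd gRPC proxy key scoping described above. All reads and
    writes are confined to the prefix; nothing leaks outside it."""

    def __init__(self, backend, prefix):
        self.backend = backend  # shared dict standing in for etcd
        self.prefix = prefix

    def put(self, key, value):
        self.backend[self.prefix + key] = value

    def get(self, key):
        return self.backend.get(self.prefix + key)

    def list(self):
        # Range reads are bounded to the prefix, so a scoped client can
        # never observe another tenant's keys.
        return {k: v for k, v in self.backend.items()
                if k.startswith(self.prefix)}
```

Two scoped clients can then safely share one backend, which is the "wire it up cleverly so it can be the same etcd" case mentioned below.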
B: But it doesn't require any changes to the storage layer itself. You just point the aggregated API server, when you're trying to aggregate it in, at a different etcd, and it writes to it as normal. If you wire it up cleverly, it can actually be the same etcd that the kube API server writes to. And there's a project to explore that idea for the Google Summer of Code, and I actually don't know, I've never pronounced the author's name before. Are you going to talk about it? Yes, I am. How do you pronounce your name?
F: Yeah, so I sent an email; I don't know if people have had a chance to read it, though. In the previous discussions about the storage layer, there were actually two interfaces under discussion. One was the interface of the master API server in kubernetes that serves all the existing built-in APIs.
F: There's a single in-system client, a small number of server implementations, and especially for the case where we don't have to worry about authentication and authorization, like for same-node calls, this does not actually meet those requirements. So I'd say: if you want to experiment with building a service that aggregated API servers can use as a standalone thing, that's totally fine; that's no different than each one running their own etcd server.
F: We address API objects through different versions, move them across API groups, and change the names of the resources, although one could argue whether we should still do that or not now. We've learned some things about how the storage is organized that could be improved, to put it mildly. There's also the issue of sharding, which was brought up recently.
F: And the resource version obfuscation, which was brought up just a few minutes ago. Since before kubernetes existed, I've wanted to change the storage to actually store the objects by UID and have the name be a symlink, but with the semantics of etcd we cannot actually do that. That's how Borgmaster works; it's how Omega worked, notably, even.
I feel like we're coming at this from two different perspectives. I think the trigger word here is "API": if somebody created a service and they hit services/namespace/name against it, that's really not something that we have; that's why we have kube. I think for this to even really be palatable, the best way to prove the idea is as an extension to kube, like any other extension, despite the fact that it may actually have relevance.
A: David, maybe my mental model is off, but I feel like this is the dual to the etcd operator. With the etcd operator, you create an extension CR that says "I want an etcd cluster." This is the dual to that, which says "I created something, something fulfilled it, and that gave me a key space." I don't really care what it is; it's just not a detail I care about.
B: It can be built completely outside of kube, but I suspect that the primary consumers will be people in kube trying to actually have storage for their aggregated API server, and there'd be different ways of fulfilling it. On platforms that would allow you to write to etcd in a shared space, it would work there; those that don't, I guess, would create their own.
F: Would this actually have required any changes to the main kube-apiserver? Like, if the problem is that people are running four aggregated API servers and they don't want to have to deal with running four etcds, and they don't want those etcds to have roots or things like that, then if this service were stood up, they could all use it, and it has no implications on whether the etcd for the cluster uses it.
A: Yeah, I can see arguments that the storage API actually is fundamentally imperative, and also arguments that it would be better if we structured it basically as something content-agnostic, where we could implement things like UID storage and indexing in some component that can provide those for all resources.
F: That's not the reason to do it. The reason is more for things like having a fixed name that doesn't change as we rev API types, being able to serve the final state of the object to asynchronous consumers post-deletion, and providing cleaner semantics around deletion, preconditions, and things like that. So the reason why I think it's important that it be...
A: ...done once, somewhere that everybody can reuse, is the risk of fragmenting the ecosystem: having various APIs that don't behave in the same way. Yeah, and I think that's no good for clients. If you don't know what you can expect from a kubernetes resource, then we haven't really built the right kind of platform. Yeah.
F: I mean, that's something that we've been discussing too: what conformance tests we need. The watch tests are underway, thank you very much, Ginny. I think we really need solid tests around the etcd behaviors that we're dependent on, because we have seen with the Consul PR and similar proposals that it's inevitable that people will be trying to swap in other backends, even if upstream doesn't strictly support it, either with an etcd proxy translation layer or some other approach.
F: So it's really imperative that we actually get tests for watch and optimistic concurrency and whatever other behaviors we think are there; at the moment we just kind of inherit those behaviors from etcd. I would also like some tests to help guard against people building dependencies on things that we actually don't guarantee, like the resourceVersion. Well, like the resourceVersion in the sharding case, where multiple resource types are actually in the same etcd, because we currently don't guarantee that across resource types. Yeah.
A: I remember what I was going to say, which is: I have talked to a lot of people trying to write some sort of extension, whether it's an aggregated API or a CRD, and they all say they hate etcd, or they don't like the thought of operating etcd. I think the actual thing that people are saying, or thinking, is that they don't want to operate their own storage. They're not objecting to etcd; they're objecting to the operation part of it. (So they're not objecting to etcd specifically, right.)
A: Yes. Actually, I think people just don't care about their storage. They just want something that works, and they don't want to think about it, and that's why they don't want to operate it. So if we can offer a standardized etcd endpoint in a cluster, I think that would actually make a lot of people really happy, because they could use it without thinking about it. I don't know how we want to hook that up, but I think it's something we could keep exploring in an ongoing manner.
F: That gets into the issue that then we're really saying that forever we're offering an etcd v3 API, and I don't think... Right, yeah, I wouldn't want to promise this forever. We haven't even exercised very much of etcd v3, like the full range of v3 behaviors. (So effectively our hands are tied because we still support etcd.) Yeah, but I'm saying we don't actually have the experience to offer a fully-fledged etcd v3 service, because we're actually only exercising a small part of it. Yeah.
C: Right, I wouldn't expect it to be a standardized thing. I would expect it to be: if you are running an API server and you're using the etcd3 back-end, you can point it at a service. Then you can either run just an etcd at that service, which you can do today, or, if you want to point a bunch of things, say two different API servers, at it and shard them off, and not just trust that they're playing nicely in their own space, then you can run this thing that is being proposed.
K: Sure, so I'm looking at using the kubernetes API machinery to build other systems, so I need to be able to do things like predict its throughput and scale it if necessary. I'm working on characterizing what I can get now; it's turning out to be a little bit more work than I expected. I was wondering what the...
C: Actually, there's an optimization for that, so it will...
I: When we moved to gRPC with etcd v3, we lost most of the metrics, so it's not in the API server today; we don't have the same level of granularity. You could approximate the hit rate, probably, just by knowing which queries hit the watch cache, but we could probably do a better job of saying which ones are served by the watch cache. Yeah.
A: And I think, since you can't statically analyze the number of retries, actually measuring it empirically probably has more value than static analysis. With static analysis you'll find that, yeah, I could guess what I want. I think it would be more useful to measure empirically what it is under load. Okay.
F: Scalability SLO documentation? Well, that's probably in sig-scalability. This one, I guess, I would actually put into the community repo under contributors/design-proposals, in the API machinery part. This is the design part of the design proposals, like what our targets are. Well, the ones we believe the existing design meets. Okay.
B: Design proposals, if you agree. Well, one thing I am curious about: how are you counting requests? Like, you make a request with a service account token, and so we have to verify that the service account matches. Are you including that as part of the API request for the get of the pod that you wanted, or are you including it as a separate one, since it has different get requests associated with it?
K: Clearly it's caching just for reads; obviously the performance of the cache is dependent on the request mix. And for many things there'll be dependencies; in general there's always a dependency on the usage pattern. So just write something that outlines it, so that a designer has some idea of what to expect, what the range of variation is, and what the relevant issues are. That's what I think could be useful for someone who wants to use this machinery for other purposes. Yeah.
C: If you drill down into the individual test runs, there's more raw data available, like percentiles and buckets and things like that. That doesn't bubble up a lot into the top-level graphs; we usually dig into it when we're trying to track down a particular issue, but some of it is captured. There's a balance between how much data gets retained from the runs versus not blowing up our storage.
K: The order that it merged them in; well, the important thing is that it is remembering. The trick is that it is serializing. There is some bit of logic that handles one notification at a time, and that bit of logic can construct a sequence of version vectors that are totally ordered. It's a very simple idea: for every notification that gets passed along, it just keeps a current version vector and updates the appropriate field.
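That simple idea can be sketched directly (a minimal model with made-up source names): serialize the notifications, update one field of a current vector per notification, and snapshot it each time, which yields a totally ordered sequence of vectors.

```python
def ordered_version_vectors(notifications, sources):
    """Handle notifications (source, version) one at a time: update the
    current vector's field for that source and record a copy. Because the
    updates are serialized and each component only moves forward, the
    recorded vectors are totally ordered under component-wise comparison."""
    current = {s: 0 for s in sources}
    history = []
    for source, version in notifications:
        current[source] = max(current[source], version)
        history.append(dict(current))
    return history

def dominates(a, b):
    """True if vector a is >= vector b in every component."""
    return all(a[k] >= b[k] for k in b)
```

Concurrent handlers would not get this property for free; the serialization of "one notification at a time" is exactly what makes the total order fall out.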