From YouTube: Kubernetes SIG API Machinery 20171011
A
So it looked like you'd gotten most of the input, or you'd gotten a lot of input from the issue, I think it was, or we had talked about some of the use cases for both and what it would take to get them to beta, and you'd distilled it down to what we needed for webhook admission. Do you want to talk about the difference in flow? Because I saw you had, from that other talk, the idea of mutators and validators.
C
Verify things that way: do you want to be absolutely sure that certain types of objects aren't allowed in your system? You can write that just as a validator. And some types of things will need to be split into registering both the mutating and the validating hook, but it could be just one backend, like a slightly different sort of resource.
A
Yes, so, so I like the split. I think I had two substantive comments on the doc. One was about conversion, and I think that we're roughly agreeing with each other that we don't want to touch an object unnecessarily; we have to be careful about converting. And then, for mutators, they're gonna have to send back patches, so they can reliably not strip fields, which I think was mentioned in your doc. And then the other kind of ask that I would actually like to see.
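A minimal sketch of the mutating-webhook shape being discussed, assuming the v1beta1 AdmissionReview wire format of this era; the handler path, the injected label, and the trimmed-down structs are all illustrative (the real types live in k8s.io/api/admission/v1beta1). The important detail is that the response carries a JSONPatch instead of a whole object, so the webhook reliably cannot strip fields it doesn't know about.

```go
package main

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"net/http"
)

// admissionReview mirrors only the fields this sketch needs from the
// v1beta1 AdmissionReview wire format.
type admissionReview struct {
	Request *struct {
		UID string `json:"uid"`
	} `json:"request,omitempty"`
	Response *admissionResponse `json:"response,omitempty"`
}

type admissionResponse struct {
	UID       string `json:"uid"`
	Allowed   bool   `json:"allowed"`
	Patch     []byte `json:"patch,omitempty"`     // base64-encoded on the wire
	PatchType string `json:"patchType,omitempty"` // "JSONPatch"
}

// mutate answers with a JSONPatch: it can only change the fields it names,
// and everything else on the object passes through untouched.
func mutate(w http.ResponseWriter, r *http.Request) {
	body, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var review admissionReview
	if err := json.Unmarshal(body, &review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}
	// Hypothetical mutation: add a single label.
	patch := `[{"op":"add","path":"/metadata/labels/example","value":"mutated"}]`
	review.Response = &admissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     []byte(patch),
		PatchType: "JSONPatch",
	}
	out, _ := json.Marshal(review)
	w.Header().Set("Content-Type", "application/json")
	w.Write(out)
}

func main() {
	http.HandleFunc("/mutate", mutate)
	// Admission webhooks must be served over TLS.
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
```

A pure validator is the same shape minus the patch: it just sets Allowed to false, with a message, when a forbidden object shows up.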
A
I would say that the doc was very comprehensive. I thought it took input from, oh no, we had maybe three or four different issues and pull requests describing different things we'd want in a true beta. I'm happy with the doc. If we got all of this in, I think calling it beta for admission is very reasonable.
E
I don't know, we may not actually have to fix some part of it, but at least three or four people should be able to sit down, walk through the entire flow, and say this isn't gonna make Kube worse. I think the analogy here is the long discussion we had about API aggregation: that was a fundamental change to the stability of the system, and so we spent a little bit of extra time on it. It's the same thing.
F
So basically, I shared it today. I didn't realize that I hadn't shared this publicly, but this was discussed before, I guess, with some folks in the engineering channel, if I remember correctly. It's nothing big. I will give you guys time to take a look and comment if you have any comments, but this is what our new member is going to work on: it's just getting rid of the other OpenAPI/swagger endpoints we have.
We have swagger.json, so we've got the JSON, the gzipped JSON, the protobuf, and the gzipped protobuf; all of those will go away. We'll have the one OpenAPI v2 endpoint, and we would set Accept and Accept-Encoding headers to get what we want from that endpoint. That's just basically what it is. I'll not mark it as approved; I don't want it to be approved in this meeting. Let me give you guys time to look at it, because I forgot to actually share it publicly, so yeah.
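To illustrate the consolidated endpoint, a client would hit the single path and negotiate the format via headers, roughly like the sketch below; the proxy address is an assumption (a local kubectl proxy), and plain application/json is used since the exact protobuf media type is version-dependent.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Assumes `kubectl proxy` is serving the API on localhost:8001.
	req, err := http.NewRequest("GET", "http://127.0.0.1:8001/openapi/v2", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Accept", "application/json")
	// Setting Accept-Encoding by hand disables Go's transparent gunzip,
	// so the body below may still be the compressed bytes.
	req.Header.Set("Accept-Encoding", "gzip")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched %d bytes of spec (Content-Encoding=%q)\n",
		len(body), resp.Header.Get("Content-Encoding"))
}
```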
A
I think I would like a chance to read through it in a little more detail. Yeah, I do like the idea of trying to stop the proliferation of different variations on the, I don't remember, the swagger of an API; limiting the number of variations on it, and then, it looks like you're gonna use Accept headers here. That seems pretty reasonable. Mm-hmm.
A
Yeah, I think I like the concept. I want to read through it and then read through all the comments.
E
Note: they're moving API chunking to beta. There have been various discussions; nobody disagreed at any point. The enablement is merged. There are a few usability and user-interface changes to make. If anybody has any additional comments on chunking, we're gonna go through the performance tests with it, even though it doesn't get fully exercised there. If anyone has any feedback on that post-merge, that would be the time to do it.
E
It was paging, and then in the proposal there was a lot of confusion, because people thought this was pagination focused on end-user pagination. So the proposal currently says chunking, to distinguish it, because we're not trying right now to solve the end user going to a web UI and just clicking through stuff. So we can change the naming in the docs; we don't explicitly call it paging or chunking except in code, and in code we're leaning a little bit towards paging in the client-facing APIs, but not in the docs and so forth.
E
One request gets everything, and then you can say I want to get parts of this very large request. The difference here is that paging usually isn't consistent, okay? You get the first page of a web UI, you come back 30 minutes later, you click next page: that's not what this is. This is the same as the way I can list all 500,000 pods in the system.
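For reference, the chunking being promoted surfaces as a limit query parameter plus an opaque continue token on list responses; here is a sketch of the client loop, with the page size and the kubectl-proxy address chosen arbitrarily for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// podList extracts only what the loop needs from a List response.
type podList struct {
	Metadata struct {
		Continue string `json:"continue"` // opaque token for the next chunk
	} `json:"metadata"`
	Items []json.RawMessage `json:"items"`
}

func main() {
	base := "http://127.0.0.1:8001/api/v1/pods" // assumes `kubectl proxy`
	token := ""
	for {
		q := url.Values{"limit": {"500"}}
		if token != "" {
			q.Set("continue", token)
		}
		resp, err := http.Get(base + "?" + q.Encode())
		if err != nil {
			log.Fatal(err)
		}
		var list podList
		err = json.NewDecoder(resp.Body).Decode(&list)
		resp.Body.Close()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("got a chunk of %d pods\n", len(list.Items))
		// An empty continue token means the consistent snapshot is exhausted.
		if token = list.Metadata.Continue; token == "" {
			break
		}
	}
}
```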
J
So, a different topic: I wanted to revisit the notion of user API servers, and specifically storage. Last week, or last two weeks ago, the assertion was made that there was no time or no manpower for pursuing an etcd-like API within Kubernetes, and the advice given was to basically run our own etcd server. I was just curious about the possibility, or the reasonableness, of using the etcd server that actually ships with Kubernetes.
The logic there being that, you know, as the API servers are being split out into multiple chunks, Kubernetes is going to have to have mechanisms in place for itself anyway. Is it reasonable, in general, for third-party servers to also use that same mechanism, or is it going to make things a lot more confusing?
E
One of the things that we did talk about last week was the possibility of making it easy for you to go get subsets, like to get a credential that lets you talk to etcd and do a subset of keys. That's something that, you know, if someone could build it, would probably be easy and would be beneficial to a number of people who would like to continue to use the central etcd server, as long as those people can live with the trade-offs of not having quota.
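No such credential tooling existed in Kubernetes at the time, but etcd v3's own role-based access control can already pin a user to a key prefix, so a consumer of such a credential might look like the following sketch; the endpoint, username, password, and prefix are all made up for illustration.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Hypothetical credential that an etcd admin has restricted, via etcd
	// RBAC, to keys under the "/registry/example.io/" prefix.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd.example.com:2379"},
		Username:    "addon-server",
		Password:    "not-a-real-password",
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Reads or writes outside the granted prefix are rejected by etcd itself.
	resp, err := cli.Get(ctx, "/registry/example.io/", clientv3.WithPrefix())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("keys visible under the granted prefix: %d\n", len(resp.Kvs))
}
```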
E
That's a reasonable thing. It's also, okay, well then, what's the alternative? It's creating lots and lots of etcd servers, and then those each individually fail, and if anything else in the entire cluster depends on one of those failing, then the whole cluster goes down and you didn't actually get fault isolation. So, like, operationally, I think you're right, there's a trade-off there. I don't, like, honestly, in OpenShift, when we do aggregated API servers, we're going to the same etcd.
D
One thing I think that's reasonable, given that you're shipping an entire product that's tested and qualified all together, right. The other end of the spectrum is different, which is an extension add-on, provided by a third party, that an operator installed into their own cluster, that's then using the same etcd and wasn't validated together with the rest of the system, and could therefore make Kubernetes stop working in a way that makes it unrecoverable for an end user. Well...
E
Webhooks are actually the example I think that's really relevant right now, right: the controller manager goes down, it comes back up; until it's self-hosted, and then it doesn't come back up, because an admission webhook fails closed. So maybe what we need to do is actually take the previous doc and add some of the trade-offs, like you're saying: like, we recommend that if you're talking about end-user facing, is this something that you trust as much as the Kubernetes development project?
K
I think giving good guidance is gonna be really helpful for people here. One thing we've seen operationally is that, because etcd runs everything through a central log, sometimes two things that individually would have run fine on separate clusters, when combined, have nonlinear effects and bring the system to its knees. So if you haven't tested things together, there's really no way of knowing if two independent use cases that worked fine on separate etcd instances will work fine together. So I think giving people really good guidance on keeping things separate.
E
I think this is the: at what level is the thing that's trying to do API aggregation? I mean, we've always said CRDs, like, a CRD controller can blow up etcd no differently, and so we'll start getting the question of whether CRDs should run against a separate etcd server as well; that's been raised in the past by David and others. Yeah.
E
Like, it's a really good point: there's actually nothing in Kubernetes today that prevents the cluster going down from someone forgetting to do some sort of control. Like, more clusters fail because people forget to clean everything up than from almost any other mechanism I know of. So we kind of, we probably have the general issue of: we need to get a little bit better at unbounded growth before we really worry about the addition of other things.

Like, most of the things in the core Kube API are somewhat interdependent, where it's hard to reason about how they fail independently. The admission plugins are gonna be the exact same way. Essentially, whether you use a CRD or an aggregated API, it has the exact same problem. We definitely want to limit blast radius, but we're not doing a great job of limiting blast radius even on the core API today, yeah.
K
And we also want, we also want people that are not experts on etcd to have predictable behavior. So, you know, if we're able to isolate things, then the system will fail in a much more predictable and obvious way. Whereas if you combine the use cases, the error you might see might have been telescoped from one system into a visible error in another system, and tracing it back to the original cause is pretty intractable.
C
So I want to add, in terms of the run-your-own-etcd: like, I think there's a bunch of low-hanging fruit in improving the etcd operator, and, you know, the sig-apps folks at least would like to spend some time making that better. So I think we should try to improve the etcd operator, and we can reuse it in the core as well.
E
Even, even some level of, we did this for events, which is we split out, we made it possible to split events onto a separate etcd. It's possible that we should just make it really easy to run parallel ones on the same masters. They'd have a correlated failure domain, but in terms of isolation, because again, like, a lot of the resources that might be critical to a cluster, there might only be 20 of them, right? So it really doesn't even matter, like, if you have 20 resources and one of them does go.
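The events split mentioned here is exposed through the API server's --etcd-servers-overrides flag, which routes one resource's storage to a different etcd. A sketch of what a parallel events etcd might look like, with the endpoints purely illustrative:

```
kube-apiserver \
  --etcd-servers=https://etcd-main:2379 \
  --etcd-servers-overrides=/events#https://etcd-events:2379
```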
A
So I'll go ahead and assign myself there. Yes, I intended to leave a comment here. Let's see, there's an assign button here somewhere, right? Awesome, just checking in. Thanks, yeah, there we go. Yeah, I can describe it: there's definitely a spectrum of how much you trust the things and how much risk you're willing to take on to be able to have, you know, easier backup/restore stories, that kind of thing.