From YouTube: Kubernetes SIG API Machinery 20180228
Description
For more information on this public meeting see this page: https://github.com/kubernetes/community/tree/master/sig-api-machinery
A: How can we optimize our design process? We want to make sure people get good help on their designs, and we want to make sure that we get something early in the pipeline, so that we're not working on a design at the same time that we're about to enter code freeze. At the same time, David and I have limited capacity to actually review these things. So this is more like group problem solving: how can we make this better?
A: Yeah, it's reasonable as an expectation that people should have a design at least in progress at that point. I mean, if you just got a design out right before feature freeze, the expectation is that we'll work with you as we have time, but you're probably talking about the release after this one. Are there any projects that are not bound to that release governance? Phil is asking if there are some projects not tied to the main release.
A: Okay, let's see. It looks like we've got 12 people on, so shall we go through the apply stuff?
A: Why is this good? Users get used to modifying and interacting with the exact API schema that the system expects. This means that if the various tools in the pipeline act this way, the user can inspect all stages of the pipeline and draw reasonable conclusions, as opposed to a system with some complicated configuration-building machinery, where the user has to understand some code and some templates to be sure they're doing the right thing. And from a system integrator's perspective:
A: Instead of having to integrate with an API for every operation for every object, you just have one API with a very uniform data format. So if you're thinking in terms of calls, it's very, very simple. And if you're really doing declarative management, with the kind of document you'd kubectl apply, it is necessary for that. It is not good for imperative commands: you shouldn't charge a credit card with apply, you shouldn't launch missiles. You shouldn't do that anyway, but you especially shouldn't do it with a declarative system.
A: Why is it hard? It's hard because we're deducing what the user wanted based on the last thing they wanted and the thing they want now, and users are not simple creatures: they want to do complicated things. I think in the general case you'd need human-level intelligence to figure out what the user was trying to do. To make this simpler, ideally we restrict our API surface area to try and make rules for the deduction that will be correct. I'm not aware of other systems that do a similar thing.
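The deduction just described is, in effect, a three-way merge between the last-applied state, the live state, and the newly desired state. A minimal sketch of that rule in Python (a toy under stated assumptions: flat top-level fields only, whereas real apply recurses into nested maps and typed lists, and all field names here are invented):

```python
def three_way_merge(last_applied, live, desired):
    """Deduce changes from the user's previous intent (last_applied)
    and new intent (desired), applied on top of the live state."""
    result = dict(live)  # start from what the cluster currently has
    # Fields the user previously set but no longer mentions: remove them.
    for field in last_applied:
        if field not in desired:
            result.pop(field, None)
    # Fields the user mentions now: set them to the desired values.
    result.update(desired)
    return result

# The user drops "replicas" and bumps "image"; a field set by someone
# else ("observed") is left alone.
merged = three_way_merge(
    last_applied={"replicas": 3, "image": "v1"},
    live={"replicas": 3, "image": "v1", "observed": 7},
    desired={"image": "v2"},
)
# merged == {"image": "v2", "observed": 7}
```

The point of restricting the API surface, as above, is that this simple rule stays correct: fields you stop saying are removed, fields you say are set, fields you never said are left to others.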
A: ...the object apply, anyway. And there are some additional difficulties, like identifying the universe of objects that you want to run apply on, which I'm not going to cover. I'm not providing an API that just performs merges, as in "take these three overlays, or these three patches, merge them all together, and return the result without affecting anything in the system." I'm not doing that. I think that's future work; I think something like that is going to be necessary eventually, but it's not what this design is about right now. And there are some sources of user confusion.
A: The name and namespace in your manifest identify the object that you're applying a change to. If you modify those fields, you won't see that object renamed; you'll see another object created. We need to fix that, and we can't fix that in the control plane. We've got to fix that in the user's head, I think, hopefully with documentation. There are sort of two parts to my overall proposal here: the first one is about the overall lifecycle, and the second one is about how you actually do a structured merge or diff.
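The identity rule above means a manifest's (namespace, name) pair is the object's only identity, so editing either field can only create a second object. A toy illustration (the store and all names are invented for this sketch):

```python
store = {}  # stand-in for the cluster's object store

def apply_manifest(manifest):
    # Objects are identified solely by (namespace, name);
    # apply has no notion of a rename.
    key = (manifest["namespace"], manifest["name"])
    store[key] = manifest

apply_manifest({"namespace": "default", "name": "web", "image": "v1"})
# Changing the name does not rename "web"; it creates a new object:
apply_manifest({"namespace": "default", "name": "web2", "image": "v1"})
# store now holds two objects; the original "web" is untouched.
```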
A: So, the first one I call user intent lifecycle management. This is the first design doc I sent around. Example problems in this space: the user does a post, changes something, then applies. You get a surprise, because your apply maybe does different things to defaulted fields. The user does an apply, then they edit it. Again, surprise: your apply wipes out everything you did with edit. Maybe the user does a get locally.
A: Then you apply that thing you just changed locally. You get a surprise, because now you own a bunch of defaulted fields that you didn't actually intend to. Maybe you tweak some annotations and apply, and that's surprising. If you modify the last-applied-state annotation, a bunch of strange things could happen. And then finally, I don't know, there's debate on whether people actually make this mistake in real life.
A: Why is it hard? We've had a bunch of smart, talented people working on this, but somehow it's still broken. I think there are two main reasons. The first one is the tactical reason: there's just too much stuff that has to change in order to get a fix out there. You have to change the client, the kubectl client. You have to change the schema, maybe. You've got to change the strategic merge patch format; maybe you have to change something in the control plane. That's a lot of stuff.
A: Details of this proposal: we're going to move the apply logic out of kubectl into the control plane, with a logical apply verb, probably implemented as part of patch. We'll have to add a dry-run mode. A potentially controversial part of the proposal is that we should just generalize this and track last-applied state for multiple appliers. This slide says "client ID", but since in the CLI discussion we were talking about a different name for an applier, I may change that name in a bunch of places.
A: I intend for that to be returned by the API server. And I think, if you apply a manifest and that manifest includes this field, we should reject it, because that's an indication that you're following an incorrect workflow. So when you get the last-applied state, what's returned is the last-applied state specific to your client ID. Actually, you see the entire map: everybody's, everyone's last-applied state. This lets you do a bunch of things, like figure out who is trying to set what.
A: It should default to erroring on conflict: if you're taking over fields that somebody else has set, you should get an error, and you should have to say something specific to override them. We'll detect when manifests are sourced from some incorrect user workflow, to help users not follow those workflows, and we'll have to add some extensions. In any case, someone needs to actually do the implementation work. In the future, I'd like other API verbs to intelligently interact with the last-applied states and provide errors or warnings.
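The default-error-on-conflict behavior could look roughly like this, where taking over another client's fields fails unless the caller passes an explicit force flag. This is a sketch under assumptions; the names, exception type, and ownership shape are invented, not the real API:

```python
class Conflict(Exception):
    pass

def claim_fields(client, desired, owners, force=False):
    """Error if this apply would take over fields owned by another
    client; an explicit force overrides and transfers ownership."""
    contested = sorted(
        f for f, owner in owners.items()
        if f in desired and owner != client
    )
    if contested and not force:
        raise Conflict(f"{client} would take over {contested}")
    for f in desired:
        owners[f] = client  # unowned or forced: claim the fields
    return owners

owners = {"replicas": "hpa"}
try:
    claim_fields("kubectl", {"replicas": 2}, owners)
except Conflict:
    pass  # default: the user is told, not silently overridden
claim_fields("kubectl", {"replicas": 2}, owners, force=True)
# owners["replicas"] is now "kubectl"
```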
A: By default... my general theory on errors is that we should try to funnel errors to users, to humans, where they can do something intelligent about them. So I think controllers should always win conflicts: if you're an HPA and you're getting a conflict because the user owns a field, say replicas, erroring at you is pointless, because you're a bit of code and you're just going to keep on trying to apply that. So, ideally, we just let the HPA win.
A: The next time the user tries to set that field, they'll see that the HPA is trying to own it, so...
C: I'm not asking you to. You're bringing it up here, it's here, and I'm sitting here thinking through the implications of that at this very moment. And I think the implications are that controllers are gonna go in and say: no, no, I don't want to default to error, I want to default to "I win". Yes, controllers will. And the thing is, I can hear Phil in the background, and I know he was there when we made apply default to overwrite=true, and so he's gonna be saying...
D: Yeah, so I just think that there are a number of things in here where the details are probably gonna be discussed and may end up different from what's on this slide. For that particular one, I'd probably argue that it should always error, for controllers and users alike, and we just make everyone, including controllers, explicitly claim the field.
G: People may agree with that. So I was thinking about this from the angle of how we explain it to someone, starting with the proposition that, if you take something over that you didn't intend to, at least we have a way to go figure out that someone took something over: we can give a clear error. Giving a clear error in some cases removes the need for us to break our own system to make users able to understand what's going on. Yeah.
A: I noticed, in playing around with the current kubectl apply, that there are some conditions under which it'll give you a little warning, like: "yeah, it looks like you took over a bunch of fields, sorry about that, maybe don't do that next time." And it doesn't actually stop you. Yeah.
A: So I'd like it to actually stop you and give you the choice: did you really want to do that? Do you really want to take over all these defaulted fields? If so, just do it again with the force option, and you can do that. But if not, if you're actually following this incorrect workflow, which is probably what you were doing, then maybe you can fix that now, before you clobber your object, instead of after. Or maybe...
C: I'm not disputing that. What I'm saying is that the existing code that we have today, across the entire Kube stack, and anything anyone else has developed, like, say, Service Catalog, is code in which defaulting happens first, to make your object convertible to the hub version. If that assumption is going away, that seems like a top-level thing to call out for anyone who has ever worked with API machinery, and then it requires reworking our entire stack.
C: That isn't involved in this choice one way or the other, right? This is about how you convert one external type to another, one external serialization to another external serialization. Our current path is: do defaulting, convert to a hub version, convert out, serialize. That first step, do defaulting, is not accidental.
A: Okay, if you're really worried about that, I can send a PR. I have a slide on multiple appliers. This is debated by some people. My feeling is that apply is more general and more convenient: the data format that you send for apply is marginally more convenient than the data format that you send for patch. And strategic merge patch has some deficiencies, so maybe we should just urge everyone to use apply.
A
Some
people
feel
that
apply
that
there
should
be
like
one
applier
and
everybody
else
is
like
a
second-class
citizen
like,
like
controllers,
are
second-class
citizens,
I
think
I'm
more
ambivalent
about
that,
but
I
think
even
in
that
case,
the
applied
data
format
is
more
convenient
than
the
patch
data
format,
and
you
should
think
of
the
multiple
requires.
As
the
blast
applied.
States
is
providing
like
the
gift
blame
functionality
and
not
as
as
a
stack
of
patches
which
add
up
to
the
live
state.
I.
A: That's a good question. I think certainly, if it's alpha, it should somehow be behind an easy flag guard. The question is: do you flag-guard just the user-visible parts, or do you guard all of your logic under the surface? The latter of those becomes very entangling and hairy. Yeah, it depends, I think.
D: Yes, great, okay. So, briefly, what I want to talk about is a proposal for a sub-project to put a porcelain layer on top of the API extension building tools. There are some cool projects I've seen built by community members, for things like operator kits, and then I saw one from OpenShift for webhooks that is pretty cool. So a number of these things exist.
D
These
specific
kind
of
components
of
this
that
I've,
been
thinking
about,
would
be
tools
to
help
with
bootstrapping
projects
such
as
providing
like
the
common
set
of
vendor
dependencies,
common
directory
structure,
build
files,
etc
giving
linear
depths,
even
though
it
sounds
easy,
there's
fraught
ways
of
doing
it
and
I've
ended
up
with
the
wrong
set
of
open,
API
libraries,
a
number
of
different
kinds.
It's
always
a
little
bit
hard
untangle,
defining
API
schema
so
by
tools
for
running
integration,
test
locally
run.
D
Setting
up
a
service
account
setting
up
our
vac
rules,
generating
certificates
and
creating
resources
with
these
certificates,
installing
them
in
secrets,
generating
reference
documentation
for
the
pieces
and
kind
of
setting.
All
this
up
for
service
catalog
there's
a
go.
Client.
That's
written
that
sets
of
all
this
stuff
that
generates
all
the
pieces
and
in
having
a
single,
unified
platform
that
allows
folks
to
plug
into
might
useful.
So
getting
the
question
that
here
is
this:
something
that
would
make
sense
to
write
a
kept
floor,
and
they
can.
You
know
you
would
be
good
so.
D: So, if you look at, I think, Spring Boot as an example of a framework: when they upgrade versions, they give you all the new libraries and you don't have to figure out how to do that yourself. And to your point, David: when you change the code generators, for instance, you also have to update the API machinery, and you have to update the generated libraries you've vendored, and then...
D
You
have
to
update
the
actual
code
that
you've
written
that
uses
the
API
machinery
and
so
in
the
to
get
into
the
details.
Yes,
this
attempts
to
address
that
by
one
making
sure
that
the
code,
generators
and
the
vendor
code
are
in
same
using
the
same
versions
and
then
also
tries
to
abstract
away
a
lot
of
the
details,
because
a
lot
of
the
times,
what
breaks
is
like
an
implementation
piece
and
not
like
the
abstract
notion
of
I,
have
a
function
that
takes
an
object
and
then
gets
invoked
when
it.
C: I mean, right, I know. So if I want an API server where I support apply across versions, things like the apply that Daniel just described, my apply would not work across versions unless I changed my conversions to all be compatible with the fact that defaulting no longer happens on the conversion path. So that's a concrete example of something you would not be able to insulate a user from, and I want to be sure that we'd be able to make that kind of change. Yes.
C: Yeah, so there's that. But there's a question of whether this is a particular thing where you needed to do something to be able to use the feature, and it may or may not be fully compatible. I just want to make sure we aren't going to use something like "the fact that I bootstrapped the project for you" to prevent a different feature from getting in. I don't...
D: So I guess there are two ways I'll address that. The first is that SIG Apps actually really wants this and was willing to sponsor it. Kenneth Owens from the workloads team here at Google is using it and wants to see this. The reason I'm coming to SIG API Machinery is that when I talked to SIG Architecture, they said SIG API Machinery would be the correct SIG to sponsor it. But in terms of staffing, I think that will be possible...
D: I don't know how much of a step three there is. I mean, yes, something like Service Catalog has to deal with this, and I guess my argument here would be that that's gonna be a problem no matter what: you're either pushing it out to every single operator to have to solve that same problem individually, or pushing it to one place. No...
D: Okay, well, let me put this in the KEP, but I think we're all on the same page where, one, potentially another SIG could adopt it if it's clear that it isn't API Machinery and someone else wants to maintain it. And it is a sub-project; the way these sub-projects work, not all sub-projects are the same, and you're not obligated to make it official just by...
G: ...an update to it, and it gets deleted, and the kubelet can look and say, "yep, this is deleted." It can immediately crash, and then, when it hits the API server again, it can get an older version from the watch cache that still has that pod in it, which means that if the StatefulSet controller went and scheduled pod zero on another node, you can now have two pod-zeros from the StatefulSet running.
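The failure G walks through hinges on a re-list served from a cache at an older resourceVersion than an event the client already observed. A toy timeline of that hazard (resourceVersion numbers, node and pod names are all invented):

```python
# etcd's history of (resourceVersion, node running pod-0):
etcd_history = [
    (10, "node-a"),  # pod-0 scheduled on node-a
    (11, None),      # pod-0 deleted; kubelet on node-a sees this, crashes
    (12, "node-b"),  # StatefulSet controller recreates pod-0 on node-b
]
latest = dict(etcd_history)[12]       # truth: pod-0 runs on node-b

# The watch cache lags at resourceVersion 10, so the restarted kubelet's
# re-list "un-sees" the deletion and still shows pod-0 on node-a.
stale_view = dict(etcd_history)[10]

# Both nodes now believe they should run pod-0: two pod-0s at once.
nodes_running_pod0 = {stale_view, latest}
# nodes_running_pod0 == {"node-a", "node-b"}
```

The safe behavior would be to refuse to serve a list older than what the client has already observed, which is exactly the consistency guarantee being discussed.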
G: ...you know, from a small set of nodes to a lot of nodes, and we depend on it pretty heavily. So most of the mitigations, short of turning the watch cache off altogether (which, even though etcd has improved significantly, is probably unlikely to get us back to where we were), would mean we'd have to take a pretty significant hit, and people would be pretty significantly impacted.
G: We basically don't have a correct way to keep a cache up to date, to resynchronize our cache. You need a guarantee about what happens between a client hitting etcd from another machine and the watch cache. That was one of the options raised, and I don't know if it's the right thing, but we've not been particularly great at being correct on the watch cache over the last couple of releases.
G: I was gonna say, we really need to probably consider investing in an invariant checker that can run alongside our e2e suites. We've done it for a couple of the controllers. We probably need to, as a group, say: it's really hard to reason about these systems, and if we have invariants, we need to start thinking about testing them, all the way across.

A: I agree with that, and yeah, I think...