From YouTube: Kubernetes SIG API Machinery 20180117
Description
For more information on this public meeting see this page: https://github.com/kubernetes/community/tree/master/sig-api-machinery
F
So, I guess, just a checklist from me. It would be something like: are we sure that it actually works with constituent API servers and has real consumers? I remember that someone like Google was trying to develop against it. Tim, I think, actually has operational experience using it, and it would help if he could comment on the different pieces he's used, just because there are a lot of different options and I'd like to make sure that most of them, at least, have actually been used before calling it GA.
F
I am asking whether someone is interested in doing it, because it's been on the backlog. I have a general interest in seeing it, and I know that the kubelet is struggling through it now. But if someone were really interested in trying an experiment, it would be something that I'd be willing to review pull requests for.
A
We can add versioning in a v2, but separately. We've volunteered many people to think about this, so the other reason I put this on the list is to fish for other people who are interested in working on CRD versioning, and we can have a short working group, or just more informal design meetings.
F
We could actually add it to v1, I think, even with the concern. You know, where there is a particular field in other APIs, what we've done is add another slice that says something like "additional versions" to the GA version that exists, and then bumped to another version to have it just inline with the different serializations, so it looks prettier. It's a backwards-compatible change.
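A minimal sketch of that additive pattern. The type and field names here are invented for illustration (the actual field under discussion isn't named): the GA type keeps its existing field and gains an optional slice, which is backwards compatible because old clients simply ignore the new field.

```go
// Hypothetical GA type illustrating the additive-slice pattern described
// above; the names are invented, not from a real Kubernetes API.
package v1

// WidgetStatus is the existing GA type.
type WidgetStatus struct {
	// Version is the original singular field; it must keep working
	// exactly as before so existing clients are unaffected.
	Version string `json:"version"`

	// AdditionalVersions is the new slice added alongside the existing
	// field. Adding an optional field is backwards compatible: old
	// clients ignore it, and it can be defaulted when absent.
	// +optional
	AdditionalVersions []string `json:"additionalVersions,omitempty"`
}
```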
H
I was gonna say, it feels like the decision to go GA will depend on what the design for versioning looks like, in scope. But then there's a set of other work — is that set of other work enumerated? That's something that maybe I missed. Was this going to be in a doc, or was there a doc on that that's not the email chain, and is there anything more than that?
K
Sorry — so this one is about user and developer experience. When trying to develop an out-of-cluster API server and then register it on, for example, a minikube cluster, it's very hard to actually — I mean, it's possible to do it, but, for example, the port is hard-coded, and you have to run your API server in a particular mode, etc. And you cannot run, for example, multiple custom API servers on your local machine during development. So I'm not sure.
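For context on the registration step being described, a minimal sketch of the APIService object that wires an aggregated API server into a cluster, using the v1beta1 apiregistration types of this era. The group, namespace, and service names are invented, and InsecureSkipTLSVerify is the kind of development-only shortcut the speaker is alluding to.

```go
// Sketch: the APIService registration for a custom API server, with
// invented names. In production you would set CABundle instead of
// InsecureSkipTLSVerify.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1beta1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1"
)

func exampleAPIService() *apiregistrationv1beta1.APIService {
	return &apiregistrationv1beta1.APIService{
		// The name must be "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.example.dev"},
		Spec: apiregistrationv1beta1.APIServiceSpec{
			Group:   "example.dev",
			Version: "v1alpha1",
			// The aggregator proxies requests to this in-cluster
			// service, which fronts the custom API server pods.
			Service: &apiregistrationv1beta1.ServiceReference{
				Namespace: "example-system",
				Name:      "example-apiserver",
			},
			// Development-only shortcut.
			InsecureSkipTLSVerify: true,
			GroupPriorityMinimum:  1000,
			VersionPriority:       15,
		},
	}
}

func main() {
	fmt.Println(exampleAPIService().Name)
}
```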
J
So we talked about this a little bit before KubeCon, and then over the holidays it kind of fell by the wayside, but I am going to be starting up a repo today and sending out mail to SIG API Machinery to start working on a prototype of the dependent-resources API. Back before KubeCon I had talked about this in one of the meetings, and I know...
J
I don't know if this warrants creating a working group, because I know there are other interested folks, and it would be great to have other folks, you know, doing code reviews and looking at this design as it evolves. But I also don't know what the process is for starting a working group, or if that's appropriate for this sort of a thing. Well...
H
You could certainly, like, have meetings, get feedback, email, bring people together, write docs, without having formed a working group. I think a working group is for when there are enough people complaining that this isn't being structured. It's usually when people start complaining that you're being obscure about what you're deciding that you create the working group — to keep Joe from yelling at you, okay?
H
Oh, and Phil and I were going back and forth prior to the break on whether the OpenAPI column JSONPath thing that they had put into kubectl was sufficient. So basically there is, you know, a JSONPath thing you add to a field in OpenAPI, and then kubectl would go and parse the OpenAPI spec and use JSONPath to go find the columns, and that works when you're offline — at least, you know, you have an OpenAPI spec cached locally that you can use. But we were talking about...
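A small sketch of the client-side mechanism being described: kubectl's JSONPath machinery (the real k8s.io/client-go/util/jsonpath package) extracting a column value from an object. The object and path here are invented for illustration.

```go
// Sketch: evaluating a JSONPath column expression against a decoded
// object, the way kubectl's custom-columns printing does client-side.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// A stand-in for an object decoded from the API server.
	obj := map[string]interface{}{
		"metadata": map[string]interface{}{"name": "example"},
		"spec":     map[string]interface{}{"replicas": int64(3)},
	}

	// A column definition like the ones carried in the OpenAPI
	// extension: a header plus a JSONPath expression into the object.
	jp := jsonpath.New("replicas-column")
	if err := jp.Parse("{.spec.replicas}"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print("REPLICAS: ")
	if err := jp.Execute(os.Stdout, obj); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println()
}
```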
H
We kind of worked through that in terms of: kubectl get is just one of the more important UIs we have, and it's not acceptable to really have a best effort there. For most of our core resources, people would say, well, you know, the platform is unusable because, I don't know, I can't use JSONPath to come up with a good kubectl get pods UI. And so, working backwards from that, the fact that other clients like web consoles could use this for things they didn't know about is something to offer.
H
Based on the discussions we had in 1.9, we said we wanted to see this work in the CLI and let it bake before we made the decision to go to beta, but everybody was roughly on board with the schema and the mental model, and so I would assume that statement would hold for 1.10, unless there are objections here. And then the follow-up was: there was a separate discussion going on about partial object metadata, for use in garbage collectors and generic controllers that only want to fetch the metadata — they don't want the whole schema.
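A sketch of that consumer pattern as it eventually landed: client-go later grew a metadata client that returns PartialObjectMetadata. This post-dates the meeting, so treat it as a later illustration of the idea, not the design under discussion; the resource and namespace are just examples.

```go
// Sketch: a controller fetching only object metadata (names, labels,
// owner references), never the type-specific spec or status — the
// PartialObjectMetadata use case for garbage collectors.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	mc, err := metadata.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	// Each item is a metav1.PartialObjectMetadata.
	list, err := mc.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName(), item.GetOwnerReferences())
	}
}
```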
H
No — so that was discussed at SIG CLI this morning. Phil and I were talking about this, and kind of the proposal was: there's been some concern about the full scope of what a server-side apply might look like, until we decide some of those things. Jordan had a good suggestion about how to handle it — Jordan and Phil talked at KubeCon about a short-term step that could move some of the hard parts to the server: do the diff, the merge, on the server side.
H
...and add an API specifically for accepting a merge. And so we suggested using 1.10 to get a working proposal for what it would look like on the server side, get agreement on that for 1.10, and do what we can to break smaller chunks off of apply as server-side things, or prototypes of server-side things. Okay.
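Server-side apply eventually shipped along roughly these lines; a minimal sketch with today's client-go, shown only to make the "server accepts the merge" idea concrete — it is not the proposal as it stood at this meeting. The namespace, deployment, and field-manager names are invented.

```go
// Sketch: sending desired state and letting the server do the merge, via
// the apply patch type that server-side apply later introduced.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// The client sends only the fields it cares about; the server owns
	// the diff/merge and tracks which manager set which fields.
	manifest := []byte(`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 3
`)
	_, err = cs.AppsV1().Deployments("default").Patch(
		context.TODO(), "example", types.ApplyPatchType, manifest,
		metav1.PatchOptions{FieldManager: "example-controller"})
	if err != nil {
		panic(err)
	}
}
```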
H
It varied. When apply was originally designed, we suggested a number of different approaches for storing, you know, the server's preferred copy, the user's preferred copy. All of those are complex, and so I feel like getting to some sort of happy place — where we agree on either a compromise or a long-term direction, with a short-term series of steps that don't conflict with it — is the biggest obstacle to apply on the server side.
H
Honestly, like, I'm not as worried about it, because at this point I don't think any big redesigns are gonna pragmatically happen — we're too far in, and we've got too many real-world things to go solve. But I do want to make sure that, if somebody does come up with a really clever way to solve this that isn't too much work, we can still take it.
M
No, it was — it was effectively that the component hit the same error over and over again, to the point that it was effectively logging the same thing. Oh, right, yeah — we helped you, it's only a thousand times a second; you've seen worse. Yeah, I understand that it was worse before, but it's not helpful that there is no instrumentation to understand that the error is useless to me now, because I have seen it a million times and I don't need it to keep saying it.
H
I didn't have any objection to the de-dupe, other than that it causes new and exciting problems. And then one advantage of tackling it at this level is that we've actually gotten pretty good mileage out of forcing everybody to use HandleError, and we are able to handle some of that. Like, in OpenShift we send all of the HandleError output to Sentry, so at least when we get panics, in some environments we're at least reporting those. I feel like there's at least an advantage, you know, in centralizing something important that's an easy thing to screw up.
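A sketch of the HandleError funnel being described, assuming the util/runtime API as it stood around this era (ErrorHandlers was a plain slice of func(error); it has since been reworked). The forwarding function is a stand-in for the Sentry hook mentioned above.

```go
// Sketch: controllers report non-fatal errors through one funnel, and the
// process installs an extra global handler to centralize them.
package main

import (
	"errors"
	"log"

	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)

func main() {
	// Install an additional global handler; every HandleError call in
	// the process now also reaches it.
	utilruntime.ErrorHandlers = append(utilruntime.ErrorHandlers, func(err error) {
		// A real deployment might forward to Sentry or similar here.
		log.Printf("centralized report: %v", err)
	})

	// Controller code funnels errors through HandleError instead of
	// logging ad hoc.
	utilruntime.HandleError(errors.New("failed to sync widget: example"))
}
```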
F
The exact call site matched, but there are different reasons that you can fail from the same call site, and if you get the same message but with a different namespace in it, for instance — you want the call site to match perfectly, but you want your message to be within 75 percent the same, something like that, right? So you don't get both matching for a different call stack, but you do still de-dupe messages that are listing nothing but unique namespaces. I actually feel that that...
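A toy sketch of the de-dupe rule just described: suppress a log line when the call site matches exactly and the message is "mostly" the same. The 75-percent figure comes from the discussion; the token-overlap similarity measure is invented for illustration, not a real Kubernetes implementation.

```go
// Sketch: de-duplicate by exact call site plus fuzzy message match, so
// "same error, different namespace" collapses while a genuinely different
// failure from the same line still shows.
package main

import (
	"fmt"
	"runtime"
	"strings"
)

type deduper struct {
	lastByCallSite map[uintptr]string
}

// similarity is a crude token-overlap ratio between two messages.
func similarity(a, b string) float64 {
	at, bt := strings.Fields(a), strings.Fields(b)
	if len(at) == 0 || len(bt) == 0 {
		return 0
	}
	set := map[string]bool{}
	for _, t := range at {
		set[t] = true
	}
	common := 0
	for _, t := range bt {
		if set[t] {
			common++
		}
	}
	return float64(2*common) / float64(len(at)+len(bt))
}

// Log prints the message unless its immediate caller already logged a
// near-identical one.
func (d *deduper) Log(msg string) {
	pc, _, _, ok := runtime.Caller(1) // exact call site of the caller
	if ok {
		if prev, seen := d.lastByCallSite[pc]; seen && similarity(prev, msg) >= 0.75 {
			return // duplicate: drop it
		}
		d.lastByCallSite[pc] = msg
	}
	fmt.Println(msg)
}

func main() {
	d := &deduper{lastByCallSite: map[uintptr]string{}}
	for _, ns := range []string{"ns-a", "ns-b", "ns-c"} {
		d.Log("failed to sync namespace " + ns + ": connection refused")
	}
}
```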
H
But one argument would be: an actual problem we have today comes from the generic, spigot-style errors in controllers, where we don't actually know where the error might have originated. And so, if we were going to go to the effort, doing more to annotate the error on its way in is required to make that easier to sort out afterwards — because, like, finding the call stack from the source is kind of a hack. The next step, you know, might just be annotating the errors better.
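A minimal sketch of "annotating the error on its way in": wrap the error with the context the controller has locally, so the funnel at the top reports something traceable without reconstructing a call stack. fmt.Errorf with %w is the modern stdlib way; at the time of this meeting, github.com/pkg/errors was the common equivalent.

```go
// Sketch: annotate at the point of failure, classify at the top.
package main

import (
	"errors"
	"fmt"
)

var errConnRefused = errors.New("connection refused")

func syncNamespace(ns string) error {
	// Wrap with what we know locally instead of returning the bare error.
	return fmt.Errorf("syncing namespace %q: %w", ns, errConnRefused)
}

func main() {
	err := syncNamespace("ns-a")
	fmt.Println(err)                            // syncing namespace "ns-a": connection refused
	fmt.Println(errors.Is(err, errConnRefused)) // true: callers can still classify it
}
```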
F
Just comment that out, though, so we can put it back in later, cuz it's gonna blow up. — Don't comment it out; put it in, like, an "if false" block or something. — Fine. I mean, yeah, it's one of those things I suspected we'd need next, and it's one that — I've seen it on my local system; I haven't seen it in production yet, probably because they all get filtered by something else.
M
That's fair. So, Daniel, you wanna start? Clayton was unsure, and David was... man, I think, yeah.
A
It's certainly a lot easier to search by call site than it is by error message, because the error messages are constructed, and therefore you don't necessarily know what you should search for to find a string match in the source code. So it's super handy to know where the error was sourced from.
F
There's a need for it in the CLI. The first one is scale, right? In the beginning the CLI had custom logic for every scalable resource there was, and so did the HPA. We eventually collapsed them down — the HPA made it, I think, last release — and in the CLI the work is underway now; there's a community member, polynomial, who's taking it on and using a generic scale client in the CLI.
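A sketch of the generic, polymorphic scale client being described: one code path that can read and resize anything exposing a scale subresource, instead of per-resource logic. Signatures follow current client-go (the modern form of what was being adopted); the deployment name and namespace are invented.

```go
// Sketch: the same Scales() calls work for deployments, replica sets, or
// any resource with a scale subresource — that's the polymorphism.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/scale"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	mapper := restmapper.NewDeferredDiscoveryRESTMapper(memory.NewMemCacheClient(dc))
	sc, err := scale.NewForConfig(config, mapper,
		dynamic.LegacyAPIPathResolverFunc,
		scale.NewDiscoveryScaleKindResolver(dc))
	if err != nil {
		panic(err)
	}

	gr := schema.GroupResource{Group: "apps", Resource: "deployments"}
	s, err := sc.Scales("default").Get(context.TODO(), gr, "example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", s.Spec.Replicas)

	s.Spec.Replicas = 3
	if _, err := sc.Scales("default").Update(context.TODO(), gr, s, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```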
F
But when we look at the CLI, there are more resources that match this pattern, where I want to do something that is functionally the same on different resources, and the next example is probably logs. So I think what I'd like to mock out next is logs — it's a big burden. Does anything besides a pod have a log? Yeah. So if you look at what the CLI likes to do, it likes to be able to gather logs from replication controllers and replica sets and daemon sets. And how do those things have a log? Well, so there are multiple...
F
There are a couple of different options: there is "find one", and then there is "find all", and the CLI has implemented "find one". There is significant utility in that, not just for our resources but for other resources. You can imagine other controllers who would like to be able to say, you know, "little controller, I've created this, and this is how you get to my logs", and so as a polymorphic resource that one makes sense.
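A sketch of the "find one" behavior: given a controller's label selector, pick a single pod it owns and stream that pod's logs — the client-side approach being discussed. The namespace and selector are invented.

```go
// Sketch: "find one" pod for a controller and stream its logs.
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "Find one": list pods matching the controller's selector and take
	// the first; "find all" would iterate over every item instead.
	pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=example", Limit: 1})
	if err != nil || len(pods.Items) == 0 {
		panic(fmt.Sprintf("no matching pod: %v", err))
	}
	pod := pods.Items[0]

	stream, err := cs.CoreV1().Pods(pod.Namespace).
		GetLogs(pod.Name, &corev1.PodLogOptions{}).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}
```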
A
This might be a topic for a more specific meeting. I don't know if I buy that. I agree it's useful to be able to — like, your service is having an error, so collect all the logs from all the things that could possibly be having the error — but that's not really the same operation as getting a log from a specific pod, I...
F
I think that SIG CLI, who have been working on the user experience for how people actually use our system, would disagree. They have the experience dealing with the users, which says users want to see the logs for their thing, and they actually have a fairly well-defined and long-lived algorithm for doing that.
L
For a long time — Marek and I, for literally about a year — we had a traceability problem, where we wanted to emit events from all places so that we could actually have traceability for all the transitions that occurred for all the resource objects over time. This became thorny, but with the events v2 API, the plan was to be able to support that level of granularity.
L
I think we should audit the transitions for resources across the system, when they go from state A to state B; the edges are not recorded systematically throughout the system today. So if you wanted to see transitions — where things are falling over or failing — if you had all of the events traced in the event stream, you would be able to know where things failed and why, and the debug information would be the extra logs. So having UX for traceability is what it sounds like.
H
A log — well, and jobs. Like, jobs is a great example: a cron job has logs, arguably, and if I wanted to see the cron job logs, I would want to see them across all of its jobs. And there are different ways of implementing it — like, down the road, you could pull this in on the client side if you have a UI that's dealing with extensions, like logs for when a service instance comes up in Service Catalog. Again, some of these are further out.
H
...and I would have to go and make three separate calls on the client side, knowing that, in order to get all of the events. And so I think, tying it back in to your point, I could imagine an events subresource for an object that, on the server side, does something suitably simple to go find the events for the things that are impacted, as a series of calls.
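A sketch of the client-side calls such an events subresource would fold together: today a client queries events per object with a field selector, once per related object. The object names are invented.

```go
// Sketch: one events query per impacted object — e.g. a cron job, its
// job, their pod; a server-side subresource could do this fan-out for us.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	for _, ref := range []struct{ kind, name string }{
		{"CronJob", "example"},
		{"Job", "example-1577836800"},
		{"Pod", "example-1577836800-abcde"},
	} {
		events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
			FieldSelector: fmt.Sprintf("involvedObject.kind=%s,involvedObject.name=%s", ref.kind, ref.name),
		})
		if err != nil {
			panic(err)
		}
		for _, e := range events.Items {
			fmt.Printf("%s/%s: %s %s\n", ref.kind, ref.name, e.Reason, e.Message)
		}
	}
}
```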
A
Alright, I want to make sure we get to our last topic. My thought here is: I'm definitely in favor of expanding our list of polymorphic resources. I think maybe a topic on our mailing list or something would be good, because I think we probably need some offline discussion about how much it makes sense to expand this logs concept.
A
...encouraged to use aggregation when actually CRDs are more appropriate for their use cases. So I think we decided we are okay if the aggregation API goes to GA, because — there's still a lot of, like, libraries and stuff — it should be clear to people, as soon as they dig in, that this is actually going to be a lot more work to build an aggregated API than it is a CRD.
F
Yeah, I think you're probably right. Our API server libraries are still not stable — our READMEs indicate that — but the actual act of aggregation, for the people that are currently using it, has been working well, right? Service Catalog and the metrics server and OpenShift have all used it for some time.
H
What is — please, no — I'm just saying, in practice, everyone right now that I know of that is building a real aggregated server for something that's important, but not yet super performance-critical, is using CRDs: Service Catalog, and two or three of the OpenShift aggregations that we've discussed. And then, I feel like somebody else in the community told me about this just recently, but they basically were like, we don't want to run our own etcds.
C
To maybe be clear, I'm not advocating for this SIG or any group to sign up for creating that storage API in some finite amount of time. I'm just saying that I think we need documentation or guidance for end users who are building aggregated API servers, and the de facto answer is CRDs — and if CRDs are not going GA, then it's not as strong a statement as it could be. And so I don't know what we're trying...
H
I do agree it feels a little weird. I mean, maybe there's a desire to get CRDs to GA; it's just, we know there's more work there than there is in aggregation, and would we hold aggregation just because we want to do more work on CRDs? So I tend to lean the way that I don't see anything on the horizon...
H
...that would change aggregation. I understand the concern about it going GA before CRDs, but I kind of also think something not being in GA has never stopped any of our users from deploying any of it, especially the people who would write CRDs. And so, you know, it's kind of one of those pragmatic things, I think. Okay.
H
It was just saying, like, the reason why we're doing it is because we figured that telling someone how to run an etcd, when there's already one they have to run — running a second one is usually twice as much work to do well. It's only when you get to, like, three or four or five etcds that the operational toil starts to go away, because you have to automate it. It's like, two is worse than zero.