From YouTube: Kubernetes SIG API Machinery 20170913
Recording
Welcome to the September 13th, 2017 SIG API Machinery meeting. I think we have a pretty full agenda, so we'll get started. Maybe just briefly I will introduce the people in the room, since there are some new faces, and there are even some people off camera. We have Joe... yeah, there's Joe, who's been working on [inaudible] with us, and other things. Walter.
Would we be happy with forking etcd ourselves and using that [as the interface] to our blob [API]? And I think that that's the sort of question we need to ask, because the thing that we created for our storage should probably move us in the direction of allowing someone out of tree to do something reasonable, like we've done with other webhooks in the product and other pluggable mechanisms.
A CRD, there's not [an etcd question there]. An aggregated API server is running, presumably, the generic REST storage, which takes a storage interface, and we're wanting to back that with something, right? We could back it with etcd: it could run its own etcd, or it could run against the same etcd that the main API server [uses]. Or, you know, we could back it with a purpose-built blob-store type of API; that would implement the storage interface, right? And...
With Daniel's [point] that we don't want in-tree implementations targeting specific vendors: instead of saying "no new in-tree implementations", I would rather see something that says additional in-tree implementations will be external, extension focused. So they would be defining an API that you could integrate with, and we...
In the absence of that, what our external API is, is the etcd3 wire format, which I think is, you know... I mean, someone could do that today, and that's basically saying "we're not going to support you, you maintain this yourself", but we're going to want to provide that. Your second point is a blob-store API as a potential solution, and it would have to be implemented; it would be an implementation of the storage interface. And so, if we're defining that API, why wouldn't we also let people who want to compile...
What I want to do is expand the storage interface to include all the features of etcd v3, the ones that we want to use at least, and I don't want to be in a situation where, every time we expand the storage interface, we break a bunch of storage implementations. So I sort of want to draw the boundary between processes, and say our storage API, the one that we expect [people] to write code against, is the etcd3 API. If...
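The in-process boundary being discussed can be sketched in Go. This is a minimal, hypothetical stand-in (the real thing is the `storage.Interface` in `k8s.io/apiserver`; all names below are invented for illustration): adding any method to a Go interface breaks every out-of-tree implementation at compile time, which is the maintenance concern raised here, and which a versioned wire API like etcd3's avoids.

```go
package main

import (
	"errors"
	"fmt"
)

// KeyValue is a hypothetical, much-simplified stand-in for the kind of
// in-process storage interface under discussion.
type KeyValue interface {
	Create(key string, value []byte) error
	Get(key string) ([]byte, error)
	// Any method added here (indexes, watch, ...) breaks every
	// out-of-tree implementation at compile time.
}

// memoryStore is one implementation; an etcd3-backed store would be another.
type memoryStore struct{ data map[string][]byte }

func newMemoryStore() *memoryStore {
	return &memoryStore{data: map[string][]byte{}}
}

func (m *memoryStore) Create(key string, value []byte) error {
	if _, ok := m.data[key]; ok {
		return errors.New("key exists")
	}
	m.data[key] = value
	return nil
}

func (m *memoryStore) Get(key string) ([]byte, error) {
	v, ok := m.data[key]
	if !ok {
		return nil, errors.New("not found")
	}
	return v, nil
}

func main() {
	var s KeyValue = newMemoryStore()
	s.Create("/registry/pods/default/nginx", []byte(`{"kind":"Pod"}`))
	v, _ := s.Get("/registry/pods/default/nginx")
	fmt.Println(string(v)) // prints {"kind":"Pod"}
}
```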
I'm guessing maybe this clarification might help. The first bullet is really saying that we only really want to store in etcd, and we don't want to support storing in Postgres or MySQL when it comes to the generic store implementation. But as far as the second bullet point goes, we can, and probably should, have a little shim that conforms to the storage interface that we currently have, so that a user API server can be configured to use that shim to talk over an API that we define, that goes into a blob [store].
What I'm saying is, like, I think we need, somewhere, a layer that talks to etcd3, that can build indexes and maintain them, that can do things like split spec and status. But I'm not sure we need to expose that complexity to user-written APIs. Like, when a user API server goes ahead and does a write operation, I'd expect that data to come in to the blob-store interface, and the blob-store interface, maybe it has a spot for this: "this is the spec blob?"
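The "split spec and status" idea can be sketched as follows. `splitSpecStatus` is an invented helper, not anything in the Kubernetes codebase; it only shows how one serialized object could be handed to a blob-store layer as two separate blobs.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// splitSpecStatus is a hypothetical sketch: given one serialized object,
// return the spec and status as separate blobs, so a blob-store layer
// could keep them in different spots (e.g. to let status be written
// independently of spec).
func splitSpecStatus(obj []byte) (spec, status []byte, err error) {
	var m map[string]json.RawMessage
	if err := json.Unmarshal(obj, &m); err != nil {
		return nil, nil, err
	}
	// RawMessage preserves the original bytes of each field untouched.
	return m["spec"], m["status"], nil
}

func main() {
	obj := []byte(`{"kind":"Pod","spec":{"nodeName":"n1"},"status":{"phase":"Running"}}`)
	spec, status, _ := splitSpecStatus(obj)
	fmt.Println(string(spec))   // prints {"nodeName":"n1"}
	fmt.Println(string(status)) // prints {"phase":"Running"}
}
```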
So, I mean, we completely understand the maintenance burden of supporting another data store, right? But you do have a really good counterexample in networking, right? The entire ecosystem is vibrant because of the various networking components that are out there, and they've made that possible. Why is it that this community has to take a hard stand on supporting only one particular datastore?
I think this one is tricky, because portability is one of the critical benefits of using Kubernetes at all, and so the API server needs to work in the situation where it's not on a cloud and can't use [inaudible] or Dynamo or SQL, and we need to be able to test and verify that the behavior is consistent across different deployment environments.
[I'm not] saying that we retest it for them. I am saying that, if you build a remotable interface, and if we then use that remote interface for our aggregated API servers, then we will have a very high degree of confidence that it works. And if somebody chooses to plug in on the back end, it will end up being a plug-in, external, out of tree, just like we have for volumes and for networking and so [on].
This is... this is really not a... this is an implementation detail, like where the API server chooses to store stuff. And I think commonality between clusters is only going to be a good thing for system administrators: if you learn how to care and feed for one Kubernetes cluster, that knowledge is portable. Doing surgery and swapping out the storage backend is actually a pretty big deal [at the heart] of the system, I mean.
It's cleanly separated today, and I don't think we should give that up, especially when the need to externalize storage for aggregated or extension servers is present. Like you say, there's no benefit to users; but if I'm running an extension server, having this API defined and usable by any API server gives me immediate benefit, because I can run my server and hook into storage. Like, defining the API and writing an implementation of storage that uses that API [is the] benefit.
Let me take another approach, right? Like, the point of an API is to define the interface, right? To give some isolation between the consumer of a service and the provider of a service. Right now we have, in code, an interface that does not cross process boundaries: the storage interface is a Go concept. It is not an API; it is not, like, an inter-process API we have external to our code.
Like, actual process-boundary crossings can happen in either the etcd v2 format or the etcd v3 format. I am proposing that we say we're going to stick to the etcd v3 format. And in terms of process crossing, I think we need a solution that is, like... we don't want to talk the etcd3 protocol between user API servers and the main API server; that would be kind of silly, and doesn't solve... or, I don't know that.
I don't disagree that a storage API is not something we should do lightly. But if we're not going to define a storage API, then I think extension API servers should use etcd. I don't think we should come up with something other than etcd, and other than a generic storage API, specifically for aggregated servers.
To me, that's really a separate question from... do we want... there are all [these] API servers, right? Like, they're all trying to store, and they're all based on the same generic API server. Then the API that you're using to store the data for an aggregated API server should be the same as the one that you are using for a kube-apiserver, and if we want to add to that, to be able to support a blob store, by all means.
So the API that I want between the aggregated API server and the main API server, I want to abstract that away. So, like, [if] we add indexing, you should not have to recompile your extension API server to take advantage of that; it should just be the case that all of your queries suddenly are faster.
I think the mental model of seeing kube-apiserver as just another [aggregated server] is a useful thought exercise in any design process, so I think that's a good perspective. I think, certainly, I've been thinking of it as: system data is somewhat different than user-generated data, and I think the system should continue to function even if the user [workloads break] themselves.
For the first point, I will say: I think that I would agree that we shouldn't add more in-tree storage implementations for specific backends or vendors. If we add more in-tree storage implementations, they should be external focused, extension focused, like the networking CNI plug-in [or] the volume CSI plug-in. They...
The interested parties around designing an API for that are people wanting to run aggregated storage and delegate that to the main API server, without giving them direct access to etcd, and also alternate storage vendors who want to provide their own storage. So, like, there's interest from both sides; but, at the same time, there is a cost to abstraction, and designing and maintaining that API over the long haul is going to be expensive, and so we have to be careful about how we do it.
The same thing happens with networking, and so part of what we [would] gather is what storage [people] are using, and we validate with etcd. And we [would] validate this hypothetical API by means of aggregated servers running against the main API server, so we test it from both sides; but your storage implementation owns your storage, and so all...
We are 40 minutes in; I think we should table this issue for the moment, and I will write something up, concrete, for us to argue about at the next one. Does that sound okay? "Yep, that's fine." Okay! Next item on the list: it came to our attention that the current upgrade-storage-objects script is [pretty badly] broken, and at least some distributors... distributions are not actually, in fact, running it between upgrades or after an upgrade. So I think we, as a SIG, need to come up with a solution that just works.
What I'm thinking here is, like, the API servers have a consensus mechanism where they produce the desired version of the cluster, because there could be more than one API server, and if you do a rolling upgrade they might not all be the same version. So there has to be some, like, desired version, and then I think there should be a controller, which could be built into the controller manager or could be a separate binary.
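The consensus idea can be sketched roughly: during a rolling upgrade, the cluster-wide desired storage version can only be the lowest version still running, and a migration controller would wait until all API servers agree before rewriting stored objects. Everything below (the version-string format, the function names) is an illustrative assumption, not the actual mechanism.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// desiredStorageVersion picks the lowest version reported by the running
// API servers: until the rolling upgrade finishes, objects must stay
// readable by the oldest server, so migration cannot target anything newer.
func desiredStorageVersion(reported []string) string {
	lowest := reported[0]
	for _, v := range reported[1:] {
		if less(v, lowest) {
			lowest = v
		}
	}
	return lowest
}

// less compares dotted numeric versions like "1.7" component-wise.
func less(a, b string) bool {
	ap, bp := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(ap) && i < len(bp); i++ {
		ai, _ := strconv.Atoi(ap[i])
		bi, _ := strconv.Atoi(bp[i])
		if ai != bi {
			return ai < bi
		}
	}
	return len(ap) < len(bp)
}

func main() {
	// Mid-upgrade: two servers already on 1.8, one still on 1.7.
	fmt.Println(desiredStorageVersion([]string{"1.8", "1.7", "1.8"})) // prints 1.7
}
```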
So, you know, obviously using the binary format is a much more versioning[-friendly] thing to do, but we've had some needs to just go in and directly inspect the data that's in etcd, and right now, if you look at it in [binary, it is] kind of hard to piece together what's going on there. It might be too small, yeah; let me see... [the font] size is fine. Okay, okay. But, you know, here's just kind of like a hex [dump of] what you might get for a pod. You can kind of see some strings in there, and we can kind of make sense of it, but, you know, we have the ability to decode this for people. So the idea was just a simple command-line tool that allows you to decode data encoded [in protobuf] back, if you really had to, and also to be able to extract the DB files in a more emergency situation. It's a pretty simple tool, and what it does is it just uses the Kubernetes code to do its work.
So if you say... let me grab a simple example here, and then we'll move on; I'm gonna just show you one really quick. So say we take and we get the value of a [key], and we want to get that back as JSON: you could just, you know, pipe that through the tool, and... well, that was the [YAML], but you get the idea. It just does the decoding; it just uses the right logic.
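One detail such a decode tool can rely on: Kubernetes prefixes protobuf-encoded storage values with the magic bytes `k8s\x00`, while JSON-encoded values simply start with `{`. A sketch of that dispatch, assuming raw bytes read straight out of etcd (`detectEncoding` is invented for illustration, not the tool's actual code):

```go
package main

import (
	"bytes"
	"fmt"
)

// protoPrefix is the magic prefix Kubernetes puts in front of
// protobuf-encoded storage values: "k8s" followed by a zero byte.
var protoPrefix = []byte{'k', '8', 's', 0x00}

// detectEncoding guesses how a raw etcd value was serialized, so the
// right decoder (protobuf vs. JSON) can be chosen before re-encoding
// the object as readable JSON.
func detectEncoding(raw []byte) string {
	switch {
	case bytes.HasPrefix(raw, protoPrefix):
		return "protobuf"
	case len(raw) > 0 && raw[0] == '{':
		return "json"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(detectEncoding([]byte("k8s\x00\x0a\x09...")))  // prints protobuf
	fmt.Println(detectEncoding([]byte(`{"kind":"Pod"}`)))      // prints json
}
```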
[etcd] stores these kinds of obfuscated keys that it uses to maintain its indexes and version history. Yeah, this tool will actually go through etcd, or go through bolt DB, and find your etcd key for you, and all the versions, if they exist, so that you can actually go look at historical versions that you might [still be able] to see. I'm...
I'd love to pick up some things from API machinery; in particular, we're interested in CRD sub-resources, JSON schema for CRD resources, and reflection. I'm starting to trawl through GitHub issues and the past notes for this SIG, but if anyone has any pointers on that, or knows of work items they could pick up, that would be really great. I would...