From YouTube: Kubernetes SIG API Machinery 20170426
On the agenda I still put an item on the [inaudible] proposals that will surface resources without requiring compile time [inaudible]. Oh yeah, okay, so this is, it's still on the call, just like [inaudible]. No, that's okay. There are two proposals that, in my opinion, are not in conflict, but they are tackling different aspects of the same problem, and they don't reference each other, [inaudible].

David and myself and Walter have been looking at getting the aggregation layer turned up, adding it directly into the API server. It may or may not be common knowledge, but the API server is more flexible than it possibly should be. In this case it is intended, it is built, to not depend on any of the networking stack.

But it is a current assumption of the control plane that those parts of the stack are actually optional. If you think about it, if we were to start relying on one of those things, I'm not saying there definitely would be a bootstrapping problem, but it would be easy to have a bootstrapping problem.

I think Walter was looking at refactoring this. So, just to be clear for everyone who hasn't been super involved: the particular point of contention is that the aggregation layer needs to dial out to services in the cluster, the services which host user API servers. So, and the...

I know how to do it in the same way, right? As I understand it, it is not. The point is that it doesn't support any flexibility, right? So when you hit the proxy subresource, it is doing something different than when you try to hit a service from a pod. Right: the proxy subresource says, get me one of these things, any of these things, I don't care which one, I don't care about load balancing. Proxy me to this thing. Well...

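(A minimal sketch of the semantics being described, not the actual apiserver code path: resolve the service to some ready endpoint, any one of them, with no load-balancing contract, and dial it directly. A modern client-go and the helper name are assumptions.)

```go
// Sketch: resolve a service to an arbitrary ready endpoint and dial the
// pod directly, the way the proxy subresource semantics are described.
package sketch

import (
	"context"
	"fmt"
	"net"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dialAnyEndpoint picks an arbitrary ready address behind ns/svc and opens
// a plain TCP connection straight to the pod, bypassing the cluster IP.
func dialAnyEndpoint(ctx context.Context, c kubernetes.Interface, ns, svc string) (net.Conn, error) {
	eps, err := c.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	for _, s := range eps.Subsets {
		if len(s.Addresses) > 0 && len(s.Ports) > 0 {
			// "Get me one of these things, I don't care which one."
			addr := net.JoinHostPort(s.Addresses[0].IP, strconv.Itoa(int(s.Ports[0].Port)))
			return net.Dial("tcp", addr)
		}
	}
	return nil, fmt.Errorf("no ready endpoints for %s/%s", ns, svc)
}
```
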
I think the code paths happen to do the same thing at the moment, but there's no guarantee that they will continue to do the same thing. Like, if a user customizes kube-proxy or drops their own network routing plugin in, then they would stop corresponding. At the moment, though, in the standard stack, they do correspond.

I don't know of any setups that actually use this aside from GKE, and maybe [inaudible]. kubeadm doesn't do that. Given [inaudible], anyone who wants to run a daemonset that they want to co-locate on their masters, which I've seen issues about, people wanting to be able to do that, would also have kube-proxy running. So, is it, I think...

I think there's this tension, because: why do you self-host or not, right? Like, if you self-host, then, yeah, the master has available all the stuff available to any node. If you don't self-host, I don't necessarily have that, and I think Kubernetes has to support both environments at the moment.

That's why we're talking about it on this call, because, yeah. Clearly we need to have it all defined. I mentioned this to Brian, and he pointed at his layers document and made the claim that network routing is a subsequent layer from the core (or, what does he call it, the nucleus or whatever), and therefore stuff in the nucleus shouldn't depend on network routing. Now...

If that's going to be the case, then that just says that we actually built the API wrong, and what we really wanted was to actually just put an IP or a DNS name in there, right? Not a service, because if you have a service, that means that you are relying on networking availability to the service, which [inaudible].

Like, I'm not sure that Brian's burrito model is exactly relevant here, but I don't think we were really following it anyway, because we need the service controller to be existing in the first place. Like, there's plenty of stuff that has to work besides the API server before we can get to this point. All we're really saying is we don't get the information from kube-proxy, but it has to have been set up somehow. Right now, you know, we're not avoiding a dependency on that part of the system. Yeah.

I think performance is a really valid concern for what it is that we are doing here, right? You resolve it onto an IP, and then from there [inaudible]. I don't think that handing it to the next-stage dialer is going to be so much fun after this, and is it worth a change for that? Are you talking about the...

We're talking about tunneling into remote networks; there's an approach for this: it's proxies, right? How about we just say that the API server, you know, has proxy variables that you can set. When you're running self-hosted, you don't have to set those proxy variables. Build an SSH-specific proxy (actually, SSH supports that) and then just move on with our lives. That's...

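(A minimal sketch of the two options just mentioned, the standard proxy environment variables and an SSH-specific hop, wired into an http.Transport. The bastion address, credentials, and function name are hypothetical; this is not how the API server is configured today.)

```go
// Sketch: route outbound connections through an SSH tunnel instead of
// teaching the aggregator about cluster networking.
package tunnelsketch

import (
	"context"
	"net"
	"net/http"

	"golang.org/x/crypto/ssh"
)

// transportViaSSH returns an http.Transport whose connections are dialed
// from the bastion's side of the network, over an SSH tunnel.
func transportViaSSH(bastion string, cfg *ssh.ClientConfig) (*http.Transport, error) {
	client, err := ssh.Dial("tcp", bastion, cfg) // e.g. "bastion.example.com:22" (hypothetical)
	if err != nil {
		return nil, err
	}
	return &http.Transport{
		// Option 1 from the discussion: honor the standard proxy environment
		// variables (HTTP_PROXY/HTTPS_PROXY/NO_PROXY). When self-hosted,
		// simply leave them unset.
		Proxy: http.ProxyFromEnvironment,
		// Option 2: the SSH-specific hop; every outbound connection
		// (including any proxy chosen above) is dialed through the tunnel.
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			return client.Dial(network, addr)
		},
	}, nil
}
```
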
If we don't think that we can deliver, in code, something that is as reliable as an external proxy, that would be a good argument in my mind. Towards, like: how much more proxying are we going to build into kube-proxy that's different, or into the API server that's different than kubectl proxy, or different than supporting gRPC endpoint connections, etc., etc., etc.? The idea for...

[inaudible] It's something that I find a lot more palatable than a code change that just says: you know what the aggregator is going to do? The aggregator is going to implement its own kube-proxying. So if it were set up to be something where it's, like, an optional wrapper like that, that wouldn't bother me as much.

Right, so I think it might be worthwhile to take another step back here. The kube-proxy, fundamentally, is an affordance for legacy code, legacy code that doesn't know how to actually talk to Kubernetes' discovery service and the endpoints directly. The kube-apiserver is not legacy code. Why does it actually need to use a cluster IP?

Note, fundamentally, the service object is essentially a named query for doing service discovery. We've glommed on a bunch of other stuff, like load balancing and cluster IP, but fundamentally a service is essentially identifying a set of pods. And the load balancing, in terms of actually talking to those, can be done in the kube-proxy, or can be done client-side, in a sidecar or inline in the binary. As folks move to things like Istio and linkerd, the cluster IP, the service proxy, and the kube-proxy implementation become less and less critical.

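(A minimal sketch of "the service is a named query": resolve the service's endpoints through the discovery API and load-balance client-side, never touching the cluster IP or kube-proxy. The round-robin policy and the names here are illustrative assumptions.)

```go
// Sketch: client-side load balancing across a service's endpoints, the
// way a sidecar or in-binary resolver would do it.
package clientlbsketch

import (
	"context"
	"fmt"
	"net"
	"strconv"
	"sync/atomic"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

type roundRobin struct{ next uint64 }

// pick resolves ns/svc to its current pod addresses and round-robins
// across them in the client, bypassing the cluster IP entirely.
func (r *roundRobin) pick(ctx context.Context, c kubernetes.Interface, ns, svc string) (string, error) {
	eps, err := c.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	var addrs []string
	for _, s := range eps.Subsets {
		for _, a := range s.Addresses {
			for _, p := range s.Ports {
				addrs = append(addrs, net.JoinHostPort(a.IP, strconv.Itoa(int(p.Port))))
			}
		}
	}
	if len(addrs) == 0 {
		return "", fmt.Errorf("no endpoints for %s/%s", ns, svc)
	}
	n := atomic.AddUint64(&r.next, 1)
	return addrs[n%uint64(len(addrs))], nil
}
```
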
But I think, practically, today 99% of all traffic on a kube cluster obviously goes through the service VIP. So it's somewhat of a long-run argument. I guess there's two arguments. There's: should the API server be implementing that client component itself, or sort of delegating to something that's doing that for it, whether it's kube-proxy or a SOCKS proxy, whatever. And, like, the failover and the fail-closed and the fail-open stuff, you know, talking to different backends: I have a lot of concerns about that.

Just from the level of: we don't have a bit of code that is doing that well today, except the user-space proxy code. And, you know, I mean, it's stuff we can write. Can it be kept isolated enough that it doesn't impact everything else? Maybe. I mean, if you guys want to build something, if there's a need to build something that is an SSH-tunneling dialer proxy load balancer, client-side...

We know that that's a case. I mean, there's the question of whether things like logs and exec should have to do this also, and whether those APIs are right, but that's a separate train of thought. Okay, the second thing here is implementing the cluster IP behavior of taking a stable IP address and then spraying that against pods. Okay, that's an affordance for legacy workloads, and over time (this is my view, okay, you can...)

But, you know, fundamentally, cluster IP should be optional for enlightened workloads, and I would say that the API server is enlightened. And then there's the question of whether the API server should need to actually talk to pods, and we know that that needs to happen for a whole host of reasons. And so, being able to join networks and have the API server talk to pods: that's simple proxying. There's no load balancing there; that's just proxying. And so an off-the-shelf...

We might be able to drop that in, in some sense, for kube with aggregated servers. So if we don't expect that proxy to be doing both simple network proxying and cluster IP proxying, then we can simplify the problem and use something off the shelf. So what I'm saying is that, from the API server's point of view, if we make cluster IPs optional, then we might be able to utilize more standard networking stuff. Yeah.

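(A minimal sketch of "use something off the shelf", assuming plain network proxying is all the API server needs once cluster IPs are optional: its outbound dialer could be satisfied by a stock SOCKS5 proxy via golang.org/x/net/proxy. The function and parameter names are hypothetical.)

```go
// Sketch: a pluggable dialer for the API server's outbound connections,
// backed by an off-the-shelf SOCKS5 proxy when the master is not on the
// pod network.
package socksketch

import (
	"net"

	"golang.org/x/net/proxy"
)

// dialerForMaster returns a Dial function: plain TCP when the master shares
// the pod network, a stock SOCKS5 hop when it doesn't.
func dialerForMaster(socksAddr string) (func(network, addr string) (net.Conn, error), error) {
	if socksAddr == "" {
		return net.Dial, nil // master can reach pods directly
	}
	d, err := proxy.SOCKS5("tcp", socksAddr, nil, proxy.Direct)
	if err != nil {
		return nil, err
	}
	return d.Dial, nil
}
```
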
So I don't want the default mode to be: let me try to resolve my way all the way down to a pod and talk to that. I would like the default mode to be the one that will work correctly in the vast majority of deployments that we know of, and in the direction of the self-hosting stuff, which is the way all the bootstrapping proposals are going.

So, no matter what we're doing, we're not saying we don't want to allow someone to use kube-proxy or the cluster IP, and we're not saying that we don't want to allow someone to be able to connect from a master, to some other component across a firewall, and be able to reach the endpoints. What does it come down to, then?

So I think that's missing. I think the document that would be helpful is basically: why do we have the sort of flexibility that lets us put the SSH tunnels in? Why does that exist? The short answer is: so that the master doesn't have to run in the same network as the nodes.

Let's see. Okay, we've gone over time. Any final thoughts on this? Seems like we've got an action item.

So let's move on. The last thing that I had on the list was mostly informational. So, as background, I think probably most of you know, we're in this sort of situation where we've got some parts of our code split into separate repositories and (sorry, I'm distracted by looking at what David is writing) we've got some code split out into separate repositories, and some code, like, half split out. The client library in particular: like half of it is, or, well, like three-quarters of it is directly copied and, no, sorry.

Three-quarters of it is, like, written in the staging area as the canonical location, and then there's, like, a quarter of it that's copied and assembled and stitched together from elsewhere in the main repository. And we sort of want to get to a world where code in the main repository can use the client library, code outside of the main repository can use the client library, and everything sort of just works. The blocker on that is the API types: right now they are being copied into the client repository.

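(A minimal sketch of where types would live after the split being described, assuming it lands roughly as the published k8s.io/api repository: clients import only the versioned external types, while the internal types stay in-tree for the API server and controller manager.)

```go
// Sketch: external clients only ever see the versioned types from the
// published repos; internal types never leave the main repository.
package importsketch

import (
	corev1 "k8s.io/api/core/v1"                   // external (versioned) types: canonical, published
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" // shared machinery types
)

// newService builds an object purely from versioned types, the only kind a
// client outside the main repo needs.
func newService(name string) *corev1.Service {
	// The internal types (k8s.io/kubernetes/pkg/api) stay in-tree, so the
	// API server and kube-controller-manager keep sharing one copy.
	return &corev1.Service{ObjectMeta: metav1.ObjectMeta{Name: name}}
}
```
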
Now, sort of a side effect of this is that you'd be able to have your own set of internal types, and I think that's really the right way for things to function in the future: for different components to have their own set of internal types. So...

I would like, how can we separate out the internal types also? Not separate them into the same repo as the API types, I get that, but I don't think that I want to have our controllers with a different copy of internal types, or kube-controller-manager with a different copy of internal types, than our API server. No.

We're talking about [inaudible], and right now the API server and kube-controller-manager agree on [inaudible]; it's the same one. So the mental load of trying to figure out what it's doing is minimal, right? Like: oh, I know what the internal types are, there are these, that's what version the internal types are at. And once someone updates it, there is near-zero drift, all the tests work, and you can move on. I don't think I want to move that.

Well, the way forward for today: we're splitting the external types out. That will let us do the client repo rationalization that we want to do, and kube-controller-manager and the API server will continue to use the internal types that are in-tree. So we're not going to change the functionality in this step; I think that's a future step. We are extracting it, in some sense.

That sounds reasonable. I don't know what wrinkles there are going to be in doing that, but, yeah, I think if we ended up with multiple copies, multiple kinds of internal types, then I would be very concerned. So I...

Yeah, yeah, I agree. I mean, well, I'd really love to get to the point where we could just delete the staging directory and everybody develops in the external repos. I don't think we're going to get there this quarter, so I at least want to get to the point where the canonical copy of anything is clearly in one place in the staging directory. Right now that's true for some of the repos in the staging directory, and not true for some: client-go in particular.

It'll impact external users and other people in the system, and it's going to cause everybody to have to refactor stuff, so we should tell everybody about it ahead of time, so they're not surprised and people like Joe don't get on this call and yell at API Machinery for [inaudible] the community. Oh, Joe, you're... [inaudible] yeah, okay.
