From YouTube: Kubernetes SIG API Machinery 20190814
B: So, I think David isn't here, so I don't know if we can discuss the storage[-version] story. [It's] written super in-depth; I think David was mostly in agreement, and the sticking point is: should the API servers actually coordinate over the storage version — is that necessary? I think it's not essential for the rest of the proposal, but eventually, I think, API servers do need to coordinate over some things. There are some configuration details that are facts — that should be facts about the cluster, not facts about an individual API server.
B: It seems like, logically, the storage version is a fact about the cluster, not a fact about an API server, but it's not critical. And of all the things that we run into in the partially-upgraded, split API server version model, that doesn't seem like the most important one, the most significant one. So I'm fine if we, like, first start publishing each API server's idea of what is possible and what it prefers, and we can—
C: I think — I think that consumers do not need to actually coordinate what storage version to use, but they still need to coordinate what discovery document is shown, yeah. So, for example, if different API servers are using different storage versions right now, then the discovery document should just say, "currently the storage versions are not [consistent]" — something like that. Yeah.
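For concreteness, here is a purely hypothetical sketch of the kind of per-server publication being discussed: an object where each API server records which version it encodes a resource to, so that disagreement during a rolling upgrade is visible. All names and fields are illustrative, not an agreed API:

```yaml
# Hypothetical sketch: each API server publishes the version it uses to
# encode this resource to storage.
apiVersion: internal.apiserver.k8s.io/v1alpha1   # illustrative group/version
kind: StorageVersion
metadata:
  name: apps.deployments
status:
  storageVersions:
  - apiServerID: apiserver-1          # not yet upgraded
    encodingVersion: apps/v1beta2
  - apiServerID: apiserver-2          # already upgraded
    encodingVersion: apps/v1
  # While the entries disagree, there is no common encoding version --
  # i.e. "the storage versions are not consistent".
```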
B
Then
you
know
what
we
do
need
an
automated
view
of
the
discovery
of
yeah.
We
still
have
to
say
something
in
the
discovery
document.
It's
not
it's
not
good
to
say
like
what
the
individual
API
server
around,
because
clients
might
be
coming
through
a
load
balancer
or
something
and
getting
a
random
API
server
and
then
they're
like
like
they
might
read
one
discovery
document
and
then
send
a
response
to
another
API
server
and
so
yeah.
E: So the thing I want to be careful about is that that was talking about group versions, not about storage version, so it's not exactly the same. But when we talk about altering the discovery document based on what is supported by all servers, we want to be careful not to return a discovery document that omits things that some of the servers are actually serving, or things like garbage collection and namespace cleanup will not be correct. Yeah.
B
So,
like
a
coordinating
over
things
has
been
like
a
long-standing
issue,
I'm
pretty
sure,
there's
a
github
issue
somewhere,
where
I
listed
out
a
few
things
that
we
could
coordinate
over
I.
Think
this
the
of
the
discovery
doc
related
stuff.
The
storage
version
is
maybe
the
least
interesting
thing
to
coordinate
over.
B: I thought of another couple — okay, yeah, maybe one high-level bit we should decide on is: do we want a single object which serves all of our coordination needs, or do we want a model where we make a new resource to solve each separate coordination problem? That might be something to think about, because you can write the KEP differently depending on which we choose. Okay — anybody else have thoughts on this?
I: Possible, yeah — I think you're right; it's not that it's not possible. But the use case that I'm thinking of: there's a project in cluster lifecycle, called Cluster API, and there are common controllers that are being written, and they're going to be talking to different Cluster API endpoints, and it would be great not to have to modify those controllers with, like, explicit support for, you know, proxies and different network requirements.

So today those controllers just load a kubeconfig for the cluster that they need to talk to, and so it would be great to just add support for, like, an HTTP proxy, potentially a SOCKS proxy, in the kubeconfig, and then that can be sort of transparent to the controller: it just loads the kubeconfig and everything would then be prepared [by the client library]—
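The kubeconfig side of this proposal would be a small change. A minimal sketch of what it could look like, assuming a per-cluster `proxy-url` field — the field name and all endpoints here are illustrative, not something that existed at the time of this discussion:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: workload-cluster-1
  cluster:
    # The endpoint is not directly reachable from where the controller runs.
    server: https://10.0.0.1:6443
    # Hypothetical field: the client dials the tunnel instead of the server.
    proxy-url: socks5://tunnel.example.com:1080
users:
- name: controller
  user: {}
contexts:
- name: workload-cluster-1
  context:
    cluster: workload-cluster-1
    user: controller
current-context: workload-cluster-1
```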
B: Oh, I have some questions about, like, the setup here. Is it the case that you've got, like, multiple clusters and you've got a kubeconfig for each one, but the hostname or the IP address in each kubeconfig is the same, or different but just not accessible, or [overlapping]? Can you just be more concrete about what the networking model is that we're trying to solve?
I: Sure. So imagine that you have a cluster where these controllers run, and then the — what I call workload, or what Cluster API calls workload clusters — they might be running in environments, each of them in different networks, that restrict, let's say, ingress from the internet. And so then you might set up something like a TCP tunnel, and then over that TCP tunnel you can, you know, reach from your—
B: So we actually have a very similar problem getting traffic from the control plane into the cluster, at least in some environments. And Walter, who is on vacation at the moment, has just added this concept of an egress type to the API server, and a corresponding connectivity service that runs in the cluster. So from the API server's perspective, you configure, like: okay, [etcd] traffic goes directly out on the network.
B
This
way,
like
just
direct
directly
dialing
traffic,
for
that's
intended
to
go
through
the
things
running
in
a
cluster,
has
to
go
through
this
proxy.
So
there's
this
like
a
guy
server
dials
to
this
proxy
server
and
inside
the
cluster,
those
agents
that
also
dial
into
the
proxy
server
and
it
tunnels
through
the
connection
the
agents
make.
This
is
replacing
the
old
SSH
tunnels
that
API
server
used
to
used
to
make
so
it'll
be
great
to
get
back
that
out.
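As a rough sketch of the mechanism just described — the API server classifies its outbound traffic and sends some classes through the connectivity proxy rather than dialing directly — the configuration could look something like the following. The kind and selection names match the general shape of the egress-selector work, but the exact group/version and field names changed across releases, so treat this as illustrative:

```yaml
apiVersion: apiserver.k8s.io/v1alpha1      # illustrative version
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster        # traffic destined for nodes/pods in the cluster
  connection:
    type: http-connect # dial the proxy; in-cluster agents dial in and tunnel
    httpConnect:
      url: https://konnectivity-proxy.example.com:8131   # illustrative endpoint
- name: etcd           # etcd traffic keeps going out on the network directly
  connection:
    type: direct
```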
B: Yeah — so it makes sense to put that information in the kubeconfig file, and not into the environment of the program that's operating on the kubeconfig file, because then you have to put this information in two different places, and it means it's much harder to consume. You can't use a standard tool on a cluster behind this strange networking environment without doing something beyond editing the kubeconfig file, but—
E: So, from my perspective: multiple distinct proxies for sub-things within a single process — I can see the use case for that. I think the things we would want to be really clear about are what happens if the environment and the kubeconfig file both express opinions about proxy stuff. Like, which one wins? Is it an error? What's the precedence?
B
There's
another
model
which,
which
is
why
I
was
asking
about
the
the
IP
spaces,
is
another
model
which
is
the
people
setting
up
since
this
environment
are
expected
to
like
monkey
with
the
network,
namespace
or
whatever,
and
actually
make
tunnels
or
Nats
or
or
whatever,
so
that
particular
IP
addresses
actually
get
routed
through
a
proxy
or
manipulation
in
whatever
manner
is
necessary.
We
debated
going
with
that
model
for
the
for
the
the
egress
type
that
I
talked
about
a
minute
ago.
B
We
decided
against
it
because
it
was
kind
of
hard
for
like
we
thought
it
would
be
a
lot
easier
to
monitor
and
get
it
to
work
in
like
fixed
bugs.
If
we
had
all
this
stuff,
it's
running
as
code
inside
the
binary
that
we
like
understand,
then
it
will
be
to
debug
as
like,
Linux
networking
spaces,
which
is
not
really
our
very
expertise.
B: Okay, so — sorry, I mean, yeah, you could use that tool to solve this problem, but in the opposite direction: you run an agent in every cluster that you want to talk to, it dials back to a proxy server — a server somewhere that you have access to — and then you're somehow specifying which of those things you want to go to.
A: I'm wondering, without being an expert: how hard are the networking settings on the workload clusters that do not allow inbound connections and ingress? You know, I'm thinking out of the box: can the connectivity problems be solved even outside of Kubernetes, with the network setup?
B: Yeah, maybe — maybe actually, rather than trying to solve this by fixing kubeconfig files, we should think about trying to solve this by integrating the components that we just wrote with [Cluster API] somehow. I think we need to see a design that does that. I don't know.
B: The trade-off is, like — we can definitely make changes to kubeconfig that add this. The trade-off is that it makes every single consumer bigger; it makes their life harder, more complicated. And the question is: does the benefit from enabling this use case justify the harm from making everything complicated and going through the whole rollout process? Yeah. So that's the calculation we need to make when we decide yes or no on this.
B: That reminds me of something else that I wanted to mention: if you're talking about some multi-tenant setup, you should really think hard about whether it's okay for both of those credentials to be loaded in memory in the same process at the same time. Like, maybe only running one process per tenant might actually be safer, because it's much harder to, like, have a bug that shares memory between two things.
K: So it's again about proxies. A little bit of background on what we're trying to do: it's very similar to the previous historic attempt. Right now, for every single one of our clusters, in front of the API servers we have load balancers, but this is a very costly approach. So what we are currently experimenting with is a transparent TLS proxy which is doing [TLS] pass-through, and it's identifying which cluster — which control plane you want to talk to, which API server you want to talk to — via SNI. And this thing is working currently.
K: However, the only problem is that right now the kubelet does not support the master service being of type ExternalName, and therefore it's not automatically injecting the correct host — the correct environment variables. We can overcome this by having, like, a mutating [web]hook that [modifies] the pods, but that's not my [idea of a] solution. So I'm pretty much open for any discussion about this — how we can do it, if it's possible at all.
K: [ExternalName, right.] So when a client talks to a TLS host — so, for example, if the client attempts to connect to, for example, my-cluster.com, and there is a proxy in front of it which is capable of, more or less, [reading the SNI] — this proxy will transfer your connection to the actual server which is serving under this domain, more or less.
K: Correct, yes. So it will only add, like, the kubernetes — what was it called — hostname, and then service-something... yeah, I've written it there. So currently we can overcome this by having a mutating [web]hook which is mutating the pods, but it's not my [idea of a] solution, in my honest opinion. So — sorry.
F: Can you — can you go over the beginning again?

K: So the main problem is that components which are using the automatic in-cluster configuration discovery — for example, client-go — are looking for those specific environment variables, called KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST, and when they're not present, all those components fail to automatically discover that they're running inside a cluster. And the kubelet only injects those if the master service — kubernetes.default — is of type ClusterIP.
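To make the setup concrete: the experiment is roughly to point the in-cluster master Service at the SNI proxy by name rather than by ClusterIP — something like the sketch below, with an illustrative hostname. As discussed, the kubelet only injects KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT for a ClusterIP service, so with this shape the variables go missing:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  type: ExternalName
  # Illustrative name; the SNI proxy routes on this hostname.
  externalName: my-cluster.example.com
```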
B: Like advertise-address, which is a flag to the API server — maybe we need a symbolic name, like advertise-symbolic-name, which [adds] this extra [variable]. When the service [is set up], that ensures that we can check on startup that that name is in the [SANs] for the cert that we're going to present. I think we might consider, instead of changing the existing environment variable, just setting up a new environment variable. I was—
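Here is a sketch of the two options, expressed as the environment a pod would end up with: the first pair is what the mutating-webhook workaround injects today, and the separately named variable is the kind of addition being floated — its name is hypothetical, not an agreed API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    env:
    - name: KUBERNETES_SERVICE_HOST          # what client-go looks for today
      value: my-cluster.example.com
    - name: KUBERNETES_SERVICE_PORT
      value: "443"
    # Hypothetical new variable: added alongside, so the meaning of the
    # existing one never changes.
    - name: KUBERNETES_SERVICE_SYMBOLIC_HOST
      value: my-cluster.example.com
```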
H: I just want to give an update — a refresher that this exists, so you know something's happening on our end. (Let me find the issue... okay, I'll copy it... it opens, but isn't here? Oh, there we go.) I saw it can move into API review, I think, but we're still waiting for somebody to merge this so we can start with the implementation. Yeah.
H: So, for the first part — I think this is still my first KEP, and I didn't know in how much detail to fill out the top sections. If anything is missing, please tell me, because I'm, like, new here. On the KEP, I think there is a comment from [someone] that he wasn't sure who even needs this KEP — or it was something different, I'm not sure.
H: [It's] mostly blocked due to the scalability issue, and one [question] is open where I'm still waiting for help about the [managedFields] wiping.
A: Okay, so — the next one, it's mine. We're not going to solve it today, but maybe we can [align] our thinking on what we can do. As you know, the entire [Kubernetes project] went under some security audit, and the auditors were creating a bunch of tickets for the different SIGs and projects. We got ours, which is this list, filtered. I did a super quick first look over them; I don't know how [actionable] they are — they are kind of undefined a little bit. I don't know if anybody here knows, also, what the deadline is: if there is any commitment that we need to fix them by a [given] version, or is this just the time that we have, now that these things are public?
E: There's a fair amount of research, too, that could be done ahead of time. So, like, for each of these — a lot of these are not new — so, like, finding the existing issues or requests or earlier attempts or conversations and linking them in here. There are kind of three phases: what's the history of this issue?
E
Do
we
agree?
This
is
a
thing
that
should
be
done
and
then
why
isn't
it
or
can't
it
be
done?
Sometimes
it's
for
backwards
compatibility.
Sometimes
it's
something
that's
already
possible,
but
it's
not
a
default,
and
sometimes
it
just.
We
agree
it
should
be
done,
but
it
hasn't
been
a
top
priority
and
so
kind
of
pinning
that
those
things
for
each
issue
we've
got
a
like
farm
out.
All
of
those
things
to
people,
because.
B
Right
like
we
can't,
we
can't
expect
like
Jordan
to
do
the
research
on
all
of
these
and
then
give
like
give
like
very
concrete
instructions
to
keep
contributors.
Is
that
that's
like,
like
the
research,
is
the
work
much
of
the
work
right?
It's
it's
not
really
helping
us.
If
the
it
doesn't
really
help
us,
it
doesn't
help
new
contributors
learn
how
to
be
an
effective
contributor
and
it
doesn't
help
existing
contributors
do
less
work
something
up.
We
have
to
like
farm
out
all
all
parts
of
that
process.
If
that
makes
sense,
yeah.
E: They were all reviewed and scrubbed before the issues were opened, and determined to either be low severity, or already-known issues, or feature requests, or things that are already possible but not done by default. Okay, yeah — so anything that would require, like, a critical, urgent, immediate security response has already been [addressed]. Okay.
A: Thank you. So I took [two] action items from this one: maybe coordinate a triage meeting between the two SIGs, and then see if we can use this as an opportunity to spread it through the SIG and [bring] new contributors to those, and research, and so on. So I will work on those two. Very good. What else — anything else? We ended up using almost the entire hour, but I think it was useful, with [good] discussions.