From YouTube: Kubernetes SIG Multicluster 12 May 2020
B

A
Yes, there is. Jeremy's going to give an update on the prototyping he's been doing on the multi-cluster services KEP. Awesome. Okay, thank you.
C
So yeah, I wanted to cover a few things today. I put a little deck together, but I think the first thing is just basically bookkeeping. Let me share... oh, Paul, you need to enable screen sharing.
A
Okay, I'll look at changing that by default, but it's on now.
C
Okay, cool. Right, so I guess first things first: I put together a demo based on where the KEP's at right now. First, I threw together a repo, and I'll add this to the notes too, with the CRDs and the Go client generated, so that people can start playing around with what this would look like. It's bare bones, very simple, but hopefully enough to get going, and I'm really hoping that we can start collaborating, getting more people involved, and playing around with multi-cluster services and what this could look like. I'm not sure what makes the most sense there.
C
I'm certainly happy to keep it going in my GitHub, but maybe it makes sense to have a sigs project for this. I don't know what thoughts are there.

C
Yeah, I don't know. I don't know that we want to keep adding a ton of repos. I'm just thinking about the way SIG Network has been doing Gateway: there's just a service-apis repo. But maybe we just use our existing repo. I don't have a problem with that at all; I just think we need to figure out where we want to put it.
A
Yeah, I'm not against having a sigs project at all. I think before we start creating them, let's think carefully about what the next two or three development milestones would be and try to shoot for that, rather than where we might be right at this moment.
C
Absolutely. And yeah, just one more there, for anyone who wants to play around, I'll also show this. I think it's already linked, but I've got a demo repo where, basically, you follow these steps and you'll get an implementation: a very simple bash controller for the MCS API that spins up a second kube-proxy that only cares about the ServiceImports and connects all your services that way. There's no DNS or anything yet, but it'll actually build kube-proxy.
C
It's very hacky, and it just builds out of my own kubernetes branch, then installs it and runs it locally in two kind clusters, so that there's actually something you can play with. But yeah, no DNS yet, and the controller's just a script, very simple, but it shows that everything works with CRDs, which I think is where we wanted to start with last time.
C
The other kind of big open question, and I don't know if I want to talk about this first or later. Basically, the other two things I want to cover today are starting to talk about headless services, and then next steps: what we actually want the next milestone to be and what it would take for us to consider this a worthwhile API. What do you think we should cover?

D
C
No, it functions exactly the same way. Only last time it was prototyped as a first-party API; this time it's CRDs, but there's actually no change at all in the flow. Okay.
C
Yes, that's great. Okay, and anyone can pull this down and try it out, hopefully, if my setup scripts work properly. Yeah, it absolutely works with CRDs. It is a second kube-proxy, so there are definitely some scaling concerns, because you've got basically two controllers programming iptables in conflict with each other, each taking locks, and there are issues around that. So I don't think this is exactly how we'd want it to go long term, but yeah.
C
The second kube-proxy just watches the ServiceImport CRD, and it watches EndpointSlices that are marked with the service-proxy-name annotation so that they're ignored by the core kube-proxy. So it all works with CRDs, which is great.
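For anyone who wants to experiment with the same idea, here is a minimal sketch of how a second, multi-cluster-only proxy might watch just the EndpointSlices claimed for it, leaving everything else to the core kube-proxy. The speaker describes a service-proxy-name annotation; the sketch filters on a label of that name instead, because list/watch requests can only be filtered server-side by labels, so the exact key, value, resync period, and kubeconfig path are illustrative assumptions, not what the demo repo necessarily does.

```go
// Sketch of a "second proxy" that only sees EndpointSlices labeled for the
// multi-cluster proxy. A real implementation would program iptables/IPVS in
// the event handlers; this one just logs what it would do.
package main

import (
	"fmt"
	"time"

	discovery "k8s.io/api/discovery/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Only watch EndpointSlices claimed by the multi-cluster proxy; the core
	// kube-proxy is expected to skip these because of the same marker.
	factory := informers.NewSharedInformerFactoryWithOptions(client, 30*time.Second,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.LabelSelector = "service.kubernetes.io/service-proxy-name=mcs-proxy"
		}))

	sliceInformer := factory.Discovery().V1beta1().EndpointSlices().Informer()
	sliceInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			slice := obj.(*discovery.EndpointSlice)
			fmt.Printf("would program endpoints from slice %s/%s\n", slice.Namespace, slice.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {}
}
```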
C
Okay, cool. So there have been some conversations back and forth on the PR, and I've got some more ideas for a follow-up PR. I think last time we talked about not really wanting to overload the first one anymore, but I think we're all set now to actually get the first one merged.
A
Maybe we can just pause on that. I had a PR to update an outdated list of approvers for the SIG that got merged, so I think I should be able to go and merge your KEP in as provisional later on today, but, you know, the proof's in the pudding; we'll see.
C
Yeah, I think so too. Also, I saw that we actually have an OWNERS file set up in the sig-multicluster KEP directory now too, and it does include you, Paul, so hopefully we're good. So once that's done, then I've got a follow-up PR that addresses a lot of the things I want to talk about today.
C
So I'll update it based on our discussions, but let's talk about headless services, because we haven't really gotten there. Initially we said that we'd hold off on StatefulSets until we started having more discussion and basically just target stateless at first, but I think it's probably time to talk about headless. It keeps coming up.
C
So I guess the first thing is the idea that we should probably derive this from the source services, because this isn't something that's likely to vary in behavior. I don't think you'd want a cluster IP in one cluster but want it headless across the supercluster, or vice versa.
C
So the idea is basically that this is an all-or-nothing state that's derived from the source services. If all of the clusters that export a given supercluster service have it as a headless service, then we just consider the supercluster service headless.
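A minimal sketch of the all-or-nothing rule being described, assuming a hypothetical ExportedService type and the core/v1 convention that clusterIP: None marks a headless Service; this is not code from the KEP or the demo.

```go
// deriveHeadless returns true only if every exporting cluster exposes the
// service as headless; any mixed export falls back to a supercluster IP
// service, per the all-or-nothing rule discussed above.
package main

import "fmt"

// ExportedService is an illustrative stand-in for per-cluster export data.
type ExportedService struct {
	Cluster   string
	ClusterIP string // "None" marks a headless Service, as in core/v1
}

func deriveHeadless(exports []ExportedService) bool {
	if len(exports) == 0 {
		return false
	}
	for _, e := range exports {
		if e.ClusterIP != "None" {
			// One non-headless export is enough to treat the whole
			// supercluster service as a supercluster IP service.
			return false
		}
	}
	return true
}

func main() {
	mixed := []ExportedService{
		{Cluster: "a", ClusterIP: "None"},
		{Cluster: "b", ClusterIP: "None"},
		{Cluster: "c", ClusterIP: "10.0.0.7"},
	}
	fmt.Println(deriveHeadless(mixed)) // false: treated as supercluster IP
}
```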
A
C
Yeah, well, so this brings up a question that has come up a few times: a ServiceImport really can exist separately. It doesn't technically need to be derived from source services; it's just that that's how we're using it, right? A ServiceImport really just is a multi-cluster service.
C
But there's nothing stopping me from just creating my own ServiceImport manually that references a bunch of other clusters. So it kind of brings up the question: it seems like either everything on a ServiceImport is actually status and we just consider it all derived, or it's okay that it's all spec. But I don't see a huge difference between, for example, the VIP and whether or not a service is headless, in terms of who's responsible for that.
A
Yeah, it does make sense. What I heard you describe sounds to me like status behavior. It's worth thinking hard about.
C
Yeah, so let's take a step back for a second. I do think there needs to be a type, and I guess the question is where it goes. The standard type would be supercluster IP, and the headless type would be headless. I think we should make that more explicit and not extend the clusterIP: None behavior.
C
In hindsight, that's probably a little confusing, so make it explicit. But then, should it be a status, and should everything be a status? The real value in a status is that I can create a resource and a controller can populate the status, right? So if the controller is both creating the resource and populating the status, does that distinction really help?
A
So what happens if my supercluster is clusters A, B, and C, and in C, for whatever reason, the service is cluster IP and in A and B it's headless?
C
So with this proposal, as I threw it up here, we just treat it as a supercluster IP service.
A
Yeah, so I guess the value of a spec field for type would be to enforce constraints like that: enforce a constraint that if you say a service is headless, it's headless in each of the clusters. I think it's really hard to know exactly what people are going to want to do. It's something to think about.
D
A
Maybe an initial way to start would be to have a status field that says it's headless in every cluster.
C
Yep. So maybe there's a concept of a clusters list on the ServiceImport that has cluster-specific information, and maybe that should have the cluster-specific headless state. If that's really more of a status thing, then that whole clusters list is really status, and you can use that to figure out the ServiceImport. I guess the question is, so, thought experiments:
C
I can definitely come up with ways that this is confusing and this behavior isn't good, but in practice I'm having trouble thinking of a case where you would actually mix and match headless and non-headless services. It seems like it's either headless or it's not, everywhere. It wouldn't make sense to create a service with a cluster IP and a headless service for the same workloads, and that's kind of what we're saying these are, across...
B
...the supercluster. I think having it be explicit in terms of defining the type is actually going to be easier, simply because I think it's more usable, right? You're not having to guess or derive. As you have content or configuration spread across many clusters, the system which updates that configuration, or the users that modify it, might make errors, simply because there are more places where someone might introduce a change and cause a derived change in behavior that's unexpected.
B
Also, since we were talking about whether the controller should update spec or update status: if you're generating information in the background, like the cluster list or whether it's successfully cluster IP or successfully headless, based on some set of conditions, then modifying the spec makes it more confusing when you want to manage that object under a GitOps-style flow, where I've got many clusters and I'm delivering content to those clusters.
I
apply
a
certain
spec
and
then
suddenly
the
controller
modifies
it
was
that
a
valid
change
or
an
invalid
change
versus
the
controller
modified,
a
status
which
provided
me
all
the
details
that
I
could
still
get
to
through
an
api
call
or
through
the
cli,
but
not
necessarily
confusing
or
invalidating.
My
get
ups
flow
to
deliver
that
configuration.
C
Yeah, that makes sense. I think that brings up a kind of fundamental question that we touched on a few weeks ago and kind of put to rest, but I'm not sure we really consciously put it to rest, so I want to bring it back up. Basically, the KEP today sees a ServiceImport as a purely derived resource.
C
It's not something that a user would directly interact with; it would be created by a controller in response to ServiceExports existing in other clusters. Or it could be created by a GitOps workflow directly, but there's no real separation there.
C
Would we instead want this to be something that a user could create and a controller could update separately, and that's kind of the status? A GitOps workflow could create a ServiceImport in specific clusters, and the controller could fill in the status based on ServiceExports.
E
So the type is not considered to be a property of the ServiceImport in the spec? You mean you are completely considering that it should be part of the status?
C
Well, I guess that's something that we need to decide on. It seems like it would be reasonable, if I were creating a ServiceImport and expecting a controller to fill it in, that I could set a type, so that would be kind of the spec. But if it's purely derived, then it feels more like a status.
A
I think, orthogonal from that: if, for example, I create an import and expect cluster IP type behavior and load balancing from it, then I, as someone who's not a networking expert at all, would expect that there might be problems fulfilling that if one of the exported services was headless.
A
So maybe there's also a status component to that, which attests to the exported services being compatible with your import.
C
Yep, I think that makes sense too. So I guess the thing I want to put to rest here is that the KEP today basically assumes that users don't actually create ServiceImports.
C
It's the controller solely managing them, and so they would always be managed, with an annotation pointing at your supercluster service implementation.
C
Maybe we should list, for the source clusters, whether they're headless or not, so you can see how it relates to the derived status. Or, if you were to specify, sorry, a type rather than a status, you could verify that it's actually workable.
C
Yeah, actually, let's look at this. I haven't updated the names to ServiceImport based on feedback yet, but let's actually look at the types here. So I guess: what here would make sense to be spec, and what here would make sense to be status?
E

C
Yeah, and I think I agree. So I think type probably is also spec, because that's how we're actually going to process the service, right, whether it's headless or not. Maybe, though, it also makes sense to have a status which is just per-cluster status that can basically say the source service type.
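To make the spec-versus-status split concrete, here is a rough sketch of what ServiceImport types along these lines might look like. The names below (SuperclusterIP, Headless, ClusterStatus, the ports field) are assumptions for illustration, not the KEP's actual Go types.

```go
// Sketch only: the user- (or controller-) settable type lives in spec, and
// the per-cluster observations that a controller fills in live in status.
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type ServiceImportType string

const (
	// SuperclusterIP behaves like a ClusterIP service across the supercluster.
	SuperclusterIP ServiceImportType = "SuperclusterIP"
	// Headless mirrors clusterIP: None, but spelled out explicitly.
	Headless ServiceImportType = "Headless"
)

type ServiceImport struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ServiceImportSpec   `json:"spec,omitempty"`
	Status ServiceImportStatus `json:"status,omitempty"`
}

type ServiceImportSpec struct {
	// Type is the intended behavior of the multi-cluster service.
	Type  ServiceImportType `json:"type"`
	Ports []ServicePort     `json:"ports,omitempty"`
}

type ServiceImportStatus struct {
	// Clusters holds per-cluster observations derived from ServiceExports,
	// e.g. the source service type, so compatibility with spec.Type can be
	// checked without guessing.
	Clusters []ClusterStatus `json:"clusters,omitempty"`
}

type ClusterStatus struct {
	Cluster string            `json:"cluster"`
	Type    ServiceImportType `json:"type,omitempty"`
}

type ServicePort struct {
	Name     string `json:"name,omitempty"`
	Protocol string `json:"protocol,omitempty"`
	Port     int32  `json:"port"`
}
```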
A
Yeah, I think that's one possibility. Another possibility would be, instead of presenting the type directly, a condition that says the type of the exported service in every cluster is compatible with this one.
C
Okay, I'll draft that up. And then there would be a type, which is the type of the ServiceImport, and there would be per-cluster status, so you can relate the two.
C
Cool. So then, practically, what do we do with that, right? Presumably with a headless service, if you just talked to the service address, you'd get A records for all backing pods. But then it starts getting tricky: what do we actually do with the per-pod name?
C
So I think the obvious solution, whether or not it's good, is to just add a cluster name, and we already have this concept in the KEP of a cluster name or cluster identifier, which is basically a registry-scoped unique ID for a cluster. If we add the constraint that the cluster name has to be a valid DNS label, then we could simply make the per-pod address be pod.cluster.service.namespace, which seems workable.
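A small sketch of the per-pod naming idea, assuming a supercluster.local zone and the pod.cluster.service.namespace ordering mentioned above; the eventual multi-cluster DNS spec would pin down the real format.

```go
package main

import "fmt"

// superclusterPodFQDN builds a per-pod DNS name of the assumed form
// <pod>.<cluster>.<service>.<namespace>.svc.supercluster.local. It relies on
// the cluster identifier being a valid DNS label so it can be a name segment.
func superclusterPodFQDN(pod, cluster, service, namespace string) string {
	return fmt.Sprintf("%s.%s.%s.%s.svc.supercluster.local", pod, cluster, service, namespace)
}

func main() {
	// e.g. cassandra-0 exported from cluster "us-west" no longer collides
	// with cassandra-0 exported from cluster "us-east".
	fmt.Println(superclusterPodFQDN("cassandra-0", "us-west", "cassandra", "db"))
	fmt.Println(superclusterPodFQDN("cassandra-0", "us-east", "cassandra", "db"))
}
```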
C
The main concern here is that if you have StatefulSets deployed in each cluster and you're just using the same StatefulSet spec, then you're going to get conflicting hostnames across all of your clusters, right? So we need to add something else in there, and this basically means that your endpoint hostname in the supercluster needs to add the source cluster, the value from the source-cluster annotation.
A
Yeah, I think the formulation of the endpoint and pod names makes a lot of sense to me. The only effect, as you noted, that I can personally think of is that whatever identifier you use for the cluster has to be a valid DNS subdomain, right? Not a controversial restriction in general for kube types; there are many types that already have that constraint. Just something to note.
C
There are lots of ways to solve that. It still has a lot of flexibility, and it makes this much easier and actually lets us support StatefulSets, where, ideally, I could have my Cassandra deployment in each cluster and not have to worry about the conflicts.
A

C
Yeah, that makes sense, cool. I wrote up all this in the follow-up PR, and once that's up for review I'll try to get some more people chiming in there; at a high level it just seems like it could work. One other question I had: this addresses the top-level service and the per-pod addressing.
C
Would there actually be any value in being able to say, give me all the endpoints in cluster A? This would mean addressing just cluster.service.namespace.
A
So in the storied history of Kubernetes multi-cluster, there was, in KubeFed and Federation v1, a concept of exposing different layers of endpoints, so that you could, if you wanted, address a service bounded to what was deployed in a particular availability zone or a particular cluster. So it's definitely a use case that's gotten some attention. How much people want it, I don't know if I can say.
C
Right. I guess one of the things that we probably need to do before this becomes more real, and I don't know if that's before it's implementable or before it goes beta, is a detailed multi-cluster service DNS spec. And I guess the question is: is this something that you could implement if you wanted to, or is this something that makes it into the spec? I don't know that that really has to be answered today.
A
Agreed, that seems to be a question to answer. In terms of a multi-cluster DNS spec, I think it's probably profitable to create one long before beta; it should be a living thing that exists even in alpha.
C
Yep, completely agree, and fortunately I think we have a good place to start: everything we've talked about through this entire process has been very much in line with how things work in a single cluster. So I think we can start by copying and pasting today's cluster.local spec, making a few changes, and going from there, because hopefully it ends up functioning in a very familiar way. It shouldn't be hard to consume multi-cluster services; that's the point. Yeah, I agree.
C
Exactly. So, some of the questions that came up on the PR are maybe worth spending a few minutes on. Today we talk about the supercluster IP as kind of the standard, behaving just like a cluster IP, so we make the guarantee that there is a VIP in each cluster that resolves to, or routes to, the supercluster service.
C
Would that be okay? My take is that that starts getting risky, because there are advantages to having a single VIP that you lose as soon as you break that guarantee. But since that came up on the PR, I just wanted to bring it up here.
A
I think this is another area where it's much easier to start small and highly constrained and gradually relax constraints as demand dictates. So in terms of keeping things simpler and more constrained, I would say that having one is the minimum, right? You need at least one. In general I feel super conservative about keeping this particular problem very constrained at first. So that's where I'm leaning.
C
Yeah, I'm with you there. Also, with a single VIP behind the supercluster service name, behavior will be the same as a cluster-local service, so the story is just: change cluster.local to supercluster.local.
C
VIPs, well, something like Submariner, for example, can expose a cluster IP to multiple clusters, so you could have it so that the list of existing cluster IPs for that service in the other clusters gets returned, which works. But then the main value in the VIP is that you don't have to rely on a good DNS client, which are, historically, pretty rare.
C
So if you then break that, all of a sudden, depending on the implementation, multi-cluster services may only work if your DNS client is a good one.
A
Yeah, I think my bias is: let's keep it at one, and if a lot of people are coming to us and saying that they use Submariner or Skupper or some other thing in the space so they don't need that, that's a signal to us. But I think it will be easier to just constrain it for now and have it act more like a cluster IP service.
C
Cool, yeah, that makes sense to me. All right, so that's a lot to work with here, and I will draft up this feedback too. I think it's probably good, in the last little bit here, to move on to next steps. So for approvers: Tim volunteered, and I think that makes sense for the kube-proxy implementation implications.
C
And Paul, I have volunteered you as an approver for now, if you're all right with that. Then I think it would be helpful to have more reviewers.
A
Yeah, I think it would definitely be helpful to identify reviewers that could attest to the compatibility of the decisions we're making with other key related parts of the system. I'd love to get a StatefulSet owner, and I'd love to get someone that was well versed in networking.
C
Yeah, cool, and DNS probably as well.
C
Cool, I can keep following up on that too. Without digging too much into the rest of this, what do we think is required to make this implementable?
C
Yeah, I think it's probably both of these things. I don't think we can go alpha without having an understanding of what it means to go to beta, and we certainly can't do anything without a test plan.
A
Yeah, so thinking of how we would test the dev setup with an additional kube-proxy and CRD API, I want to understand the enclosing scope of the e2e testing with kind.
C
Exactly, and it might be that on some cloud providers it just works, and some cloud providers work if you install something like Submariner. Basically, everything we've talked about has a flat network requirement between your clusters, so either you need to run somewhere that has a flat network, or you need to be able to flatten the network. Fortunately, it's very easy to flatten kind networks, so that seems like a good place to start.
C
I think this needs to evolve a lot; I'm less certain that that needs to happen prior to alpha.
A
Yeah, an alternative might be to use the free Travis plan, and then you can get a VM and something that you can run containers on. But then you take on the constraints of Travis, where you have a pretty hard timeout deadline that you're up against.
D
I might be off on this, but I recently put in a PR on a different project and Prow ran an end-to-end test using kind. At least the test case described it that way. I'll look at the end and see if I can find it, but the test description was related to running IPv6 testing on kind.
C
Okay, I mean, that would be awesome, and it kind of makes sense, because I know that testing Kubernetes is the primary motivation behind kind. But I didn't know if that was an e2e integration test or if that was actually testing during development.
F

A
Yeah, another thing we've run into that isn't an immediate concern, but is something to note, is that we have never managed, I don't think, to get budget to test KubeFed or Federation with a cloud-provider-provided setup. So there's a future question: if we want to test in the Kubernetes community with different Kubernetes-as-a-service offerings, that might be a hurdle we've got to jump.
C
Yeah, that's a good point, so that's even more reason to figure out how to make it work with kind for now. I agree with that. I wonder too, since the main idea here is that we want to be open to different implementations, maybe we have the canonical implementation with kind and then other implementations have to provide their own e2e testing. Maybe that's an acceptable answer, but we should follow up on that and understand it.
A

C
That makes sense. Yeah, okay, that seems like a good solution too, and maybe there's a few of those. If a few cloud providers think that this is something they definitely want to prove they support, that's great. That seems less likely to happen before alpha, but it's definitely worth pursuing.
A

C
I mean, I think kind of now. There are actually some SIG Network folks commenting on the PR, and I pointed them at it; I also went to a SIG Network meetup a while ago and pointed them at it. Initial reception from the folks I've talked to has been very much against the idea of a separate kube-proxy, so for kube-proxy support specifically, I think it's worth revisiting that, but we probably have enough now to talk about what that's like.
A
Yeah, any explicit signal on kube-proxy extensions?
C
Some. I haven't dug into that too much, because it seems like a whole can of worms. I think the general consensus is that it would be great if kube-proxy were easy to extend, actually making it so that we can create out-of-tree plugins for kube-proxy.
C
There are ways we can do that for sure, but building that API might be on the same level of effort as this API.
A
Yeah, so I think a lot hinges on in-tree versus out-of-tree extension to the proxy, because if it's in-tree, the target API effectively becomes part of kubernetes/api. If it's out-of-tree, it can be a CRD, but then there are the challenges of out-of-tree extension.
C

A
Yeah, it's something to figure out. I think we're at time for today, so thanks a lot for the update.
C

A
Absolutely, and offline I will work with you to get the KEP merged as provisional.