From YouTube: Kubernetes SIG Multicluster 2020 July 14
A: All right, hey everyone, I think we can probably get started. Paul is away this week, so I'm hosting. It looks like we have a pretty light agenda today, so maybe this will be shorter. I wanted to give everyone an update on the MCS implementation and how that's going now that it's gotten started. Some of you may have seen that I shared out last night a link to a doc that came out of conversations with SIG Network about how we want to integrate the multi-cluster services APIs with kube-proxy and what the best way to go about that might be, so I'll share my screen here.
A: All right. So basically, when I originally planned to go about this, I think we talked with SIG Network a while ago about having kube-proxy just ingest the CRDs directly, just like it does with Service, and go about the implementation that way. That's probably simpler from our perspective, but it certainly has more impact on kube-proxy, with the idea that overall performance would be better this way. So when I filed the ticket to create a staging repo that kube-proxy could actually use to import the resources, Andrew, who is here today, suggested that instead we look at just using Service, and the idea would basically be that we have another controller, or maybe the implementation could be responsible for this directly.
A: This controller creates a proxy Service for the original ServiceImport, so that kube-proxy can ingest that Service just like any other Service today. The controller would basically be responsible for configuring EndpointSlices to reference that new Service as well as the ServiceImport, and the ServiceImport would still be used for DNS.
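To make the derived-Service idea concrete, here is a minimal Go sketch of the object such a controller might build. The core/v1 types are real, but the name prefix, the label key, and the port struct are illustrative assumptions rather than part of any agreed design.

```go
// Illustrative sketch only: derive a selector-less "proxy" Service for a
// ServiceImport so kube-proxy can program it like any other Service. The
// controller would separately manage EndpointSlices that target this Service
// with endpoints imported from other clusters.
package derived

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// importPort stands in for a port carried on the ServiceImport (hypothetical
// shape; the real type lives in the MCS API).
type importPort struct {
	Name     string
	Protocol corev1.Protocol
	Port     int32
}

// deriveService builds the proxy Service for the ServiceImport named
// importName in namespace ns. The "derived-" prefix and the label key are
// made up for this example; the name-mapping question comes up later in the
// meeting.
func deriveService(ns, importName string, ports []importPort) *corev1.Service {
	svcPorts := make([]corev1.ServicePort, 0, len(ports))
	for _, p := range ports {
		svcPorts = append(svcPorts, corev1.ServicePort{Name: p.Name, Protocol: p.Protocol, Port: p.Port})
	}
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "derived-" + importName,
			Namespace: ns,
			Labels:    map[string]string{"example.io/service-import": importName},
		},
		Spec: corev1.ServiceSpec{
			Type:  corev1.ServiceTypeClusterIP,
			Ports: svcPorts,
			// No selector: EndpointSlices for this Service are written by the
			// controller, not by the endpoints controller.
		},
	}
}
```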
A: And then, at least unless we fixed this, we would also have alternative, overloaded cluster-local DNS records for that original Service. But this is a way to implement the multi-cluster services API without any changes to kube-proxy itself. Another idea that was brought up was introducing a proxy sidecar, basically in kube-proxy, that proxies API requests so that we can inject services in flight. I think some of the feedback was that this approach kind of has the drawbacks of both.
A: So maybe that isn't necessarily the way to go. And then the last idea, of course, was to go ahead with using a staging repo, which is more invasive, requires a new staging repo and a lot of process overhead, but in terms of implementation might be a little simpler. So that's kind of the conversation. I think it'd be good to get more feedback, or, Andrew, since you're here, if you want to chime in on what you're thinking.
B: Yeah, sure. The more I think about it, the more unsure I am about my original suggestion; after reading the doc a few times, I'm not sure anymore. One clarifying question on that last point: if we go with the staging route, is that a staging repo where the APIs are still CRDs, or a staging repo with core APIs?
A: Yeah, a staging repo, but still CRDs. I think we all see MCS as a valuable, hopefully common, add-on, but certainly optional. You'd still probably start with single-cluster Kubernetes.
B: Okay. One thing that was interesting, which you mentioned in the last SIG Network call, was that the thing that manages the data plane, so kube-proxy in this case, is the thing that should consume the ServiceImport directly. Following that model, I think just staging the multi-cluster API and having kube-proxy consume ServiceImport makes sense. But I'm thinking: if the cluster was running a service mesh instead, and let's say the service mesh proxy was Envoy, are we expecting Envoy to watch ServiceImports directly? That seems pretty far-fetched, and I don't think Envoy would ever do that. So what about a service mesh world?
A: Exactly, that's kind of what I think. My expectation is something like this: if we take Istio as an example, the Istio pilot would consume the ServiceImport that whatever implementation you were using alongside kube-proxy creates, and it would then feed that on to Envoy over xDS. So your Envoy controller would be responsible for translating the ServiceImport.
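As a rough illustration of what "consuming the ServiceImport directly" could look like for any out-of-tree control plane, whether a mesh control plane or a kube-proxy companion, here is a client-go sketch using a dynamic informer. The group, version, and resource names are assumptions based on the draft MCS API and may not match the final names.

```go
// Sketch: watch ServiceImport objects with a dynamic informer, the way a
// control plane outside kube-proxy (for example a mesh pilot) might consume
// the MCS API without compiling its types in. GVR values are assumptions.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // or clientcmd for out-of-cluster use
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed GVR for the alpha MCS API; adjust to whatever the CRD installs.
	serviceImportGVR := schema.GroupVersionResource{
		Group:    "multicluster.x-k8s.io",
		Version:  "v1alpha1",
		Resource: "serviceimports",
	}

	factory := dynamicinformer.NewDynamicSharedInformerFactory(client, 10*time.Minute)
	informer := factory.ForResource(serviceImportGVR).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			u := obj.(*unstructured.Unstructured)
			// A real consumer would translate this into its own config,
			// e.g. xDS cluster and endpoint resources.
			fmt.Printf("ServiceImport added: %s/%s\n", u.GetNamespace(), u.GetName())
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; a real program would have a proper lifecycle
}
```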
B: Okay, yeah. Both seem like pretty reasonable solutions to me. I think the only reason I'd still prefer using actual Services is purely that we get to start out of tree, and if it works out of tree, then we don't have to worry about going in tree, as opposed to starting in tree and then trying to go out of tree.
A: Okay, that's good to know, and that seems like a reasonable motivator. Does anyone else here have any other thoughts on this? I think I linked the doc in the notes in the agenda, so you can also chime in offline.
C: I find what you're saying, Andrew, fairly compelling. I'm just concerned about the amount of debris that comes out of this, like all the little Services that we end up sprinkling around and having to manage by name. It just feels dirty.
B: Yeah, that makes sense. I guess I'm thinking of the reverse, where if we ended up not liking the API, then we're stuck with what we added to kube-proxy. But I guess that's why we're starting it at alpha and promoting it as we go.
C: Right, right. We shouldn't rush to that ultimate conclusion, but it does seem like a reasonable ultimate conclusion. Jeremy gave me a peek at the code he was writing to do the integration, and it's really not that bad; internally it translates to the same structure, so it's not so awful. Would I want to do ten of these? No, I probably wouldn't.
A: Right. And for anyone else who's interested, this doc links a PR that is definitely not ready to be merged but works, and I linked in the agenda a demo that you can use to pull it down and play with it.
B: Okay, yeah, because my next question was: what does that set us up for next time? We don't want to do this ten times, but we do want to do it two or three more times. So if we're saying Gateway is going to solve it for the 80 to 90 percent use case, that sounds good to me.
A: Yeah, my hope is that we have Service, which is your cluster-local endpoint aggregator, and ServiceImport, which is your multi-cluster endpoint aggregator, and hopefully those are primitives that you can build on. Maybe Gateway will want to import in a similar way as well, since that's the new, more generic way to describe things, but we can look at it as: Service is the one-to-one and ServiceImport is the one-to-many.
A: Practicality, I think, is really it. I don't think we actually want ServiceImport in tree; it's just that staging is kind of the half-in, half-out option. I think it's really that we want it out of tree, but in practice, for a core component to consume it, it should at least be in staging.
A: Or it's just vendored, but that gets really confusing, because now you have to tie API versions back to Kubernetes releases in a much more convoluted way.
C: Where are we, Jeremy, with timelines? At what point does this hard decision need to come down?
A: I would certainly like it as soon as possible. If it could be decided in the next week, that would be excellent. I think it'd be great if we could hit alpha in 1.20, so unblocking this means we can start merging things, especially because there is a fair amount of process if we go the staging repo route. It's not so much effort, but there are a few issues that need to get resolved. I think sooner is better.
C: Yeah, well, now you've got me thinking, Jeremy. We don't need to take up the whole day on this, but one last question, I guess: we talked at some point about ServiceImport being safe because normal users couldn't write to it. If we change that so that it ends up creating a Service, and the Service is the thing that actually triggers the multi-cluster IP to be captured, does that present any security or permissions problems?
A: Yeah, well, I think it depends how we do the name mapping, and whether the ServiceImport has a reference to its Service or the Service references the ServiceImport. I don't actually think, with this model, the Service needs to know what ServiceImport it came from.
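For what it's worth, the two mapping directions mentioned here could look roughly like the following; both keys are hypothetical and only show where the linkage might live.

```go
// Sketch of the two name-mapping directions discussed above. Neither key is
// part of any real API; they only illustrate the options.
package mapping

import corev1 "k8s.io/api/core/v1"

// Hypothetical keys.
const (
	derivedServiceAnnotation = "example.io/derived-service" // on the ServiceImport
	serviceImportLabel       = "example.io/service-import"  // on the derived Service
)

// Option 1: the ServiceImport records which derived Service backs it, so DNS
// and other consumers never have to inspect the Service at all.
func derivedServiceFor(importAnnotations map[string]string) (string, bool) {
	name, ok := importAnnotations[derivedServiceAnnotation]
	return name, ok
}

// Option 2: the derived Service points back at its ServiceImport. With the
// model discussed here this reverse link may not be needed, since the Service
// exists only for kube-proxy to program.
func importFor(svc *corev1.Service) (string, bool) {
	name, ok := svc.Labels[serviceImportLabel]
	return name, ok
}
```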
A: I think multi-cluster service DNS would still talk to the ServiceImport directly. Of course, regular cluster DNS would understand the phony Service, but I don't think it makes sense to do anything special with DNS for the phony Service; ServiceImports still make sense there. It does mean that we will have extra DNS records, of course, for the dummy Service.
A: Okay, I see. Yeah, I don't know if we need to eat the whole day on this, but I see the CoreDNS plugin for multi-cluster services, for example, as a separate plugin that you turn on, versus something that we necessarily want to work into the core Kubernetes DNS plugin, at least at first.
A: Right, and if that were the case, then multi-cluster services would probably live in sigs, or at least in its own repo.
C: Jeremy, can you say again, and sorry, we're going to end up spending the whole day on this, why vendoring it is a problem? Why not put it somewhere else and then vendor it back in?
A: Versioning, I think, is the biggest thing. Well, first of all, I don't see any precedent for that, so I don't know that we want to be the first to say, hey, we're just going to vendor some sigs repo. That's one, because that just hasn't happened as far as I can tell.
A
Well
or
we
decide
that
that
that
at
least
part
of
gateway
or
all
of
gateway
actually
can't
live
in
cigs
and
needs
to
be
moved
in
right.
So
I
I
think,
maybe
that's
the
answer,
and
and
certainly
that's
kind
of
more
in
in
your
domain
to
to
drive.
But
I
I
don't
think
that
decision
has
actually
been
made
right.
C: Sure, but we will; you're not alone in having to make this decision.
A: Right, and so part of it, honestly, is that as much as this is trailblazing, it's also the first attempt, or at least the first potentially successful attempt, to use CRDs in tree, or for more core components. Part of it is trying to pick our challenges, but I think the biggest issue I see with vendoring is versioning: which version of Kubernetes supports which version of the multi-cluster services API. Since kube-proxy is tied in with everything, at least in practice, that question is really easy to answer with a staging repo; it becomes a lot more difficult to track down, and you have to rely a lot more on documentation, if you're vendoring.
C: All right, so we need to tackle this, but we don't need to tackle it all here. I've flagged your doc for reading and some more pontificating, and maybe I'll come up with some more questions around timelines and sequencing, like how bad it would be if we started with X and moved to Y, or something, but I don't really know what the questions are yet.
A: I do think potentially starting with sigs and this Service approach, with a plan to migrate to staging, or at least to make a final decision before beta, wouldn't be the worst thing. That is a route as well, and I agree that starting in staging is probably more of a commitment.
A: I mean, we're starting with alpha too, so we reserve the right to pull it out, but I agree: moving from sigs to staging is probably easier than moving from staging to sigs. I do think there is a lot more clutter with the other approach, though, and there are some things we didn't count on, like how do I actually track that my Service was properly created by this new intermediate controller? That starts complicating things too.
A: The drawbacks there... yeah, I tried to keep the doc relatively light, so it's not a novel, but the drawbacks of the approaches are listed. I think the in-tree versus out-of-tree move is only discussed in the comments, but I'll move that into the doc.

C: All right, I'll try to read it ASAP.

A: Okay, cool. All right, I believe that is all we actually had on the agenda today.
A: All right, well, before we go: per last week's discussion on cluster registry, Paul shared out on the mailing list a doc where we can start enumerating various potential use cases for a registry. If anyone's interested, please take a look and add your ideas. I think it'd be good to get that discussion going.