From YouTube: Kubernetes SIG multicluster 22 September 2020
D
Yeah, so I know this has come up in the past before, and it was something that was considered in the original KEP and kind of deferred. So I figured now might be a good time to talk about what it would look like for the multi-cluster services API to support node ports.
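For context, this is the kind of Service being discussed; a minimal sketch, with hypothetical names and ports (none of them are from the meeting):

```yaml
# Hypothetical Service of type NodePort: every node in the cluster
# listens on nodePort 30080 and forwards to the backing pods.
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical name
  namespace: demo      # hypothetical namespace
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80           # cluster-internal service port
    targetPort: 8080   # container port on the pods
    nodePort: 30080    # port opened on every node's IP
```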
D
So, basically, I think today the multi-cluster services API kind of assumes that every pod in all clusters is routable from every other, which I think works great if you're using a managed offering or you have a network topology that allows that. But I think many clusters today are using overlays, and the only assumption you can kind of work with is that the nodes can connect to each other; pods themselves have to go through the nodes to connect to each other.
D
So, basically, I'm thinking about workflows around: if I have a Service of type NodePort and I point a ServiceExport at it, should there be some field on ServiceExport that says "export the node ports and not the pod IPs", or should...?
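As a sketch of what is being asked here: the field below does not exist in the MCS API; it is purely a hypothetical illustration of an opt-in knob on ServiceExport.

```yaml
# HYPOTHETICAL: an exportNodePorts-style field is NOT part of the MCS
# API; this only illustrates the question being raised.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: web            # must match the Service being exported
  namespace: demo
spec:
  exportNodePorts: true   # hypothetical field: export node IP and
                          # nodePort pairs instead of pod IPs
```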
B
Yeah, that one gets really interesting, right. I think so. So your initial suggestion is something like: optionally export a node port, like a node IP and port, instead of the pod IP, right? Yeah.
B
Yes, so NodePort gets really complicated, because I think there are a few assumptions with NodePort, right. Like, obviously, coordinating a node port between clusters is tough, and then, I think, you know, something like what you're suggesting could work in a basic case, but there is no guarantee of correlation between nodes and pods, right. So then, you know, if you look at the way...
B
Oh, Tim, the question is: a multi-cluster service, a NodePort service.
D
Yeah, I think the first question I have, even before that, is: if we did want to support this, should the NodePort export happen implicitly, based on the type of the service that we're trying to export? So, like, should Services of type NodePort automatically be exported as node ports? I don't think we want that, but that's one option. And the second option would be: should there be an explicit field on ServiceExport that says, if this service is of type NodePort, export the node addresses and not the pod IPs?
D
Yes, sorry if it wasn't clear. Basically, in an environment where pods aren't routable, right, pods need to connect to each other using, like, the node network, and one way to do that is with node ports. So how do we have the multi-cluster services API support that?
E
I see, that's an interesting question. So, if I can step back for a second: typically, well, it's a little bit vague, so I'm going to wave my hands a little bit, although you can't see me because my video's off. There's the distinction between north-south and east-west, right, and pod IPs and cluster IPs are in the sort of east-west lane, and node ports and load balancers in the north-south lane. So if we start going into node ports, we're opening the can of worms of, well, does that apply to load balancers?
D
Yeah, I definitely agree. I guess my next question for that would be: in clusters or network topologies where the pods aren't routable, like today, the multi-cluster services API wouldn't work, right? So what are the alternatives, if not node ports? Like, I see the other alternative being: export the nodes and their ingress ports, like, assume the nodes...?
B
I was just going to say, I think we've kind of hand-waved that away: basically, it's up to an implementation to make the pods routable. That's kind of the approach that we've taken. I don't know if that's the right approach long term, but we saw that demo from Cisco a couple of weeks ago, where, you know, they had a solution to kind of add their own proxying layer.
B
We're going to see a demo of Submariner today, and so, I think, there are a couple of options for flattening a network. Maybe the answer is an overlay, and maybe that's not always a good answer.
B
I guess the issues with NodePort that Tim raised are good ones, but the other one that I have is that, by kind of abusing this existing north-south feature for east-west, we're kind of introducing different characteristics, if we say that this is, like, the standard way to connect partially connected networks, or any network where pods are not visible.
A
And I won't speculate about, like, your encompassing use case, so I'll ask instead: is the interesting NodePort case "I have a thing, it uses NodePort, it has to use NodePort", or is the motivation...?
D
I think so. I think there's nothing forcing node ports; the only restriction is that we don't have routable pods across clusters, and I think that's a pretty fair assumption to make in many environments. And given that, a Service of type NodePort seems like the lowest-friction way to do that.
E
So, again in the abstract, in a hand-wave: we're using node ports as a gateway of sorts, right, an entry point into the cluster. But they are not the only hypothetical such thing, right; I could stick a proxy, I could stick an ELB in front of my cluster that spanned two different VPC networks, right, and that behaves like a node port.
A
So another question, which is related: it seems like a lot could hinge on the specifics of the use case. Is the motivator to have a service be externally available, or is the motivator to have pods in cluster A address a service in cluster B?
A
So is it external to the cluster set, or internal to the cluster set, or both? It seems like a lot hinges on that, and it might be worth decomposing it into those specific permutations, if so.
E
Yeah, Andrew, I'm not disagreeing. My point about north-south was that that's what NodePort was traditionally used for. On this point I agree with your characterization, but because of the shape of it, it doesn't fit well. So I think, I'm not against it; we just need to think really hard about how we want it to behave.
B
Yeah, maybe another way to look at this is: there is a very clear need to address partially connected networks, or networks where the pod is not visible, for east-west. And, you know, node port aside, that seems like something that needs a solution.
E
And it gets arbitrarily complicated; it gets ridiculous, because I could have three clusters on one VPC and three clusters on another VPC, and I need a different mapping depending on who's talking to whom, right.
B
Awesome, yeah, thanks Tim. Yes, this has come up a couple of times. I think, you know, we started with the simple case, which is that all the pods can be seen, but it seems like, with this...
A
I was just going to wonder aloud whether the Cisco implementation that we saw would address this, and similarly, I'm wondering whether the Submariner implementation that we'll see later today will address that.
B
Yeah, I think the Cisco implementation certainly seemed to address part of this, for sure, or maybe it addressed this problem; I think there were some other things that needed addressing there as well. But yeah, it was solving this use case, and Submariner may as well. So maybe, you know, I'm very much looking forward to seeing the Submariner demo, but maybe the answer is: here's a couple of projects that implement MCS for disconnected or partially connected clusters.
B
I don't want to make that statement, because I feel like this is a common enough use case, and I'm certainly not, in general, against making changes to the API. If it's not solving needs, of course, it needs to be useful. So maybe a different question is: what are the common use cases, or what are the common needs of these solutions? Is there a common way that we need to extend the API to make it easier to implement these disconnected or partially connected solutions?
B
Maybe that's a better way to put it: we probably don't want to steer the API so that you always have to care about this, you know, if you do have a flat network; but the API needs to work for partially connected networks.
E
I'm having a little audio trouble, so let me know if you cannot hear me. I think if we're going to solve this, we will need to look at how to expose this explicitly.
E
Something like this: specifically expose this service, but use this gateway to get there, where the gateway may be all of the node ports in this cluster, or this load balancer, or this VIP. And we need to figure out where that information comes from, and, given an archipelago model, which we've seen a lot of customers build...
E
We need to make that contextual. So I don't know where that information comes from or what it looks like, but I think that's sort of the turtle at the bottom: what is the mapping? If anybody knows ACPI, it's like the System Locality Information Table: a two-dimensional table of the cost metric from x to y, for all x and all y.
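Tim's ACPI SLIT analogy could be sketched as a simple two-dimensional cost table; everything below (the cluster names, the cost values) is invented purely for illustration:

```python
# Sketch of a SLIT-style locality table for a cluster set.
# costs[(src, dst)] is a relative distance metric; 255 means
# unreachable, mirroring how ACPI's SLIT encodes distances.
UNREACHABLE = 255

costs = {
    ("cluster-a", "cluster-b"): 20,  # same VPC (hypothetical)
    ("cluster-a", "cluster-c"): 40,  # across VPCs, via a gateway
}

def cost(src: str, dst: str) -> int:
    """Look up the locality cost from src to dst, with a symmetric fallback."""
    if src == dst:
        return 10  # local access is cheapest
    return costs.get((src, dst), costs.get((dst, src), UNREACHABLE))

print(cost("cluster-a", "cluster-a"))  # 10
print(cost("cluster-c", "cluster-a"))  # 40 (symmetric lookup)
print(cost("cluster-b", "cluster-c"))  # 255, no mapping known
```

The point of the table is exactly what is said above: the right answer depends on who is talking to whom, so the mapping has to be contextual rather than global.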
B
Should we start a doc and just start throwing some ideas in there, or use cases? It would be great to have, you know, the Submariner folks chime in too, with the issues they've run into. But it seems like this is going to need to be a more involved conversation, for sure.
A
Okay, it sounds like we've talked it through. Why don't we proceed to the Submariner demo, if folks are ready for that?
C
Yeah, I'm ready; let me share my screen.
F
Let me know once everyone can see my screen. Looks good? All right, okay. I believe we've already given an overview of Submariner itself a long time back, without the, you know, service discovery, the MCS API implementation part. So this time I'm mainly going to focus on the MCS API implementation that we have done. What I have over here are two clusters; this is all running in a kind setup, and I'll share instructions with the slides on how you can bring it up. It's really simple!
F
This is from where I'll be trying to access the services that I'll be creating in cluster two. Okay, so, to put up stuff quickly: here I'm creating a StatefulSet, with a headless service backing it, in cluster two. So the pods are coming up... and they're up.
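The setup being demoed, a StatefulSet backed by a headless Service, looks roughly like this; the names and image are placeholders, not taken from the demo:

```yaml
# Headless Service: clusterIP None, so DNS resolves directly to pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: web            # placeholder name
spec:
  clusterIP: None      # headless
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web     # ties the StatefulSet to the headless Service
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx   # placeholder image
        ports:
        - containerPort: 80
```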
F
Okay. Over here, if I try to access it, it will not work, because I have not exported the service yet. Okay, so that's all fine, and then I just export the service, which creates a ServiceExport. So subctl is Submariner's own CLI; we just made it easy to create ServiceExports. You can also manually create the YAML and...
F
It works with something like this. So the ServiceExport has been created; let's see.
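The equivalent YAML for the export step, using the MCS API's ServiceExport kind; the name and namespace here are placeholders:

```yaml
# Exporting a Service makes it visible across the cluster set; the
# ServiceExport must have the same name and namespace as the Service
# it exports.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: web          # placeholder: same name as the Service
  namespace: demo    # placeholder: same namespace as the Service
```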
F
Okay, and just one quick thing about the way we do things in Submariner: we don't have a central MCS controller. We have a broker kind of architecture; it acts as a data store, and every cluster distributes the resources that are on it to everyone else, so that, kind of, we don't have a central MCS controller that is doing all this distribution.
F
It's easy to show, you know, that we are able to reach individual pods. Similarly, let me just create a ClusterIP service as well, just for completeness' sake. I create it; it's created. The first type of service should show up here pretty soon. Yeah, so the pods are up; I'll export it, export the service.
F
And now, if I try it, it returns. Okay, so this is one difference that we have in Submariner with the MCS API right now, because, remember, Submariner has been going on for almost three and a half years, and we had this basic Lighthouse service discovery solution before the MCS API was fully, you know, formalized; parts of it are still in the works.
F
So the way we do things here in Submariner Lighthouse is that we actually don't have a supercluster IP implementation yet; you get the IP of the service running in that particular cluster, in the case of a ClusterIP type service. So, if you can see, the IP that we got over here is the same as the cluster IP of this service: you know, it's .136.205 and .136.205.
F
That's the one key difference in the API implementation that we have. So we plan to conform fully to the upstream spec going forward, but it's kind of a work in progress, because we already had something working and we were using our own CRD, which was a rough approximation of what ServiceImport was doing at the time. And so this is basically the gist of it.
F
So the way ServiceExports and ServiceImports get distributed, everything is, you know, running in each cluster; we don't have a central controller, which kind of adds to the complication of having supercluster IPs, so we're trying to figure out a good, performant way to do it without having to have a central controller.
F
So that was the demo. Let me just quickly run through the slide deck for those who may not have attended our first presentation when we gave it last time. So: Submariner is an open source, upstream, vendor-neutral project. This is the link to the website, submariner.io; it has documentation on what the architecture is, how you can run it, and all that. The core of Submariner is that it provides Layer 3 connectivity between clusters using IPsec, via Libreswan or strongSwan, or WireGuard.
F
So basically, like you were discussing earlier, right: it works toward a flat network across clusters, so every pod can access every service or pod in another cluster. That's what it does, and it has been designed to be cloud-provider and CNI agnostic. Now, when we say CNI agnostic, it means it can work with any CNI, but we have not tested with all of them.
F
The ones listed here are the ones we have actually tested with, and I think Schroder from Red Hat recently did some tweaks, testing with Calico, and got it working, with a few hiccups here and there. So if you have a different CNI you would like to make work with Submariner, try it out, raise issues, or just get in touch with us and we'll do that for you. And the other thing with Submariner was this:
F
People will already have existing clusters that they would like to connect and, you know, access services across, so it provides that. It doesn't have to be a brand-new deployment: you can have an existing cluster with your workloads and everything, and you can install Submariner on top and you'll get that connectivity.
F
These are some of the top use cases that we had in mind when we were coming up with the design, and that we have had a few folks try out: Submariner with a service mesh across clusters; stretched databases, you know, where you have multiple database instances running on multiple clusters for availability; apps on the edge with your database on premises. And another one of the key design goals for us was always to use as much of the existing resources on the cloud as possible, so that it's able to work in an agnostic manner. So, the core components.
F
So the way we work is: first you deploy a broker with Submariner, and then you join clusters to it. Even if you have just one cluster, it will need to join the broker, and the only requirement is that the cluster should be able to access the Kubernetes API endpoint of whichever cluster the broker is running on. And here you can see the advantage of having this rather than a central controller.
F
If you run a central controller, then that central controller needs to have some way to communicate with the API endpoints of all the other clusters, whereas this way every cluster just needs to talk to the one cluster where the broker is running, and things work. So that's why we went with this kind of model. And now for the core components.
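The broker-then-join workflow described above looks roughly like the following with Submariner's subctl CLI. Exact flags vary by version, so treat this as an illustrative sketch rather than copy-paste commands:

```
# On (or against) the cluster that will host the broker:
subctl deploy-broker

# deploy-broker writes a broker-info.subm file; each cluster then
# joins using it, and only needs API access to the broker cluster:
subctl join broker-info.subm --clusterid cluster-a
subctl join broker-info.subm --clusterid cluster-b
```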
F
It has two components when it comes to providing network connectivity. First is the Submariner engine. Submariner has a concept of gateway nodes, which provide that IPsec tunnel between two clusters; you can set up multiple nodes as your gateway nodes, and one of them will become the leader. It will establish the IPsec or WireGuard tunnels, depending on which cable driver you are using, and it is also responsible for publishing your local cluster's information to the other clusters, you know, like what is the cluster CIDR,
F
what is the service CIDR, and things like this. That's what it does. And then there is the Submariner route agent, which runs on every node. It is always aware of which node is the current gateway leader, okay, and it is used to route traffic from pods running in your cluster to the gateway node, and then the gateway node is the one that sends it to the other clusters.
F
Okay, then another optional component is the Globalnet controller. The use case for Globalnet was added a bit later than day one of Submariner. The use case was: okay, if I'm going to deploy Submariner to connect clusters that already exist, I will definitely run into scenarios where multiple clusters are using the same CIDR, right. Like most people, when they deploy, say, vanilla Kubernetes, they will have the default CIDR that every cluster comes with, and we found that out with most of them.
F
So we have this address space, the 169.254.0.0/16 range, and whenever you join a cluster, it gets a subset of this CIDR range. And then any service or pod that requires access to a service or pod that is cross-cluster
F
is given an IP from this range, and that is what is used for communication. So, basically, like I said, with normal Submariner, when you connect clusters, if there are no overlapping CIDRs, then yeah, it's a flat network; anyone can talk to anyone. But if they are overlapping, that communication is not possible. So that's where Globalnet comes into the picture, and in this case you cannot talk to every other pod, but only the ones you designate that you want to have access to. So the user can give a different
F
CIDR, if you want: if for some reason 169.254.0.0/16 does not work for one of your clusters, you can specify a different range; that is supported. Another thing you can do is specify the size of the CIDR chunk that you want to allocate to a given cluster, since you may have clusters of different sizes; it does not make sense to give a /24 to each one of them, as some may have a smaller number of pods.
F
Pretty quickly, so that's just the Submariner connectivity; we're not going into too much detail, because that's not the focus of this one, but if you want we can have a separate session that goes in depth into it, or you can look at the website link.
B
Yeah, just so I'm clear: when it establishes tunnels, does everything go through the broker cluster? Is that correct?

F
No, no.
F
Okay, so Lighthouse is the project within Submariner that provides service discovery. Maybe I'll come back to the diagram later; let me just go through the talking points first. So, as of today, it works with Submariner; we use it with Submariner, but we have designed it in a way that it can work with other connectivity solutions. If you have some other connectivity solution that provides that sort of flat interconnectivity between clusters, with access to services, then you can use Lighthouse, with a few tweaks.
F
If I remember correctly, as of now the only Submariner-specific spot in Lighthouse is where we detect whether a cluster is connected or not, because when you do, you know, a DNS query for a service which is a ClusterIP type service running in multiple clusters, we only return an IP, from round robin, if that cluster is actually connected. And the same goes for headless services: we don't return pod IPs for pods running in a cluster that is not connected at a given time.
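The connected-cluster filtering described here can be sketched in a few lines; the cluster names, IPs, and the round-robin state handling below are invented for illustration:

```python
from itertools import cycle

# Sketch: resolve a ClusterIP-type multi-cluster service by round-robin
# over only the clusters whose tunnels are currently up, as Lighthouse
# is described as doing.
service_ips = {            # cluster -> cluster IP of the service (made up)
    "east": "10.96.1.10",
    "west": "10.98.1.10",
    "south": "10.99.1.10",
}
connected = {"east", "west"}   # "south" is disconnected right now

eligible = sorted(c for c in service_ips if c in connected)
rr = cycle(eligible)

def resolve() -> str:
    """Return the next eligible IP; disconnected clusters never appear."""
    return service_ips[next(rr)]

print(resolve())  # 10.96.1.10 (east)
print(resolve())  # 10.98.1.10 (west)
print(resolve())  # 10.96.1.10 again; "south" is never returned
```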
F
Other than that, I think we don't have anything Submariner-specific. So it has two main components. One is the Lighthouse agent; that's what I was talking about earlier: the control component that runs in each cluster, as a Deployment with a replica count of one. This is the one providing the core MCS API implementation: it listens for services and ServiceExports being created, creates EndpointSlices and sends those EndpointSlices to the broker to distribute, and reads from the broker any EndpointSlices and imports created by other clusters.
F
That is where the bulk of the API implementation exists, in the Lighthouse agent. And then we also have the Lighthouse DNS, which is a CoreDNS-based implementation, with a default replica count of two (you can change it). This is the one that handles all the in-cluster queries for the clusterset.local domain. The way it is done is: you already have CoreDNS running in your cluster, and we use the forward plugin to forward any queries for the clusterset.local domain to the Lighthouse DNS over here. So this is it.
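The CoreDNS wiring described above is done with the stock forward plugin; a minimal Corefile stanza might look like this, where the target IP is a placeholder for the Lighthouse DNS service:

```
# Corefile fragment: hand clusterset.local queries to Lighthouse DNS.
clusterset.local:53 {
    forward . 10.96.0.53   # placeholder: ClusterIP of the Lighthouse DNS
}
```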
F
All right, so this is an example: we have two clusters, west and east, and they are connected with Submariner, as indicated by this encrypted tunnel that goes over the internet. Say there's a service A running on east. When you create a ServiceExport, the Lighthouse agent will listen for the ServiceExport creation, look at the endpoints, and then create an EndpointSlice and a ServiceImport, which will then be distributed to the broker.
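The ServiceImport that ends up distributed via the broker looks roughly like this; the name, namespace, and IP are placeholders:

```yaml
# A ServiceImport of type ClusterSetIP, as created by the exporting
# cluster's Lighthouse agent and synced to the other clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: web          # placeholder
  namespace: demo    # placeholder
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
  ips:
  - 10.96.136.205    # placeholder; as noted above, Submariner currently
                     # returns the exporting cluster's own cluster IP
```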
F
And then the Lighthouse agent over there will pull that copy locally from the broker, and it will be available in its local DB. Now, when the pod over here in the west tries to query for this service using clusterset.local, it first goes to CoreDNS, and CoreDNS has a Lighthouse plugin that we have written, which forwards any queries with clusterset.local in the FQDN to the Lighthouse DNS, which reads the ServiceImport and EndpointSlice information to return the IPs.
F
So that's the basic gist of it, and as you see over here, in this case the broker is running on the west cluster. It can run on any of the clusters that you are running, or it can run on a totally separate cluster if you want, which is not running any workloads; that's up to you, because it's basically just, like I said, a DB using Kubernetes API endpoints. Right, so: where we are in terms of implementation of the MCS API.
F
So, as I showed in the demo, you create a ServiceExport to make a service multi-cluster aware. ClusterIP services, headless services, and StatefulSets are the ones I demoed right now; these are already supported and working. And it only resolves services which are in connected clusters: if you have multiple clusters, it will only resolve services in clusters that are connected. I know that in the MCS API we talk about, you know, EndpointSlices having some sort of a TTL or something to detect when an endpoint is no longer reachable.
F
Instead of that, because with Submariner we already have cluster connectivity information, and with EndpointSlices coming over, if a pod goes down and the endpoint is no longer valid, we kind of update it and it will be removed anyhow. So we are not using TTLs and all that stuff.
F
So we had our own in-house CRDs, something like lighthouse....io/ServiceImport and ServiceExport, which are based on, I think, a very early version of the API; they're not fully updated to the current version, because our plan is, once ServiceImport and ServiceExport are available in Kubernetes (you know, I remember we have a repo for the API), then we'll use those and move to those.
F
So it's sort of a work-in-progress CRD that we are using for now, not for the long term. And then, as I pointed out, we don't use the supercluster IP yet; it is to be done. So our plan is, once we move to the ServiceImport and ServiceExport
F
that are in Kubernetes, at that time we will also look into how we can do the IP allocation without requiring a central controller. And the other thing is, like I mentioned, Globalnet is used for providing support for overlapping CIDRs, when you have multiple clusters with overlapping CIDRs. Headless services don't work with it yet, because currently Globalnet does not support ingress to the pods; that is also in the works on our roadmap, and once that is available, headless services will also work with Globalnet. And the final point is something that the spec doesn't talk about.
F
I think the spec doesn't explicitly allow or deny it, but because of the way Submariner does things, it was a very easy use case for us to support, so we do support it. The idea is that you can use the cluster name as a prefix to your query to get the cluster IP, or the pod IPs in the case of a headless service, from one specific cluster. So this final point is something where I'm not sure: would you call it non-compliant with the spec or not?
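Concretely, the two query forms being contrasted look like this; the service, namespace, and cluster names are placeholders:

```
# Standard MCS form: any connected cluster may answer (round robin):
web.demo.svc.clusterset.local

# Submariner extension: cluster name as a prefix pins the answer to
# one specific cluster:
east.web.demo.svc.clusterset.local
```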
B
Yeah, interesting. That piece would be good to come back to at some point. I wouldn't say it's non-compliant to have an extra feature, but you're right, it's not in the spec. It has come up before, but I think mostly around debugging; in practical use I haven't yet heard much demand.
F
Yeah, yeah, I remember; I was the one who brought it up in the middle of the SIG, because actually we had someone raise the issue on Lighthouse: they wanted the feature. So it was actually someone requesting it, without much detail on why, and we did not push more for that; but we felt okay, because it is easy enough for us, might as well.
G
Yeah, I think it was necessary for a multi-cluster CockroachDB setup right now.
B
Yeah, the two use cases I've definitely heard are, of course, "I want a single endpoint that returns all the IPs for headless", and then "I want to be able to address individual pods throughout my multi-cluster headless service". Those ones are definitely required. The cluster-level addressing, where you return all the IPs within a single cluster, seems like an obvious middle step, but I haven't yet heard specific demand. But if you have it, that's great; we should...
F
Yeah, yeah, because, besides other things, you know, I remember when we were also discussing this in the team, we figured that, okay, when we have a full implementation of the MCS API, maybe this is something that could be managed in some ways through network policy, like some other use cases that we sort of came up with for cluster-specific use.
F
Cases like GDPR and those kinds of policies, where you want cluster-specific behavior: we felt that once you have, you know, topology-aware EndpointSlices and maybe even network policies, that might be able to cover this use case. But at least until we have that, this is good enough, and I think people will find it useful for now, at least. But I wanted to highlight this, you know, so that if someone is using Submariner, they know it works for them.
F
Okay, so now, these are some of the useful links. The first one is the project documentation website, which is where I took most of the material I've used in these slides; it has a more detailed architecture diagram, how to use it, and some CLI examples. Then the GitHub repository and, of course, the Slack channel that you can use for Submariner; it's #submariner on the Kubernetes Slack.
B
Thank you very much for doing that demo and giving this presentation; it's awesome to see it in action. Andrew's left, but, you know, I'd like to circle back: does this kind of solve the use case he's looking for? But yeah, that was great, thank you. And then my main comment is that I'd like to circle back on that cluster addressing use case.
B
I think we should probably address something around that in the spec, whether it's specifically per-cluster headless addressing, or whether we want to outline another way of solving the same problem; but we should definitely revisit that.
F
You,
after
the
call
I'll,
send
across
the
link
to
the
slide
deck
on
the
mailing
list
in
case
anyone's
interested
in
it.