From YouTube: Kubernetes SIG Multicluster 2020 Apr 14
B: Cool, well, thanks everyone for coming. I want to talk about the multi-cluster Services API. I put together a little slide deck so we can kind of track it, but this is basically all revolving around the draft KEP that I put up a PR for. It's just based on the doc that I shared a while ago, and a bunch of people have already commented on it as well.
B: Well, I can start talking about it anyway, because it just starts with text. The goals that I really wanted to hit here with the KEP are: define a minimal API to support service discovery and consumption across clusters, namely consuming a service in another cluster, and consuming a service deployed in multiple clusters as a single service. Then, when consumed from another cluster, service behavior should be consistent with current behavior. Yes, Jeremy?
B: Basically, we want to mimic the existing behavior as much as possible, so the target is really an experience like ClusterIP, which everyone's familiar with, when consuming services across clusters. We want to create building blocks for multi-cluster tooling, and I think, most importantly, we want to support multiple implementations.

B: So, non-goals: we do not want to define specific implementation details beyond general API behavior. There may be many different implementations of the controller that coordinates this. It could be decentralized, with a controller running in each cluster.
B: It could be centralized, with one controller acting as a hub-and-spoke between all your other clusters. We don't want to define that, just the API and how the service is consumed and exported. We don't change the behavior of single-cluster services in any way, so it should be absolutely safe to enable this feature in a new cluster or in an existing cluster, and nothing will be impacted. And we don't want to solve the mechanics of actually orchestrating multi-cluster services, meaning how services are actually deployed across clusters.
B: We just want to define the API that needs to be deployed, not actually get into the mechanics of how that needs to happen. And then I'm kind of hand-waving away network policy right now. That's a big topic, and I think once we have these what-the-service-looks-like building blocks, then we can start talking about what multi-cluster network policy would look like. I know there's a few different takes on it right now, Cilium has one, for example, but I think that will be shaped by what we decide.
B: So there's a lot of text, and I'm sorry for that. I think the biggest thing: to export a service, you create a ServiceExport custom resource in the same namespace as the service. This is the new resource that we are proposing to declare that a service should be visible beyond the cluster boundary.
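As a rough illustration of the contract just described, a ServiceExport could be sketched as a manifest whose only meaningful content is its name and namespace. The API group/version and resource shape here are assumptions based on the draft KEP under discussion, not a final API:

```python
# Hypothetical sketch of the ServiceExport custom resource described above.
# Group/version and field names are assumptions from the draft KEP era.
def make_service_export(name: str, namespace: str) -> dict:
    """Build a minimal ServiceExport manifest.

    A ServiceExport carries no spec of its own: its name and namespace
    must match the Service being exported, which is the whole contract.
    """
    return {
        "apiVersion": "multicluster.k8s.io/v1alpha1",  # assumed group/version
        "kind": "ServiceExport",
        "metadata": {"name": name, "namespace": namespace},
    }

# Exporting the Service "my-svc" in namespace "prod" is just creating the
# matching ServiceExport in the same namespace:
export = make_service_export("my-svc", "prod")
```

The point of the name-mapped design is exactly this emptiness: creating the object is the declaration.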
B: So, next slide: alternatives. These are kind of open questions, and label selectors tend to be the biggest one, so I want to get to that last, and then we should talk about it. But of the questions that have come up so far, the first is: there's no spec, so why don't we just use an annotation? I think the biggest reason is honestly just that Service doesn't have an extensible status.
B: If Service had conditions, there may not be a great reason not to just make this an annotation on Service. The other advantage, and this is a small one, is future-proofing. We'll get to it later, but we haven't really discussed yet what a headless multi-cluster service could look like, or something where there's maybe some need for additional configuration. But at least as-is, the only true justification for not using an annotation is that Service status isn't really extensible, yeah.
B: Yeah, that is true, and we'll kind of talk about it at the end, but there's also new work going on with Gateway, as kind of what the evolution of the Service API could look like, and ServiceExport could still be valid in that world as well. So it may not always be tied directly to a Service.
B: Right, so then they know: you don't have write access for ServiceExports in this namespace. And then the other benefit, and this is where the question came up of why not just have it point at a service explicitly: RBAC is kind of the biggest justification there as well. This way, it's really easy to control.
B: Right, exactly. Whereas otherwise we'd have to figure out the mapping; with name mapping we don't have to figure out what happens if two people create exports for the same service, what that actually looks like, so this kind of simplifies things. So, label selectors: that is a good question, and I think it's definitely more flexible.
B: I could create a rule that all services in this namespace are exported pretty easily, or services with a certain label; there's a lot more flexibility there. The main argument against this, and this is a discussion that we'd really like to have, is that it kind of complicates status reporting: what would it look like if one service was exported and another one wasn't? But the biggest thing is that with a name-mapped ServiceExport you could build similar functionality pretty easily. You could have a controller that uses a selector to create ServiceExports for a service when it's created. So by sticking with name mapping and not using a selector, we're not actually excluding the ability to create a selector-based implementation.
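The layering argument above, that selectors can be built on top of name mapping, could be sketched as a simple reconcile function. All resource shapes are plain dicts standing in for the real API objects; this is illustrative, not a working controller:

```python
# Minimal sketch: a controller that watches Services and creates name-mapped
# ServiceExports for any Service matching a label selector. A real controller
# would use the Kubernetes API and an informer loop instead of plain lists.
def reconcile_exports(services: list[dict], selector: dict) -> list[dict]:
    """Return the ServiceExports implied by `selector` over `services`."""
    exports = []
    for svc in services:
        labels = svc["metadata"].get("labels", {})
        # A Service matches when every selector key/value pair is present.
        if all(labels.get(k) == v for k, v in selector.items()):
            exports.append({
                "apiVersion": "multicluster.k8s.io/v1alpha1",  # assumed
                "kind": "ServiceExport",
                "metadata": {
                    "name": svc["metadata"]["name"],
                    "namespace": svc["metadata"]["namespace"],
                },
            })
    return exports

services = [
    {"metadata": {"name": "a", "namespace": "prod", "labels": {"app": "web"}}},
    {"metadata": {"name": "b", "namespace": "prod", "labels": {"app": "db"}}},
]
# Only "a" matches the selector, so exactly one ServiceExport is produced.
exports = reconcile_exports(services, {"app": "web"})
```

Because the output is just name-mapped ServiceExports, nothing in the core API needs to know the selector existed.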
A: That's a really good point, and I think it's also worth drawing everybody's attention to the fact that this would be an alpha API, and the decisions that we make today are not ones that we have to live with forever. The virtue of alpha is that you can make breaking changes if you feel that it's necessary. But I also think that skipping more complex and sophisticated selection mechanics will also allow us to focus on the details.
C: As a user of this, I 100% would want to be able to do a label selector. Now that I've said that, I think your point that you can just write an operator for this, or that there's a million different ways to do it, is totally valid. I think it should just be called out that, again, as a user, I'm going to want to say "everything that matches my app, please just export it." I don't want to go write another loop in a Helm template to go and plop all these resources down, right?
B: Yeah, maybe there's some ServiceExportPolicy or something that you can create that does that. Because the other thing that's come up too, as an open question, kind of related: there may be implementations that are not based on labels on services. They might be based on annotations on the namespace, things like that. So this way we're not really prescribing which way you want to go.
B: Actually, it can say initialized, and beyond that the status is kind of up to the controller. But the common conditions I see are basically Initialized, and then whether or not the export has been synced, that is, whether the exported service has been sent to other clusters, plus any errors on that.
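The status shape being described could be sketched with standard Kubernetes-style conditions. The condition type names ("Initialized", "Synced") are the ones mentioned in the discussion; the exact API is an assumption:

```python
# Sketch of ServiceExport status conditions following normal Kubernetes
# condition conventions (type / status / message). Names are assumptions.
def set_condition(status: dict, ctype: str, ok: bool, message: str = "") -> dict:
    """Upsert a condition of type `ctype` in a status dict."""
    conds = status.setdefault("conditions", [])
    for c in conds:
        if c["type"] == ctype:
            # Update an existing condition of this type in place.
            c.update(status="True" if ok else "False", message=message)
            return status
    conds.append({"type": ctype, "status": "True" if ok else "False",
                  "message": message})
    return status

status: dict = {}
set_condition(status, "Initialized", True)
set_condition(status, "Synced", False, "awaiting sync to remote clusters")
```

Per the discussion, other controllers could add their own condition types alongside these without changing the core resource.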
B: Absolutely, and then, yes, I just figured that was a little bit bigger; I think that might be a good discussion to have on the PR or something, yeah. And it's also kind of opening it up as well, because this is all talking about ServiceExport exporting services in the ClusterIP sense. But if you built something that used services for something else, maybe load balancing across clusters, things like that...
B: Maybe you'd have your own condition that you want to set as well, so yeah, that should be a whole other discussion. I think the next big topic is actually: what does it mean for something to be exported? I'm going to use the word "supercluster" a lot; that's kind of what we've been kicking around, but it's definitely not the name we're actually settled on for this.
B: But a supercluster is kind of your group of clusters that are all managed together; basically, it's the group of clusters that exported services can be consumed by. There's been that proposal about namespace sameness, and it would kind of be the group of clusters that are governed by that, by some single authority that can make decisions like this.
B: Just because something has been exported doesn't mean that you're necessarily consuming it from that exporting cluster. Basically, using the supercluster IP is a declaration that you don't really care where the backends for this service actually are, and you're leaving that up to the implementation to make the decisions about. So the supercluster IP, once exported, can be accessed by any cluster in the supercluster.
B: This is where at some point we'll want to come back and talk about supercluster and multi-cluster network policy and how you can lock that down, but in this proposal it would at least be visible to every cluster in the supercluster. We don't want to get into how this is assigned, necessarily; maybe it's supercluster-wide, or maybe it's on a per-cluster basis. That can be an implementation decision, but within a cluster...
B: Exactly, yes. So our hope is that, I know there's some discussions around service topology and how that will evolve, but we would take advantage of service topology as described today, or an implementation could do something better if it wanted to. But yeah, we definitely don't want to make any specific statements around "all endpoints are synced", for that exact reason.
B: Yes, oh yeah. So the consuming cluster defines the supercluster IP, but the implementation may decide that this is a shared supercluster IP that all clusters use, or it may be assigned per cluster. The only important thing is that it's at least consistent up to the cluster level: we don't want to get into a world where each pod sees its own version of the supercluster IP.
B: DNS is obviously going to be important here, and this is probably something we're going to want to dig into a bit more. The thinking is that we'd have a new zone, supercluster.local, and otherwise it looks like cluster.local. When you make a request for service.namespace in supercluster.local, it would resolve to the supercluster IP. There would be no change whatsoever to the cluster.local zone. Some other implementations have kind of gone a different way.
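The two-zone layout just described can be sketched as a simple name-construction rule. Zone names follow the discussion ("supercluster" is an explicit placeholder), and the `svc` label is assumed from the standard cluster.local naming scheme:

```python
# Sketch of the proposed DNS layout: a new supercluster.local zone beside
# the untouched cluster.local zone. "supercluster" is a placeholder name.
def service_fqdn(service: str, namespace: str, multicluster: bool = False) -> str:
    """Return the DNS name a consumer would use for a service."""
    zone = "supercluster.local" if multicluster else "cluster.local"
    # The "svc" label is assumed to match today's cluster-local convention.
    return f"{service}.{namespace}.svc.{zone}"

# Existing consumers keep resolving the unchanged cluster-local name;
# multi-cluster consumption is an explicit opt-in via the new zone.
local = service_fqdn("my-svc", "prod")
wide = service_fqdn("my-svc", "prod", multicluster=True)
```

Keeping cluster.local untouched is what makes exporting a service safe for existing consumers.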
B: But the idea here is that exporting a service should not impact existing consumers who are specifically requesting cluster.local; they should need to opt in by selecting supercluster.local instead. So this means it's easier to do gradual rollout, and you don't have to worry that you're going to turn on this feature, export a service, and break a whole bunch of things that are currently deployed.
B: Yeah, like, if the local cluster exports a service and you consume it, you could just talk to backends in your local cluster, and it may be that the implementation decides that, in that case, you actually only get endpoints in your local cluster. So yeah, it definitely could be local.
B: Beyond that, and we've talked about this before: "my service" in "my namespace" in all clusters is the same service, and those endpoints can be in multiple clusters. We've been thinking about just disallowing ExternalName services; I just can't really wrap my head around what a supercluster ExternalName service would be. You probably just want to create ExternalName services in each cluster versus trying to sync something. But other than ExternalName services...
B: This doesn't really solve for stateful services yet. There are other constraints that come with exporting StatefulSets: if I deploy a StatefulSet with the same name in multiple clusters, I'm going to have duplicate hostnames in those pods, and what happens when those are actually combined behind one service? So this doesn't really solve that, and that's kind of the normal use of a headless service.
B: However, there is a new use for headless services that comes up in this world, which is: I've got a service that will only ever be exposed at the supercluster level, and I don't want any local cluster IP overhead. So maybe you still do want to support headless services, but your existing headless services probably won't behave the way you expect at the supercluster level today.
C
Actually
leads
into
kind
of
another
question:
I've
got,
which
is
somewhat
DNS
related.
Would
it
be
possible
to
address
specific
to
opt
into
addressing
specific
clusters?
So
if
we
follow
that
stateless
concept,
if
you've
got
a
namespace
there
for
the
per
cluster
basis,
instead
of
letting
just
the
implementation
handle
it
under
the
covers
right.
A: In general, I think it's smart to start with the simplest possible set of features that could yield value, and disallow any other type of service beyond what we feel fits that bill. It's much easier to widen the applicability of this concept in the future, or introduce parallel concepts if we need to for different types of services, than it is to pull back from something where we've already said X or Y thing is covered completely.
C: I think my biggest worry, or concern, is the implementations ending up having a whole bunch of spread. I'm thinking of a world where the consumer may, to your point, not actually want to address a specific cluster, but may want to somehow say "I only want to talk to in-region things", or something like that, at which point it is totally up to the implementation. But I could see that then spreading into annotations and a bunch of random implementation stuff to go and solve that specific use case, right?
B: And then there's also the question, as the implementer: you may want to give that control to the consumer, but you also might want to give it to the service owner. The service owner really should be the one saying "this should only be consumed local to my region." So who gets the knobs might vary, but at least with this basic spec we can start playing around with that and see what works. So, specifically on the DNS...
B: I think a few questions that have come up: is supercluster.local a standard suffix, or can it be configured? There are pros and cons to both. With supercluster.local, or whatever we end up going with (again, supercluster is a placeholder name, so this is just a little confusing): is there some fixed suffix that kind of becomes the normal way to access supercluster or mesh services or whatever we want to call it, or is this something that actually needs to be configured on a per-cluster basis? I'm curious what thoughts are there.
B: So the KEP as written assumes that it can't be changed. So really it just comes down to the favorite naming conversation: what do you want to call it? But that seems less important, and we could probably spend hours just on that, as naming conversations go. And then the next question, and this one's kind of more open: how do we handle short-name lookup? I think it's pretty clear that service.namespace.supercluster.local should always go to the supercluster, and service.namespace.cluster.local should always go cluster-local.
B
But
what
happens
if
I've
been
just
talking
to
DB
my
DB
or
something
as
a
service,
and
that
you
know
goes
to
the
local
namespace
service
and
the
search
path
it
resolves
to
serve
at
that
cluster.
Local!
Should
that
actually
resolve
to
super
cluster
local
first
in
this
new
world,
pros
and
cons
to
both
I
mean
it
kind
of
adding
super
close
super
cluster
by
local
to
the
search
path,
kind
of
does
change
existing
services,
but
it
also
seems
like
it
might
be
the
intuitive
behavior.
So
I'm
really
curious
what
people
think
there.
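The short-name question above comes down to which suffix the resolver's search path tries first. A minimal sketch, with the search-path mechanics simplified from resolv.conf behavior and the placeholder zone names from the discussion:

```python
# Sketch of short-name resolution: an unqualified name like "db" gets a
# search-path suffix appended, so the ordering of the search path decides
# whether it lands cluster-local or supercluster-wide.
def resolve_short_name(name: str, namespace: str, search_path: list[str]) -> str:
    """Qualify `name` using the first suffix in `search_path`."""
    if name.endswith(".local"):
        return name  # already fully qualified, use as-is
    # Simplification: real resolvers try each suffix in order until one
    # answers; here we just show which suffix wins first.
    return f"{name}.{namespace}.svc.{search_path[0]}"

# Today's behavior: short names resolve cluster-locally.
today = resolve_short_name("db", "prod", ["cluster.local"])
# The debated alternative: putting supercluster.local first in the search
# path would silently change where existing short-name consumers land.
debated = resolve_short_name("db", "prod", ["supercluster.local", "cluster.local"])
```

This is exactly why the search-path question is the contentious one: it is the only part of the proposal that can change behavior for consumers who never opted in.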
C: It's a big enough footgun that I would tend to go with: cluster.local stays where it is, and supercluster.local basically requires an FQDN, unless somebody changes their resolver. Maybe the best way to do this is: if you go and set up a resolver config to change the search path, you just stick supercluster.local in there. That would be how you get it, because that's an explicit opt-in through configuration, yeah.
B: Awesome, okay, because that's been our thinking as well, but this one, I think more than anything else, has come up. And yeah, we also don't really know today who's using the short name, because you can't make very clear assumptions about what happens with it and who's using it; they really don't care, or I guess most people do care. So that's great. And then there's kind of an open question like we talked about, but I agree, we should leave this for after we figure that out.
B: So the idea that we've had is basically that we create a new CRD, an imported service, and there would be a CR for each unique namespace-plus-name service that's marked for export in the supercluster. This would basically be the multi-cluster equivalent of Service, just focused on the ClusterIP case. It contains a union of service ports from the source services, and this would be the way that we do incremental rollout.
B: Unfortunately, Service has some characteristics that don't really merge nicely, like IP family, and for service ports we would probably not be able to reconcile multiple services that use the same service port name for different ports. But otherwise, if they disagree on their exported ports, we could support both, if you wanted to allow rollout of a new port.
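The merge rule just described, union the ports but refuse same-name conflicts, can be sketched directly. Port shapes are simplified to (name, port) tuples:

```python
# Sketch of the port-merging rule: union ports across source clusters,
# but fail when the same port name maps to different port numbers.
def merge_ports(per_cluster_ports: list[list[tuple[str, int]]]) -> list[tuple[str, int]]:
    """Union ports from every source cluster; raise on a name conflict."""
    merged: dict[str, int] = {}
    for ports in per_cluster_ports:
        for name, port in ports:
            if name in merged and merged[name] != port:
                # Same name, different port: cannot reconcile.
                raise ValueError(f"port name {name!r} conflicts: "
                                 f"{merged[name]} vs {port}")
            merged[name] = port
    return sorted(merged.items())

# Two clusters mid-rollout: one has added a new "metrics" port. The union
# exposes both, which is what makes incremental rollout possible.
merged = merge_ports([
    [("http", 80)],
    [("http", 80), ("metrics", 9090)],
])
```

Tolerating disagreement via the union, rather than requiring identical port lists everywhere, is what lets a new port roll out one cluster at a time.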
B: Basically, the goal here is that the imported service would be referenced by EndpointSlices, just like Service is now. And the goal is that if we create an EndpointSlice based on the service from an exporting cluster, we can keep the same topology keys and such, and affinity, and whatever else associated with that EndpointSlice on the imported side, so you could do gradual rollouts of topology changes, things like that.
B: So what this would actually look like is an imported-service resource with a spec that contains the ports, the IP family, the IP (this is the supercluster IP), and a list of cluster specs. The cluster spec would have basically everything that is scoped to that cluster. And again, headless is punted, so maybe we get rid of publishNotReadyAddresses, I don't know if that makes any sense. But yeah, it would be keyed on cluster, and cluster is some unique-to-the-implementation ID for that cluster.
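Pulling the pieces just listed together, the imported-service resource could be sketched as below. All field names are assumptions based on this discussion, not a final API:

```python
# Hypothetical sketch of the imported-service ("ServiceImport") resource:
# ports, IP family, the supercluster IP, and per-cluster specs keyed by an
# implementation-scoped cluster ID. Field names are assumptions.
def make_service_import(name, namespace, vip, ports, clusters):
    return {
        "apiVersion": "multicluster.k8s.io/v1alpha1",  # assumed group/version
        "kind": "ServiceImport",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "ip": vip,          # the supercluster IP
            "ipFamily": "IPv4",
            "ports": ports,     # union of ports across source clusters
            # One entry per source cluster; "cluster" is only required to be
            # unique within this implementation, not a registry-wide ID.
            "clusters": [{"cluster": c} for c in clusters],
        },
    }

imp = make_service_import("my-svc", "prod", "10.96.12.34",
                          [{"name": "http", "port": 80}],
                          ["us-west-1", "us-east-1"])
```

Keeping the per-cluster detail in a list of cluster specs, rather than flattening it, is what preserves the per-source-cluster properties mentioned above (topology, affinity) on the imported side.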
A: Yeah, I think that's a really, really important point to make: the coordinate that you use for the cluster field of the cluster spec is only required to be mutually intelligible within the implementation that you have. It's not a coordinate in the cluster registry, it's not a cluster ID. It might be in the future, but it's not right now, right?
B
Right
for
the
purpose
of
this
implementation,
it
is
just
some
can
scope
to
some
unique
identifiers
scoped
to
multi-culture
service
implementation
and
that
and
that's
it
and
so
how
we
use
that
is
well.
We
create
one
or
more
endpoint
slice
for
each
source
cluster,
based
on
whatever
you
know,
scaling
parameters.
We
have
an
endpoint
slice,
so
I
think
it's
by
default.
It's
100
endpoints
per
slice.
B
Again
it
contains
some
representative
set
of
end
points
for
its
for
its
cluster
and
it
would
reference
the
imported
service
by
a
new
label:
new
well-known
label,
Multi
cluster
kate's
that
io
imported
service
name.
Today
we
have
kids
that
io
service
name,
so
it
looks
a
lot
like
the
existing
implementation,
but
it
references
important
service.
Instead,
then,
we
have
this
new
multi
cluster
cakes
that
io
source
cluster
label
as
well,
and
that
would
be
how
we
look
up
the
per
cluster
endpoints
to
know
how
to
actually
apply
the
the
proper
properties
from
the
spec.
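The labeling scheme just described could look like the sketch below. The new label keys follow the names used in this discussion and may differ in the final API; today's single-cluster analogue is the kubernetes.io/service-name label on EndpointSlice:

```python
# Sketch of an imported EndpointSlice: same shape as today's slices, but
# labeled against the imported service and its source cluster. Label keys
# are the ones mentioned in the discussion and are assumptions.
def make_imported_slice(service: str, namespace: str, source_cluster: str,
                        addresses: list[str]) -> dict:
    return {
        "apiVersion": "discovery.k8s.io/v1beta1",
        "kind": "EndpointSlice",
        "metadata": {
            "name": f"{service}-{source_cluster}",
            "namespace": namespace,
            "labels": {
                # Ties the slice to the imported service instead of a Service.
                "multicluster.k8s.io/imported-service-name": service,
                # Lets controllers look up endpoints per source cluster.
                "multicluster.k8s.io/source-cluster": source_cluster,
            },
        },
        "addressType": "IPv4",
        "endpoints": [{"addresses": [a]} for a in addresses],
    }

slice_ = make_imported_slice("my-svc", "prod", "us-west-1", ["10.1.2.3"])
```

Selecting on the source-cluster label is also what makes the cleanup story below possible: all slices from one cluster can be found and removed together.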
B
This
is
the
other
place
where
that
that
idea
is
used
and
again,
registry
scoped,
unique
identifier
for
cluster
and
registry.
Isn't
the
right
word:
it's
really
implementation
scope.
So
as
long
as
the
implementation
can
consistently
use
apply
this
identifier,
it
can
be
whatever.
If
you
want
and
then
be
the
last
piece
here
is
endpoint
expiry,
so
you
could.
Your
implementation
could
potentially
lose
connection
to
any
cluster.
We
want
to
kind
of
normalize
behavior
around
cleanup,
so
the
idea
would
be
that
there's
at
least
created
for
each
source
cluster.
B
Some
controller
periodically
renews
the
lease
in
each
cluster.
This
would
be,
or
during
from
your
core
implementation
controller.
As
long
as
the
sort
of
cluster
source
cluster
is
still
connected,
and
then
there
would
be
another
controller,
it
might
be
the
same
controller.
It
might
be
something
running
local
in
the
cluster
that
watches
the
lease
and
removes
all
associated
endpoint
slices
from
with
that
cluster
upon
expiry,
so
that
you
wouldn't
have
a
risk
of
you
know
a
cluster
becoming
unreasonable
for
a
long
time
and
having
to
I
still
try
its
endpoints.
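The lease-based cleanup just described can be sketched as a pure function over lease renewal times. Timestamps are plain floats here; a real controller would watch Lease objects and delete slices through the API:

```python
# Sketch of lease-based endpoint expiry: one lease per source cluster,
# renewed while the cluster is connected; on expiry, every EndpointSlice
# labeled with that source cluster is dropped.
def expire_stale_clusters(leases: dict[str, float], slices: list[dict],
                          now: float, ttl: float) -> list[dict]:
    """Keep only slices whose source cluster renewed its lease within ttl."""
    live = {c for c, renewed in leases.items() if now - renewed <= ttl}
    return [s for s in slices
            if s["metadata"]["labels"]["multicluster.k8s.io/source-cluster"] in live]

slices = [
    {"metadata": {"labels": {"multicluster.k8s.io/source-cluster": "a"}}},
    {"metadata": {"labels": {"multicluster.k8s.io/source-cluster": "b"}}},
]
# Cluster "b" last renewed 300s ago against a 60s TTL, so its slices go away.
kept = expire_stale_clusters({"a": 990.0, "b": 700.0}, slices,
                             now=1000.0, ttl=60.0)
```

Note this expires at the cluster granularity, matching the point made next: a missed lease says the cluster is unreachable, not that any individual endpoint is unhealthy.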
B: So, in an earlier draft of this, we thought about requiring that the ports just be normalized. But what if you wanted to roll out a new port? We need to at least support a temporary state where there's some disagreement. So the idea is we could at least try to do the right thing and take a union; as long as there are no conflicting port names, that should be okay.
C: I think that makes sense. Obviously target ports wouldn't need to be normalized at all, you could do that randomly; it's really the port itself. Again, as an implementation, I could see lots of issues there, like picking EndpointSlices based off of the source port and where that goes, so it might be worth at least specifying the expected behavior in the spec a little bit more.
B
Okay
yeah,
so
there
is
actually
a
good
section
on
kind
of
like
resolution
it
just
it's
in
the
in
the
cap,
so
yeah
I
guess.
Please
take
a
look
and
add
comments
and
anything
that
you
disagree
with
or
needs
to
be
clarified.
Please
let
me
know
because
yeah
there's
that
that
is
one
part
that
gets
messy
for
sure.
B
I
guess
all
we've
done
is
to
find
it
at
the
cluster
level.
So
if
the
cluster
is
unreachable,
because
if
the
endpoints
gone
from
the
source
cluster,
you
know
it's
gone
from
all
clusters,
but
if
the
source
cluster
is
unreachable,
that
doesn't
necessarily
mean
anything's
wrong
with
the
endpoints,
and
we
don't
want
to.
C: Totally. My question was much less around the TTL, I think that's a hundred percent right. It was more around general connectivity: if you go through a network split, what happens if connectivity is down? Do you flush all the endpoints? Do you only flush some of them? Do you do a liveness check on all the endpoints from the remote cluster, or do you do it just to one? All of the subtleties there, right? So yeah.
B: That is actually one of the main motivators for using this imported-service CR instead of trying to cram this into Service: so that they can coexist. You'd access the imported service via the supercluster.local name, and the service via cluster.local, and you get a cluster IP and a supercluster IP.
A
So
so,
just
just
to
make
sure
that
I
understand
like
the
the
imported
service
is
basically
a
parallel
API.
That's
just
for
exported
service
endpoints.
Is
that
right,
that's
right,
yeah
and
if
I
export
in
the
same
cluster
as
I
import,
the
imported
service
has
the
endpoints
that
I
exported
from
my
cluster
as
well
right,
that's
right
exactly
so!
B: Awesome, yeah. Any thoughts or feedback as you think about this more are definitely appreciated. I think, kind of going into the things we need to do to make this work: ideally, DNS implementations just become aware of imported services, so that we can actually make this standard, and kube-proxy becomes aware of the imported service and our label as well, so this is something that kube-proxy would support.
B
So
you
don't
have
to
handle
routing,
you
know,
I
would
ban,
and
the
goal
really
is
that
when
a
controller
creates
these
objects
in
a
cluster
routing
just
works,
and
so
that
controller
can
be
whatever
you
want
for
your
implementation,
it
could
even
be
manually
creating
you
know,
as
a
user,
to
test
this
out
manually,
creating
these
resources.
But
ideally
we
can
get
this
to
a
point
where
these
are
recognized,
well-known
resources
and-
and
you
can
turn
this
feature
on
and
just
start
creating
these
these
resources
and
everything
works.
I.
B
Now
you
may,
you
may
have
rules
that
prevent
anything
from
being
created
in
a
namespace
in
a
given
cluster,
but
ideally
you're
not
explicitly
having
to
import
a
service,
but
that
is
something
we
can
support
and
maybe,
instead
this
this
looks
like
you,
create
an
imported
service
to
basically
grant
permission
to
import
that
and
then
their
controller,
just
populates
the
property
and
in
place
I.
Guess
that's
their!
That
kind
of
came
up
as
another
approach.
B
Yeah
I
think
so
I
don't
see
why
we
couldn't
like
have
our
own
implementation
that
coexist,
certainly
for
alpha
yeah.
There
are
some
practical
implications
there,
like
basically
iptables,
is
a
mess
and
running
two
different
things
that
are
controlling
IP
tables.
All
the
time
may
not
be
ideal,
but
I
don't
see
anything
wrong
with
that
before
you
know,
yeah.
B: So yeah, multiple implementations could handle this in a different way. I guess, as an end goal: by the time this is taken out of beta at least, or hopefully taken out of alpha, but definitely by the time it's taken out of beta, ideally it would be a first-class citizen, you know, behind a flag.
B: And so, I guess, the last thing I wanted to mention; I don't think this needs too much discussion yet, but we have been thinking about the Gateway API and how that evolves and how this would work. So maybe there's a future where imported services are actually replaced with a Gateway and a multi-cluster Route.
B: Yeah, I'm not completely sure. I know there's been a bunch of progress; there was a good meetup at KubeCon in San Diego, but I'm not exactly sure. I think it's a steady but careful evolution there, so I don't know exactly what the timeline is, which is kind of why this is starting with imported service. If the Gateway API was going to be out tomorrow, I think there would be no question.
A: That is a very good question. It has been quite some time since I personally worked on something that had a chance of affecting the core API, and I must be open about the fact that I'm unsure of what the right point is to be at before you merge the initial KEP. So that's something that I need to do a little bit of research on. I think you should keep going, Jeremy, and we should think about what's the best, most widely applicable way to prototype this, and I'll...
A: Yeah, I think maybe the next steps are for us to figure out: are we trying to do something that will wind up in the kubernetes/api repo? Are we thinking that the API would live in maybe kubernetes-sigs? What are the coordinates that we're expecting to write the API into, if that makes sense, right?