From YouTube: Kubernetes SIG Multicluster 2021 Jun 08
A: All right, so I saw that Stephen Kitt put stuff on the agenda for today, but I haven't checked it since earlier this morning.
A: All right, we'll just give it a minute until the clock strikes 12:32, and then we can go ahead and get started.
E: Thanks, yeah. So, since Laura's been driving things to hopefully a conclusion in the near future, I thought this was perhaps, you know, a sort of speak-now-or-forever-hold-your-peace sort of thing about ClusterSet IPs. Basically, this comes up on the "the spec shouldn't drive implementation, or shouldn't specify implementation details" side of things, and I'm trying to figure out exactly how much of the ClusterSet IP is implementation detail and how much isn't. The background, unsurprisingly, is that in Submariner we don't currently support ClusterSet IPs; the big reason behind that is that we don't touch kube-proxy, so we don't have virtual IPs, and there would be other ways of dealing with it.
E: So it's not that we're necessarily driving to actually remove ClusterSet IPs, because we can implement them in different ways, but I just wanted to make sure that the spec is correctly bounded, sort of thing. And so the first question that arose when I was looking into this is: well, how essential are they? I guess they are already essential because of the stance taken in Kubernetes in general, that services should have virtual IPs, because relying on DNS to move things around isn't actually all that reliable, because of TTLs and applications that do a lookup once and then keep the result forever.
C: Yeah, I think that's basically it. We had this conversation a bunch back in the early days, and that characteristic of services and cluster IPs has just been really valuable time and time again, because so many client libraries, applications, whatever, don't actually do what they're supposed to do with DNS.
E: Yeah, it definitely makes sense. The impression I've gained more and more working with the MCS API is that the real value isn't so much all of the underlying technology that it assumes, or rather that falls out from it; it's just being able to consume services without caring about where they are, because that's what users end up seeing. Okay, so then: what saves us in Submariner, at least, from having too much complexity and having to touch kube-proxy, even if we wanted to support ClusterSet IPs, is the provision that's already in the spec that the ClusterSet IPs can be cluster-specific. So I was just curious whether there's any difference between ClusterSet IPs that are the same across a cluster set and ClusterSet IPs that aren't; and I don't think there is, really. I'm thinking that consumers shouldn't care about this unless they end up sending IP addresses to other clusters, but that would be really bad, and if we allow that, then we're just asking for trouble, right?
C: Yeah, I don't think the consumer should have to care at all. I mean, we kind of took the stance that all service consumers are assumed to be within a cluster, right, and that consumers of a service should rely only on information local to the consumer's cluster. So yeah, I don't really see why the consumer would have to care.
E: That's kube-proxy. Yes, it's kube-proxy's responsibility to implement virtual IPs for services. And so, and I guess the answer is probably no, but I was wondering whether it would make sense to sort of extend that kube-proxy responsibility to also cover ServiceImports. But that would be pretty bad, I think, in terms of allowing different implementations of the MCS API, because effectively you're just moving the interface, right?
A: Ideally, we would not have to touch kube-proxy, because the moment you do that, you have an entanglement between the version of kube-proxy and the functionality that you get, right? And, you know, future rollout and maintenance headaches around which versions of kube-proxy a customer or a user may be using. So, like I said, I'm not sure whether we recorded it as a goal, but there's certainly a tangible but not easily estimable cost to requiring changes in kube-proxy.
F: We should also, I feel obligated whenever the discussion comes up about "should we do something in kube-proxy", point out that kube-proxy is a completely replaceable component, right? It is an implementation of a somewhat undefined interface. So when we say "should we put it in kube-proxy", what we're really saying is: should we make it somebody else's problem by jamming the logic in there and saying, well, if you run Cilium, Cilium gets to implement this, and if you run Antrea, well, Antrea gets to implement this, right?
C: Or whoever else implements the single-cluster Service API, yeah.
E: So I'm not actually familiar with all the details behind this, but I was hoping that my colleague from the project would be able to connect; apparently not. But anyway, he was wondering about aggregated ServiceImports, and whether there isn't a risk there that they end up requiring a common ClusterSet IP across all clusters.
C: Yeah, I wouldn't think they do, right? Because the ClusterSet IP, I mean, assuming it's something like kube-proxy that's providing it, is only for consumers in each cluster, right? So if you have consumers in multiple clusters, as long as that ClusterSet IP can route to backends in multiple clusters, I don't think we have any requirements around a common IP. I think that is strictly up to the implementer.
C: The implementation is responsible for providing an IP, yeah. Okay, right. I don't think we've said anything that would prevent users from specifying IPs too, in an implementation; well, except that, you know, that creates all kinds of headaches that I think we've seen in other areas. Really, the implementation needs to own ServiceImport, and that's where the IP is created; so, by whatever mechanism it gets there, the implementation can decide.
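(For reference, a minimal ServiceImport along the lines of the MCS API proposal, KEP-1645, as drafted at the time; the spec.ips field is the ClusterSet IP that the implementation owns and fills in. The group, version, and field names below are the v1alpha1 draft's and may have evolved since.)

    apiVersion: multicluster.x-k8s.io/v1alpha1
    kind: ServiceImport
    metadata:
      name: my-svc
      namespace: my-ns
    spec:
      type: ClusterSetIP
      ips:
      - 42.42.42.42        # ClusterSet IP, populated by the implementation
      ports:
      - name: http
        protocol: TCP
        port: 80
    status:
      clusters:
      - cluster: us-west2-a-my-cluster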
G: Yeah, I'm just noticing that there's a slight inconsistency with the regular Service API. With a regular Service, I can create a ClusterIP Service without specifying an IP, and the implementation underneath that API goes and fills it in for me; whereas here I can't go and create a ServiceImport like that. It doesn't make any sense for me to create a ServiceImport without the ClusterSet IP specified.
E: Yeah, one might even consider that ServiceImport is an in-spec implementation detail, something that should be completely hidden from administrators and end users, really. Well, I mean, they never need to interact with it.
G: I'm trying to figure out whether there's some value in actually allowing for a division: a thing that writes ServiceImports, and a completely independent system that implements them. Would we see that emerging, and if so, would we want to have semantics around "I can create one without an IP specified"?
C: Yeah, I'm not seeing creating the ServiceImport without an IP specified, but definitely, like, you create a ServiceExport, which is just like specifying an IP, and then the import gets created with one. Sure, sure. I guess I feel like we've broken the Service concept into two pieces: the Service spec is now kind of the set of Services and ServiceExports that make up that service, and the Service status is the import; it's owned by the implementation.
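(The producer-facing half is correspondingly minimal; a sketch per the same KEP, hedged the same way. Exporting is just marking an existing Service by name, with no IP involved.)

    apiVersion: multicluster.x-k8s.io/v1alpha1
    kind: ServiceExport
    metadata:
      name: my-svc         # must match the name of the Service being exported
      namespace: my-ns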
E: Yeah, it's always DNS, right; yeah, that covers it from my perspective, I think. Thanks for the discussion; really, it's solved. It all falls back down to the first point already: since we want the same guarantees for ClusterSet services as we have for any in-cluster services, then we need ClusterSet IPs, and everything falls out from that.
G: Sorry, I don't think I actually understand that reasoning. Can someone try to just restate it for my thicker head?
E: So yeah, going back to in-cluster services, without considering the MCS API: in the Service description itself there's this requirement that services are backed by virtual IPs that remain constant, and the value there is that, basically, DNS can be relied upon to work. And really, an exported service, or rather an imported service, basically a cross-cluster service, should be consumable in the same way an in-cluster service is. That ends up meaning that it should also be backed by an IP address that doesn't change from the consumer's perspective, which means we end up in the same place, well, with the cluster boundaries applied: we need a virtual IP address.
F: So, sort of retro-continuity: if you go back to the beginning of Kubernetes and you say Service is an API that describes a virtual IP covering the set of pods that make it up, there's no implementation; ignore kube-proxy, there's just the API, right? You can choose to implement that with actual load balancers and just dynamically program them, or you could implement it in some other way. Kube-proxy is a reference implementation, one that happened to stick, of a way to do that, right?

F: In fact, the first kube-proxy was literally a proxy: a userspace thing that would copy bytes from one socket onto another socket. And with multi-cluster services we want to hold that API boundary a little bit more rigorously and say the API boundary is ServiceImport. It defines an IP address; we don't care about, or prescribe, where that IP comes from or what it means, but it is an IP address that represents the set of pods across clusters that make up a multi-cluster service.
B: Can you see my screen now? I've taken control. All right, okay, great. Okay! I wanted to give a little update about the CRD bootstrapping project that's kind of happening under the waves. A little bit of background: ClusterID, which we described in the "ClusterID for ClusterSet identification" KEP, and which is also referred to as ClusterProperty (that's the CRD name), needs to be installed before the MCS API controller, which depends on it, can do stuff, particularly program DNS records.
B: Maybe Lucy's haiku is also relevant here. We also, as a SIG, believe that the CRD would be useful outside of the club of just clusters that implement the MCS controller, and that it should be available on cluster startup. So this next slide, I think, is pretty recognizable for a lot of people who've been following the ClusterProperty CRD, but just FYI, this is what the actual CRD itself is.
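(For readers without the slide, roughly what a ClusterProperty looks like, sketched from the KEP-2149 draft; the about.k8s.io group, version, and the well-known property name below are as proposed at the time and may differ in the final API.)

    apiVersion: about.k8s.io/v1alpha1
    kind: ClusterProperty
    metadata:
      name: id.k8s.io          # well-known property holding this cluster's ID
    spec:
      value: my-cluster-1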
B: So, a little bit more background: a while ago we went to SIG Architecture to talk about this, about whether it even should be a CRD, or whether we should implement it in k/k if we believe it should be widely available and available on cluster startup. The takeaway from that meeting was that the existence of a CRD bootstrapping mechanism independent of the MCS controller, which was being discussed with SIG Architecture at the time, is sufficient cause to not put ClusterProperty in k/k. So that was kind of the operating principle we were moving forward with. I just want to give an update on how that's going, from what I've seen thus far; some other people may also be more or less up to date on this. There are basically two or three major conversations about it, as far as I can tell. There's this PoC, which is on GitHub by the group that's working on this, which uses a post-startup hook to install CRDs from a manifest. Then there was an announcement on SIG Architecture's mailing list, which got cc'd to our mailing list too, so people may have seen it, and there's some conversation coming out of that. And then it also got brought up at SIG API Machinery last week.
B: That meeting is kind of what I'm reporting on a little bit now. From my perspective, this has been split into two pieces. First, what makes the API server reject requests, or otherwise make it feel like you can't access this cluster yet, until this thing is installed; that's kind of the "deny traffic" part of it. And second, what actually installs the CRD itself, which then conceivably opens up the deny rule so that you actually can come in. So it kind of got split into those two things.
B: As the conversation progressed, the current status is that on the deny side there's some talk about adding another webhook, since apparently you can only have one in the kube-apiserver's authorization chain, so that you could block traffic until a target CRD, or all target CRDs, is or are installed. So that's the deny side, and it sounds like that's definitely going to progress. But then, what installs it?
B: That has been a little bit more controversial, from what I've seen. It could be done by some platform admin, programmatically or by a human. It could be done more on the open source side, by an arbitrary bootstrapping binary that's run with the add-on manager, or by a new controller via the controller manager. And just to skip down, connecting this red asterisk with this red asterisk: this installer part seems pretty controversial. As of that last meeting, most of the consensus is being built around denying traffic.
B: There's a distinction between cluster startup and cluster upgrade; these processes have different properties from SIG Architecture's perspective that make them complicated to address. Hopefully that made some sense; it made more sense in my head than I feel like it did when I said it out loud. But this is the status right now. For me personally, I think I'm going to follow up with the folks who are writing up the pros-and-cons list about these three points.
B: My perspective is that we, as SIG Multicluster, do care about where this install step happens, and having it happen on the open source side is what we want; but let me know if I'm super off base there. And that's the info.
B: Well, I would say the end of the SIG API Machinery meeting was kind of rushed, which is why I want to follow up a little bit too. But of these three options, by a platform admin, by the add-on manager, or by the controller manager, the add-on manager and controller manager seem attractive to our SIG from my perspective, because they would be in open source. But they were on thin ice, I guess, by the end of the meeting; then again, it was kind of rushed.
F: Laura, is there a doc that discusses these things in detail? I didn't get to go to whatever meetings these were.
B: It doesn't exist yet; part of what happened in the very last 30 seconds of that API Machinery meeting was "we should make a doc", right? So I'm going to run after those people and see that that happens, or see if I can help too, because I have some of the context as well from following along. So I think that will make...
F: ...it clear to everybody. I mean, the truth is this is a hard problem. I started a doc on this, I'm trying to find it now, like 18 months ago, and the accumulation of issues that needed to be tackled pushed it out of the realm of reality for me to tackle myself at the time. I'm hopeful that we solve it; we have to figure out how to tackle it, right? There has to come a point in time where we decide this is taking too long, that we don't really want to delay this valuable functionality, and we make an argument for "well, sorry, it just doesn't seem reasonable to wait for the boat to come in".
F: So let's do the other thing. But we've done this in the past, and we're still doing it, and at some point we have to stop doing it, right? And I don't mean to make SIG Multicluster the sacrificial lamb on this one; I'm also holding SIG Network's feet to the fire on the same topic. So, spicy.
C: I guess my thoughts on this are: yes, I agree, we can't wait forever for this to happen. At the same time, we want to make progress with ClusterProperty, because we have some immediate use cases. We think it would be really great if this was bootstrapped in the cluster, because we can come up with lots of theoretical ideas for how you would use a cluster ID that was always present, but we don't have that. So I'm trying to think: if we say this is taking too long, which way should we actually go? Is it worth pushing in-tree, for reasons that we don't really understand, or do we just go with the CRD, because we know that actually does solve our needs?
A: So, from my perspective, two things. One: before we take any action because we feel that we have been waiting too long, we should try to quantify how much longer we think would be too much to wait, in the interim, before that pathway for CRD initialization exists.

H: Josh also asked: do we have a KEP for our desired path? We could link that to the SIG Arch KEP and use it as pretext for lobbying for faster action.
B: We have some information in this KEP, like a section on whether or not to CRD, which I think we could stick up in there. Our needs are a direct influence on speeding up this project in SIG Architecture; in the SIG API Machinery meeting we were called out directly, so we are definitely a known use case. But we can put pen to paper for sure on that.
A: One dimension that I think should not be forgotten is that, like Laura just said, we did apply some activation energy to have this recognized as a priority. So I think it would be great for us to be one of the first consumers of it, and to be there to provide feedback on it, because we asked for it to be prioritized. So that's another thing in my head, arguing for a bias toward "let's call it a CRD". We know that CRD bootstrapping will materialize and improve in the future, and it will be great for us to provide feedback around it; those new things are really, really important and valuable to give feedback on, because, you know, the newer it is, the fewer people are going to use it.
B: Cool. I will give further updates as I have them, if this was valuable, and if anybody else has any comments or concerns, let me know. Otherwise, I will cede my time.
A: I'm going to take a stab at pronouncing your name; is it Anil?
I: Cool. So, just to introduce myself, my name is Anil.
I: The background is basically: I work for Intuit, and we also have some more representation from the company here. We already have a multi-network setup, with Kubernetes clusters spread across different networks, and today we are using Istio and a service mesh, plus, on top of that, an additional controller, pretty similar to an MCS controller, that orchestrates and puts all these services together with a load balancer. That's what this diagram is. And the reason for presenting here, and getting this, and the doc, across, is basically bringing in this use case as a valid use case, and also overlaying this...
I: Okay, hope you can all see this better. So basically, the scenario we have is that every Kubernetes cluster belongs to a region, and is also in a separate network. If you look at this particular example, there is a workload A that's trying to consume workload B, which is spread across clusters: it has instances running in different clusters, in different networks.
I: The view that workload A would see, for example when it's calling b.dns (I just made that up, it could be anything), is that it would have a couple of endpoints. It would point to b.svc.local, which is the cluster-local IP we were discussing at the beginning of this meeting; and then it would also have loadbalancer3.com, which is basically the ingress into cluster 3, where the other instances of workload B are running. That could be the same region or a different region; in this case it happens to be a different region.
I: Now, if you see, there's also a concept of weighting here, the weights added to the endpoints, because you might have a different distribution of the number of instances running, or, for other reasons, for example canarying across clusters, you might want to distribute the traffic differently across these endpoints. So that's the view workload A would have of workload B as it consumes b.dns. And the way the DNS is set up, again, it uses the underlying Envoy proxies.
I: So it's using all the constructs that Envoy has, and all that the controller does (the controller I'm talking about, which is very similar to the MCS controller) is render this view as it observes all these clusters across different networks.
I: It understands that they are on different networks, and it renders this configuration that workload A can now consume, where it can simply rely on calling b.dns and it's totally transparent; the rest of the magic is handled by the proxy, based on the weights and the locality defined for each of these endpoints. And it also aggregates all of these. You can think of it as very similar to the ServiceImports we are trying to do with MCS, where the ServiceImports basically represent the service in every cluster. So this is very similar, except, yeah...
I: ...have endpoints directly. And then the gateway we are talking about here is an SNI router, with passthrough, where it can still understand the name the service is trying to call, and it can route appropriately to the respective workload, based on what the hostname or the DNS name is.
A: Interesting. One question that comes to mind is whether you think it's possible that you could turn your existing implementation into an implementation of MCS; like, with some mutations, can it implement MCS?
I: Yeah, I think we think so, and we see there are some constructs that are already there; that's the thing I'm going to talk about next. But then there are some gaps that would need to be filled before that implementation can happen, and that's basically the second part of the discussion. So if you look at the very same problem, I mean, this is based on the MCS spec:
I: Workload A is going to consume the ServiceImport B, and if you look at ServiceImport B, the way you define it, it's backed by EndpointSlices. Each EndpointSlice represents a service behind it; it could be the cluster-local one, or it could be a remote instance of that service B, right? And EndpointSlice already supports FQDN as well as IPv4 addresses, so you can render it differently. So the implementation that we have can basically do this; it can render the IPs.
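(A sketch of what that rendering could look like: an implementation-managed EndpointSlice backing ServiceImport "b", using the FQDN address type to point at a remote cluster's gateway. The label is the one the MCS API KEP uses to associate slices with an import; the hostname is illustrative.)

    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: b-cluster3
      namespace: my-ns
      labels:
        multicluster.kubernetes.io/service-name: b   # ties this slice to ServiceImport "b"
    addressType: FQDN
    endpoints:
    - addresses:
      - loadbalancer3.example.com                    # the remote cluster's ingress gateway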
I: Now, the problems come in; there are two problems. One is identifying the network for the cluster: as part of the MCS spec, that isn't defined yet. We need to see whether these clusters belong to the same network, which would mean the IPs are routable across clusters, so that you don't have to use a load balancer and can directly connect to the workloads in another cluster. If you can identify the network, then again, going back to what Laura presented, we could use the same machinery, something very similar to ClusterID.
I: It can also be extended to support the ClusterSet services that can be exposed from the gateway, with the very similar model that I explained; I think it should all work. One of the few open questions we have is that EndpointSlices today do not have weights.
I: The problem here is that with whatever sits behind this gateway on the right-hand side, you don't know how many instances of that workload are running, or the number of pods. So it becomes difficult for the workload A that's consuming this workload B to load-balance in the right manner, because all you have is a load balancer IP, and that doesn't give you enough information to distribute the load evenly.
I: That is possible only if we can have weights on the EndpointSlices, so I think that seems to be one of the important gaps. Now, the other thing: the routes and gateways, as defined for the Gateway API, I think they should again be extended to support the clusterset-local hosts, but I don't see a major concern with that. Still, that's something that's required before this model can be implemented.
I: That's true, yeah; that's correct. I don't think the spec needs to be modified; I think it's more on the implementation side. I don't think there's anything there implemented as, or restricted to, just cluster-set-local.
I: It is a very similar model, except that Istio has multiple deployment models. There is the Istio where every cluster has its own control plane, which isn't aware of the other network and runs on a single cluster; that's the model we use, because for us, with hundreds of clusters, we cannot have a single control plane look at all the clusters, or all the endpoints. Now, that being said, there is also another model of Istio where you can run...
F: But they also, I'm not super familiar with Istio, but they also have a way of representing, I think, external endpoints. So in this case, if you had two different control planes, you could represent the other one as an external endpoint in the same service, right?
I: Exactly; that construct is called ServiceEntry. You can bring in a service that way, and that is exactly what I was referring to here in this box: the b.dns is basically a ServiceEntry that's translated by Istio into Envoy configuration that the proxy understands.
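(A sketch of such a ServiceEntry, assuming Istio's networking.istio.io/v1alpha3 API; the host and addresses are illustrative, and the per-endpoint weight shown is the knob the weighting discussion above relies on.)

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: b-remote
      namespace: my-ns
    spec:
      hosts:
      - b.dns                                # the name workload A calls
      location: MESH_INTERNAL
      resolution: DNS
      ports:
      - number: 80
        name: http
        protocol: HTTP
      endpoints:
      - address: b.my-ns.svc.cluster.local   # local instances
        weight: 70
      - address: loadbalancer3.example.com   # remote cluster's gateway
        weight: 30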
I: Yeah, I think that's pretty much what I have. And then network identification, I think I already talked about: we could use something very similar to ClusterID to represent the network for that cluster, for the MCS controller to consume. So yeah, with that, I mean, the whole point of having this is to have a discussion, so all questions are welcome.
D: Just one quick thing: I think one of the things we were looking for feedback on, too, was how much of this we want to pull in. Right now this would be a very Istio-ish, specific implementation of MCS. I think part of the question was: do we want to bring some of this into more general concepts, where it could be applicable beyond just an Istio implementation of MCS?
I: I think it's for both, I would say, yeah.
D: We primarily use it for HTTP and HTTP/2 right now, but...
F: The reason I ask is that the FQDN thing sort of set my hackles up. Obviously a plain old TCP client is never going to be able to resolve the FQDN properly through iptables, right? There's an extra step that needs to happen there, versus a proxy, which could do dynamic FQDN resolution if the proxy is doing full encapsulation.
D: Yeah, today we definitely use the proxy for the full encapsulation to be able to resolve it. I don't think we'd be able to make it work without the proxy there.
I: I think, yeah, Tim's point is: if it's HTTP, then that's possible; if it's TCP, then it's not.
F: Yeah, right. If I'm an arbitrary client and I want the full set of endpoints, and you have one EndpointSlice that's filled with IPs and one EndpointSlice that has an FQDN in it: if I'm a smart client consuming it directly, like a proxy, cool. If I'm trying to consume it through something transparent, like IPVS or eBPF or something else, then something has to resolve that.
F: Yes, but if I'm going to resolve an FQDN into... so the virtual IP is a frontend to a set of... well, let me back up. In the default implementation, kube-proxy, that virtual IP is a frontend to a bunch of backend IPs.
F: If we want to put a hostname in there, something has to actively resolve that name to an IP or set of IPs, put that into the set, and then repeat that process over time, right? SIG Network is talking about how to do FQDN-based policies, so it's not like it's off the table, but it wasn't really considered for Services; it was only considered for the policy space. I wouldn't be against it in this regard, but having it be in the proxy makes a lot more sense.
G: Actually, I think that Gary had his hand up first.
K: Oh sure, that didn't matter to me. Yeah, I guess, being new to the group, but having played around with multi-cluster services: what struck me a little bit about this proposal (I mean, it's cool, don't get me wrong), and maybe I'm on the same point as Tim, is that it just seems like you could end up with this sort of very layered proxy and load-balancer architecture, moving things up more to layer 4 and layer 7, rather than dealing with things at kind of the layer 2 and 3 networking layer, which I thought multi-cluster services was more designed to do. So I guess that would be my take, but maybe I missed something in the proposal, in what you were saying.
F: So, if I can respond real quick, I don't mean to jump the queue, so I'll be fast. From a SIG Network point of view, I don't actually object to the multiple layers here; I think it should be a valid model. I mean, my enthusiasm for perfect solutions has been tempered by talking with a lot of customers, and this seems like the sort of thing that could actually work in practice, in much broader practice, so I'm not inherently against it.
C: Yeah, I think to me it all sounds reasonable if the consumer experience can stay the same; I think that's really the goal of the MCS API, really focusing on the consumers and making it as easy as possible. But there are some questions there, and obviously this introduces some complication. And then I think the big question I'm getting here from Anil is: should this be something that MCS actually tackles, or is this some Istio add-on kind of thing?
F: Other than weights, is there something that this needs from the spec? Like, if we added weights, this could be a completely transparent implementation of MCS, right? I think so, yeah. And I mean, my position on Istio has always been: if Istio can implement MCS, then cool, that's a valid implementation of the MCS semantics, right?
F: Yes, it has been proposed in SIG Network several times. We haven't engaged on it yet because it's a pretty slippery thing to grab hold of; remote clusters with multiple endpoints are much more obviously necessary than different-sized pods, because size is not always proportional to capacity, and capacity is not always proportional to CPU or memory.
F: You need some sort of feedback about capacity and fullness that isn't necessarily related to requests per second, and isn't necessarily related to connections per second; it's related to the actual state of the observed universe on those backends. And that's just something SIG Network doesn't have an answer to yet, but it is definitely on the radar.
C: So, one more on this topic, and I'm also sorry for kind of jumping in again, but on this topic: I brought up at the Istio meetup last week the idea that maybe this can be an Istio extension on EndpointSlice; like, Istio can implement weights for multi-cluster services based on EndpointSlices, and we could maybe start there. To the consumer and the exporter of a multi-cluster service, the API, you know, the contract, is completely honored, and there's just a little extra magic happening behind the scenes. To me...
L: I think the issue we had with that was that we kind of wanted Istio to be able to layer the functionality on top, right? We want to be able to work with an existing MCS controller, as well as be the MCS controller, and if we start relying on Istio extensions, then that doesn't really work out so well.
C: I always think about it with a weight, and basically these would be managed by EndpointSlices, where kube-proxy ignores them and Istio takes responsibility. That's one answer; we'd need to know what that actually looks like, yeah.
C: Well, instead, here's the thing: I don't think that quite is the scaling concern, because it wouldn't be per endpoint; it would be per remote network that you need weights. So you could just have, and this would be a little ugly, but if you had three remote networks, you could have three EndpointSlices, each with one endpoint, and each annotated with the weight. I don't think it's so bad.
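(Concretely, the "little ugly" version might look something like this: one slice per remote network, with the weight carried as an annotation that kube-proxy ignores and that a mesh implementation could honor. The annotation key is purely hypothetical; weights are not part of the EndpointSlice API.)

    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: foo-network-b
      namespace: my-ns
      labels:
        multicluster.kubernetes.io/service-name: foo
      annotations:
        endpoint-weight.example.io/weight: "30"      # hypothetical, not an existing API
    addressType: FQDN
    endpoints:
    - addresses:
      - gateway.network-b.example.com                # one gateway endpoint per remote network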
G: ...to try to carry that information, because, you know, you can't have more than one of a given name. The question is sort of...
I: I think it should be a native capability. Right now there are a lot of extra constructs that are used, plus the controller; it becomes one of those things where it fits only the model that we have. But we think it fits the broader community, and most of the other companies also use the same thing and are running into basically the same problems, so I think having it at the Kubernetes level would benefit the community, and also be more generic than what we have.
L: We've also had a lot of customers that really, you know, just kind of want to gradually opt in to mesh, and we don't really have a good mechanism for that in Istio. We just kind of assume that everything is everywhere by default, and then we have a little bit of hackery where you can define certain services as cluster-local, so if you're in a cluster you only go to endpoints in that cluster; but we don't have anything beyond that.
L: So we probably eventually want to have a more sophisticated policy, where you can more generally define which endpoints you can access from where, to meet a wide variety of use cases. But this seems, at least as an interim, a pretty good stepping stone toward a better policy API as well.
C: We are almost out of time, and I've got a hard stop, but at least I had the last hand, so that's perfect. There was one other idea I want to throw out there: how common is it, really, to have a single service with endpoints that span multiple networks?
C: Certainly multi-network, multi-cluster makes a lot of sense: you've got clusters in different networks, and they need to consume services from each other. But how often do you really have, like, service foo with endpoints in clusters in networks A and B, instead of in multiple clusters potentially all within one network? Because if we could just add a constraint saying that, at least for now, a service must be backed entirely by endpoints in a single network, your consumers can be anywhere; you can have as many services as you want across different networks, but each service is wholly contained within a network. Then this gets really easy. For remote-network services, the ClusterSet IP is just a gateway to that other network, the endpoints are all on that network, and we get rid of that other layer. The MCS controller just does a little magic to figure out when a ClusterSet IP should be local and when it should be, you know, a gateway; but for consumers and exporters nothing changes, and overall this seems to be a lot simpler.
C: No, I'm thinking you just set the requirement that a service must be exported from only one network; you just say we don't support a service with endpoints in multiple networks.
D: I don't know, just some questions. For us, a lot of our services will run in multiple regions, so one service will run in, say, US West and US East, and in one of those regions the services themselves will either run active-active, depending on whether their state management can do it, or that region will be their failover region. So then, depending on where the client is, it would look... that's where you get independent behavior. For example, you might have two services running in the same cluster, but one of them fails over to the other region; we have different scenarios like that. So if we represent it as one endpoint, then we would have to use another layer on top of this to manage that endpoint, deciding where to route the traffic to. I don't know if that makes sense or not.
C: Cool, we are just a bit over time, and I want to be respectful of everybody's time here. But thanks, everyone, this week, Stephen, Anil, Laura, for great conversations; this was excellent. It'd be great to keep this going next week too, if there are more thoughts around this. We've talked for a long time about needing to figure out multi-network for MCS, and the answer has been "we'll figure it out in the future", and it sounds like we are approaching the future.