From YouTube: Kubernetes SIG Multicluster 2021 May 11
Description
No description was provided for this meeting.
A: Hello — I did. Hey Jeremy, I can hear you. Awesome. So, did folks have an enjoyable KubeCon experience this go-around?
A: It's definitely on the early side. I have a very early schedule, so our slot was at a time when I would be awake and alert anyway, but I think even for the East Coast a lot of that stuff would have been super early, and I can't even imagine the West Coast.
A: Stretch. All right, well, maybe we can go ahead and get started. Welcome, everybody, to the May 11th, 2021 meeting of Kubernetes SIG Multicluster. Laura, I think you are on the agenda first, unless my eyes deceive me, because I'm looking at the agenda right now.
C: And okay, hopefully you all can see my slides.
C: So if we still need to talk about them more, but they're not blockers to beta graduation, I'm interested to know that, because I definitely want to know what the specific blockers are for beta right now, just for prioritization purposes.
C: So the topics that are still open: the existing beta graduation point about kube-proxy consuming ServiceImports and ServiceExports; a little bit about the Cluster ID alpha/beta/GA timeline and whether we want to align those; what's the latest on the CRD bootstrapping project; and then there are a couple of SIG Network projects that came up last time we met that we think might interact with the MCS API
C: ...in some way, like we need to adapt the MCS API to do what they do. So I did a little bit of research into where they stand right now; I'll report on that for anybody who doesn't already know, and then we can discuss whether those are relevant.
C: ...beta graduation, or kind of separate. So that's the schedule. I want to open up with the topic that we tabled last time, in particular since Jeremy is here — I know I had mentioned that he had some thoughts about this, so we had kind of tabled it to make sure he could be involved. So this line item, "kube-proxy can consume ServiceImport and EndpointSlice", is currently an alpha-to-beta graduation item. What I'm bringing the discussion up about — what I'm proposing — is whether or not this should still be included.
C: Is this a restriction of the standard, or is it more of an implementation detail? And if so, does that mean it doesn't need to be part of the beta graduation requirements for the API KEP? For example, the current implementations use dummy services to get around the fact that kube-proxy can't interact with these directly.
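(For reference, a minimal sketch of the dummy/derived-service pattern being described, assuming a flat network. The derived Service name, the provenance label, and writing the allocated ClusterIP back into the ServiceImport are implementation choices, not something the KEP mandates.)

```yaml
# ServiceImport as defined by KEP-1645 (multicluster.x-k8s.io/v1alpha1).
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: my-svc
  namespace: my-ns
spec:
  type: ClusterSetIP
  ips:
    - 10.42.42.42          # clusterset IP reported back by the implementation
  ports:
    - port: 80
      protocol: TCP
---
# "Dummy"/derived Service created by an MCS controller so that an unmodified
# kube-proxy programs a VIP for the import. The name is illustrative; the
# controller would copy the allocated ClusterIP into the ServiceImport above.
apiVersion: v1
kind: Service
metadata:
  name: derived-my-svc     # hypothetical derived name
  namespace: my-ns
spec:
  ports:
    - port: 80
      protocol: TCP
---
# EndpointSlice carrying the remote endpoints, labelled so kube-proxy
# associates it with the derived Service rather than any local Service.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: derived-my-svc-remote-1
  namespace: my-ns
  labels:
    kubernetes.io/service-name: derived-my-svc
    multicluster.kubernetes.io/source-cluster: cluster-b   # illustrative provenance label
addressType: IPv4
ports:
  - port: 80
    protocol: TCP
endpoints:
  - addresses:
      - 10.1.2.3           # pod IP in the exporting cluster (flat-network assumption)
```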
C: So it's still possible to implement the MCS API without this, and there is one pro: if we don't include it, then the fully specified MCS API is not restricted by a cluster version — not restricted by a version of the kube-proxy implementation. So I'd like to open the floor to talk about that. Jeremy, maybe you have more color to put on this too, since you had wanted to be part of the discussion.
D: Yeah, just looking at where we're at and what's out there right now using this initial dummy implementation that we put up on the SIG repo...
D: I don't know that we need to keep going with our original plan to force kube-proxy to consume ServiceImport directly; it doesn't seem like it buys us much. It does open up some new options for flexibility, but we could always do that in the future even if it's not in this KEP. There could be something like: up until Kubernetes 1.23 there are dummy services, and then eventually we switch over to kube-proxy consuming ServiceImport directly. Either way, I don't think MCS...
D: I think we could — from my understanding, if I remember back when we were meeting with SIG Network and chatting it through — there was some openness to it, but it would be a fair amount of work. It would definitely impact release schedule and I'm not sure we get anything out of it.
A: Yeah, you definitely get a linkage with the version of kube-proxy.
D: Right, right, exactly. In terms of functional changes right now, it doesn't seem like we get anything good and we do get some added friction. So I'd just be inclined to keep that as an implementation detail, but I think we should be very clear that whatever decisions we make should leave it open as an implementation detail. Like, we shouldn't—
D: Actually, I don't think we should encode the dummy service in the spec either, for example, because you might want some new data plane that you built in your distro that consumed ServiceImport directly, or your own patch to kube-proxy that, like—
E: Cool, cool, yeah. And as an additional data point, Submariner consumes ServiceImports without changing anything in kube-proxy.
C: ...a range of implementations — the dummy service, making kube-proxy work with it, or your own thing. Okay, cool. Anything else? Seems final; I can take that action for sure. Moving right along: last time we met there was also the point of whether the MCS API's beta should be tied to the Cluster ID KEP's beta, and so on. The big highlight here, even though it's in a very small font, is that the MCS API depends on Cluster ID for DNS. So while DNS is technically optional, it is pretty common.
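(For context, a sketch of that dependency: the cluster ID comes from the About API KEP, and multicluster DNS uses it in per-cluster records. The exact kind was still in flux at the time — ClusterClaim in early drafts, ClusterProperty later — so treat the shape below as an assumption rather than a settled API.)

```yaml
# Cluster-scoped resource carrying the well-known id.k8s.io property
# (About API / cluster ID KEP); the kind and group shown are illustrative.
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: id.k8s.io
spec:
  value: cluster-a
---
# Where multicluster DNS needs that ID: an exported service gets a clusterset
# name, and headless services additionally get per-cluster records, e.g.
#   my-svc.my-ns.svc.clusterset.local
#   pod-0.cluster-a.my-svc.my-ns.svc.clusterset.local
```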
C: So, do we think that's grounds enough to tie their graduations to each other? That's my question — and then I will bring this next point up — but are there any initial feelings about that?
A: I think, ideally, at a certain maturity level you only depend on things that are at your maturity level or more mature. So in that light it seems like it should be a requirement that Cluster ID is beta.
D: It's kind of an interesting thing, because right now the primary use case is MCS, so I imagine it will kind of roll together with MCS.
C: Right, yeah. I think we're having a bit of a chicken-and-egg problem. Even though we went the route of "to CRD or not to CRD" and got some advice from SIG Architecture — and their answer was the CRD bootstrapping project — since that's still kind of uncertain, there's a feeling amongst us that maybe we still need to see if it will be good enough for our purposes. To give the update on that: the project is moving forward.
C: Possibly people saw there was an announcement that was cc'd to us. There's a POC pull request, there's a doc that I've linked to that I got from Nikita directly — and I also stalked it even more — and SIG API Machinery is probably going to be talking about it at their next meeting on 5/19. So maybe we want to inject ourselves into any or all of those.
C: When we met two weeks ago we weren't even sure if this was really getting off the ground, because it was still kind of in stealth mode. But it's definitely in the public eye now, and from what I'm reading it's going to undergo a KEP process.
C: So we'll have plenty of opportunity for input, but that definitely means this project undergoes the same governance — so if that slows down Cluster ID, then that slows down the MCS API. I guess we have to decide whether it's valuable to us to tie them all together.
A: Well, I think it's about how valuable having a dependency on Cluster ID being more mature than alpha is. The cost of that linkage is that now they're linked; what you get in exchange, as a consumer,
A: is that if we take MCS to beta and Cluster ID is beta, you have a real ability to reason about things in terms of the maturity levels you would expect. If we take MCS to beta and we say Cluster ID is alpha, and there's some breaking change to Cluster ID, that will potentially ripple up into your view as someone who only cares about MCS. That is the reason you want to depend on things that are at your maturity level or later.
D: Right, yeah. I think if we're going to depend on Cluster ID, they have to be locked together. At the same time, I think we need the Cluster ID graduation criteria to include an implementation that uses it, or some kind of use case, right? So I'm actually wondering if we shouldn't just tightly couple both of them — I don't know about merging the KEPs, but almost — because I don't know what a beta Cluster ID without any consumers would mean anyway.
D: So we should probably just couple them together.
A: It does seem, in this light, like they are at this point one thing. Whether we think Cluster ID will have use outside of the boundary it's sort of inside right now — if we combine them, it does get simpler to reason about the whole thing, I think.
D: Yeah — and it also seems like (anyone jump in if there are other things I'm missing here) the big blocker for Cluster ID even going beta is probably just figuring out deployment. So maybe we should just check in after we see what's happening with CRD bootstrapping. As I've thought about this more over the last few weeks, at least for our current use cases I don't know that we actually need bootstrapping. Given the other use cases we've outlined
D: for Cluster ID, knowing that it will be possible at some point is important, but it doesn't need to exist now. I don't think we need to wait for a proper bootstrapping mechanism to graduate Cluster ID.
C: So I think that's the nuance in there, and that also opened the floor for "oh, maybe Cluster ID should chill in alpha for a while until that settles" — which, based on this conversation, then also ties the MCS API to chilling in alpha until Cluster ID settles. Anyway, that's kind of just a rehash of what had been talked about last time.
C: They should be tied together for the principled reason that your maturity level shouldn't depend on things that are less mature than you. And then I think it can be a separate tangent to decide whether Cluster ID's own beta graduation is truly blocked on "we can bootstrap it today" — whatever day that is — and hopefully we don't get too caught up in it again. But that's what I'm taking away from this.
C: If I even wrote it down, it's that yes, we tie them to each other's beta graduation — I can literally put a line in the MCS API KEP pointing at where this other KEP is at.
D: Yeah, yeah. No, I think that makes sense, and the more I think about this, I wouldn't want to block Cluster ID on figuring out CRD bootstrapping. We know it has to happen at some point, but we don't have any tangible use cases that require it right now for Cluster ID that I can remember. I think we want a CRD regardless, especially if we're going to couple these two together.
D: Otherwise MCS gets all the burden of being in k/k by proxy without actually being in k/k. And all of our reasons for wanting the Cluster ID CRD bootstrapped in a deployment are theoretical use cases right now — interesting for sure, but still not real.
A: So it sounds like you're articulating a preference for a CRD, Jeremy, and the future bootstrapping is additive but fundamentally doesn't change the in-or-out decision, right? It makes what we think will be the best way to do it better, but it's not the difference between "we should" and "we should not" do it.
D: Right. And some of this is — I've been chatting about this kind of thing with Daniel Smith, and at some point lots of people will need CRD bootstrapping; we will solve it. This doesn't need to be the only driver. I think we should keep following up, though: even if we don't block on CRD bootstrapping, we should still definitely be looking for it and do what we can to help make it happen, right?
C: All right. Well, I can take these, and I think the meta point — the relevant point for beta blockers — is, as I said before, that they depend on each other. And then there also needs to be a "keep following up with the CRD bootstrapping project": we should stay involved, because we are a use case, basically, and we should make sure that project keeps going forward and that our needs are at least known.
A: Yeah, and it provides an example use case to test CRD bootstrapping with.
C: Right, yeah. I was happy to see in the announcement, too, that we were mentioned explicitly as the use case, so we're definitely on their minds, and we can keep hanging out there. That's great. Cool, all right, I'm going to move on from this one. To finalize it: yes, we tie them to each other.
C: I don't think we'll structurally combine the KEPs unless people think it's worth it; I'm going to leave them separate for now.
C: And then there are three related SIG Network projects happening that we think — either from conversations I've eavesdropped on or because they're mentioned in the MCS API KEP — we need to interface with somehow; we need to do the things that they do, but at the multi-cluster level. So I did a little dive into them.
C: Okay, so: the service topology API revamp. The current hotness on this is KEP-2433, topology aware hints, which has this release schedule. It looks like there were some other, abandoned KEPs that tried to handle this in a broader, more configurable way, but this one is the latest iteration and seeks to do a smaller portion.
C: There are some details here about how it's suggested in the KEP to work: the endpoint controller is going to have EndpointSlices know where their pods are, because the nodes of those pods carry their zone in an annotation/label. Then there's an "Auto" mode that determines which zone should be aligned with which endpoint in the EndpointSlice, based on being in the same zone (from that same annotation) and also the proportion of CPU cores in each zone, to try not to overload any individual zone. And then there's some work going on to make kube-proxy consume those hints.
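(A sketch of the single-cluster mechanics just described, as I read KEP-2433 at the time; the annotation name and hint fields may have shifted since, so treat the exact spelling as an assumption.)

```yaml
# Opting a Service into topology aware hints: the "Auto" value asks the
# EndpointSlice controller to populate zone hints for kube-proxy to consume.
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-ns
  annotations:
    service.kubernetes.io/topology-aware-hints: "Auto"
spec:
  selector:
    app: my-app
  ports:
    - port: 80
---
# What the controller produces: each endpoint records the zone of its pod's
# node (from the node's topology.kubernetes.io/zone label) plus a hint saying
# which zone(s) should route to it; kube-proxy then filters on those hints.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-svc-abc12
  namespace: my-ns
  labels:
    kubernetes.io/service-name: my-svc
addressType: IPv4
ports:
  - port: 80
    protocol: TCP
endpoints:
  - addresses: ["10.1.2.3"]
    zone: us-east1-b
    hints:
      forZones:
        - name: us-east1-b
```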
C: This endpoint controller propagating the information up from the zone annotation, I guess, works as described on the box. But the question of how these hints get used from a multi-cluster perspective is what I don't feel as confident about.
B: I agree. The topology design was trying to be as loose as possible here, and in fact Rob recently loosened it up a little bit more — oh, did we lose Laura? — to make sure that it isn't too locked into what's going on.
A: Makes sense to me. Laura, did you get any of that?
F: Oh boy. Cool — hey y'all, nice to meet you. I've kind of been lurking a little bit, but yeah, I'll talk now. I'm not going to know how to share my screen... thanks. So, gosh — I think we're interested in this multi-cluster services stuff. I wrote an email about this, so if you've read my email this might just be rehashing old stuff, but I'd love to get people's takes in more detail, and other information.
F: So, this thing about mutual trust and shared ownership amongst clusters — I'm concerned it's so narrow as to make this effort, which is very appealing to me, not quite usable for some of the use cases my employer has in mind. The use case we're pretty interested in is not requiring very much trust but still letting you share services. So, to make an example:
F: You've got three clusters that have different apps on them, run by different teams, and you have a fourth cluster that runs a service. The admins of those clusters are different people who don't trust each other not to screw up their clusters or to do things that would hurt each other. So you might have security expectations or requirements from the administrators of those independent clusters — the ones who are importing would be like, "I don't want my
F: namespaces filled up with services from over there." And the admin of the cluster that's sharing services might say, "only export, don't pull anything into my cluster, and only export from this one namespace that is supposed to be shared." These kinds of requirements are just not what a ClusterSet has been defined to be, but ServiceExport is still super valuable to the folks trying to share the service. I've talked with people about this and they're like, "yeah, I would totally just drop a ServiceExport into my application manifest, and I don't really want to have to care about what the trust environment is between the cluster I happen to be deploying into and where my consumers are — that's something that may be managed beyond my team's control or interest."
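(Per KEP-1645, "just drop a ServiceExport into my application manifest" is literally this much YAML — export is opt-in and name-based:)

```yaml
# Exporting a Service: a ServiceExport in the same namespace, with the same
# name, as the Service being shared. No other configuration in the manifest.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: my-svc      # must match the Service's name
  namespace: my-ns
```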
F: So I've been looking at possible implementations of this. I hacked up something myself, but then people pointed me at this other project called Skupper, which I don't really know anything about, but it looks really cool. It lets you link up namespaces with each other — namespace by namespace, across clusters — and then pull services across.
F: So I opened an issue there, saying "hey, this looks like you could just implement ServiceExport," and they're like, "yeah, but we wouldn't really be compliant with the KEP, because the KEP says ServiceExport means this one thing — automatically share with all the other clusters in the ClusterSet — and that might result in mismatched expectations." And that was a similar sentiment to what I was getting from people internally who might want to build on this thing.
D: So, I guess an optional ClusterSet to me just seems like a ClusterSet of one, right? Yeah. And part of the high-trust thing, to me, has been — and I think we're all open to discussing differences — that, as you said, if it's not going to everyone, then somebody needs to handle who can talk to whom, right? So that ServiceExport —
D: I can just throw it in my manifest and export a service, but if it's not going to everybody, somebody needs to create the barriers. So, talking a lot with SIG Network, one of the ideas we've had here is that this is a really good use for the Gateway API.
D: Maybe you've got a bunch of ClusterSets of one, but you can define Gateways that point to exported services and consume them that way. Because if you don't have a ClusterSet, then to some degree it seems like you lose some of the Kubernetes-nativeness of it, and you really just want an easy way to expose a service to other clusters.
D: But maybe, if you were doing the bare-bones MCS implementation on a flat network, for example — which kind of already requires some degree of trust — you could use a ClusterSet with holes poked in it to export all the endpoints from one cluster to another and do kube-proxy-based networking for that service between clusters.
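(For illustration only — a rough sketch of the Gateway-based idea being described, using the Gateway API's later experimental shape. Nothing here is specified by the MCS KEP; the class, resource names, and hostname are assumptions, and a controller could create these in response to a ServiceExport rather than a human.)

```yaml
# In the exporting cluster: a TLS-passthrough Gateway plus a TLSRoute
# steering matching traffic to the shared Service.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: mcs-ingress                      # illustrative
  namespace: my-ns
spec:
  gatewayClassName: example-gw-class     # assumed to be provided by the environment
  listeners:
    - name: tls
      port: 443
      protocol: TLS
      tls:
        mode: Passthrough
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: my-svc-route
  namespace: my-ns
spec:
  parentRefs:
    - name: mcs-ingress
  hostnames:
    - my-svc.my-ns.svc.clusterset.local  # illustrative hostname
  rules:
    - backendRefs:
        - name: my-svc
          port: 443
```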
F: Yeah, but I think that's all implementation detail, right? From the perspective of the team — the person — who owns the service, they just want to say, "I've got the service in this cluster; now I want to share it." Maybe under the hood there's a controller setting up, programming the gateway — writing a Gateway, a TLSRoute, or whatever the thing is.
D: Right, but that's kind of the way — at least the way I've been envisioning how this would work today: you could just have ServiceExports, and some controller creates Gateway resources as necessary, or something like that. But somebody still needs to do that. If you're not going to have a flat network, ServiceExport by itself isn't enough: if it's not going to export to everybody, you still need someone to define the walls, yeah.
B: Yeah — so I can see it in both dimensions of choosing to express this as Gateways or even as load balancers, right? Historically, the way you share stuff between two clusters is to expose it as a type LoadBalancer Service, with some annotation to the implementation that says "internal", and then you get a local load balancer that lets you bring traffic in and out — where "load balancer" is completely loosely defined: it might be a VIP or it might be an actual proxy.
B: So Gateway is an extension of that, which is more powerful and flexible. But looking at it from an MCS point of view, it seems — at least at first — reasonable that the implementation is governed by policy. So if you step back and say, this is in fact an implementation of MCS, but my implementation includes policies like "don't export this to that,
B: do export this to that," right? It sounds funny to call that strictly additive, because the result is subtractive: you add policy and subtract results. But it seems to me like that should be allowed.
B: When we wrote the MCS semantic descriptions, we wanted, I think, to be as loose as we could and allow implementations to be creative, for exactly this sort of reason. So, backing up, it seems completely reasonable to take a generic MCS implementation and say, "but I want a specific deny list, and this cluster is not allowed to export that namespace; I don't care what the cluster says."
D: You could even have policy that just doesn't allow writes to ServiceExports in the namespaces and clusters that shouldn't export. And maybe your exporting cluster — the one that should only export a service and not consume any — only has the namespace that exports it and doesn't have the other namespaces, so those services can't be imported.
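(One way to sketch that "only this namespace may export" idea with plain RBAC; names are illustrative, and it assumes no broader cluster-wide grant already covers ServiceExports.)

```yaml
# Grant write access to ServiceExports only in the shared namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: serviceexport-writer
  namespace: shared-ns
rules:
  - apiGroups: ["multicluster.x-k8s.io"]
    resources: ["serviceexports"]
    verbs: ["create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-serviceexport-writer
  namespace: shared-ns
subjects:
  - kind: Group
    name: app-team                       # illustrative subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: serviceexport-writer
  apiGroup: rbac.authorization.k8s.io
```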
B: So my take is, I think you're legit. Maybe we need to make sure the specs we've written are expansive enough that they explicitly call out that this is allowed, or at least that they don't imply that it's not, right? Because this is the truth of our industry: we accumulate policy over time. So it seems like it should be legitimate to me.
D: Yeah, I think the only thing we'd want to avoid — which doesn't seem to show up in your list at all — is where you have basically two copies of the same service, each exported to a subset of clusters, but we want those clusters to be in the same ClusterSet. With our definition of ClusterSet we've been trying to avoid the situation where you need to do a huge amount of lookup to understand who exports to what,
D: because you can't inspect any given cluster and understand its interactions with the others. So if you look — go ahead.
B: Sorry, let me ask a clarifying question for Gabe: if you have two clusters and they don't trust each other, what prevents them both from having the same namespace and both trying to export it?
F: Right — and so, in the solution we're thinking about, that's something some multi-cluster admin — a third party we haven't talked about yet — has made a determination about: "I'm going to allow those two to talk to each other." Or maybe those two cluster admins have just emailed each other and said, "hey, we want to share services;
F: let's make sure we dedupe our namespaces before we try to share." Or they say, "actually, we do want to share across these namespaces — we're intentionally lining them up so they have the same name, so everything else works for us." But there's—
B: Okay, so I think that was the really important part: it is intentional. Either you're intentionally making them the same, or you have to go out of your way to make them not the same in order to not collide here. And, you know, in my head I have this vision of a multi-cluster switchboard — a patch panel — where I'm pulling wires from this cluster to that cluster and from this cluster to the broadcast hub. And okay, that seems like it should be allowable to me.
F: Yeah, okay, yeah. The way I see it is: ClusterSet is a really useful, simple model for talking about how services could be shared, and it might be the inspiration for certain MCS controllers, but it doesn't have to be part of the definition of the semantics of ServiceExport and ServiceImport. Those things can be decoupled from the definition of ClusterSet — or maybe we frame it as "sharing between ClusterSets has these looser semantics," sure.
B: Yeah, I've certainly heard from other folks that they like the idea of sameness, but they want some specific controls to say, "the foo-system namespace is never sameness — it's never the same; it's always a cluster-vertical thing, and I want to control that at a higher level."
D: I think we did actually spell that out in the sameness position statement doc — "these are never the same" is itself kind of a sameness. The only thing we want to avoid is "these are the same sometimes and not others"; that's when it starts—
A: So one thing I'd be curious about — Gabe, you mentioned the Skupper project —
A: I wonder if we can share future wording changes with that audience, to see: would this affect your decision about whether to implement?
F: Okay, cool, yeah. I'm not close to them, but I'm happy to do that. If folks are open to it, I'm happy to try to draft some wording changes and circulate them here, and then, if we're all good with that, I can circulate them outside of here too. Okay, cool. I did want to poke—
D: I think the top two boxes are scary — the foo ones — because now you don't really... like, "import only from cluster two," "import only from cluster three."
D: So then it becomes harder to reason about what foo is and who it talks to, I guess. So, why would you do this? The obvious examples I can come up with are that we have a US deployment and a Europe deployment and I don't want chatter between the two — but then are these really still foo, or is it foo-us and foo-europe? Why are the namespaces still the same if I actually care about this distinction? So the—
D: Right. But could there be policy when you're handing these out? I guess cluster admin is an extremely privileged role, right? So if you've got a bunch of cluster admins and they can make whatever namespaces they want, then identity gets a lot harder to reason about. And — please push back on this — it feels like the result is that everybody just kind of has to be on their best behavior.
B: Is this a model that I'm super comfortable telling everybody to do? Less so, because I think it is a little bit complicated to reason about. But I can sort of make up stories about why you would end up in that situation, and I don't think there's a technical reason why you can't implement it that way.
B: Right, again, if you zoom out it's just policy: you're saying the switchboard doesn't allow connections between — I lost your colors — between the things that are not connected, right? I'm not sure we want to go as far as saying this is a common pattern and all implementations should be able to express it.
B: But if it turns out that this is something everybody needs to do, maybe we need to invest more into visibility and figuring it out in the ServiceImport/ServiceExport API, because it's very cluster-centric: any errors will presumably be propagated down to the correct clusters, right? There's no mandated global control plane that has to be aware of this. Right now I don't particularly want to write the control plane for what you just drew.
B: But if you want to do that, I wouldn't say you're terribly wrong. I would say it seems a little unfortunate that the namespaces are the same, and I would perhaps, in the future, endeavor to coordinate better.
F: I'm certainly not trying to say I want to mandate it — I think I might want to allow implementations to do this, but we don't have to. I don't want to tell them it's prohibited; or rather, I'm trying to figure out whether there are reasons we would want to prohibit it, other than "it's scary if you're trying to do a multi-cluster global control plane thing." But if we're saying this is just the natural result of clusters independently federating with one another, because the cluster admins are choosing to do so...
B: Probably what we need is to go back and look at the words we've written and make sure that in the places where we say "all clusters" we add a footnote: "this is modulo policy," right? Because implementations are allowed to do whatever they want with policy, really — and I think that's a good thing. At first it'll be a little painful while there are ten different implementations and people figure out what the sticky features are, but eventually that'll converge.
D: Yeah, the situation you described will probably just work fine: if you have your implementation carve up clusters, the people consuming it will probably get the right experience. But I like the modulo-policy thing — we can add a note saying "this is how it's going to work, and maybe it doesn't in your implementation, so please read the fine print," you know.
B: Yeah, yeah. This is an API that describes the willingness to publish and the active importing; exactly how the patch panel between your clusters looks is an implementation detail, and we should be careful.
B: I love writing spec-ware, but we should be careful not to make it super precise — while also not making it so loose that we can't really reason about it. What I want to avoid is ending up with a 50-page document of examples of crazy things you could do, even if we don't think they're good ideas.
A: Yeah, I do think the use case you bring up — the thing I think is both progress and a bit of a complication, because it's sort of outside the conceptual framing of ClusterSet — to me, the element that represents that is that this might be an environment where there are previously established clusters. I don't want to
A: project anything onto this situation, but here's maybe another one with some parallels: in my organization, use of Kubernetes started out as shadow IT, and different departments built their own X, Y, and Z clusters, and now they're reaching a point where they want to do some service sharing.
A: In this case, I think we have found some semantic things about the wording that was used that maybe construe something unintended. But, as a group, let's pay attention to this as we get more examples of people wanting to use ClusterSet, because we have been sort of, you know—
A: I don't want to say "spec-ware" in a pejorative sense, but it's really good to get the input — you have a real use case that doesn't factor cleanly into what we've done so far. So we should all be paying attention to that as we get more folks joining who have pre-existing environments that don't happen to line up with ClusterSet, or with something else we've decided.
D: Right. In this specific use case — again, I don't see anything really wrong with it, but just thinking about a complication — what if you have a few services across namespaces that are kind of co-deployed? Now it's on the switchboard operator to make sure they draw the same lines around all those namespaces; otherwise you might get some strange behavior. That's the kind of thing this opens up that otherwise wouldn't have been an issue — but it's also solvable.
B: Yeah — I don't want to write the switchboard application, it sounds like hell, but I see how it's a useful thing, and in fact I can go even a step further.
E: Yeah, I'm wondering — sorry — if there isn't more of a problem with the second line in the diagram: the one where there are four clusters in the ClusterSet, three of them import a given service, and the service
E: that's exported isn't available on the fourth one. To me it seems that that changes namespace sameness from the consumer's perspective, which might be more significant. In the top line there are two subsets that don't talk to each other, but any consuming service can assume the service is available in the other namespace — in foo, or wherever it was. It
E: just might not — the implementation of the service might be different, but it will get the service. Whereas in the second line, services in the first cluster can't expect that the service is available there. Well, or they might, because of namespace sameness and because they're in the ClusterSet — but it's just not there.
D: Yeah, you're right, that's kind of a bigger flag. In the first case, with the two namespaces, as Tim said, we built room for subsetting. So one option is that they are actually the same service, but the implementation only ever gives clusters three and four each other's endpoints, and clusters one and two each other's endpoints, and never lets endpoints cross between the subsets. The consumer doesn't care —
D: they just get a service; where those endpoints are is up to the implementation. But yeah, for the next line, our currently recommended way to do this would be "don't allow bar in cluster one." I guess, with unrestricted cluster admins, that becomes a hard thing to enforce, and that's where we need this switchboard.
B: Yeah, I think that's right. There will necessarily be implementations that are very automatic and convention-based — you plug things in and they just work — and implementations that are very configuration-based, where some cluster admin somewhere has to add the name or address of your cluster to a YAML file and get it ops-reviewed and pushed to some control system,
B: and that's how you enable it. And probably lots and lots of steps in between, because there are very different customer bases out there who want to do different stuff, right?
F: All right, well, thanks for the feedback — this is really helpful. I guess, concretely: I'm new to the contributing-to-KEPs process. Should I make a pull request against the KEP document with the changes I'm talking about? I know Laura has a pull request in flight; I don't want to create merge conflicts with hers.
D: Yeah, yeah — no great way around that, but that would be a great place: make your suggestions and we can discuss them on the PR. Sounds good — thank you. Thank you.
D: I think we are out of time for this week. So, Laura — I know we didn't get through everything; I hope you're back and can hear.
D: Awesome — maybe we can pick up the rest of yours next week.
C: Yeah, no problem. And sorry if I was talking over people before when my internet was haunted. Also, if we want to take any of this offline — especially because we're in the "related SIG Network projects" section — I'm happy to DM with people too.
A: Yep. Okay, let's at least give a readout next week of where we land on this, if we do land somewhere — or we can pick it back up next week. Okay.