From YouTube: Kubernetes SIG Multicluster 2022 Jan 25
A: Whoops, all right, can folks hear me? Yes? Okay, great. Well, happy new year, everybody. Happy new year. Let's see... I know that we have one thing on the agenda today. Let me just pull it up. And if you want to screen share and show the document that you've linked to from the agenda, I enabled screen sharing, so you can do that whenever you're ready.

B: Sounds good. Should we wait a minute and then start? Yeah.

C: What happened? I did, like, a big, you know, blow-out, cook-the-whole-dinner thing, so that was fun. And my new winter thing is that I got into this...

A: All right, well, we're three [minutes] after. Sanjeev, why don't you go ahead with the agenda item around multi-cluster network policy?
B: So I guess I should have prepared a few slides; it would have been easier to communicate than the doc. Just curious, did anybody else get a chance to read the doc before the meeting? I know Andrew did.

B: Okay, thanks, Laura. Maybe we can just hear whatever feedback you have, your thoughts and comments. I'll go into detail for the other folks, but any quick thoughts from your side?

C: Quick thoughts... I mean, they were listed in the questions: some open items, like whether this needs to be implemented at the MCS API layer, that I think we should all probably talk about.

B: Okay. Andrew, any quick thoughts from your side? I know you reviewed it and had a few comments.

E: Sure, yeah. Hey everyone, my name is Andrew Stoycos, I work at Red Hat. I haven't been here before; I'm pretty active over in SIG Network, but I'm starting to work on Submariner and some other multicluster-related things. Most of my work prior to this has been involved with network policy, so I wanted to come here and hear some feedback. After a first review of the document, the main questions for me, I guess, are: do we need a new resource, and if we do, how tightly is it going to be coupled specifically with the MCS API? So that's a little bit of an overlap with what we were already talking about. But aside from that, I think it's still in its infancy, and I'm excited to hear what everyone over here thinks about such an object.
B: [The doc captures] some early thoughts around what could be an API, or possibly a modification of an existing API, but all related to multi-cluster network policy at the quote-unquote service layer. That's why the thought was to initially assume that the network satisfies the MCS API, the Multi-Cluster Services API. So the assumption is: you have some implementation of the MCS API, but you also want to do some kind of policy filtering. What could you do, right? So these are some early thoughts.

As you may have noticed, those of you who have read it, I have purposely kept a lot of open questions, because I'm open to however this goes, and I want to understand objectively where it makes sense, where it doesn't make sense, or where it could be done differently.

So the very first question is: do we really need multi-cluster network policy, or should we assume that there will always be another service networking layer on top, maybe a service mesh, where we could choose to, or maybe even prefer to, impose policy? Right now the working assumption is that there may or may not be a service mesh on top of the MCS implementation, and, just like regular network policy is needed regardless of whether there's also a service mesh running on top of it, [the same reasoning applies here].
F: Yeah, okay, so the network model... To your first question, I kind of want to add another question in there. "Do we need a network policy, or will there always be a layer on top?" is a good question, but another question, assuming there's no layer on top and we want a Kubernetes-native network policy, is: do we need a new resource at all? We already have NetworkPolicy, right, and I know that it's evolving. Do we actually expect cross-cluster to be fundamentally different than in-cluster? If I only want services in one namespace to talk to a specific other namespace, and I want to lock everything else down, or I want only certain pods to be able to talk to this service, does that actually need to change across clusters? Have we changed the concepts? I mean, aside from smoothing some of the gaps, where you have to talk about selectors instead of services for a network policy, is there a fundamental difference? I think that's the question I have.
B: Agreed, and we do have similar open questions as we go into the doc. Like you said, even if there is a service mesh on top which has its own policy layer, you could still need a multi-cluster network policy, just like you need regular network policy, either because different personas prefer enforcing policy at different layers, or because the policy can be made independent of specific types of service meshes, right.

So yes, that's the working assumption, but I agree with your point, and those are in fact some of the open questions I have in the doc, so I'm definitely open to the community talking about requirements and expected deployment models. Now, we did discuss this a little bit within SIG Network, and I've had some offline discussions with Tim Hockin and others. They do believe that this is needed, and this kind of straddles SIG Multicluster and SIG Network, so Tim was quite positive on having something started for this, but we'll keep iterating as we go. So the network model we are assuming is that it must be a model that supports the MCS API, so we assume the same kinds of terminology around service imports and exports, cluster sets, and namespace sameness.
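(For readers without the shared screen: under the MCS API, KEP-1645, making a Service available to the cluster set means creating a ServiceExport next to it, and the implementation then materializes a ServiceImport in consuming clusters. A minimal sketch; the name and namespace are just examples.)

```yaml
# Export Service "foo" in namespace "x" to the cluster set.
# "foo" must match an existing Service in the same namespace.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: foo
  namespace: x
```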
B: The other thing we are assuming for now is that we want to keep this as flexible as possible in terms of the pod IPs. So the backend pod IPs of the services may or may not be globally unique, right. There could be scenarios using the MCS API where you assume that all pods and services come from a single global IP space; some other deployments might not assume that. So what we're assuming for now is that we definitely do not want to assume that pod IPs across all clusters, and certainly across all multi-cluster services, come from a globally unique space. But service IPs, by their very nature, must be unique across the cluster set, the cluster set IPs, because they are required to be reachable from other clusters.
F: The cluster set... so that's actually not required. The MCS spec doesn't actually require that; each cluster can have its own picture of a cluster set IP, according to the spec. And among our implementations, I don't actually know if we have any that do have a single global cluster set IP. Yeah, it's the service names that have to be consistent and unique, but not the IPs.

B: Sorry, maybe I shouldn't try to update [the doc] dynamically. Okay, I've just put a note to myself, I'll update that. Thanks, yeah, agreed. So the model is basically whatever the MCS model is, nothing more and nothing less.
B: Now, we do have a note here which says that there are some vendor implementations of some kind of multi-cluster network policy that do make assumptions like that; for example, they assume that backend pod IPs are globally unique, and so on. Should we do anything special keeping those restrictions in mind? At the moment the answer is no, but the community may feel we can have some generic policies which make no assumptions other than the MCS API, and then some additional ones that do assume backend pod IPs are unique. Because then you can also have policies that simply do IP-CIDR-based classification, assuming the IPs are all unique across the entire cluster set anyway. Then you can just do regular network policy based on IPs, and you can do everything you want, but that is not as generic, and it could have some scalability issues.
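(If pod IPs really were unique and routable across the whole cluster set, today's NetworkPolicy could already express a cross-cluster rule with an ipBlock peer. A minimal sketch of that restrictive approach; the label, CIDR, and port are illustrative assumptions.)

```yaml
# Allow labeled pods to egress only to the remote cluster's pod CIDR.
# This only works when pod IPs are unique across the cluster set.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-remote-foo-backends   # illustrative name
  namespace: x
spec:
  podSelector:
    matchLabels:
      role: foo-client              # hypothetical client label
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.245.0.0/16         # example: remote cluster's pod CIDR
    ports:
    - protocol: TCP
      port: 8080
```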
B: So we have put a note at the end of the doc that that is a valid approach which some projects have taken, but it seems somewhat restrictive in its assumptions. The other thought is whether to delegate this function to the actual implementation of the MCS API, or whether to do it at a layer that is independent of the MCS API implementation, right. So maybe that could be another thing for the community to think about, whether, if it is expected that...
G: I think from, like, the Istio standpoint, we would prefer to use Kubernetes as the building block for anything like this, so I would think we would want that kind of standardized.

F: Yeah, I'd second that too. Obviously, you know, I'm very much behind the MCS API, but a big part of that is trying to make it as portable as possible, so that you can use whatever vendor or mesh you want to connect your multi-cluster services. It would be great if network policy followed the same pattern, so you could just plug in different implementations and not have to change your policy.

B: Do we necessarily need a new API, or should we just modify or enhance the existing NetworkPolicy API? That is still open, and it's something we would want to discuss within SIG Network. For now, for this draft, I'm assuming it's a new API resource, but we could certainly look at it as an extension of an existing API resource as well.
B: I introduced this into SIG Network just last week, so it's too early; we haven't actually discussed each of these points, so I'm sure we'll hear more in the coming weeks on that.

E: And just a little backstory that's relevant: there's been a lot of work, for about a year now, on a new object that was called ClusterNetworkPolicy and is now going to be called AdminNetworkPolicy. It's more of a cluster-scoped version of NetworkPolicy, and if we wanted to implement this into that, it's definitely possible. At first glance, you would just add a new selector.
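(AdminNetworkPolicy was still being designed at the time of this meeting. Below is a sketch based on the shape the API later took in policy.networking.k8s.io/v1alpha1; the labels are made up, and the comment marks where a multi-cluster selector might slot in, which is purely hypothetical.)

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: allow-egress-to-x           # illustrative name
spec:
  priority: 10
  subject:
    namespaces:
      matchLabels:
        team: foo-clients           # hypothetical label
  egress:
  - name: allow-to-namespace-x
    action: Allow
    to:
    - namespaces:
        matchLabels:
          kubernetes.io/metadata.name: x
    # A multi-cluster variant might add a serviceImport-style peer here,
    # the "new selector" mentioned above (hypothetical).
```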
F: Thanks, Andrew. I actually had that exact question: how is this tying into that ongoing effort, an effort that, admittedly, I have not been following? But yeah, I think another question, another flavor of what you just mentioned: could we have the same API, but maybe with an annotation or something, if the multi-cluster implementation needs to be watching it? I definitely have some bias here. I would love it if your single-cluster configuration just magically worked across clusters when you added that ServiceExport, and you didn't have to redefine anything. That would be really great. I don't know if that's actually feasible.

B: Right. But partly because of both the example in the Gateway API, as well as some particular implementations like Submariner, I noticed that there is a differentiation made between single-cluster services and multi-cluster services. So it's an open point. For now, for this initial draft, I've said: okay, let's start with a different API with a different selector. But totally, this question is basically asking that, and yeah, we could modify this into an existing API with some annotations or different selectors.
F: That makes sense. And if we're building off roughly an API that looks as close as it can to the NetworkPolicy implementation, and they end up being identical, then we can merge them. Otherwise we could have two that are at least intuitive to jump between.

E: Yeah, and we've been building the AdminNetworkPolicy API to be way more easily extensible than NetworkPolicy; we've built the API with the mindset that we are going to have new selectors one day. Another thing that might actually help the integration is that AdminNetworkPolicy is only focusing on east-west traffic.

B: Yeah. Now, on the flip side, the other thing to note is that both this proposal and the current MCS API are essentially namespace-level objects, right, whereas the AdminNetworkPolicy, as you recall, Andrew, is cluster-scoped. The MCS API essentially works at a namespace level, where each namespace is deciding what to export and import; it is a tool primarily for a namespace DevOps persona. So that's something we have to think about. I don't want to rat-hole on that, but there are two different personas that those objects are targeting. So this may need to be something more aligned with a future version of the conventional NetworkPolicy than with the admin network policy, but we'll look at both options.
F: You actually raise a really good point: who is creating the network policy? Is the service owner deciding who can talk to this service, or, I mean, there's a bunch of answers to this question, or is this targeting the platform team, who wants to decide which services or which teams can talk to each other? And is it always going to be the same person? I think the answer is probably both.

B: So that's what I put as assumption number four: the initial version of this document is targeted at a persona that is either an application developer themselves or a DevOps persona, because it is namespace-scoped. The same analogy holds for network policy as well: when it is namespace-scoped, it is targeted at the DevOps persona; when it is cluster-scoped, it is targeted at the cluster admin persona. Also, as you will see, the policy expects knowledge of the application, that these pods within the same namespace can talk to this imported service, and these other pods cannot. So it requires application awareness, which means it's probably an application persona or a DevOps persona. So the goal here is to follow the analogy of both the namespaced network policy and the admin network policy, but extend them to multi-cluster, and the first pass of this is for the developer persona.
F: No, that made sense, it made a lot of sense. But yeah, I think we need to see how this evolves, for sure, because I could see that maybe, in the multi-cluster case, the admin persona is actually more commonly the person setting the rules, since they're also responsible for configuring the MCS implementation, and maybe the developer persona has less weight in multi-cluster. Or maybe not.

B: Yeah, but we will come back to that; there are some interesting points there to go over. For now, for the purpose of today's call, think of this as a developer persona working within their namespace, and since we have namespace sameness, they have access to all of those namespaces across all of the clusters that are relevant to this application.

So then we talk about what the policy types are and how they get attached, both to the control plane and the data plane, right. First, let's look at a pictorial representation of what it might look like. Here there is a Kubernetes cluster which has two pods running, p1 and p2, with the regular CNI networking data plane, and typically, in a multi-cluster services [setup]...
B: So in this picture there are two pods in one cluster accessing a remote service, foo. This little box is a remote cluster, reachable across a network, and there is a remote multi-cluster service called foo, which has two backend pods in the remote cluster, and both these pods, for now, want to be able to access this remote multi-cluster service, okay. Now, maybe we want to control that: maybe we want pod one to access the multi-cluster service, but not pod two, and that's where some of the sample manifests go into the details. So those are data-plane-attached policies. Now, there are also policies that are potentially at the control plane layer, right, which is...
F: We actually have something fitting with this. We actually have an exception for when namespaces don't exist: if the namespace for a service does not exist in an importing cluster, the spec says you don't have to import the service. So that's a very coarse knob that can be used to implement the kind of behavior you're talking about, where you say: no, these clusters can't import services from this namespace.
B: Right, but, as you said, it's a coarse knob. And we don't want to control how individual clusters name their namespaces, if they coincidentally named a namespace that happens to also be one of the [exported] namespaces.

F: We've actually, the SIG has kind of released a position statement on this. We do want to tell people: don't name your namespaces the same thing if they're not the same thing in the multi-cluster world, because it makes things very difficult. If I name something foo in my cluster and you name something foo in your cluster, it gets really hard to reason about what those services are. So, you know, we can always revisit our position, but SIG MC has kind of taken the stance that you should avoid duplication, that's a problem, so namespaces should be unique, or be referring to the same services owned by the same teams, that kind of thing. So I think we can feel comfortable taking that stance here; it certainly simplifies a lot of this.
B: Yeah, I actually agree with the thinking behind that. So let me restate it: let's agree for now that that is a reasonable assumption, and we have namespace sameness throughout. But it still doesn't change the fact that we could still want a control plane policy where you have namespace sameness, but I still do not want to import a particular service into a particular namespace.

F: I agree, we should dig into that more, though, and see whether that actually makes sense too. I can see someone asking for that, for sure; I could absolutely see one of our customers asking for that, and all kinds of people in the community. But I wonder if we couldn't come up with a better model that would make life easier. Maybe "don't do that" could be a fine response. We should investigate, because, well, you might be fine with one service being imported and another not. Like, maybe two namespaces is better anyway, because the security profiles of those services are quite different, and if they're living in the same namespace there are going to be more shared resources. Maybe we're solving the problem at the wrong point. Or not.
B: Yeah, so that's why I just put a note on that one and didn't dive too much into it. At least this draft is focusing on the data-plane-attached policies, but I put a paragraph here saying that we anticipate there could be a third class of policies which are purely control plane policies and essentially filter the imports and exports. We just put a note saying that if the community feels we want to go there, then we will add more to it.
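(No such third class of policy exists in the MCS API. Purely as a thought experiment, a control-plane "import filter" might look something like the following; the group, kind, and every field are hypothetical.)

```yaml
# Hypothetical control-plane policy: keep an exported service from being
# imported into namespace "x" of this cluster. No such kind exists today.
apiVersion: example.multicluster.x-k8s.io/v1alpha1
kind: ServiceImportFilter
metadata:
  name: deny-foo-import
  namespace: x
spec:
  action: Deny
  serviceNames:
  - foo          # exported service that should not be imported here
```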
C: Can I just see if I understood the last part of the conversation? It sounded like there's some assumption here that network policy is going to make a statement about which clusters traffic can go to, but it sounds like, if namespace sameness is as strong as SIG MC wanted it to be, we would only be establishing which namespaces [traffic can go to].

B: Yeah, but the point was, I think... let me state a point here, and you can decide whether it answers your question. Even if we assume that there's namespace sameness, and every cluster has exactly the same namespaces and all of that, you still may not want to import into every cluster every service that was exported within that collection of namespaces.
B: There could be a popular service that everybody needs to access, and there could be an unpopular service that only two clusters in the cluster set really need to access, or a high-security service where only privileged pods can access that particular service. But it's all within the same collection of namespaces.

C: Right, and I guess there's an open spot here, I think this is what Jeremy was saying, of whether we want to basically say no to making that choice at a cluster level, and keep namespace sameness as strong as it has been historically.

B: Yeah, I'm not challenging namespace sameness; I mean, we could certainly revisit that, but I'm saying that even assuming namespace sameness, there could still be the need for filtering imports and exports, while retaining namespace sameness. Anyway, I just put a reference there, but the bulk is focused on the data plane policies. Okay, so I'm not going to go into the details of some of these user stories for now; please look at them.
B: The first few user stories are more about things like what the default behavior should be, or how these should coexist with network policies; you know, does the default logic of network policy also apply to this, all of that. So for now, unless somebody really wants to go into it right now, I'm going to pass by that, so that we can look at a couple of sample YAMLs, and later on, if there's interest, we can come back to these other use cases.

Okay, maybe we'll use this picture for now. This picture shows that I want a policy which says: pod p1 cannot access this remote service that has been imported into this cluster, into the namespace to which both pods p1 and p2 belong. So both pods p1 and p2 are in namespace x, and the MCS foo is also in namespace x, but on a remote cluster, and that service has been imported into this cluster. But I still don't want every pod in that namespace to access that imported service.
B: So this is the pictorial view: pods p1 and p2 are both in the namespace that is importing a service, but only the p2 pods get access to it, not p1. A possible YAML for that could be something like this. For now, again, let's assume it's a new API object; as we said earlier, we could instead talk about a modification of an existing API object. But it looks pretty similar to an existing network policy. This one is attached to pods: it selects a pod, and it says that those pods which are labeled as clients of this multi-cluster service are allowed egress to this imported service, and the other pods, which are not legitimate clients of this imported service, will not be able to egress traffic to it. And here the selector is a service import, just like the Gateway API has a backend reference to a ServiceImport.
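(The manifest on the shared screen isn't captured in the transcript. Below is a reconstruction of what B describes: a new, hypothetical kind whose egress peer references a ServiceImport, by analogy with Gateway API backend references. The kind, group, and field names are illustrative, not a released API.)

```yaml
apiVersion: example.multicluster.x-k8s.io/v1alpha1
kind: MultiClusterNetworkPolicy     # hypothetical kind from the draft
metadata:
  name: allow-foo-clients
  namespace: x
spec:
  podSelector:
    matchLabels:
      app: foo-client               # only pods labeled as clients (the p2 pods)
  policyTypes:
  - Egress
  egress:
  - to:
    - serviceImport:                # hypothetical peer type, analogous to
        name: foo                   # Gateway API's backendRef to a ServiceImport
```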
H: So I have a question about this. Is the goal of the policy to enforce [access], or, for example, because you use labels, any pod without the label can decide to add this label, and then suddenly it will gain access. Is that something you want to prevent, or is that not something you care about?

B: I want only the p2 pods, because maybe the p2 pods are the only ones that actually need the remote service, and I want to block p1, even though I control both pods p1 and p2, because if pod p1 is compromised, it will not be able to access the service. I want least privilege: only the pods which I know really need to access the service should be allowed to access it. So this is the same philosophy as regular network policy.
B: Right, I mean, if I was running in a namespace, I could have just said: allow every pod to always talk to every pod. Why should I even bother with network policy, since these are all my pods and I trust my pods? Well, the point is that even though these are all my pods, they can be compromised without me knowing it.

F: [There's the ingress side:] I want these sets of pods to talk to this service. And then there's the egress policy, like: I want this pod to only talk to this one service, because I'm worried that maybe my code is trying to reach out somewhere else, or something like that. Today they're kind of in the same API, but they're very different use cases, especially in multi-cluster, because in multi-cluster I might want to say that my service is only accessible to these namespaces.
F: That actually probably seems reasonable in most cases, because relying on pod labels is not great when those pods are in another namespace, potentially owned by other teams, because anybody can label a pod. But for the pod owner controlling who their pod can talk to, that kind of control makes more sense. If people want to split things out more, like formally separating those, that's probably too big.

E: I mean, Jeremy, from the multi-cluster API's point of view, which do you see as the more important use case? A cluster admin giving a namespace in their cluster access to an exported service as the main use case, or a developer who owns a pod allowing their pod to talk to an exported service?

F: Right, like, the pod-level thing, it seems like in all cases it's just a layer of control for the pod, multi-cluster or not. You really just want to control what ranges your pod can talk to, because you don't want it reaching out, right, single-cluster, multi-cluster, or talking to some legacy VMs; it's all kind of the same story. I think for multi-cluster services, "these teams can talk to this service and these teams can't" is probably the more common one.
D: One of the most common sorts of implementations I see of multi-cluster services is that I have a microservice, gotta hate that term, I have a microservices app, right: many services live in the same namespace.

B: And of course one of the main values of the multi-cluster service API is to have backends across multiple clusters, so that either that service is dead in your local cluster, or it's dead in cluster A but still reachable via cluster B; you want to have that availability, right. Yeah, good point, you're reiterating that even within [a cluster], even within the same namespace, you need micro-segmentation, right: you need to decide exactly who's allowed to talk to whom, even within the same namespace. Is that pretty much it? Yeah, yeah.
B: So that is what this use case, these thoughts, are addressing. And, Jeremy and Andrew, to go back to your earlier point, and thanks, Nick, I definitely welcome more of your feedback from what you're seeing in the field, because some of these are reasonable guesses we're making, and any actual customer data that you and others can bring would help us prioritize what makes more sense to do first. So please continue to bring your valuable customer data to these reviews or to the community.

Let's continue that; we'll need to draw a few more pictures, but all of those were just some thoughts in response to what Andrew and Jeremy were saying earlier. Any other feedback from anyone, or any other experience with real-world deployments that you would like to bring? We can talk about a few more things, but before we leave this use case, any other comments or questions?
G: Kind of a follow-on to the last one, just with respect to isolation in general. I could envision a use case where I'm bringing a cluster that has the same namespaces and same services that the cluster set already has, and I want to onboard it into the cluster set. But I want to be able to do that without changing any cross-cluster load-balancing behavior to start with, and then slowly opt into that. Now, if I understand the way MCS would work today, I would automatically get service imports, and everything would just start talking across clusters. To be safer, you would probably want some sort of network policy installed before you join the cluster set, one that says: I don't want to change any behavior off the bat, I just want to join the cluster set. And then I can slowly open up that network policy to allow services to talk between that cluster and other services in the cluster set, both egress and ingress.
C: Yeah, I think that's an interesting time window, implying that there's a time when the cluster has joined the cluster set but doesn't really have full trust yet and is still warming up, I guess. And I can see that even more so than otherwise arbitrarily [filtering by cluster]. I think, to the earlier point, we've been trying really hard to not have users treat their separate clusters as special in a multi-cluster case, and to think more at the namespace level. But that time window when a cluster is joining the cluster set, if we want to do something special about how that cluster is treated, makes, in my mind, a stronger case for filtering network policy by cluster.
B: Yeah, agreed with everything you said, Laura, and, I think it was Nathan earlier. One other thing: if you look at the goals of the MCS API, it covers both services whose backends are spread across multiple clusters and services whose backends happen to all be in one cluster.

Okay, we're going to run out of time soon, so I'll just mention one more user story. The earlier example, we said, was limiting a pod's egress to a remote cluster.
B: Now, what about the ingress side? As it happens, a cluster which is receiving traffic from another remote cluster in most implementations has no idea which is the originating pod, because, as you recall, our starting assumption was that pod IPs are not globally unique. So you can't use the original pod IP when you're sending traffic to a remote receiver, which means the pod IP has already been [translated] at [the edge].

I could implement this policy, if this is what I wanted to achieve, on the sending cluster, by controlling who gets to send to the service. But what about the receiving cluster? If the receiving cluster says, "I want to receive traffic to this service from pods that meet this criterion, but not from pods that meet that criterion," I really don't have any way of enforcing it, because by the time the traffic comes to this cluster, I don't know whether it was sent by p1 or p2. We've lost the identity of the source, because the traffic has typically gone through a gateway, and both of those pods got SNATed to the gateway's IP, right.
B: [Even if I say that, for the] pods that are actually providing the server side of MCS foo, I only want to allow traffic from this cluster, that is still not enough, because I could have a malicious pod in a good cluster. I mean, I may trust this cluster, but there could still be some pod that I don't want to receive traffic from, as opposed to another cluster in the cluster set, where I know for sure that there should be no client of the service.

So for now I have left that out, but I just put a note here that, if we want this, we have to assume there is some way of carrying metadata from a source cluster to a destination cluster; that metadata can then convey information from the source cluster to the destination cluster. But right now there is no opportunity for us to convey information from the source cluster to the destination cluster. So, at the moment, yeah... this.
F: This one's really tough. I think on the MCS side we would definitely prefer not to have cluster-level filtering; it kind of breaks the model apart, because now, basically, we break namespace sameness, right: you can block a cluster that may or may not have the right namespace. Yes, you can know that it doesn't have the right namespace, but it adds another layer of things you have to know to use the API, and it makes it easier to break. [It would be better] to filter specific namespaces from a cluster that has [them], or [to put] this behind a gateway.

F: Now, one thing, though, on this, not answering it or even suggesting anything: we have avoided, in basically all circumstances, assuming anything about implementations. I think one thing we can rely on is that any given MCS implementation does know about all the clusters, so it can potentially act as a way to relay metadata. Not sure that helps that much in the gateway case; we'd need to figure that out, but [it does know about the] other clusters.
B: I could add a proprietary tag to the packet whenever I send it from cluster one to cluster two, and in that tag I could have some context which can be used for filtering. Whether we should standardize that or not is still an open question. Like, you would not standardize the actual encapsulation, but you could perhaps standardize that there will always be a way to communicate, say, 16 bits of context from source to destination.
B: So, as you can see, we've kept several things as open questions so that the community can give feedback; I didn't want to be overly opinionated early. We're putting in some thoughts with some possibilities, but this is all flexible to change based on community feedback, both from this community and from SIG Network. This is a very early thing for people to think about and kind of throw darts at.
B: There are some pros and cons there, which we're probably going to run out of time to go into, but that's something we need to discuss in SIG Network. There are also some thoughts on using service accounts to convey identity, still at an early stage. And the last thing in this doc is just a reference to some projects that have done things with multi-cluster network policy. One example is a project called Coastguard, which is part of the Submariner implementation of the MCS API, and it is based on the assumption that all pod IPs are globally unique. So when you create a network policy in one cluster, you can actually translate it into equivalent policies in the other clusters, which look for the unique IP of the sending source. Effectively, every cluster knows every remote pod's IP, or at least the pods that are part of multi-cluster services.
B: Maybe that's all that's needed, in which case maybe we don't need to do anything more. Initial feedback from Tim was that we shouldn't assume pod IPs are globally unique; that's typically not practical, but it may be okay in certain cases. So these are all thoughts we have included so that people have a reference to other kinds of thinking in a similar space.
B: We can use those as inspiration, but not for actual functionality. So, anyway, sorry, we've kind of rambled on, but this was just to introduce it. Please provide your comments, and, you know, it's pretty open right now, but we'll be evolving it over the coming weeks and possibly months.

F: Yeah, this is awesome. Thank you so much for coming today, great discussion, lots to think about. Thanks a lot. All right, I think that is the meeting for today. Thanks again, everyone, this was great. I know I've got some reading to do. I think we'd all love everybody's comments on the doc.
E: Somewhere in the middle, yes. Slack is great, yeah, it's the SIG... it's...