From YouTube: Antrea Community Meeting 09/13/2021
Description
Antrea Community Meeting, September 13th 2021
A
Perfect. So good morning, good afternoon, or good evening. This is the Antrea community meeting, and today is September 13th or September 14th, according to your time zone. We'll get started in a few seconds. On the agenda for today we have Abhishek and Lan with the proposal for multi-cluster support in Antrea.
A
This is described in GitHub issue 2270, which I'm sharing in the Zoom chat. If that's okay for you, perhaps we can get started. I believe we're already two minutes into the meeting, so I'm not sure many more people are going to join. Okay, so.
B
Could you also provide sharing capabilities to Lan? Or, if Lan wants me to continue to present, that is also fine with me.
C
Okay, I think Abhishek can share the doc.
C
That's okay, you don't have to do that. I think when Abhishek finishes the first few parts I can continue, and he will show the docs.
B
In this multiple-clusters area, perhaps you know, there is also some work going on in upstream Kubernetes specifically targeting it, and there is a KEP available, the MCS KEP, the Multi-Cluster Services KEP, which introduces some CRDs, some resources, some ways to standardize how communication between multiple clusters may happen and also how services can be shared across clusters.
B
So that is the motivation behind why we want to do multi-cluster integration within Antrea. We want to standardize some of those APIs and some of the CRD concepts within Antrea specifically for multiple clusters, which can then be a platform, or like a base, for other tooling to be created on top of this effort, to perform more workflows in this area. So basically we're going to look into:
B
You know, some of the high-level use cases that we see. These might not be the only use cases, but they are something that we've seen in the community. Then we will look at the MCS KEP that I spoke about, at what those CRDs really are and how they look, because they form the basis for our following CRD design. Then we will take a look at the CRDs that we plan to implement for multiple clusters, how the different clusters will interact with each other, and how we form the registration and deregistration workflow for clusters to be onboarded as part of the multi-cluster set. And then, once we have this multi-cluster set:
B
How is this resource exchange going to work within this set of multiple clusters? So we're going to look at the resource exchange pipeline, and then we would also like to discuss some of the items that we will not be targeting in phase zero or the immediate couple of releases, but that's something that we have on our mind to work on. Lan will help in presenting some of the portions here, so feel free to stop me.
B
So one of the use cases is, you know, when you have multiple clusters, you could have one scenario in which the same service is being deployed in all of your clusters, or in a subset of your clusters. Now, the reasons behind this could be multiple. It's possible that the administrator wants some sort of HA support, or some sort of locality, wherein services are created that are more local to the workloads, based on geo or availability zones, and it could also be done to improve the scale of the services. So these could be some of the reasons why a user could deploy the same service in multiple clusters. Now, when we talk about the same service, what we mean is that the name and namespace of that service across these clusters is the same. So that's the basic criterion for service sameness.
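The sameness criterion above can be sketched in a few lines. This is an illustrative sketch only, not Antrea code: services are grouped purely by their (namespace, name) pair, as described in the talk.

```python
# Sketch of the "service sameness" rule: services across clusters are the
# same multi-cluster service when their namespace and name match.

def sameness_key(service):
    """Identity of a service for multi-cluster purposes: (namespace, name)."""
    return (service["namespace"], service["name"])

def group_same_services(clusters):
    """Group services exported by each cluster by their sameness key."""
    groups = {}
    for cluster_id, services in clusters.items():
        for svc in services:
            groups.setdefault(sameness_key(svc), []).append(cluster_id)
    return groups

clusters = {
    "cluster-1": [{"namespace": "ns", "name": "foo"}],
    "cluster-2": [{"namespace": "ns", "name": "foo"},
                  {"namespace": "ns", "name": "bar"}],
}
print(group_same_services(clusters))
# {('ns', 'foo'): ['cluster-1', 'cluster-2'], ('ns', 'bar'): ['cluster-2']}
```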
B
So now, as a use case: if you have a service which goes down in one of your clusters, all the workloads within that cluster may lose access to that service, and even if there is a similar service serving the same content in a different cluster, those workloads will not have that access. So that's one of the reasons why a user may deploy the same service, just for supporting HA.
B
Another use case that we see is that there might be cases wherein you have some services which are common throughout different clusters. Those services could be some public services that the organization has, some DNS services or log management, which are shared services across clusters, and they need to be accessed by multiple clusters.
B
So here the use case for the administrator is that inter-cluster service consumption should be enabled. This is another use case for which people may use multiple clusters.
B
Now, when we talk about opening up traffic across clusters, that also leaves a use case for security. So, for example, perhaps I don't want to expose all of my workloads to other clusters. Even though they are mutually trusted, I would still want to expose only the pods or workloads which are the backends for this service, my public service or my multi-cluster service, and I don't necessarily want to expose the other pods within my cluster.
B
So there is this concept of security that should also be applied, and Antrea must be able to solve that. There are a couple of use cases around security. One is consistent security: for example, you have multiple Kubernetes clusters and, as an administrator, I want to ensure that all of my clusters that come up have some sort of consistent behavior, in the sense that when the workloads start up, there is some sort of baseline security associated with those clusters. Perhaps it's a baseline default-deny, or maybe you are denying inter-namespace traffic and only allowing intra-namespace traffic. Those kinds of baseline rules must be repeated in all Kubernetes clusters. Now, one way to do it is doing it manually:
B
Writing those policies in every cluster. The other way is to use some APIs exposed by Antrea for multi-cluster, wherein you write the policy once and it gets copied over to all your clusters. That's one use case.
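The "write once, copy everywhere" idea can be sketched as follows. This is a hypothetical illustration, not Antrea's mechanism: each cluster's policy store is modeled as a plain dict, whereas Antrea would distribute real policy objects through its controllers.

```python
# Sketch of replicating a baseline security policy to every member cluster.
# The per-cluster "store" is a stand-in for each cluster's policy state.

def replicate_policy(policy, member_clusters):
    """Copy one policy definition into every member cluster's store."""
    for store in member_clusters.values():
        store[policy["name"]] = policy
    return member_clusters

members = {"cluster-1": {}, "cluster-2": {}, "cluster-3": {}}
baseline = {"name": "baseline-default-deny", "action": "Drop"}
replicate_policy(baseline, members)
assert all("baseline-default-deny" in store for store in members.values())
```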
B
The other use case is that whenever you're talking about exposing traffic to multiple clusters, you're talking about security across clusters, what we are calling stretched security across clusters. Here you may have certain pods which should only accept traffic from pods in other clusters, but not necessarily expose traffic to everything else. This is basically what we mean by saying that pods or workloads in one cluster should be able to access services in another cluster, but not other traffic. So this is like stretching your policy not just to your own local cluster, but defining it across multiple clusters, which is slightly different from the previous use case. So far, any questions on the use cases before we move on to the design?
C
Okay, thanks Abhishek. You have to scroll down a little bit. Yeah, the first diagram; you may go through this part first.
B
Yeah. So typically, what we want to do is enable cluster communication across different Kubernetes clusters. As part of that, we introduce two types of clusters. Every cluster is called a member cluster, because it is part of a multi-cluster set. Then, in order to facilitate the exchange of resources and manage some of these member clusters, the member clusters elect a leader. The job of the leader cluster is to facilitate this resource exchange pipeline; those kinds of clusters are called leader clusters. The leader cluster would be, you know, for public consumption, so this is something called an inbound mode, wherein member clusters have the ability to reach the leader cluster.
B
So the leader cluster's Kubernetes API should be publicly accessible by the member clusters, while the member clusters could be on-prem or wherever they can be, and essentially the communication is initiated from the member clusters. The MCS controller, which is the multi-cluster controller component, is what reaches out to the Kubernetes API of the leader cluster, and it can push objects to, and listen for objects from, the leader cluster.
B
This diagram pretty much shows the different kinds of clusters and how they interact with each other. And then we have these MCS KEP CRDs, which I briefly alluded to in the beginning. This is something that is being proposed upstream, and Lan will take over from here.
C
Okay, yeah. Abhishek just mentioned our ClusterSet management a little bit. Here's the thing: when there is a Kubernetes service, usually it is local to its own cluster. When we want to use a service from another cluster, we need a way to tell the user, or the other side, which service will be exposed and which service can be accessed by outsiders. So we reused the MCS KEP CRDs, which are the ServiceExport and ServiceImport CRDs.
C
We just reused the definitions and the CRDs in our Antrea multi-cluster design. The ServiceExport, as you see here, is actually used to specify which service in your cluster you'd like to expose to all clusters in the ClusterSet. You have to create the ServiceExport manually in each cluster; at least for now, we require the user to create it manually in each cluster that needs to expose a service, to say: this is the service I'd like to expose to other clusters, and then they can access it. So when you create a ServiceExport in your cluster, it signifies that services with the same name and in the same namespace will be treated as the same service in the ClusterSet.
C
Okay, Abhishek, can you scroll down a little bit? Okay, thanks! So you can see here the basic definition of ServiceExport, and also the ServiceExport status and the condition. We just reused those from the Kubernetes MCS API, so this part is actually the same as the Kubernetes MCS API. The next part is ServiceImport.
C
You know, when we expose a service, we need to import it, because if we didn't do anything for that, the other member clusters wouldn't know, they couldn't learn, which services they can access from remote. So the ServiceImport actually acts just like an in-cluster representation of a multi-cluster service. In the Kubernetes MCS definition it's just like a traditional service type, but it's a little different in the Antrea multi-cluster feature.
C
We just use this ServiceImport type definition, but not the CRD, so you won't be able to see an actual ServiceImport CR in a member cluster. We just wrap these ServiceImports into our ResourceImport type, which I think Abhishek will introduce more later. So here you will just see those basic definitions from the Kubernetes MCS API.
C
Okay, Abhishek, can you scroll down? Yeah. So basically, when we want to access a multi-cluster service in our ClusterSet in Antrea, we will create it just like a general service, and you use it the same way. It will be load-balanced, and the load balancing will be done by the local Kubernetes network interface plugin, just like Antrea does, and this kind of ClusterIP also cannot be accessed from outside of the cluster.
C
So the importer will create some endpoints, with an annotation like multicluster.kubernetes.io with the service name, and the endpoints behind it. We treat this a little differently based on the exposed service type. For ClusterIP, there is actually an assumption for this feature: if the pod IPs behind it are routable outside of the cluster, then we will use those pod IPs as the endpoints. So this is the basic requirement, that a pod IP can be accessed from outside; the port will then be the same as the service's target ports. If the ClusterIP service has an external IP, then the external IPs are known outside of the cluster, so we can use them directly, and we also use the same port defined in the service. For NodePort and LoadBalancer it's actually similar logic, but the endpoints are a little different based on their types. As for ExternalName:
C
We just treat it like the Kubernetes MCS API does: because we are not clear about this use case, it will be ignored. Let's take ClusterIP with an external IP as an example of a typical access flow for multi-cluster service access.
C
It's actually just like an ordinary Kubernetes service. There's a ClusterIP; let's say Antrea gives the service a ClusterIP, and the ClusterIP will be mapped to the endpoints I just described. For ClusterIP with an external IP, the remote member cluster service's external IP will be the endpoint. So when the request comes, the pod actually accesses the remote service's external IP. When the traffic goes to the remote cluster, it will be routed to the pod, because the external IP is, you can consider, just a virtual IP, right? The real entity serving the service is actually the pod. So the pod's access will go through the service, and the traffic will eventually reach the pod via the CNI. Yeah.
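The endpoint-selection logic just described can be summarized in a small sketch. This is an illustrative assumption-based model, not Antrea's real code: the field names (`type`, `externalIPs`, `podIPs`) are hypothetical stand-ins for the properties discussed in the talk.

```python
# Sketch: which addresses a member cluster advertises for an exported
# service, depending on the service type (as described in the meeting).

def exported_endpoints(service):
    t = service["type"]
    if t == "ClusterIP" and service.get("externalIPs"):
        # External IPs are reachable from outside, use them directly.
        return service["externalIPs"]
    if t == "ClusterIP":
        # Assumes pod IPs are routable from outside the cluster.
        return service["podIPs"]
    if t in ("NodePort", "LoadBalancer"):
        # Similar logic: use externally reachable addresses for these types.
        return service.get("externalIPs") or service["podIPs"]
    if t == "ExternalName":
        return []  # Ignored, as in the upstream MCS API treatment.
    raise ValueError(f"unknown service type: {t}")

svc = {"type": "ClusterIP", "externalIPs": ["10.0.0.10"], "podIPs": ["192.168.1.5"]}
print(exported_endpoints(svc))  # ['10.0.0.10']
```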
C
A ClusterSet is actually a placeholder name for a group of clusters. When we want to gather a few clusters into one ClusterSet, we treat them as mutually trusted, with shared ownership, so that their services can be shared within one ClusterSet. As I just said, it's just a placeholder name. We use the MultiClusterSet CRD to describe a multi-cluster set, and MemberCluster defines a member cluster.
C
Just like its name suggests. You can see here in the definition that the member cluster will have its own cluster ID. The server means the member's own API server, or the leader cluster's API server; the secret will be used by the member or leader to access the remote cluster; and the service account is also used by the member cluster to access the leader cluster. There are also a few other fields, like the multi-cluster status, which we use to represent how the member cluster works:
C
Whether it's healthy, or maybe there's an error behind it. Okay, here are some other fields. We follow KEP-2149: we use the ClusterSet ID and the Cluster ID to describe the ClusterSet and our Kubernetes cluster, and I think it's quite simple. So let's move forward to the MemberAnnounce: we use MemberAnnounce to declare the member cluster's configuration to the leader clusters.
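The membership records discussed above can be modeled with a small sketch. This is hypothetical, not the final CRD schema: the field names mirror what was said in the talk (cluster ID, server, secret, service account), nothing more.

```python
# Illustrative data model for ClusterSet membership, per the discussion.

from dataclasses import dataclass, field

@dataclass
class MemberCluster:
    cluster_id: str       # unique Cluster ID within the ClusterSet
    server: str           # API endpoint of the member or leader cluster
    secret: str           # credential used to access the remote cluster
    service_account: str  # used by the member cluster to access the leader

@dataclass
class ClusterSet:
    clusterset_id: str
    leaders: list = field(default_factory=list)
    members: list = field(default_factory=list)

cs = ClusterSet("clusterset-a")
cs.members.append(MemberCluster("cluster-1", "https://c1.example:6443",
                                "member-c1-token", "antrea-mc-member"))
print(len(cs.members))  # 1
```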
C
Okay, let's move forward. Here are some steps, the basic configuration of the multi-cluster set. At the very beginning we need to do one thing: create or define our multi-cluster set. There are actually some manual steps here; we require the user, or the cluster admin, to do this, because in the early implementation we won't do the automation. In the future we might provide a command line tool or an operator to do this kind of thing, to build up a multi-cluster set.
C
First, we want to create the ClusterSet in the leader cluster. You know that a member cluster needs to access the leader cluster, so we need some access information for it, right? So we need to create a service account for each member cluster, and we also need to create and transfer those service account secrets to the member clusters. And in the leader cluster:
C
You know, I think Abhishek will talk a little bit more about export and import. Here we associate the export and import cluster roles with the multi-cluster controller's service account in each cluster. And, as I mentioned above, we need a MemberAnnounce to tell the ClusterSet about the member's own cluster ID, right? So we need to apply those unique cluster IDs and the ClusterSet ID in each cluster, in the leader as well as the member clusters.
C
We need to make sure that all the multi-cluster set definitions, the resources, are synced and consistent. Okay, let's go down a little bit. I just mentioned above how to create the multi-cluster set and the basic steps, but we may also have a dedicated leader cluster for the whole ClusterSet. That may sound different, but the basic steps are actually the same. The little difference is that we may create those multi-cluster set resources, the service account and the roles, in a specific namespace; that's the main difference. And if we want to add a new member cluster to an existing multi-cluster set, the major steps are just like the first part: we need to create a service account for the new member cluster in each leader cluster, copy that service account secret to the member cluster, and do similar things to make sure that the whole ClusterSet is synced.
C
So then the new member cluster will have its correct service account secret, which was created in the first step. And when there is a new leader cluster, when we're trying to add a new leader cluster to an existing multi-cluster set, I think the whole scenario is almost the same as the first part.
C
The only difference is that in the leader cluster you need to create a service account for all existing member clusters, and distribute the corresponding secrets to those clusters as well.
C
We need to do the same thing to sync the multi-cluster set in each member cluster. And if any cluster is trying to leave the ClusterSet, or, I mean, we want to remove the cluster, we just need to remove the MemberAnnounce, which will remove the cluster ID and the ClusterSet ID, and also the MultiClusterSet CR. In the leader cluster, all the member clusters and the leader cluster will then sync these actions to remove this cluster from the ClusterSet.
C
Okay, any questions? I think for the ClusterSet management and the MCS KEP CRDs, that's all from me.
A
I don't really have a question, but more like a general confusion. This MCS proposal: is it something that is being actively developed upstream, something that we are picking up from some upstream work which is currently suspended, or is this something that we are doing specifically for Antrea?
A
I was looking at the KEP that you referenced, and I don't know what's happening there. It seems like there is a sort of out-of-tree implementation, but I don't know if we are going to use that implementation or if we are going to do another implementation.
C
Oh, we just adopted a quite simple implementation. We just reused ServiceExport; we don't want to recreate a similar definition, so we just reused this small part in our implementation. That's all! Actually, there is a lot of other stuff we need to do in Antrea, and I believe Abhishek will show you what our resource exchange pipeline looks like, and you will learn more about what Antrea multi-cluster does.
C
We are not limited to ServiceExports, because in the future we may export endpoints and network policies, etc. So KEP-1645 is, I think, just a very small part.
B
Yeah, to add to what Lan said: KEP-1645 is more like, this is what the specification is for ServiceExport and ServiceImport, but it doesn't really provide an implementation. So it's more like the NetworkPolicy API: you have the resource specification up there, and now you can implement it in the way that you want to, and then extend it using whatever CRDs you would want to.
B
So if there are no more questions, we'll continue with the resource exchange pipeline. Thanks, Lan, for a wonderful presentation. To build on what Lan just mentioned: the MCS KEP CRDs only talk about how the Service resource can be exported from one Kubernetes cluster to others, and we have introduced a couple of CRDs for that purpose.
B
You know, for network policies, or Antrea-native policies, to be able to do the stretched network policy use cases that we spoke about in the beginning, but also endpoints and different other resources. We also want to make sure that the way the information is exchanged among the clusters is standardized, and to this end we plan to introduce two new resources, called the ResourceExport resource and the ResourceImport resource.
B
Now, as the names suggest, they are pretty much an extension of the ServiceExport and ServiceImport resources. The ResourceExport resource is pretty much a wrapper around the resources that would be exported from one member cluster to another. As mentioned, it is similar to the ServiceExport concept, except that it will be exporting this encapsulated resource to other clusters, and we will look at the pipeline diagram a little later.
B
But this is how the ResourceExport spec looks: essentially you have the name of the resource, the namespace from which this resource is exported, and the kind of that resource. That is because we plan to encapsulate more than just Services, so it could be a Service, or it could be some Endpoints.
B
It could be ExternalEntities, which we could use to implement Antrea-native policies or for some other use cases. And then we also plan to do a raw resource export. The reason we want to do a raw export is that there is a possibility that we may want to export resources which are not yet introduced, and that's one reason we introduced the raw ResourceExport.
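The wrapper idea can be sketched as a small envelope builder. This is an illustrative assumption, not Antrea's real object layout: the field names (`clusterID`, `kind`, and the per-kind payload key) are hypothetical stand-ins for the spec fields discussed above.

```python
# Sketch of a ResourceExport "envelope": the exported resource plus the
# metadata (name, namespace, kind, origin cluster) the leader needs.

def make_resource_export(cluster_id, kind, resource):
    return {
        "clusterID": cluster_id,
        "name": resource["name"],
        "namespace": resource["namespace"],
        "kind": kind,            # e.g. Service, Endpoints, ExternalEntity, Raw
        kind.lower(): resource,  # the encapsulated payload
    }

export = make_resource_export(
    "cluster-1", "Service",
    {"name": "foo", "namespace": "ns", "ports": [80]})
print(export["kind"], export["service"]["name"])  # Service foo
```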
B
Similar to the other resources, we also have a status for exports, which essentially describes the overall status of this exported resource. For that we have complied with how statuses are implemented using conditions, and so we introduced the ResourceExportCondition. Now, the other side of the equation is the ResourceImport. Similar to how services are imported into your local cluster via the ServiceImport resource:
B
We also have another encapsulating resource, which wraps those resources which are being merged or created as a multi-cluster service, or, in this case, a ServiceImport. They are encapsulated as part of the ResourceImport spec and then pushed to the member clusters, which are going to be importing this particular resource.
B
Similarly, there are cluster IDs, which basically specify which member clusters this resource must be imported to; if not specified, we will import it to all member clusters. Some of those field names are pretty self-explanatory. Then there are the different types of resources: for a ServiceImport you will have this field set, and similarly for Endpoints and ExternalEntity. And then for the ResourceImport, similar to the export, there are conditions for the status.
B
You have these member clusters, the blue boxes, and then the leader cluster is a slightly darker blue box, wherein the leader cluster components reside. As we know, the leader cluster is responsible for merging those services, creating those multi-cluster services, or handling other resources that would be exported in the future.
B
That's the job of the leader cluster, and then it is responsible for pushing those resources to the different member clusters which will be importing them. In the member clusters you will have users creating services, and then they will create a ServiceExport to mark a service to be exported. That will be monitored by an export controller, which will essentially encapsulate it into a ResourceExport resource and then push it to something called the common area, which is sort of an abstraction for where these resources are pushed to the leader.
B
On top of this there is a controller wherein certain criteria would determine whether these resources should be moved further into the pipeline or not, and those filter criteria can be added later. Today we don't have them yet, but we want to keep the filter criteria as part of the pipeline, so that it can be extended in the future.
B
It will be decapsulated; it will extract those resources. Then, as Lan mentioned in the beginning, a multi-cluster service is basically backed by different services from different clusters. So it's the job of the multi-cluster manager to create this multi-cluster service. It will then push it down to a controller which encapsulates it into a ResourceImport resource, along with the different ServiceExport and ServiceImport resources as well, and then pushes it back to the common area, from where the member clusters will be listening and monitoring.
B
These ResourceImport resources. Then it will extract the multi-cluster service, let's say, from those encapsulated ResourceImport resources, and it will actually go ahead and create the service locally for consumption. So now the member cluster gets a multi-cluster service, which, as Lan previously mentioned, is a regular Kubernetes service.
B
So the importer will then create the service locally in the member cluster, and now any workload in your local member cluster will be able to consume this multi-cluster service. And then, as Lan already mentioned:
B
The traffic will then be directed to the remote cluster service which is actually serving that particular service. So that's basically how the exchange pipeline would work, how the member clusters and leader clusters interact, and how the different components interact with each other.
B
So this is just a sample workflow, where we talk about how a Kubernetes service, say a service foo in the namespace ns in cluster one, and the same service in cluster two, present in the same namespace, would be merged into a single multi-cluster service called multicluster-foo.
B
The administrator will create these ServiceExports in each namespace in the clusters, and then the exporters will watch that and export them as ResourceExports to the leader cluster. The leader cluster will then get notified of these two services being exported, and it figures out that they are part of the same service.
B
It will create a multi-cluster service called multicluster-foo, associate its endpoints, and then encapsulate it into a ResourceImport resource, which will then be pushed down to the member clusters that are listening to this leader cluster for these resources. They will extract the multicluster-foo service and the endpoints and create them locally, and then your workloads should be able to access this multi-cluster service.
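The end-to-end example above can be sketched as a tiny merge pipeline. This is purely illustrative, not Antrea's actual objects: the dict fields and the `multicluster-` name prefix are assumptions taken from the talk's example, and the leader-side merge is reduced to grouping exports by (namespace, name).

```python
# Sketch: two clusters export service "foo" in namespace "ns"; the leader
# merges the exports into one ResourceImport for "multicluster-foo".

def merge_exports(exports):
    """Leader-side merge: group ResourceExports by (namespace, name)."""
    merged = {}
    for ex in exports:
        key = (ex["namespace"], ex["name"])
        entry = merged.setdefault(key, {"endpoints": [], "clusters": []})
        entry["endpoints"].extend(ex["endpoints"])
        entry["clusters"].append(ex["clusterID"])
    return merged

def to_resource_import(namespace, name, entry):
    """Wrap a merged service into a ResourceImport-like envelope."""
    return {"name": f"multicluster-{name}", "namespace": namespace,
            "endpoints": entry["endpoints"], "clusterIDs": entry["clusters"]}

exports = [
    {"clusterID": "cluster-1", "namespace": "ns", "name": "foo",
     "endpoints": ["10.0.1.10"]},
    {"clusterID": "cluster-2", "namespace": "ns", "name": "foo",
     "endpoints": ["10.0.2.20"]},
]
merged = merge_exports(exports)
imp = to_resource_import("ns", "foo", merged[("ns", "foo")])
print(imp["name"], imp["endpoints"])  # multicluster-foo ['10.0.1.10', '10.0.2.20']
```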
B
If something fails in between, the status of the resource will be updated with whatever error was encountered as part of this. So this is basically how the information exchange would work, and other resources can also be exchanged in a similar fashion.
B
And so that's pretty much the overview of the different CRDs that we plan to introduce, and of how the multi-cluster set workflow would take place.
B
We have a few items that, as we mentioned initially, are part of the use cases that we also want to tackle, and those are something that we will focus on once we have the basics in. One of them is the stretched Antrea-native policies, wherein we would want to allow workloads from one cluster to securely connect to workloads in other clusters, and make sure that they don't unintentionally access other workloads.
B
There are three approaches that we feel we have as part of the design, with some pros and cons about them, so we can also get some feedback based on that, but we will continue to evolve this discussion. In addition to that, I'll briefly talk about the data plane design; we can talk more on this if there is interest. Essentially, along with the multi-cluster design, we would also want to provide a change in the data plane such that pods in one cluster can communicate with pods in other clusters, by changing some aspects of the data path: by introducing, you know, either some IPsec tunneling between multiple clusters, introducing gateways, or it could also be a full mesh mode, wherein every pod is able to connect to every other pod and all nodes are reachable from each other. And perhaps, Lan, maybe you want to add some more on the data plane side of this.
C
I think it's a limitation for some customers, so on our roadmap we'd like, in the future, to make some changes on the data plane, so we can remove the limitations and the customer can use the remote or multi-cluster service without exposing their services through the external IP, or through the pod's external IP, like that. But I think that's still under discussion, and we will focus on the current design first. Yeah.
B
Yeah, and big thanks to Lan for working on this, and also to Akshay and his team, who contributed vastly to this design.
A
That was extremely informative. I have to say that the whole proposal needs some digesting, at least for people coming into contact with it for the first time. It seems like a fairly big work, so the roadmap, I believe, will be spread across multiple Antrea releases, ideally. Is that correct?
B
Yes, correct. We have actually created a feature branch on which we plan to start working, and we actually have some skeleton work already up there. What we plan to do, for at least what we have so far discussed in detail: the idea is that we continue with the 1.4 timeline as a target, but we do believe that 1.5 seems a bit more practical. At least we want to target 1.4 and then move forward.
A
Thanks, Abhishek. All right, so we still have a few minutes left in the meeting. Is there any question from the community regarding this proposal, the implementation, the data plane design? Please, you know, go ahead and ask all the questions you may have for Abhishek and Lan.
D
I was just thinking: do you see any requirement that, in the leader cluster, we need to know about the member cluster objects, for users to define the policy and to see how policies are realized? What do you think?
B
Yeah, go ahead. Yes, so I think if we want to apply policies, you know, based on pod workloads, then maybe there is some need to be able to fetch that inventory, to be able to see what kind of pods exist and what kind of services exist. So maybe there is that requirement, which, you know, will help a user, because our eventual goal is to create these policies in the leader cluster.
B
So if you don't have the world view of your multi-cluster set and the different services, then in that case it might be harder to write such rules from the leader cluster, and in that case you'll have to write policies in the member cluster, which might not be desirable.
D
If you think the user should be able to know the endpoints in the member cluster from the leader cluster, I mean, do we need to create one resource, one CRD or whatever, for every pod and every namespace in the member cluster, inside the leader cluster?
B
I don't think so. I think the idea probably would be to create that policy, but the policy should probably be transferred to the member clusters, rather than the other way around.
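The leader-to-member flow described here can be sketched roughly as follows. This is a minimal illustration, not Antrea's actual implementation: the client class, the policy spec shape, and the function names are all invented for the example.

```python
# Hypothetical sketch: a leader cluster pushes one policy spec down to
# every member cluster, instead of importing every pod/namespace object
# from the members into the leader. None of these names are real Antrea APIs.

class MemberClusterClient:
    """Stand-in for a per-member-cluster API client."""
    def __init__(self, name):
        self.name = name
        self.applied = {}          # policy name -> spec "realized" locally

    def apply_policy(self, name, spec):
        # In reality this would create/update a CRD object in the member cluster.
        self.applied[name] = spec


def distribute_policy(name, spec, members):
    """Copy the leader-defined policy spec into each member cluster."""
    for member in members:
        member.apply_policy(name, spec)
    return [m.name for m in members]


members = [MemberClusterClient("cluster-a"), MemberClusterClient("cluster-b")]
spec = {"appliedTo": {"podSelector": {"app": "web"}}, "ingress": []}
pushed = distribute_policy("deny-db", spec, members)
```

The point of the sketch is the direction of data flow: the leader never needs a per-pod inventory, only the member clusters need the policy.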
D
So, I mean, whoever creates a policy should know about all the workloads in the member cluster, yeah?
B
For the policy side, yes, it might be. I think for troubleshooting, I think locally might make more sense. So, depending on which approach we end up choosing, let's say, if we are doing the exporting of the computed group members, then we actually already have the computed list where the actual policy will be realized.
B
So, in this case, on the member cluster side, you already know which pods you're applying to, and the to/from exported group members are already part of it. So maybe it might be easier to debug on the member cluster side.
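The "exported computed group members" approach mentioned above can be illustrated with a small sketch: each exporting cluster ships the resolved member pod IPs, and the importing member cluster merges them into ipBlock-style peers for its local rule. The data shapes here are illustrative assumptions, not Antrea's actual CRD schema.

```python
# Illustrative only: merge group members exported by remote clusters into
# ipBlock-style peers that a local (member cluster) policy rule can use.

def merge_exported_members(exports):
    """exports: {cluster_name: [pod IPs]} -> sorted list of ipBlock peers."""
    peers = []
    for cluster, ips in sorted(exports.items()):
        for ip in sorted(ips):
            # Each remote pod IP becomes a /32 ipBlock in the local rule.
            peers.append({"ipBlock": {"cidr": f"{ip}/32"}})
    return peers


exports = {
    "cluster-b": ["10.10.1.5", "10.10.1.7"],
    "cluster-c": ["10.20.0.3"],
}
peers = merge_exported_members(exports)
```

Because the importing cluster sees the fully resolved IP list, an operator debugging on the member cluster side can inspect exactly what the realized rule matches, which is the advantage discussed above.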
B
But here again, I mean, if there are any changes in the computed group members which have not yet reached the remote cluster, or those changes in the computed group members have not been reflected correctly in the remote cluster, then in that case you might not be getting accurate information. So I guess it needs to be correlated with what the status of this export is: whether those exporting clusters are still reachable or not. So there are, I think, multiple ways in which there might be some misinformation.
B
For the stats, I think... I know that there were at least some folks who are working on the Flow Exporter side. I do believe that they were also mentioning that this is something that can be put on the roadmap, to be able to unify the stats across clusters, but I haven't had any detailed discussions on how we would go about and approach this.
B
One is realization and the other... yeah, for realization, I think we continue to do it on the member cluster, because, for example, if again we think about the first approach, the policy will be realized in, let's say, your local cluster, and for the local cluster, your span is going to be your cluster nodes, and the pods and your workloads in your own cluster. So what we have today will work, as we already have the status.
B
Sorry, the realization status for the local, single-cluster network policy. And the exported group members are going to be, like, IP blocks, perhaps, so it won't reflect whether this... I think, at least from the realization perspective, we can report the status saying that in my local cluster it was successfully realized or not.
D
Yes, so you still say that in the leader cluster we want to aggregate, right? At least in the beginning, so we can say, yeah.
B
So, in the leader cluster, perhaps we can have some component which aggregates the different policies, at least the ones which are, you know, spanning across, and then perhaps we can have... I mean, I have not jotted down that particular item, but that's something I think should be possible. You know, it's similar to how we do aggregation of policy information from different nodes to the single Antrea Controller; now this would be aggregating from different controllers to a single leader cluster controller.
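The aggregation just compared to node-to-controller status collection could look roughly like this: each member cluster controller reports whether the shared policy was realized locally, and a leader-side component folds those reports into one summary. Again, a hedged sketch with invented names and field shapes, not the actual design.

```python
# Hypothetical sketch of leader-side status aggregation: member cluster
# controllers report local realization; the leader summarizes them.

def aggregate_status(reports):
    """reports: {cluster_name: bool (realized locally?)} -> summary dict."""
    failed = sorted(c for c, ok in reports.items() if not ok)
    return {
        "clusters": len(reports),
        "realized": len(reports) - len(failed),
        "failedClusters": failed,
        "phase": "Realized" if not failed else "PartiallyRealized",
    }


reports = {"cluster-a": True, "cluster-b": False, "cluster-c": True}
summary = aggregate_status(reports)
```

Keeping per-cluster failure names in the summary (rather than just a count) matters for the troubleshooting case discussed earlier, since it tells the operator which member cluster to debug locally.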
B
I had something that is currently not added in the items, but definitely I'll make a note of the aggregation part here.
D
In general, I just think maybe it's useful if we can aggregate the other state from the member clusters, like realization stats and even group members, if we use a different way to implement the policy in the group.
B
If there are no more questions on the multi-cluster side of things, I actually just wanted to make a quick update on the upstream network policy work. I think it's been quite a while since I've given a brief update.
B
We are continuing on the ClusterNetworkPolicy effort, and last week I had a one-on-one with Tim Hockin, and I think we have progressed well enough, at least to the point of him understanding the proposals that we have. He had a couple of inputs on that, and I have incorporated those into our proposal. In addition to the CNP work that we are doing, there is also an effort to do network policy status in upstream now.
B
This is something that we have done in Antrea-native policies, but it's still not part of the Kubernetes network policies. And I think the motivation behind introducing the status is that, recently, the endPort feature was merged in upstream Kubernetes for network policies, and as part of its GA requirement, there was a requirement that at least four CNIs support it. Antrea is one of the CNIs which supports it, but there are, I think, only three CNIs so far which are supporting it, and it hasn't yet gotten, like, the fourth CNI. So I think the SIG Network community feels that we should have a way, because now this is part of, like, the beta network policy.
B
Sorry, this is part of the network policy resource in beta, this field, and some CNIs would support it and some won't. I think there was a need for people to have this information exposed in the form of a status. So there is some work there. I know one guy from VMware, Ricardo, who is helping out, but I think he needs more help, and if anyone in the Antrea community wants to work on this particular KEP, feel free to get in touch with me, and I can put you in touch with Ricardo.
B
Similarly, there's also a parallel effort to work on the v2 aspect of network policies, and I think some folks from Juniper and Red Hat are trying to work on this use case, or rather on this new v2 resource for network policies. But again, if there's anyone in the community who wants to work on anything network policy related, feel free to ping me.
A
Thanks, Abhishek. All right, anything else? As we are already, like, five minutes over time, unfortunately: is there any final topic that you would like to bring up for discussion today, in the few seconds left? Otherwise, we'll call this meeting off, and that'll be it.
A
All right, so I think that might be all for today. I would like to thank everyone for joining and, most importantly, many thanks to Abhishek and Lan for this very extensive presentation about multi-cluster support. If you have any additional feedback, of course, the discussion does not end here: you can provide your feedback on the GitHub issue, or even provide your comments on the design document that has been presented today by Abhishek and Lan.
A
So, thanks again for presenting this, and thanks everyone for attending. The only thing left for me to do is to wish everyone a good evening, a good morning, or a good afternoon. Thanks again for joining, and talk to you again in two weeks' time.