From YouTube: Antrea Community Meeting 07/05/2022
Description
Antrea Community Meeting, July 5th 2022
A
Good morning, good afternoon, or good evening. This is the Antrea community meeting, and today is Wednesday, the sixth of July — of course, as usual, if you're in the United States it's still Tuesday, July 5th. This meeting has been moved by 24 hours from the original schedule to account for Independence Day in the United States. Today we have a presentation about network policy enforcement in multi-cluster scenarios. We know that we already have replicated network policies in these scenarios. This is a feature that has been worked on so far by Grayson and Yang, with quite a bit of help from Lan for whatever concerns the multi-cluster deployment. So Grayson, please go ahead.
B
Okay, hi guys, I'm Grayson. Today I'm going to introduce the stretched Antrea-native policy, and Yang and I will go through the whole design of it. So this is today's agenda.
B
First, the background: we're going to introduce our current state and why we need a stretched Antrea-native policy. Then we'll go to the label identity, which is required by the stretched network policy to do the matching, and then the implementation of the stretched network policy. I also have a demo video to show, and first I will let Yang start.
D
Yeah, sorry about that — there were some issues with the computer speakers — so I'll just quickly go over the background for the stretched network policy and the label identity design. In terms of the background: in our current upstream Antrea multi-cluster space, we already have a lot of features, which first and foremost include service export/import. This means that we are implementing the upstream Multi-Cluster Service API: for a bunch of clusters, if they are managed by common admins or whatnot, they can form a ClusterSet, and services in one cluster can be exported to the ClusterSet.
D
So every cluster in the same ClusterSet can access the exported service, and this is the very basic part of the MCS API. On top of that, we have ACNPs first supporting toServices for multi-cluster services, which means that in a single cluster, if you want to allow, deny, or reject traffic to a specific multi-cluster service, this can be done in a single cluster.
D
But the appliedTo will only be workloads in the cluster in which the ACNP has been created. After that, we added a capability called ACNP replication, which means that for the same ACNP, you can replicate it across all the clusters in the ClusterSet. So with those two capabilities combined, we already have really complete egress control for workloads towards multi-cluster services, without having to introduce anything really specific for multi-cluster services.
D
However, as I mentioned in the limitations, this is not the case for ingress. So think about it: if you wanted to restrict access towards a multi-cluster service endpoint, when you specify ingress peers, you would need to specify whether this ingress peer only concerns the workloads that initiate traffic to the service from the cluster itself, or whether it can also be a lot of other pods from other clusters in the ClusterSet.
D
Now we want to talk about what label identity is and the concept behind it. So why label identity? Let's just imagine that we want to write a policy which selects some backends of a multi-cluster service, and then we want to restrict ingress from pods that are either in the same cluster or from another cluster.
D
A naive idea would be: when we have such a policy, we just compute all the IPs of the selected workloads, whether they're in the cluster or from another cluster, and we use these IPs to program OVS rules, like we do for the network policies.
D
Each cluster in the ClusterSet would have to know the IPs of all the different pods in different clusters, in order to take the policy label selectors and translate those into the IPs that these selectors actually select. Syncing this IP information across the entire ClusterSet is not scalable at all, because the IPs can come and go — pod IPs are very ephemeral.
D
They can change, and as deployments scale up and scale down, these IPs are added and destroyed pretty quickly, so there's going to be a lot of churn in terms of syncing this information across the entire ClusterSet.
D
So the solution we came up with to resolve this issue is the thing called label identity: instead of needing to know the IPs of all the pods, we just need to sync the information about the pod labels across the ClusterSet.
D
The first alternative is that we have one long label identity for each pod — not for each pod, but for each pod spec, for each deployment, I should say, basically. The label identity will include the information of the namespace of the pod, which includes all the labels of that specific namespace, and also the pod's labels, and we concatenate them into a single label identity.
D
Then we will have two different kinds of label identities, and we can encode this label identity on the sending side and enforce policies for this label identity on the receiving side. The other alternative is that we separate the namespace and pod label identities.
D
This can potentially reduce data exchange, because we kind of decouple the namespace label identities and the pod label identities. So for the above example, for the namespace label identity we just have one identity, which is env=product, and for the pod label identities we have two. So in total there are three different identities.
D
Well, that's what we would need to sync in this bare-bones example, but as you can imagine, in the real world, if you have multiple namespaces with different labels and each of the namespaces only has pods with app=client and pods with app=db, then decoupling those two will significantly reduce the number of label identities we need to sync across the ClusterSet. So this is the idea behind why maybe we can separate these two label identities.
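The saving Yang describes can be sketched with a small example. This is an illustrative Python sketch, not Antrea's actual code: the normalization format ("k=v" pairs sorted by key) and helper names are assumptions for illustration.

```python
# Illustrative sketch (not Antrea's implementation) of the two
# label-identity alternatives discussed above.

def normalize(labels):
    """Render a label map as a canonical, sorted string (assumed format)."""
    return ",".join(f"{k}={v}" for k, v in sorted(labels.items()))

def combined_identity(ns_labels, pod_labels):
    """Alternative 1: one identity concatenating namespace and pod labels."""
    return f"ns:{normalize(ns_labels)}/pod:{normalize(pod_labels)}"

def separate_identities(ns_labels, pod_labels):
    """Alternative 2: namespace and pod label identities kept apart."""
    return ("ns:" + normalize(ns_labels), "pod:" + normalize(pod_labels))

# One namespace with two distinct pod label sets, as in the example:
ns = {"env": "product"}
pods = [{"app": "client"}, {"app": "client", "level": "admin"}]

combined = {combined_identity(ns, p) for p in pods}     # 2 combined identities
ns_ids = {separate_identities(ns, p)[0] for p in pods}  # 1 namespace identity
pod_ids = {separate_identities(ns, p)[1] for p in pods} # 2 pod identities
```

With many namespaces sharing the same few pod label sets, the separate sets grow additively rather than multiplicatively, which is the data-exchange reduction described above.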
D
With these two label identities separated, we will have slightly more complicated logic in terms of computing the label identities selected by a policy and enforcing this in OVS, but we think that separating them will be a better solution as of now — even though, for the first round of implementation, we were using alternative one instead of alternative two.
D
So in the demo video that Grayson is going to show later today, it will also be using the alternative one label identity format, but we're looking to change the label identity format to the latter. I'll pause here in case there are any questions so far.
B
I want to add some details on the bit usage. If we use separate labels, we have to assign separate bits for the namespace label identity and the pod label identity, for example.
B
If we have 24 bits in total, and we assign 12 bits for the namespace label identity and 12 bits for the pod label identity, then the pod label identity will only have fewer than 5,000 IDs available — there's a risk that will not be enough. And if we go with, for example, 8 and 16, then the namespace label identity probably will not be enough.
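The bit-budget trade-off here is quick arithmetic:

```python
# Arithmetic behind the 24-bit ID budget trade-off discussed above.
TOTAL_BITS = 24

# Splitting the field between namespace and pod label identities:
ns_12, pod_12 = 2 ** 12, 2 ** 12   # 12/12 split: 4096 IDs each
ns_8, pod_16 = 2 ** 8, 2 ** 16     # 8/16 split: only 256 namespace IDs

# Keeping a single combined identity uses the whole field:
combined_ids = 2 ** TOTAL_BITS     # 16,777,216 possible IDs
```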
B
But if we use the combined label identity, which means just one ID, we have the full 24 bits. Although we will probably use many more IDs, there will be more IDs available if we combine them. That's for the bit usage. As for the data exchange, there could be an example like this: there is one namespace with a hundred pods inside it, and the namespace changes its labels.
B
If we separate them, we only have to do the resource export/import for the namespace identity, instead of doing it for a hundred label identities, I think.
B
Yeah, just some details.
D
Thanks for this. Grayson, I would imagine you'll need to bring up your first point again when we actually talk about how we want to encode the label identity into the Geneve header — because we haven't introduced that yet, so it might be a little bit confusing for people who don't know about it. But yeah.
D
Can we get the next slide, please? Thank you. So this is the sort of lifecycle diagram for the label identity replication across all the clusters. For people who are familiar with the multi-cluster resource exchange pipelines, this is a really familiar diagram, and we actually use the regular resource import and export pipeline, which already exists in multi-cluster, to implement the label identity replication. The idea is that in each of the member clusters there is an exporter that is watching for pod and namespace label events. So let's say there's a new namespace created, or a bunch of new pods created in those namespaces — those are qualifying events.
D
Then we need to wrap this label identity in a ResourceExport and create this ResourceExport to let the leader cluster know that we have new label identities in this member cluster. What the leader cluster does is consolidate all the ResourceExport information from all the member clusters regarding the label identities, and it also does some deduplication, because multiple member clusters can have the same label identities — a label identity is just namespace labels plus pod labels. So when that happens:
D
We only need to assign an ID for each unique label identity in the ClusterSet, and for this unique label identity and ID mapping, the leader cluster will actually create a ResourceImport, so that each importer in the member clusters can watch this resource and actually create the label identities in their own clusters. So after the entire loop, the outcome should be this:
D
This is how it works: if a packet is sent out from one member cluster and received by another member cluster, and it carries the ID — let's say two — then the receiving end will know that the original packet was sent out by a pod which has certain labels, and whose namespace also has certain labels, because it can use the ID to look up the local mapping and understand what the label identity originally was. And yeah.
D
Let's go ahead to the next slide.
D
In addition to the service export, ACNP export, external entity export, and whatnot, we're adding a new type, which is the label identity export. Here, if we're using the alternative two we just mentioned, then for the label identity export we will export the normalized pod labels and normalized namespace labels of a specific cluster to the leader.
D
And yeah, the first three bullet points are the entire lifecycle I just mentioned, from the diagram we just saw, and this is also an example of what these label identities will look like. As you can see here, we have two namespaces.
D
The two namespaces have two different labels for the namespaces themselves. Namespace A will have a deployment with a replica count of two, and in namespace B there is an app=client pod and an app=client,level=admin pod. In this specific case, if we're using alternative one, we will have three different label identities.
D
So both of them are really similar, but in their IDs — it's just that the label identity format will be slightly different.
D
And I guess I'll just skip this for now — oh, sorry, I'll not skip this, because we also had two alternatives in terms of label identity importing, but we did decide on using the first alternative, which is that the label identity importer in each member cluster will create an object for each label-and-ID pair.
D
So the result is that in the importing member cluster we will have a LabelIdentity object for each of the label identities. We didn't want to keep one entire map of all the label identities that exist.
D
That's because, if there's any label identity add or update event, it would cause the entire object to be updated, and each of the member clusters would receive a huge update and need to figure out what the update to the label identities actually was — which we thought would be bad. So essentially, after all of this resource exchange pipeline is done:
D
Each member cluster will have a LabelIdentity object for each of the label-ID pairs. Also, if we go with alternative two, which separates the namespace and pod labels, we'll have something in the resource type or resource name of the label identity which signals whether this label identity is for namespace labels or pod labels.
D
That's a good question. For the ResourceExport, we actually just have one ResourceExport per cluster, so when the leader receives it, it knows we are talking about all the label identities that are in cluster A or cluster B. So there's a predefined name for the ResourceExport for that cluster, and for the ResourceImport, the name is a hash based on the label identity.
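The naming scheme Yang describes can be sketched minimally as follows. This is an illustrative sketch only — the hash function, truncation, and name formats here are assumptions, not Antrea's actual code.

```python
import hashlib

# Illustrative sketch of the naming scheme described above (hash choice
# and name formats are assumptions, not Antrea's actual implementation).

def resource_export_name(cluster_id):
    """One predefined ResourceExport per member cluster."""
    return f"{cluster_id}-labelidentities"

def resource_import_name(normalized_label_identity):
    """One ResourceImport per unique label identity, named by a hash so
    the same identity maps to the same object regardless of which member
    cluster reported it."""
    digest = hashlib.sha256(normalized_label_identity.encode()).hexdigest()
    return f"labelidentity-{digest[:10]}"

same_id = "ns:env=product/pod:app=client"
name_a = resource_import_name(same_id)  # for cluster A's report
name_b = resource_import_name(same_id)  # identical for cluster B's report
```

Because the import name is derived only from the label identity, duplicate reports from different member clusters deduplicate naturally on the leader.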
F
You mean the ResourceExport is one object per cluster which contains all the current labels? But does it have the same problem as alternative two of the label identity import — that whenever a pod is created, if it has a new label, the whole set of labels of this cluster will be exchanged through the export API at once, right? If there are thousands of deployments in this cluster, then it's an object with thousands of labels. I'm not sure about the size of the object, but is there a performance concern?
D
That's a good question. Let me think about it a little bit, about why we did not.
F
I'm not sure whether there will actually be a performance issue as such, or whether it has the same issue as alternative two for the import resource. If we're concerned about the performance of alternative two for import, why do we still use the same approach for the export resource?
D
Let me try to think about this. I do remember there was a concern about possible racing between different clusters.
D
It's because, if you consider this: if there are two clusters that have the same label identity, which they want to update to the leader cluster at the same time, and we don't do a ResourceExport per cluster but instead do a ResourceExport based on the label identity itself, then the two different clusters which have the same labels might be trying to update or delete the same ResourceExport at the same time.
D
I would imagine that, by the time we designed it, that was a sort of racing concern. In the current design, each member cluster will only be operating on the ResourceExport for its own cluster, so we can easily manage the race there — but if we were using the other way, then there's this racing concern. It doesn't really apply to the ResourceImport, because the leader cluster is the only one updating those ResourceImports.
F
I see. I think, if race conditions are the concern, we address this even with the current approach by just using different names — for example, for cluster A, I think you will have some kind of prefix or suffix holding the cluster information in the resource name for the export, right? Then perhaps the suffix or prefix could also be used in a per-label export object name: even if this cluster has the same labels as another one, the object will have a different name, so they won't conflict. I'm just curious why we used a different approach. I'm not saying there must be a performance issue — let's see if the concern really exists first.
F
Yes, that sounds not very efficient, and according to the implementation of the storage in etcd, I think it will keep a copy of each version of the object.
D
I see, I see — I agree. This is something I haven't revisited, I guess, since the design, so we probably want to consider using a single ResourceExport for each unique label identity for the exporting process as well.
D
Grayson and I will follow up on this. Thanks for the suggestion.
B
Okay, I'll take over from here to go over the label identity data path implementation. We also have two alternatives here. The first one leverages the Geneve header TLV: we load the pod's label identity into a register in the pod classifier flow.
B
As you can see below, first we have an initial TLV map: we can define our class and type and choose a tunnel metadata field to use for the label identity. Then there are some changes in our pipeline. First, in the Classifier table, for each pod we're going to add an action to load the label identity into the register.
B
For example, let's say we go with the separate label IDs: we use the first 16 bits for the namespace label — we load the namespace label ID into the first 16 bits — and then we load the pod label ID into the last 16 bits.
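The register layout Grayson describes — namespace ID in the upper 16 bits, pod ID in the lower 16 — can be sketched as simple bit packing. This is illustrative Python, not the actual OVS flow actions.

```python
# Illustrative packing of separate label IDs into one 32-bit register,
# with the namespace ID in the first (upper) 16 bits and the pod ID in
# the last (lower) 16 bits, as in the pipeline described above.

def pack(ns_id, pod_id):
    """Combine a namespace label ID and a pod label ID into one register value."""
    assert 0 <= ns_id < 2 ** 16 and 0 <= pod_id < 2 ** 16
    return (ns_id << 16) | pod_id

def unpack(reg):
    """Split a register value back into (namespace ID, pod ID)."""
    return reg >> 16, reg & 0xFFFF

reg10 = pack(ns_id=1, pod_id=2)  # namespace label ID 1, pod label ID 2
```

On the wire, the same value is what gets copied between the register and the Geneve tunnel metadata field at the Output and Classifier tables.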
B
So then the label identity of this pod is inside the register. Let me go to the last step first: when the packet leaves the pipeline, in the Output table, it will move the data in register 10 into tunnel metadata 1, which is what we chose in the TLV map.
B
If the packet comes from the tunnel, we unload the data from the tunnel metadata into the register where we store the label identities. Here is a traffic path for better understanding.
B
This is the most complicated case: a pod on a regular node in cluster A wants to talk to a pod on another regular node in cluster B. As we can see on the very left, pod A sends out the traffic. First, in the classifier flow, we know it's from a pod, so we load the label ID into the register, and then it arrives at the Output table.
B
We move the register data into the tunnel metadata, and the Geneve TLV map does the job of mapping the data into the variable-length options. Then, when it arrives at the gateway node, the TLV map there first does the reverse, so the data is now in the tunnel metadata, and in the classifier flow:
B
Since it's from the tunnel, we unload the tunnel metadata into the register. By the way, anywhere in the pipeline where the network policy is enforced, it can use the register data — which is the label ID — for enforcement. Then it goes to the Output table, does the same thing, goes to the cluster B gateway node, which has the same pipeline as the gateway in cluster A, and finally arrives at the regular node in cluster B.
B
So it's also from the tunnel, and the tunnel metadata is moved into the register. Now the register on the cluster B regular node contains the label identity of pod A in cluster A, so we can use this label ID in the register to enforce the network policy — probably inside the AntreaIngressRule table here. So this is alternative one, using the Geneve header TLV map. For alternative two, we're going to leverage the VNI.
B
This is much simpler on the implementation side. All we need is, still in the pod classifier flow, to load the pod's label identity into the VNI, which is a set_tunnel action. After this action, the label identity will travel with the packet until the end, so we can use the tunnel ID — the VNI — whenever we want to enforce the network policy.
B
The cons would be that the VNI only has 24 bits. So, as I said before, we probably can't separate the namespace and pod labels, since whether we split it 12/12 or 8/16, there's some risk of it not being enough. The other con is that I'm not sure if this field will be used for another purpose in the future, since it's kind of a network identifier — but currently it should be fine.
B
We don't use it in Antrea, but yeah. These are the two alternatives for the data path. Does anyone have comments or ideas?
B
I discussed this with Jianjun before, and he mentioned a scenario: say a user starts Antrea but hasn't enabled the multi-cluster feature. Since, to save bandwidth, if the multi-cluster feature is not enabled we don't change the MTU, then if the user later enables the multi-cluster feature gate on that running Antrea, we probably have to restart all the agents, or all the pods. So this sounds kind of weird.
A
I guess: do you see this as restarting the agents and then restarting also all the pods?
E
So if a pod, for example, doesn't use any cross-cluster multi-cluster service, will the pod still be affected, since we have to change the MTU of our pods?
B
Yes, since we are not sure whether a pod will use the multi-cluster service or not.
E
Yeah, got it.
G
Yeah, I personally feel using the VNI is not that bad, in my mind. Anyway, in an Antrea network we don't use that field today — and I think [inaudible] was around today.
G
Potentially, if you believe that some other service will look at the traffic — I think mostly they will not look at the VNI, I feel. In my mind, that is mostly for layer 2 switching, to identify the layer 2 network.
B
Okay, so what do you think about the separate versus combined labels? Since, if we choose the VNI, we probably have to choose the single combined ID.
B
If this is the case, I guess if we combine them into a single ID, it should be fine, since the data exchange won't be that different. If each ID had a single object, then the difference would be huge — which is like the example I gave before: the namespace containing a hundred pods.
G
Yeah, I don't know; it's a little hard to make a trade-off here. Okay, by the way — you know, they also encode the ID into the packet, right? I think in their case they are doing some even trickier things: they actually differentiate whether it's TCP or UDP, and for TCP they include it in the first SYN packet, I believe. Since the SYN packet is very small, you can assume you can append quite some bits there.
B
So basically it's trickier than using the VNI to store the label identity, yeah.
A
They encode — I think they encode the network policy identifier into the VNI header, yeah: they compute a unique identifier for the network policy on one end and identify it on the other.
A
But in that case it's slightly different, because they're in a single cluster.
B
Okay, there's not too much time left. Maybe we can discuss this more offline.
B
Yeah, so for the stretched network policy, API-wise there are also two alternatives. The first is that we add a scope field, like the policy on the left, to our current Antrea ClusterNetworkPolicy.
B
So basically, after this field is added, the policy on the left will work like this: it will apply to all pods in the namespace prod-us-west which match the label app=db — this is the same as in a single cluster. And for this one, it will allow traffic from all pods in the namespace prod-us-west, from all clusters in the ClusterSet — if this namespace exists in that cluster and the pods' labels match app=client — and deny all other traffic. For using this alternative as the stretched policy API, the advantage is that we can reuse our current implementation: we can reuse the controller and the whole priority module. And the challenge of this one is:
B
The Antrea-native policy will become more complicated, with more special fields, since this scope is another special field.
B
The other alternative is that we create a new CRD, for example called MultiClusterAccessPolicy. This works exactly the same as before — the appliedTo and the ingress from — and the only difference is that it works at the ClusterSet scope.
B
So, accordingly, our current implementation is using alternative one, since the priority handling is quite tricky.
B
Okay, oh, and yeah: from the updated cache, the controller will process the Antrea-native policy into an internal NetworkPolicy. Basically, it will translate the selectors to label identities.
B
So in this case, we add a label identity field in the NetworkPolicyPeer of the internal NetworkPolicy, and the label identity peer looks like the one below: it contains one pod label ID and one namespace label ID. If we combine them, then there will only be one ID here.
B
Here is an example of how the controller translates a policy like the one on the left: it selects pods with podSelector app=client and namespaceSelector env=dev, or with podSelector level=user, and this is the current set of label identities imported in this cluster.
B
So an internal NetworkPolicy like the one on the right will be created: the first peer has label ID 2, which is app=client, with namespace label ID 1, which is env=dev, and the other peer is label ID 1 only.
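The translation step above — matching the policy's selectors against the imported label identities to get the set of matched IDs — can be sketched as follows. This is an illustrative sketch with simplified selector semantics (exact-match label subsets) and example data; it is not the Antrea controller's code.

```python
# Illustrative sketch of translating policy selectors into matched label
# identity IDs, as described above. Selector semantics are simplified to
# exact-match label subsets, and the imported data is made up.

def parse(normalized):
    """Parse "ns:env=dev/pod:app=client" into (ns_labels, pod_labels)."""
    ns_part, pod_part = normalized.split("/")
    def to_map(part):
        body = part.split(":", 1)[1]
        return dict(kv.split("=") for kv in body.split(",")) if body else {}
    return to_map(ns_part), to_map(pod_part)

def selects(selector, labels):
    """True if every selector key/value is present in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

imported = {  # label identity ID -> normalized labels (example data)
    1: "ns:env=dev/pod:level=user",
    2: "ns:env=dev/pod:app=client",
    3: "ns:env=prod/pod:app=client",
}

def matched_ids(peers):
    ids = set()
    for lid, normalized in imported.items():
        ns_labels, pod_labels = parse(normalized)
        for peer in peers:
            if selects(peer.get("namespaceSelector", {}), ns_labels) and \
               selects(peer.get("podSelector", {}), pod_labels):
                ids.add(lid)
    return ids

# Peers like the example: app=client pods in env=dev namespaces,
# OR level=user pods in any namespace.
peers = [
    {"namespaceSelector": {"env": "dev"}, "podSelector": {"app": "client"}},
    {"podSelector": {"level": "user"}},
]
ids = matched_ids(peers)  # {1, 2}
```

The resulting ID set is what gets written into the label identity peer of the internal NetworkPolicy, instead of a list of pod IPs.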
B
After the internal NetworkPolicy is created, the work goes to the agent. The agent will also watch the LabelIdentity CRD and update its own cache. The agent will also watch pod label update events to update its classifier flows, since the pod classifier flow loads the label.
B
One thing to mention is that, besides installing the flows for the network policy rule, we're also going to install a set of security flows for each rule. Here is an example to show you what the flows look like and what the security conjunction looks like; on the left is the internal NetworkPolicy.
B
So first, the normal conjunction will be like this: the last 16 bits of register 10 equal 2 and the first 16 bits equal 1, which matches the first label identity; and the other match is only on the last 16 bits equal to 1, which matches label ID 1.
B
That's the normal conjunction, and we also have a security conjunction. The reason we need this is that, when a pod updates its labels, if it changes to a completely new label combination which is new to the whole ClusterSet, then the leader cluster has to assign a new label ID to this label combination.
B
So when the new ID arrives back at the agent, we're going to update the classifier flow with the new ID — but during this window, all traffic coming out from this pod will carry label ID 0, and we have to make sure our stretched network policy will not allow that unknown-ID pod traffic. So we install this security conjunction, which matches label identity zero, and we do a drop for that traffic. And yeah.
B
This is just in case the pod carries the unknown label, but this should be a pretty rare case. I haven't done any benchmark or scale test, but currently the new label ID assignment is fast, so the gap is small. And I don't have too much time left.
B
Okay, the traffic is blocked. Maybe this one is a bit boring — let's see, maybe we show this one: we create some pods and a namespace on cluster A, and the same on cluster B. You can see the label identity has been successfully imported. Here we use the combined format — the namespace label and the pod label together — and the ID is five, and on the other side:
A
And the security flows that you mentioned — you're getting... oh.
B
Yeah,
that's
that's
all.
A
Which, I believe, means that it would be sort of a compromise between the approach of exporting labels one by one and the approach of exporting all the labels at the same time. So that's another idea that we can consider. All right, we are already over time, so I would like to thank Grayson and Yang a lot for this great presentation and the nice demo, even if we had to shorten the demo to just three minutes. Is there any final question for Grayson and/or Yang?
A
Okay, my only comment — and we can revisit this in other meetings — is that you have two different alternatives for the style of the label, let's say, and eventually you want to move from alternative one to alternative two. When doing so, we probably just want to consider the upgrade implications, to see if there will be any issue during upgrade. But that's something we can discuss separately; we don't need to discuss it now.
A
Okay, I will just wait a few seconds to see if anyone has any other question; otherwise we can conclude the meeting.
A
All right, it seems that there are no more questions, so thanks again Grayson and Yang for a great presentation, and I would like to wish everyone a good night, good morning, or a good afternoon. Thanks again for joining, and the next instance of the Antrea community meeting will be back on Tuesday, July the 19th. Thanks a lot again, and have a good one.