From YouTube: Episode 21: Antrea Policy in Action
Description
Join Yang Ding and Grayson Wu to be among the first to learn about the new features in Antrea 1.6. Antrea 1.6 is coming soon!
A
Okay, we're live. Hi everyone, welcome to this week's Antrea Live show. Today we have Yang Ding and Grayson Wu with us, and they are old friends of Antrea Live, because they were here when we did the first episode. So for the new audience who just joined the show today — Yang Ding and Grayson, would you like to do a short self-introduction for them?
B
Yeah, sure. My name is Yang, and I've been mainly working on the policy side of things in Antrea. I've been part of a lot of the big Antrea-native policy features — basically all of them, I would say. I've been involved in developing and maintaining these, and I think that's basically the case for Grayson as well. So we're very excited to be here to present some of the exciting features of the Antrea-native policies.
C
Yeah, thanks Yang. You basically introduced me already — now I only need to say I'm Grayson.
A
Okay, well, welcome back, Yang and Grayson, with these great features for Antrea 1.6. I heard that we already had network policy support in a previous version, but this time we have an improvement on that, right?
B
Right, right. So Antrea-native policy has been around for a long time, and we're constantly adding new features to it. For example, we recently added things like FQDN policies and policy-to-Services support. Grayson and others on our team, myself included, have been working towards these features, and we're excited that this is becoming a complete feature set for the Antrea-native policies. We'll basically try to cover as much as possible today in this Antrea Live demo and see how it goes.
A
Awesome. So would you like to go ahead and give us a little bit of context about the network policy first? Yeah, okay.
B
Yeah, cool. Also, I guess as a reminder, the audience can also post questions in the live chat, right?
B
Would you take care of bringing these questions to our attention, if anything arises?
A
Yeah, for the audience: please sign in to YouTube so you can post in the comment chat. We can see that, and we can answer any questions you have during the show. You can also just say hi to us, to let us know you're here.
B
Hi, hi. So let me get started real quick — let me try to share my entire screen. Okay. So, to introduce the Antrea-native policies, we wanted to first take a step back and do a really quick review of what the Kubernetes NetworkPolicy does. In the Kubernetes model, if there are no policies whatsoever in a cluster, by default Kubernetes prescribes that all pods can talk to each other.
Without any NAT or anything, they can just connect to each other using their IPs. But NetworkPolicy comes into play when a developer wants some sort of security posture imposed on some of their workloads. A really typical use case is that some pods are deployed as part of some Services, and people want to control which other pods,
or maybe which external CIDRs, can talk to these Services. The way they can achieve that using the Kubernetes NetworkPolicy model is to apply certain network policies onto these pods. So you can see here — this is a very basic sample NetworkPolicy.
NetworkPolicy is a namespaced resource, so this specific policy applies to all db pods in the default namespace, and it has a bunch of ingress and egress rules saying what ingress is allowed to these pods and what egress is allowed from these pods.
Once a pod is selected by some NetworkPolicy, it has a policy type of Ingress, Egress, or both — in this specific scenario it has both ingress and egress rules — meaning that ingress to this pod and egress from this pod are implicitly denied when a policy like this is in place, except for what is explicitly allowed in the NetworkPolicy rules.
So in this case, we are selecting the db pods, and we're allowing ingress from a specific CIDR, from a specific other namespace that matches this criteria, and also ingress matching this criteria in the same namespace — from these pods, on specific ports.
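For reference, a policy along the lines of the one being described on screen might look like the following sketch — the pod labels, namespace labels, CIDR, and port are illustrative, not taken from the demo:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db                  # applies to all db pods in "default"
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24        # a specific CIDR
        - namespaceSelector:          # pods from a matching namespace
            matchLabels:
              type: monitoring
        - podSelector:                # pods in the same namespace
            matchLabels:
              app: client
      ports:
        - protocol: TCP
          port: 5432
```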
Obviously, this is how the Kubernetes NetworkPolicy works. But a lot of discussion has already come up in the SIG Network space in upstream Kubernetes, because obviously a lot of cluster admins want to control traffic in their cluster at a namespace level. So, a very typical use case — I'll show my terminal here, because I prepared a really simple Kubernetes cluster to showcase the Antrea-native policies today.
A
A quick question — can you go back to the previous slide? So, if I don't have the ingress rules and egress rules defined below, and I only have the policyTypes set to Ingress and Egress, does that mean that the pod cannot be reached by any other pods, and the pod cannot talk to any other pods, right?
B
Yes, exactly — with both policy types declared but no rules, all ingress to and all egress from the selected pods is denied.
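A minimal default-deny policy of the kind being discussed might look like this (the namespace is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:         # both types declared, but no rules:
    - Ingress          #   -> all ingress implicitly denied
    - Egress           #   -> all egress implicitly denied
```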
Okay, sure — let's move on a little bit. So I've prepared a sample Kubernetes cluster, and it has some workloads deployed in three different namespaces, other than the system namespaces. We've got a dev-us-east namespace, a dev-us-west namespace, and a production-us-west namespace.
What I can tell you is that dev-us-east and dev-us-west have a type=dev label on the namespace, and this one has a type=production label on the namespace. For each of the three namespaces, I've deployed the same deployments — a client deployment, a db deployment, and a web deployment, with app=client, app=db, and app=web as the label for each of them. So it's really a sort of minimalistic, simplified version of what a production Kubernetes cluster can look like.
Now, a typical use case we've been asked about really often by Antrea customers — or really any CNI's customers — is: "I want namespace isolation between my tenants." In this specific case, a tenant can map to a specific namespace.
Now, if you wanted to enforce something like this using the Kubernetes NetworkPolicy I just mentioned, it would be really hard, because the NetworkPolicy resource is not designed for a use case like this. A cluster admin would need to write some sort of controller which does the following: go into each of the namespaces, select everything in the namespace, and say that for ingress and egress, I want to allow only traffic within my own namespace.
Such a policy needs to be created in each of the workload namespaces present in the cluster, which means that once a new namespace comes up, the controller needs to go into that namespace and create another Kubernetes NetworkPolicy resource for that specific use case. So that's another layer of policy management on top of the resource itself, which is not really user friendly.
Most probably, I would guess there are controllers that do that — you can probably find some GitHub repo that does it, I'm not entirely sure — but I would also imagine that people who need this feature would use a CNI such as Antrea, which supports cluster-level network policies, and that basically automates it for them.
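A sketch of the per-namespace policy such a controller would have to stamp out (the namespace name is illustrative; a bare `podSelector: {}` in a peer matches only pods in the policy's own namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace
  namespace: dev-us-east   # must be re-created in every workload namespace
spec:
  podSelector: {}          # all pods in this namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}  # only pods in this same namespace
  egress:
    - to:
        - podSelector: {}
```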
B
Right, right. So this is the first caveat I want to mention about Kubernetes NetworkPolicy. The second caveat is its implicit isolation behavior: as soon as some developer in a namespace applies a network policy onto the workloads, they become implicitly isolated, other than what is explicitly allowed.
A lot of times, people forget to put DNS into the egress rules, and when they apply some network policy they suddenly realize, "hey, DNS doesn't work anymore." They'll be searching through the rules to find out what they're missing — but they're not missing anything explicitly; they're just missing it implicitly, because with a policy like this, DNS egress is implicitly denied, since it is not explicitly specified in the rules.
B
Okay, cool. So let's take a look at the example Antrea policy I have here, called strict-namespace-isolation.
This is a cluster-level Antrea ClusterNetworkPolicy, and as you can see from its name, strict-namespace-isolation, it does what I just told you about: it isolates each workload namespace, so that the pods in each namespace can only talk to their own namespace. And this is how the ClusterNetworkPolicy looks.
With this one single YAML resource, every namespace in the cluster can only talk to the pods in its own namespace, and I will walk you through what it does right now. A ClusterNetworkPolicy, like other Kubernetes objects, has a name and a spec. In the spec, there are two things that determine the priority of this ClusterNetworkPolicy compared to the others. The first differentiating factor is the tier.
B
On Antrea startup, we create — six, I believe — static tiers, defined as CRDs, that people can refer to for ClusterNetworkPolicies. There are application, securityops, networkops, and emergency, and there's another one whose name I forget, but essentially these tiers have an inherent priority among each other. So every ClusterNetworkPolicy created in the securityops tier will have higher precedence than a ClusterNetworkPolicy created in the application tier.
By doing this in a tiered fashion, we can ensure that when a cluster admin wants to create ClusterNetworkPolicies more related to the application-level logic of things, those will not be able to override the policies that are more demanding, i.e. for security purposes. And, as the naming suggests, if you create something in the emergency tier, it will basically have the highest precedence and override any other cluster policies in the other tiers. Now, within a tier, people can also specify the priority of the policy by putting in a priority number — the lower the number, the higher the precedence.
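Tiers are themselves CRDs, as mentioned. As an illustrative sketch only (the name and priority value are made up, and the API group/version may differ across Antrea releases), a custom tier slotted between the static ones could be declared like this:

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: Tier
metadata:
  name: mytier
spec:
  priority: 10          # lower number = higher precedence among tiers
  description: "Custom tier between the static ones"
```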
B
Right — that's a good question. So, a Kubernetes NetworkPolicy is not associated with any Antrea tier. How this works is — let me do a really quick doc check for you guys.
You can refer to this specific document if you have any questions regarding the Antrea-native policies. For the Tier CRDs, you can see here that these are the tiers Antrea creates for you automatically, and all the policies in those tiers have a higher precedence than Kubernetes NetworkPolicies — except for one specific tier, which is called baseline. Every policy created in that specific tier will have a lower precedence than the Kubernetes NetworkPolicies.
I will cover what the baseline tier does later, but basically it is a way for people to specify a default security posture for the cluster to begin with.
Otherwise, if they want to impose something that is more strict, or non-overridable, they can put those policies into these specific tiers, so that no matter what network policies are created by developers, those will not be able to override the ClusterNetworkPolicies created in these tiers — if that makes sense.
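To illustrate the baseline idea (the tier name is real; the policy itself is an assumed example, not one from the demo): a cluster admin could set a cluster-wide default deny in the baseline tier, which developers can still override with their own, higher-precedence Kubernetes NetworkPolicies:

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: baseline-default-deny
spec:
  tier: baseline          # evaluated AFTER Kubernetes NetworkPolicies
  priority: 10
  appliedTo:
    - namespaceSelector: {}
  ingress:
    - action: Drop        # default posture; K8s NetworkPolicies can allow over it
      from:
        - namespaceSelector: {}
```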
A
Yeah. So, generally speaking, the Antrea policies have a higher priority than the Kubernetes NetworkPolicies, right?
B
Yes. Okay, moving on to the spec — oh, a quick one: are there any other questions in the comments? Let me just quickly check.
A
And let's say hi to Ricardo, Vivek, and Salvatore — welcome to this week's show!
B
Okay, I'll move on to the spec of the ClusterNetworkPolicy, then. Just like the Kubernetes NetworkPolicies, a ClusterNetworkPolicy can have ingress and egress rules. Now, remember that every rule needs to be explicit.
So if there are no egress rules, it doesn't mean that egress is denied by default for this policy — it doesn't mean anything. Every rule basically needs to be explicit for Antrea ClusterNetworkPolicies.
Now, let's take a quick look at the ingress rules here. The first ingress rule says it is a Pass rule — and let me explain this Pass action, because we have a second rule here, which is dropping every ingress from pods in all other namespaces.
The Pass rule in front has a higher precedence than that rule, obviously. And when this rule says Pass, it is selecting something, and the traffic matching those peers will not be subject to the Drop rule mentioned below. So, for every traffic flow matching this specific ingress peer — basically, every flow from here to whatever the policy is applied to — the rest of the ACNP evaluation will be skipped.
If that makes sense. I think I skipped the appliedTo field here — let me back up a little bit, so it will be easier to understand. The appliedTo here is basically like the podSelector field in the Kubernetes NetworkPolicy, but whereas the Kubernetes NetworkPolicy is a namespace-scoped resource,
a ClusterNetworkPolicy's appliedTo can select workloads across the whole cluster — for example, with a namespaceSelector.
In our example here, it will select all the namespaces which are not kube-system — what I'm highlighting here. So this is what the policy applies to. Now, for the ingress rules: let's look at the first ingress rule here.
It says that I want to Pass every namespace that matches Self. This is more like syntactic sugar, if you will, for the ingress rules — it's a really strong expression which basically just means "my own namespace."
In this context — let me expand a little bit more on what this means, because we have three different namespaces here. When we are evaluating this particular policy for a specific namespace, the policy rule which matches the Self namespace means every workload in that namespace, and nothing else.
So, with these two rules, what will happen is that these two peers are not affected by the drop-all rule specified below. What it means is that for this specific namespace, I'm dropping all ingress except for two different things: one, traffic coming from my own namespace — like when a client pod wants to talk to a web pod in my own namespace; that traffic is not affected by the drop rule — and the second one is DNS.
So if I try to resolve google.com from my client pod, it will send a DNS request to the kube-system DNS components, and when the result comes back, it will not be affected by the drop-all rule — it will still go through. Other than these two kinds of traffic, everything else will be denied. And the same is true for the two other namespaces, because for them the "match Self" will resolve to different network peers.
The end result of applying such a policy is that each namespace will be isolated on its own, so that only the pods in its own namespace can talk to each other, while everything else on the outside will be isolated from these namespaces, except for the DNS components. Essentially, this is what this policy is saying, and you can see that it has the same rules for ingress and egress.
So there is some symmetry there. And by applying this specific namespace isolation policy, no matter which new namespaces come or go, this policy can ensure that as long as workload namespaces come up, they're automatically isolated — and when they're deleted, sure, the Antrea policy will not be covering those namespaces anymore. But in short, every namespace that exists in the cluster will have the isolation policy enforced.
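Putting the pieces together, the policy being demoed might look something like this sketch. The tier, priority, and selector labels are assumptions (the `type` label is borrowed from the demo namespaces, and the kube-dns labels/`kubernetes.io/metadata.name` label assume a recent Kubernetes); `namespaces: match: Self` and the Pass/Drop actions are real ACNP constructs:

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: strict-namespace-isolation
spec:
  tier: securityops
  priority: 5
  appliedTo:
    - namespaceSelector:
        matchExpressions:
          - key: type          # only workload namespaces carry this label
            operator: Exists
  ingress:
    - action: Pass             # own-namespace traffic skips the Drop below
      from:
        - namespaces:
            match: Self
    - action: Drop             # everything else in-cluster is dropped
      from:
        - namespaceSelector: {}
  egress:
    - action: Pass
      to:
        - namespaces:
            match: Self
    - action: Pass             # let DNS queries reach kube-system
      to:
        - podSelector:
            matchLabels:
              k8s-app: kube-dns
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
    - action: Drop
      to:
        - namespaceSelector: {}
```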
B
Oh, okay, I see what the question is asking. There is actually an upstream effort we're making, which is merging a specific resource called AdminNetworkPolicy into upstream Kubernetes — shout out to the folks who started this effort. Right now, we're still deciding how AdminNetworkPolicy will come into play side by side with the current Antrea-native policies.
So I can't give you the exact answer right now, but yes, this is something to watch for, because it will be the Kubernetes upstream version of cluster-scoped network policy.
A
Are there any questions about the spec of this ClusterNetworkPolicy?
B
Okay, so I just wanted to quickly mention a caveat here. The reason why I put this specific expression in the appliedTo of the ClusterNetworkPolicy is that you probably want to be really careful about applying this policy to all namespaces blindly — because if you do, chances are there are also some system components that would be affected by the policy.
B
In
this
specific
example,
if
you
apply
this
policy
to
all
name
spaces,
you
can
apply
these
policies
onto
the
core,
dns
pods
as
well.
The end result of that is that the CoreDNS pods would only accept traffic from their own namespace, kube-system, or from the CoreDNS pods themselves — which doesn't really make any sense — and drop everything else. What that means is that whenever some client in the cluster tries to do DNS resolution, that request will get dropped by this ClusterNetworkPolicy, and that's not ideal.
B
So
that's
why,
in
the
demo
here,
I
put
this
specific
expression
so
that
we
make
sure
that
this
cluster
narrow
policy
is
only
applying
to
the
workload
namespaces,
which
is
you
know
what
it
is
supposed
to
operate
on?
Okay,
so
let's.
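An alternative way to scope the appliedTo away from system namespaces is to exclude them by name. This snippet is a sketch (the `kubernetes.io/metadata.name` label assumes Kubernetes 1.21+, where it is set on namespaces automatically):

```yaml
appliedTo:
  - namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values: [kube-system]   # keep system components (e.g. CoreDNS) out of scope
```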
B
All right. So, as I mentioned, Pass basically just skips the rule evaluation for the Drop rule, so that this traffic will not be affected by the Drop rule below. But an Allow rule is something stronger: it says I want to explicitly allow egress towards this Service, and no other, lower-precedence policy can override this behavior — unless there's a Deny rule with higher precedence than this specific policy.
A
And this toServices field — is that a new feature?
B
Yes, it's a new feature — added, I think, in the 1.5 release, if I remember correctly, or the 1.4 release. It's a really cool wrapper: if you want to allow, deny, or reject any traffic towards a specific Kubernetes Service, you can just put its name and namespace there, and it will resolve to that Service — so you don't have to put in any pod selectors, or put the Service's ports in the ports section.
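The toServices rule being described might look like this fragment of an ACNP egress section (the Service name and namespace are hypothetical):

```yaml
egress:
  - action: Allow
    toServices:
      - name: web-svc            # hypothetical Service
        namespace: dev-us-east   # resolved to the Service, no podSelector/ports needed
```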
C
Okay, I just want to add a little comment on this field. There's a little caveat when we use it: toServices relies on AntreaProxy, since Antrea does the load balancing for these Services. So if you connect directly to a backend endpoint of the Service, that traffic will not be selected by this policy.
If you want to realize backend-endpoint selection, there's another approach in ClusterNetworkPolicy — but maybe that's too much to cover; we can do this in another live show, I guess. Just a caveat to mention here.
B
Yeah — if I can wrap it up real quick, what Grayson is essentially saying is that when you specify a toServices rule, you're not actually specifying the rule for the Service's backend endpoints; you're specifying a rule towards the ClusterIP of the Service. So if you really want complete protection around the Service, you probably want to protect the ClusterIP and the backend communication as well. But that's a more advanced topic, I would say.
B
Okay guys — so, as promised, this live show is about Antrea policy in action, so let's see some action. We'll apply this specific policy, and as promised, every namespace is now isolated, except for DNS. So let's verify really quickly that it's working as expected. I'll exec into one of the client pods in the dev-us-east namespace and get a shell.
So if I do a curl to the web deployment in the same namespace — which is this guy — I'm not too sure what port the pod is listening on, but I'm guessing 80... and it's good. So I can basically talk to the web pod in my own namespace. Now, what will happen if I try to talk to the web pod of a different namespace — this guy? What happened there?
Look at it — denied, because of the drop rules we have for the namespace isolation. So that's how it works. And as promised, we don't regulate DNS traffic: the kube-system CoreDNS is running in another namespace, but we can still talk to it, because we have the Pass rule for it. Now let's try to ping, say, google.com — and it's good. And you might ask: we have an egress drop-all rule, right? So why does the egress to Google still work?
We can still talk to Google as egress because, in the strict-namespace-isolation drop rules, we are selecting "to" with an empty namespaceSelector. Selecting every namespace means I want to drop the pods' egress to all other namespaces within the cluster — but it doesn't specify any behavior for out-of-cluster traffic.
B
Now, if I change this drop rule to remove the namespaceSelector — making it a drop-all rule — and try again:
You can see the DNS still works, because we made an explicit Pass (or Allow) rule for that. So the client was able to receive the resolved address for that specific domain name, google.com — but when it sends the traffic out of the cluster, it gets denied, because we now have a deny-all-egress policy in place. So this is how this works.
A
I'm a little confused. Could you show the YAML file of the ClusterNetworkPolicy again — the one before you edited it? Just under the Drop action, what did we have there? I remember—
B
Yes — with the namespaceSelector selecting everything, this specific clause basically says I'm selecting every namespace of the cluster, but obviously we're not selecting anything outside the cluster. So what this rule translates to is: drop everything towards all pod IPs in the cluster, if you will, but don't drop anything going out of the cluster. But if you remove these two lines and just say "action: Drop," it's a rule with a completely different meaning.
It means that no matter where the packet is going, drop it — a drop-all rule, which is a really, really strong rule.
A
Okay — so all traffic, including to the outside of the cluster, will be dropped if we don't specify the namespaceSelector. But if we select all namespaces, it just means we cannot talk to all the other namespaces, while we can still talk to the outside of the cluster, right?
C
So, if you define an Antrea policy without any field relevant to the L4 protocol — for example... can you open a ClusterNetworkPolicy on screen?
I mean, open the YAML file. Yeah — for example, in the egress you only define the "to" as an empty namespaceSelector, which will select all namespaces. But you can also define a field called ports, and under ports you can say I only want to drop TCP traffic, or UDP traffic, or SCTP traffic. If you don't define the ports field, it will only match traffic at the L3 layer — so ping will also be affected by this policy.
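The difference Grayson is describing can be sketched like this — with a ports field, the drop is narrowed to the listed L4 matches; without it, the rule matches at L3, so ICMP (ping) is dropped too (port numbers here are illustrative):

```yaml
egress:
  - action: Drop
    to:
      - namespaceSelector: {}   # all namespaces in the cluster
    ports:                      # optional: narrow the match to L4
      - protocol: TCP
        port: 80
      - protocol: UDP
        port: 53
# with no "ports" field at all, the same rule would match at L3,
# so ICMP echo (ping) would be dropped as well
```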
Yeah — and also, currently I'm working on adding ICMP support in the ClusterNetworkPolicy, so you will be able to say, for example, I only want to drop ICMP type 8, code 0, or something like that. You'll be able to do this in the next version of the ClusterNetworkPolicy; it will bump the ClusterNetworkPolicy to v1alpha2.
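Since this feature was still in progress at the time of the show, the following is only a guess at its shape based on the discussion — the field names are assumptions and could differ in the released v1alpha2 API:

```yaml
egress:
  - action: Drop
    protocols:            # hypothetical L3/L4 protocol matcher
      - icmp:
          icmpType: 8     # echo request
          icmpCode: 0
    to:
      - namespaceSelector: {}
```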
A
Okay, I think you answered the question very well. And Jay, if you have any further questions about this, you can reach out to Grayson.
B
Okay, what else do we have? I do have another thing prepared. Let me quickly jump into this allow-dev-inter-namespace YAML.
So, the namespace isolation basically says that each namespace can only talk to itself, but there can be cases where — take those two dev namespaces — you may want those two namespaces to be able to talk to each other, in addition to themselves, because they are both type=dev namespaces. That's a really reasonable assumption.
So, this is a securityops-tier, priority-1 policy which, at the policy level, has a higher precedence than the previous namespace isolation policy. What it does: it is applied to type=dev namespaces, which selects the two namespaces I just described, and in the ingress rules—
—it says that I want to allow type=dev namespaces to talk to me. So what this policy essentially does is not only let these two namespaces talk to themselves — which was already done in the strict-namespace-isolation example — but, in addition, allow these two namespaces to talk to each other as well.
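The policy being described might look something like this sketch (the label key/value follow the demo's namespace labels; the exact tier and priority are as stated in the show, the rest is illustrative):

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: allow-dev-inter-namespace
spec:
  tier: securityops
  priority: 1                 # higher precedence than the isolation policy
  appliedTo:
    - namespaceSelector:
        matchLabels:
          type: dev           # the two dev namespaces
  ingress:
    - action: Allow
      from:
        - namespaceSelector:
            matchLabels:
              type: dev       # allow dev namespaces to talk to each other
```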
So let's see how that works if we apply this policy.
Right now I'm in the client pod of the dev-us-east namespace — and remember, when we tried to curl the web pod in the other dev namespace,
it was not returning anything, because the connection was dropped. Let's see if it gives me a success this time... it does. So what I'm showing here is that I applied a higher-priority policy on top of the strict namespace isolation, so that it overrides the drop rule of the original policy a little bit.
Okay, so let's look at the final policy that I prepared here — allow-aws-for-db. Remember, we have db pods in all of the namespaces, right? So one specific use case might be that these db pods are constantly trying to talk to, say, S3 buckets to update the entries there. And from a security perspective—
—the cluster admin might want to say: I only want the db pods to talk to this domain, or another specific CIDR, and that's all; I don't want to allow any other egress, because that's not secure. So with this example, I'm trying to showcase the new FQDN feature we added to the Antrea-native policies a couple of releases ago.
Looking at this policy, we are selecting every db pod in the cluster, no matter what the namespace is, and we're allowing egress to this specific FQDN — oh, I shouldn't say regular expression, I should say FQDN wildcard expression. So that means every egress request to, say, www.amazonaws.com—
—or an S3 bucket at some subdomain of amazonaws.com, is allowed. The DNS components are also passed, because I do want the pods to still be able to resolve names, obviously — otherwise this would be in vain, because if a pod can't even resolve DNS, then obviously it won't be able to egress anywhere. Other than these two cases, we drop all egress, which means you won't be able to talk to google.com or anything else. So, let's see this guy in action.
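A sketch of the FQDN policy being demoed — the `fqdn` peer and wildcard matching are real ACNP features, while the priority, the DNS-pass labels, and the exact wildcard string are assumptions:

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: allow-aws-for-db
spec:
  priority: 3                 # no "tier" -> lands in the application tier
  appliedTo:
    - podSelector:
        matchLabels:
          app: db             # every db pod, in any namespace
  egress:
    - action: Allow
      to:
        - fqdn: "*.amazonaws.com"   # FQDN wildcard, not a regex
    - action: Pass                  # keep DNS resolution working
      to:
        - podSelector:
            matchLabels:
              k8s-app: kube-dns
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
    - action: Drop                  # no peers: drop all remaining egress
```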
If you will. So let's double-check: we'll exec into — let's do this guy — the db deployment from production-us-west.
Okay, so we try to ping Google, which is not supposed to work: the DNS was resolving, but the egress request is getting dropped, because we have the drop-all rule, if you remember. But the www.amazonaws.com—
B
Hmm — the request to www.amazonaws.com is getting dropped too. That's not what we expected; let me take a look.
Okay, yeah — remember, I changed that other policy to drop all egress, and it's still in effect. Basically we have policies stacked on each other now. The Amazon AWS traffic being dropped is because there's another policy in place, which I forgot to revert, and it's dropping all egress. That's basically the reason. So now, if I configure this correctly, and go to this pod — let's do this again. Google again: it's dropping the egress. And now Amazon again: it's connecting.
A
Does the allow-aws-for-db policy specify a tier?
B
It does not — if you look at allow-aws-for-db, it doesn't have a tier keyword. What happens is that if an Antrea ClusterNetworkPolicy is not specified with a tier name, it will be associated with the application tier, and policies there have a lower precedence than the securityops tier — which is where we put the namespace isolation policy.
So what happened is that the namespace isolation policy, which we forgot to change back, was dropping all egress for all the namespaces. So it doesn't really matter what the FQDN policy in the application tier defines anymore: every egress will first be evaluated in the securityops tier, and there's a drop-all-egress rule there, so everything gets dropped for the whole cluster.
A
Oh yes — I just noticed the priority, but I didn't see the tiers there.
B
So, I would say that concludes all the demos we have today. It's actually really good that we ran into a small problem — it basically shows you how we debug these kinds of things. Every time you see some policies not working, or not working as intended, you probably want to check what other policies are applied to the pod, and whether those policies make sense.
Right, right. And the last thing, since we're speaking of that: for the Antrea agent, we actually have a pretty cool command. If we kubectl exec -it into the Antrea agent pod in kube-system—
B
It
gives
you
what
policies
are
applied
onto
the
specific
part
at
the
time
being
so
for
client
parts.
The
only
policy
affecting
it
is
the
strict
namespace
isolation
which
is
we
just
put
on.
But
if
we
do
this
for
the
db
pod,
which
is
you
know
this
guy
in
this
namespace,
you
will
have
two
policies,
because
it's
affected
by
the
structure,
stricter
name
to
its
isolation,
and
it
is
also
affected
by
the
fqdn
rule.
A
Yeah, I think that's a really powerful way to debug network policies, because sometimes you may forget which policies are applied to a pod. Thanks Yang and Grayson for showing us the Antrea policies today — I learned a lot from your demo, and I hope our audience also got an idea of how to use the Antrea-native policies from watching our show.
A
I think we are clear in the comments, and we're out of time for this show. Thanks everyone for watching! If you like this kind of content, please consider liking the video and subscribing to the channel. Next week, Jay will be back to host the next one — see you next Wednesday. Appreciate your time!