From YouTube: Project Calico Network Policies
So for today's session we're going to focus on the policy implementation. For those who are, let's say, unfamiliar with Project Calico or starting off with it recently: Calico, as an open source project, is split up into really three functional pieces. There's the CNI plugin — the Container Network Interface, for those trying to understand this — which is responsible for setting up the networking between your containers and adding IP addresses for those containers.
Now, there's also the IP address management piece, the IPAM plugin. As you can imagine, this plugin is responsible for creating or assigning the IP addresses for those pods.
Although these two pieces are fundamental for container networking, they're not actually relevant to today's session. The agent we're going to focus on here is Felix. As you can see in the architectural diagram I've got in front of me, Felix is the agent responsible for speaking with iptables for the policy implementation — Felix is responsible for calculating and enforcing network policies. As seen in this presentation, it will use the standard Linux control plane implementation with kube-proxy.
We do also support eBPF without using kube-proxy — that's worth noting — and this updates iptables for you. So this is just a high-level view of the architecture of the product. If you're looking to learn more about the architecture of the open source project, my colleague Casey, who's a maintainer of Project Calico, actually has a video uploaded at tigera.io/video/tigera-calico-fundamentals — definitely recommend checking it out. So we can go into that later.
So why do we need network policy? Kubernetes is essentially a flat network. What this means is that the pods on those nodes can freely talk to other pods across other nodes without the use of network address translation. So on day one, when you set up a cluster, pods are incredibly insecure: they're freely talking amongst each other and there are essentially no guardrails configured. They're also ephemeral. What I mean by this is that pods don't have a fixed IP that's going to last very long, because Kubernetes itself is highly scalable.
We need something static, and in this case we'll talk about label selectors as that Kubernetes-specific abstraction layer by which we can build policy around something that is somewhat fixed. So again, as the pods scale up and down, as long as they have a label schema built around them, the policy stays dynamic: it doesn't have to worry about the IP addresses changing — and in fact we're not going to build policy around IP addresses specifically. So network policy is the primary tool to secure Kubernetes.
Let's say you're making a migration over from a traditional monolith architecture. What I mean by that is you kind of have, you know, a server hosting a front-end application, maybe with the back end being a Microsoft SQL database. But in this case we've got microservices — a cloud-native or microservice architecture. Traditionally the firewall was great, because you had a static IP, usually associated with that host — that server that was hosting the application.
It was very easy to build perimeter-based rules to say: allow these ports and protocols over these IPs to this destination. In reality with Kubernetes, because it's highly dynamic and, as we mentioned, pods are ephemeral, IPs are not something we can work with long term. So we're not even going to focus on using a traditional firewall implementation; we're going to use network policy as an alternative to your traditional firewalls. Also, whether we're talking about Project Calico or the default Kubernetes network policy implementation, it uses the standard policy API, and this is going to be relevant throughout our session.
So yeah, these are the three things I just want to focus on when we're talking about policy, whether it be Calico's policy or the default Kubernetes policy: we're trying to identify what context we're going to select for policy rules based on key-value pairs. Those will be the labels — you'll usually have, as you see in the picture, owner equals nigel, or platform equals, you know, gke.
Now, we're not going to talk about making rules from a platform perspective; rather, let's say I could have type equals frontend or type equals backend. Then we know which pods are assigned to those key-value pairs — those labels — and those, again, are going to be static, regardless of the pods being scaled up or down and the IPs being reassigned on those.
Since it's declarative, we're essentially declaring: once we scope it to those labels, apply these actions. And since the environment is highly dynamic by nature — Kubernetes is designed to be dynamic — our policy needs to be dynamic with it. So we shouldn't have to continuously revise network policy the way we would have done with traditional firewalls: if we're IP-based, we'd have to keep changing those rules.
We want it so that once we have those guardrails defined — especially default denies, and we'll talk about that in a while — then even as new workloads get introduced, ones we're not even familiar with, with other label-based schemas, they would be captured by these catch-alls. Again, that is a similar concept to firewalling, but we're going to apply it to network policy, so it has to stay dynamic. So we'll stick to that first point there about labels: labels are not a Calico-specific concept, this is a Kubernetes concept.
You can see the source link there to kubernetes.io if you want to find out more about how it works, but as you see from the example, it is essentially key-value pairs. So what are key-value pairs? Like I say, it's usually a concept — something that you want to focus on, a purpose, an intention — and then the associated value with that.
So if it's ownership or organizational structure, define that as the key, and then assign a relevant value to it. Each key must be unique for a given object, and once we have that unique nature, it's very easy for us to create a schema. We strongly recommend defining a kind of label schema, so you understand the intentions.
You want to understand the purpose of your workloads going forward, and it'll be so much easier for us to build our policy around that. So here's an example if we didn't use Calico. We have an API version coming from networking.k8s.io, the standard Kubernetes API. The kind is NetworkPolicy, so that is what we're making right now: a resource called NetworkPolicy.
I've given it a name, my-network-policy — a simple name, because that's what it is. By default in Kubernetes, the structure is that policies are scoped to a namespace, so you have to define a specific namespace, and then whatever pods reside inside that namespace are what we can actually scope. So, under the spec, we're looking for pod selectors; those selectors are matched labels and, as you saw there, the key-value pair.
It's role equals db. So if you assign a label to a database and you call it role — the intention — and then we say the intended part is going to be a database, then we say role equals db. In this case, any time a new database appears or a change has happened to those DBs — no matter how many pods there might be — anything with that matching label, this policy will apply to it. And in this case it's only scoping one action, which is ingress: traffic received by the pods.
So you see: ingress from other pods that have pod selectors matching the label role equals frontend. Now, that's very simple logic. It is very structured, and it is again declarative — we're saying anything with the role of db:
It can receive traffic from the front end. But notice the example of the other role, role equals helper: because we never scoped that into our rule condition, even if a new workload came along with a different role, it would be automatically denied. And also — I didn't clarify this — we only allowed specifically TCP 6379 for that DB to receive traffic from the front end, so anything else would ultimately be denied.
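Pulling those pieces together, a minimal sketch of the policy just described might look like this (the metadata name and namespace are placeholders I've assumed, not taken from the slide):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy   # assumed name
  namespace: default        # assumed namespace
spec:
  podSelector:
    matchLabels:
      role: db              # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
```

Because only TCP 6379 from role=frontend is listed, any other ingress to role=db pods — including from role=helper — is implicitly denied once this policy selects those pods.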
I should say now, we'll go over the crossover between network policy in Kubernetes and, obviously, the advantages of using Calico's policy implementation, but the next bit specifies via an IP block. So I mentioned briefly earlier that we don't build policy based on IP. That's not necessarily entirely true: there might be an example where you want to say, allow a pod to talk to the public internet,
you know, or a specific range of IPs that you want to declare. In that case you can absolutely declare: here's my ipBlock, specify the CIDR range, and then say, am I going to allow traffic to — as we saw there — 172.18.0.0/24? Then you can absolutely allow that, and anything that isn't part of the IP block will obviously be denied.
But what we want is not to focus policy for workloads on IP; we want to keep to that label idea, something that's static. Of course you can build policy around IPs, and of course it's relevant when you're declaring which ranges you should be allowed to talk to. As you can see from this example, anything with the label construct is now allowed egress out to those IP ranges. So this is only an egress action; the previous one was received-from, so that was an ingress action.
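As a hedged sketch of what that ipBlock egress rule could look like (the policy name and the pod label are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-range   # assumed name
spec:
  podSelector:
    matchLabels:
      role: frontend            # assumed label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 172.18.0.0/24   # the range from the slide
```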
Now, some organizations don't do it, but I strongly recommend everyone should have this implemented in an organization — it is the simplest guardrail you can put in place. It's all based on you already enforcing zero trust, and this is what we call a default deny. So a default deny is essentially saying: I've scoped both ingress and egress actions. As we said, it's still a Kubernetes network policy, so it's using the networking.k8s.io API version.
This will work for both Kubernetes, as you can see here, as well as Calico. But I'm specifying, for all my ingress and egress actions, a pod selector that matches quite literally every pod it could find.
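That default deny is conventionally written with an empty pod selector, which matches every pod in the namespace — something like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny       # assumed name
spec:
  podSelector: {}          # empty selector: every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

With both policy types listed and no rules declared, everything the namespace's pods send or receive is implicitly denied unless some other policy allows it.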
So regardless of workload, it's going to deny traffic: I've enabled the actions, so it's looking for something to do, but I never gave it anything — I didn't declare anything for it specifically to allow. So it's not exclusively or explicitly denying the traffic, it's implicitly denying the traffic. Regardless of new workloads that get introduced — whether they're permitted or rogue, say someone tried introducing one into the cluster — they will automatically be denied.
So how this would work is we would implement zero trust — and I'll try to explain that in this session — meaning only allowing the traffic you actually would permit in your environment. Sometimes you can be a bit broad with it, but certainly try to be as granular as you can. Once those workloads are freely talking over the ports and protocols that you do permit, then as long as this default deny sits at the end of every namespace — or potentially a global rule, which we'll talk about in a while —
any new unwanted traffic will automatically be captured. It's a very powerful policy and quite simple to implement, as we can see here, but it can be dangerous if you implement it in the beginning without putting serious thought into zero trust, because you'll end up denying traffic that you would otherwise wish to permit. So yeah, here are some ideas around Calico's network policy. I've only shown you Kubernetes so far, so it's important to know what the advantages of using Calico's implementation are.
It's an extension of the Kubernetes network policy implementation — it's not an alternative, is the way of looking at it. It takes the exact same structure that we're familiar with, so if you're already using Kubernetes but would like some additional capabilities, then I strongly recommend using it. As you can see here, it requires Calico for policy — we talked about the Felix agent in the architectural diagram — but not necessarily as the CNI.
So even if you're using the AWS VPC CNI on an EKS cluster, or you're using Azure's CNI — you know, they have their own CNIs, so there's a bunch of different CNIs you could use — that shouldn't affect using Calico for the network policy implementation. That's a separate piece of logic here.
Another thing: when we looked at the examples of the policy, within Calico we can actually define explicit ordering, or precedence, of our policies. So you can say: read this policy before these policies — again, globally or in a namespace — so that we know this one is higher precedence. It's more important to have security actions enforced before the zero trust starts being implemented at a per-application level.
All Kubernetes network policies are namespace-scoped. That's perfect in most cases. However, as an example, say I want to implement a high-level security rule: deny traffic associated with known bad IPs, or to known bad actors. You don't want to keep building identical, replicated actions for each namespace, because it just takes time — you have to keep building new policy — and if you have dozens of namespaces, you're replicating the rule across all of those namespaces.
So Calico also offers a globally scoped policy rule. That way, if we know there's a known bad actor, or we want to build a quarantining rule to deny all traffic associated with a specific bad actor, then I wouldn't have to keep replicating the same rule for each namespace. I can say: apply it with this globally scoped resource — it'd be called a GlobalNetworkPolicy, as a CRD — and then within that object you can filter down the scope of that context.
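A sketch of such a quarantine rule as a GlobalNetworkPolicy might look like the following (the quarantine label and the order value are assumptions, not from the talk):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: quarantine
spec:
  order: 100                       # assumed precedence value; lower runs first
  selector: quarantine == "true"   # assumed label schema for flagged workloads
  types:
    - Ingress
    - Egress
  ingress:
    - action: Log                  # record the attempt...
    - action: Deny                 # ...then drop it
  egress:
    - action: Log
    - action: Deny
```

Because it's cluster-wide, one object covers every namespace instead of a copy per namespace.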
Even though it's a global rule, that's really powerful, and I'll show some examples of that in a while. And within Kubernetes, policy is explicitly just allowing traffic: as I said with the default deny, you allow the traffic you want, and anything that wasn't what you do permit, you catch it through the default deny and implicitly deny it.
Whereas in this case you can explicitly use an action to deny; you can also explicitly pass and log traffic. And why logging is quite good alongside denying is, as in the example I mentioned there, a quarantine rule: you're saying anything that matches these contexts — things we know we otherwise would never trust, like it might be a communication between certain pods doing certain actions —
then you quarantine it. So you action it to say: deny all traffic from this pod to this other location. But then you could also say, well, log it — that way you're forwarding a syslog message to notify yourself, via a centralized SIEM solution, that there is some unusual activity, based on the fact that we created a rule that says deny for actions we obviously don't want. So we want to be dynamically ahead of it: we're outright going ahead and trying to deny unwanted traffic, but also logging it,
so we notify ourselves that there's an unusual actor there. I'll try to go through this a little quicker. So we allow you to scope down per endpoint and namespace, as you can imagine, but also per service account. And when you're trying to enforce things like PCI compliance — if there's a user that should be permitted to access, for instance, workloads that handle payment details, then perfect: you can bring service accounts, the user account, into the context of the policy to say:
well, don't allow this service account to talk to this one — and again, I'll show that example shortly. By default we're handling, you know, layer 3, layer 4 network traffic, up to layer 5. We integrate nicely with projects like Envoy and Istio as a service mesh, so if we use the Envoy daemon set, then we also have the ability to collect layer 7 traffic and similarly build network policy around not just the network-specific traffic, but also the application layer.
That's the HTTPS traffic we would otherwise be logging through layer 7. And this is probably the part I get most excited about: we're not just building policy around workloads, like we talked about with the Kubernetes implementation — we can create what we call host endpoints. In the Felix configuration you can flag it to automatically create host endpoints associated with the nodes in your cluster.
Then, for those nodes, we can build policy around them — just like workloads, the pods — to say: allow this or deny this traffic for those hosts. So you might have a specific use case where you have an etcd host and you want to, again, allow or deny that traffic. Perfect: you know, allow those specific ports or deny specific ports for the host endpoint. I'll show a good example in a while of why host endpoints are quite powerful.
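A host endpoint can also be declared manually; a minimal sketch for one interface on one node (the names, label, and interface here are all assumptions for illustration):

```yaml
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0           # assumed name
  labels:
    host-role: etcd          # assumed label
spec:
  node: node1                # assumed node name
  interfaceName: eth0        # assumed interface
```

Policies can then select `host-role == "etcd"` exactly as they would select a pod label.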
Again, it's just like the Kubernetes implementation: you can use kubectl apply -f on any YAML manifest, and as long as you're using the correct kinds and the correct API version, it's going to work. We also offer our own calicoctl. I think, as time moves on, we're probably going to deprecate that, as most people use kubectl universally for different frameworks, not just with Calico — so I guess everyone has a preference to continue using kubectl. Again, on our enterprise technology — I'm not here to sell enterprise to a community —
it's just worth noting that if you use Enterprise, there are even further extensions, like the ability to specify DNS policy at the egress level: to say, you know, don't just allow it to talk to these IP addresses, like we talked about, but also allow-list or block-list a specific, you know, wildcard domain. That way, again, we get a further abstraction — rather than just focusing on IP, we can now do it with DNS — but this is something you cannot do in the Kubernetes policy implementation. There's also this concept of tiering, and of preview and staging — all really useful.
So here is a Calico network policy. As you can see, it's highlighted: projectcalico.org is the API version we're dealing with. It has the same kind, so it's a namespace-scoped NetworkPolicy. The order value, as I mentioned earlier, is saying the precedence of the policy: order 900 comes after order 800, or after order 700 — you know, it's further down the chain.
It's also scoping the same way as we did earlier: under the spec we say selector, based on the label role assigned to it. But here are the bits that are different for Calico that we didn't see earlier: you can specify explicit actions. We never mentioned actions earlier because the only behavior was allow. In this case we can say: well, I permit certain ports and protocols, but I would never allow TCP — or, for what I'm trying to do here, I don't permit certain service accounts.
Well then, I could say deny on ingress: I'm not allowing received traffic that is TCP from a source that is a service account with the name sre-account. The diagram on the right should give a better idea of that: the service account sre-account cannot talk to any workload that has the label role equals helper over TCP.
However, in the case of the second action, we are also logging ICMP traffic that's coming from a source with the selector color equals green. So notice how you can be quite broad. This isn't really a powerful example — it's just to show how flexible it is: you can not just allow, you can explicitly deny.
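The two rules just described could be sketched roughly like this (the policy name, namespace, and the selector on the policy itself are assumptions based on the slide's description):

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: example-policy       # assumed name
  namespace: default         # assumed namespace
spec:
  order: 900                 # precedence: evaluated after lower order values
  selector: role == "helper" # assumed: the workloads being protected
  types:
    - Ingress
  ingress:
    # rule 1: deny TCP from the sre-account service account
    - action: Deny
      protocol: TCP
      source:
        serviceAccounts:
          names:
            - sre-account
    # rule 2: log ICMP from pods labeled color=green
    - action: Log
      protocol: ICMP
      source:
        selector: color == "green"
```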
We also allow service account context; you can log, which is brilliant if you're forwarding to a syslog or SIEM solution; and then you have ICMP amongst other protocols. So it's not just TCP and UDP: ICMP — like ping scanning — we can also log, deny, or allow that traffic. So yeah, I'm going to try and show a real-world example of what you might do with a traditional firewalling tool.
So in a monolith architecture you would have a zone-based architecture — that is what they often try to create. So you create a demilitarized zone, or a DMZ.
What do I mean by that? As you can see in the diagram, there are pods that can talk to the public internet and also receive traffic from the public internet. In this example it looks like there are four Kubernetes pods in that DMZ. They can talk to the public internet, and they'll have a label construct of firewall-zone: dmz. Now, as you can see, the trusted pods should be able to talk to the DMZ.
I think the diagram explains it better: trusted is also allowed to receive traffic from the DMZ, but the point here is that the trusted zone by no means can talk directly to the internet, and in the same way, the restricted zone cannot talk to it. So what we're ultimately trying to do — we're not really concerned with trusted pods as such; what we are concerned with is ensuring that only permitted pods can talk, for permitted purposes that we've approved, with defense in depth.
So the DMZ essentially has no visibility within the zone, and there are large IP ranges between those zones. With traditional firewalls you would have three firewall zones. Again, there's a lot of context here, so we need to make it static: as those workloads go up and down, I don't have to keep rebuilding the structure.
We need it to be highly dynamic. Otherwise, if we didn't create this, those pods would have full lateral access between those zones. So there's nothing to stop a compromised workload, if it did get into that DMZ zone, from talking straight to a restricted DB and stealing personally identifiable data — even if it was just doing port scanning. Once it's got that data, if it gets away with it, it can reach out to a command-and-control server and do whatever it wants, because it can talk to the public internet.
So, as I say, large IP ranges for egress, and then you have a bunch of different tools, as you can see from the example, that need to be talked to. Maybe the trusted pods, or the DMZ — yeah, probably the front-end pod — need to access those external endpoints, so we will make certain exceptions to the rule.
We'll say: it's only allowed to talk to the DMZ, unless it talks to external endpoints — and again, we need to identify exactly what endpoint we're opening it up to, not opening up DNS to potentially anywhere. So that's why we mentioned earlier that the DNS egress rules are really useful in Calico Enterprise.
So I have an example I can share — actually, you can go to docs.calicocloud.io if you're interested in it — an application called storefront. It's pretty standard: it's got a front end and two microservices, which are in the trusted zone, and a back end and a logging component. You know, you have a standard logging tool, but that's also holding sensitive data. And within those zones we note that someone may introduce a rogue workload — we have to assume that someone's capable of compromising our cluster.
So we are building the zones — we're building a DMZ, trusted, and restricted — so that the blast radius in this case is far more restricted. Because, as you see, the rogue workload managed to get into the restricted zone, but it can't talk outside the zone, because we've really set zero trust to the outside. The only way the data could have got to a different zone is over permitted ports and protocols, and only from a back-end or logging pod to those other intermediate workloads.
So in this case we need to lock down not just north-south — a traditional firewall can do that — but the east-west traffic, which is where it gets complicated. As I mentioned, there's staging for the different environments, and we can talk about that in a while, but it's important to understand within this tenant what I have. Like I say, there's the front end: I have a screenshot from a Google Cloud environment that I run, and you can see I have a front-end pod, a back end, and two microservices, and they have those labels.
I just realized there's one slight thing to mention about why this policy looks different from the other one: it originated from Calico Enterprise, so it has, for instance, this concept of tiering. You have a product tier — the tier is called product — which, again, is another abstraction associated with Enterprise. So in the case of the policy name, it's product.dmz as opposed to just dmz. But other than that, it's the same.
So, as you can see from the screenshot example that I've taken, anything that's scoped in the namespace storefront — which was the test application we just introduced — if it has the label dmz for that firewall zone, then I'm allowing it to talk to the public internet, which I've actually abstracted a bit: I've said type equals public for that IP range, via a network set.
Alternatively, if you're not creating network sets — which we haven't discussed yet — you could just CIDR-match the IP range. Everything else, as you can see here, on ingress — what we receive — is denied. And similarly for egress — what talks out from that DMZ — it's allowed to talk to pods with the labels firewall-zone equals trusted and/or app equals logging. So notice how there's fine-grained — I guess it's boolean logic we're talking about here — and/or logic.
So, you know, we're allowing it to talk to trusted, but rather than creating another rule, we're saying: or, if it has app equals logging. Or you could say firewall-zone equals trusted and app equals logging. So you can define, again, fine-grained context. And if it's not in those permitted zones, as you can see here, the other action is to deny that traffic. So that's really powerful — I strongly recommend creating something like this for your own workloads, especially if they have a similar architecture.
So the example I showed there was the demilitarized zone. Then we have a trusted zone — same idea: firewall-zone is set to trusted, so we know which pods this is going to apply to. It's going to be highly dynamic, because even as pods go up and down, they should keep this static firewall-zone label, and then the rules always stay the same. So it says: trusted can talk up to the DMZ.
And again, this is ingress, so it's allowed to receive traffic from the DMZ — the zone above it — and it's allowed to receive traffic from trusted, below it. But if it's not in those two zones, then it's not permitted, because it is trusted, as the name goes. And then on egress — what it talks out to — we're allowing it to talk only to restricted, and to pods within the same zone. So it can talk out to pods in the same zone, but anything it's receiving is from those two zones only, and then anything else is denied.
So it really is fine-grained context here. And then the final one — it seems repetitious, but it's really powerful — is to say the last zone is restricted. It can also receive traffic from the trusted zone or from other pods in the restricted zone, because, as we found out, although there's one back-end pod in that zone, there may be more pods in the future. So it's important to have that there.
Otherwise, if there were more pods, they couldn't actually, you know, receive traffic from one another. And then it's allowing all egress traffic — that's another point to make: it's allowed any egress traffic out. But of course, if it tried to talk to some other IP range, that would otherwise be blocked by the other guardrails we've already configured. So yeah, this is our network policy.
The only reason the syntax is slightly different here is that we have tier-dot-policy-name, which is familiar in Calico Enterprise; if you're using Calico open source it would just be the policy name, because it doesn't understand this unique construct, or concept, of tiering — which is not an issue. So to find your policies, it's just calicoctl get networkpolicy, or get globalnetworkpolicy. For logging and denying actions, again, we're just trying to enforce zero trust — like I say, strongly recommended. And if you've got different environments — let's say you have a single cluster, but you have some things that you consider test namespaces, that kind of thing —
even in a production environment, it's really important to define what the purpose and intentions of workloads are. So here we've got a label of environment equals prod, or environment equals dev, and that way, in this case, if it's a front-end pod — so yeah, if there's role front end — it can talk to anything or receive anything from production environments, but otherwise it's denying TCP traffic to prod. I think I made a typo there, but the point still stands, as you can see on the right-hand side, the development view.
What we're trying to specify here is that even if there's a new workload in development — it could be identical to the production one — it can't hypothetically talk to production and compromise that production, because, again, these are the applications being used by our users. So it's important to set those guardrails based on intentions as well, not just on zones.
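One way to sketch that environment guardrail is a cluster-wide rule that stops dev workloads from reaching prod (the policy name and label values here are assumptions based on the slide):

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: isolate-dev-from-prod   # assumed name
spec:
  selector: environment == "dev"
  types:
    - Egress
  egress:
    # block dev -> prod traffic
    - action: Deny
      destination:
        selector: environment == "prod"
    # leave everything else permitted here
    - action: Allow
```

The trailing Allow keeps other dev egress working; in a stricter setup you'd use Pass instead, so later policies (such as a default deny) still get a say.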
So we've gone over the difference between Kubernetes and Calico network policy. Kubernetes goes quite far: you know, it allows us to do ingress and egress rules; we can specify the pods; it already scopes to a namespace; there's port matching, protocol traffic matching, even IP blocks. However, when it does come down to scoping, you know, richly detailed environments, then you may also want to have globally scoped policy.
You may need to explicitly deny or log the traffic, to better understand from forensic analysis where that traffic's going. And from a compliance perspective, you may have non-compliant users who shouldn't be permitted to, you know, send traffic from workloads to other workloads — so there's value in the ability to scope service accounts into those richer matches, as well as in integrating with your existing service mesh.
If you're using Istio, and you need to, you know, understand what the HTTPS layer 7 traffic is on top of what we can already scope by default in our network policy, then you can go so much further to get kind of a full scope of what traffic we're going to allow, and what's not allowed, in our environment. And sticking on the topic of those traditional firewalling solutions: firewalls are deployed enterprise-wide.
They can be quite expensive — again, we're talking about totally free open source, writing YAML manifests, and we gave some simple examples. But I guess the reason why some organizations still today ask for firewalls — and that's absolutely fine, it does fall under some regulatory standards — is that some of those compliance frameworks are somewhat archaic. You know, they've been around for a long time. Will they change in the future? It's hard to know, but in the meantime you may still have a firewall solution.
It needs to sit on the perimeter — that's fine; that's not the discussion we're having here. It's more that if you have a traditional firewall that you've invested in, you have to understand it has no visibility of the east-west traffic — what goes on between pods in the cluster, not just what goes out of the cluster — and it has no control over that.
A
They
have
to
get
skilled
up
and
familiar
with,
or
maybe
your
development
you
need
to
write
policy
alongside
security
and
we
have
again
fine
grain
policy,
which
is
easy
to
use,
is
good
to
know,
but
if
you
are
looking
at
potentially
talking
with
about
calico
enterprise,
there's
web
user
interface
and
additional
controls
for
your
security
and
devops
team,
so
that
they
can
work
alongside
one
another,
but
ultimately
network
policy,
as
you
probably
could
see
from
the
session
so
far,
is
the
de
facto
way
for
us
to
secure
east
west,
as
well
as
north
south
traffic
for
those
pods
yeah
existing
policy
creation
process
is
essentially
slow
for
devops,
so
it
needs
to
be,
as
I
say,
a
tool
that
works
alongside
security
and
devops.
A
The
devops
teams
themselves
rely
on
kubernetes
to
enable
agility
and
speed.
That's
why
they're
using
kubernetes
today,
that's
not
a
surprise.
Connecting
new
applications
or
services
requires
a
firewall
rule.
Change
that
takes
time.
The
idea
of
our
policy
being
highly
dynamic,
as
I
mentioned
earlier,
is
as
new
workloads
are
brought
up.
We
don't
want
to
invest
time
in
redesigning
the
wheel.
We
don't
want
to
spend
a
lot
of
time.
A
implementing new firewall rule changes based on IPs. So, maybe even from the beginning, you have a small environment, and as it scales to something quite large, or to multiple clusters, you can replicate the same policy across multiple distributions and across multiple different cloud or on-prem bare-metal environments without any major choke point. You've got a centralized deployment, it's easy to replicate, it's the same API for each environment, and realistically DevOps can apply this in a CI/CD pipeline.
A
That's
probably
the
most
important
part
that
this
yaml
manifest
is
no
reason
why
they
can't
make
these
changes
at
the
highest
point
in
the
deployment
chain.
So
security
should
never
be
an
afterthought.
It
will
always
be.
You
know
as
part
of
the
development
or
sorry.
Security
is
code.
I
think,
is
the
phrase
that
people
like
using,
but
you
know,
use
it
within
your
ci
cd
pipeline
for
automation.
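One way to picture "security as code" is a pipeline stage that dry-runs the policy manifests before applying them. This is a hypothetical CI configuration; the stage names, the `policies/` path, and the GitLab-style syntax are all assumptions for illustration, not from the session:

```yaml
# Hypothetical pipeline definition; adapt to your own CI system.
validate-policy:
  stage: test
  script:
    # A server-side dry run catches schema and validation errors early.
    - kubectl apply --dry-run=server -f policies/

apply-policy:
  stage: deploy
  script:
    - kubectl apply -f policies/
```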
A
So, as we're coming up to the end of this session, it's important to know that it's not just security; it's also compliance. You may still be using firewalls for that compliance, which is totally understandable.
A
However,
if
it
doesn't
maintain
the
compliance,
the
existing
firewall
investment
that
it
doesn't
give
us
that
control,
it
doesn't
guarantee
for
external
auditor
that
we
are
complying
with
the
east
west
traffic.
That's
going
on
within
our
cluster,
then
yeah.
We
need
to
start
really
looking
into.
How
far
can
we
go
with
our
policy
and
similarly
from
an
access
controls?
If
there
are
no
access
controls
already
for
those
external
resources,
then
we
need
to
do
that.
A
We
need
to
define
which
pods
can
talk
to
external
crds
and
again,
what
is
the
ports
and
protocols
that
are
permitted
for
those?
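In Calico policy terms, that egress control could look like the following sketch (the labels, CIDR, and port are illustrative assumptions):

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-db-egress
  namespace: demo
spec:
  selector: app == 'reports'   # only these pods may reach the external range
  types:
  - Egress
  egress:
  - action: Allow
    protocol: TCP              # permitted protocol
    destination:
      nets:
      - 192.168.10.0/24        # external CIDR outside the cluster
      ports:
      - 5432                   # permitted port
```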
If there's a lack of visibility, we have an enterprise-grade offering that's dynamic and gives more visibility into it, but either way the policy scoping is very well defined. As long as you write up a good label schema from the beginning, you can be pretty confident that your policy is working and matching what you expect it to. So again, it's trustworthy, and it's popular for a reason.
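A "good label schema" just means agreeing on a few consistent label keys up front and writing selectors against them; a minimal sketch, where the keys and values are illustrative assumptions:

```yaml
# Assume pods are labelled with a small, agreed set of keys, e.g.
#   app: storefront, tier: web, env: prod
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: prod-web-ingress
  namespace: demo
spec:
  # Selector expressions combine keys from the schema, so the match
  # stays predictable as workloads come and go.
  selector: tier == 'web' && env == 'prod'
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    destination:
      ports:
      - 443
```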
A
So
I
hope
the
session
was
interesting
and
that
you
got
a
lot
out
of
it.
As
you
know,
it's
an
open
source
community.
We
have
open
slack
channel.
We
have
over
four
thousand
users
in
it,
which
is
great.
It's
always
going
up.
If
you
have
questions
about
any
of
the
content
you've
seen
today,
you
can
contact.
You
can
reach
out
to
me
directly
for
your
slack
community,
the
project
itself
calico
has
over
150
contributors.
So
it
is
worth
noting
that
we
have
the
project
calico,
github
repo.
A
Again,
you
can
bring
up
these
discussion
points
with
our
developers
directly
via
slack
via
that
community
or
via
discuss.projectcalico.org.
The
project
is
widely
adopted.
I
think
it's
still
the
most
widely
adopted
networking
and
security
solution
for
kubernetes,
with
over
one
and
a
half
million
nodes
powered
by
calico
every
day,
so,
whether
you're,
looking
at
using
our
cni
implementation
or
just
the
policy
that
we
discussed
today,
you
can
reach
out
to
us
in
a
bunch
of
different
forms,
so
hope
you
had
a
great
session
and
look
forward
to
hearing
from
you
soon.