From YouTube: Reducing Attack Surface using Network Policies
Description
Mmadu Manasseh is a software engineer at Deimos Cloud, based out of South Africa.
A: I think I mispronounced something there, sorry. Yeah, Mmadu is a DevOps engineer at Deimos Cloud, and he will be speaking to us about this topic, which is basically reducing attack surface using network policies.
B: Good afternoon, everyone. Thank you for having me. Let me know if you can hear me.
B: All right, cool. I'm just going to start presenting now. So yeah, good afternoon once again, and welcome to our very first edition of Kubernetes Advances Africa. I'm here to talk about reducing attack surface using network policies. First of all, very briefly: my name is Mmadu, and I work as an SRE at Deimos.
B: At Deimos, we mostly work on infrastructure design, security and implementation. For those of you who don't know what Deimos does: Deimos is a DevSecOps company that helps guide other companies on their path to cloud adoption. We offer a range of services, from cloud migration to security work on your own infrastructure, and we also offer Google Workspace as a service.
B: Cool. Today we'll be talking about security, which is a very interesting facet of infrastructure management, and, as much as most people don't like to hear it, it's one of the basic things, one of the major things, that affects the performance and the security of your application.
B: So basically, security is just ensuring you adhere to the confidentiality of data, the integrity of data, and the availability of data to your customers, because that is what security basically is. And yeah, we get this question a lot: how do you make your system secure? Before we proceed, I just want to state that there is no single solution to security. You just have to keep implementing layers of security defenses, and that improves the security of your system as a whole.
B: All right, let's dig deep into this. Cloud security, basically, is you trying to implement a whole range of policies and services that are going to help you protect your data in the cloud. You want to protect your data from attackers, you want to ensure that your data is constantly available, and all the methods you apply to make this possible are, basically, cloud security.
B: It goes from, you know, managing your IAM permissions, ensuring that the appropriate user has the appropriate permissions, to setting up firewalls. All of these constitute security in the cloud, and one of the major ways of improving your security is identifying ways you can reduce your attack surface. So basically, an attack surface is what an attacker can do once they gain access to your system. For example, when you walk into a compound, what can you do? You can see a lot of rooms.
B: You can walk into any of the rooms. So the number of rooms you can walk into is basically like the attack surface an attacker has access to when they compromise your security and gain access to your infrastructure.
B: So one major thing to do in order to improve your security is to try to reduce what this attacker can do.
B: The basic component of Kubernetes is the pod, with its containers, as most people would say. So what can a particular attacker do when he compromises your pod or your container? That's something we want to know. One of the major things is that he can have access to the kube API server, if the service account is mounted.
B: This is very important, because accessing the kube API server means the attacker can understand what is deployed in your cluster: knowing the namespaces, knowing the applications you have. Basically everything about your cluster is available once you have access to the kube API server, so you really don't want to give an attacker access to it.
B: Another thing is, if your pods are running in privileged mode, the attacker can get access to the underlying node that the pod is running on. We've seen cases of what we call container escape, where the attacker chooses to compromise the node the particular pod is running on. Another thing: if you have access to a pod on the cluster, that pod is also on the network, so the pod can reach other workloads and other components of your network.
B: So if you have a database on your network, or other workloads on your network, the attacker can have access to all of these once he compromises your pod. Another part which is very important is data exfiltration. When an attacker compromises a particular pod, he can export data to another location, depending on what data he needs and what data he has.
B: Access to that can be very dangerous, for example when he exports user information or, let's say, credit card details to a particular location. So what do we do? We've been able to identify what this attacker can do once they have access to a pod; now we want to be able to reduce the attack surface. That's basically what this talk is about: we want to limit what the attacker can do once he compromises a particular pod. Most times, architectures are such that we have front-facing applications, and then we have applications at the back.
B: Your front-facing application could be your front end, and then you have back-end applications that power that front end. We want to limit the pods or the services that an attacker can access once our front-facing application is compromised, for example. So, enter network policies. Network policies are basically a set of rules that determine how the pods in our cluster communicate.
B: Say a pod is in your cluster and you want to restrict it to communicate only with a particular subset of pods. Take the case I gave earlier: you have your front end, you have your back end, and you have your database. The front end communicates with the back end, and the back end communicates with the database. You don't want direct communication from the front end to the database, so you can implement network policies, and these network policies will restrict that communication.
B: For those of us coming from a Linux background, think of iptables. iptables is a way of setting firewall rules on Linux VMs, and network policies are akin to iptables on Kubernetes. So why do we need network policies? Some people might say: okay, we have iptables, we could probably just take inspiration from iptables and do something around it.
B: But one thing with Kubernetes is that the IP addresses are quite dynamic. When your pod spins up, it has an IP address that will change when it spins up again, and we know from deployments that this is bound to happen: when you scale up and scale down, new pods get created, and all of these pods have different IP addresses. That's one of the major reasons why we needed something tailored specifically to Kubernetes. Another thing we have to note is about Kubernetes itself.
B: Most people don't know this, but Kubernetes by default doesn't give you all these security features; you have to implement them yourself. By default, all the pods in every namespace can communicate with each other. Say you have a test namespace, a staging namespace and a production namespace: staging can communicate with production. That's how Kubernetes works by default; there is no isolation of pod traffic at all. So right now you see why we need network policies.
B: So how do network policies basically work? Network policies use labels. We've seen this with Kubernetes services and deployments: when you're defining your deployment or your service, one thing you will notice is that we use what we call selector labels, and selectors are just labels that target specific pods.
B: It's the same thing with network policies: network policies use labels, and these labels target the specific pods to apply those policies to. Another thing you have to know about network policies is that they are designed by Kubernetes but implemented by your network plugin. So if your network plugin doesn't support network policies, then I'm sorry, you won't be able to use this functionality.
B: Some examples of network plugins that do support this: we have Calico, we have Weave Net, we have Cilium, and we have Kube-router.
B: These are not the only four network plugins, but if you want to implement a policy, you have to confirm that your network plugin actually supports that kind of policy implementation. You also need to know that network policies are namespaced; that means they are limited to a namespace.
B: Although you can have extensions, the basic network policies are namespaced. But, as we've said, network policies are implemented by the network plugin, so Calico, for example, can choose to extend this by adding support for global network policies.
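As a rough illustration of the kind of extension being mentioned, a Calico `GlobalNetworkPolicy` (a Calico-specific CRD, not part of the core Kubernetes API) can apply cluster-wide rather than per namespace. This sketch is an assumption, not taken from the talk's slides; the policy name and selector are hypothetical:

```yaml
# Calico-specific, cluster-scoped resource (not core Kubernetes).
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-nonsystem-traffic        # hypothetical name
spec:
  # Calico selector syntax: match pods in every namespace except kube-system.
  selector: projectcalico.org/namespace != "kube-system"
  types:
    - Ingress
    - Egress
```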
B: But just know that network policies by default are namespaced, unless you opt for the customized policies offered by network plugins; those offer other enriched features for network policies. And here is a quick link to a site you can use; it's made by Cilium, and it helps you design a network policy and visualize it properly, so you can easily work with network policies. Cool.
B: Let's look at the basic definition of a network policy; we're just going to run through how to create one, which is very important. The first and most important part of a network policy is the pod selector. The pod selector determines which pods the network policy applies to; it is akin to the selector labels on services or deployments. Just as you do there, you define a pod selector that targets a specific set of pods, and the network policy is applied to those pods. Then there is the policy type.
B: We have two types of traffic: ingress and egress. If we want to target only ingress, that is, traffic coming in, we specify a policy type of ingress. If we want to target traffic going out of the pod, we specify an egress-type policy. And generally, you can define both ingress and egress policies for your network policy.
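The two parts described so far, the pod selector and the policy types, form the skeleton of every NetworkPolicy manifest. A minimal sketch (the name and labels here are placeholders, not from the talk):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy        # placeholder name
spec:
  podSelector:                # which pods this policy applies to
    matchLabels:
      app: web                # placeholder label
  policyTypes:                # which traffic directions it governs
    - Ingress
    - Egress
```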
B
The
next
thing
is,
you
need
to
specify
the
rule
that
this
policy
is
going
to
is
going
to
use
during
communication.
So,
as
you
can
see
here,
your
rules
can
have
four
four
basic
types,
so
you
can
define
an
ip
block,
so
an
ip
block
is
basically
saying
this
is
a
particular
ip.
I
want
to
limit
the
communication
from
o2
does.
If
you
are
using
network
ingress
or
egress
namespace
is
is
just
saying.
I
want
to
limit
communication
to
a
particular
namespace
or
from
a
particular
namespace.
B
This
is
very
important
like
when
you
are
deploying
isolated
environment.
So
you
you
have
a
test
environment
and
you
want
to
isolate
test
environments
from
the
demo
environment
and
they
are
all
on
the
same
cluster.
So
what
you
want
to
do
is
you
want
to
limit
traffic
that
can
that
can
enter
a
particular
port
in
the
test
or
in
the
demo
space,
so
yeah
the
import
selector,
which
are
very
granular.
So
this
sorry
this
this
specifies
the
ports
that
you
want
to
target,
particularly
like.
This
is
specific.
B
You
want
to
target
a
specific
set
of
ports
on
the
in
your
in
your
particular
cluster.
So
when
you
specify
the
spot
selector
under
egress
or
ingress
rule,
any
traffic
coming
to
from
that
port
only
will
be
accepted,
so
any
any
other
traffic
that
is
going
to
any
other
portal
coming
from
any
other
port
would
be
dropped
by
the
network
policy.
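To make these rule types concrete, here is a sketch of an ingress rule combining them; the CIDR, namespace label and pod label are hypothetical, not from the talk:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-ingress-rules     # hypothetical
spec:
  podSelector:
    matchLabels:
      app: api                    # hypothetical target pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:                # allow a specific IP range
            cidr: 10.0.0.0/16
        - namespaceSelector:      # allow pods from a labelled namespace
            matchLabels:
              env: test
        - podSelector:            # allow specific pods in this namespace
            matchLabels:
              role: frontend
```

Note that separate entries under one `from` are combined with OR: traffic matching any of the three peers is allowed.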
B: So you limit the pods it can communicate with. As a rule of thumb, when you're creating your network policies, you should start with a default deny. What that policy means is: drop all traffic in your cluster, or in that particular namespace. No traffic comes in, no traffic goes out. Once you've done this, you can build on it gradually and define granular network policies for your pods.
B: You want to do this because you don't want to miss out on any traffic. It's just like with permissions.
B: You want to give granular permissions; you don't want to give excess permissions. So by default you first drop everything, and then you add specific policy definitions to allow specific traffic. All right. I've talked about this briefly when I was talking about isolating environments: network policies are very important in that they help you isolate environments. A couple of architectures have dev and staging environments on the same cluster.
B
They
can
have
staging
like
some
can
even
have
all
three
dev
staging
and
production
on
the
same
cluster,
so
you
don't
want
a
vulnerability
or
an
attacker
to
be
able
to
access
your
production
environment
from
your
staging
environment.
That
would
be
very
weird
and
very
very
insane,
so
you
want
to
be
able
to.
You
know,
shut
down
traffic
from
dev
that
is
going
to
production
from
dev
to
staging
so
yeah.
You
use
network
policies
to
isolate
these,
this
particular
traffic
and
you
easily
help
it
helps
you
like
isolate
your
your
environments
properly.
B: That is, if you go for a multi-tenant kind of approach in planning your cluster. Okay, so we're just going to look at some basic YAMLs. I'm not sure we'll be applying them; we'll just look at how they work. So I'm just quickly going to change my screen.

A: Right, so yeah.
B: I'm just going to quickly run through this; I think we've already spent a couple of minutes trying to resolve network connectivity issues.
B: So I was just talking about the default deny policy, and we were noting that when applying network policies, you want to start with this: drop all traffic first, then build up from there.
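The default-deny policy being described on screen isn't reproduced in the transcript, but the standard pattern looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}       # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress           # no ingress rules listed, so all incoming traffic is dropped
    - Egress            # no egress rules listed, so all outgoing traffic is dropped
```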
B: From there you can go on to allowing DNS, because DNS is very important in Kubernetes, so you want to allow DNS resolution in the pod. That's the next one you go for. As you'll notice, we are only allowing an egress policy; that means the pod should only be able to send traffic to port 53. And you'll notice here that we are not specifying any destination labels or anything.
B: That is because, as I said earlier, network policies are namespaced, and the DNS pod is in another namespace entirely, so we cannot target it from here. All we can do is target port 53 and say: let us allow all traffic to port 53 to go out of this particular pod.
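The allow-DNS policy being described is a common pattern; the on-screen manifest isn't in the transcript, so this is a reconstruction along those lines:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}       # every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - ports:            # no destination selector, only a port: the DNS pod lives in another namespace
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```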
B: The next thing you do, then, is build up on this; you keep building on the existing policies. Say you have a back-end pod, like this back-end deployment. It's very basic; as you can see, it has these labels here, and the labels target this particular back end. You can now build on this and create other policies. Here we have another policy that allows back-end access to certain pods.
B: Here we are selecting any pod that has this particular label. So if a pod has this label, allow-backend-access set to true, we are going to allow it to send egress to any pod that has the tier called backend. Remember, we are dropping both ingress and egress traffic by default.
B
If
you
want
to
create
a
network
policy
you
have
to,
if
you
want
to
create
a
network
policy,
you
have
to
allow
ingress
and
you
have
to
allow
egress.
So
I
want
to
communicate
to
the
backend
board,
so
you
first
have
to
allow
me
to
be
able
to
send
traffic
to
the
backend
board.
I
have
to
also
set
the
back
end
to
be
able
to
allow
traffic
to
come
from
me.
So
this
is
where
we
are
setting
traffic
to
go
to
the
back
end
port.
B: That is, from any pod that has this label set. So this is just basically something you can do. By doing this, we've already limited who can communicate with the back end: the only pod that can communicate with the back end is a pod that has this label, allow-backend-access. You can now set this label on your front-end pod, for example.
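Based on the labels mentioned in the talk (`tier: backend` on the back end, and an opt-in `allow-backend-access` label on callers), the ingress side of the policy being described plausibly looks like this; the exact manifest isn't reproduced in the transcript, and the label keys are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-access
spec:
  podSelector:
    matchLabels:
      tier: backend                          # the back-end pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              allow-backend-access: "true"   # only pods that opt in may connect
```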
B: If we just go down to the front end real quick, this is the front-end manifest, and you will see that it defines this label, allow-backend-access, so this front-end pod should now be able to send traffic to the back-end pod.
B: So an attacker can't access the database, and even if he tries to compromise your back-end pod, the back-end pod can't send egress to any other service except the ones you've whitelisted for the back end to communicate with. So you apply these granular traffic permissions to reduce what pods can communicate with. You don't want everything to be visible to every other pod on the cluster; that is very risky in terms of security.
B: All right, cool. So that's just a quick rundown of basic policies and how you can build up on them. Very important: establish a default deny, allow DNS, and then build up from there for each individual service you are running. If you're running critical systems, it's very important to ensure that you don't allow access to every pod, or to every resource on your infrastructure.
B: So yeah, network policies are not a holy grail in this respect; they have a couple of limitations. They don't enforce TLS, they don't log requests, and they don't enforce policies on localhost. But you can look at this and build on it; there are solutions you can use. You can use Linkerd as a service mesh to enforce TLS, and the Elastic Stack to log requests on everything in your network.
B: You can check the link in the slide, and you will find other security policies that you can apply in your Kubernetes cluster. Very importantly, you want to ensure that you've integrated security policies into your CI pipelines: scan your images for vulnerabilities, scan your Kubernetes manifests so that they adhere to best practices, and occasionally scan your cluster as a whole to ensure that everything is fine and secured. And as a final keynote, please remember that security is only as good as the person implementing it.
B: So if you, the person implementing it, make your credentials available to everyone, you are definitely not going to be able to secure the infrastructure; you are basically handing attackers the key to go ahead and wreck your infrastructure for free. So yeah, you may need help, because security is a very wide thing, and it's a very important part of your infrastructure.
B: So if you need help around securing your infrastructure, you can always contact Deimos. We run security check-ups as part of our offerings: we run audits on your cluster and infrastructure, and then we'll check where we can help you secure your infrastructure better. So yeah, thank you for listening. I think that's it.
A: This is question time, questions and answers, if you have any questions.
A: Well, yeah, let's wait an extra minute, and if we don't get any questions then we'll move on. But yeah, basically, I don't have any question myself, but I'll ask: are you willing to make your slides available to the attendees?
B: Yeah, sure, I'll be making the slides available. Once I'm done now, I'll be sending them to you or to any of the hosts here.
B: Sorry, there is no link in the slide on how to get into DevOps. I think you could probably check on GitHub; I like using the awesome lists. You can check awesome-devops on GitHub, and you can get started there.
A: There, yes, let me help out; let me just share the awesome-devops link for you all. So this is the link to awesome-devops. From here, you can actually see some helpful links to some of the tools and technologies you need to know. So yeah.
A: Thanks a lot, Mmadu. I really enjoyed the session; it was really insightful, and I know for sure the attendees are also excited and got a lot from the session as well.
A
So
yeah
thanks
a
lot
madhu
and
yeah
to
see
you
some
other
time,
man.