Description
Back by popular demand on Red Hat Live Streaming, Connor Gorman joins to discuss the top security challenges facing Kubernetes today and how to address them.
Join Connor Gorman and Michael Foster to answer your Kubernetes Security questions in our monthly StackRox Community Office Hours.
A: Yes, hello and welcome. Thanks, everybody, for coming over from Christian's stream; great job on GitOps Guide to the Galaxy. Now we're going to be talking security on StackRox Office Hours. I'm joined by an engineer at StackRox, Connor Gorman. Connor, do you want to introduce yourself before I get through the agenda for the day?
B: Sure, yeah, nice to meet everyone. I'm Connor. I've worked at StackRox, and now Red Hat, for a little over four years, so roughly since the beginning of our current product cycle. I'm a principal engineer at Red Hat, working primarily on the back end, so I've been around for a while and been through the evolution of Kubernetes. I'm excited to talk about anything around security, containers, and Kubernetes.
A: Awesome. For those who don't know: obviously, if you're watching the channel, Chris Short is gone; his last day was earlier this week, so we're having a little bit of Chris Short woes. That being said, I'm Mike Foster, standing in for him on StackRox Office Hours. A couple of announcements: for the following months, we're not going to do the show on Thursday; it's going to be on Tuesday, same time.
A: Today we're going to be discussing Kubernetes security: our top 10. Connor and I put together a list of our top 10 things that we think people should be aware of, some mitigations available in the Kubernetes open-source community, and then some more advanced features and solutions that you can look at as well.
A: All right, the first one, I think, is a little bit of a layup, but we have "disable public access" as our number one, sort of an intro; I think it's kind of general. What do you think of when you say disable public access? It's kind of Kubernetes private clusters, securing the API.
B: I think one of the main things from the API server perspective is that there's surface area there, right? I mean, it's the way you control your cluster, and there have been a series of attacks, or DDoS attacks, against public API servers. Famously, people have run crypto miners through exposed kube API servers, and that's probably the least malicious way they could use one. And then there was the billion-laughs YAML denial of service that could occur. So, you know, those things.
B: Those things happen; they exist in the wild, and the Kubernetes security team does a really good job of triaging and fixing them as fast as possible. But just take your API server off the internet and a whole variety of attacks are basically mitigated. That's a little bit of peace of mind. Likewise with nodes and things like that: you can stick them into private clusters as well.
A: And some of the defaults that are installed in provider clusters, if you're unaware, might open you up to things like default nginx backends or stuff like that hanging around. If you're not sure, it's always good to keep it private before you really understand what's going on.
B: Some of the security defaults are lost in that, and so you've got to be really careful when you're actually building a production cluster. What you see a lot is that a cluster you were playing around with, doing a POC or something like that, starts running more and more critical workloads, and over time it's become a production or semi-production cluster. So now you've got to take a step back and make sure you're applying all the proper security controls.
A: Yeah, those baselines aren't necessarily properly there. It might work, but you really haven't operationalized it. That actually leads into the next point, which is implementing least-permissive role-based access control. I think we can assume that everybody has RBAC set up on their clusters now; this wasn't really an assumption we could make three years ago, but it's safe to say now. Any thoughts on how to go about that best practice? Service accounts, things like that?
B: Let's make sure you understand it: do an audit of all your roles and role bindings, and of who's accessing your cluster. There are a couple of ways to do this. Within our product, we can show you basically all the access that you have, so you can look at a particular role and say: hey, are you a cluster owner? Do you have full access?
B: There's also a tool that Jordan Liggitt wrote, audit2rbac, which will parse audit logs and tell you what access people have and what they've been using. So there's some cool stuff out there for parsing audit logs and breaking that down. Just make sure you're always auditing.
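As a minimal sketch of that audit-log parsing idea: the field names `user.username` and `verb` follow the Kubernetes audit event schema, but the sample events below are hypothetical, just to show the shape.

```python
import json
from collections import Counter

def verb_counts_by_user(audit_lines):
    """Count API verbs per user from Kubernetes audit log JSON lines."""
    counts = Counter()
    for line in audit_lines:
        event = json.loads(line)
        user = event.get("user", {}).get("username", "<unknown>")
        verb = event.get("verb", "<none>")
        counts[(user, verb)] += 1
    return counts

# Hypothetical audit events for illustration:
sample = [
    '{"user": {"username": "alice"}, "verb": "get"}',
    '{"user": {"username": "alice"}, "verb": "get"}',
    '{"user": {"username": "bob"}, "verb": "delete"}',
]
counts = verb_counts_by_user(sample)
```

From a summary like this you can see which granted permissions are actually exercised, which is the input you need to trim roles down.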
B: And then one of the best approaches is to start with minimal access and slowly add it as people need it. Then, of course, the reality of running a production cluster is that sometimes you need break-glass permissions, so build that process in, where you have an audit cycle: okay, I need cluster-admin for XYZ; okay, we got an approval, or I write down a reason why, and that can then grant me the access.
B: Yeah, exactly. I think it's a general theme with Kubernetes and security: there's always this balance. You have folks who are managing and administrating the cluster, and then you have people running applications, and there's going to be some level of debugging they need to do on the cluster. So how do you grant that access, and how do you make sure you're still providing the workflows for actually running the applications versus having a really hardened cluster?
A: I completely agree; makes sense. And speaking of applications: container images. Now, not necessarily Kubernetes-specific, since Kubernetes is the orchestration tool, but obviously our main workloads are running in containers, so: managing vulnerabilities and providing a safe image. I assume, when we talk about number three, we mean the base image as well as your own application.
B: Things are found over time, right, and so your images naturally drift toward less secure, just based on the number of vulnerabilities found. So constantly rebuild your base images. All the distributions do a really nice job of constantly updating and fixing critical vulnerabilities, so you benefit just by constantly rebuilding your images.
B: Yeah, I think you have to look at it like this: Kubernetes gives you a lot of context about where things are running and how they're running. When you're looking at vulnerabilities, a lot of them are not fixable, and they can vary in criticality. But I always like to look at things that you can actually do: what can you actually fix? So say we've got a lot of fixable vulnerabilities with new patch versions.
B: Those are the no-brainers, in my opinion: let's go rebuild those images, let's go redeploy them. Hopefully your services are really robust and you can constantly roll them out and do CD. I know that's not always the case, so you have to be cognizant of that.
B: Constantly update those, and then also, if you can, reduce the number of packages you have; that just reduces your surface area. So if you have a bunch of extraneous packages in your images, try to reduce those, and that gives you a slimmer workload in terms of vulnerability management.
A: Yeah. And once those containers get into the cluster, that moves into the fourth point: managing secrets, environment variables, config maps, injecting various secrets and variables into your containers. What are your thoughts on securing that process? Because I always had an issue, especially once you get into teams: how do you manage secrets? How do you do it when there really was no encryption in Kubernetes three or four years ago, at the beginning? Was it just base64 there for a while?
B: Yeah, exactly. So, going back to our RBAC point, that's the first step. If you're using Kubernetes secrets, the first and foremost thing to audit is who has access to actually reading or listing the secrets (they're effectively the same thing), because when you do that, you get them back base64-encoded; they aren't encrypted.
B: You can encrypt them at rest through KMS and other methods, but when I get the YAML of a secret, I'll be able to read the secret, right? So: who has the ability to do that? Some folks don't even let reads on secrets ever happen; you can only write secrets, and if you want to roll them, you just rewrite over the top.
B: That's a good workflow for GitOps or different kinds of CD systems; a secret store, for example, is a good example of that. But yeah, secrets are one of the biggest challenges that still exist within the ecosystem, and there hasn't been a perfect solution for any of it.
A: Yeah, and I think when ConfigMaps became immutable as well, there's the option for that aspect. Some people don't understand why ConfigMaps would be immutable, but if you're injecting environment variables into your container, you kind of want: hey, we're going to roll out the next version of this container, so we should probably make sure the config map that's used to set up this container is versioned with it. Little things like that I found really interesting.
B: Yeah, one thing I've seen is that people will take some aspect of the config map, hash it, and then put that as a label on the pod. Then, if the config map changes, you can see which version of the config map should be running, because Kubernetes hot-swaps it underneath the mount.
B: So you can be running a container and the config map could change, and you're in this weird state where it takes about a minute to propagate, and you're not sure which config map is there, or whether your service actually hot-reloads the configs. In some ways, just doing a rollout of the pods is an easier way to guarantee that this config map is going to be mounted, because when you roll out the new pod, you will always get the most recent config map attached. That's a way some people mitigate that uncertainty.
A: Yeah, and especially if you have any sort of CD process with testing set up, you can always roll back if your config map screws up, so you're not sitting there in limbo. Yeah, exactly.
B: That's one of the benefits of something like Helm, for example: everything rolls out in lockstep. Our operators in the OpenShift ecosystem are a similar concept: you know exactly what you're getting, all together, at one time. So that's a big benefit of the bundled approach to deploying your service.
A: Now we're at number six. And by the way, for anybody watching: feel free to put in your questions, comments, things that are bugging you about security. Even if you're like, "oh, security sucks, I like to be wide open," that's fine too, but we'd love to hear your feedback as we get into number six, which I've found is a sneaky security aspect that is often overlooked: resource limits.
B: So yeah, this one is actually a little bit contentious, too. Resource limits, depending on what service you're running, really matter. One aspect is that they help provide the segmentation you want between different services: you don't want one service that's poorly written, or has a fork bomb or whatever, to suddenly fill up a node and start getting things killed. I mean, the eviction process is fairly good.
B: It will try to kill first the things that are overutilizing their resources and evict them, but it's not a foolproof process by any means. So I always think setting resource limits is good, especially in a multi-team environment, where a lot of the time people will look at resource limits and say:
B: this is your quota; this is how much of the cluster you're using. You can even use internal billing to say: here's how much you're costing us, because these are the resources you're requiring. That puts some pressure back on the DevOps team, and the team itself, to optimize their code. Sometimes, without that pressure of needing to set resource limits and constraints,
B: there isn't as much reason to optimize your code, because there's no direct impact on you; it's kind of left up to the infrastructure team, and that's a challenge for any infrastructure team. Another thing with resource limits (and I think security and availability go hand in hand a lot): resource limits, and having them be as low as possible, help you schedule things in your cluster. You can schedule things more readily if you have proper resource limits and constraints set.
A: Yeah, I think especially for stateful workloads: if you're setting very strict limits on your stateful workloads but not on your other ones, the others get the namespace default, but they also get evicted first, because they don't have something supplied by the developer or the administrator. So there's a little bit of assumption there. And there are some pretty cool features in Kubernetes where you can set it by namespace, too.
A: So as an administrator, you can say: hey, in this namespace you're only allowed this much within it. It's definitely useful, especially in multi-team clusters. I'm curious what your thoughts are; I've seen a lot of, let's say, micro-sharding of clusters.
B: Yeah, so on resource limits for those: there's a construct called LimitRange, which will automatically give a pod that has no resources set some basic resource defaults. But actually, from a security perspective, I do like the smaller clusters, though you have to mitigate the challenge of "where are my clusters?", which is something that happens.
B: You have 100 teams, now you have 100 clusters, and so you're always trying to find that balance as an organization: how many clusters do you have versus running one large cluster? There's always going to be operational overhead and some cost to running each cluster, but you can really get the natural segmentation you're looking for between teams just by having separate clusters. Team A isn't bleeding their service into team B, because they're literally segmented.
B: Right, exactly. And you can allow each team to scale up their clusters, but you want to make sure you have unified tooling deployed in every cluster. That's the infrastructure side: if you want to go that way, you have to be able to say (speaking from personal experience with ACS)
B: that you have our secured-cluster components in every single cluster that you deploy. We've had a bunch of customers who have built that into their ops cycle, so when they launch a new cluster (some run up to 300 or so clusters), every single cluster gets these components, registers itself, and is actively secured through security policy. You can really scale that out broadly as your organization grows.
A: Yeah, so clusters that get set up don't fall through the cracks, if you're using something like ACM for overall policy. One of the coolest things I've seen was, through ACM: say you want to set up a developer-specific cluster.
A: You can have the Sensor set up for ACS, but with no admission controller. So it's like: hey, a developer started up a cluster; okay, there it is, I can see what they're doing, but there's no enforcement. With things like that, you can mitigate stuff before it gets out of hand.
B: Yeah, exactly. And in that way, too, you can see: oh hey, this person created a load balancer in this cluster. We have a lot of insight into the overall context of what's going on in each cluster, so you could see: oh hey, they exposed this service over the internet; that's not within our security policy, or we haven't
B: verified that it's gone through a security review. There are a lot of aspects like that: platforms make it really easy to use this stuff, expose it over the internet, make it publicly available, and maybe that's not exactly what you want. Maybe they didn't realize the risk in doing that, in terms of "hey, yeah,
B: I was just testing this from my local laptop." So make sure the proper security controls are there for each individual cluster and for the services being publicly exposed. The worst situation is to have a prod service you don't know about sitting out there on the internet.
A: Good thing we kind of got into that with clusters. Number seven: we have segregate sensitive workloads. There are a couple of points here. We already talked about secrets a little bit, but also implementing and monitoring traffic, setting baselines, and separating through Kubernetes-native tools like namespaces, right?
B: One: don't use the default namespace. There are actually some admission controllers and such you can use that will say: don't let anything be deployed into the default namespace. Typically what happens is just that someone ran kubectl and something showed up there, so you really want to make sure everything's in a proper namespace.
B: Namespaces are super useful for some of the reasons we already outlined, around resourcing, for example, and seeing the resource consumption of an individual namespace. But they're also a really easy barrier for saying this team should not be able to talk to that team, or, if you have a multi-tenant environment with different customers: customer A should not be able to talk to customer B, and how do I verify that? Actually, I might jump over to the demo real quick, Foster.
B: Cool, sure. So this is our network graph; we're showing live traffic between different namespaces. And this is what's pretty interesting: you want to create network policies that deny between namespaces, but there might be holes you want to poke for things like monitoring, say if you have a Prometheus operator running in your cluster.
B: You want the monitoring to happen against your services, and so what you really want to do is break it down and actually verify, even from an audit perspective, that customer A is not talking to customer B. So StackRox should not be talking to any of these other back ends; they have nothing to do with it. This is at least a visual verification that you can do, and then there are a lot of capabilities within the product, which I probably won't go into right now, around simulating network policies.
B: The YAMLs are pretty difficult to write sometimes, so you really want to look at them and say: okay, is this doing what I think it's doing? Is this actually going to block an active flow? Is this something that's actually being used? For example, this Sensor is in a remote cluster, so there is no centralized component here; you can see it reaching out to Google here at the bottom. That's because Central is in a different GKE cluster.
B: Everything can talk to this one, and depending on the service, you have different levels of connectivity and ingress or egress. So, for example, this is our Collector: an agent that runs on every node. It doesn't run a web server; it needs absolutely no ingress. So this is saying we'll allow zero ingress flows, based on the network policy we wrote, and, based on the overall environment, 16 egress flows will be allowed.
A
Very
cool
yeah,
the
visualization
aspect
of
network
policies.
I
don't
think
that
can
be
understated.
That's
a
huge
time.
Saver,
not
working
through
all
those
yaml
files,
and
one
of
my
favorite
aspects
of
moving
development
to
test
clusters
was
to
like
just
basically
flick
on
and
on
and
off
network
policies.
B: Yeah, some of the things I would do here: maybe you have a default template for a namespace, and you say, okay, from the prometheus-operator namespace we'll allow all connections, but then deny all the other ones. Then every namespace that you stamp out will have that default deny, and if you need to poke holes later, or someone's got a real use case, you can evaluate those at the time. If you can start this way,
B
This
is
definitely
the
way
to
go.
Now.
It's
very
challenging
if
you've
been
using
kubernetes
for
two
years
and
you're
running
a
bunch
of
production
services.
How
do
you
start
applying
this
to
a
cluster?
That's
already
running
without
breaking
a
lot
of
things
and
that's
where
looking
at
the
active
traffic
and
crafting
network
policies
that
you
know
align
with
that
active
traffic
is
really
useful
for
folks
who
already
have
clusters
up
and
running,
but
they
haven't
gotten
to
the
network
segmentation
portion.
A: Yeah, the observability without enforcement, especially in a security tool, is extremely useful. You don't want to be shutting down anything that you haven't verified as normal traffic.
B: Exactly, yeah. And when it comes to security in configuration especially, I think it's always: first get visibility into something like network traffic; then you can start sending violations about things that deviate from that traffic. That happens over time; we don't have a perfect view of the world.
B: You could run Postgres, and we can look at all the processes that run, but then someone runs a backup, and maybe the backup works by exec'ing in and running some command. We don't want to kill Postgres because of that. So you want to see that first; you want to say: okay, highlight anything that's outside of that band. Oh, actually, this is a backup; good thing
B: we didn't turn on enforcement right away. Then, once you're very certain that you've seen everything you're looking for, feel free to turn on enforcement, or violations that go to a higher-level notifier. Getting emailed about something is not like being paged at 2 a.m., and so you have all these different tiers of how you work through a security program.
B: Yeah, so you can annotate your pods with the specific destinations you'd like to route through, so you can say: okay, send this to this Slack webhook, and that would be done on a pod basis or a deployment basis. The same thing with email: a lot of the time people will have an owner field, like "here's the team
B
You
should
email
about
some
of
these
issues
and
then,
of
course
you
have,
you
know,
levels
of
when
you'd
want
to
send
these
notifications
and
how
serious
and
the
relative
severity
of
them
right,
and
so
you
want
to
separate
from
there.
So
you
know
someone
using
latest
tag.
Okay,
maybe
that's
a
very
low
severity
and
then
you
know
someone
with
a
really
critical
vulnerability.
That's
fixable,
hey,
like
someone
needs
to
jump
on
this.
A
Pretty
cool,
that's
awesome,
yeah!
There's
honestly,
I
feel
like
kubernetes,
also
favors
the
meticulous
right
in
that
aspect,
you
have
some
applications
in
a
namespace
everything's,
going
to
revolve
around
those
labels
and
annotations
and
how
you
separate
your
workloads.
So
I
always
just
tell
people
getting
that
that
first,
the
namespace
setup
for
your
application
is
that
core
base
that
you're
going
to
build
all
your
policies
off
of
right.
B: Yeah, exactly. Some namespaces are just naturally more sensitive than others. If you have a namespace for the payments team, that's going to be much more stringent from a security perspective than, I don't know, a machine-learning namespace that's never on the internet,
B
Right
never
exposed,
has
no
load
balancer
and
so
there's
kind
of
a
large
variety
of
different
constraints
that
you
have
and
name
spaces
that
allow
you
to
you
know
also
within
our
platform
say:
oh
yeah,
these
name
spaces
have
these
rules
even
this
cluster
and
these
name
spaces
have
these
rules
and
these
other
namespaces
don't
have
these
rules
or
or
you
know,
here's
what
here's?
What
we
care
about?
Here's
what
we
care
about.
So
you
can
kind
of
separate
that
by
by
team
and
namespace.
A
Yeah,
and
if
you
do
it
right,
you
don't
have
the
teams
bickering
with
each
other
over
resources.
That's
I
think,
that's
why
we
got
to
sharding
of
clusters.
It's
just
hey!
Here's
your
cluster,
here's,
your
cluster,
but
then
you
get
cluster
sprawl
so
bringing
that
all
together
is
interesting,
speaking
of
which
audit
logging
is
coming
up
next,
enabling
audit
logging,
I
think,
by
default.
Most
clusters
nowadays
have
and
most
cloud
services
have
some
sort
of
auto
logging
functionality.
B: Right, yeah. I think anything that's directly interacting with pods, especially those execs or port-forwards, are things that I would be concerned about. And then, typically, if you have an RBAC setup, you don't want a lot of people running kubectl commands against your cluster. When you're really at the production level, you don't want systems or people directly interacting with your cluster
B
Unless
you
have
to
right,
and
so
that's
the
stuff
you
want
to
log
and
that's
the
stuff
you
want
to
look
out
for
and
like
you
know,
we
recognize
this
for
sure,
because
you
know
port
forward
yeah,
you
have
a
direct
local
host
to
a
server
right,
potentially
bypassing
different
things
or
different,
like
firewalls
or
load
balancers,
and
then
likewise
with
execs
right
you're
in
the
pod.
B
Like
you
know,
you
could
you
could
have
as
much
security
as
you
want,
but
if
someone's
inside
your
container-
and
you
know,
can
cat
your
database
files
or
like
look
at
your
database
files,
you
know
there's
limited
limited
things.
You
can
do
from
that
perspective,
and
so
actually,
if
you
want
to
flip
back
real
quick,
I
actually
really
love
this
feature
that
we
ended
up
building
and
we
pipe
it
through
actually
the
niche
controller.
B
So
so
you
can
kind
of
default,
deny
them
as
well
I'll
go
to
the
violations
here,
I'm
on
my
terminal,
but
I'm
just
port
forwarding
to
or
sorry
exactly
into
one
of
the
stack
rocks
pods
on
this
cluster
nice.
B
Didn't
really
do
too
much
kind
of
automatically
shifted
there,
but
right
so
I'll
exact
into
the
sensor
pod
in,
like
all
live
demos
and
slightly
stressful.
Oh.
B: What you can do is trace these breadcrumbs. If we go to the risk page and look at, for example, the deployment, which is Sensor in this case (there are two of them), we can see that this one actually found an anomalous process. I go to process discovery here, and it's sh: I just had to open a /bin/sh there. Let's see all I can do here; let me reload this real quick and see if it picked up all the other stuff I was doing. Yep.
B
So
I
ran
ls
right
so
right.
You
can
kind
of
like
there's
some
breadcrumbs
to
follow
here
in
terms
of
what
processes
were
run
and
this
red
indicates
that
we
found
that
it's
anomalous
that
we've
basically
been
running
this
container
for
quite
a
while,
or
this
pod
for
quite
a
while,
and
we've
only
ever
seen
this
kubernetes
sensor,
it's
a
go
binary.
It's
very
easy
to
tell
that.
This
is
the
only
thing
that
should
be
running,
and
then
these
are
the
other
processes
that
came.
You
know
after
we
baselined
it.
B
We're
like
hey
these
are
these
are
new.
These
are
strange
now,
sometimes
this
happens,
or
things
run
periodically
and
you
can
just
add
them
to
the
baseline
and
they
won't
show
up
as
anomalous
anymore
in
this
case
I'll
leave
them,
because
they
are
anomalous,
because
I
exactly
so,
you
can
kind
of
immediately
see
that
and
then
we
also
have
the
ability
for
everything
that
you
can
highlight
through
alerts.
You
can
also
reject
them
through
the
admission
controller.
So
if
I
change
that
policy,
I
could
make
it
reject
my
exec.
B
So,
from
an
emission
controller
perspective,
you
can
like
you're
just
blocked,
like
you,
just
get
a
report
back
to
kubeco
that
your
action
got
blocked.
But
if
you
have
other
runtime
data
around
processes,
for
example,
that's
actually
running
right-
the
pod
may
be
already
running,
and
then
you
see
some
anomalous
processes.
B
There
is
an
option
on
the
enforcement
to
kill
the
pod
and
we'll
just
kill
that
through
kubernetes
and
say
hey.
We
saw
something
strange:
let's
just
kill
that
pod
and
you
can
see
if
it
happens
again
again
always
be
very
careful
with
this
right.
You
don't
want
somebody
to
exec
into
a
container
to
do
some
debugging
or
something
and
just
you
know,
we'd
immediately,
kill,
kill
that
pod
and
depending
on
it,
maybe
a
little
bit
more
sensitive
than
others,
and
so
those
are
some
of
the
concerns
just
around
that
strange
enforcement.
A: And having the context of who's doing it is obviously a huge factor in that decision. But even just blocking the action alone: and then, if you're able to delete a pod, if you do have a highly available deployment, deleting a specific pod, or enforcing that rule, should be okay. You obviously don't want to set that as the standard, but you hope people have set up their deployments correctly, so deleting one pod has no massive impact.
A
It's
a
little
bit
different
from
like
wiping
a
whole
workload
for
a
security
issue
right.
B: Nothing new hits your cluster at that point, because it's not currently running; it's just staying in the same state that it's in. When we run the admission controller on updates, for example, that one's a little bit scarier, because you could be rolling out critical updates or an image update, but now you're violating some policy, and that could be rejected at that point. You're still staying in the same state, but it's a little bit more risky from that perspective.
A
Definitely
yeah
we
gotta
kevin
I'll
I'll
post
all
10
in
the
chat
at
the
end,
where
we're
just
getting
on
to
to
nine
so
two
minutes
and
I'll
just
copy
and
paste
the
whole
list
that
we
made.
A
We
you
kind
of
mentioned
it
touched
on
a
little
bit,
but
that
providing
a
secure
software
development
process
now
this
might
be
the
most
vague
out
of
all
of
them.
I
think
because
everybody
has
a
different
software
development
process,
real
quick
top
things
in
a
container
build
to
deployment
in
kubernetes.
What
are
your
thoughts.
B
Yeah, so let's start with the source code, right? You always have to have some sort of source code scanning, linters - just general hygiene around your code. The earlier you can find something, the better.
B
Then you take that code and you put it in an image, and now you still want the same sort of hygiene around that image. Make sure you're scanning your base image and things are up to date. The number of times I've seen something like alpine 3.x in someone's environment, and you're like, man, I have to go back and look at how old that is.
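Keeping the base image current, as described above, mostly comes down to pinning it explicitly and rebuilding often. A minimal Dockerfile sketch - the base tag, paths, and UID here are illustrative, not anything from the stream:

```dockerfile
# Pin the base to a specific, current tag (better yet, a digest) so that
# scans and rebuilds are reproducible and updates are a deliberate bump.
FROM alpine:3.19
RUN apk add --no-cache ca-certificates
COPY app /usr/local/bin/app
USER 1001                      # don't default to root
ENTRYPOINT ["/usr/local/bin/app"]
```

A floating tag like `alpine:latest` would silently change underneath you; a pinned tag makes "how old is this?" answerable from the Dockerfile itself.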
B
Those are things that you want to adjust really quickly. Make sure you're scanning your actual code too - Java applications and everything. And then when you're crafting your deployment YAMLs, it's: what does my service actually need, and how do I validate that? So, does this thing need to run as privileged? Do I need these capabilities? Do I need NET_RAW, which is a Linux capability that's available by default but that almost no one needs - transparent proxying, I think, is what it allows you to do, so there are very few situations where it's actually necessary. And then setting your users and UIDs and GIDs and all of that. That's kind of the nuts and bolts of how do I actually run this service, and making sure that you're not running everything as root is a really good place to start.
A
Let me know what's going on in your process, and if you don't know, you have a month or two to get back to me on it, right? For teams that are just starting, I think there needs to be a little bit of communication there about what's expected. Thanks for bringing that up, because I actually skipped number three - I started talking about secrets when we should have been talking about security contexts.
A
I think that was probably the biggest thing, especially with pod security policies getting deprecated. You touched on it: Linux capabilities, UIDs, GIDs, dropping privileges. What else for security contexts should we get into?
B
SELinux - if you can use those, then that's great. I know those can provide their own challenges as well, but those are kind of the places to start from the security context perspective. I think the user one is just big - that just moves you out of root immediately. One of the ones that's maybe not thought of as part of the security context itself, but that I actually really like, is the read-only root filesystem, because of a lot of general attacks.
B
If you look at Metasploit, for example, a lot of the attacks go to /tmp, because that has traditionally been writable on every filesystem - just dump everything in there. And there are a lot of applications that assume that /tmp is writable. But, for example, we're running a sensor - it's a Go binary.
B
It doesn't write anything, right? So, okay, how can I utilize that fact to reduce the attack surface for anyone running an attack against our container? I set the entire filesystem read-only, and if you try to drop a payload, you just get rejected. And so that is a huge way you can reduce your attack surface. It can be complicated to run, I mean -
B
Sometimes you write lock files or other things out, but if you can analyze that and put a volume at those locations through Kubernetes - Docker volumes don't map to this exactly - you can create a volume, just a temp volume in Kubernetes, and put it at that spot, and the rest of your filesystem can be read-only. Even that provides a lot of protection from that standpoint.
A
Nice, yeah. And you mentioned SELinux - I've found one of the better workflows tends to be: let's say your security team handles SELinux. If you can hand off the deployment file with the list of permissions, user IDs, and capabilities you need, you can let the security team figure out a lot of those aspects for SELinux on their side.
B
So this kind of bleeds into four and ten, right? Four is pod security policies, and ten is configuring admission controllers. As pod security policies get deprecated, I think they are basically going to be replaced with admission controllers.
B
Right, so as the things that you could use pod security policies for go away - you know, our platform has an admission controller, and in the community you have things like OPA Gatekeeper and Kyverno, different things like that, to be able to provide some of this context around security and provide it in a generic way.
B
So I think those are kind of the main ways that pod security policies will be replaced. What's also nice is that admission controllers are a little bit more generic as well: pod security policies are pretty tightly confined to very specific aspects of your pod, and admission controllers can check a large variety of things around configuration.
A
Yeah - sorry to anybody watching, the copy-paste into Restream is not great, so that's just a big word soup right there, but I'll try to format it a little bit. I understand why PSP deprecation is a conversation: we're really still talking about security contexts. It's just, how do we wrap a policy around them so we can actually operationalize that, right?
A
So it starts with just making sure that the security contexts are working for each application and the teams are caring about that, and that makes the application of policy - whether it be PSPs, admission control, OPA, or Kyverno - so much easier, right? They're just layers of abstraction built on top to help you scale.
B
Yeah, exactly. It's just a nice method of enforcing that, hey, you're using the right things, or that you're not running these things as root - or at least you have an exception, or we can audit what's going into the cluster. Because I think one thing that's happening with Kubernetes in general is that there are a lot of deployment responsibilities now on developers and developer teams. And so, in a lot of ways -
B
I view it as: application teams' responsibility is to write and ship applications, and of course those should be secure, but that might not be their forte or their area of expertise. And so you kind of need these guardrails to say, hey, this is how you should deploy this, or, you probably shouldn't run this as root - is there a good reason? And a lot of times they'll be like, oh no, we can run it as any other user - just let me know how to do it and we'll run it.
A
Yeah, and that conversation works a lot better than, hey, we've had no policy, let's implement these policies across all our clusters really quickly without any information about your deployments. That'll create a lot of friction. So yeah, 100 percent. That's our top ten! I will try to post a better comment in Restream for anybody watching. Kevin, hopefully that helped you with the notes - probably made more work for you, but there's the top ten.
A
We did have a couple of hot topics. What's next? KubeCon was last week - for those who missed it, all the videos are going to be available online, I think in the coming weeks. They did a great job with the hybrid setup, which is definitely extremely challenging. I liked it at home because, instead of setting a schedule where you have to go to all the events, you could just binge-watch a bunch. But you wanted to talk about -
B
Yeah, I think that's super true. I think it's a really cool tool, and I think that it definitely has good security implications - verifying things that are going into your registry and being able to attest where they came from.
B
Now, this sort of thing happened with SolarWinds, which I think is a big reason why a lot of these conversations have come to the forefront. But if the supply chain itself is compromised, then you could still sign binaries and things that have issues, right? So it doesn't fix all of the issues, but it definitely addresses malicious images in your registry that didn't come from where you think they came from.
B
Security is just a multi-layer process, right? And there are a bunch of good tools and products out there that help with all of that. No one product is your silver bullet, and so, like you said, it's an arsenal: you have a bunch of these things layered together and you can utilize the best of each. So I'm a pretty big proponent of that.
A
Yeah. In terms of operationalizing, one of the best things is that the security team can just implement it: most likely your developers have already set up a container build process, and you basically just need to add on to that to sign and verify, and you can build it into your existing process without a lot of, let's say, friction between teams, because that existing CI system is already there, right?
B
Right, yeah, I agree with that too. It should be fairly easy to go implement these things. And the final part is the actual verification process, which is interesting as well: when you go deploy these things, verify that this is what you think it is - that the image you're actually pulling is, in fact, signed.
B
I mean, I think it's another thing to watch, right? If someone's launching an ephemeral container, that's something that's kind of out of the norm - I expect it not to be launched all the time. From a security perspective, it's actually a plus, because you can build images that are way slimmer. A lot of times you'd build an image with maybe some network tooling in it, just in case you needed to do DNS lookups to ensure that something is working, or to figure out why this namespace can't talk to that namespace. I think network policy -
B
Debugging network policies is a really hot topic there, where these two things can't talk to each other and you're trying to figure out why. You really want to run a curl between these two services and ask what's going on, but you may not want curl in your main image - the main application that you're running - because that can be used by an attacker, for example, to download a payload. So with an ephemeral container, you have all your debugging tools set up there, and the whole point is that it's ephemeral.
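Ephemeral containers are attached to a running pod (for example via `kubectl debug -it <pod> --image=<debug-image> --target=<container>`); under the hood the API adds something like the fragment below to the pod spec. The image and container names here are placeholders:

```yaml
# Simplified view of what `kubectl debug` adds to a running pod.
spec:
  ephemeralContainers:
    - name: debugger
      image: registry.example.com/debug-tools:latest  # curl, dig, etc. live here
      stdin: true
      tty: true
      targetContainerName: app   # attach alongside the app container
```

The debug tooling lives in this throwaway container, so the application image itself can stay slim.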
A
From a development standpoint, it seems to be the most applicable: you can tell all the developers on your team, hey, here's an image with every single tool that all of these teams need - you can all use it to check your applications, just run this one thing - and you take everything else out of your image that you would normally carry around.
B
I would say so - because debugging things in production is always difficult, and this at least gives you a way that application teams can do it.
A
Very true. All right, last topic: validating true security threats and metrics. We touched on this a little bit with vulnerabilities - vulnerabilities never really go away; how do you triage them? Another aspect is: how do you validate that the security choices you're making are helpful to your overall posture?
B
Yeah, I think that's a really good question. I don't have all of the answers, unfortunately, but I think a lot of the stuff around configuration - really trying to scope it down as much as possible - is really useful. I mean, there are always some edge cases here and there where things need special privileges.
B
I think when you think about measuring risk in a Kubernetes environment, you have a lot more data, and you can start utilizing some of that data to really influence your decisions. So you have things like: okay, you have a vulnerability and you have the vectors for it. If it has a network attack vector, for example - is it exposed on a load balancer? Do you have network policies set up in that namespace to deny access to it?
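A default-deny ingress policy is the usual starting point for the "do you have network policies set up in that namespace" question. A minimal sketch - the namespace name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-ns        # placeholder namespace
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all inbound traffic is denied
```

With this in place, each workload needs an explicit allow rule before anything can reach it, which directly shrinks the reachability of any network-vector vulnerability.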
B
How do I quantify what really has a lot of risk? If something is a severe issue that's exposed over the internet, it's like, yeah, I want to go look at this, and I want to try to address it as fast as possible. Another aspect - something that we just shipped; it's not in this demo, unfortunately - is around the concept of active vulnerabilities.
B
You can see what is actually running: is this vulnerability potentially exploitable because there's a process that's using this library, for example? That helps you provide even more context around the vulnerability and its potential to be exploited. Is it in a process that's running? In that case, I'm like, hey, wow, we should go look.
B
Right, exactly. And sometimes you wonder, because there are different things in the base image that you never use, or may not know you're using, or honestly just never use. Sometimes they come with Python, for example - we write Go applications; we never call anything in Python. So how risky is this Python vulnerability if we never call Python, ever? And so you can drop the risk of some vulnerabilities and raise up the other ones based on how active they are.
B
Right, so this is kind of the value of having a whole runtime component in our product as well: you can look at what you're actually doing. If you think about network flows, for example, it's like, okay, how much network flow traffic have I actually stopped? How many violations around networking have I actually created that are of value?
B
One thing that we could look at in the future is around file writes and whether or not you could make things a read-only filesystem - influencing configuration based on what's actually running - because I think the concept of knowing everything before you deploy an application is just a pretty challenging situation, especially if you're deploying something off the shelf. I mean, nginx is always the container that I use - you know, kubectl run nginx.
B
I still can't tell you all the processes that run as a part of that, or what files are written, or what network connectivity is used, or what port is opened by default.
A
Think
the
fact
that
ranchers
no
release
they
they
dropped
the
nginx
default
backend.
I
was
like
yeah
it's
the
defaults.
I
think
that
as
kubernetes
gets
mature,
we're
starting
to
realize
what
defaults
are
useful,
but
that's
that's
just
that's
a
little
funny.
A
Yeah, it is, and for good reason - it seems to always work, right? That was always the huge benefit when starting with Kubernetes: if I take this Helm chart and it works the first time, I'm going to stick with it for a while until somebody tells me not to. But yeah, that's awesome. Anything else that you're looking forward to - security-space changes in Kubernetes?
B
Not too much. I mean, I think there's always a lot of interesting movement in the community, and I'm curious to see what the next round brings - they're not RFEs, but the improvements, the KEPs, I think, the Kubernetes Enhancement Proposals. I'm really curious to see what people have there and what people want to move forward, especially as Kubernetes scales and needs to get more scalable, and there are larger clusters being created. There are always going to be interesting use cases and stuff that pop out of that.
A
Nice, yeah. If anybody checking us out wants to talk about a use case, or wants to tell us to test something out, you can ping us: go to the stackrox.io community, and there are some emails, a bunch of docs, and some communication channels. Shoot me an email and I'd be happy to chat about it on future streams.
A
Nice. Anybody else watching, have a good - I guess I can say weekend, right? It's Thursday, you can say good weekend. Hopefully.