From YouTube: 052820 StackRox Workshop | Using Built In Kube Controls
Description
Kubernetes’ default settings do not enable security. Fortunately, Kubernetes has a ton of built-in controls you can tap to improve your security posture. In this hands-on workshop, we will show you how to:
- Set up read-only root file systems, non-root users, and limited capabilities
- Define network policies, admission control, and resource limits to enhance security
- Use Kubernetes namespacing and metadata to limit exposure
- Apply overall infrastructure security best practices
For folks who may not be familiar with everything we're talking about, we'll first cover the typical security posture of Kubernetes clusters and apps; that's sort of background material. Then we'll spend most of our time today doing exercises on how specific security controls on your workloads and in your cluster work, live in a cluster that you can use. And in conclusion, we'll talk about how to adopt those controls while keeping the peace and not losing any friends.
So that's the outline. Before we start, let's get to know each other. I've given this workshop live before, and unfortunately we can't do that these days, but let's get to know each other a little bit. First, me: I'm Connor, I am a product manager, I work at StackRox, and I used to be an engineer, so that's kind of my new lens on the world. In terms of Kubernetes, I sort of stumbled into that in 2015.
At one company we figured we should bring it in, and after that I moved to StackRox to start building container and Kubernetes security products. Now, I'd like to know something about everyone who's taking their time to show up, so Debbie has some poll questions that she can put out. First, just sort of what best describes your role. Mostly looking for: are you primarily a developer? Are you primarily working in security?
Are you working in operations, or in that kind of consultant or vendor role? That's helpful for us to know before we continue.
Okay, so it looks like we have a primarily security-oriented audience, but a good mix of folks between, you know, building applications, securing them, operating clusters, consulting with others, and being a vendor. So thanks for answering that question. Now we can move on to the next question, which is more about Kubernetes: do you use Kubernetes at work?
This will help me tune some of the explanation as we continue.
Okay, so it looks like we have mostly yeses, and then a good, even mix of using Kubernetes every so often, sometimes, and most of the time. Thanks for sharing that. Now we can move on to the next question.
It's sort of a different spin: what's your personal level of familiarity with Kubernetes, separate from work? If you maybe know what a kubectl command does, or if you've deployed an app once or twice, that kind of thing will help us make sure that everyone can stay on the same page.
Okay, nice. I'm glad to see that we have a good mix, similar to the previous question. Just two more to finish up. This one is more about your organization: you might use Kubernetes a lot, but your organization might not, so we're just looking for how much your organization uses Kubernetes. This one, I think, is a multiple select, I'm not sure, but it can help us understand.
Okay, and I see that a lot of organizations are learning and some are in prod, which makes sense. If you're in prod, you are a little bit, you know, forward-leaning, and that's always fun to see. All right, we can move on to the last question and then we'll get started.
This final one will help me understand which security angles would be most interesting to folks here: the kinds of things that you're most interested in from a Kubernetes security perspective, for your organization.
And with this one, we'll be able to close it as soon as we have enough responses as well, before we begin the building blocks and get into the exercises.
Great, and it looks like we have a lot of folks interested in vulnerability management; you'll see a little bit of that in the first few exercises. In terms of configuration management, network segmentation, and runtime-detection kind of stuff, you'll see a little bit more toward the end. That'll be a nice mix of audiences for us to work through the exercises with. So thank you for taking the time to answer those polls.
It's really helpful for me to know a little bit about who's here and what gets you folks going. Okay, so we're done with the polls. It was nice to learn a little bit about the audience, and we can jump right into the orientation: the building blocks of Kubernetes.
So if we talk about Kubernetes, often the first thing that comes up in people's minds is containers, right? You hear about the Kubernetes thing and people say: oh, you use containers, cool, got it. And that's true; obviously, Kubernetes is a container orchestrator. But when I'm explaining Kubernetes to someone who doesn't know about it, I like to start one level earlier.
I'd start one level earlier, with images. Images are pretty neat because they're a standard form that we can use throughout all of our container infrastructure, and especially in Kubernetes, as a sort of cookie cutter for all of our applications: the image defines what software is going to be there, what the file system will look like, and some of the defaults. I've reproduced here the nginx Docker Hub official library image.
You can see a lot of stuff in it. If you've never really dug into a Dockerfile, or especially looked at one from an official library image, it can actually be kind of interesting to see all the stuff that you can put into the image. We see right off the bat what OS distribution we're coming from, which version, and which variant of the image we're going to take.
We see file system changes, to, you know, link standard out and standard error to some log files; the exposed port; how we'd like the application to be signaled to stop; and the command that it will run by default.
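As a sketch, the kinds of directives being described look like this in Dockerfile form. This is an illustrative reconstruction, not the actual nginx official library Dockerfile; the base image and paths are assumptions:

```dockerfile
# Base OS distribution and variant (illustrative choice)
FROM debian:bookworm-slim

# Link the app's logs to stdout/stderr so the container runtime collects them
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log

# Port the application listens on
EXPOSE 80

# How the application should be signaled to stop
STOPSIGNAL SIGQUIT

# Command run by default when the container starts
CMD ["nginx", "-g", "daemon off;"]
```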
So that's a lot of information that just gets packed up in this nice standard format. The other really neat thing about images is that ever since Docker 1.10 or so, we've had SHA-256 digests of all these images. So when you see an image running in production or running in a cluster, and you have it on your machine, you can compare the two very, very well, because it's got a hash.
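In a pod spec, pinning to that hash instead of a tag looks like this; the digest value below is a placeholder, not a real digest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pinned          # illustrative name
spec:
  containers:
    - name: nginx
      # "@sha256:..." pins the exact image content; placeholder digest shown
      image: nginx@sha256:0000000000000000000000000000000000000000000000000000000000000000
```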
That's really neat also because you can go request that same hash from the registry as well. And security is often kind of a data-join problem (joining data between different systems) and a configuration problem of locking down all the right configurations.
So, once we get to containers: containers obviously aren't brand new at this point; they're almost passé. Docker has been popular since what, 2014 or so, and even before that, Docker wasn't in itself entirely new.
It used a lot of things from Linux: cgroups, namespaces, file systems. People sometimes think of the container as a security box or something; let's think of it more as a shipping container, something that runs everywhere, that you can use to deploy your app consistently in different places. You can put a shipping container on a truck, you can put it on a train, you can put it on a ship, and it will run more or less the same way.
Just to underscore this, Julia Evans makes these wonderful comics, and one of them shows you that a container isn't magic. It's not a magical security box into which you can put, you know, malware and run it and it'll be fine. You can see that you can actually do your own little version of a container in a couple lines of shell: creating cgroups, making a file system,
unsharing mounts, and changing root into this little volume. This is just to underscore that containers are more like shipping containers; they're not like lockboxes or vaults. But they're very useful, obviously, because they provide a sort of isolated environment for us to run our applications, a predictable, reproducible one. Now we can move on to orchestrators. Obviously, if you're here for a Kubernetes controls workshop, you've heard of Kubernetes; I noticed that in the polls.
You don't want to just have one container running somewhere and forget about it, because it will go down, it will need more storage, it will need to get rescheduled, some maintenance will happen. Kubernetes handles a lot of those tasks for you; it sort of stands in the middle, conducting, making sure that the storage comes in when it's needed,
that the scaling happens when it needs to happen, that the network exposure continues working, that the application is still up when you need it to be up, that the secrets are provided, and that any sort of other interactions are mediated. That's why we've had a lot of orchestrators come up; OpenShift and Kubernetes are sort of the most prevalent these days, but you had other ones as well in the past. And the other key building block,
I think, is what all these things enable: once you have containers, you have images, you have orchestration, you end up with standard APIs for basically everything. That's actually how we at StackRox think of the world a lot, in terms of this Kubernetes deployment.
You can see the deployment in the middle of this graphic, and all the information that comes around it, including the image: all the vulnerabilities that are inside of an image, the registry that it came from, the packages that are installed, both from the OS and from the, you know, language-level dependency managers. You can think about your deployment in terms of where it is and which kind of cluster it's in. Is it run by a certain business unit? Is it a test cluster, or is it production?
There's a lot of granular control of privileges, which we'll see in the exercises, including Linux host privileges, host mounts, Kubernetes service accounts, access to secrets, disks, things like that; and networking as well, within and without the cluster, so between applications and also with the outside. And finally, runtime behavior is often best understood in the context of the whole deployment,
with all the context that you know from all those other things. So this is a really neat opportunity for security: with all the options that Kubernetes provides, which we'll see, you have a lot of opportunity to lock down your applications more than you might have before, or in a more visible way than you might have before.
And finally, the last kind of key thing I like to mention, especially for folks who are not as familiar with Kubernetes itself, is sort of your model for how you do things. We throw around these almost buzzword-ready terms, but they're actually helpful: it's declarative, immutable infrastructure, and that's maybe best understood as a contrast with imperative and mutable.
For declarative versus imperative, you can go back to, sort of, you know, middle school grammar: a declarative sentence states something; an imperative tells you to do something. So with declarative, you can just say "I want this to be true," and you tell the Kubernetes API server, and you do this in the form of YAML, as you see on the screen, sometimes with a lot of YAML, more YAML than you might like to see.
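A minimal example of that declarative request is a Deployment manifest you'd hand to the API server with `kubectl apply`; the names and image here are illustrative, not from the workshop:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # illustrative name
spec:
  replicas: 2                # declare "I want two copies to be true"
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: nginx:1.19  # illustrative image
```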
But then, after that, it's immutable, not mutable, so you don't change it. You don't go and change the running applications; you just make a different request of what you'd like to be true. This is also really powerful, and you'll see this a few times: in each of the exercises, we'll deploy
something, then we'll come up with a new manifest, a new YAML spec that says a different thing, and then we'll deploy that to roll out a security change. There's no more going in and patching things, or going and, you know, marking something read-only by hand, something like that. And that's really a powerful way to do security, in a way that doesn't, you know, either drift out of your desired configuration or cause you to lose hold of what's actually going on in the cluster.
So those are the building blocks that I always like to cover. Now we can talk a little bit about the typical security posture of a Kubernetes cluster. There's a wide variety of thoughts on how these things usually work and how secure Kubernetes maybe is out of the box. One good way to start is just to think about the Kubernetes defaults themselves.
There are some good things, and there are some iffy things. This is a conversation that you might have stumbled into if you've been around in the community: how we might be able to change defaults, or work toward a secure-by-default distribution of Kubernetes. Out of the box, Kubernetes has some good things. There's no external exposure of services: if you run a container, it's not by default exposed outside, which cuts off a lot of accidental exposures. And then, immutability is encouraged.
It's easiest to use Kubernetes if you're sort of playing by the rules and using the YAMLs, and it's a little bit more complicated if you're trying to mess with things once they're deployed. So those are good things. But the things that make you think a little bit more, and that make people sort of think Kubernetes doesn't have a great security posture, are some of these other defaults.
Containers run as root by default, and almost everyone doesn't remap root, so you're running as root on the host in every container. By default, with no network policies, any pod can talk to any other one. Containers will come up with a writable root file system, so by default they're sort of in a writable sandbox. And there's no seccomp profile applied; that feature is just starting to move again, but right now, in most places, here's what you'll see with seccomp:
most Kubernetes distributions actually take the Docker seccomp default, get rid of it, and go even more permissive. Just to underline this, in the Kubernetes security assessment from last August, I believe, the assessors wrote that configuration and deployment of Kubernetes is non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly defined security controls.
The implicitly defined things here are when we have some environmental characteristic of the host or of the cluster, and it interacts with the YAML in some unpredictable way.
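Overriding the run-as-root default is one of the simplest of these fixes. A minimal sketch of the container-level settings, where the image name and UID are arbitrary examples:

```yaml
spec:
  containers:
    - name: web
      image: nginx:1.19      # illustrative image
      securityContext:
        runAsNonRoot: true   # kubelet refuses to start the container as UID 0
        runAsUser: 10001     # arbitrary example non-root UID
```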
So by default, you know, we're not necessarily in a great place if you haven't spent some time to take the defaults and lock them down. And so you might then ask: are these defaults typical? And the answer is, yes, actually. I'm not sure that I've seen many clusters that I walk into that have anything but the defaults for many of those options.
Although people did kind of wake up to root users when the runc CVE came out about a year and a half ago. And are there people better than this, you might think? Well, some are, yes; some people are sort of on the hill, getting better, but some aren't as well. Most people aren't. As for some examples, you might have heard about some of these: often you'll see Kubernetes dashboards exposed.
My colleague Karen, who's here helping, has been following the cloud providers and whether their dashboards are enabled or disabled, supported or unsupported, and there's some good movement there. The Shopify bug bounty report on HackerOne is really worth reading if you want to learn about how pods can be used to abuse infrastructure; we have an exercise that goes toward that. A friend of mine was still running without RBAC in 2019.
I haven't caught up with him in 2020, but this might be a problem for our friendship. And then, you know, you see most containers running as root in most places, and network policies usually aren't there, because they're a little bit difficult to get started with. So the situation is not necessarily good, but let's talk about how some of these things can get better.
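Getting started with network policies doesn't have to be hard: the conventional first step is a default-deny policy per namespace, which selects every pod and allows no ingress. The namespace name below is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace    # illustrative namespace
spec:
  podSelector: {}            # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress                # no ingress rules listed, so all inbound traffic is denied
```

From there, you allow specific traffic back in with additional, narrower policies.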
So, too long, didn't read (or too long, didn't listen, I guess): most people aren't great at this yet, but you don't have to be most people, and I hope that the exercises will, you know, get us a little bit closer for you. So let's get into those.
We're going to do this using a website, because there's a lot of, you know, copy-paste and looking at things that don't fit well on slides. The website is there: secure-k8s.dev. And there are some setup instructions.
Honestly, if you don't have a cluster ready, it may be a little bit late to start, but the first exercise page is a setup page that explains some of the requirements for what we'll do. For each of the exercises, we'll talk about kind of which control we're looking at, we'll run an application, and then we'll attack or abuse it in some way, using the privilege that it has or the exposure that it offers at that point.
If there are questions that have accumulated, I will answer them then. And then we'll apply a patch, talk about what that is, and then try the attack again and see that it doesn't work, and then we can take questions again; that would be using the Q&A at the bottom, if you're interested in asking a question during those exercises. In terms of other logistics: do use a cluster that you're specifically allowed to run vulnerable apps in. Please know that we are introducing vulnerabilities into the cluster on purpose.
So don't do that in a cluster you're not supposed to do that in. And do think of other stuff you could do during the attack phase. Some of the exercises will have a place where we say, okay, now we have host access: what, you know, dastardly things could you do? You're welcome to submit those in the Q&A, and we'll share some of them if they come in, trying to preserve some of the fun interaction of a live workshop.
Don't use a cluster that hosts real workloads, or, again, that you're not specifically allowed to run stuff in, and don't attack anything outside your cluster; just concentrate on the apps that we're deploying. We can answer general questions using the Q&A box, about sort of what did you just do, or what is this apply thing that's happening, or where are these images coming from, et cetera, but we can't actually provide individualized support or debugging, just because of the format and because of the number of folks here. I will go through everything, in case something doesn't work for you. So if you're in an exercise and it doesn't seem to be working for you, I'll be showing the exact same thing that you will be doing, so feel free to just sort of step back and watch.
There are a couple of poll questions we have here, just to get a sense of who's going to be, you know, coming along with us through the exercises. So we can answer a couple of quick poll questions here, just to make sure we're all aware. The first question is: do you plan to run the exercises along with us?
Okay, we'll need to close this out shortly. Okay, looks like most people are going to come along, which is fun, so I'm glad that's true. And then the next question is going to be which kind of cluster you've got, just so I can make sure that I explain, you know, what's going on here. This is for the environment that you're going to use for the workshop. I'll be using Google Kubernetes Engine; I've also run these on Docker for Mac.
Okay, so it looks like we've got a lot of minikube, one Docker for Mac, one Docker for Windows, a couple GKE and EKS, a couple self-managed, and a couple other. Okay, so we'll have a few exercises that have some environmental constraints; primarily those will be around network policies or specific RBAC things, where the example might not look the same, but we should be good with the kind of environments that I see here. Okay, so it's time for the exercises.
So, if you've landed on the webpage, secure-k8s.dev, you'll see, you know, the list of exercises, and also, at the top, the setup, like I mentioned, which will tell you a little bit about the requirements. There's one thing I want to show here first, before we get going: some of the exercises will use network policies. You can just skip those if your cluster doesn't support them; for example, on GKE you'd have to enable an option.
The examples by default also use NodePort services, largely for simplicity across that number of environments. In some places, you'll have to make a firewall rule. So for my cluster, I had to make a firewall rule that allows this port range, 31300 to 31305, to my machine.
We've talked about being responsible. You can clone the repository if you want to see the scripts and stuff in your terminal, but you'll also see everything on the screen and on the website, so that's all you need to do. To be able to do the exercises without modifying them, you'll be using NodePorts. If you are using a local setup like Docker for Mac, localhost might work; that'll be the default. What we'll use is this same environment variable for each of the
attacks that require network access. So if you have Docker for Mac (there's one or two of you), or minikube (I'm not sure if minikube will work on localhost), you'll be fine with localhost. If you're using a different cluster like I am, you'll have to just export, you know, WORKSHOP_NODE_IP, and then I'll set it to the node IP that I have here. Oops, don't get it from Chrome; you can get it from `kubectl get nodes -o wide`.
Okay, so if you have run the, you know, create here for the smoke test, and then open that port, 31300, on your node IP, you should be able to load this nginx default page. And if you do, then you're ready to go.
You may have to add a firewall rule. So I'm going to get started. Again, if you have any trouble with a certain exercise, you can ask a question in the Q&A box, but we'll try to keep this rolling to keep us on time. So we'll answer any questions as they come up, but if you get stuck and want to just watch, that's also totally cool.
There's information about how the app will work when it comes up. We're going to start with a vulnerable application, attack it, and then see how we can streamline the image to help out with our security posture. So we can see the Dockerfile here; it's on the site, so you don't have to download it yourself. This is copied from a vulnerable-image repository, and you can see that it sort of just starts with Tomcat, puts a vulnerable application in it, and exposes it.
So we're using kubectl apply. What it does is take the manifest that I've given as a URL from this website and deploy it into your cluster. So it's made a namespace, it's made a new application, a deployment, and it's created a service so that we can access it. We can wait for it to deploy.
And then we can Ctrl-C out of that command. Then we can use a canned exploit to just exploit this application, because it's got a Struts vulnerability, a vulnerability in the Java Apache Struts framework. You can see this is actually the injection; it's a big, long injected OGNL payload, I think, or something, some kind of funky injection, but you can see that right here is where we have the exploit.
So if you do all this funky stuff to this application, you can use this shell, and it will run this shell inside the pod. And with our sort of out-of-the-box, vulnerable, no-security-lockdown container, we're able to just go ahead and download a crypto miner, untar it into the file system, make it executable, and then run it. And, you know, I black-holed it, so it doesn't actually mine any real cryptocurrency.
I should have made it mine for me, but sorry. And then we can see, when we check the processes inside the container, that minerd is running. So the injection worked: out of the box, we were able to download code and start running it in your application, with an application vulnerability, and that's often how these things go if people haven't taken any steps to prevent that. So the first kind of, you know, countermeasure we can talk about is streamlining the image.
You can see that this attack used a shell, it used wget, it used tar and chmod, and then ran the crypto miner. So we can make this image a little cleaner, and that might stymie the people that are out there, you know, running scripts against all the Struts applications they can find.
So, let's change the image. The Dockerfile started out as we saw it above, and then this is just a diff; see what we did to streamline it. The first change: you can see that we removed the original base image. tomcat:7 is sort of the default image
for Tomcat. tomcat:7 will have a little extra tooling, which is nice to use, and then we can switch it to a slim variant that has fewer things.
It, you know, weighs less in terms of disk space and network transfer, so folks will sometimes use it for that reason, but it's also nice for security. We'll add this writable path for later, and then we'll finally remove the package manager, actually, at the end, because we don't need it when we're in production.
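Put together, the streamlined Dockerfile looks roughly like this. It's a sketch based on the changes just described, not the workshop's actual file; the application path and removal command are assumptions:

```dockerfile
# Before: FROM tomcat:7 (the full image, with extra tools like wget)
FROM tomcat:7-slim

# Writable path used later, when we enable a read-only root file system
VOLUME /usr/local/tomcat/temp

# The vulnerable demo application (illustrative path)
COPY app.war /usr/local/tomcat/webapps/

EXPOSE 8080

# Remove the package manager last: it isn't needed in production,
# and it's one less way for an attacker to pull in more code
RUN apt-get remove -y --allow-remove-essential apt
```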
So I've pushed these to Quay, which is a registry that has scanning built in. I've got to make it a little wider; these two tags are the ones we're looking at. You can see that this, you know, standard image had 103 vulnerabilities, and then the streamlined one had 82. The mix of severities is a little interesting, just because tomcat:7 and tomcat:7-slim are, you know, being updated at different cadences, but: fewer vulnerabilities, even though maybe the base images were a little out of date.
So that's the change, and then we can go ahead and deploy that one. There's no real change yet to the application or to the Kubernetes manifests, other than an image tag. There's actually a nice command called kubectl diff that lets you compare.
And this time we did the exact same request, but then the response actually got echoed back to us: it was "wget: not found". So we tried to inject code into the application that's running, and actually, you know, it did inject; the vulnerability still worked. But then we got to the first step in our process, in our, you know, mechanized exploit, and wget wasn't there.
That's not necessarily a, you know... it's not a silver bullet for protecting you from vulns, but it's certainly helpful to stymie people that aren't going to spend too much time on you.
So we were able to stop the download of code into the container, which is pretty neat. We removed wget just by using a, you know, lighter base image, and then we also lost things like apt that could also be used to add code as well.
There are some notes on the site, which I won't necessarily read to you, about, you know, how to slim down images. Often people will try to use Alpine Linux or another smaller distribution; the Debian slim or Buster slim images are pretty nice also these days, and if you have something like a static Go binary, you can just use scratch, which is kind of neat. We'll show a scratch image later.
So that's the first example: just sort of giving you a streamlined image, taking an image and removing some stuff from it, to make you a little bit more prepared for folks trying to exploit your application.
The next thing that we can do is a read-only root file system. I'll stop after this next exercise, just to get a sense of pacing, but this one's related very much to the previous one, so we can just continue here. For this one, the setup is exactly the same as the previous, so we get to skip that part. What we're going to do here is show how to mark the root file system of your container as read-only instead of read-write, and how to make certain paths writable.
We can go ahead and just mark it back to the base, the first image, and wait for the pod to run. It's already there; this image is probably cached. And then we can attack it just like before, and it'll work: it'll show us the crypto miner in about five seconds. There we go. So the same exact thing happened as before, which makes sense, because we're running the same attack.
Now, we were able to download code and run it because we had a writable file system. If we weren't able to do that, we'd, you know, wget something to the disk, and it would actually just be like: no, you can't add more code. This is production. What are you thinking? So we can make a pretty simple change to the Dockerfile, and then an also relatively simple change to the Kubernetes YAML, to ask for a read-only root file system. So I'll do the Kubernetes one first.
This is the diff command; that's pretty nice. You can see we changed the image to accommodate this Dockerfile change, and then, in terms of the Kubernetes settings, it's actually just one boolean. Inside of your container spec (this is a container-level setting), you can set readOnlyRootFilesystem to true, and then you've got it: it'll make / read-only unless you opt in other paths.
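That one boolean sits in the container's securityContext. A minimal sketch, with an illustrative container name and image:

```yaml
spec:
  containers:
    - name: web
      image: quay.io/example/struts-demo:slim   # illustrative image reference
      securityContext:
        # / becomes read-only; any write fails unless a volume is mounted there
        readOnlyRootFilesystem: true
```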
So that's all you have to do in the YAML. In the Dockerfile, we're using a VOLUME instruction to mark this path writable; there's a note later about how you'd need to do it in CRI-O, but this works in Docker.
Once it's running, you'll be ready to go, and then we can go ahead and attack this one, and we'll do the exact same thing; you'll notice the command is the exact same. I just realized that there is a download link for this attack script, I think on the previous page. Yeah. So if you haven't cloned the repository, you can download the attack script yourself, make it executable, and then run it. Sorry not to have mentioned that before; I've got a local clone,
so I just forgot that. In this one, we don't see an error about wget; this image actually still has that main, you know, not-slim base image, so it still has wget. wget runs in our injection and tries to download something, but then, when it tries to write to the file system, it says: oh no, read-only file system. You're trying to write to something that is not writable at all.
So we've been able to block this attempt to add more code to our container and run it, this time just by setting a read-only root file system. This is actually one of my favorite underused security controls, because if you can't add more stuff to your container, you just block off a lot of sort of lazy assumptions in attacks. And it has the side effect of also enforcing some discipline on application development and operations: sometimes an application will accidentally start writing to the local file system.
Maybe it fills up some log there, and this is actually a nice way to root out those problems. If you set everything as read-only early on, then your app will start failing, instead of filling up a disk later or something like that. And it is easier to do when you're early on in an application, before you've, you know, maybe started relying on a read-write file system for lots of things.
If you do need a writable path, there's a note here about how to do that. You can either use a VOLUME instruction in your Dockerfile, if you're only running on Docker, or: Kubernetes has a type of volume called an emptyDir that works everywhere, including on CRI-O. CRI-O doesn't respect VOLUME instructions the same way that Docker does.
I
want
to
just
take
a
chance
now
to
run
one
poll
which
is
about
the
workshop.
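A minimal sketch of what these two pieces might look like together in a pod spec — the names and image are illustrative, not the workshop's actual YAML:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo            # illustrative name
spec:
  containers:
    - name: app
      image: example/app:latest  # illustrative image
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: scratch          # a writable path, if the app truly needs one
          mountPath: /tmp
  volumes:
    - name: scratch
      emptyDir: {}               # works on Docker, CRI-O, containerd alike
```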
A
Pacing — I just want to understand how it's going for everyone, and if you have anything you want to write, you can use the Q&A. But in general, if you'd like the workshop to move faster, slower, or at the same pace, go ahead and answer that in the poll now, and we'll keep it open until we get a reasonable number of responses.
A
Okay, cool — so one for faster, three for slower, most for the same pace. I will shoot for approximately the same pace, maybe a little slower. If you do have questions where I've glossed over something, or something's not clear, please go ahead and use the Q&A so that I can address it with everyone. All right. So now we've taken a Struts application and we've done two things to it: we've streamlined it down, and we've seen that that blocks the use of wget and other convenient tools.
A
We've also marked it read-only — so, two kinds of app-level things you can do to make your application more secure. The next thing is more about host access. Here we're going to cover what host mounts are, how they're risky, and how to use read-only mounts if that works for you.
A
The example here is actually taken from the Datadog DaemonSet — nothing against Datadog, but it is a monitoring tool, so it needs a fair bit of access and it needs to look at various things.
A
So if you look at what it's got access to — these are all host mounts in Kubernetes — it'll access the Docker API, which gives it a lot of power; the /proc directory, to understand what all the processes running on the machine are; cgroups, which helps it work out which namespaces things are in; the pod logs, for some log aggregation, presumably; and then also a runtime directory for all the containers.
A
This comment is from them, not from me — and then in our example we'll also mount /etc, just for fun.
A
So what I've done is built a simple container based on those volume mounts. You can inspect the YAML if you wish, but that's what it's going to do: it's made a new namespace and deployed a server in there with all these mounts inside of it.
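A sketch of what hostPath volumes of the kind just described might look like — the pod name, image, and exact paths are illustrative, modeled on the Datadog-style list above rather than copied from the workshop YAML:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaccess-demo                # illustrative
spec:
  containers:
    - name: agent
      image: example/agent:latest      # illustrative
      volumeMounts:
        - { name: docker-sock, mountPath: /var/run/docker.sock }
        - { name: proc,        mountPath: /host/proc }
        - { name: cgroups,     mountPath: /host/sys/fs/cgroup }
        - { name: etc,         mountPath: /host/etc }
  volumes:
    # Each hostPath exposes a piece of the node's file system to the pod
    - { name: docker-sock, hostPath: { path: /var/run/docker.sock } }
    - { name: proc,        hostPath: { path: /proc } }
    - { name: cgroups,     hostPath: { path: /sys/fs/cgroup } }
    - { name: etc,         hostPath: { path: /etc } }
```

Note the `/host` prefix convention on the mount paths, which the speaker discusses next.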
A
So with all those mounts, we're not even going to try to exploit a vulnerability or something — that gets old after a while. We can check: is there a pod running yet? If you see one running, you're ready to go to the next step. And then here's a place where you can have some fun — I've added some host-modification and info-gathering stuff here that you could do. You'll notice that most of these are prepended with /host.
A
That's a common thing when you're mounting host stuff: to just throw it under a directory. It's not required, but it makes a lot of sense.
A
So here's where, if you have some interesting ideas about what you could do to a host if you had access to these directories, that'd be kind of neat — I'll show you a couple. One of them is: let's look at the shadow file, where the password hashes would be if these had any password users. We can also look at all the processes on the host, which is kind of neat.
A
You can see all the command lines — I'm not going to dump all of those, but if you look at what process ID 1 is doing on the host, it's got some funky delimiters here, but it's systemd doing stuff. Kind of neat — you can tell you're not in the container when you're looking at this. You could look at the environment variables too if you wanted to; I've just listed the file. And then you can see that, within your container, there's also a /proc file system.
A
So if you have any creative ideas of what you could do with this kind of access, throw them in the Q&A and we can share them with the group. When I gave this workshop in person, I had someone talk about messing with the fstab entries — the force is strong with that one; that would be pretty bad. And I should say —
A
This is not necessarily an argument against using a monitoring tool or anything else that has these mounts — just a pointer to make sure that you've got appropriate compensating controls, so people can't just go in and abuse them.
A
So — next thing. If you have ideas, go ahead and throw them in; if not, I'll just continue. I'll show you one other interesting thing about anything that mounts the Docker socket.
A
This is a policy we've had built into StackRox for a long time, because mounting the Docker socket is a great way to give people root privileges on your node. So inside the same container we can just install the Docker client — for simplicity, first putting in curl so I can download it — and then we can just use Docker inside of our container to do lots of
A
stuff — assuming your cluster is running Docker; you could use crictl or something if it weren't. So we did a docker ps here — it's expecting a much wider screen, obviously — but this is telling me everything that's running on the host, just by talking right to the Docker API server. And we can also run commands on it. In most places,
A
Docker is not super locked down beyond host access controls. So you'll notice that the prompt changed — this was a pod, and now we're actually in a busybox container that we just ran. Now we can do more stuff, and this one's actually running in privileged mode, which I don't think the agent was — so a nice little escalation here. There are no results right here, but now you've got a privileged container running on the node.
A
If you Ctrl-D, you're back in the pod. And then here I want to show the value of the logs — that's one of those entries; there's a pod log path that's listed. We can go see what we learn from the logs if we wanted to. Say I wanted to see the logs for my Struts container: I can explore these files. Let's see — I'm just going down the directory tree.
A
A
So, with all that host access, the countermeasures are always going to be app-dependent. We can remove things if we don't need them; we can mark things read-only if we don't actually need to modify them; and we can also mount more specific paths, depending on whether we actually only need, say, one part of /etc or something.
A
This is the kind of thing that's a little hard to do from the outside if you don't understand what the app is doing, so it's better done in collaboration. In my case, I've gone ahead and changed some of the mounts. We can use this diff command again to see sort of an improved version and what the differences are.
A
A
If we only needed the hostname file, we could get rid of all of /etc and just give it /etc/hostname instead. And if we didn't need these two log paths, or the Docker containers lib path, we could just omit those — and then this is just repeating what we just said.
A
So those are the improvements that we made: basically marking things read-only and removing some of the paths, or making them more specific. You can go ahead and apply those and make sure the new pod gets deployed. It's the same image, so it should be pretty fast, and then we can get inside again.
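An illustrative fragment of what "improved" mounts of this kind could look like — marking mounts `readOnly` and narrowing `/etc` down to the one file needed; the names are examples, not the workshop's actual diff:

```yaml
# Fragment of a container spec with tightened host mounts
volumeMounts:
  - name: proc
    mountPath: /host/proc
    readOnly: true                 # we only inspect it, never modify it
  - name: hostname
    mountPath: /host/etc/hostname
    readOnly: true                 # just the one file, not all of /etc
volumes:
  - name: proc
    hostPath: { path: /proc }
  - name: hostname
    hostPath: { path: /etc/hostname }
```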
A
In this one, you'll see that some of the exercises we did before — touching the hostname file and the password file — won't necessarily work, but the Docker thing will still work. This is actually an interesting point: marking the Docker socket read-only doesn't help. So we download Docker, and now we can see that we can use it to see what's running, and we can still run a privileged container on the host.
A
And that wasn't stopped by us marking it read-only. So really, the Docker socket is something you've got to be very careful with, no matter what kind of things you're trying to surround it with, because even read-only will still let you run containers. That's the reason we've got a policy built in for this — and it's something we don't usually see popping up in customer environments, but when you do, it's always interesting to see what they're doing with the Docker API server in a container.
A
So the way to do this yourself is primarily doing it with the team that knows the application well — you'll be able to understand what kind of host access the thing really needs.
A
Oftentimes we find that people don't really realize what's been added over time — maybe they're not reading the YAMLs super carefully, or they're not proactively looking at all the YAMLs. So just understanding, and maybe enumerating, what kind of host access is going on in your cluster, and then talking to the application teams, could be helpful.
A
A
Next up, we'll start going toward network segmentation. I know a fair number of folks expressed earlier in the poll that they were interested in network segmentation as one of their security priorities. This one is going to be primarily about pod-level
A
networking — so we'll talk about how pods have access to each other and how you can go ahead and limit that access. This one is modeled after the Shopify bug bounty report, which I've linked here — it's on HackerOne. Effectively, what happened is there was a preview tool, a screenshot-taker thing, and it didn't have network policies on it.
A
So you could coerce it into accessing the Google Cloud metadata server, and then you could see a little screenshot of what the API server returned, and then you could use that to steal a Google service account and get into the cloud account itself. That was a fun report to read, and it should still be public, so it's worth reading — it takes maybe five or ten minutes. So we'll have an application that's kind of similar to that.
A
A
All this code and stuff is in the workshop repository on GitHub as well, so you can check it out later. The application code is really simple: a fetch with a URL parameter. The Dockerfile is actually very, very small — I mentioned scratch earlier; this is a static Go binary, so I can just copy it right in, and that's all I need. This image will be like 10 megabytes or something — maybe 5.
A
I don't know — and it will have nothing else in the file system, just this one binary, which is fairly neat. There are other kinds of minimal options you can use if you're thinking about doing the streamlining thing from the first exercise.
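A Dockerfile like the one described might look like this — the binary name is illustrative; the transcript only says a static Go binary is copied into a scratch image:

```dockerfile
# Minimal image: no shell, no package manager, nothing but the binary
FROM scratch
COPY server /server          # statically linked Go binary (illustrative name)
ENTRYPOINT ["/server"]
```

With nothing else in the file system, there is no wget, no shell, and nowhere for an attacker to find convenient tooling.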
A
So once that's running — this one we'll actually use in the browser, because we don't need to try to do it in the command-line terminal; it's easier to see in the browser. I have a little link there; it's just a convenience to open up the URL, but if you don't have your environment variables set or something, just go ahead and access your node on port 31302, and that will open up the service.
A
So if you see this — well, now you know my IP; well, the node's IP — that's what you're looking for.
A
And then, in terms of the — I guess, quote-unquote — "attack," we'll use our little fake exploit to access different things. I've included some examples, but if there are other things you come up with, that's another thing you can throw in the Q&A and we can chat about together.
A
So this is the example of showing the node's IP. The cloud provider metadata server is how Shopify was compromised during the bug bounty. You can see there — that's the 169.254.169.254 address. That's the metadata server in basically all environments.
A
If you then go through different paths, you can see — this one is starting to show some of the metadata API, the v1beta1 endpoint. We should be able to keep going in and see things about what this instance is.
A
You can see where I am in my environment. There's going to be a kube-env in one of these things, but we can keep poking — all kinds of interesting stuff. The bug bounty came around, I don't know, two years ago, and Google has been making a lot of changes over time to block off this method of attack, even if you're not taking steps yourself. So that's interesting.
A
So that's just an example of all the stuff in a metadata server. Other providers may not have the same kinds of controls, so that'd be interesting to check in your environment. Obviously, it won't work in something like Docker for Mac, which doesn't have a metadata server.
A
We can also try using this pod's network — because that's what we're doing: we're injecting a URL into the pod's network context and then using the pod to reach out. We can reach out to Kubernetes — Kubernetes has a nice, handy hostname, kubernetes.default. In this case it's doing the right thing: it's telling us, hey, you're not allowed to see stuff, because you don't have any creds.
A
That is the limitation of these kinds of attacks: you're using whatever access the pod has intrinsically; you're not necessarily able to include headers and such. A nice one — in GKE, at least in some environments, it's disabled — is the kubelet read-only API. It's a nice example of a new attack surface that obviously didn't exist before Kubernetes: the kubelet, because who uses the kubelet outside of Kubernetes?
A
So this is a read-only API port. It's read-only, so you can't change things, but you can leak a lot of information. This one, for instance, is a pod list, and it's going to tell us about every single pod running on the machine, which is actually kind of interesting. I guess we don't have any nginx on here, but our Struts pod actually shows up — you can see that it's in the struts namespace, with its pods, and it's got a pod name.
A
You can see lots of information about the secrets — which things are mounted, or at least the different paths we'll see — so it's a big information leak if you're breaking into a cluster. And just to show that it works on other pods as well, we can try Struts. So here — this is what the little showcase app looks like. I think it's just not getting some styles and stuff, but you can see that you can access the meat of the page.
A
So the answer for this is going to be network policy. I'll go ahead and show the difference — this is going to add a new policy, yep, because we didn't have one before. So now we'll add a network policy, and that means the pod will start to opt into network policies. If you don't have a policy, everything's unconstrained — one of those not-so-great defaults. Or rather, it's great for getting started, but not great for security. So we can add a network policy.
A
This is all the boilerplate, and then here we're saying which pods it applies to: it's going to look in the ssrf namespace, it's going to look for things that are labeled app: preview — which our app is — and then it'll talk about egress. With no other options, it will use that to block egress.
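Based on that description (ssrf namespace, app: preview label, egress listed with no rules), the policy might look roughly like this — the metadata name is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress        # illustrative name
  namespace: ssrf
spec:
  podSelector:
    matchLabels:
      app: preview         # which pods the policy applies to
  policyTypes:
    - Egress
  # No egress rules are listed, so all egress from matching pods is denied
```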
A
We can go ahead and apply that policy and see that the namespace wasn't changed, the app itself wasn't changed, the service wasn't changed — we just made a new network policy. And now we can go ahead and try the same things. So if we refresh the page, it's hanging now, and it's eventually going to time out, saying: hey, I actually couldn't talk to Struts anymore from my application, now that you've added this policy. You can still talk to some things —
A
potentially the rules can take a little bit of time to apply. The kubelet one's not working, the kubernetes.default one is also probably dying here, yeah — and the metadata server should also all be timing out, which is great. That's what we want: if your application doesn't need to reach out to things, then it might as well not be able to.
A
The other neat thing is that Kubernetes policies apply to connections, not packets. So with an egress policy like this, it's not saying no packets can leave; it's just saying you can't make a connection out. So you can still expose this app — you can see that we're talking to it — but it just won't be able to proactively reach out, which is pretty nice.
A
A
That was the countermeasure there — and obviously you're not going to be able to do egress if you have an egress policy. Those can be a little bit complicated to get deployed, so I've linked to an ingress and egress guide that my co-workers wrote about how you might get started with those, plus some more background.
A
Cool — so that's networking. Again, if anything comes up in your mind, or if something's not working, or you've got a question, go ahead and use the Q&A box and I'll talk about it with everybody. The next one moves to the Kubernetes-native attack surface itself: Kubernetes RBAC and service account provisioning.
A
So in this one — the Kubernetes API is a super powerful piece of attack surface. It's where you go ask all the questions: "I want these YAMLs to be deployed in my cluster." It's super powerful; it's going to do everything for you. So access to it is going to be very, very important to protect — and by default, pods do get a service account that lets them access the API server.
A
They don't get much access with it in most environments, but they'll have some access. So let's go ahead and deploy an application. In this case I've got two YAMLs here: one of them is for the application, the other one's for the RBAC — just because RBAC stuff can be a little bit more complicated to deploy.
A
We've got a new namespace — let's see if this thing's deployed yet. Great. And again, we're just going to go into a shell; we don't need to try to layer on the exploits here, so we can just go ahead and exec inside of this pod.
A
So if we want to go see — this is basically a default YAML, so it doesn't have any kind of changes to it. This is just right out of the box, more or less.
A
We can just go ahead and see what we're allowed to do inside this container. So go ahead and install curl, and then add kubectl — sorry if my pronunciation is grating to you; it's just how I grew up saying it. And now we can see whether we've got a service account, and then try to use the command line to see what you're allowed to do.
A
So, if you see this path — this is where Kubernetes will put a service account by default, unless you tell it not to — you'll have a CA cert for the API server, something to tell you what namespace you're in, and the token that you've got.
A
It should tell me the namespace — it doesn't print a separator — but it tells you what namespace you're in, the CA cert, and the kind of token you're going to have to access the API server. And again, kubectl will just pick that up by default; it works pretty nicely. And there's a nice command, auth can-i, where you can ask it: hey —
A
can I do the following? There's also a list option that will tell you a list of the things you're allowed to do, so it's super useful if you're trying to understand what access you have in an environment. The stars at the top are immediately something to care about: resources and resource names of star, with any verbs — star star, meaning we can do anything to anything, at least within the namespace — which is a little bit funky. As the song goes, we're all made of stars.
A
Your RBAC shouldn't be like that one. Let's see what we're allowed to do — your cluster might act a little different depending on how it's set up. I believe the first one works in Docker for Mac, based on its default configs; it doesn't work in GKE — that's trying to get pods across all namespaces. With this one, we can just look at our own namespace.
A
It'll work — I'll be able to see myself using the Kubernetes API — and then I can actually run another container, sort of like how we used Docker to run a container. We can use kubectl run, which is a fairly convenient thing, so you can run a little busybox image with a shell and say: hey, not much here.
A
It's got a container to run, and we're running in our cluster now, which is kind of neat. That's sort of the default in the GKE cluster — what access you have is kind of alarming, actually. So that's just what we had by default. Then there are things we can do to fix that: we can change the role that's granted to the account, and we can also disable the automount, so it doesn't even get a token in the first place.
A
So auth, like I mentioned, has a little bit of a funky YAML structure if you want to remove things — so there's a special command called auth reconcile that will help us apply our RBAC better. When we streamline our RBAC, I want to be able to delete some stuff and change some other stuff, and this auth reconcile will help me. You can see it tells you what it's doing: it added a new rule and reconciled some other ones.
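A hypothetical narrowed Role of the kind a streamlined RBAC pass would produce, replacing a `*`/`*` grant — the name, namespace, and resource list are illustrative, not the workshop's actual rules:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role          # illustrative
  namespace: rbac-demo    # illustrative
rules:
  - apiGroups: [""]       # core API group
    resources: ["pods"]   # only the resources the app actually needs
    verbs: ["get", "list"]
```

Applying a change like this with `kubectl auth reconcile -f` lets rules be removed as well as added, which a plain apply doesn't handle as cleanly.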
A
It looks good, and we can apply the new version. I'll do a diff first, actually, so we can see what the difference is on the service account mount — in fact, I'll go ahead and show you how the access changed with that. So now the star-star is gone; there are just the new RBAC rules, which is kind of nice — we don't have that access anymore. Actually, that reminds me —
A
I may have misspoken: that might not be the default, but just an RBAC rule that I applied. And then we can change to not even mount the service account. That'll require a redeployment, so I can go ahead and see that it's deployed.
A
Go inside, do the same thing that we did before, and we'll note: inside of our run, there's going to be no service account mount directory at all. So we could go ahead and download kubectl again — I don't think I will — but it would show that you can't actually access the API server anymore, because you don't have a token here, and there's no way for you to get one.
A
So you can do this by disabling the pod's automountServiceAccountToken field. I'll just go ahead and show where that is, real fast, using a kubectl diff.
A
So there's just one thing in your pod spec where you can say: I'd actually not like a service account token, don't bother — and then there's also one in the service account spec. You should read the docs just to be sure you understand how those interact, because they both have defaults and they don't necessarily act the way you would think. So that's Kubernetes RBAC — showing how an
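The two fields just mentioned might look like this — names are illustrative, and, as the speaker says, check the Kubernetes docs for how the pod-level and service-account-level settings interact when both are set:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo                  # illustrative
spec:
  automountServiceAccountToken: false  # pod-level opt-out
  containers:
    - name: app
      image: alpine
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa                    # illustrative
automountServiceAccountToken: false    # service-account-level default
```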
A
initial configuration might leave you a little more exposed than you'd like, and then how you can take your RBAC and streamline it a
A
little bit, and disable service account token mounts if you don't need them. Most applications don't really need Kubernetes API access at all, so it's a nice, easy, app-by-app lockdown you can apply. Now — RBAC is oriented around namespaces. You might have seen up here that there were stars, and different
A
behavior: we weren't able to see things across namespaces, but we could see pods in our own namespace. Namespaces are a really important security speed bump. At least — they're not a container, they're not an isolation boundary per se; it's not a hypervisor or something, and pods might still get scheduled next to each other — but namespaces can at least be a good security speed
A
bump. So we'll see how that works in a couple of contexts, both in terms of network policies and just the things you are and aren't allowed to do across a namespace boundary.
A
So we can start with just this YAML. It'll deploy a new service — one called server, one called bad-server; they're actually the same — but we'll see how two applications living next to each other in the same namespace might share a little more than they should. Go ahead.
A
So in this case we can see that there's a bad-server pod running; we can also check that there's a server pod running, and they're running in the same namespace — the default namespace, I believe. Yes. So if we go ahead and go into the bad server — our goal, I should say, is that the bad server shouldn't be able to do the same things as the good server.
A
It shouldn't be able to talk to the good server — maybe it's some untrustworthy deployment or something. We can do the same kind of deal, where we install curl and then try to use it. We're in the bad server, and our security goal here is for it to not be able to talk to the real server.
A
But if we go ahead and curl it, we get the standard nginx response — so we're able to talk to the server, which we didn't necessarily want. And then I also made a mistake in my YAML — maybe I copied it from server and didn't remove something — so now I can see, say, the sensitive config, where I've got my auth proxy with a username and password.
A
This is kind of a contrived example, obviously, but this bad server is not supposed to have this stuff, and I kind of want to know how this happened and maybe fix it in the future.
A
So what we're going to do, then, is try to move things into different namespaces. I can show you sort of where we started: we had a network policy that allowed access within the same namespace.
A
We had a secret that had that proxy stuff in it, and we had two deployments. It would be easy to miss that we had this volume out here and that we had accidentally copied it over to bad-server — that's kind of what we're looking at. And since they're both in the default namespace, which you can see up in the metadata, that's why we're in this bad situation. So we can go ahead and split those out: delete the old stuff and then try to split it up.
A
First — in this case, I made a different mistake: I copied over that server YAML, but I forgot to change the proxy config. And this is actually an example where the namespace will help us, because — let's see — this pod is going to be stuck in ContainerCreating. See if it gives us an error.
A
You have to describe it to be able to see that, because it's just going to keep trying to create. It says: hey, I tried to set up this volume that you asked for, and I couldn't find the secret "config" — and that's because, if we read the YAML —
A
it's called secret "config"; the good server can mount it, and I just copied that into bad-server. You're in the bad-server YAML, though, and this secret mount doesn't actually take a namespace parameter — it only takes a name. And if we're in a different namespace — if we put bad-server in the bad namespace and good-server in the good namespace — there actually is nothing by that name. Like, we can't say:
A
"I want the good namespace's secret config." In the API, you're limited to mounting things in the same namespace.
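The volume reference being described — note that a secret volume is referenced by `secretName` only; there is no namespace field, so a pod can only mount secrets from its own namespace (names here are illustrative):

```yaml
# Fragment of a pod spec: secret volumes resolve in the pod's namespace
volumes:
  - name: proxy-config       # illustrative volume name
    secret:
      secretName: config     # name only — no namespace parameter exists
```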
A
So we can fix that just by removing the mount, and now we can check that the pod deploys — it's running now. If we try to do the same stuff we did before —
A
go ahead and install curl — we're in the bad server again, so now we're trying to talk to the good server. Notice we actually had to add this .good on the end, because it's in a different namespace; you can't just say "server" anymore once you're across namespaces, which is also nice — it gives you a little bit of separation. And since network policies are enforced in my cluster, the existing network policy we added said: hey, I only accept connections from within my namespace.
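A policy matching that description ("only accept connections from within my namespace") might look roughly like this — the name is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only   # illustrative
  namespace: good             # illustrative
spec:
  podSelector: {}             # applies to all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}     # an empty podSelector here matches only
                              # pods in the policy's own namespace
```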
A
That's working here: because we're across namespace boundaries, we'd have to add a special network policy rule to allow this bad server to talk to the good server. Just to make sure everything still works — we can still talk to ourselves, so we can still talk to bad-server.
A
We just can't talk to the good server anymore, which is our security goal here — and also, the proxy config won't be mounted. So we can just go ahead and get out of that pod now. That's an example of how your network segmentation will be improved.
A
If you start separating stuff into namespaces, you can't even accidentally take a secret that's not really meant to be yours across namespace boundaries. People often ask how you should divide up namespaces, and different folks have different answers: sometimes it'll be by team, sometimes by service tier — is this a back end, or front end, or database tier, or something like that? It kind of depends on your organization and who's responsible for things.
A
I typically like to see the people, or the application architecture, reflected in the namespaces. Another strategy is to just take very sensitive stuff and put it in its own namespace, for exactly this kind of reason. So those are namespaces — a nice little security speed bump. A lot of security problems are not necessarily malice but mistakes, or mistakes that get abused, so it's a nice way to maybe minimize some mistakes.
A
So we've got a couple more exercises. At this point I can do the poll one more time, just to see if there's any guidance on how people feel about the pace. We can throw that poll up now, before we move on to the non-root
A
user. Okay — and I see that we have one question here: what would be the recommendation for namespaces for external tooling, like cluster autoscaler, monitoring, logging, etc.?
A
I'm not sure I have a great answer. The stuff that's going to be in kube-system, and things like that — obviously, being able to mess with kube-system is a problem, and it has things that apply differently, like network policies, which just don't work if you apply them there. If you do have infrastructure stuff, it'd be nice to keep that separate from your applications, but I'm not certain
A
exactly what to tell you there, I'm sorry. It looks like, in terms of the poll about pacing, we have a few requests for slower and a fair number for the same pace — I'll shoot for the same, with a little bit of slowing down. If you do have a question, or something wasn't clear, please do put it in the Q&A so that we can help.
A
Okay, all right — let's go back to the exercises, then. We've got a couple more. This one is about using a non-root user. I mentioned earlier that one of the unfortunate defaults in Kubernetes is that everything runs as root, and because most people don't remap users, container root is the same as host root.
A
So we'll just use a simple shell to get into this. There's a simple app here that's going to run a server — the server doesn't do much; it just sort of listens on a port and logs to a path — and we'll show how you can fix it to work with a non-root identity.
A
You can also see it's a simple Docker image: it just goes FROM alpine and then copies the static binary in. We'll go ahead and deploy that application — it's got a little service — and we can go ahead and exec into it. Looks like it's running. The nice thing about Alpine stuff is that it's going to be really small. So here we'll be running as root, I believe — right: whoami, root. That means I can do lots of fun stuff.
A
There's a little host mount, I believe, so you can, you know, shove things out to the host. Maybe don't do this if you're going to keep your cluster; I'm going to throw this one away at the end. There are other things you want to think about; this is similar to the host-mount problem. So, basically, it's showing that you can modify the host with this root identity.
A
This is comparing a new version of the YAML to the one we have deployed. This one just adds a security context option that says runAsNonRoot: true. Now, we haven't actually changed the image to use a non-root user.
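For reference, the change being described looks roughly like this in YAML (the deployment and image names here are placeholders, not the exact ones from the demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-server
  template:
    metadata:
      labels:
        app: simple-server
    spec:
      containers:
      - name: server
        image: example/simple-server:latest   # placeholder image
        securityContext:
          runAsNonRoot: true   # kubelet refuses to start the container if it would run as UID 0
```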
A
So we can see what happens if we just set the setting. You know, I think the first time I applied this I thought, oh great, cool, it'll just pick a non-root user and run my app under it. That's not what it does.
A
So if you go see what the pods are doing now: here's our old one, the one we deployed, you know, two minutes ago, and then this is the one we just tried to deploy, and it's telling us, hey, something's wrong here. So if you go ahead and describe the server, it's actually got a pretty good error message at the bottom. It's saying the kubelet is complaining: it got all the way to the kubelet, got scheduled, and then the kubelet says, hey, this container has runAsNonRoot set, but the image is going to run as root. This is a place where your configurations apply across the stack, where we've got
A
you know, something in the image and something in Kubernetes kind of combining, and they're getting enforced by the kubelet when the kubelet starts to run the container. So we actually kind of forgot to change the image, and now we can fix that before we go. In the Dockerfile, all I have to do is add a USER instruction. Usually you'll do this last in the Dockerfile, so that you can do all your app installation and whatever first, and then sort of switch down to a different user. You can pick different numbers; it's best to have a little bit of strategy around this, and people often use these high numbers because they're never going to exist in the actual infrastructure.
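A minimal sketch of the Dockerfile change being described (the binary name and UID are illustrative):

```dockerfile
FROM alpine:3.12
# Copy in the static binary; do any installation steps that need root first.
COPY server /server
# Switch users last. A high UID avoids colliding with real accounts on the host.
USER 10001
ENTRYPOINT ["/server"]
```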
A
Okay, so we've changed the image to have that USER instruction, and then the app actually also needed to be modified a little bit, because it was listening on port 80, and if you're not root, by default you can't do that; you've got to use a higher-numbered port. So we can just change the command entry to use a non-default port, stop using the file system, and use a port that's not in that privileged range of 1024 and below. And the nice thing about Kubernetes having pods and services separate is that you can change the container port from 80 to 8080, then change your service object to point to 8080, so you don't have to change your external exposure; it'll still remap 31303 to 8080 in this case. So now we can roll out that change.
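Sketching the pod/service split described above (names and the node port are taken from the demo; the rest is illustrative): the container now listens on 8080, while the Service keeps the external exposure stable and just retargets it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simple-server        # placeholder name
spec:
  type: NodePort
  selector:
    app: simple-server
  ports:
  - port: 80                 # the service-facing port stays the same
    targetPort: 8080         # retargeted from 80 to the unprivileged port
    nodePort: 31303          # external exposure is unchanged
```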
A
This time we might be some unknown UID, and if we try to do the same thing that root could do before, it just says: no, we're not allowed to do this.
A
That's
actually
pretty
neat,
it
shows
you
how
a
safeguard
in
your
yaml,
the
you
know,
run
as
non-root.
True
can
enforce
things
about
your
image
policy,
just
as
a
aspect
of
your
image
configuration.
You
can
also
try
to
enforce
this
in
your
in
your
images
as
well,
but
the
kubernetes
config
here
will
help
you.
You
know,
keep
that.
Keep
that
on.
A
You
can
also
try
to
require
that
that
setting
be
on
in
all
your
deployments,
which
means
that
all
your
images
will
have
to
run
as
non-root,
but
there
will
be
some
of
those
application
changes,
sometimes
so
like
either
changing
port
numbers
or
changing
the
behaviors
to
to
not
use
the
path
or
something
like
that,
it's
better
to
use
standard
out
anyway
for
logs,
for
instance.
So
this
wasn't
too
disruptive.
A
Okay,
I
see
we
have
a
couple
questions
here.
What
would
be
the
difference
between
using
a
user
in
the
docker
file
versus
using
run
as
user
from
the
security
context?
That
is
a
good
question.
A
I
don't
know,
if
not
my
head,
I'm
sorry
that
I
didn't
didn't
prepare
that
specific
part.
My
recollection
is,
you
know,
the
fs
groups
and
stuff
will
be
like
an
additional
group
identity
run
as
user.
I
I
am
not
fresh
on
sorry
and
then
what
about
running
sidecar
containers
that
are
running
as
root,
my
use
cases
which
engine
x
stable
alpine?
A
So
with
all
this
all
this
actual
all
these
exercises,
there's
there's
sometimes
going
to
be
good
use
cases
for
for
for
root
or
for
privileged
or
for
host
mounts
like
datadog,
was
an
example
of
something
that
had
lots
of
mounts,
but
it's
actually
useful
amounts
because
it
does
a
thing
for
you.
It
monitors
your
containers,
it
you
know,
does:
does
those
performance
monitoring
log
irrigation
all
that
kind
of
stuff?
So
what
what
this?
A
What
the
framing
of
all
this
stuff
is,
is
sort
of,
not
that
you
might
not,
that
you
should
never
use
them,
but
that
you
should
be
using
them
carefully
and
in
most
cases,
applications
don't
need
this
level
of
access.
So,
but
if
they
do,
you
know
it's
actually
a
risk,
you
can
take,
there's
just
one
risk.
You
probably
don't
need
to
be
taking
with
all
your
applications
like
this
little
simple,
dumb
server
doesn't
need
to
run
as
root.
A
All
it's
doing
is
serving
you
know,
a
simple,
simple,
simple
little
request:
it
doesn't
need
to
be
running
us
roots,
so
it
can
be
import
80..
A
So
yeah
running
as
an
unroot
can
be
a
good,
a
good
way
to
to
minimize
your
security
exposure
a
little
bit.
I
mentioned
earlier
that
the
run
c
cve
kind
of
got
people
more
attuned
to
this.
That
cve
was
an
example
where
run
c
itself
could
be
abused
to
overwrite
things
on
the
host
if
you
were
running
as
root
or
as
an
appropriate,
privileged
user.
A
So
if
you
were
running
not
as
root
you're
you're
in
a
much
better
position
in
your
application,
okay,
so
let's
take
one
more
question
too
great-
would
be
possible
to
configure
all
the
pods
or
containers
not
to
run
as
root
with
some
cluster
level
configuration
instead
of
making
changes
in
each
pot
or
deployment
yemel.
So
to
my
knowledge,
no,
you
can't
enforce
that.
A
You
can
require
that
things
work
this
way,
but
you
can't
necessarily
do
it
unless
you
tried
to
do
some
like
mutating
web
hook
or
something
that
I
I'd
be
careful
with
that
kind
of
mutating
strategy
where
you're
changing
deployments,
because
it
will
make
it
harder
to
test
and
relies
on
the
cluster,
changing
things
probably
better
to
have
things
specified
in
yml's.
A
You
know
in
in
whatever
place,
you're
storing
animals
or
configurations
okay,
and
if,
if
I
haven't
answered
in
either
any
of
these
questions,
do
feel
free
to
do
a
follow-up.
Okay,
so
another
could
using
the
sidecar
with
run
as
user,
be
a
solution,
since
we
don't
have
control
over
the
image
itself
without
adding
the
overhead
of
maintaining
a
copy
of
the
image
sure
so
yeah
in
most
cases,
or
in
this
case
I
was
assuming
that
we
built
that
application
image
and
the
container
image
was
kind
of
our
our
creation.
A
So
we
could
go
ahead
and
put
in
the
image
as
a
nice
default,
but
if
you're,
for
instance,
bringing
in
vendor
solutions
or
something
or
taking
an
official
image
run
as
user
could
be
a
nice
way
to
fix
that.
That's
a
good
question
in
that
case,
you'd
still
have
the
same
kind
of
you
know.
Consistency
problem
between
run
as
non-root
you'd
have
to
make
sure
that
you,
if
you
set
that
you'd,
also
have
to
go
set
your
run
as
user.
A
I
typically
just
ended
up
doing
the
image
thing,
because
I
tend
to
be
working
with
applications
that
that
I've
I've
originated
versus
ones
that
are
you
know,
from
vendors
or
outside
outside
libraries
and
stuff.
A
Okay,
so,
and
then
there
are
some
examples,
also
like
nginx,
where
it
will,
it
will
start
as
a
root
and
then
then
create
worker
processes
with
a
lower
identity.
So
those
I
tend
not
to
to
necessarily
worry
about
with
the
root
thing,
although
it'd
be
nice,
if
we
didn't
have
to
do
that
great.
Thank
you
for
the
questions.
Folks
and
again,
please,
you
know,
even
if
we
I'm
about
to
move
on
to
the
privileged
mode
one,
but
if
you
have
more
questions,
feel
free
to
add
okay.
A
So
let's
talk
about
avoiding
privilege
mode,
this
one's
a
fun
example
from
ian
coldwater
and
duffy
cooley
that
they've,
given
in
a
few
places
so
he's
got
this
lovely
like
one
tweet
to
root,
so
we
can
go
ahead
and
like
deploy
nginx,
I'm
gonna,
do
it
as
a
deployment
and
scale
it
just
to
get
one
running
on
every
machine.
It's
not
it's
not
necessarily
required
for
this,
but
it's
an
easier
way
to
adjust
or
to
to
demonstrate.
A
So
I
copied
out
the
the
command
here.
So
you
can.
You
know,
click
and
paste,
we'll
just
run
this
in
our
cluster
and
see
all
the
things
you
can
do
with
a
privileged
container
press
enter
okay
cool.
We
got
our,
we
got
our
command
and
it's
pretty
printed
down
here
also,
so
you
can
see
kind
of
it
to
fit
in
280.
You
can't
use
line
breaks,
but
you
can
see
all
the
container
all
the
options
here.
A
So
it's
enabled
you
know
hostp
namespace,
true,
so
it'll
go
and
it'll
start
processes
in
the
same
name
space
as
the
host.
Then
it's
privileged
as
well
in
the
security
context,
we'll
attach
a
terminal,
that's
easy
and
then
the
command
is
that
it's
going
to
enter
the
the
host's
process,
one
so
the
host's
initialization
process
and
just
running
a
simple
outline
image,
which
is
pretty
fun
here.
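As a hedged reconstruction, the kind of pod spec that one-liner expands to looks roughly like this (the pod name and shell path are illustrative; don't run this on a cluster you care about):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: r00t                 # illustrative name
spec:
  hostPID: true              # share the host's PID namespace
  containers:
  - name: shell
    image: alpine
    stdin: true
    tty: true
    securityContext:
      privileged: true       # full device and capability access
    # Enter the mount namespace of the host's init process (PID 1),
    # which effectively gives a shell on the host.
    command: ["nsenter", "--mount=/proc/1/ns/mnt", "--", "/bin/sh"]
```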
A
So
just
as
a
couple
examples,
this
is
a
place
where,
if
you've
got
some
interesting
ideas,
you
can
go
ahead
so,
but
you
can
see
that
we
can
see
all
the
processes
that
are
running,
including
in
other
containers
and
go
find
nginx.
So
we
just
entered
you
know
this
deployment
ran
on
one
host
and
then
we
can
go
find
all
the
nginx
things
running
somewhere
else
or
running
on
that
same
node.
We
can
even
enter
the
namespace
of
those
processes
and
see
what
is
going
on.
A
You
can
see
the
connections
that
are
open,
we're
listening.
We
can
go,
see
the
engine
config
in
that
other
container,
which
is
also
kind
of
fun.
A
Let's
see
this
is
pretty
bog
standard
nginx
and
then
we
can
even
go
back
to
the
host
namespace
and
see.
What's
the
namespace
see
what
it's
like,
some
things
on
the
host,
so
lots
of
fun
if
you've
got
host
pid
and
privileged.
A
If
there
are
any
other
things
you
want
to
share,
please
please
throw
in
the
q
a
we
can
talk
about
it,
so
I'm
not
going
to
keep
poking
around
here.
I
want
to
make
sure
we
stay
on
time,
but
feel
free
to,
and
yours
and
the
kind
of
measure
here
is
like
don't
do
that,
so,
if
you're
not
really
needing
privileged
or
hostped,
you
know
don't
use
them
because
they
provide
this.
This
level
of
access,
most
applications
shouldn't
need.
This
kind
of
you
know
elevated
host
access.
A
This
is
the
kind
of
thing
that
that
you
know
some
built-in
policies
and
stack
rocks,
for
instance,
will
look
for
just
kind
of
this
kind
of
stuff
and
then
the
way
that
so,
if
you
want
to
actually
enforce
this
is
happening,
you
want
to
use
emission
control
whether
you're,
like
a
dynamic
condition,
controller
like
stack,
rocks
or
a
pod
security
policy,
or
something
like
that
to
block
those
containers
by
default.
A
Unless
you
know
what's
going
on
so
if
we
just
change
a
couple
things,
this
is
gonna,
take
host
pit
and
make
it
false.
We
can
see
it
will
not
work.
So
we
can't
you
know
we
could
probably
fix
it
to
work,
but
it's
not
gonna
be
able
to
use
bash
because
it's
not
able
to
get
into
pid
one
on
the
host.
It's
not
really
helpful
and
then
the
privileged
false
also
will
prevent
it
from
being
able
to
use.
A
You
know
these
these
mounts,
so
you
know
those
those
two
are
sort
of
a
deadly
combination,
so
in
terms
of
what
you
can
do
instead,
instead
of
using
privileged,
there
are
some
capability
settings
in
security
context.
Where
you
can
say,
I
don't
need
privilege,
but
I
would
like
certain
linux
capabilities
like
I'd
like
to
be
able
to
use
raw
sockets
or
something
like
that.
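The capability settings mentioned here look roughly like this in a container's security context (the added capability is just an example of the raw-sockets case):

```yaml
securityContext:
  privileged: false
  capabilities:
    drop: ["ALL"]          # start from nothing
    add: ["NET_RAW"]       # add back only what the app needs, e.g. raw sockets
```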
A
There's
some
interesting
tools
called
one's
called
capable
from
ben
and
craig
that
can
tell
you
sort
of
which
capabilities
you're
using
and
but
going
back
to
sort
of
the
declarative
immutable
world
of
kubernetes.
You
know
all
these
changes
can
go
through
ci
and
testing
like
any
code
change,
because
they're
right
in
the
yaml,
which
is
kind
of
nice,
there's
a
whole
talk
about
privileged
containers
from
b-sides
sf
this
year
before
we
all
begin
sheltering
in
place
that
you
might
want
to
watch
about
privileged
containers
too
cool.
A
The
final
one
is
actually
going
to
be
resource
limits
before
we
wrap
up,
so
we've
been
able
to
stay
mostly
on
time,
and
for
this
one
I'm
going
to
show
you
sort
of
like
what
research
limits
are
and
what
happens
if
you
don't
apply
them,
and
this
is
actually
an
interesting
place
where
people
might
think
of
this
more
as
like
an
operations
thing
or
a
reliability
thing,
but
in
security
we
often
concentrate
more
on.
You
know,
confidentiality
integrity,
and
we
forget
about
the
a
and
cia
triad.
A
We
don't
forget
about
availability
being
actually
a
key
responsibility
as
well.
So
this
is
a
place
where
you
know
people
across
across
our
our
different
communities
can
can
derive
some
some
shared
value
out
of
out
of
a
single
configuration
sort
of
like
labeling
helps
everyone
understand
what
things
are
you
know
applying
resource
limits
can
help.
You
keep
your
app
available,
whether
you're,
primarily
in
an
ops
role
or
primarily
in
security
or
whether
you're
in
one
of
those
unicorn
devsecops
teams
that
I'm
told
exist.
A
So
in
this
one.
It's
inspired
by
sort
of
a
memory
exhaustion
attack
that
has
gone
around
in
various
languages
and
in
particular
in
kubernetes.
There
was
a
it's
called
billion
laughs
and
yaml.
The
laughs
comes
from
from
xml
when
it
was
called
lol
or
something,
but
yaml
and
xml
are
sort
of
complex,
too
complicated
for
their
own
good.
Sometimes
you
can
build.
These
exploding
manifests
that
that
reference
each
other
in
in
many
many
many
ways
and
just
take
up
too
much
memory.
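For illustration, the exploding-manifest trick abuses YAML anchors and aliases; a deliberately shrunken sketch of the shape (don't feed large versions of this to an unpatched parser):

```yaml
a: &a ["lol", "lol", "lol"]
b: &b [*a, *a, *a]     # each level multiplies the expansion
c: &c [*b, *b, *b]
d: [*c, *c, *c]        # a few more levels of this reaches billions of strings
```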
A
So
this
is
actually
the
subject
of
crimini's
cve,
and
this
is
a
reason
why
you
might
want
to
be
limiting
your
api
server
access,
especially
against
the
entire
internet,
because
you'll
have
a
lot
of
if
you're
running
an
unpatched
version
and
someone
can
just
come
up
and
blow
up
your
api
server
by
sending
a
manifest.
That's
not
very
good!
A
So
it's
inspired
by
this.
It's
not
quite
the
same.
This
little
basic
application
called
memory.
Exploder
just
take
the
parameter
and
we'll
allocate
a
slice
that
is
as
big
as
the
parameter
you
send
it.
A
So
obviously,
that's
not
something
you
should
ever
do,
but
if
your
application,
you
know,
takes
user
input,
it
may
be
processing
something
complicated
like
yaml
or
json,
or
maybe
doing
something
complicated
on
the
user's
behalf.
It
makes
sense
to
to
try
to
limit
that
this
is
a
little
handler
that
just
just
takes
a
parameter
and
and
makes
that
slice.
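The handler itself isn't shown in the transcript, but a minimal sketch of what's described might look like this in Go (the parameter name, route, and port are assumptions, not the demo's actual code):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// allocate builds a byte slice of n bytes, driven entirely by user input.
// This is deliberately unsafe: it exists only to demonstrate what memory
// exhaustion does to a pod with no resource limit.
func allocate(n int) []byte {
	return make([]byte, n)
}

func exploderHandler(w http.ResponseWriter, r *http.Request) {
	n, err := strconv.Atoi(r.URL.Query().Get("size"))
	if err != nil || n < 0 {
		http.Error(w, "bad size", http.StatusBadRequest)
		return
	}
	buf := allocate(n)
	fmt.Fprintf(w, "allocated %d bytes\n", len(buf))
}

func main() {
	http.HandleFunc("/exploder", exploderHandler)
	http.ListenAndServe(":8080", nil)
}
```

Calling it with a huge `size` value is what drives the eviction behavior shown in the next step.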
A
So
we
can
go
ahead
and
deploy
it.
This
one
will
not
have
any
limits
or
anything
when
we
deploy
it
by
default.
You'll
have
some
kind
of
resource
requests
applied
in
most
environments,
usually
pretty
minimal,
like
10
millicourse
or
something
like
.01,
of
course,
but
you
won't
have
a
limit
so
we'll
see
how
that
works.
A
If
I
just
call
my
application
with
a
little,
you
know
address
it's
going
to
say:
yep
cool,
I
got
you
know
what
12
40,
96
bytes
or
whatever
you
know.
It's
not
not
much
go
ahead
and
try
to
do
the
kubeco
top
here
before
I
break
it
looks
like
metrics
may
not
be
available,
we
can
get
pods
and
that
happening.
A
So
now
we
can
try
to
allocate
a
ton
of
memory
and
see
kind
of
what
happens.
It's
just
going
to
hang
here
because
it's
going
to
allocate
a
bunch
of
memory
and
we
should
see
the
pod
depending
on
your
cluster
it'll,
either
kind
of
get
evicted
or
potentially
killed
or
something
won't
get
them
killed
without
a
limit,
but
you'll
you
may
see
it
get
evicted.
A
Sometimes
it
takes
a
little
while
it's
kind
of
like
watching
water
boil,
but
this
this
request
obviously
won't
do
the
thing
it
looks
like.
We
did
in
fact
get
a
couple
pods
evicted
here
and
that's
trying
to
replace
them.
So
I'm
gonna
kill
that
control
c
and
then
try
to
give
this
pot
a
chance
to
come
back.
A
So yeah, it got evicted because the node had some memory pressure. That's an example of one application that's able to, you know, tell Kubernetes, okay, I'm in trouble, and cause other applications to get moved around. One of the other pods was probably evicted because ours was using too much memory; you'll see a different error if you see that. Different places will have either docker stats or top, and you can see that it was using a ton of memory.
A
So
the
way
to
stop
that
is
to
apply
some
limits
and
in
this
case
like
we
started
off
with
empty
resource
requests
or
limits
and
then
added
some
requests
so
and
then
some
limits.
So
you
know
people
will
have
different
opinions
on
how
these
should
go.
The
one
one
thing
to
remember
is
if
your
request
and
limits
are
the
same:
you'll
get
guaranteed
quality
of
service,
because
you're
agreeing
to
be
limited
and
you're
requesting
exactly
what
you
need.
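The requests-and-limits shape being discussed looks roughly like this on a container (the numbers are illustrative, not the demo's):

```yaml
resources:
  requests:
    cpu: 100m          # what the scheduler reserves for the pod
    memory: 64Mi
  limits:
    cpu: "1"           # throttled above this
    memory: 640Mi      # OOM-killed above this
# If requests and limits match exactly, the pod gets the Guaranteed QoS class.
```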
A
Otherwise,
you
get
sort
of
a
best
effort
scheduling.
So
here
we
can
see
like
I'd
like
to
request.
Like
some
some,
you
know
medium
amount
and
then
say
I'll
limit
myself
to
something
10
times
higher.
Some
people
like
smaller
multiples
on
that,
but
it's
normal
to
have
your
limits
oversubscribed
and
what
happens
with
the
cpu
limit.
Is
it
will
sort
of
knife
your
process
down,
it'll,
just
slow
it
down
with
memory
you'll
get
killed.
If
you
use
too
much
memory,
it's
a
little
bit
brute
force.
A
So
so
that
can
be
a
that's.
That's
one
reason
to
be
careful
with
your
limits
here,
so
we
can
go
ahead
and
deploy
that
and
it'll
be
on
the
same
node
port.
So
we
can
go
ahead
and
see
if
our
pod
comes
up,
we've
got
one
running
great
and
then
we
can
go
get
it
and
weird,
so
it
failed
fast
and
it
filled
with
this
empty
reply
from
the
server
which
is
different.
A
Now
we
can
go
check
that
pod
and
see
it's
still
running,
but
it
got
actually
got
restarted
once
and
if
we
want
to
go
describe
it,
we
can
see
what
happens.
It
will
tell
us
in
an
event.
It
should
tell
us
actually
up
in
the
in
the
status.
A
We
should
have
a
thing
about
last
date,
yeah,
so
the
last
state
was
terminated.
It
got
killed,
so
it
got
out
of
memory
and
was
killed
and
then
exited
with
137
to
128,
plus
9
first
kill.
So
it
shows
us
how
we
got
got
killed
for
this
okay.
So
there's
a
good
question
right
now
is
how
to
estimate
cpu
and
memory
limits
for
the
app
developers
will
ask
how
to
do
that.
It's
a
good
question.
I
don't
have
an
easy
answer.
A
I
think
it
depends
a
little
bit
on
your
type
of
application.
So
if
you're
in
go
one
thing
that
we
do
is
use
like
the
built-in,
profiler
and
stuff
that's
to
be
really
helpful,
because
it
can
show
your
heat
profile,
how
much
you're
using
that
kind
of
thing.
It's
gonna
be
some
app
knowledge
here
too.
A
So
if
you
know
that
an
application
is
going
to
be
processing
a
lot
of
data,
doing
a
lot
of
in-memory
stuff,
allocating
and
deallocating
you
might
want
to
you-
know,
bump
it
up
a
little
higher,
often
you're,
not
really
walking
a
fine
line
on,
like
you
know
which,
how
many
megabytes
you're
allowed
to
have,
but
you
can
kind
of
so
you
can
kind
of
do
something
reasonable
and
then
monitor
it
in
a
real
workload
and
make
sure
that
it's
not
getting
killed
and
make
sure
that
your
performance
isn't
suffering
because
of
your
cpu
limit
yeah
in
particular,
because
boom
kills
are
disruptive.
A
I
typically
would
say
you
know,
set
your
memory
limit
higher
than
you
think,
as
kind
of
a
stopping
runaway
train
thing
not
as
a
not
as
a
more
fine-tuned.
A
Limit
because
they
will
just
get
killed
and
that
that
actually
can
can
lead
to
more
memory
pressure
if
you're,
if
you're
having
to
redo
work
or
something
so
any
recommendations
on
requests
or
limit
ratios.
My
experience
is
that
sometimes
they're
too
high
and
they
have
noisy
neighbors
or
if
it's
too
high,
you
have
noisy
neighbors
if
it's
too
low
you're
over
committing
the
ratio.
A
I don't tend to agree with that personally, because it doesn't let you have any kind of bursting. So, I don't know: somewhere around two or higher, probably less than ten. My colleague Karen is also mentioning to me that you can use the Kubernetes Vertical Pod Autoscaler to try to find app resource settings automatically, so that's a nice Kubernetes resource you can use.
A
I
don't
have
any
other
tools
that
I
could
recommend
and
you
will
search
for
finding
out
the
cpu
memory
utilization,
so
a
couple
top
will
do
a
decent
job
for
running
pods
if
you're
in
a
application.
As
far
as
going
with
go
like,
if
you
have
go,
you
can
try
to
use
their
built-in
profiling
stuff
with
java.
A
I
know
you
know
I
don't
do
too
much
java
these
days,
but
but
you
know,
the
heat,
profiling
and
stuff
will
be
potentially
useful
in
terms
of
cpu,
I
tend
to
say,
like
just
run
it
with
a
real
workload
and
see
if
you're
getting
nice
down
or,
if
you're,
if
you're,
getting
if
you're
oversubscribed.
So
that's
my
opinion,
but
I
have
less
fanoffs
back
on
more
of
a
security
and
engineering
background.
So
I
I
you
know,
don't
listen
just
to
me
and
then
here.
A
It's
also
reminding
me
that
the
jvm
stuff,
remembering
that
your
your
heap
size
that
you
allocate
to
the
jvm,
is
not
actually
going
to
be
with
your
full
footprint.
So
the
java
process
will
get
in
memories
and
other
stuff
will
get.
Libraries
will
get
loaded
into
memory
and
there'll
be
some
overhead
on
top
of
your
heap.
A
So
if
you're
tuning,
you
know
just
be
careful,
understand
the
application
that
you're
trying
to
tune
and
make
sure
that
you're
applying
the
right
limit
for
the
linux
process
for
the
linux
container
and
not
you
know
making
that
match
what
the
jvm
thinks
or
something.
A
Thanks
james
cool,
so
the
how
to
use
it
yourself,
obviously
is
to
go
ahead
and
add,
requests
and
limits.
Let
me
talk
a
little
about
how
you
might
be
able
to
get
those
numbers
in
practice.
These
things,
I
think,
take
a
little
bit
of
time
to
to
settle
on
so
some
of
those
you
know
you'll
figure
out.
This
is
actually
the
too
low
cpu
limit
with
rsa.
A
That's
a
very
personal
problem
that
I
had
where
we
tried
taking
a
very
quiet
application
very
to
a
very
low
cb
limit
and
then
trying
to
do
tls
doesn't
work
so
with
like
a
0.1
cpu.
A
So
that's
been
a
problem
in
the
past
for
me
and
then,
if
you
see
your
stuff
getting
room
killed
a
lot
then
obviously
you'd
want
to
maybe
up
either
figure
out
your.
A
You
you'd
want
to
maybe
either
fix
your
app
or
or
fix
the
the
the
limit.
So
that's
actually
it
for
the
exercises.
I
hope
that
was
sort
of
a
rural
one,
tour
of
a
bunch
of
different
communities,
resources
I'll
do
a
little
bit
of
wrap
up
first
and
then
we'll
be
able
to
answer
some.
Any
final
questions.
A
Sorry
for
bumping
you
through
all
those
slides,
so
the
one
thing
I
want
to
talk
about
is
with
all
those
configurations.
You
know
you
want
to
get
these.
You
want
to
get
better
at
some
of
these.
You
want
to
start
using
one
of
these
or
two
of
these
in
your
in
your
environment.
You
know
I
without
actually
losing
the
velocity
that
brought
you
to
kubernetes
or
containers.
A
So
you
know,
let's
figure
out
how
to
do
that.
So
without
using
velocity
or
friends,
these
things
can
be
very
contentious.
Sometimes
so
an
enforcement
workflow
is
one
of
the
neat
things
in
kubernetes
is
trying
to
use
a
mission
control.
This
is
an
example
from
one
of
my
colleagues
blogs
about
admission
control
in
general,
so
he
has
a
good
blog
on
the
kubernetes
that
io
blog
about
you
know
how
you
can
do
emission
control.
A
It
can
be
a
little
complicated
and
we've
figured
out
a
lot
of
this
stuff
within
stack
rocks.
So
you
know
it's
a
feature
of
the
stackrock's
security
platform
as
well,
but
you
can
see
that
the
feedback
is
right
here.
So
when
you
create
this
resource,
you
immediately
get
the
error
before
it
even
tries
to
deploy,
saying
like
hey,
you
actually
ask
for
run
as
non-route,
but
you
forgot
to
do
run
as
user.
A
It's
the
right
time
to
give
the
feedback
is
like
before
you
deploy
before
it
starts
breaking
or
before
you
try
to
like
kill
something
for
for
not
violating
your
policy
or
for
not
following
your
policy,
a
much
more
friendly
place
to
to
see
and
then
so.
I
have
sort
of
a
little
stage
rollout
kind
of
plan,
one
one
way
you
could
get
started
with
all
these
things.
So
I
like
to
say,
like
start
with
the
easy
stuff
that
helps
everybody.
So
you
know
annotate
label
your
deployments
consistently.
A
So
people
know
what
they're
looking
at
start.
Using
concrete
image.
Tags
like
latest
is
convenient
but
can
lead
to
you
not
knowing
what
actually
is
deployed.
You'd
have
to
look
at
shas,
which
is
less
convenient
than
tags,
and
then
you
can
start
scanning
images
using
either
registry.
Like
way,
a
security
platform
like
stack,
rocks
to
try
to
figure
out
sort
of
your
heaviest
weight,
images
and
figure
out
where
the
cvs
are
and
the
fixable
ones,
especially
that
helps
everybody.
A
We
can
continue
with
other
self-contained
changes,
sort
of
app
by
app
or
sorry
self-contained
that
you
affect
different
communities
at
once,
so
limiting
network
access
to
the
api
server
is
great,
especially
if
you're
worried
about
kubernetes
api
service
cves.
There
have
been
a
bunch
of
them
in
the
last
year,
so
you
can
start
disabling.
A
That
automatic
service
account
mount
either
nancy
it's
by
namespace
or
app,
and
then
you
can
try
to
look
at
if
you've
got
people
that
have
cluster
admin,
you
can,
you
know,
think
about
replacing
them
with
a
little
more
scoped
access,
so
running.
You
know
our
product
will
show
you.
People
that
are
cluster
admins
and
maybe
help
you
figure
out
what
to
do
with
them
and
then
moving
on
like
to
especially
the
stuff
we
showed
today.
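Disabling the automatic token mount is a one-line spec change on a pod; it can also be set on a ServiceAccount to cover everything using that account (names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod           # illustrative name
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: example/app:1.0     # placeholder image
```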
A
You
can
work
on
those
kind
of
cross-functional
changes
that
might
affect
different
people
app
by
app,
which
is
nice.
You
don't
have
to
do
a
whole
cluster.
You
know
migration.
You
can
try
a
read-only
root
file
system
for
certain
services,
especially
the
stateless
ones.
Maybe
you've
got
some
web
front
end
or
whatever,
and
that
can
help
you
can
make
sure
resource
requirements
are
specified
that
you
know
goes
obviously
potter
deployed
by
deployment,
and
then
you
can
start
because
network
policies
are
opt-in.
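For a stateless service, the read-only root file system is another one-line security-context setting; apps that still need scratch space can be given an emptyDir mount (names here are illustrative):

```yaml
containers:
- name: web-frontend             # illustrative name
  image: example/frontend:1.0    # placeholder image
  securityContext:
    readOnlyRootFilesystem: true
  volumeMounts:
  - name: tmp
    mountPath: /tmp              # writable scratch space, if the app needs one
volumes:
- name: tmp
  emptyDir: {}
```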
A
You
can
go
app
by
app
and
say,
like
hey
for
my
most
sensitive
things,
I'll,
add
an
ingress
network
policy
and
work
up
to
having
them
for
everything,
and
you
can
do
the
same
thing
for
egress.
If
you
know
certain
apps
you'll
never
need
to
do
egress,
you
can
just
go
ahead
and
apply
a
change
there.
It's
nice
because
you
don't
have
to
say
all
right-
everybody
starting
friday
at
00
pm
no
egress
allowed.
Unless
you're
on
this
list.
You
can
just
go
application
by
application
test.
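An ingress policy for one sensitive app might look roughly like this (the labels are placeholders): only pods carrying the allowed label can reach it, and nothing cluster-wide changes.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sensitive-app-ingress
spec:
  podSelector:
    matchLabels:
      app: sensitive-app       # the app being protected
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend       # the only callers allowed in
```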
A
The
changes
the
same
way
you
test
different
kinds
of
you
know,
yaml
configurations,
that's
nice
and
then
just
keep
going
so
there's
there's
always
more
features
coming
out.
Kubernetes
has
been
adding
different
kinds
of
things.
You
can
do
to
help
lock
down
your
cluster,
and
just
this
is
a
loop.
This
is
not
a
you
get
to
one
place
and
you're
secure
forever.
You
know
doing
consistent
monitoring
visibility.
A
You
know,
inspection
is
going
to
help
you
in
the
long
term,
that's
a
place
where
actually
I'm
a
product
manager
and
that's
one
of
the
reasons
why
we
built
a
product
like
we
do,
which
enables
visibility
and
sort
of
debugging
and
inspection
and
runtime
monitoring
so
that
you
can
keep
going
through
that
that
loop
of
fixing
things
and
building
the
deploy
watching
them
in
runtime,
finding
out
new
things
to
fix,
so
that
you
can
continually
improve
your
security
posture
with
all
the
contacts
that
we
have
from
kubernetes
great.
A
So
there
are
a
couple
links
here,
we'll
share
this
with
you
after
I
wrote
a
blog
about
the
security
audit
which
is
available
at
that
link.
There
are
some
good
coupon
videos
about
security,
especially
a
workshop
somewhat
like
this
one,
which
I
took
some
inspiration
from
and
then
the
stackhawks
blog.
You
can
also
email
me
or
reach
out
to
me
on
twitter,
I'm
happy
to
answer
questions
and
I
promise
to
answer
you
faster
than
sometimes
I
have
in
the
past.
A
There's
one
final
plug:
there's
a
meetup
group
that
we're
setting
up
that,
where
some
events
are
going
to
get
scheduled
with
karen
and
some
other
folks
from
from
stackrocks
and
there's
a
link
there
for
you
there
and
now
I'm
happy
to
take
any
additional
questions
before
we
hit
the
11
30
time.
A
I
see
we
have
one
in
here
so
far.
So
what
are
the
concerns
in
switching
your
cni
on
a
running
cluster?
The
reason
is
that
introducing
network
policies
are
not
supported
by
the
advs
vpc
cni,
so
and
kevin's
telling
saying
that
you
can
install
calico
anytime
in
eks
and
it
will
still
default
to
open
network
policies
until
you
start
adding
your
own.
A
This
is
this
does
bring
up
a
good
point
that
that
your
cni
provider
will
impact.
You
know
what
your
whether
you're
in
our
policies
have
any
effect
in
google,
for
instance,
there
will
be
a
an
option
that
you
need
to
do
for
enable
network
policies
and
different
platforms.
It'll
be
a
little
different.
Karen
has
written
some
great
explanations
of
you
know
the
the
details
of
every
one
of
those
platforms,
or
especially
the
eks,
which
I
believe
was
your
your
focus
mikael.
A
Great, I appreciate the poll results, and if there are any other things, you know, feel free to reach out to me or to any of my colleagues; there's my email and Twitter, and this meetup group will be, you know, getting some stuff started in the next couple of weeks. Popping back here: it's just the letter c at stackrox.com. Okay, I'll stay here in case there are any other questions, but otherwise I hope you enjoyed it.
A
And yes, the recording will be sent to you. Zoom is a little bit slow these days, because everyone is using it, so the video will come a little bit later, but we'll get it out to you as soon as we have it.
A
Okay,
we'll
go
ahead
and
close
the
zoom
now.
Thank
you
so
much
for
joining
us
we'll
send
that
video
out
to
you
as
soon
as
you
have
it
have
a
great
day.