From YouTube: Webinar: Zero Trust Services in Kubernetes
Description
In this webinar we will take a look at some of the most important techniques used to create Zero Trust services in a Kubernetes environment. This talk will cover concepts ranging from container image hardening to pod specification runtime constraints and from network policy to platform centric security. Upon completion, attendees will have a detailed understanding of some of the key mechanisms in Kubernetes tool chains available to facilitate Zero Trust computing.
Presenter:
Randy Abernethy, Managing Partner @RX-M
A: Okay, hi everyone, happy Friday. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Zero Trust Services in Kubernetes. I'm Kristy Tan, marketing communications manager here at CNCF, and I'll be moderating today's webinar. We would like to welcome our presenter today, Randy Abernethy, managing partner at RX-M. A few housekeeping items before we get started: during the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end of the presentation. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, please be respectful of all your fellow participants and the presenter. Please also note that the recording and slides will be posted later today to the CNCF webinars page at cncf.io/webinars.
B: Thanks, Kristy. Hey, good morning, good afternoon, good evening as appropriate, everybody. This is the Zero Trust Services in Kubernetes talk, and the idea behind this webinar is to cover a bunch of security concerns that face developers building services for modern cloud native environments. These days, developers are often faced with a lot of these concerns at once.

There are tons of security topics out there, of course, that we could talk about, specific to administering clusters or administering containerization processes and hosting, but we're really going to focus on the vantage of the developer, the DevOps team, the service reliability engineer, whatever the case may be: the people concerned with building those services and trying to do their job to secure them as much as possible.
So, in this particular context, we could define Zero Trust as a model where components trust nothing, and every single thing that is done by a particular component needs to be authorized; so authentication and authorization, of course, being the key components there. And so, in cloud native environments, we have this sort of mental model of a perimeterless platform, right? If you are running your software in an orchestrated cluster, where is that cluster? It's not really supposed to matter, right? It could be on Amazon or Google Cloud or Azure.
You don't control those environments or what they have access to, and there are bad actors cropping up everywhere, so we just need to encrypt everything on the wire, we need to encrypt everything at rest, and we need to think about auth in all scenarios; that's just perimeterless thinking. Least privilege falls into this as well: you want to give every component of your system the smallest set of privileges that you can possibly give it, and this goes back to Zero Trust, right?
Trusting nobody, right, except for the specific, authenticated, authorized activities. And so not only do I make sure I know that host B is host B, but I make sure that host B is only doing what host B is supposed to be able to do, cranking it down to another level. And security in depth means making sure that you're dealing with all of these concepts at every opportunity you get.
So what we're going to see is that this applies to service engineering, the software development process; it applies to containerization; it applies to the deployment of your applications in a cluster like a Kubernetes environment; and then at runtime, while the application is up and running in those environments, as well. So we're going to look at a lot of different things to bolster this: we're going to look at minimizing attack surfaces, and we're going to talk about assuming that you'll be compromised, right?
If you take that stance, and assume that this piece of software will be compromised, what can you do such that, when that fabled day happens, the attacker is bereft of any actual ability to escalate privileges, or to break or attack other components within your system? So, assuming you'll be compromised, and taking that vantage. This is hard, I think, for a lot of engineers, because we want to defend ourselves and make sure that we create this ironclad barrier around all of our software.
But in the end, there are always tricks, and there are things outside your control that could occur, specifically when you're in a containerized environment, because you have the container platform itself, whether that's Kubernetes or something else; you have the containerization process; and you have the supply chain now, all the build tools and things like that. So there's a lot of opportunity for new attack vectors to be leveraged, and some of them are outside the control of the developer.
So if we assume that we'll be compromised, maybe we can reduce the impact of that compromise. Auth all connections and traffic, obviously; provide software with the smallest set of permissions, so make sure your service has just the permissions that it needs. And one of the things that I find traditional engineering sort of skips is really taking ownership of the runtime nature of the application. Even in some DevOps environments, if you ask a team: hey, what are the exact capabilities and access permissions that your software needs?
They find it hard to articulate. And then, when you flip it the other way around and say, who are the exact legitimate clients that should be accessing your service, again it may be hard to articulate. But this is stuff you've got to articulate if you're going to create any kind of policy that's going to control it. It ties into resource consumption and things like that: all of these new things that we need to think about when we're operating in a shared pool. We used to run on our own machine.
We would think: okay, this is my box; I have the memory and the CPU it's got. When you run in a shared pool of resources like a container orchestration environment, you have to say how much memory and how much CPU you'd like to have, with limits and requests and things like that, and it's the same for the security side of it.
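As a concrete sketch of that declaration (the container name and the values here are illustrative, not from the webinar's repo), resource requests and limits appear in a pod spec like this:

```yaml
containers:
- name: trash-levels          # illustrative container name
  image: trash-levels:v1
  resources:
    requests:                 # resources the scheduler reserves for the pod
      memory: "64Mi"
      cpu: "250m"
    limits:                   # hard caps enforced at runtime
      memory: "128Mi"
      cpu: "500m"
```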
You can report anything security related in your telemetry, specifically in logging, so that it has no impact on your application, but gives the people who are monitoring the environment that extra leg up that they might need in order to understand and control new attack vectors in containerized environments. We're going to talk about that. And then, never packaging unrequired bits: we'll see a lot of containers out there in production environments that have all sorts of stuff inside the file system of the container.
Stuff that's not needed by the service. There's a trade-off here that we'll talk about, but we'll also take a look at how to really strip it down to the minimum, and we'll also talk about never exposing unrequired functionality. There are scenarios where people do this accidentally, often in the name of observability, and so we'll take a look at some of those concerns as well.
So, lots of things to think about. When we look at securing small services (microservices, whatever you want to call them, or really any kind of service that you're going to deploy into an orchestrated environment), we have these five stages that you might want to think about. You could break it down other ways, of course, but this is not a bad checklist. First: service design and construction, the actual software engineering process.
You have to build things the right way so that they can be deployed in these kinds of environments correctly, so that they can be observed, but also secured. Then, service packaging and container image design. This is another step, where we're taking that service and putting it into an immutable container image and then deploying it: how can we make sure that that image is as secure as possible? Then, pod specification.
This is how we're going to tell the system to deploy these containers when they're run. The pod specification could be a template inside a deployment, or it could be something that's being set up in a replica set by Spinnaker or something else, but that pod specification is really the stock in trade of Kubernetes, and it's, for the most part, almost purely up to the team that builds the software to define that pod specification, because it's the software engineering team that knows how that container needs to run.
What are the resources it requires? What are the environment variables and configurations that it can tolerate or should have? And then, step four: platform-based pod policies. As an engineer working in a cloud native environment, you're also going to have to face challenges maybe outside of the things that you're familiar with. When you're going to a Kubernetes environment, we can have policies set up in namespaces that constrain what is doable by a pod, and so we'll look at that and talk about pod security policies and other things that fit into that category.
So that's the list that we're going to go through; we're going to hit each one of these, and I've got a short little demo set up for each of these that I'll show you guys. At the end of the day, there's a lot of time you could spend on each one of these, and I'm not at all trying to give you a comprehensive view of any of these things; I'm just trying to give you a little example of the sorts of stuff involved, and to make it concrete.
You may be asked to emit metrics data that can be scraped by a system, some SaaS system like Datadog, or a cloud environment, or Prometheus, or something like that, and then you may also be asked to emit certain kinds of logging information, that sort of stuff. These are opportunities to report security events, and security events aren't all bad, right? We're not just saying: oh, this terrible thing happened.
A denial of service attack occurred at such-and-such time: that's great stuff to report, of course, as well, but what you might also want to think about is just providing a baseline of information that's detailed enough that it gives people the opportunity to know when something changes. You can do a lot of this stuff at startup and really avoid any burden on the application at runtime. For example, why not log the user ID and group ID that the service is using? A lot of times, this is specified by the cluster, not the developers, right?
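As a minimal sketch of that idea (a hypothetical helper, not the webinar's actual code), a service can record its identity and a filesystem baseline once at startup:

```go
package main

import (
	"fmt"
	"os"
)

// startupReport collects a baseline of security-relevant facts at process
// start: the uid/gid the service actually runs under (often assigned by the
// cluster rather than the developer) and the top-level paths visible to it.
// Emitted once at startup, this costs nothing at runtime but gives the
// monitoring team a baseline to diff so they notice when something changes.
func startupReport() []string {
	report := []string{
		fmt.Sprintf("startup: uid=%d gid=%d", os.Getuid(), os.Getgid()),
	}
	// Enumerate what is visible at the filesystem root; volume mounts show up here.
	if entries, err := os.ReadDir("/"); err == nil {
		for _, e := range entries {
			report = append(report, "startup: root entry /"+e.Name())
		}
	}
	return report
}

func main() {
	for _, line := range startupReport() {
		fmt.Println(line)
	}
}
```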
Logging the volume mounts that are present inside the container, logging the namespace IDs that are present: just capturing a lot of startup stuff. I've built some pretty large trading systems in the finance space, and we've found this stuff to be really, really useful.
We just loaded in all sorts of really useful security-related information that described basically every single thing that was accessible to that particular service, so that we could then, at a later time, audit it and make sure that nothing was changing, or ask ourselves questions. A lot of times, just going through the process of making sure that you've enumerated all of the resources you've got access to from a particular service helps you rethink: do we really need access to that?
Maybe we should drop that, or figure out some way to limit it. Obviously, logging connections in and out is useful too. Another interesting thing is to think about some of the endpoints that you build into your services just to make them work in a dynamic environment: for example, metrics endpoints, and health and readiness endpoints. All of those types of things are often exposed to provide observability functionality, but that's a discrete set of functions, typically independent of your application's behavior. So your application endpoints and your observability endpoints should be distinctly controllable, right?
You should be able to say: these people have access to the application endpoints, and these resources or people have access to the observability information. The observability information often will give you deeper insights into the application, which may be very inappropriate for a lot of parties. So there are some easy techniques we can use to handle that. And then, obviously, mTLS everywhere; now, there's an interesting set of technologies in cloud native environments that have tackled a lot of these things.
They can add latency between services, because you've got proxies on both ends, so it's not for everybody, but it's definitely an interesting thing to think about. So: demo, service design. I'm going to go ahead and just do a simple example to show you how this impacts the developer, the types of things that you need to think about, to get us hip to some of the basics here. So again, keep in mind, these are just super simple demos designed to give you some context here.
To give you a sense of the type of work that's involved, I've got a little bit of a crutch script here, set up so that I don't have to make you guys suffer through my typing. So I'm going to go over here and I'm just going to clone a repo that we've got set up with a sample application in it, and I'll use this application for a few different demos here. Okay, so the next thing I'm going to do is build this thing: I just cloned it, and now I'm going to build it. Let me list the directory here.
You can see that I've got a simple Go program, and we'll talk about this in just a second, and then we've got a Dockerfile that is set up to build this Go application into an image that we can run. I'll walk through the Dockerfile in just a second, but I'm going to go ahead and just build it, and there we go. Okay, so the image is built; I pre-built it so that would go a little bit quicker, and now we're going to go ahead and run this guy.
And so, when we run this service, we can grab its IP address and hit it as just any user. Let me fix the container ID there; so I'm just going to grab the container ID. Okay, so we're just inspecting the service to get all of its metadata and looking for its IP address, and there it is: 172.17.0.2. So we're working with our app: we've built it, we've containerized it, and now I'm going to curl it. But what I'm going to do is curl the metrics endpoint in this service.
It's set up to be delayed on purpose, and then to perform faster or slower based on the memory that you've given it. But the idea here is that we say: okay, the trash level is at 15, but at the same exact IP and port I'm exposing the metrics endpoint. Now, you may have some very sophisticated firewalling capabilities, where you can say these paths are accessible and these aren't; that's fine, but the simpler thing to do would be just to say: hey, look, we probably don't want metrics on the same host and port.
That is, not on the same host and port as the main application functionality. And to change this would be fairly straightforward, right? We could just drop into the application. This is just a simple Go program, and we're using OpenMetrics, the Prometheus libraries for OpenMetrics, to expose standard metrics; we've just got some counts and things like that being added here. These are a bunch of handler functions, but down here in the main area...
Of course, we can see that we're listening and serving on that port with all of our routes: even metrics, the readiness probe that we've got set up, and the health check that we've got set up. All of these guys are on the same port, and so creating any sort of easy and simple separation is perhaps a bit problematic. So what we could do is just break these apart.
Just set up a new router, all right? So I'm just going to create a second router, and if you're not familiar with Go, that's fine; you get the idea, right? We're just creating a router, and routers in many programming environments are just tools that allow you to automatically hand paths and requests off to a block of code. We're using the Prometheus HTTP handler to handle our metrics, but we're going to move that handler.
That works, right? We can still retrieve the application's state, and we can tell it works because it didn't give us an error; it just takes a second to respond, with the heavy processing here to get the trash can level. Okay, so that works, and then if we try the metrics endpoint: no good, right? But if we change ports...
Okay, well, that looks like it failed, so I'll debug the source there; but, as you can see, we can no longer get to the metrics endpoint on port 80, which was really the goal, right? And then obviously making the metrics work again is important, but a simple code change like that gives us the ability to now easily distinguish between the operational aspects of the application and the standard application components. So, a lot of times, when we're building small services...
We need to think about the fact that we're going to operate in an observable environment, and yet these observability features need to be segregated out of the standard application operations. So the next piece is container packaging, and on the container packaging side of things we have concerns around container images. Images are the stock and trade of a containerized environment. When we deploy an image into a cluster, that image brings with it two basic things. The first is metadata: a config.json, more or less, with some...
High-level attributes of the image: things like the image's name; metadata on the label front (think key-value pairs, like licensing information or what have you); and what command to use to execute the program when the image is run without an explicit command from the end user, that sort of stuff. The other piece of the puzzle, in addition to the metadata in the config.json, is basically what equates to a tarball of the root filesystem of that container.
And so the concept of building a bunch of stuff into the container image, or using other images to create the image that you're going to run in production, introduces the opportunity to include additional content in that container image that doesn't need to be in a production environment. Introspection tools like netstat and ps are really useful to attackers: if somebody breaks into your application and somehow gets the ability to exec a process, you want to limit the options that they've got.
You don't want a shell in a container image unless there's a specific reason to have it, unless you're using that shell to interpret the thing that's running in that image. The fewer tools we can provide in our running containers, the better off we are, and it is quite possible to create a container from, quote-unquote, scratch, where we start with nothing and we put explicitly what we want into that container image. Now, there are images available all over the place.
More and more images on Docker Hub are getting stripped down and minimized, but still, quite often they've got a lot of stuff in them that we don't necessarily need, and so being able to build images that basically contain nothing is really going to be a big help in locking things down, and in providing as few vectors as possible for attackers to take advantage of if they somehow or another compromise your container. So let's take a look at an example here.
So I built this trash-levels example a couple of times, and I'm just going to go ahead and list everything that we've got here in the directory. In the Dockerfile that builds this particular trash-levels app, we've got two stages. The first stage is grabbing a public image, golang:1.13, and using that as the build tool.
Now, this right here might set off alarms. In many environments, if you have a serious security interest, you probably don't want to ever use a public image, and you probably don't want to grab it in this way. There's no way, when you look at this image reference, to know what that image contains; there's no way to know the bits that are inside of that image. We can make assumptions, but bad actors in various places could subvert that image.
We
can't
control
docker
ink.
We
can't
control
people
who
can
push
to
the
official
repo
where
this
is
coming
from.
So
it's
probably
not
something
they
should
be
trusted
in
the
first
place,
and
so
you
want
to
always
make
sure
that
your
images
are
coming
from
a
trusted
place
where
they've
been.
You
know,
screened
by
your
own
internal
security
processes
and
vetted,
and
that
sort
of
thing
so,
if
you've
got
standard
based
images
that
you're
going
to
be
working
with
developers
are
often
gonna.
Have
language
specific
based
images,
Java
go
what-have-you
Python.
Those should probably be built in-house, vetted, and locked down, but we're going to go ahead and use this public image just to get started. Then we're going to copy our program into /go/src/trash-levels, we're then going to pull down the libraries that we need, and we're going to build this guy, but we're going to statically link it. When you statically link an application like this, it doesn't have any dependencies. Now, it's going to be a little bit trickier for a .NET or a Java application.
You're going to need a runtime (same thing with Python and so on), but C, C++, Go, and various other programming environments give you the ability to compile to a static executable that has no dependencies. And even if you have dependencies, of course, you could include just the dependencies that you've got and avoid all the other stuff. And so our second stage in the build here says FROM scratch, and this, in a docker build (there are other tools you could use to build, but I'm just using Docker as an example)...
Says: start this new container image with nothing. Then we've just got some key-value data, which is just metadata that goes in the configuration; but then we go to the previous image (the build image), we grab the executable from that build image and copy it in, and then we specify that we're going to run this program when somebody runs the image, and that's it. And so the build process that we ran just a minute ago is pretty straightforward, right?
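The pattern he's describing looks roughly like this two-stage Dockerfile (a sketch: the file name, dependency handling, and label are assumptions, not the webinar repo's exact contents):

```dockerfile
# Stage 1: build in a full Go image (ideally an in-house, vetted base image).
FROM golang:1.13 AS build
COPY trash-levels.go /go/src/trash-levels/
WORKDIR /go/src/trash-levels
# Fetch dependencies, then statically link so the binary has no runtime deps.
RUN go get -d ./... && CGO_ENABLED=0 go build -o /trash-levels .

# Stage 2: start from nothing; only the executable ships to production.
FROM scratch
LABEL org.example.license="Apache-2.0"   # metadata stored in the image config
COPY --from=build /trash-levels /trash-levels
ENTRYPOINT ["/trash-levels"]
```

The final image carries no shell, no package manager, and no build tools; everything an attacker might use for introspection stays behind in the discarded build stage.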
We just tell Docker: hey, build the current directory, using this Dockerfile, and then tag it whatever we want to call it. So if we want to build this guy again and retag it, it'll see that everything's already been built and use the cache, but off we go. And so that gives us the ability to now do this.
Take a look at the actual image: this is the image that I just tagged, and it's 13 megs. This is the intermediate image, the build stage, and the build stage is 910 megs; you can see that's largely because the Go environment is 803 megs. So we have 803 megs of golang, then we've got all the library sources that we pulled down, and we have all the intermediate files for the build process and what have you.
The last thing that we want to ship off to production is a container image that has a complete build solution in it, so that, if compromised, an attacker can write software, compile it, and deploy it right from within our container; so that's definitely something that we want to strip out. We want to have the minimum amount of code actually out there in production. And so here's an interesting thing that we can do: if we do a docker container ls and we take a look (let me spell container correctly)...
This error says it can't execute ls, so the ls executable is not in there, right? How about ps? Nope. How about running sh, a shell, so that we can jump inside that container and poke around? Nope, right? So you're going to be very, very restricted as to what you can do in an image that is so constrained. The other side of this is: okay, what if I need to debug things? This is going to make it a little tough to debug, right?
You might want to run the image in the build container for experimentation and debugging, and then run the production container in the production environment. So you have to kind of deal with your CI/CD split there: are we going to use the build image for, say, for example, unit tests, and then switch over to the production image for all of the integration tests and non-functional tests and things like that?
That's typically a clean split, but every environment is different, and every team will find their own boundaries. Another thing to think about: if we run docker image ls --help here, you'll see that there's a --digests switch. If you want to be very explicit about the images that you pull, instead of specifying that you would like to grab a repo and then a colon tag, like :latest or :v1 or something like that...
You can pull by digest, and you will get those bits every time, all the time, because those are the only bits that Docker will accept. So, while you still have to download the image, when the image is downloaded the container manager is going to check the SHA hash against the hash that it asked for, and if the bits aren't the same, somebody mutated that image. Whether it was the people releasing v1.18.3 just deciding to update it without updating the tag, or whether it was an adversary, it doesn't really matter, right?
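In docker CLI terms, the moves he's describing look like this (the repository name and the digest below are hypothetical placeholders):

```shell
# Show local images together with their content digests.
docker image ls --digests

# Pull by digest instead of by tag: the reference names the exact bits, so a
# re-pushed tag or a tampered registry cannot silently change what you run.
docker pull registry.example.com/trash-levels@sha256:6c3c624b58dbbcd3c0dd82b4c53f04194d1247c6eebdaab7c610cf7d66709b3b
```

The same `image@sha256:...` form works in a Kubernetes pod spec's `image:` field.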
So, that was a quick look at minimal containers. The next step in the process is pod specification. When we're specifying pods, we're saying: hey, we've got some containers that we've created, we want to run them, and these are the characteristics with which we'd like to run them. Things like what user that container is going to run under are something that you can specify in your pod specification, and assigning unprivileged users and groups is a good idea. Now, it's interesting: we kind of go through some of these things here...
As good practices, but you may be forced to do them as well, as we'll see in a bit. So, assigning an unprivileged user and group: there may be a selection of them to choose from, with different capabilities, so you may need to make the right choice there, but definitely avoid root at all costs. If you're building an application service, you generally don't need any special permissions, capabilities, or features; you're just going to be a running application.
A normal user is mostly good if you're just simply listening on the network interface and handling service requests, so the developers that are building application-level components don't generally need a lot of the fancy features that provide you extra permissions; just the basics are going to do. So pick an unprivileged user and have at it. When people are building operational services and stateful services and things that are kind of part of the platform, it can be a different story.
Digests, the SHA hashes: this we've mentioned. In your pod spec, pull the image by SHA hash; that's something to think about. Not everybody agrees with that, because it sometimes makes pipelines a little bit brittle, but there are some strong advantages there as well. And then we'll also talk about sidecars and init containers for just a minute.
So you can take the pieces of operation that you need to perform that require some bump in privilege, and you can segregate them out: you can put them in other places, and that way your main application, once again, can just be a plain, unprivileged component. The last thing you'd want to do is say: well, when I start up...
For example, imagine I need to change the ownership of a directory, and so I need to have the ability to change ownership of all directories, all the time; which isn't really true, right? So we can add this temporal context to our thinking about security. If you only need this privilege for a short period of time, especially at initialization...
Then an init container might typically be a really good option, because you can give that specific container the permissions that it needs, and you can take those permissions away from your long-running service, which is the one that's going to be at risk.
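A sketch of that pattern in a pod manifest (the image names, path, and IDs here are illustrative): the init container runs as root just long enough to fix ownership, while the long-running container stays unprivileged.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webroot-demo              # hypothetical example
spec:
  initContainers:
  - name: fix-perms
    image: busybox
    command: ["chown", "-R", "10000:10000", "/usr/share/nginx/html"]
    securityContext:
      runAsUser: 0                # root only here, and only at initialization
    volumeMounts:
    - name: webroot
      mountPath: /usr/share/nginx/html
  containers:
  - name: web
    image: nginx
    securityContext:
      runAsUser: 10000            # the long-running service stays unprivileged
    volumeMounts:
    - name: webroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: webroot
    emptyDir: {}
```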
So, a quick look here, then, at locking down pods. I'll go ahead and bring back up our environment here again, and let me just make sure there's nothing running.
This is what we can do as this shell that we're running here, and this is what capabilities are going to be inherited by our children, which is anything the shell creates. So we can see that it's not all f's, right? Obviously, there are some bits that have been turned off. These get turned off by the container manager, and since Docker...
Well, the world is sort of focusing on the Open Container Initiative, OCI, as the standard, but Docker really created the foundation for all that stuff, right? And so, when Docker containers are executed, a lot of capabilities are removed: clearly, all four of these bits have been removed, three of these bits have been removed, three of these bits have been removed, and so on. And so there's a lot of things you can't do; for example, you don't have the NET_ADMIN capability inside a container.
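You can see these masks yourself: the kernel publishes a process's capability sets in /proc/self/status, where CapEff is what the process can use now and CapInh is what its children inherit.

```shell
# Print the capability sets of the current shell. In a default docker
# container this shows a reduced mask (not all f's); for a fully
# unprivileged user the effective set is all zeros.
grep '^Cap' /proc/self/status
```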
Let's do just a simple example of this with the web root; this is totally contrived, but to give you an example, assume that the web root is owned by the root user. If we changed the user of this container to some unprivileged user and lost all of our capabilities, we would also potentially lose the ability to write to the web root, which maybe we need, right, because there are no write permissions for anyone other than root. So how could we address that?
Well, we could fix that kind of a problem with an init container, so let's take a quick look at a way that we can handle that. I'm going to go ahead and run that same command again to create a pod spec; this will just show that we're really just doing the exact same thing again: a kubectl run with the busybox image, and we're going to call it demo, but I'm telling the kubectl command...
Don't actually do this; output it as YAML and stuff it in this file. All right, so in Kubernetes we provide it with these declarative manifests, with YAML files, to say: this is what I'd like you to do. And so, if I open up the pod YAML, it looks something like this. The creation timestamp will get generated for us automatically; we're not going to request any specific resources, and we don't care about the DNS policy. Let's also make this guy never restart, and then, finally, we'll get rid of any status.
B
That's going to be provided by the system. But I've got a bunch of stuff here that we're going to tack on, so let's drop this in. Okay, so as we kind of talked about, we want to run this pod as a non-root user, so we can specify a security context at the pod level. And so at the pod level, we're saying run as user 10000, run as group 10000, and run as file system group 10000. So a lot of people are under the misconception that you can't run a Linux process as a user
B
that's not in /etc/passwd or something like that. Linux just knows processes by ID, and it knows the owners of those processes by ID. So if you provide it with an ID, it'll give that ID to that process, and if it can't find any references granting that ID special features or powers or permissions, it'll be just a plain, vanilla, unprivileged user. So there's nothing wrong with that, right? And then, when we run this container, we're going to have it tail
B
-f /dev/null, and we've already got the restart policy up there, so let's get rid of that, but I think that should get it going, right? So this container is going to run and it's just going to sit there doing nothing, but we will be able to, you know, shell into it and inspect it and see what's happening, because busybox, of course, has a shell and stuff, violating some of the other things that we were talking about. But this is a container designed for experimentation. Okay, so we've got our pod updated.
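Assembled, the pod spec being built up here looks roughly like this. This is a sketch reconstructed from the talk, not the exact file shown on screen: the name demo, the busybox image, the tail command, and the 10000 IDs come from the demo, while the rest of the layout is a plausible reconstruction.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:        # pod-level: applies to every container in the pod
    runAsUser: 10000
    runAsGroup: 10000
    fsGroup: 10000
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox
    command: ["tail", "-f", "/dev/null"]   # just sit there so we can exec in
```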
B
See which capabilities we've got: we're down to all zeros, right? And so this basically suggests that we're an unprivileged user, and therefore we're going to have issues if we try to write to /var, right? Because the container image, and this is something that's an interesting thing to think about, the container image is the root filesystem bits for your container.
B
You know, completely switch this around and say: no, there's a system user or a service user that you need to use in production, and therefore, when you build your images, perhaps you can build those images with the right user in advance, right? You can set up that specific user with the same ID, ten thousand or whatever it is that you're going to have in production, in your image, and that makes, you know, this problem kind of go away to some degree.
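One way to bake the user in at build time is a Dockerfile along these lines. This is a minimal sketch, not from the talk: the UID 10000 matches the demo, while the user name and the pre-owned web root path are made up for illustration.

```dockerfile
FROM busybox:latest
# Create an unprivileged user whose UID matches the runtime securityContext,
# and pre-own the directory the service will need to write to.
RUN adduser -D -u 10000 appuser && \
    mkdir -p /var/www && chown 10000:10000 /var/www
# Run as that user by default, so even without a pod-level
# runAsUser the container does not start as root.
USER 10000:10000
```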
B
But it comes back as soon as you start using extra volumes, and as soon as those volumes get accessed by different users. So we'll do one last example here, and then I'll kind of talk about the system side of it, and policy, where we can be impacted as developers, if we can get through this in the last few bits of time that we've got here. So here is another version of this pod spec, and so let me just remove the old one.
B
So what we could do is set up an init container, and the init container can run as a different user, all right? So this guy is overriding the runAsUser at the pod level and providing himself with the permissions that he needs. But we don't need all capabilities, right? Least privilege would say only give yourself the capability you need, and we need the change-ownership capability. So we add that, and we drop all the others.
B
So this means we can't do anything except change ownership, and then the command that we're going to run in the init container is going to be: change the owner to 10000 for /var/www. Now this guy is going to mount /var/www, the same volume here, change the owner, and then when the main container starts up, we'll have the volume ownership that he needs, all right? So let's give that guy a try, real quick.
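The init container stanza being described would look roughly like this. This is a sketch: the CHOWN capability, the UID 10000, and the /var/www path come from the talk, while the container and volume names are made up for illustration.

```yaml
spec:
  initContainers:
  - name: fix-webroot          # hypothetical name
    image: busybox
    command: ["chown", "10000:10000", "/var/www"]
    securityContext:
      runAsUser: 0             # override the pod-level runAsUser
      capabilities:
        drop: ["ALL"]          # least privilege:
        add: ["CHOWN"]         # keep only change-ownership
    volumeMounts:
    - name: webroot            # hypothetical volume name
      mountPath: /var/www
  containers:
  - name: demo
    image: busybox
    command: ["tail", "-f", "/dev/null"]
    volumeMounts:
    - name: webroot
      mountPath: /var/www
  volumes:
  - name: webroot
    emptyDir: {}
```

The init container runs to completion before the main container starts, so the elevated privilege exists only for the instant the chown takes.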
B
Now we've got exactly what we were looking for, right? We've got the ownership changed for just the directory that we need to write to, but we don't have a long-running container that has elevated privileges. Okay, so that's, you know, kind of a quick once-over on locking down pods. Pod policy: again, you know, it's a 45-minute webinar, so we've only got so much time to talk about these things. I'm just going to jump into pod policy real quick and mention that governance may apply constraints, right? They may force you to do some of these things.
B
So you've got, you know: running privileged containers might be allowed or disallowed, you might have different kinds of volumes that you're allowed to use and not allowed to use, you might have, you know, users that are going to be required, and so on. So there's a lot of this that can be enforced as policy, and of course, that's a great idea if you want to create a better security posture inside your cluster: you want to make sure that we don't accidentally end up with, you know, services that are violating these.
B
So we can identify individual components within Kubernetes by labels, and that's even finer-grained and gives us additional capabilities, also using namespaces and things like that, all right? So, at the end of the day, you know, there's a lot of things to think about as a service developer: we have to, you know, consider the actual construction of the software, and we have to consider the packaging of it.
B
We have to consider the deployment of it, and then we've got policies to enforce many of these security concerns. And then we also have network policies, which are actually back on the plate of the service developers, because they're so fine-grained that they're actually controlling ingress and egress for that particular pod that's being deployed. And so those are all the kinds of things that, you know, in general, people should be thinking about.
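A NetworkPolicy of the fine-grained, per-pod kind being described might look like this. This is a sketch; the labels and port are made up for illustration, not taken from the talk:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-ingress-only       # illustrative name
spec:
  podSelector:
    matchLabels:
      app: web                 # applies to pods carrying this label
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend        # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
  egress: []                   # deny all outbound traffic from selected pods
```

Because the selector targets the service's own pods, the service developer, who knows which peers and ports the workload legitimately needs, is the natural author of this policy.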
So thanks for attending; I will pass it back over to Christy. Yeah.
A
Awesome, thanks Randy for a great presentation. Let's just go ahead and do one question really quick here, since we're just at the top of the hour. Rosh asks: how do you deal with unfixable vulnerabilities in the base image reported by image scanners?
B
Great question. So image scanners come in lots of flavors, and some of them are looking for CVEs, and if you are using a library that has a vulnerability, and that vulnerability is not repaired, which can happen in the world that we live in, with all the open source and things, there are things that you can do if you understand the vulnerability. But the tricky part is, it's going to be case by case, right? It depends on what the actual problem is. Maybe there's an issue with,
B
you know, a particular attack pattern that you can control by adding in, you know, kind of a bit of code to your application, or by configuring a proxy in a certain way. Maybe there's, you know, for example, a way that you can lock off traffic to a specific port to control access to it. So there's a lot of different ways that you can,
B
you know, solve these problems, but the range of problems is pretty broad, so it kind of depends. And at the end of the day, of course, you would really like to go back to that project and see them repair that vulnerability. One of the things I can say, being involved with the Apache Thrift project, is that a lot of the vulnerabilities that come in are pretty darn subtle. You get some security researchers out there that have, you know, set up a very, very complex,
B
you know, environment within which to exploit some vulnerability, right? And so if you can defeat that environment, right, if you can make it so that any one of the things that is required in order to exploit that vulnerability is not possible, through network policy, or through controlling, you know, the features of your pod, or by removing, you know, key tools that are required, that can mitigate the problem. Yeah, great question.
A
Great, well, that takes us to the top of the hour, and Randy's email is here on the last slide. So if you have further questions that we didn't have time to answer, feel free to connect with him via email. A reminder that the recording and the slides are going to be available on the CNCF webinars page later today, at cncf.io/webinars. We look forward to seeing you at a future CNCF webinar. Have a great day, stay safe, and thanks everyone.