From YouTube: Master Production-grade Best Practices to Build your Node.js Docker Images - Liran Tal, Snyk
Description
You thought you figured out how to build your Node.js web applications with Docker? You're missing out on a lot. Many articles on this topic have been written, yet sadly, without thoughtful consideration of security and production best practices for building Node.js Docker images. In this session, we'll run through step-by-step production-grade guidelines for building optimized and secure Node.js Docker images by understanding the pitfalls and insecurities of every Dockerfile directive, and then fixing them. Join me and master the Node.js best practices for Docker-based applications.
Hey everyone. Thank you for joining my talk on mastering production-grade best practices for building your Node.js Docker images. Glad to have you here. My name is Liran Tal; I'm also known as that guy with the Yoda hat. I'm a developer advocate at Snyk, on a mission to help developers build applications securely using open source software, and I'm actively involved in the Node.js security working group and the OWASP, around different security research, experiences and best practices.
So if you want to chat about any of those topics, or just ask any questions, I'm on Twitter — just reach out at @liran_tal and we can chat. But today we are going to dive specifically into building Node.js applications with Docker containers.
Now, most blog articles that I've seen kind of start and finish along the lines of following — not the best, really, but I would say the simplest, and simplistic, Dockerfile instructions for building Node.js Docker images. And what I actually mean by that is: it's simple and it works. It's something as simple as this Dockerfile, which you can run with docker build and then docker run, and it's fine.
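The slide itself isn't in the transcript, but the simplistic Dockerfile being described looks something like this sketch (the WORKDIR path is my own assumption):

```dockerfile
# The "simple and it works" Dockerfile the talk dissects below.
FROM node                 # implicitly node:latest -- unreproducible builds
WORKDIR /usr/src/app
COPY . .                  # copies everything, including potentially sensitive files
RUN npm install           # pulls devDependencies and whatever is newest
CMD "npm" "start"         # npm wraps node and swallows signals
```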
The application will run just fine. The only problem is, it is kind of full of mistakes and bad practices for building Docker images — definitely if you want to do this production-grade. So you want to avoid anything that looks like this, by all means. Now we're going to dissect what exactly it means, every single line of it, starting from the first one: FROM node. That actually means no reproducible builds, because you're pulling in the latest image — and we're going to drill into all of this.
So don't worry, stay tuned. The other thing here is that you're copying potentially sensitive files, because you're just copying everything: maybe config files, maybe environment files that you wouldn't need to have in the running environment, on the Docker image.
What about unneeded dependencies from this npm install command? Who knows what you're pulling in there when you're running this during the build time of the Docker image. And finally, even this command to spawn the Node.js runtime, and the application itself, is actually incorrect usage, and it may end up in your application not having a proper graceful shutdown — definitely if you're using this in some mature and rich orchestration environment like Kubernetes or Swarm or others.
So let's start off with our first best practice: using explicit and deterministic Docker base image tags. What does that actually mean? If we look at the first line of code in this Dockerfile, it says FROM node. Well, what image are we actually pulling in? It may seem at first like an obvious choice to use this Docker image, FROM node.
But actually, this is an alias to the latest Docker image of node. And should we actually be pulling in that latest image? Because at the very least it means unreproducible builds, right?
Every time we create and build this image, we're actually pulling in a new version — potentially a new version of the image since the last time it was built. And so, several things here: just like we're using lockfiles for npm and yarn or pnpm — whatever you're using as a package manager — to pin dependencies.
You want the same for the Node.js Docker image that you're going to pull in. The other thing is, if you're pulling in images like that, from node latest, you're actually taking the latest node image, which — I don't know if you knew this — is a full-fledged operating system, with many libraries and binaries that you may not need for a running node application. So why would you want that? More software means bigger downloads and a bigger size.
And it means more risk for you, because there's more software bundled, and who knows what's going to happen from it. I will tell you that we're going to see a live hacking demo here of what happens when you actually bundle a lot of dependencies in a Docker image: the container itself could be compromised, and we'll see it.
But thinking about all of these images that you're pulling in: FROM node latest is a full-fledged operating system, and as we've seen in previous research at Snyk, the same goes for taking the latest image even for other popular base images, like CouchDB or MySQL. These are basically the top 10 Docker images on Docker Hub — almost all of them, except for Ubuntu, the last time we scanned them.
All of them have vulnerabilities that are there by default in the Docker image itself. So why would you want to take that latest image? Probably not. So let's fix it.
This base image directive that we are now going to replace is going to use a new base tag, and we can find the SHA-256 hash for it on Docker Hub, or maybe even just by running docker images --digests, which will show you — if you've pulled it in — what the image digest for it is. So you can find it however makes sense, and you can use it. The downside here is that it's a little bit unreadable.
So if you want to maintain node images over time, or Docker images in general, you're not really sure exactly which base image you are actually referring to here. So hey, look at this — amazing: we can, not really replace that SHA, but prepend it with an alias for what this actual image is coming from, and that is, you see, node:lts-alpine. So I'm using that base image as a tag, but pinned specifically to the image build of the node version at the time of the SHA — and this is going to provide me deterministic builds of the Docker image every time. Moving on.
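As a sketch, the pinned directive combines the readable tag alias with the digest. The digest below is a placeholder, not a real one — substitute the value you find on Docker Hub or from `docker images --digests node`:

```dockerfile
# Tag alias for readability, digest for deterministic, reproducible pulls.
FROM node:lts-alpine@sha256:<digest-from-docker-hub>
```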
Are we going to install dependencies the right way, or are we going to do it the bad way? Let's see. Well, first of all, we started with our simplistic Dockerfile, with npm install. Now, as you probably know from being a developer, this is not the best way of doing it.
It adds unneeded dependencies and security risks, because you're pulling in devDependencies and other things like that, and it's inflating the image size. Why would you want to do that? Not really. Don't do this other variant either, which is an npm install and then updating to the latest versions, because you have no idea what you're going to be pulling in.
Like I'm saying here: please do not do this. This is not a best practice at all. You do not need those devDependencies, and you do not need that indeterministic way of pulling in packages. Now, the thing is, you could try and add --prod at the end, to pull only prod dependencies, but it may surprise you with the dependencies that you will pull in during a CI environment, because many things can happen if you're not using a lockfile.
So if you want to do it the most proper way: you're going to pull in only the production dependencies, but you're also going to pin the dependencies — using npm ci — to what you have in the lockfile, and get deterministic builds. This is, by the way, also faster than the other way of installing dependencies.
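A sketch of that install step (the `--only=production` spelling matches the npm 6/7 era the talk dates from; newer npm versions prefer `--omit=dev`):

```dockerfile
# Copy only the manifests first, so this layer caches well.
COPY package.json package-lock.json ./
# Install exactly what the lockfile pins, production deps only.
RUN npm ci --only=production
```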
So we've got through that one. Now, something that I've seen happening with different packages is where we are optimizing the libraries, and the way that we're building the image, to actually work for production. What that actually means is: there are a lot of expectations that some libraries may have, that you may not know about, in order to toggle on performance and security improvements and optimizations. Now, what exactly does this mean?
Well, if you wanted to set NODE_ENV=production to tell the npm package manager to install only production dependencies, that will work — but that NODE_ENV=production only lasts for that state, in that layer and that step in the Dockerfile, to create the production dependencies.
When you run npm start at the end, it will still run as if NODE_ENV is dev; it's not in production mode. Now, why do you want to enable NODE_ENV=production for running the application in general? It's things like Express, where Express will only enable some caching, less verbose error messages and other capabilities — optimized for production — if NODE_ENV is set to production. There's a blog post on this from Daniel Khan.
It's dated way, way back, about why this is important, and there's probably a lot more since. But basically what it actually means for us is: we want to install production npm dependencies with npm ci --only=production, and move that NODE_ENV outside of the install step, to be a generic way of building and running the application.
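Sketched in the Dockerfile, that looks like:

```dockerfile
# Set NODE_ENV once, globally: it now applies both to the install
# step and to the running application, not just to a single RUN layer.
ENV NODE_ENV=production

COPY package.json package-lock.json ./
RUN npm ci --only=production
```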
Okay. So once we've got this, let's talk about this other principle of least privilege, which is a long-time security control from the early days of Unix that we should always follow — regardless of containerization and serverless and whatever, this is the best practice. Now, what do I mean by that? We've got to this state of the Dockerfile, which is already much better than the simplistic approach, and we've probably already remediated some vulnerabilities and risks.
But the thing is: do you know which user actually owns the process that runs the runtime? Not really sure? Now, why am I asking you this? Because let's see some examples of how this can turn really, really bad.
So maybe you know about insecure APIs like this, right — maybe there's a child_process.exec, and who knows who owns this command once it's running inside a container. More than this, let's say you have this worker Node.js application image that listens on a message queue, to basically do offline image processing, and you use it.
You see, I'm not even using child_process here; I'm using pdf-image, an open source package that I found that allows me to do image manipulation. So I'm using this one in my containerized node worker, working off of a queue, handling billions of messages that I need to basically resize.
But you know what — what if this PDF file path is now user-controlled, right, something like this? What if someone could actually add that as a payload that manifests into this library call to pdf-image? Now, the thing is that exactly this kind of vulnerability really happened, for pdf-image and for other packages. Now, why does it happen?
Because — you may not know this — behind the scenes, pdf-image is, for you, an abstraction over what it does. The implementation detail is that it's spawning that exact insecure API, child_process.exec, as a command line, to basically use the convert utility to do the image manipulation. So now that you know this, you are a bit more worried about these issues.
And what we want to do here is — this is a bit of a pun — containerize the blast radius of what could go wrong. So instead of running that pdf-image command injection vulnerability, if it happened, as the root user — which is what Docker defaults to if you don't choose anything — we now want to use USER node, which is less privileged and can't do a lot of things inside the container when it's running.
The thing is, that COPY command that I showed you was a bad practice before because maybe you're scooping up sensitive files; but now, because we also want to be able to run the user as a least-privileged one, we also need to make sure that all the files related to the application itself are not owned by root, but actually owned by that user itself.
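A sketch of those two changes (the official node images already ship a non-root `node` user; the path is an assumption):

```dockerfile
WORKDIR /usr/src/app
# Application files are owned by the unprivileged node user, not root.
COPY --chown=node:node . .
# Drop root: everything from here on runs as node.
USER node
```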
What about that other best practice — or most common mistake that I see in blog articles about how they containerize Node.js applications — which is how they invoke the node process itself, as a process inside the container? So how many Dockerfiles have you seen in tutorials and blogs that recommend this way of executing your node runtime? Probably a lot of tutorials do. Maybe you're even doing this today, in your team.
Or in your production environment. So here's why not to do it, and what could go wrong. The problem is that, while this works and is okay to experiment with, it's a bad choice for production Node.js containers. This is a bad way of doing it; this one you might think is a better way, but it's also a bad way of doing it — like that, with the square brackets; and maybe you'd think of invoking the node process directly, like this, right? Nope.
That is not helpful either. And even if you're trying to wrap it up with a shell script — unless you knew what to do in that shell script, which I'll get to in a second — this is also a bad way of running and spawning your node containers. Now, why is all of this bad?
To understand that, we need to understand the bigger picture of how node containers run in, say, a bigger environment. And what I mean by that is: there's an orchestration engine — such as, as you can see here, Docker Swarm or Kubernetes — or even just the Docker engine itself. Now, generally speaking, the environment needs a way to send signals to the process in the container, to let the container know that maybe it should die — because, hey, we want to do some A/B testing.
Or we want to roll a new version in, so we need to kill some containers; maybe they're over capacity; whatever the reason for it is. These orchestration engines need a way to signal to applications to terminate them, so they send signals like SIGTERM, SIGKILL and whatever — and the caveat here is kind of twofold. Firstly, we are indirectly running the node application, by directly invoking the npm client.
So what that means is: when we are running npm start, npm itself — as the package manager CLI — spawns a new child process for the node runtime, for your application. But who's to say that it's going to forward all the signals that it's getting to that application? Well, actually, it doesn't.
If you do not believe me, let me show you — it's a very simple experiment to set up. Add a process.on handler for SIGHUP, which is one of the signals that an application can receive; add this code to your very simplistic node web application. Then, using docker kill — the docker CLI itself — you can actually pass --signal and send a specific signal to a running container.
If you run that, you can see — just like on my screen here — it's kind of waiting for interaction. That's exactly what's happening, because the node runtime will not show you any console logs that it received the event: the npm CLI in that case swallows all of those events, and that's not something we want.
So in the previous example, we had this npm wrapping the actual node runtime and not forwarding all the signals to it. Now we've made a change — are we starting the process directly, or what's happening here?
So let's open a shell in this running container and see what we have. Now, it looks like we started the node runtime directly, but that CMD notation — without the brackets, the shell form — actually tells Docker to execute the process wrapped with a shell.
So does the shell actually forward the signal to it? As you can see here in my screenshot, even though that is process ID 1 — owned by root, by the way, which, as we talked about before, is a bad one — this sh -c wrapper running it is not actually forwarding the event. So let's try a different form.
This is called the exec form, where we are using the square bracket notation. Let's try to run this command and see what's happening. Well, when you run it directly like this, it means that it is running as process ID 1, which effectively takes on some of the responsibilities of an init system inside a running container. What that typically means is that it should be responsible for things like initializing operating system processes — but the Linux kernel treats process ID 1 in a very different way.
It treats it differently than other process identifiers, and this special treatment from the kernel means that the handling of things like SIGTERM signals is different: it may not even invoke any fallback behavior that could kill the process. So there's a recommendation from the official Node.js Docker working group telling you not to run node inside a container as process ID 1.
Instead, the recommendation is a small init tool like dumb-init: it's very easy to work with, and it's a good helper for this job. So if you're spawning a Node.js process like this, you'll also notice that I needed to install dumb-init in my Alpine container here — and we're taking advantage of image layer caching. What we're doing now is making sure that dumb-init is running, and when it gets signals it actually forwards them to the node process, so they are treated correctly. This also relates to the fact that we need the Node.js application to receive interrupt signals like SIGINT — a Ctrl-C, like that — which, once it gets one, will actually kill the node process and the container running the node application.
That is, unless we've set up some kind of graceful shutdown. Because we want all the current connections — the requests coming in to the container itself — we actually don't want to abruptly kill them; we want to let them finish, and stop new traffic from coming in. How we're doing that is by actually making sure that the container itself is able to gracefully shut down.
When it gets this SIGINT, or SIGTERM, or whatever is sent to it to stop, the container needs to clean up resources, needs to free up memory, needs to properly close database connections — and only when all the connections have been freed and all the interactions have finished will the container drop off, and not abruptly kill connections for people.
Not in the middle of things. So this is all about container signal handling, and all of those best practices that we've talked about so far.
Now I'm getting into: why are you not fixing the vulnerabilities in your Docker images, for your containers? And what I mean by that is: Docker now has this scan command, which is built in and which you could use — docker scan node:14, for example.
Whatever you want — run it and find what vulnerabilities you have in the container. Now, granted, fixing some of these vulnerabilities is kind of hard, and it might mean that we need to address them. But this already gives you some really interesting input: for example, it shows you where a vulnerability is coming from — this one is coming from ImageMagick, so that's which library is actually introducing it.
Furthermore, it's telling you where in the Docker image it is actually getting introduced. Did you specifically install ImageMagick yourself — if this was, say, a Debian or Ubuntu one — or is it inside the base image it is built with? No: just the fact that you're using node:14, or node latest, introduces it through that base image.
And so, you know, this is very worrisome, and when you scan Docker images you may find hundreds of vulnerabilities, as we've seen before. And I know what you're asking now: what is the worst that can happen? Because I have to accept maybe some risk, and I can't handle mitigating maybe 600 vulnerabilities — or where do I even start with doing that?
Let's see what can happen. First, let's do a bit of a demo and understand what is happening and what could happen.
So what I'm going to do next is: let me go ahead and share my screen here and my terminal — I'll bump up the font size a little bit for you to see it. What I want to do first is run a container, this container called rce.
Now, what that actually is — moving on to this code snippet here in VS Code — you can see this is a node container running on node 6.10-wheezy, right? We've traveled back in time to node 6, so I can show you some vulnerabilities, including an interesting one. It's a very simplistic file — nothing here is of issue. And, I mean, there are a lot of issues here.
We just talked about best practices; but for us, this is actually working for the container itself. And you can see, for example, how I'm importing express and multer, to be able to upload images. So this application is going to give me the ability to upload images on port 3112, and there are actually, I'd say, no insecure practices, no bad practices in my code. This is just me using execFile, which is a pretty secure API, to basically pass the command itself and then any sort of arguments to it.
So once we do that, let's see if the app is actually running. 3112, it's on — if I remember correctly, on /public — yeah, so this is it. This is the application. Imagine it's not even interactive — this is just some worker thread processing images. What I want to do now is upload an image.
Now, before I upload an image, I want to show you a little bit more inside, so I'm going to move into the container itself — I'm going to, so to speak, "ssh" in: I'm opening a terminal and showing you what it looks like inside the container. You can see that I'm already in /usr/src/goof — I'm in the goof application here, in the running container — and I can run cat server.js.
So you can see the actual code — it's very similar, exactly what I showed you before, right? This is the application working. So let me clear that up and show you the files again, what actually exists here. What I'm going to do now is upload an image, just like you would expect any application to allow you to upload images. So rce1 — let's see what's going on here — rce1.jpeg, I'm going to upload this one. Let me resize it.
It looks like it was successful, and I can go ahead and upload a new one. That's great — this is a great application, it's resizing for me, making the thumbnail size that I want, and so on. But what's actually happening? If I'm looking at the list of files here, you can see there's a bit of a discrepancy. I don't know if you caught it at first, but look at this: May 16, rce1 — now, basically, just by uploading a file, I created a new file.
I spawned a command — this is command injection, running inside my container and creating a new file. Now, why did it actually happen? Because I have this exploit here, rce1, and if I show you what's inside, you can see that it's not really a regular JPEG image, but it is acceptable to be manipulated by the convert application.
That's the ImageMagick one that exists inside this node 6.10-wheezy container, this base image. And what I'm doing here: I'm just giving it some commands — I'm concatenating, into this rce1, a payload to actually make it touch a new file, create a new file. The same way, I could just create a reverse shell, do an rm -rf, whatever I want to do on this container. I can now do it, because of this ability of running commands on the running container.
Back in the scan output: after giving you all of the vulnerabilities and all of the counts, it actually tells you all of the, I'd say, alternative base images that you could transition to — and if you would transition to one of them, it would actually give you a smaller vulnerability footprint. So if my current image, docker node 14.1.0, has 624 vulnerabilities, like I'm seeing here.
Then actually, if I move to node:14.16-buster-slim, I'll be left with only 58. So I'm mitigating some 500 vulnerabilities just by moving to a different base image. If my application can function fine with that — why not, right? So you can do it from seeing it with docker scan, and mitigate those vulnerabilities — which we've just seen how they actually impact the application itself and can cause command injection. Or you can use the Snyk app itself to scan your images.
It will show you similar things. It will tell you: hey, we found this node 10 image that you're running right now, but actually, if you want to fix those vulnerabilities, you would try node — what is it?
The Debian buster-slim one — and you'd actually find yourself in a better state, in terms of fewer vulnerabilities impacting you and less risk.
So this was about basically remediating vulnerabilities, but there are other interesting things that we can do. For example, multi-stage builds are really a great way to move from a simple — and, I'd say, potentially harmful — Dockerfile into separated steps of building a Docker image. And what I mean by that, and how it can help you, is basically avoiding leaking sensitive files.
So if you do something like npm ci --only=production, that's fine; but if you need some private packages, you probably need some token for it. So what do you do? You go and add the token inside the Dockerfile — like, I don't know, 1234 here — and do the npm install, and it works. But it's not really cool, because that's a hard-coded secret in your Dockerfile. So maybe you try something else: maybe you try providing it as a command-line argument.
Like an NPM_TOKEN build argument: you then build the Docker image passing this argument, which is referenced in the Dockerfile — and that's a step better. But if you look at the history on the host that built it, you can see — even in the history of the image itself — this NPM_TOKEN 1234 exposed. So this is still a bad way of doing it.
Now you'd think: hey, I've created it, and this might be a good way of doing it, but I also want to remove it — remove the 1234 sensitive token from the image itself, because I don't want it in the running container — so I'll do an rm -f. The thing is, this adds a new layer that deletes it, but all of these layers and their history still exist as part of the Docker image.
So when I now do something like docker push — if this is a public image and I'm putting it on Docker Hub, and even if it's a private one, because it might theoretically be public and open source in the future — that's still a bad thing, because that NPM_TOKEN 1234 still exists as part of the history of the Docker image. And this is really where it brings us to multi-stage builds: the fact that I am now able to use one image — this top one, even if it's a big one, node latest or whatever — to do all the installs that I need; but when I'm done with it, and I've installed whatever I needed from private packages, I move to a smaller image.
Say, FROM node:lts-alpine: I copy all of these artifacts from that bigger image into the smaller one — the most purposeful for production. And I can now basically mitigate two things: first of all, I'm having smaller base images for production — fewer vulnerabilities, less software, less size; and also, I'm preventing a sensitive information leak.
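A sketch of that multi-stage shape (paths and the final CMD are assumptions):

```dockerfile
# ---- build stage: bigger image, may see registry credentials ----
FROM node:latest AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm ci --only=production

# ---- production stage: small, purposeful image ----
FROM node:lts-alpine
WORKDIR /usr/src/app
# Only the built artifacts cross over; the build stage's layers
# (and any secrets in them) are not part of the final image.
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY . .
CMD ["node", "server.js"]
```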
Now, one thing that's super important, and not well known, is: how do you mount secrets safely? There's a little bit of a better way of doing that NPM_TOKEN thing.
Sometimes you may even need a little bit more than the token itself — like the .npmrc, which has your registry and some other defaults. So what do you do? You don't really want to copy all of that into the running container; and maybe you can't, just because you have a .dockerignore or something like that. So what you could actually do is use a new capability available in Docker.
It comes with BuildKit — the new set of capabilities in Docker — and it's mounting with a secret. So I can actually mount a specific file into the build and give it a name; and when I actually build it, I give it a reference ID and the file. Then what happens is: it will only have that secret available, as a file on the container itself, for that specific step — for that step and nothing else.
This is not retained in any container history or image layers or whatever. This is the proper way of mounting secrets, like files, into the container itself.
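With BuildKit, that looks something like this sketch (the ID and paths are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:lts-alpine
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
# The .npmrc is visible only during this single RUN step; it never
# lands in an image layer or in the image history.
RUN --mount=type=secret,id=npmrc,target=/usr/src/app/.npmrc \
    npm ci --only=production
```

Built with something like: `DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=.npmrc .`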
Now, there are a lot of other best practices we haven't had time here to show you on how to build containers securely. There are a lot of them on the Snyk blog, and you should probably scan and monitor your code repositories and Docker images.