Description
Speaker: Baruch Sadogursky
Surprisingly, implementing secure, robust and fast promotion pipelines for container images is not as easy as it might sound. Automating dependency resolution (base images), implementing multiple registries for different maturity stages, and making sure that we actually run in production containers from the images we intended can be tricky. In this talk, we will compare different approaches, compile a wish-list of features and create a pipeline that checks all the boxes using free and open-source tools.
A: Hello and welcome, everybody, to our webinar. Today we are going to talk about best practices for implementing container image promotion pipelines. It's a lot, but we'll unpack it and it's not a big deal.
So, first of all, we said containers in the title, but all the market research shows that still about 90% of the market, when they talk about containers, actually means Docker. There are others, like Podman and a couple more, but Docker still completely dominates the market, so today, when we talk about containers, we're actually going to talk about Docker.

Generally speaking, but also with Docker, there's a Venn diagram of the relationship between the software that we know and the software that we like. It's a lot like visiting the sausage factory: once you know how the sausage is made, you might be a little bit less of a fan of eating sausages. It's the same with software: the deeper you dive into almost any software, the more the horror shows.
It's true for Docker as well, and one of the issues with Docker is the issue of trust. We hear that a lot at JFrog; there is a big question: do we really know what runs in our containers, in our production? My name is Baruch Sadogursky, I'm the Chief Sticker Officer of JFrog and also Head of DevOps Advocacy at JFrog, and on this slide is my Twitter handle, @jbaruch. Let's connect there; ping me if you have any questions.

This is my California disclaimer: you can see that the most emotionally expressive and confrontational people in the world are from Israel and Russia. I managed to be from both, which might explain some of my attitudes, so I don't apologize for it. By the way, that's a great chart from a great book called The Culture Map by Erin Meyer; since all of us work in multicultural environments, I highly recommend reading this book.

And this is the most important slide for today's webinar.
A
If
you
go
to
jeffree
homestead
show
notes,
you
will
find
the
slides,
which
are
already
there,
the
video
that
we
will
upload
once
once,
it's
ready
and
all
the
links,
including
culture
map,
but
also
everything
that
I'm
going
to
speak
about
this
point
forward
and
the
place
for
comment
to
rate
and
small
raffle
for
thank
you
for
being
here
today.
Yes, we know how to promote stuff, because we have been doing CI/CD pipelines for years, namely for more than 20 years by now: the first build servers and all this stuff actually came into existence in the late 90s, and that's more than 20 years ago.
A
This
is
how
it
looks
like
we
have
this
promotion
pyramid
in
which
we
actually
have
builds,
which
are
tested
more
and
more
rigidly
throughout
the
pipeline
and
less
and
less
builds
survive.
Those
tests
up
until
we
have
this
peak
of
the
pyramid,
which
is
the
production
servers
to
which
just
a
handful
of
builds
arrive,
and
this
is
fine.
A
If
we
look
at
the
same
it's
on
on
a
kind
of
the
same
promotion
from
the
side,
then
you
see
that
how
you
have
your
sources,
your
application
sources
and
your
docker
files
and
your
yaml
files
and
whatever,
and
they
are
all
translated
to
binaries
very,
very
early.
Just
this
is
what
the
build
does,
and
once
there
are
binaries
are
created.
Then
we
will
have
the
promotion
of
those
binaries
through
those
quality
gates.
A
Those
locks
and
those
are
the
tests
that
we
are
running,
so
we
have
artifacts,
we
deploy
them
into
the
right
environment,
the
runtime
servers,
integration
system,
basing
staging,
etc.
We
test
them
rigidly
on
those
servers,
and
then
we
decide
whether
we
should
promote
them
to
the
next
stage,
to
the
next
environment
really
and
to
the
next
area
in
our
pipeline.
That
will
correspond
to
the
next
environment
now
with
docker.
It's
both
very
easy
to
do
the
right,
the
wrong
thing
and
very
hard
to
do
the
right
thing.
When we talk about pipelines, this is exactly the problem, and here is what I mean by that, why it's so easy to do the wrong thing: docker build is a very simple and very powerful concept. In just a few lines in your Dockerfile, you are able to construct a container which will do the right thing, and then you go like, well, let's just docker build all the things.
Chances are that when you rebuild, you will actually have slightly different images every time, and if you are rebuilding in different environments in your pipeline, you will test not the same thing that you built, and you will deploy to production not what you have tested, and that's obviously horrible. Now you can say: well, this is just a very poor example.
A
How
about
we
fix
it?
Let's
try
and
fix
it.
For
example,
we
can
nail
the
version
of
the
base
image.
You
can
say
well
here
you
go
now
you
have
a
version
every
time
we
pull
the
base
image,
it
will
be
exactly
the
same
base
image.
Is
it
no
because
docker
tags
are
mutable,
you
can
actually
create
run
docker,
build
with
the
same
tag
and
then
deploy
to
docker
hub
or
any
other
registry,
and
it
will
just
override
the
existing
version
with
new
set
of
bytes
under
the
same
version.
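One way to actually make the base image immutable (a sketch; the image name and digest below are placeholders, not real values) is to reference it by digest rather than by tag:

```dockerfile
# Tags like "python:3.9-slim" are mutable and can point to different
# bytes over time; a sha256 digest is content-addressed, so it always
# identifies exactly the same image.
# (Placeholder digest; find the real one with:
#  docker inspect --format '{{index .RepoDigests 0}}' python:3.9-slim)
FROM python@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

This pins the base image itself, but, as we are about to see, not what gets installed on top of it.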
So this is a little bit better, but what about those? Can you pin down the version of Python, of Node.js? The answer depends on how well you know the package manager; it might be possible, it might not. How about that: how well do you know Maven? How well do you know Java? Do you know if it's possible to pin down the versions in Maven, especially of the transitive dependencies?
A
If
you
know
me,
if
you
don't
know
me,
then
you
say
I
don't
know.
If
you
know
me
when
you
say
yes,
you
can,
if
you
know
maven
good
enough,
you
say:
well,
I
don't
know
so
how
about
this,
how
about
just
downloading
stuff
from
the
internet?
How
do
we
know
that
this
stuff
won't
change
under
our
figure
fingers?
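For what it's worth, Maven does have a mechanism for pinning transitive versions, the `dependencyManagement` section, although keeping it complete for every transitive dependency is on you (the coordinates below are only illustrative):

```xml
<!-- Pins the version of a transitive dependency project-wide.
     Without this, the version is picked by Maven's "nearest
     definition" rule and can drift as the dependency tree changes. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.11.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```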
A
You
cannot
really
be
certain
that
what
you
build
in
your
development
environment,
what
you
build
in
your
test,
environment
and
what
you
build
in
your
production
environment,
are
actually
the
same
images
or
are
they
different,
and
this
is
exactly
why
we
don't
we
have
this
nagging
feeling
of
something
is
not
100
reliable,
so,
instead
what
we
need
to
do.
We
need
to
build
immutable
and
stable
promotion
pipelines
that
work
with
immutable
and
stable
binaries.
A
And
how
can
we
guarantee
that
that
we
build
strong
pipeline
with
strong
quality
gates?
Basically,
when
we
ask
this
question,
how
do
we
build
a
pipeline
with
strong
quality
gates?
We
ask
a
question
of
how
do
we
separate
development
from
production,
and
there
are
a
couple
of
ways
to
do
it
with
docker.
The first option is labels. The problem with this approach is that your environments, when they pull the image, must check whether those labels are set up correctly, and since they're all string-based, errors will occur: you will forget to check stuff, you will check for the wrong stuff (instead of "maturity" you will look for "stage", this kind of thing). String labels that cannot be enforced on the receiving side are just not good enough.
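To make the label approach concrete (the label name and value here are made up for illustration), the maturity marker would be baked into the image like this, and every consuming environment would have to remember to inspect it:

```dockerfile
# Maturity recorded as a free-form string label; nothing on the
# registry or runtime side enforces that anyone ever checks it.
FROM httpd:2.4
LABEL maturity="testing"
```

The receiving side would then need something like `docker inspect --format '{{index .Config.Labels "maturity"}}' <image>` before deploying, which is exactly the kind of string convention that silently breaks.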
How about repositories? Isn't that what we are looking for? Well, not really, and the problem is that repositories in Docker are mapped onto the concept of repositories in Git, in GitHub and GitLab and whatever you're using, and you can see an example here: Moby the whale is the name of the repository, because repositories are intended for different projects.
A
That's
exactly
like
it
is
in
github
in
github.
You
don't
have
repositories
for
testing
staging
and
production,
because
there
is
no
difference
in
the
code
for
testing
staging
and
production.
The
difference
in
staging
in
testing
stage
in
production
appears
in
the
pipelines
and
just
to
remind
you
pipelines
kick
in
once.
The
code
is
not
relevant
anymore.
Once
we
assembled
our
binaries
only
then
we
actually
have
the
the
pipeline,
so
the
repositories
are
intended
to
separate
projects,
not
maturity,
and
it
means
that
it's
not
good
enough.
How about separate registries for different maturity stages? Not so easy. The problem with implementing them in Docker is that we have a really weird limitation in what we can do in terms of registries on a single host. If we look at how we tag images, you can see that there is no token in the URL of the Docker image to express which registry on this host we want to put our httpd image in. We have stuff like host, port, username, Docker image name and tag.
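To spell that out (the hostname below is made up), these are all the tokens a Docker image reference gives you; note that there is no dedicated slot for a registry name:

```shell
# Anatomy of a Docker image reference: host[:port]/username/image:tag
HOST="docker.example.com"
PORT="443"
USERNAME="webdev"
IMAGE="httpd"
TAG="2.4"
REF="${HOST}:${PORT}/${USERNAME}/${IMAGE}:${TAG}"
echo "$REF"
```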
How can we do that? It's a good question. We can say, well, it's probably not possible, let's go look at other solutions, or we can use stuff like URL rewriting, using virtual hosts. So basically, what we try is to keep this standard format of host, port and Docker image name on the client side, and the rewriting translates it to a URL that the protocol actually supports, with the registry name in it and only then the tag name.
A
So
that's
a
nice
solution
and
it
actually
works,
but
it
requires
additional
setup
of
this
http
url
rewriting
tool
and
we
can
do
better.
We
can
do
simpler,
so
we
can
use
the
username
as
the
name
of
the
target
registry,
so
we
can
just
say
hostport
and
then
the
registry
name
and
then
the
docker
image
name.
This
is
elegant
and
it
requires
zero
additional
configuration.
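As a sketch of that convention (the host and registry names are made up), here is the same image addressed through per-stage registries on a single host, with the namespace slot doing the work of the registry name:

```shell
# One host, several registries: the first path segment names the
# stage-specific registry instead of a user.
HOST="artifactory.example.com:443"
IMAGE="httpd"
TAG="2.4"
for REGISTRY in docker-dev-local docker-test-local docker-prod-local; do
  echo "${HOST}/${REGISTRY}/${IMAGE}:${TAG}"
done
```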
A
And
then
you
can
have
multiple
registries
per
host,
which
is
great.
Now
we
have
multiple
registries
per
host
and
the
next
question
will
be
okay.
How
do
we
promote
now
an
image
from
one
registry
to
another?
That's
what
I
do
right.
We
tested
everything
in
testing,
and
now
this
image
is
ready
for
staging.
We
want
to
move
it
from
my
registry
hold
5000,
whatever
testing
tools
station.
How
do
we
do
that?
The
normal
way
of
moving
images
between
registries
are
pulling
re-tagging
and
pushing
now.
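The pull/re-tag/push dance looks like this (a sketch only; the registry host and image names are made up, and it assumes a running Docker daemon with access to both registries):

```shell
# Classic promotion between registries: pull from the source,
# re-tag for the target, push. The image bytes round-trip
# through the client even if both registries share a host.
docker pull registry.example.com/docker-test-local/my-app:1.0.0
docker tag  registry.example.com/docker-test-local/my-app:1.0.0 \
            registry.example.com/docker-staging-local/my-app:1.0.0
docker push registry.example.com/docker-staging-local/my-app:1.0.0
```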
A
This
is
fine,
because,
anyway,
the
register
is
supposed
to
be
on
different
hosts,
so
you
need
to
do
network
transfer
from
one
to
each
other,
but
this
is
not
the
case
anymore.
We
just
saw
how
we
set
up
multiple
registers
on
the
same
host,
and
now
it
doesn't
make
any
sense
to
pull
it
to
the
client
retarget
and
push
it
back
to
exactly
the
same
place.
A
Instead,
we
really
need
to
find
a
tool
that
knows
how
to
do
the
promotion
in
the
same
tool,
and
there
are
cop.
There
are
multiple
tools
like
that.
This
is
an
example
of
jeffrey
container
registry.
Jeffrey
container
register
is
an
absolutely
free
tool
for
you
to
use
and
it
actually
implements
the
exact
promotion
pattern
that
we
spoke
about
now.
You
can
see
here
how
you
have
on
the
right
side:
multiple
docker
registries,
docker,
dev,
local
testing,
staging
docker
pro
local.
All
those
are
different,
docker
registers.
A
You
can
see
how
we
have
a
different
type
of
registry,
the
remote
for
proxing
docker
hub,
and
now
you
probably
heard
about
the
changes
that
docker
hub
is
making.
They
are
actually
going
to
clean
up
old
and
unused
red
images
and
they're
going
to
throttle
how
many
times
a
certain
image
can
be
pulled,
and
a
proxy
like
we
have
the
docker
hub
remote
here,
actually
protects
you
from
both
those
problems.
A
First
of
all,
you
will
cache
all
the
base
images
that
you
use
in
this
remote,
and
that
means
that
whatever
docker
hub
will
delete
won't
affect
you,
and
it
also
means
that
you
only
pull
it
from
docker
hub
once
if,
after
that,
docker
hub
is
blocking
the
request
of
this
image
because
they
hit
the
api
threshold.
You
don't
care
anymore,
because
from
this
moment
on,
you
will
only
consume
it
from
your
own
cash.
A
So
this
is
also
very
very
important,
especially
today,
and
then
we
have
this
docker
virtual,
which
is
also
a
special
kind
of
registry,
and
this
registry
groups
different
registries
under
the
hood.
So
now,
for
example,
it
groups
all
of
them
in
order
to
expose
to
our
developers
all
the
images
that
exist
in
the
system,
because
they
need
to
work
with
all
of
them.
They
need
to
make
changes
to
all
of
them
and
whatever.
A
But
when
the
developer
now
deploys
or
your
ci
server
now
deploys
a
newly
created
image,
it
will
always
be
deployed
to
docker
dev
local
to
the
start
of
the
pipeline
and
from
that,
the
only
way
to
get
it
to
the
next
stage
is
to
promote
it
using
an
api
or
jfox
cli,
and
your
ci
server
will
do
that.
Your
ci
server
techton,
for
this
example,
will
promote
your
docker
images
from
one
registry
to
another
by
sending
an
api
request
or
by
using
jeffro
cli,
and
then
the
image
will
be
promoted
what's
really
exciting
about.
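With the JFrog CLI, the promotion call can look roughly like this (a sketch; the repository names, image and tag are made up, and the exact flags may vary by CLI version):

```shell
# Server-side promotion: the image moves from the dev registry to the
# test registry inside the registry server itself, with no pull/push
# round-trip through the client.
jfrog rt docker-promote my-app docker-dev-local docker-test-local \
  --source-tag 1.0.0 --target-tag 1.0.0
```

There is also a matching REST endpoint (roughly `POST /api/docker/<source-repo>/v2/promote` with a JSON body naming the target repository, image and tag) that a CI server such as Tekton can call directly.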
The promotion is immediate, and then, obviously, when you set up your environments, your environments will only have access to the right registry, so they will only see the one registry that they're allowed to see, and they won't even know about the existence of the others. This is exactly how you guarantee that your production cluster won't by mistake access any of the dev images: because it doesn't even know the dev registry exists.
A
They
only
see
that
their
docker
dev
product
registry
and
those
are
the
strongest
quality
gates.
You
can
hope,
but
you
can
hope
to-
and
this
is
exactly
why
it's
so
good,
so
we
actually
have
like
a
three
side
win-win-win
situation
here
from
one
side,
we
have
single
point
of
access
to
multiple
registers
when
needed,
and
that's
the
virtual
registry
that
you
can
access
very
very
conveniently
from
the
other
side.
A
So
we
spoke
about
how
important
it
is
to
take
control
of
your
dependencies,
both
to
make
sure
that
they
are
not
updated
without
your
knowledge,
but
also
to
make
sure
that
you
are
isolated
from
the
sudden
changes
that
the
remote
sources
of
your
dependencies
can
inflict
on
you,
and
I
obviously
talking
about
docker
hub
now
and
while
we
got
you
just
protected
from
docker
hub,
there
are
other
dependencies.
I
mean
you
not
only
build
your
docker
image
from
the
base
image
and
then
do
nothing
with
it.
There
is
other
stuff.
A
So
here
is
an
example
of
the
remote
docker
proxy,
and
you
can
see
here
how
your
base
image
is
now
in
docker,
remote,
cache,
safe
and
sound.
You
will
always
be
able
to
consume
it
from
there.
No
matter
what
docker
hub
will
do
is
it
and
you
will
also
be
able
to
unlimitedly
access
it,
no
matter
how
docker
habit
throttles
the
request
now,
but
there
are
others
and
to
have
those
others
in
the
same
tool.
So
you
will
be
able
to
build
like
a
cross
cross
package
metadata.
Find me on Twitter, @jbaruch. That was a CD (Continuous Delivery) Foundation webinar, so #cdf is the right hashtag, and you can go to jfrog.com/shownotes to find all the links there: the video, the slides, and also a raffle as a thank-you for being here with us. The raffle is actually quite nice: you can enter to win Apple AirPods Pro, so don't forget to visit the show notes and come away better off.
B: Great, thank you so much. We really appreciate your time and expertise. As always, we look forward to seeing you at DevOps World 2020. You contributed a few talks to the CDF community sponsor track. Do you have any other talks that you're hosting there?
A
No
there,
I
think
I
have
two
talks
in
the
devs
world,
I'm
not
sure
about
tracks,
I'm
pretty
sure
that
at
least
one
of
them
is
in
the
city
of
track.
The
other
might
be
in
other
okay
crack,
but
yeah
absolutely
exciting
for
the
devs
world.
It
will
be.
It
will
be
a
great
conference,
so
yeah,
let's
meet
you
all
there
and
it
will
be,
it
will
be
fun.