Description
Surprisingly, implementing a secure, robust, and fast promotion pipeline for container images is not as easy as it might sound. Automating dependency resolution (base images), implementing multiple registries for different maturity stages, and making sure that we actually run in production containers from the images we intended can be tricky. In this talk, we will compare different approaches, compile a wish list of features, and create a pipeline that checks all the boxes using free and open-source tools.
Presenter:
Baruch Sadogursky, Head of DevOps Advocacy @JFrog
A: All right, I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, "Best Practices in Implementing Container Image Promotion Pipelines." I'm Chris Jones Clark, consultant at Level 25 and a Cloud Native Ambassador with the CNCF, and I'll be moderating today's webinar. We would like to welcome our presenter today, Baruch Sadogursky, who is the Head of DevOps Advocacy at JFrog.

Please feel free to drop your questions in the Q&A, and we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of the code of conduct: basically, please just be respectful of all your fellow participants and presenters. And with that, I'll hand it over to Baruch to give today's presentation.
B: Thank you. Thank you very much, Chris, and let's get started. Yes, we're going to talk about containers today, and, well, when we say containers, we all love to entertain the idea that we don't have lock-in and that there are different container implementations. But between you and me, when we say containers we mean Docker most of the time. So excuse me if I use those terms interchangeably during today's webinar, because really, looking at the industry, we are in a state where we say containers and mean Docker, and say Docker and mean containers.
B: Docker is obviously amazing. It revolutionized the way we do software, everything we do with software, and how we go about software. But like any software, it is not perfect, and this is a Venn diagram that you can apply to almost any piece of software in your life as a professional. If you think about it really hard, I believe you will find this Venn diagram to be correct.
B: So how can we build trust in what we have in our production containers at the end of the day? As Chris already mentioned, my name is Baruch Sadogursky. I'm the Chief Sticker Officer at JFrog. It means that I go to conferences like KubeCon and give people awesome stickers. Since we don't have physical conferences now, I cannot give you awesome stickers, so I will serve as the Head of DevOps Advocacy instead, and we will talk in this webinar.
B: The most important piece of information on this slide is my Twitter handle, @jbaruch. Please feel free to connect with me on Twitter and we'll take the conversation there. Talking about codes of conduct and how to behave: this is an amazing diagram from an amazing book, The Culture Map. Since all of us work in multicultural environments, I really recommend you read it if you haven't. On this diagram, you can see that the most emotional, expressive, and confrontational people in the world are from Israel and Russia.
B: So if I offend you during this talk, I apologize in advance. The most important slide of this presentation is the show notes: go to jfrog.com/shownotes and you will find a special page dedicated to this webinar at the top link. You'll find the slides already uploaded there, the video that we'll upload later today, all the links to everything I mention, including the Culture Map book we just spoke about, a place for commenting and rating, and a very, very nice raffle to thank you for being here.
B: It's a Nintendo Switch Lite with the Animal Crossing game; you should definitely try to participate and win. Okay, so those were my housekeeping items. Let's get back to it: promotion pipelines for containers. When we talk about a concept that we want to apply, we usually ask: is it something that we already did, and how should we adopt it? And the good news is that CI/CD pipelines, promotion pipelines, are something we have been doing for years.
B: Basically, you have some artifacts that are now in a test or integration environment, and then you decide they are good enough and you promote them into the system environment. You install them in the system-testing runtime environments, run all the system tests in this example, and if they are good, you promote them to the next level, and so on.
B: Now, this is all very familiar from whatever you did before you got into containers and Kubernetes and what not; you probably did it with whatever technology you used before. What changed with Docker is that Docker images are large and the structure of their management is not trivial. The registry is not just a file system where you put your artifact, your Java artifact or your npm archive; it requires a lot of work to build this pipeline with Docker images.
B: On the other side, we have the very powerful, simple, and appealing docker build command and the Dockerfile, and what it drives a lot of people to is using the Dockerfile as the artifact that we promote, and then building the image from scratch for each and every environment. So instead of promoting the artifact that we built, a lot of people tend to promote the Dockerfile and run docker build in every environment: we take the Dockerfile, the build source, as the artifact; we build it in development, then we build it again for system test, then we build it again for the production mirror, and then we build it again in the production environment. It's very convenient, because all you need to do to promote a text file is tag it in Git: you attach a tag that describes its state, and then you can build and deploy to whatever environment you want. It sounds like a good idea, but fast and cheap builds are not always the way to go, and I will give you one example. I once wanted to create the most unstable Dockerfile possible, and I said, okay, I will create something that doesn't make any sense, that no one will ever use; it will be for explanation purposes only.
B: But then I went to the Internet, and I discovered that the Internet is full of Dockerfiles which are much less stable than anything my imagination could produce. Eventually, this file, which is just an example, is actually used in production; it has many forks, and you can find the link to it in the show notes (again, jfrog.com/shownotes). You'll see that it's not a fantasy; it's actually something that is used. And it is a horrible Dockerfile, because every line in it refers to an unstable version of a dependency. When you say FROM ubuntu, you actually mean: take whatever version of Ubuntu is on Docker Hub right now, download it, and build with it. It's the same with Node.js at the latest version, with Python at the latest version, and even ADDing our app.
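A minimal sketch of the kind of Dockerfile being described (not the exact file from the slide; the package names and URL are illustrative). Every line pulls in whatever is newest right now, so two builds run a week apart can produce different images:

```dockerfile
# Unpinned base image: resolves to whatever "ubuntu:latest" is on Docker Hub today
FROM ubuntu

# Unpinned packages: installs whatever versions the mirrors serve right now
RUN apt-get update && apt-get install -y python3 nodejs

# ADD of a moving target: the file behind this URL can change at any time
ADD https://example.com/app/app.tar.gz /opt/app/

CMD ["/opt/app/run.sh"]
```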
B: We should obviously fix it, and we can try. We can say: okay, we can use a version here, so we will use 19.04. The question is: is it better? Well, to some extent. First of all, obviously, now that 20.04 is out, we won't get it instead of 19.04. But tags in Docker Hub are not immutable, so Canonical can, for one reason or another, push a new image and tag it 19.04.
B: So when we go to download, when we go to build from this file, we may still get a slightly different version of our base image. There are usually very good reasons for that, usually security vulnerabilities, but still, when you want a repeatable build, you cannot allow it to happen. There is a way to nail down a version very, very strictly, and that's using the hash: if I use the digest of an image, it correlates to an array of bytes, and this is immutable; it will always resolve to the same array of bytes. The problem is, this is completely unusable. You have no idea which version of Ubuntu my Dockerfile refers to, and frankly, you don't even know that it's a valid hash; maybe I just fell asleep on my keyboard, or my cat walked across it, and this is what we ended up with. We have no idea what version this refers to.
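For completeness, this is what digest pinning looks like; the digest below is a made-up placeholder, not a real Ubuntu image digest. It is fully repeatable, and exactly as unreadable as described:

```dockerfile
# Pinned by digest: always resolves to the same bytes, but tells a human nothing
# about which Ubuntu release it is (the digest here is a fake example value)
FROM ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```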
B: So this is very unusable. Now, we also have all our applicative dependencies, and you might or might not know how to lock their versions; sometimes locking the version is impossible. In order to know how to lock the versions of Python and Node.js in those examples, you need to know how the apt-get command works, which parameters it accepts, and how to specify a version. Now, I bet the majority of you,
B: if you come from an ops background, know exactly how to nail down apt-get versions and how to specify a specific version of Python. But what about this? This is a Maven command, and I guess some of you know how Maven works, and you might imagine that you need to check the pom file and verify that all the versions are nailed down. But then my question would be: what about transitive dependencies?
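For those who do know the apt side, version pinning there looks roughly like this (the version strings are illustrative; the real ones depend on the Ubuntu release and its current package index, which is itself part of the repeatability problem):

```dockerfile
FROM ubuntu:19.04
# apt-get lets you pin with package=version, but you have to know the syntax
# and the exact version strings available in this release's index
RUN apt-get update && \
    apt-get install -y \
      python3=3.7.3-1 \
      nodejs=10.15.2~dfsg-2
```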
B: You need increasingly deep knowledge about each particular part of your image to make sure it is reproducible and that you've nailed down the versions. What if I now use Bazel for my Java build? Do you know how to nail all the versions there? And what happens if I use Go, either before 1.11 with one of the available Go build tools, or after 1.11 with the official Go modules? Do you know how to nail all the versions there? It requires increasingly complicated knowledge to create a reproducible build. And then there's custom stuff. What about that? What if our Docker image just goes ahead and downloads a bunch of files from the Internet? How can we guarantee that those files never change? The answer is: we really can't. So this is why the problem exists: when we go and rerun the build of our Docker image for every environment, we end up with different Docker images in every environment. That's obviously a very, very big problem.
B: So the way we solve it is that, instead of building in each and every environment, we actually want to build once and promote those binaries through quality gates all the way to production. We run docker build once, and then we have an image, and this is what we are going to promote; then we go through the quality gates and promote it through our pipeline.
B: The gates are there so that QA won't get dev images by mistake, images which are not ready for QA; staging won't get images which are not ready for staging; and obviously production won't get any images that are not ready to be production builds. And this, again, is not as trivial as it might sound, and not as trivial as your experience with other technology stacks might suggest; Docker makes it a little bit harder. So let's see how we can build a rock-solid pipeline.
B: The real question we need to answer when we build this pipeline is: how do we separate dev from prod? How do we separate dev from staging, and staging from prod? How do we separate the environments? One of the options Docker gives us is using metadata: we can tag our images with labels, key-value pairs, and we can say environment=staging, environment=testing. This is nice, but it requires us, first of all, to make sure that all the images are annotated.
B: It requires us to make sure that our runtime environments check those labels every time they pull a Docker image, and it basically cannot be enforced in any way, because there are no RBAC controls on labels. So this is nice, but we can do better. Another option is using Docker repositories. Repositories in Docker are actually folders in our Docker registry, and what Docker suggests is taking those repositories, those folders, and creating virtual folders for each and every image.
B: So each and every image will have its own folders for development, for testing, for production. This is already better, because you can apply RBAC to repositories, but it's still not very useful, because then for each and every new image you create (and think about microservices, with tens of thousands of images) you need to remember to create those repositories, those folders, and make sure you attach the correct RBAC to each and every one of them.
B: So this is also nice, but we actually need to do better. What we want to do is create a separate registry per environment: we will have a registry with only the dev images, a registry with only the staging images, and a registry with only the production images. This sounds like something we should be able to do; how hard can it be to stand up a number of registries? And apparently, it's not so easy, because we have historical limitations.
B: It's a little bit like this one, if you are old enough to remember, and it makes our ability to have multiple registries on the same host, where our pipeline runs, very, very limited. The problem is the standard for the Docker tag, how the Docker tag is defined. When you look at the standard, you see that we have the host, we have the port, we have the user, we have the Docker image, and then we have the tag, the version.
B
So
basically,
there
is
no
way
here
that
we
can
express
okay,
what
maturity
of
registry
is
it
on
the
same
host?
There
is
no
way
to
do
it
right,
so
there
is
no
way
to
express
that
we
have
multiple
registries
per
host.
What
do
you
wanna
have
is
something
like
that
I
have
my
host
I,
have
my
port
and
then
I
wanna
do
I
want
to
have
dr.
dev
as
a
separate
registry.
Blocker
QA
is
a
separate
registry
docker
staging
and
dr.
prade,
all
as
separate
registries
on
the
same
host.
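Spelling that out with plain shell string handling (the host and repository names below are placeholders), you can see exactly which components the reference format gives us, and that none of them is a "which registry on this host" slot:

```shell
# A Docker image reference has the shape: host[:port]/path/name:tag
REF="registry.example.com:8081/docker-dev/myapp:26"

HOSTPORT=${REF%%/*}   # everything before the first slash -> registry.example.com:8081
REST=${REF#*/}        # the remainder                      -> docker-dev/myapp:26
REPO=${REST%:*}       # strip the tag                      -> docker-dev/myapp
TAG=${REST##*:}       # the tag itself                     -> 26

echo "$HOSTPORT $REPO $TAG"
```

Note that the docker-dev path prefix here is just a naming convention unless the server gives it meaning; that observation is exactly what the "abuse the user component" trick below builds on.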
B
You
cannot
do
it
because
dr.
tab
want
allowed
you,
so
that's
kind
of
a
little
bit
a
strange
limitation
that
won't
allow
us
to
do
it.
So
obviously,
first
reaction
will
be
well
that
sucks
I
cannot
do
it,
but
then
we
can
start
thinking
about
it
and
getting
smart
on
how
we
can
do
it.
One
of
the
options
will
be
virtual
hosts
or
virtual
ports.
So
this
is
how
it
works.
When
you
run
docker
tag,
host
port
and
the
tag
name,
then
it
converts
into
this
URL
of
the
actual
request.
When
it
goes
to
dr.
B
B
B: So here's an example with a fake port. We can say: okay, now we are going to specify in our docker pull another port, not the real one, 8081, but a fake one, 5001. Every time our reverse proxy, another layer of abstraction before we actually hit the Docker registry, receives a call to this non-existent port 5001, what it actually has to do is translate it into a call to docker-dev; then 5002 will go to docker-staging, 5003 will go to docker-prod, and so on. Now, this actually works, and this is an approach that a lot of users take. The only problem is that it requires this additional software: it requires the reverse proxy.
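A reverse-proxy layer for this could look roughly like the following nginx fragment; the ports, hostnames, and the upstream path layout are assumptions for illustration, and real products wire this up in their own way:

```nginx
# Fake port 5001 -> the "dev" registry; 5002 -> "staging"; and so on.
server {
    listen 5001;
    location /v2/ {
        proxy_pass http://localhost:8081/v2/docker-dev/;
    }
}
server {
    listen 5002;
    location /v2/ {
        proxy_pass http://localhost:8081/v2/docker-staging/;
    }
}
```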
B
Well,
a
lot
of
products
already
have
it
built
in,
but
it's
still
configuration
and
babysitting
etc,
and
we
still
can
do
better
and
we
can
do
better
by
abusing
things.
So
look
at
that.
We
have
here
this
user.
Well,
while
it
is
important
and
usually
it
is
not
used
and
as
you
saw
in
previous
example
with
busybox,
we
didn't
use
it
at
all,
but
this
token
becomes
available
and
we
can
use
this
token
for
for
actually
providing
a
witch
registry
exactly
our
we
wanna
a
tag
or
push
or
pull
our
ador
care
image.
B
So
this
actually
becomes
very,
very
easy,
and
while
we
lose
the
ability
to
use
it
for
them
for
the
username,
we
actually
gain
the
ability
to
have
multiple
registries
per
host
without
having
a
reverse
proxy,
and
this
is
very,
very
useful.
Now,
okay,
we
set
up.
We
have
multiple
registries
in
the
same
in
the
same
host
and
the
next
version
will
be.
The
next
question
will
be:
how
do
we
actually
promote?
How
do
we
take
those
images
from
staging
from
dev
to
staging
from
staging
to
prod
and
the
way
docker
works?
The
way
dr.
B
kind
of
implies
that
you
use
it
is
pull
retag
and
push
now.
This
is
wrong
on
so
many
levels.
First
of
all,
we
are
talking
about
two
registers
in
the
same
host.
Why
would
I
pull
an
image
to
a
different
host
over
network
images?
Are
big
just
to
be
able
to
rename
it
and
push
it
back.
This
is
just
wrong,
but
again
there
is
no
native
way
of
doing
something
else.
Now
the
good
news
are.
There
are
tools
that
can
help
us.
B
I
will
use
an
example
of
Jeffro
container
registry,
which
is
a
container
registry
that
supports
all
that
obviously-
and
it's
free
for
you
to
use,
but
there
are
other
tools
that
use
that
as
well,
and
what
I'm
describing
here
is
the
approach
that
you
need
to
look
for
in
your
tool,
not
necessarily
doesn't
matter
which
tool
is
that
and
here's
the
approach
that
I
would
suggest
you
will
look
for.
So
what
you
see
here
is
actually
a
bunch
of
registries
inside
the
same
the
same
tool
right
inside
again,
this
is
Jeffro
container
registry.
B
So
you
can
see
here
we
have
docker
dr.
dev
local.
We
have
testing,
we
have
staging
docker
tests,
local
docker
stage,
local
and
dr.
prade,
local,
all
inside
one
tool,
and
then,
if
you
need
to
promote,
all
you
need
to
do
is
actually
issue
an
API
request
and
no
files
are
actually
moved
because
the
storage
of
all
these
images
are
on
the
same
storage.
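As a sketch, a promotion like that can be a single HTTP call. The shape below follows the Artifactory-style promote endpoint; the host, repository names, image name, and credentials are placeholders, so treat it as an illustration rather than a copy-paste recipe:

```shell
# Promote tag 26 of "myapp" from the dev registry to prod,
# without pulling or pushing any layers.
SOURCE_REPO="docker-dev-local"
TARGET_REPO="docker-prod-local"
IMAGE="myapp"
TAG="26"

# Request body for the promote API: move (copy=false) the image to the target repo
BODY=$(printf '{"targetRepo":"%s","dockerRepository":"%s","tag":"%s","copy":false}' \
  "$TARGET_REPO" "$IMAGE" "$TAG")
echo "$BODY"

# The actual call (placeholder host and credentials):
# curl -u "$USER:$API_KEY" -X POST -H 'Content-Type: application/json' \
#   -d "$BODY" "https://registry.example.com/artifactory/api/docker/$SOURCE_REPO/v2/promote"
```

Because the registries share one backing store, the server can satisfy this by flipping metadata rather than copying layers, which is why it completes near-instantly.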
B
So
we
don't
move
filers
even
around
on
the
disk,
not
even
mentioning
stuff
like
pulling
retargeting
pushing
all
we
do
is
they
change
the
visibility
of
those
images
for
our
environments
now
additional
features
that
you
might
like.
Now
you
have
four
different
four
different
doctor
registers
you
as
a
developer
or
your
developer.
They
need
to
work
with
all
of
them.
B: Constantly switching between registries is painful. So instead, if we can have a virtual registry that presents a single registry but behind the scenes contains a number of them, this obviously helps a lot and simplifies our interaction with Docker. Another feature is proxying remote registries. This is also very useful to have: first of all, again, we simplify the configuration; now our virtual repository, or virtual registry,
B: not only sees all the Docker images that are stored locally, but also all the images that exist in a remote registry like Docker Hub. It also provides protection against situations when the remote registry is down, and I'm sure you've noticed over the last couple of months that there were a number of times when Docker Hub went down. If you used a registry that gave you this proxying ability, then obviously you weren't affected, and if you didn't, you probably were.
B
This
is
another
nice
feature
and
going
back
to
the
visibility
of
those
registries
from
the
outside
world.
When
you
have
your
clusters,
the
dev
cluster,
the
test
cluster,
the
staging
in
the
prod.
They
only
see
those
registries
that
they
are
allowed
to
see,
and
this
is
exactly
what
we
spoke
about.
This
provides
the
ultimate
quality
gates,
the
strongest
quality
gates
that
can
be.
There
is
no
way
that
the
production
cluster
will
now
be
able
to
access
the
testing
environment,
because
it
doesn't
know
that
there
is
your
registry
there
at
all.
B
B
B: Another topic that comes up as a reaction, especially to the first part of the presentation, where I talk about how important it is not to use latest for base images and how ubuntu:19.04 is much better than just ubuntu: a lot of people actually like the simplicity of working with latest. They can always say, give me the latest Docker image, and they know they will get something that has been recently updated and is good to go.
B: Now, we can still have both worlds, and that's again an example from JFrog Container Registry, but others probably do that as well: using metadata to express what latest actually relates to. The biggest problem with latest is that you don't know what it was the latest of. Was it really the latest, or was it a latest created a month ago that still sits there, while we have hundreds of newer builds and latest was just not updated? So using metadata, you can have a latest
B: that refers to a certain build by number, or a certain tag by number, and this is how we know this latest actually refers to the actual image with the tag 26. This gives you a win-win: the simplicity of using latest, and you always know what it really means, as long as the 26 it refers to is promoted as an immutable artifact. Because, if you remember, if you keep rebuilding 26, you will not actually know if this 26 is the 26 that you started with. So remember:
B: first, promote immutable artifacts; once you do that, you can also alias one as latest if you wish to. And it's very important to make sure that this connection, this alias between latest and 26, is super clear. It is there in the metadata, and this metadata can be automated, so you can actually query, in an API, in a query language:
B: what is my latest, and learn that it is 26. I mean, the UI is definitely nice, and it's very nice for our webinar, but at the end of the day, when you go and automate your pipelines, those questions should be answered with an API and a query language. Now, it is obviously no less important for you to nail down not only your Docker images but also the rest of your dependencies, because at the end of the day, no one uses Docker just for Docker. You always have something inside: you have your npm,
B: you have your Java, you have your Go, you have your C and C++ with Conan. At the end of the day, there are dependencies which also need to be locked down. You need to know that when you install your JDK in your Docker image, this JDK is exactly what you meant it to be, and for that, again, if your tool supports it, there are remote repositories, as I mentioned for Docker. This is what you do for your base image.
B: So your base image will be cached in your tool, and then you know that every time you need your base Ubuntu to rebuild your image, it will be there. You can rely on the fact that it's cached, and it doesn't matter if it was changed in Docker Hub, or even deleted, or Docker Hub went away because, I don't know, something happened. So you have your cache, you control your dependencies. And for dependencies which are not Docker, again, here's an example from JFrog Container Registry.
B: You can see how having generic storage, generic repositories as we call them, allows you to cache and control the dependencies beyond your base image. So when you need to put the JDK, the Apache Tomcat, and, in the end, your application inside your Docker image, you know that they all come from a trusted and controlled environment. And exactly as we did with Docker registries, we can do with generic repositories: we can have an entire pipeline of those repositories.
B: The same goes for Helm packages, Helm charts, which are supported in JFrog Container Registry as well, or for any type of artifact in a generic repository. So here we go: your JDK and your Tomcat. Basically, you have to own your dependencies if you want to build a reliable pipeline: your base image, by caching it from Docker Hub; your infrastructure, everything that your application needs to run; and your application files as well.
B: So, just to summarize the conclusions: you build only once. Very important, because there is practically no way to guarantee a repeatable rebuild. We can get closer by doing a lot of stuff and knowing a lot about our application and about how Docker works: don't use latest, don't trust tags either, always use hashes. This is the kind of knowledge that we need to learn, but it gets more complicated with every new technology we put into our container: we use apt for our dependencies,
B: so we need to know how to nail down everything in apt; we use Java, we need to know the Java build tools; we use Go, we need to know the Go tooling. It gets more complicated with more and more technologies that we use. So instead of trying to nail down every little piece that might make your build unstable and not repeatable, build only once, and if you build only once, you don't have this problem; you can rely on,
B: you can rely on this image being actually the same across all your separate environments. As we mentioned, it is very important to separate the environments, and using different registries is actually the way to implement the most robust and secure quality gates. Promote what you have already built, and then own your dependencies, and that means cache everything. Don't trust downloading stuff from the Internet, because either you will download something different, or you won't be able to download what you used before, because it isn't there anymore.
B: This is the place you go to get the slides, the video, all the links, and to participate, as I mentioned, in the very, very attractive raffle of the Nintendo Switch, as a thank-you for being here. I'm pasting the link to the show notes in the chat, so you can use it from there as well; that will make it one click away. So with that, thank you very much, and I think it's now time for questions.
A: Awesome, thank you so much. I definitely have some people in mind that I need to forward the recording to after this. So, if you have a question, please do put it into the Q&A box or tab at the bottom of your screen, and we'll get to as many as we have time for. The first one we got is: "Hi Baruch, are there any other tools that support this promotion without pull and retag? It is a much-needed feature."
B: Yes, thank you for this question. When we look at the landscape of the registries that are available to us: all the cloud providers have their own container registries, Google, Azure, and AWS; we have Harbor, which is an amazing project from the CNCF, a container registry by itself; GitLab supports a container registry; GitHub supports a container registry; everywhere you look, you see a container registry. The reason why it's so easy to find container registries is that Docker did a very good job making the container registry available as Distribution. It's actually called Docker Distribution, and it is an open-source, free, and relatively simple to run piece of software that gives you a container registry. Then you can put your UI on it, your brand on it, and provide additional features, as Harbor does, for example, with security scanning and what not. But at the end of the day,
B: most of them, if not all of them, have this Docker Distribution under the hood, and Docker Distribution means exactly that: a single, isolated container registry inside. Since each has its own storage, there is no way to promote easily between one container registry and another. So even in tools that supposedly can have multiple container registries in one tool, you will unfortunately end up promoting between them by pulling, retagging, and pushing again.
B: What we have is not an embedded Docker Distribution, and that was the only way for us to implement different views of registries with a shared backend storage. So I don't know about any other tool that does that, and the good news is that JFrog Container Registry is free to use, so you might very well give it a try.
A: Thank you so much. There is another question, from Daniel Silverman, who is asking if you could actually show a design example of a CI/CD pipeline that performs the internal Docker image promotion from dev to test to prod and deploys it to a Kubernetes cluster. He is working with Harbor, but it doesn't have to be specific to it.
B: So, give me one sec, and let me check if I have an example of a CI pipeline that does that; I will be more than happy to share it with you real quick. It won't be the entire pipeline, but I think you know exactly how to deploy to Kubernetes from JFrog Container Registry; the promotion part is the interesting part, though. Let me see if I can find the build and share it with you, so I will stop sharing the slides, and instead I will share my browser here.
B: Yeah, it's just that I had a lot of JFrog container registries around; some of them show the right thing, others not so much. So here it is, docker-prod-local, yay, okay, I found it. Let's look at the 26, which is actually what I used to take the screenshots for my slides, and if you look at the properties here, I hope we can see the build. Yes, okay, here we go: this is the build URL.
B: This is how it was built, and if I click on it, this is JFrog Pipelines, which is a CI/CD tool from JFrog. Obviously you don't have to use that, but I just want to show you the promotion. The promotion here, "promote application build", and the step that we use is just using curl. Okay, here is my key, which I will probably need to revoke once this webinar is recorded; don't do that,
B: don't hard-code your keys into your CI scripts. But what you definitely do is use the promote API, and then you say: okay, my target repo will be docker-prod-local, and I will promote this tag. I just name my Docker images with the run numbers, but it could be whatever makes sense; you just move it. And what it does: this is an operation that finished in six seconds, and those six seconds were actually the API call; the promotion itself was immediate, because, as I mentioned, nothing actually changed.
B: What changed is that before this build, 26 was in docker-dev-local, and now it's not there; there is only number 10 (because that promotion failed), and the 26 actually moved to prod-local. So this is how the promotion works: it's just one REST API call. By the way, the way you do the alias between the 26 and latest is also the same REST API, as you can see here.
B: All I do is promote it, kind of, from the same registry to the same registry, and what I do change is the tag: from the tag number, I retag to latest, and this is how we have here 26 and latest, and they actually refer to the same image. The interesting part here, of course, is the "docker refers to" property, which is another REST API call, which is right here, and that is "put properties".
B: So the property that I want to put is "docker refers to run number": I run it on the latest and I say, okay, it refers to 26, and indeed we are looking at build 26. I mean, I hope that gives you a clue about how to do it. Basically, this is how you build your pipeline: you build a Docker image, you push the Docker image, you publish the build info, and you do all your tests; here you would run your system tests, your integration tests.
A: For me it's almost 8:00 in the evening, so no coffee for me anymore. But thank you very much for a great presentation, and thanks to everyone else for joining us today. The webinar recording and the slides will be online later today. We're looking forward to seeing you in the future; have a great day. Thank you so much.