From YouTube: Distribution Team | UBI Images Discussion
Description
The Distribution team discusses UBI images, including why we build them and how.
https://gitlab.com/gitlab-org/build/CNG/-/tree/master
https://gitlab.com/gitlab-org/charts/gitlab/-/issues/1796
A: Okay, can everyone see my screen? Perfect. Yeah, so the UBI images: we currently don't have them running on merge request or branch pipelines by default.
A: So, typically, when you look at a CNG pipeline, it doesn't include them. The pipelines that do include them are tags on dev: if you have a -ubi suffix on the tag, which we have the release tools add when we release, it runs the UBI-specific pipeline. And also, the way we have been testing the UBI pipeline is by manually triggering it.
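The tag-based trigger described here might look roughly like the following in `.gitlab-ci.yml`. This is a sketch, not the actual CNG configuration: the job name, the `build-ubi.sh` helper, and the manual-run rule are all illustrative assumptions.

```yaml
# Hypothetical sketch of a tag-driven UBI job, not the real CNG config.
gitlab-webservice-ubi:
  stage: build
  script:
    - ./build-ubi.sh   # assumed helper script
  rules:
    # Run on tags carrying a "-ubi" suffix, as added by release-tools.
    - if: '$CI_COMMIT_TAG =~ /-ubi$/'
    # Allow manually triggered runs for testing the UBI pipeline.
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: manual
```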
A: Yeah, that looks right, and we can tell immediately because it's broken up into a lot more phases: there are build stages and then final image stages in the UBI pipeline. So the UBI images are just images that use the Red Hat UBI containers as the base image, which is a hardened container that they provide. In addition, we also wanted this container to be offline-buildable, which is a requirement from the DoD.
A: So we have them broken up into these build containers and final containers. The build containers do all the work that has to reach out to the internet, to grab sources and things like that, and at the end they have their compiled build resources.
A: And then only these final images are what users actually download from the registry, or in the DoD's case, they actually build this set from the Dockerfiles. Typically, what these do is just copy in the compiled assets from those build containers and then add whatever dependencies they need, because on the DoD side, for offline builds, they're only allowed to add dependencies that are in the default Red Hat repositories.
A
So
absolutely
anything,
that's
not
in
the
default
red
hat
repositories
for
the
ubi
images
have
to
be
built
first
or
retrieved
first
in
these
build
containers,
and
this
whole
idea
of
so
that
differs
from
our
regular
cng
release
and
that
we
don't
have
these
build
containers.
A
This
approach
of
having
separate,
build
containers
in
separate
final
images,
probably
is
due
as
one
of
the
steps
for
our
regular
images
as
well,
but
we
did
it
first
as
a
kind
of
an
experiment
for
the
ubi
build
images.
So,
in
terms
of
the
repository,
what
it
looks
like
is
almost
all
of
our
images
have,
in
addition
to
our
regular
docker
file,
have
a
dockerfile.ubi
and
if
they
need,
if
they
need
a
bill,
a
separate,
build
container.
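The build/final split could be sketched as a pair of hypothetical Dockerfiles. The file names, image names, URL, and package choices below are illustrative assumptions, not the actual CNG files:

```dockerfile
# Hypothetical build-container Dockerfile: allowed to reach the internet
# to fetch and compile sources, leaving artifacts under /assets.
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf install -y gcc make \
 && curl -fsSL https://example.com/app.tar.gz | tar -xz \
 && make -C app && make -C app install DESTDIR=/assets

# Hypothetical Dockerfile.ubi: the final, offline-buildable image. It may
# only add packages from the default Red Hat repositories, and it copies
# the compiled assets out of the already-built build image.
FROM example-registry/app-build-ubi:latest AS build
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf install -y openssl && dnf clean all
COPY --from=build /assets /
```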
A: So, just in terms of the UBI images, that's kind of the high-level build case. Most of the images have both a build and a final image stage. Some of the images will only have a build stage, because they never have a final image that users need to download; they're only ever included as dependencies. Python is one example of that.
A
It
might
be,
as
far
as
I
know,
with
all
of
our
our
images
so
far,
they
either
need
the
build
or
they
need
both
the
build
and
the
final
image.
I
think
in
the
case
of
the
alpine
certificates-
or
in
this
case
I
guess
they're
not
really
alpine,
it
probably
will
be
a
case
as
far
as
we
know
right
now,
because
you're
not
gonna
have
to
grab
anything
from
anywhere
other
than
the
default
repositories.
A: So in terms of the UBI images, that's the case. Now, the alpine-certificates image itself, or just getting the certificates to work, is kind of a separate issue. We very specifically used Alpine because of the command that was there by default for us to include other certificates. For the UBI images, Jason already has a comment on the issue regarding what that might look like. But the other thing is that they end up in a different location by default, which might be fine for the image, but it means that, at least by default, our chart won't be looking for them in the right place, so that's going to have to be handled as well. Jason might be able to speak to that a little more. Go for it.
C: Thanks, Zoom. Okay, yeah, guess what: this time, instead of my audio being at 10, my mic was at 10. All right. So the alpine-certificates container essentially does nothing but rebuild /etc/ssl into a pre-made ca-certificates bundle that we can just dump around, which gets consumed by the ssl-certificates job, which actually combines everything in that directory into a functional set.
C: That's still how it will behave in UBI, as far as I understand; it's just that the original location these files come from is different. So the command to generate them on the system has changed, and where the symlink ends up pointing has changed, but how the application operates is still out of /etc/ssl/certs.
B: Okay, that's helpful. So, to do a quick recap: we build UBI images for most of the services in here. If one requires reaching out to the internet to get dependencies, we do that in a separate build image and then inherit from that in the final image; the final image is the one that will include any user or label configuration. And then, in the case of alpine-certificates, we shouldn't need a build image.
C: I suspect there will be very little difference. I have to go review the actual script that's inside the Alpine containers and exactly what it does for its rebuild patterns.
C: So there's a fair chance that we'll actually have to have a different script. I believe that's because we call update-ca-certificates, which is outside of the update-ca-trust patterns.
C: It's the same behavior, but we'll have to have a different script that's copied into the same location on the UBI image, because the execution of the certificates container will expect it to be the same. So its intent is: I'm going to pass you this, you're going to run the thing, and it'll update the stuff.
B: Got it, that's helpful. Okay, yeah, that covers my questions. For the context of why we do it and everything, I might just write some of this down and add it to the docs as well. From your quick look at this issue, do you think it'd be just a quick five-minute thing for us to run through it together?
C: I can't tell you for a fact it's going to make it in the next two days, just because we need to get through it and test it and verify everything; that's the biggest thing. The amount of effort to actually make the change, as perceived, should be relatively small. We're not really pulling anything from the internet here, DJ, unless I'm mistaken; we're not pulling any sources. It's literally just the script that's part of this repository, that's my understanding.
C: The script would have no functionality to immediately detect whether it was UBI or just plain Red Hat, and we specifically care about knowing which one is which.
A: Sorry, I was looking at a way to parallelize the UBI jobs with our regular ones; I didn't catch that last question.
C: Okay, so it basically is: duplicate the script, or... no, I think I just answered my own question, but I'll explain it. The question, originally, was: we said to just duplicate the script and install it to the same location, which brings up: can we make the script capable of detecting the platform? I think iteration one is literally just make the changes and know it works; iteration two is actually seeing if we can make it a single script.
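The "single script" idea from iteration two could detect the platform by checking which CA-update tool the base image ships. This is only a sketch under the assumption that Alpine-family images provide `update-ca-certificates` and UBI/Red Hat images provide `update-ca-trust`; the function name is made up.

```shell
#!/bin/sh
# Hypothetical helper: print the CA-bundle refresh command available on
# this platform (update-ca-trust on UBI/Red Hat, update-ca-certificates
# on Alpine/Debian), or fail if neither is present.
detect_ca_update_cmd() {
  if command -v update-ca-trust >/dev/null 2>&1; then
    echo "update-ca-trust extract"
  elif command -v update-ca-certificates >/dev/null 2>&1; then
    echo "update-ca-certificates"
  else
    echo "no known CA update tool found" >&2
    return 1
  fi
}
```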
B: That's what I mean: when I add the environment variable to try to run the job, is it still going to skip it if it's not? I saw that earlier you ran it against master with the environment variable, but that same environment variable on a different branch, like an MR branch, still triggered the job.
A: While I was distracted during your last question, I saw that we have this idea of parallel matrix jobs in CI, where we could literally keep the same job but run two of them at the same time with a different environment variable. So there's probably a way we can start doing this a little better.
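The parallel matrix idea mentioned here might look something like this in GitLab CI. The job name, helper script, and variable values below are illustrative assumptions, not the actual CNG configuration:

```yaml
# Hypothetical sketch: run the same build job twice, once for the regular
# image and once for the UBI variant, via GitLab CI's parallel:matrix.
build-image:
  stage: build
  script:
    - ./build.sh "$IMAGE_VARIANT"   # assumed helper script
  parallel:
    matrix:
      - IMAGE_VARIANT: ["debian", "ubi"]
```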
C: Yeah, that's a future iteration, right up there with actually specifying needs. So, yes.
A: Unrelated to the work you've got to do here, Mitch, but just for some context on why they don't automatically run: it's because we already have a mess of a CI file and we didn't want to duplicate the jobs.
B: Cool. I'll write up a little MR to add some more docs to that file, and I'll open up an MR for this issue soon. Thank you.