A: Today is October 27, 2022, and this is a Distribution team demo. Today I'm going to talk about working on the Amazon Linux 2022 image decomposition, and, as a side thing to that, I ended up following up on our previous demo about the worktree scripts: the scripts targeting git worktrees, which I've finally published and added some documentation to, for the folks who are interested. So with that being said, let me quickly run through the worktree tools, because this is how I prepare all my working environments. So, let's see, let me open the browser.
A: So, all there is... because I work using zsh, it was mainly geared toward zsh. If you are not a zsh user, bash is possible; I actually did a little bit of work there, and some of the zsh version is probably applicable to it as well. So all it is is a bunch of scripts that operate on git worktrees: you can create a worktree from a specified branch, you can work on a fork, and a couple of other things. So, really, really quickly.
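The helper scripts described here wrap plain `git worktree` commands; a minimal sketch of the underlying workflow, with invented repo paths and branch names, would be:

```shell
# Plain-git sketch of what such worktree helper scripts wrap
# (the repo location, paths, and branch names here are invented).
rm -rf /tmp/wt-demo && mkdir -p /tmp/wt-demo/main
cd /tmp/wt-demo/main
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'initial commit'
git branch my-feature

# "Create a worktree from a specified branch": the branch gets its own
# working directory alongside the main checkout.
git worktree add ../my-feature-wt my-feature

# Each branch now lives in its own directory, so you can work on all of
# them simultaneously without extra clones.
git worktree list
```

Returning "to root" is then just changing directory back to the main checkout; all worktrees share a single object store, which is where the disk-space savings over multiple clones come from.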
A: So those are the worktrees that I currently work with, for gitlab-build-images, for example. So if I wanted to just go... so, "worktree go" (live demo warning, this may actually not work exactly the way I expect it to), but, boom, I just go to the branch that I wanted to go to. Or, if I run "worktree go root", I return to my master. And also what I can do is "worktree open".
A: So you can work with all the branches simultaneously, and with our workflow I think it's quite handy to use worktrees, because otherwise you can go nuts on the disk space. So, that pushed to the side, that's enough of the worktree stuff. If you want to check it out, check it out; if you want to ask me questions, ask me later.
A: The main course of this conversation was working on the Amazon Linux 2022 builder image, and that's where I stepped into a few complications. I found out that, well, Amazon Linux 2022 had an incompatible OpenSSL version, and, thanks to DJ's prompting today, it turns out we've already done something similar with Jammy: we actually are building OpenSSL within the image. But I figured, since I'm there, I might as well go all the way and decompose that image into something slightly more manageable.
A: Even though it may not look it, for the purposes of debugging and other things I believe this is way more manageable than our previous composition of single-liners that were chained together just to cut down on the layers. So the end result here is a two-layer image, because there are two copies in there, and if we really wanted to get it down to one layer, we would just have to do an intermediate step: copy everything into that layer and then copy the entire layer over. Ugly, so I didn't want to do that.
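That intermediate-copy trick, for anyone curious, would look roughly like this. This is a hypothetical sketch, not the actual Dockerfile from the demo, and it is only written to a temp file here since no Docker daemon is available to build it:

```shell
# Hypothetical flattening pattern: stage everything first, then a single
# COPY of the whole filesystem so the final image is one layer.
cat > /tmp/flatten-demo.Dockerfile <<'EOF'
FROM amazonlinux:2022 AS staging
# ...all the real build steps and COPY --from steps land here first...

FROM scratch
# One COPY of the entire staged filesystem = a single final layer
COPY --from=staging / /
EOF
```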
A: So two layers should be fine. So why do I do this, decomposing things into standalone commands instead of one chained command? It's because of this: for example, when something goes wrong in your image. I'll just do something like this, I'll just add one more command here.
A: And let it fail, right? So let's rebuild this image, and, boom, it clearly failed. Now I want to troubleshoot it. The easiest thing to do is to docker run the intermediate image from the step right before the failure, and you're right at that step, before you did any of that. So now your working environment is in the state it was in right before you executed this command, and you can manually go and execute that command, the one that failed. If you're running it as one chained command, you can't do that.
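With the classic (non-BuildKit) builder, that troubleshooting loop looks roughly like the following. Image and tag names are placeholders, and the commands are written to a file rather than executed, since they need a Docker daemon:

```shell
cat > /tmp/debug-demo.sh <<'EOF'
#!/bin/sh
# One command per RUN means the classic builder keeps an intermediate
# image for every step that succeeded before the failure.
DOCKER_BUILDKIT=0 docker build -t al2022-wip . || true

# Drop into a shell in the most recent intermediate image, i.e. the
# state right before the failing command, and re-run it by hand.
docker run --rm -it "$(docker images -q | head -n 1)" sh
EOF
```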
A: You can't just jump into the middle of that command line and execute make install if everything else before it has succeeded. So I consider that to be a big benefit, especially when you're debugging, not to mention you're better utilizing the caches while you're developing. It really is convenient for developing; probably not as much of a convenience when it's already in production, but then again, does it really matter? And then the other thing, and I don't know how familiar everybody is with multi-stage builds, but this is something that I really like, is that...
A: You can build images within those stages. So the first thing here, I'm creating the builder base and installing all those development tools initially, and then next I'm just using that builder base twice, to build OpenSSL and to build Ruby as two separate entities. And I'm not using buildx right now; there's something odd with my Fedora and buildx.
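The builder-base layout being described can be sketched as a multi-stage Dockerfile like this. The stage names, package list, and install paths are placeholders, not the actual file from the demo, and it is only written out to a temp file for illustration:

```shell
cat > /tmp/al2022-demo.Dockerfile <<'EOF'
# One stage with all the development tools...
FROM amazonlinux:2022 AS builder-base
RUN dnf install -y gcc make perl tar

# ...reused twice, for two independent builds.
FROM builder-base AS openssl-builder
# ...fetch OpenSSL sources, ./config && make && make install...

FROM builder-base AS ruby-builder
COPY --from=openssl-builder /usr/local/ssl /usr/local/ssl
# ...fetch Ruby sources and build against that OpenSSL...

# Final image: one COPY layer per built artifact, two layers total.
FROM amazonlinux:2022
COPY --from=openssl-builder /usr/local/ssl /usr/local/ssl
COPY --from=ruby-builder /usr/local/ruby /usr/local/ruby
EOF
```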
A: So this is how I wanted to decompose those things. And the other interesting note: at some point in time somebody mentioned that if we're doing a COPY --from some image into the current image, there's a potential that it will be doubling the layers. Actually, no, because, at least with Docker, and I think Podman as well, they do an analysis of how the layer has changed and record only the changes in the layer. So it's not doubling the layers when it's the same thing over again.
C: I do like the approach of running each command in its own RUN, so it's easier to debug. That definitely is going to help.
C: Yeah, the only thing I see that would be beneficial to this, and I find myself doing it on occasion and trying to factor it into other things we have, is to get more diagnostic output during the build. Like, you know, even just a couple of echo statements during the build, saying: building with OpenSSL version this and Ruby version this, right? So if you go back and look at things, you're not trying to decipher: what was that? What was the value of that variable?
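A couple of echo lines of the kind being suggested might look like this. The ARG names and versions are hypothetical, the snippet is a Dockerfile fragment rather than a complete file, and it is only written to a temp file for illustration:

```shell
cat > /tmp/echo-demo.Dockerfile <<'EOF'
ARG OPENSSL_VERSION=1.1.1q
ARG RUBY_VERSION=3.0.4
# One line in the build log that answers "what went into this image?"
RUN echo "Building with OpenSSL ${OPENSSL_VERSION} and Ruby ${RUBY_VERSION}"
EOF
```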
C: If we do it like we have it on the right side there, where all the commands are appended, the echoes just give us output; it doesn't do anything beyond that. So either way, we should be more verbose. Again, I don't want to be so verbose that it's just overwhelming; I want to make sure that we're just giving out pertinent facts that are useful, or that could be useful.
C: And I see that, basically, the initial thought is things like what version everything is, and, if we're bringing in a specific item that's not normally brought in, you should be able to note that in the build, but also when we push the images. That drives me... another thing that's insane is that if I'm looking at a pipeline, trying to figure out what was just pushed, at best I get the SHA tag.
C: Every once in a while I get around to thinking, I should go put some of that stuff together and get it into CNG specifically, but, you know, I just haven't made the time.
A: Since we are talking about multi-stage builds: this stage, for example, could be useful to cache. We could potentially try to cache it, because when you docker build, you can actually specify a target, and you can say, I just want to build the builder base, and then you can tag it with a specific name and tag and ship it to the registry. And then you can recycle that over and over. But again, I didn't go that far just yet.
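The target-then-push flow just described, spelled out with placeholder registry and tag names; the commands are written to a file rather than run, since there is no daemon or registry behind this sketch:

```shell
cat > /tmp/target-cache-demo.sh <<'EOF'
#!/bin/sh
# Build only the named stage, tag it, and ship it to the registry so
# later builds can recycle it instead of rebuilding the toolchain.
docker build --target builder-base \
  -t registry.example.com/builder-base:al2022 .
docker push registry.example.com/builder-base:al2022
EOF
```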
A: The cache actually will help, in the sense that, first of all, if we have this format rather than this one, that means that for all those multi-line steps, if we prefetch the cached image and then try to build, it will see: oh wait, I already have those layers, I don't have to do any of this, so I can just do the rest of the steps, like we do with the rest of it. Actually, does anybody know... I did not look at the pipeline, but I'm getting the feeling that we are prefetching the cache when we're building those images? No? We don't? No.
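Prefetching would amount to pulling the previously pushed image and pointing the build at it with --cache-from; a sketch with placeholder names, again only written out rather than run:

```shell
cat > /tmp/prefetch-demo.sh <<'EOF'
#!/bin/sh
# Pull the last published image so its layers are available locally,
# then let docker build reuse any layers that still match.
docker pull registry.example.com/al2022:latest || true
docker build \
  --cache-from registry.example.com/al2022:latest \
  -t registry.example.com/al2022:latest .
EOF
```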
B: Where we still need to solve the caching is in between layers, right?
B: Okay, okay, makes sense. But yeah, in this repo, I don't think we're... for the test builds, we aren't even pushing, I think. At the end, we build it and we don't even push it, in this particular one. It's only the...
D: I think we do; it's just, this is the builder. This isn't the builder.
A: Because we tag them oddly, in my opinion: we have the base name for the entire build image, the gitlab-build-images, and then a colon, and then the name of the image itself, versus having the image name and then the version attached at the end. So we don't have the flexibility of rolling through them or doing any creative stuff.
A: Unfortunately. But otherwise, I take it that there are no general objections to me pursuing that direction. I will wait for the other folks from the team to raise their opinions, plus it's going to be in an MR, so there's going to be a chance for the maintainers to take a look at it. But do take a look at it; I don't think I'm fully done.
A: I'm going to look into more optimization, removing things that are unnecessary there, getting it as tight as possible and making it a nicer multi-stage build, and then we can take a look at whether we want to actually cache the intermediary results. Because, like I said, I don't see why we wouldn't cache this if we ever use the same thing someplace else in another image.
B: Yeah, we might move these to a different repo before we do that, because this repository isn't really actually a Distribution repository. You know, most of the images in it are actually for the upstream projects, like Rails, to run their CI pipelines.
B: These aren't the ones that we use to build Omnibus; these are the ones we use to test Omnibus, but the repo is called build-images, so we put them there. The Omnibus images in here we do control, so we can do this; but changing the CI to incorporate caching or something like that is probably more of an Engineering Productivity thing, specifically for this repo.
B: Yeah, and then over in CNG, which is really where more of these changes would be even more beneficial, I know Hussein has also been looking at, you know, how to calculate and cache these layers. We're already broken, because at least the UBI ones are using multi-stage builds already, so we've already kind of broken our cache for some of the pipelines. So yeah, definitely, getting those multi-stage builds cached, which they aren't at the moment, is going to be very helpful for CNG.
A: And if we are ever to incorporate those stages back into the cache: from my past experience, and maybe things have changed by now, docker build is unable to properly incorporate the caches; it just takes one cache and that's it. Whereas buildx actually can take all the caches, but you have to order them properly. I stepped on that landmine before, so that's something to look forward to, but...
A: It is possible to effectively use the caches; it's just that we would have to switch all of our builds to buildx. That's another thing that we need to look at, and, from what Hussein was saying about the multi-arch builds, he's leaning towards doing buildx as well. So in the end we may just have to go that way, despite my bias towards Podman.
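For reference, the multiple-cache-source form that buildx supports (where ordering matters, as noted) looks roughly like this; the registry refs are placeholders, and the command is only written to a file since it needs a buildx-capable daemon:

```shell
cat > /tmp/buildx-demo.sh <<'EOF'
#!/bin/sh
# buildx accepts several --cache-from sources; they are consulted in
# order, so list the most specific cache first.
docker buildx build \
  --cache-from type=registry,ref=registry.example.com/cache/builder-base \
  --cache-from type=registry,ref=registry.example.com/cache/al2022 \
  --cache-to type=inline \
  -t registry.example.com/al2022:latest .
EOF
```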
B: Yeah, and on the single-line RUNs versus multi-line RUNs: I think for the most part it really has been a combination of legacy in these projects, you know, from before multi-stage builds were as stable as they are today, and the cache being a little bit of a factor. You'll see when you get into the builder images specifically in that repo, not the build images but the builder images.
B: You know, we kind of did an in-between hack, where in the final step, I think, we always start from scratch and then copy the whole image over, to flatten the layers, right? A hack like that. So we could probably, if we haven't already, return to single-line RUNs in those projects as well, because we're doing that hack, right?
A: But if you collapse them, all of a sudden that space shrinks. But it's also the lookup: what happens at run time, whenever you try to look up a file, is that it has to go through all the layers trying to locate where that file is, and the more layers you have, the slower that goes. And again, with multiple layers, the load time for the container slows down and everything. So it's mostly a runtime thing, and it does affect the runtime quite a bit; hence squashing them into a single layer.
C: Yeah, I think it's actually even worse than that, because I think it has to look through all the layers no matter what it found. So if it starts looking through the layers for a file and finds it on layer two, it still has to continue looking through.
A: Yeah, depending on whether it saves the entire file or just the delta, because if it's the delta, then yes, absolutely, you'll have to reapply the whole thing, the whole chain, which is a computational and memory issue right there. Yeah, and through all the conversations I've had, people have reported that squashing images has resulted in much more stable and robust systems. Yeah.