A
All right, welcome everybody to the June 30th Distribution demo. Today we're going to go over some of the things that we've talked about in the past; a lot of the blockers have gotten out of the way, and we can actually start doing them now.
A
This has been a personal OKR of mine for a couple of quarters. It's always bugged me that we have a bunch of technical debt that results in a bunch of wasted resources, whether for us or for our customers.
A
We've talked about this before. If we go back to November of last year (it's been a little while), I went into the details of the impact on CapEx and OpEx, and the fact that this technical debt is having such an impact down the line.
A
So, thanks to Patrick from Gitaly, we were able to drop the dependency on the Git base image, where we actually compiled and installed Git ourselves. We were able to merge that into Gitaly itself, and we no longer have Git in containers that don't actually need it.
A
The next thing along that line was the creation of the gitlab-base image. What this work does is create an image for all the final runtime containers, so that you've got one consistent layer that has the base requirements all the containers are going to end up needing. This is our Debian base.
A
Getting our configuration files into the right place, whether they're ERB templates or gomplate templates (which we implemented as well), was behavior that lived inside of the Ruby container. So every runtime container was actually based on the Ruby container, just so that we could have ERB for templating our configuration files. While that was effective, because we could get everything done very quickly, there are a number of containers that absolutely do not need the Ruby runtime, the container registry for example.
A
So once we had the base, we could re-centralize those config patterns on top of the work that we did to implement gomplate. We now have a smaller, single-binary templating option, so all we have to do is change how our configs are rendered as we put them into the container, and we no longer need the 230-plus megs of Ruby in every container. That's a significant change. Obviously we had to change things around.
A
In order to have gomplate function, all of our configuration templates had to be transitioned from ERB to gomplate for those containers that don't actually need Ruby as a runtime. If a container uses ERB, we first have to put gomplate in place and then convert its templates to gomplate.
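To make that transition concrete, here is a minimal sketch of the same environment-driven config line rendered by each tool (the file names and variable are hypothetical, not from the demo; gomplate reads environment values through its env namespace):

    # ERB version, which drags in the full Ruby runtime:
    #   database.yml.erb contains:  host: <%= ENV['DATABASE_HOST'] %>
    erb database.yml.erb > database.yml

    # gomplate version, a single static binary:
    #   database.yml.tmpl contains: host: {{ env.Getenv "DATABASE_HOST" }}
    gomplate -f database.yml.tmpl -o database.yml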
A
While we're doing this, we can also investigate the size of the various layers, and maybe find things that we did that resulted in something being overly sized; say we do a chown recursively and accidentally create a giant layer.
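Two quick ways to spot an accidentally oversized layer, assuming the image is available locally (the image reference is illustrative):

    # Per-layer sizes alongside the instruction that created each layer;
    # a recursive chown shows up as a suspiciously large RUN layer
    docker history --format '{{.Size}}\t{{.CreatedBy}}' registry.example.com/gitlab-shell:latest

    # Interactively walk the layers and see which files each step added or duplicated
    dive registry.example.com/gitlab-shell:latest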
A
We'll go over that in a second. I have a couple of tools we're going to use today: Docker, particularly docker inspect; dive, for analyzing those Docker containers and looking at the layers and the impact of what we're doing; and skopeo, as an easy way to look at the difference in compressed and stored sizes. When you ask Docker for the size, it will tell you the size of the fully uncompressed image on your file system.
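For example (image references are illustrative), the two views can be compared like this:

    # What Docker reports: the fully uncompressed size, in bytes, on local disk
    docker inspect -f '{{ .Size }}' registry.example.com/gitlab-shell:latest

    # What skopeo reports: the registry manifest, including the compressed
    # (over-the-wire) size of each layer, without pulling the image
    skopeo inspect --raw docker://registry.example.com/gitlab-shell:latest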
A
First off, I realized that when I tried to change it from ruby to gitlab-base, the in-repository configuration file was still a template, not an output file, so that was a fun little "why does this not work?"
A
The next thing was to actually remove Ruby itself and use install (or attempt to use make install) to install only the binaries that gitlab-shell is going to be using, as opposed to checking out the Git repository, building everything in place, and then copying the results in. Because then we have the source in there, which isn't that much, but it's impactful, and maybe we don't need all of those artifacts floating around. We definitely don't need any kind of Go package cache. And the last thing is paying attention to chown.
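A hedged sketch of what that build-stage cleanup can look like (the make target and paths are hypothetical, not the actual gitlab-shell Dockerfile):

    # Build, then keep only the binaries; leave the source tree and the
    # Go build/module caches out of anything that gets shipped
    RUN make build \
     && install -m 0755 -D bin/gitlab-shell /assets/usr/local/bin/gitlab-shell \
     && go clean -cache -modcache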
A
The first thing we actually end up caring about, if we go into the Dockerfile, is changing the base image over to gitlab-base. It used to be built upon Ruby; if I go to the last FROM here, we had a FROM image, which in this case was actually gitlab-go, which has gitlab-ruby in it (don't ask). So, moving away from Ruby, we go down to gitlab-base. That's really the largest change: the final image is now based on that.
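In Dockerfile terms, the change is essentially this (registry paths are illustrative):

    # Before: the runtime image inherited the whole Ruby stack
    FROM registry.example.com/gitlab-ruby:latest

    # After: it sits on the slim shared base instead
    FROM registry.example.com/gitlab-base:latest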
A
All right, sorry. If I look at the image size on main for UBI8, so the UBI version of gitlab-shell for the master branch, it's 602 megs.
A
It used to be the gitlab-go image. If we look down through where the copies come from, we have three ADDs and two COPYs. The COPYs are the local items from the Dockerfile context: these are our scripts, the /etc SSH configs, things like this. The ADDs in UBI final images are actually adding the tarballs that come out of the build behaviors.
A
For how we're actually doing the install: we change the location, we set everything back up, and we're using install to install only the binaries that we care about. If make install had actually worked, I would have used that, and it would have been simpler, but it turns out there's a bug there. By setting the permissions when I do that install and placing it under the prefix, I'm only copying in the binaries, because no other part of that content actually needs to be installed and consumed.
A
Not much of a change, right? Less than one meg of change in terms of the total artifact here; we're talking about 44 versus 44.8. It doesn't make a huge amount of difference, but it does matter that we're not shipping anything we don't need. It's a straightforward thing from a Linux packaging perspective: if you do not need it, don't stick it in the tarball. If you don't need it at runtime, why are you putting it on everybody's system?
A
So that's why the image that I have here has a single COPY in: the final stage is copying all of its final artifacts into this container, and it has one layer instead of five layers. I'm reducing the total number of layers, and that layer ends up actually being smaller as a result. Because if I look at the add-add-add-copy-copy, we still end up with just a little over 144 megs; we're not talking about a huge difference in size when it comes to the total content in that tarball.
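A minimal multi-stage sketch of that consolidation (names and paths are made up for illustration):

    # Stage all the ADDs/COPYs and final file arrangement in a throwaway build stage
    FROM registry.example.com/gitlab-base:latest AS build
    ADD gitlab-workhorse.tar.gz /srv/gitlab/
    COPY config/ /srv/gitlab/config/

    # Ship a single consolidated layer in the final image
    FROM registry.example.com/gitlab-base:latest
    COPY --from=build /srv/gitlab /srv/gitlab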
A
So all I really end up doing is take all of the ADDs and the COPYs, move them into the staging and/or build layer, and then do all of the configuration of the final files and locations up there. In particular, the ones we care about are anything that's a chmod, or absolutely anything that's recursive. Okay, this one is beyond important, and this is why this matters: if you chown or chgrp or chmod as part of an OCI container build, you're altering every single file under it, which re-records all of them in a new layer.
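That duplication is easy to avoid when the ownership is known at copy time; a sketch (the user/group and paths are illustrative):

    # Anti-pattern: the RUN re-records every file under /srv/gitlab in a new
    # layer, so the content is effectively shipped twice
    COPY assets/ /srv/gitlab/
    RUN chown -R git:git /srv/gitlab

    # Better: set ownership as part of the copy itself; one layer, one copy
    COPY --chown=git:git assets/ /srv/gitlab/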
B
Just a quick follow-up, Jason. In the build stage you're still using those long one-liner RUNs, which you could practically avoid, and take a little bit more advantage of caching while building this stuff locally, when you're testing.
A
On the use of multi-stage Dockerfile patterns: I have implemented part of that reduction of complexity, but in that same issue, talking about extending the use of multi-stage, we start talking about using caching and buildx, and how many RUNs you should and shouldn't have, and things like this. I only put it in multi-stage so that the end result is smaller. I did not go into optimizing the build stages or caching patterns.
A
If we look at another image, some of our build images have three RUNs: download, build, and I don't remember the third one; I think it might just be install. Effectively we have three stages and they all go run-script, run-script, run-script. However, those images actually should be refactored to fix up a few things. Why? The answer is: those images have nothing that would tell Docker that that cache layer was ever invalidated. There's no way to know, because the instruction is literally just running a script.
A
So if you change, say, the version as an argument, that ARG changing will actually invalidate layers that are consuming that value. If you have the ARG and it changes, that doesn't necessarily invalidate things that don't care about that value all the time; it gets a little funny. But I don't want to go too deep into what triggers layer caching, because it's kind of a complex subject on its own; it could probably use a half-hour demo just to itself.
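A small illustration of the ARG behavior described above (the argument and script names are hypothetical):

    ARG GITALY_VERSION=15.0.0

    # Changing GITALY_VERSION invalidates this layer, because the RUN consumes it
    RUN ./scripts/download.sh "${GITALY_VERSION}"

    # This RUN never references the ARG, so a version bump alone does not
    # necessarily rebuild it
    RUN ./scripts/install-common.sh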
A
So right now we don't have a plan; I don't have a plan, because we don't have one. There are a number of things that we can look into, but we don't have any one particular thing that is like "oh, awesome, there's our means of doing it."
A
There are a few means to accomplish it. One: we could be pushing intermediate layers. That's problematic, because what are we going to do with those assets? How often are they used? How are they cleaned up?
A
The other way we could look at this is actually having dedicated runners that will then have a shared (or "common" might be better phrased) Docker layer cache, which includes the image cache.
A
There may be some interesting things that we can do in looking into that one, but I don't believe right now that the docker-machine driver has the ability to generate machines that have a shared file system cache, and there is some concern about how you do a shared file system with multiple container engines operating on the same file system cache.
A
Now, one thing that is related but kind of off topic: when we spoke about using the multi-stage Docker build (my typo said "multi-stable" instead of "multi-stage"), we were discussing build tools, whether we use Docker or start investigating buildah or buildx or BuildKit, or all of these many things. Right now we don't have any intent to change the tooling that we're using in any way, shape, or form.
A
What do those require us to do otherwise, and what impacts does that result in? Like using buildx with multi-layer caching with intermediate stages: where are those intermediate stages going to be stored? How are we going to ensure that they're properly identified and invalidated?
A
What tooling is required? Are we now binding ourselves to using Docker, and Docker-in-Docker, for a long period of time, when we want to get to the point where we're not using Docker-in-Docker? A lot of complexities start to add up super fast when we start changing things out. We may want to get to the point where we're not using Docker-in-Docker; I won't argue with that.
A
To swing this back to the original question that we tangented off of: the layer caching and optimizations for the developer versus in CI are significantly different, just by the sheer fact that one job is going to run on a different machine than the last job, and the same job, even in the same pipeline, if it's rerun, is more than likely going to end up on a different runner.
A
Right now, I would like to focus on the developer experience. As much as I would love to improve CI performance as well, the developer experience has been historically problematic, and I would like to focus on that so that we can get more people capable of, and regularly willing to, contribute to the project.
C
Yep, and that pretty much covers the ground that I was on. I just wanted to bring that up, because this question comes up a lot about the caches and layers and speed, and all the difficulties you just outlined in CI are fairly well known from some other experiments we've done within the team.
C
I just wanted to get that on the recording, so that it's out there and clear, because I think the most misunderstood thing is the difference between "this is faster locally" and CI; those environments are so different that we can't have an analog. It's almost a mutually exclusive optimization. So that is just the thing I wanted to make sure we cleared up and called out early.
A
I'm going to look real quick at Workhorse as well. If I pull up another one and go over to the Workhorse MR that I have up: the real changes that I have in place here are not swapping Ruby out. I haven't done anything on this one other than the chown changes, just to show the difference in size impact.
A
1.6 gigs, folks. We can make a dent in that by just fixing up the multi-stage layers. As I said, build versus staging, whatever you want to call that particular layer; this time I called it build, don't ask me why I did anything different.
A
Okay, so we have all of the Workhorse tar.gz. Inside of Workhorse's tar.gz you actually have Workhorse and its binaries, any of its supplementary config example files, and the entire public assets directory, which, by the way, has another thing wrong with it. That means basically all of the contents of /srv/gitlab end up in the Workhorse container.
A
Not having to touch the permissions on those means we don't have two copies of 383 megs. That's expensive, and it doesn't seem like it has that much of an impact on the final image size; we're talking "oh gee, it's 200 megs, but we have 300 compressed, that's not bad." Well, that's 240 megs that we don't have to transfer over the wire, that we're not paying for, that the customer isn't paying for, that the machine doesn't have to spend time transferring over the network. So: transit time alone, storage costs for us, ingress costs for them.
A
At half a gig uncompressed, that's half a gig of data that they don't have to decompress onto their disk, and they don't have to spend the time in CPU cycles or I/O time, which means that the container will start that much faster, no matter who runs it, because it's a quarter of a gigabyte less to transfer and half a gig less to uncompress, all of it content that would have been completely unused.
A
Okay, now, every node that needs to run Workhorse needs to pull how many layers down, that are how big? How long does that take? Anybody who's worked in the charts and tried to start this thing up knows the amount of time it takes to start the container versus the amount of time it takes to prepare the container to be started. Customers don't measure it from the time the container is ready to the time the container is started.
D
I know that you recently added an MR widget that shows the size changes in images for MRs. I'm just thinking: would it ever be worth it to have a little job in CI, maybe allowed to fail or not allowed to fail, that would basically alarm us if an image got too big, just so we never get back to the point where we add something that we think is innocent and it ends up making a huge additional change in the storage of an image?
A
First thing I'll say is: yes, that's in place, and it does work. I will also say, while I say it works, it's not the best. It's got some hiccups. It's designed in the right fashion, but the result is a little problematic.
A
However, the widget doesn't actually work that way, so, oops, to that point: if you look at the widget from MR to MR (and I could show you in a second, if you want), it'll say "yeah, there's a 44 meg change," and no, there wasn't. That's a matter of misunderstanding how it actually works, and/or needing to get QE to help me out to better understand how to present the data to make it happen.
A
I think it may make sense to have a job that basically says "these are roughly what our expected values are." But how do we record what the expected values, or allowable sizes, are? Do we end up checking that into Git? Where do we check it in? I don't know. It could totally make sense; we have a job in the Omnibus that actually goes "hey, the package is too big."
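A sketch of what such a guard job could do, assuming a locally built image (the variable names and the threshold are made up for illustration):

    # Fail (or warn, if the job is allowed to fail) when the uncompressed
    # image exceeds the agreed budget
    LIMIT=800000000   # bytes
    SIZE=$(docker inspect -f '{{ .Size }}' "$CI_IMAGE")
    if [ "$SIZE" -gt "$LIMIT" ]; then
      echo "Image is ${SIZE} bytes, over the ${LIMIT}-byte budget" >&2
      exit 1
    fi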
B
So you don't have to do the longer queries to expand each tag, and we can actually plot it out much more easily in the general view, and kind of see strategically what was happening over time. The thing it doesn't address is the concern that Jason's widget is addressing: an immediate view of "is my change impacting things negatively?"
A
Nice. And, I didn't actually end up using skopeo in the demonstration at this point. One thing that we have is a certain amount of information available to us because we're building the containers and they're local: we can just do docker inspect and pull some information out of that JSON. Skopeo operates primarily against a registry; you can actually tell it to talk directly to the registry over the API.
A
The image matrix job runs skopeo over all the recorded final images and looks for what changed. It just records those images: here's the image name, here's the tag name, and then all the information from the registry. It does this effectively by just making the query on the specific tag and then pulling back the information we care about, like how many layers there are. skopeo inspect --raw will give you completely different output.
A
Different, I should say, than plain skopeo inspect, because --raw will actually pull the manifest information directly back, and then you can interact with that data. If you do docker inspect on an image and you look at the layers, it'll list what the layers are by SHA and roughly what those sizes are. But if you ask it for .Size, it will just tell you how big the image is.
A
Whereas when you look at it inside of skopeo, it will actually tell you the tarball SHA and the tarball size individually for every single layer. So you can add those up, and if your tarballs add up to a certain size and then your extracted size doesn't line up, some of that's compression, and some of that's how much is overwritten by multiple layers.
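For a single-arch v2 manifest, that arithmetic can be done straight from the raw output (the image reference is illustrative):

    # Sum of the compressed layer tarballs, straight from the manifest
    skopeo inspect --raw docker://registry.example.com/gitlab-workhorse:latest \
      | jq '[.layers[].size] | add'

    # Compare against Docker's uncompressed on-disk size; the gap is
    # compression plus files shadowed by later layers
    docker inspect -f '{{ .Size }}' registry.example.com/gitlab-workhorse:latest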
A
(I may be hitting the mic.) The trick there is: we don't know that until we're done, and we traditionally want Danger to run beforehand.
D
We could just go off percentages. Danger could leave a note that this image got ten percent bigger. That way we don't have to hard-code acceptable storage sizes, but at least we get a sense of "hey, this got 10% bigger, this got 80% bigger," and we can catch that.
A
Okay. What I do want to talk about just before we leave is future work in relation to this. One: we need to document these optimization patterns and practices in the CNG guidelines, so that future contributors understand them and have a guideline to work off of. And we need to spend the time to analyze every image and outline the work that needs to be done to optimize each image individually.
A
So the final image had the extra gigabyte of junk in there: oh, whoops, fix that. But there are some things that can be continually optimized, even on top of this. gitlab-base was put in place so that we have that foundation and could start taking Ruby out, but as of yet, every final image gets gomplate added to it, and every final image gets gitlab-logger added to it.
A
gomplate could be safely put into every image; it's not in every image right now. It saves 20 megs, which doesn't seem like much, but that's 20 megs in a layer in all of the UBI images, each with an additional, different layer at that. Whereas if you have one single consolidated layer, that actually ends up being less storage and fewer objects in the object storage that we use to back the registry, and one less thing that the user has to download.