A: Well, hello everybody, and welcome to the November 4th, 2021 Distribution demo. This time I'm actually not going to be demoing anything in particular; I'm going to be walking you through some of the fun reasons why our cloud-native images are as big as they are. Some of them are. Some of them aren't. Some of them are actually decent, and some of them are like: wait, what happened here?
A: I can tell you that our team does it at least a dozen times a day, just in our environments, right? That doesn't seem like much, again, until you start adding it up repetitively. And remember, we test in GKE and AKS or EKS, and we're considering any other places we can manage to do this. So how many times do we do that for every single run of the pipeline?
A: Just our team — not including customers, not including other teams in GitLab — that's a lot, right? Now, the second thing is: if I have to download a gig and a half, I have to download a gig and a half, and then I have to extract it, right? It's 700 megs over the wire, then it has to land on disk, then I have to extract it, and only then can I actually start the pod.
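The download-then-extract sequence described here can be seen directly from the Docker CLI. A sketch, with a hypothetical image name — the compressed transfer size and the extracted on-disk size are two different numbers:

```shell
# Hypothetical image path, for illustration only.
docker pull registry.example.com/gitlab/gitaly:latest

# Size on disk after extraction (the "gig and a half"), as opposed to
# the compressed size that went over the wire (~700 MB):
docker image inspect registry.example.com/gitlab/gitaly:latest \
  --format '{{.Size}} bytes'
```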
A: The documentation is lagging; that's not helping anyone get on board with how the CNG works when it comes to the images. But then it's also their money, because they've got to pull it, and they've got to store it, and they've got to rebuild it, right? So if they have a fork of ours and they're running their pipelines on their runner, it's part of their 10 gig limit.
A: Oh boy, don't run more than three pipelines, because that's 10 gigs of storage. A bit much, folks? Yes, I believe it is. So if they're trying to work on something for us, and they're doing it in a fork rather than locally, trying to build these things — that's the rebuild cost of transfer and storage, and transfer and storage, and transfer and storage, right? If they're pushing to our registry from wherever their runner is, that's an ingress cost and it's our storage bill. But it falls back to them, because it'll hit their storage limit eventually.
A: When was the last time we cleaned up the registry — like, consciously cleaned up the registry? Yeah, Mitch, I think you and I sat there for like an entire day, just hitting delete and waiting and waiting and waiting for all of those tags, and that was a massive amount of cleanup, right? Beyond just that.
A: The contributor experience has to get better. That's on us, one, because we need to clearly define our processes. We have to clearly define the behavior patterns that we expect, but we also need to make sure our documentation stays up to date. Okay, beyond that, we still have other things that people are asking for. Steven, you saw the other day that I brought up — someone's like: hey, I can't install the Operator on, you know, a t4g instance in AWS. Those are Graviton, by the way, folks — because we only build for x86_64.
A: We should consider actually having a visible method of seeing what the old version's size is versus what this version's is, or at least, at the end of the job, saying: this thing is a gig and a half. Then we can go back, look at it, and be like: oh, you changed the Gitaly image and it's a gig and a half; the version on master is 700 megs — what happened?
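As a sketch of that "say it at the end of the job" idea, a GitLab CI job could print the size of the image it just built so reviewers can compare it against master at a glance. The job and image names here are hypothetical:

```yaml
# Hypothetical CI job: surface the built image's size in the job log.
report-image-size:
  stage: build
  script:
    - docker pull "$CI_REGISTRY_IMAGE/gitaly:$CI_COMMIT_SHA"
    - >
      docker image inspect "$CI_REGISTRY_IMAGE/gitaly:$CI_COMMIT_SHA"
      --format 'gitaly image size: {{.Size}} bytes'
```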
A: If I look specifically at replacing the base images: we did a breakdown of which applications actually need Ruby versus which ones actually have Ruby, container-wise, and the big ones that come out of it are Workhorse, Shell, and Pages. These are Go programs that, as far as we know, don't actually need Ruby to be there. Now, to get into this work, we had to do a bunch of things.
A: Oops — and that's a matter of tech debt, because we intentionally used gitlab-ruby as the base, because we knew we had Ruby. Therefore we had ERB, and everybody on the team and in the company was familiar enough with ERB that they could all help us get it going. It was an accelerant to the project, no argument — but now, how much are we adding to the image just because we're based on that, right?
A: There's a simple question: we can put gomplate in the image, which is like four megs, or we can put all of Ruby in as a base runtime, which is like 300 megs. A little bit of a tip of the scale there, right? Well, before we can actually remove Ruby from these images, we first need to make a base image. This is Debian with curl and CA certs and the bare minimums, plus our init patterns.
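A minimal sketch of what such a base image could look like — Debian plus curl and CA certificates and nothing else. The tag and the init-pattern step are assumptions, not the actual CNG definition:

```dockerfile
# Hypothetical shared base image: Debian, curl, CA certs, bare minimum.
FROM debian:bullseye-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# The shared init patterns mentioned above would be layered in here,
# e.g. COPY scripts/ /scripts/
```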
A
That's
it
it's
all.
It
needs
to
have
right.
Everything
else
can
then
be
based
directly
on
that
and
then
include
the
things
that
they
need
much
simpler,
much
more
straightforward,
but
now
we
get
a
consistent
base
that
we
come
from.
So
when
we're
doing
apt-get
update
we're
not
doing
it
in
every
single
container,
because
then
you
don't
have
consistency
and
you
don't
have
a
shared
base
image
that
everything
builds
off
of.
A: So if we go look at Gitaly and how it's built and what's going on: Gitaly and anything that it needs should effectively be artifacts that we stick into the image, right? Because we don't need their sources — unless it's Ruby, we don't need anything that it puts in between. You need the output binaries, and you need them to be able to operate with anything that they call upon.
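That "binaries as artifacts" idea is essentially a multi-stage build: compile in one stage, copy only the outputs into the runtime image. A sketch with illustrative names and paths — not the actual CNG Dockerfile:

```dockerfile
# Build stage: sources, toolchain, and intermediates live only here.
FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN make build   # assume this produces binaries under /src/_build/bin/

# Runtime stage: only the output binaries are carried over.
FROM debian:bullseye-slim
COPY --from=build /src/_build/bin/ /usr/local/bin/
```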
A: I have this set up here, trying to figure out what's going on, and this here — this 1.8 gigs — is a significant portion of whatever just happened, right? And I did trace it down. You can see that's the FROM; so there's git-base, and that's actually Debian way below here. When we get down in here — where does the 1.8 gigs come from? When I trace this down, it actually comes from this stage specifically. Now, if I look at this.
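Tracing which stage or layer contributes the bulk of an image can be done layer by layer; the image name is illustrative:

```shell
# Per-layer sizes — the largest offenders stand out immediately:
docker history --no-trunc registry.example.com/gitlab/gitaly:latest

# Or interactively, with the dive tool (github.com/wagoodman/dive):
dive registry.example.com/gitlab/gitaly:latest
```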
A: Okay, so now we're into what we're adding in terms of /usr/bin, right? And what we basically have is — I'm going to hit Ctrl-B and take out any... adjust... nope, I want to see those. Okay.
A: Gems — the Ruby gems are 578 megs, right, just there. I can't tell you exactly how much we added versus how much is just modified or updated, but I'm going to bet most of it was added.
A: Then there are the actual executables — the Gitaly server itself, the Praefect server itself, Git, all of these SSH command handlers, right? These in particular, this batch — we can ask: hey, Gitaly, can you improve this? Your outputs are this big; is there anything you can do to make them smaller? But our issue within this container is: why do we have 900 megs of Go compilation content?
A: That's just the massive case in point that I want to give as an example, for anybody that's interested in this. Okay, so I'm at 56 past — scheduled for about half an hour.
A: What I want to point out here quickly, before I let other people actually have some input: because we only need the runtime capabilities in these images, streamlining them appropriately for that will significantly impact the size of these images — and Gitaly is a great example; that's why I chose it.
A: I think that's actually one massively nice thing, because we wouldn't require having a privileged runner. We wouldn't need to have Docker-in-Docker, and technically we would possibly be able to reuse a couple of hosts for some of these stages, doing pipeline optimization, because we'd have a local cache, right?
A: The nice thing about Buildah is that you're not specifically requiring some system-level service to be able to do this. You can run it in a container through user namespace remapping without an issue. I think — Dmytro, Dimitri — you recently did something with Podman, and you know what I'm talking about, where all you had to do was flip on the VFS driver and off it goes.
E: I think we did actually use services in some cases, not just the DinD — where we need to have some container running in the background. I think I've got to go and check, right?
A: You can basically write all this stuff in shell. You bootstrap an image with debootstrap — you basically build your base image from scratch — and then you can install into a chroot, just as you would anywhere else. It's just that, this time around, that chroot is your container, and you snapshot the layer when you're done. So you can edit these scripting behaviors, and you don't have, like, a RUN && && && && pattern — which works, but is super annoying — or have to have scripts whose only job is download, build, install, right?
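The shell-driven pattern described here is roughly the standard Buildah workflow; a minimal sketch (rootless use would wrap this in `buildah unshare`):

```shell
#!/bin/sh
set -e

ctr=$(buildah from scratch)    # start from an empty working container
mnt=$(buildah mount "$ctr")    # mount its root filesystem

# Install a minimal Debian into it, exactly as you would into a chroot:
debootstrap --variant=minbase bullseye "$mnt" http://deb.debian.org/debian

buildah unmount "$ctr"
buildah commit "$ctr" my-base:latest   # snapshot the layer when done
```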
D: One counterpoint to that, though, is that Buildah does not necessarily cache layers in between. So when you talk about large intermediary steps, you lose that thing we talked about with caching: the larger intermediary builds that had no need to change. You'd be forced to rebuild those each time, increasing pipeline duration and increasing artifact churn.
D: Well, I think that's the point — we're trying to make this better. So we're not using the best practice now, but why would we not, you see what I'm saying? We can have this asynchronously in the issue, but I'm not sold on Buildah. And I love Buildah, but I'm just not sold on it being the best thing for this, for that reason.
E: I think there is a kind of — I'm a Buildah fan, so just to put it out there: Buildah and Podman. But I totally feel what Robert is saying, and my experience is that we can get around this by practically doing our own caching. As in, we know what we're building and the intermediate stages, so certain things we can try to cache and bring in, and see — maybe we can actually get away with certain things.
E: I haven't done this with the Golang projects; it's much easier with the C projects, because when you have the Makefile and the .o files and things like that, it's way easier to reassemble things when they're halfway there. With the Golang, I'm not so sure we can do exactly that — but maybe we could. So, you know, multiple ways of skinning that portion of it.
A: Right, and that's a valid point. In comparison, on the ecosystem side of things: to some degree you do get this — Docker and docker buildx have --cache-from, right? So you can say: try to build this image if I don't already have it, or take the layers from this other image that I already have and reuse them if they happen to match. We do need to do a technical evaluation on that one. One of the things that we do in the CNG now is that we effectively have artifact containers, right?
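The --cache-from mechanism referred to, as a sketch; image names are illustrative:

```shell
# Warm the local cache with a previously pushed image, if one exists:
docker pull registry.example.com/group/image:latest || true

# Reuse matching layers from it instead of rebuilding them:
docker build \
  --cache-from registry.example.com/group/image:latest \
  -t registry.example.com/group/image:new .
```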
A: We could be properly making use of artifacts, or pushing things to our package registry and then reusing them from there. But right now we're not, because we only have Docker.
A: There are a number of things that are pipeline-specific, some that are tooling-related, and some that are ecosystem-availability-related — because, I mean, our package repositories weren't even possibly available at the time we started this project, right? You know, DJ and I have been doing this for years at this point. There are a lot of optimizations that we can do, and we can discuss them.
A
E
One
thing
to
throw
out
there
like
in
in
line
with
what
you
said
much
earlier
about
the
arm
images
and
there's
a
possibility,
and
we
need
to
make
some
or
we
need
to
come
to
some
decision
at
some
point
in
time.
Whether
we
want
to
maintain
those
images
as
a
separate
one
or
we
want
to
do
the
multi-arc
images,
because
with
the
docker
you
can
have
the
multi-arc
images,
but
with
builder
you,
I
don't
believe
you
can't
build
them.
You
can
actually.
E: A demo of how to do it — okay, good, good, because I wasn't aware of that; I was only aware of docker buildx being able to do this. And you can actually — yeah. There are multiple things that we can talk about, like how we can optimize our building experience.
A: Right. Effectively, here's the thing that people need to know about multi-arch: we're not saying an image that contains x86 and ARM side by side. This is not multilib; this is not 32/64 side-by-side. This is: you have an image that is ARM, you have an image that is x86, and then you have one manifest that has pointers to the manifests for the others.
A: So a multi-arch image is: you pull gitlab-workhorse at 14.2.1, okay? Normally that is a manifest that then points to the Docker layers. But if it's multi-arch, it's actually a manifest that points to the supported architectures, which in turn point to the actual image manifests for what you're trying to ask for. So you pull it on an x86 host and you get an x86 binary by default.
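The structure described — a manifest list pointing at per-architecture image manifests — can be inspected and assembled from the CLI; names are illustrative:

```shell
# Show the manifest list and the per-arch entries it points to:
docker manifest inspect example/image:1.0

# Assembling one from separately built per-arch images:
docker manifest create example/image:1.0 \
  example/image:1.0-amd64 \
  example/image:1.0-arm64
docker manifest push example/image:1.0
```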
A: Documentation is out of date and it's hard to follow, because we're so convoluted right now. And I wanted to touch on some of the tooling stuff and where some of this behavior is, right? Like, when I have "tooling" in the points above, I'm talking about Buildah versus Docker versus whatever — whether it's Kaniko or Bazel or whatever, right?
A: Then we have the whole — because there's no central base image, everything basically depends on Ruby, which means everything is at minimum 300-plus megabytes. Kind of a waste of space, right? But then there's the standardization that we have to care about: we should try to avoid mv and cp, and instead use install — that's literally what it's made for; when you're installing to final locations, use it instead. We also have customers out there that would much prefer that we always use COPY rather than ADD, and we have an open issue to try and address this one.
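Put together, the two conventions look like this in a Dockerfile; paths and names are illustrative:

```dockerfile
# Prefer COPY over ADD: COPY does exactly one thing, while ADD also
# unpacks archives and fetches URLs — behavior customers want avoided.
COPY gitaly.tar.gz /tmp/gitaly.tar.gz

# Prefer install(1) over mv/cp for final locations: it places the file
# and sets mode and ownership in one step.
RUN install -m 0755 -o git -g git /tmp/bin/gitaly /usr/local/bin/gitaly
```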
A
They're,
not
small
customers.
They
would
really
like
to
see
it
fixed,
but
then
we
also
need
to
go
through
all
of
the
images
and
make
sure
that
we're
actually
using
multi-stage
and
we're
building
the
final
runtime
container,
we're
building
the
final
runtime
container,
not
everything
else.
E: One quick point: we've mentioned UBI multiple times within this call, and I have this — I don't know, worry, let's phrase it that way — that the UBI dependency may actually be something that we're going to be struggling with long-term, especially for the end users who want to contribute, because of the UBI availability and things like that. Like, do we want to standardize on UBI, or do we...?
A: Yes, unquestionably — and we've actually discussed this in the past, before UBI 8 came around, with the separate EULA and repo and such. A weird thing, right? We'll have to dig that out, and, as with any decision, it's worth revisiting from time to time. But we decided back then, effectively, that there are folks out there who are not familiar with going in and debugging what's going on in a Red Hat-based system — or really a dnf- or yum-based, Red Hat-like system — as compared to, I should say, a Debian or Ubuntu image, right? The familiarity is just not there, and there's a trade-off between the people that don't want to get anywhere near Red Hat as their choice and the people that don't want to rely on a non-enterprise distribution, that being Debian, right?
A: What can we do to improve the cycle time on these things? These are the larger things that people need to think about when they're looking at being a maintainer, period. But when it comes to this — when it comes to this large, big one — we have a lot to do, and this is not going to be a single quarter.
D: You test it by deploying all of GitLab. And at some point, what we should look at is — we know what we check when we're troubleshooting: does this new Gitaly container work? Does this new other-service container work? We look at the communication by saying: can we see it connecting on this particular port? Can we see these pipes? Having a minimal smoke test that, when a container is done, spins the container up by itself and just checks.
D: The things you do when you're live, right — basic health checking. Because that way, things that just don't come up at all — because something got missed — get caught in the CNG pipeline, without having to do a complete deploy into, like, another place. That's something we don't have today for visibility: when we screw up a build and get a bad container out, we don't know until much further down the process.
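A minimal version of that smoke test could be a CI job that starts the freshly built container on its own and checks basic liveness — no full GitLab deploy. Job and image names are hypothetical:

```yaml
# Hypothetical smoke-test job in the CNG pipeline.
smoke-test-gitaly:
  stage: test
  script:
    - docker run -d --name gitaly "$CI_REGISTRY_IMAGE/gitaly:$CI_COMMIT_SHA"
    - sleep 10
    # Minimal check: the container came up and is still running.
    - test "$(docker inspect -f '{{.State.Running}}' gitaly)" = "true"
```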
A: You know, we're going to have to talk to QE and QA about that one. A viable, decent question.
D: Or not — we don't have to, and this is one of the things that I've been bringing up with cloud native, because it's already — well, this is part of that larger build epic, the other large epic out there about our build vision. Because we don't break this down into composable pieces, and because we don't have that, we have to ship as Omnibus, which is, you know, fast approaching the limits of what an RPM's size can even be, compressed.
D: So, you know, this is the long term: when you're able to do this, you speed up our testing and you have more confidence, right? Because — I don't know about anybody else, but I loathe the fact that I have to spend three to five hours for each individual review, spinning up the entirety of GitLab just to test one little piece, only to find out there's some little thing I've got to go fix. Do that six times, and then I can start my integration testing, right? So we need to eliminate all that cruft from all the projects.
E: I know that we can go below that, but, for example, any of the scripting or anything that's going to happen within that is going to rely on certain things being available, and which things we're going to bring in as optional would again be dependent on that. But again, it's just a reminder; I don't think we need to discuss it right now.
B: That's a great, very comprehensive demo. I see all the issues in there as well, which I think are great too, so I appreciate it. Any other closing thoughts? If not, we'll step away.
A: The short answer is: I don't expect to see massive progress on this in the next quarter. We've got a lot of work to do. I really just want this to be known within the group — and, through this demo, to the rest of the company: we can do a lot, but we need to do a little bit of groundwork so we can accelerate the entire path.
D: Maybe to be clear on this: we're not looking for MRs; we're looking for asynchronous communication, so that we can determine the path and then start walking it. Because that's the thing that's gonna — if we just start trying to do it now — and Jason points this out — if we start doing it before we decide where we're going, we'll end up in the same place.