From YouTube: Working Group: 2020-09-17
Description
* Stackpacks: https://github.com/buildpacks/rfcs/pull/111
B
Yeah, let's do that. So I think we'll start by talking about stack packs, and I put... so I revised the unresolved questions part of the RFC, and I think some of that will drive this conversation.
B
Yeah, I'm not sure what that says about me as a human, or potentially the process, or us as a team. I don't know, it's a reflection of something; it's very iterative. It's great, yeah. It could be a positive thing. It could be a horribly negative thing. I don't even know.
A
I know Steven had a concern about that, which is that stack.toml is usually written into the builder when the builder is created, and I guess what happens if someone updates the run image? Or, as I was thinking about it later, a better question would be: what happens if someone provides a custom run image?
B
Yeah, that makes sense. I think Jesse mentioned...
A
So the mixins are already a label on the run image, so I think for most rebase cases that is already fine.
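A hedged illustration of the point above: per the stack specification, run images advertise their mixins in the io.buildpacks.stack.mixins label as a JSON array of names (build-only mixins carry a "build:" prefix). A minimal Go sketch of parsing it; the label value here is made up:

```go
// Parse the mixins label from a run image's config. Assumes the label
// value has already been fetched (e.g. via `docker inspect` or a
// registry client); the label name io.buildpacks.stack.mixins comes
// from the stack specification, the contents below are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Example value of the io.buildpacks.stack.mixins label.
	labelValue := `["ca-certificates", "libgmp-dev", "build:git"]`

	var mixins []string
	if err := json.Unmarshal([]byte(labelValue), &mixins); err != nil {
		panic(err)
	}
	fmt.Println("run image mixins:", mixins)
}
```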
A
Yeah, I think we could begin with our best guess on create-builder, but then the platform can always recreate that data and pass it in, right?
E
Well, it wouldn't... I don't think it's that weird, right? Your build image can have everything; there's not a huge... you want to have one of them, so you're not transferring it in a lot of places. But your run images actually get deployed, and their surface area matters, right? And so I could see it being a very common use case to have one giant builder, but then swap between lots of different run images with mixins to build different apps.
E
Exactly. All the stacks we're making, really different sizes, have the same stack ID, except for the, you know, one that's not even bionic; they have io.buildpacks.stacks.bionic, because they just have different mixin sets, and the mixins are what validate the sets of packages.
A
But because there is a reference to a single default run image, I feel like there is a single default set of run image mixins; there's not a, you know, list to pick from right now. But also, we can just leave it out and then write it in; it can be the platform's responsibility to write it in every time. I'm fine with that.
A
There's a question, though. Sometimes when you run all the way through, it's not even until export that you know what run image you're using, and sometimes the platform doesn't know what run image is being used. The lifecycle can read the stack.toml file and decide whether to use the run image or one of the mirrors.
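For reference, this is roughly the shape of the stack.toml the lifecycle consumes for that resolution; the image names here are placeholders:

```toml
# Illustrative stack.toml; the run image and its mirrors are what the
# lifecycle picks between when the platform doesn't resolve them itself.
[run-image]
image = "registry.example.com/acme/run:bionic"
mirrors = ["mirror.example.com/acme/run:bionic"]
```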
A
So there's no point at which the platform would be able to read the label and provide it. But I guess we can just read it off the label; the lifecycle can read it off the label at that point, and then, yeah, I guess we just need to... the only difference here is that the lifecycle still has to figure out what run image it wants to use, and the lifecycle can read things from the run image. The problem is now we have to do that during detect, and...
D
It goes back kind of to the prepare phase that we talked about, where really you want to validate your registry credentials and things like that. Maybe it spits out a, you know, run stack... like its own stack.toml from whichever stack it's destined to be.
A
...always specifies by default what the run image should be. So this isn't a problem in pack, because pack has the idea of local mirror config, so it's always going through its resolution process. But we don't require that of every platform, right? The lifecycle can do the resolution itself, and that is useful for platforms that don't have as many hooks to provide logic and don't want as many credentials, stuff like that.
B
Well, I mean, you could say that providing a list of buildpacks in a sort of officially supported way, or a platform-independent way, is required for that, because the only way Tekton can accept custom buildpacks is with some environment variable, and that actually depends on how you're running Tekton, I guess. So I figured I was going with that, but I think this is maybe the first core feature that you need it for.
E
Okay, I'm not opposed to introducing the phase if we need it. I want to ask what's maybe a dumb question: could we not do the resolution? Because the build is always going to end up happening on the image that needs to have the resolved mixins, right? In the end, the only time the build or extend needs to happen with the resolved mixin set is on the image that has those mixins. Does that make sense? Yeah.
E
Well, look, I mean, this is why I'm saying this is probably not a smart question, but is there a way... are we sure that the resolution process actually needs that much information in order to identify a candidate group where we wouldn't need to go back and pick another group? Like, is it just that instead of failing during detect, the detector fails in a later stage, but because the group is so close to having passed, we don't want to actually move on to the next group?
E
I think that's probably wrong; it makes the model more complicated for end users to think about. That's why I caveat this with, like, hey, you know, I don't know about this, but it would solve the problem we're having without adding additional complexity, if it were true. And so I want to think about that aspect of resolution. Does that make sense?
B
Okay, this will take some more fleshing out, but I think I know what we need to do to write that up, at least. Does that sound good enough to move forward, then?
E
Yeah, yeah. The problem is, if we have like a manifest builder that only does detection and goes off to other builders that then do their build process, and those builders then specify the run images... I wonder if we're creating an architectural loop in a weird place. But I don't think there's a... I think this is the right thing to do right now, for sure.
B
All right, Jesse: do you want to talk about the caching issues?
D
Yeah. We're talking about caching and restoring; this text here is a bit outdated after some discussion today, but yeah. So, caching and restoring snapshots: the use case that I'm thinking of is, if you have an apt stack buildpack and you do an apt-get update, it's going to update, like, the sources list before you do all your installs. And when you come back for your subsequent build... for one thing, we talked about exclude, and you probably want to exclude the sources list from the final image.
D
But
you
don't
want
that
to
be
excluded
in
the
cache
the
next
time
you
come
back
the
next
time
you
build
and
so
we're
trying
to
kind
of
figure
out
what
that
looks
like
before.
D
Before anyone throws too much thought in: kind of what we're thinking right now, or what I'm thinking right now, is that during the build phase, after running bin/build for the stack pack, we emit two snapshots: one for everything that is not in the exclude list, which is not cacheable, and then everything that is in the exclude list becomes the cache tarball; two different snapshots as two different layers.
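A rough sketch of the split being described, assuming snapshots are plain tarballs and the exclude list is a set of path prefixes; splitSnapshot and the prefix matching are hypothetical illustrations, not lifecycle code:

```go
// Hypothetical sketch: one pass over a snapshot tarball, routing
// excluded paths (e.g. the apt sources list) to the cache tarball and
// everything else to the non-cacheable image layer.
package snapshot

import (
	"archive/tar"
	"io"
	"strings"
)

// splitSnapshot copies entries from snapshot into layerW, except those
// under an excluded prefix, which go to cacheW instead.
func splitSnapshot(snapshot *tar.Reader, layerW, cacheW *tar.Writer, excludes []string) error {
	for {
		hdr, err := snapshot.Next()
		if err == io.EOF {
			return nil // done: both tarballs are complete
		}
		if err != nil {
			return err
		}
		dst := layerW
		for _, prefix := range excludes {
			if strings.HasPrefix(hdr.Name, prefix) {
				dst = cacheW // excluded from the image, kept for next build
				break
			}
		}
		if err := dst.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := io.Copy(dst, snapshot); err != nil {
			return err
		}
	}
}
```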
D
No, this was outdated; this is just from this morning, I've kind of been plugging away, thinking about it. Yeah, so the restorer actually will run with root privileges. That's one piece here that I haven't talked with Joe about. I talked briefly with Emily earlier, and I think it makes sense for credential reasons as well as... yeah, I think it's just the right place to put it. I think the restorer will go ahead and extract the snapshot tarballs as root, in this case privileged.
D
Yeah, we talked about doing it in the builder, and that's what this paragraph talks about: restore would just leave the snapshots, and the builder would understand that it is a snapshot and then do the restore. But I think that conflates the two purposes of those binaries.
A
I feel like that would be possible for us. We could just extract the tarballs with special casing for whiteouts, and we can ignore opaque. That part doesn't frighten me. But we can extract all these things perfectly in the restore container, and they're still gone by the time you reach the build container, right? That's...
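For context, the whiteout special-casing mentioned here could look roughly like the sketch below. The names follow the AUFS/OCI layer convention (a ".wh." basename prefix marks a deletion, ".wh..wh..opq" marks an opaque directory), but applyEntryName is a hypothetical helper, not lifecycle code:

```go
// Hypothetical whiteout handling while extracting a layer as root
// directly onto the filesystem (no overlay semantics needed).
package restore

import (
	"os"
	"path/filepath"
	"strings"
)

const (
	whiteoutPrefix = ".wh."
	opaqueMarker   = ".wh..wh..opq"
)

// applyEntryName handles one tar entry name; it returns true if the
// entry was a whiteout and so should not be extracted as a file.
func applyEntryName(root, name string) (bool, error) {
	base := filepath.Base(name)
	if base == opaqueMarker {
		return true, nil // no overlay here, so the opaque marker is skipped
	}
	if strings.HasPrefix(base, whiteoutPrefix) {
		// ".wh.foo" means: remove "foo" from the target directory.
		target := filepath.Join(root, filepath.Dir(name), strings.TrimPrefix(base, whiteoutPrefix))
		return true, os.RemoveAll(target)
	}
	return false, nil // ordinary entry: extract as usual
}
```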
E
Is there a way we can write... I'm just wondering if there's a solution where you write to the image in the registry only, and then the restorer is always running on an image that has the layers pre-restored in it. But it seems like that wouldn't work well for a fixed-image platform, so you'd end up having to change them on every build, or you'd have to rewrite your own builder image: reference the builder by tag on your platform, rewrite your own builder image every time. I don't know; it's not a good idea. Good afternoon.
B
Is that... yeah, you said layer.toml? Yeah, yeah, okay.
D
I do think that we'll have to see how... one thing about this is that we're creating snapshots all the time, and I think there's probably a lot of times we don't need to create a snapshot, if it's like a build-only stack pack, because you only really need to run it as privileged, and then the buildpacks will successfully do what they need to do, right? And so I don't know what that looks like.
E
Back on the topic of actually restoring images, or restoring layers as root on top of the file system: if that's a feature we could get upstream into kaniko, I feel like it's a safer place for it to be, with a lot more eyes on it. It seems like a generic feature, right, the ability to restore things? Sure.
A
Maybe it has something... I think we could look there for all the... at least use it as...
A
The other thing is what represents opaque (opq) files, but that only matters when you're thinking about, like, an overlay file system; that doesn't apply to us, so just skip those. It's...
D
I think that's it for the snapshotter. It does some interrogation; like, it builds its own ignore list based off of what it's running on, I believe. Isn't that right, Joe? I think he wrote that piece; it's built in, more or less.
B
...creation. But yeah, I don't think we need that when we're... I guess the question would be: do we want to protect certain directories from being overwritten, like if a snapshot contained something that it wasn't supposed to, or if the ignore list changed between versions of the lifecycle, or something like that?
E
So there are things like... are there types of metadata, like setuid bits and, you know, weird, weird stuff, that I think tar probably captures in some cases, but that might not be good for tar to capture? Like, how do Docker layers deal with that in a very generic way? Are whiteouts and opaque files really the only two exceptions, or is there more complexity here that we're not thinking about? I think...
B
And when it comes to getting the changes back out, we would reprocess the tarball anyway, to, like, zero out the timestamps and change the UIDs and stuff like that. So it would only be... if we needed to do things, it would only be during the build process, but I can't actually think of anything.
E
So, what we're doing right now when we export those, you know, individual layers: we want them to be reproducible, so we, you know, zero out the timestamps. We don't care about setuid; we probably never want anything like that. The permissions need to be exactly the same for everything. There's, you know, a normal number of normal configurations that apply to the files in those layers, because we control that area. Right now we're talking about the whole rest of the file system, where some really weird stuff can happen.
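The layer normalization being contrasted here, sketched minimally; the exact fields and epoch the lifecycle uses may differ, this only shows the general technique for making repeated exports byte-identical:

```go
// Minimal sketch of tar-header normalization for reproducible layers:
// strip everything that varies between otherwise-identical builds.
package export

import (
	"archive/tar"
	"time"
)

func normalizeHeader(hdr *tar.Header) {
	hdr.ModTime = time.Unix(0, 0) // zero out timestamps
	hdr.AccessTime = time.Time{}
	hdr.ChangeTime = time.Time{}
	hdr.Uid, hdr.Gid = 0, 0 // fixed ownership
	hdr.Uname, hdr.Gname = "", ""
}
```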
E
My
my
point
isn't
that
I
think
this
should
go
in
one
place
or
the
other
it's
the
because,
because
this
is
a
we
can't
just
like
take
the
same
stuff,
we
did
before
with
turning
up
layers
that
existed
the
application
level
and
apply
that
lower
down.
There's
like
a
lot
of
other
things
that
scare
me
there
and
so
like.
B
Yeah, I mean, maybe that is like 99% of this, but there are other things. Like... those could not be inputs; they could be fixed within the buildpack, I think.
E
How about this: don't zero timestamps for whatever MVP they come up with, while it's still experimental and you want to just get it out there, but then research zeroing timestamps in base images completely, right? So we do have full reproducibility of the image, instead of solving half the problem.
D
You
can
kind
of
enter
feature
like
right
now.
The
way
it's
working
is
storing
the
snapshot
itself,
zeroed
out
in
the
layers
folder,
but
everything
in
that
tar
is
not
zeroed
out.
So
when
we
do
this
restore
phase
which
we
haven't
done
yet
if
we
don't
use
the
zero
tar
reader,
then
yeah,
so
I
guess
that
would
be
it's
like
half
the
problem.
At
least
I
guess.
E
What I'm worried about is, even for the final image, right: one of the Ubuntu packages adds a user to /etc/passwd, and now /etc/passwd's timestamp is in 1980, whereas if it hadn't added a thing to /etc/passwd, /etc/passwd's timestamp would have been a sensible timestamp. It seems like an arbitrary zeroing of timestamps applied on the base image could cause problems when you're talking about the final running container, unless we solve the problem for the whole base image, right, and have consistent timestamps for everything.
E
It's
just
kind
of
looks
like
garbage
right.
This
is
nonsensical
different
timestamps
for
different
files,
based
on
what
what
the
ubuntu
package
happened
to
to
write
to
at
different
times.
So
I'd
pretty
strongly
say:
let's
not
zero
timestamps
for
any
of
the
stackpack
generated
layers
until
we
can
handle
the
problem
in
in
the
create
stack
level
right,
we're
figuring
out
a
consistent
zeroing
of
time,
stamps
or
consistent
way
of
setting
timestamps
for
the
different
files.
B
Okay,
that
makes
sense
that's
work.
Yeah.
That
makes
sense.
Can
we
move
on
to
the
next
topic
because
I
think
we've
got
that
covered
any
final
words.
B
Okay,
so
jesse
do
you
want
to
describe
what
we
were
talking
about
for
rebasing
and
how
different
platforms
might
have
different
ways
of
providing
the
stack
packs.
D
Sure
yeah
this
came
up
before
when
you're
talking
about
rebasing
an
app
and
where,
where
during
a
rebase,
do
you
get
the
stack
packs
that
need
to
run
so
you
know
which
stack
packs
need
to
run.
But
where
do
you
get
the
actual
stack
pack,
you
know
code,
so
you
can
mount
it
in.
I
think
a
couple
suggestions
came
out
from
that
before
one
was
that
you
have
to
go,
you
have
to
have
access
to
the
previous
builder.
B
Yep, which I know you wouldn't want to do. Would you leave it out on top? Sure, probably, but, like, we don't care; for our platform, it's unusual for our customers to actually go get that image. It stays internal to the platform, so we don't really care that they would be there. But, I mean, yeah, to your point, there's problems with that too. So it still makes me feel like giving the platform control over this makes sense. I did.
E
What if the default behavior is that it uses the one from the run image, but there's a check that says: if you have the special layer in your launch image that has the stack pack in it, then it'll use that stack pack layer instead of the one from the new run image?
E
I
I
think
it's
weird,
because
this
something
has
to
copy
from
the
previous
like
something
has
to
look
at
this
ahead
of
time
and
say
I'm
going
to
reconstruct
the
build
a
new
image
that
I'm
going
to
run
right.
It's
like
a
feature
that
tekton
would
never
be
able
to
use,
but
we
have
a
feature
that
tucked
on
can
use,
which
is
just
use.
The
one
from
the
latest
run
image
anyways.
So
like
it's
a
it's
a
we
officially
allow
you
to
do
that,
but.
E
Cool. McNew, I'd look to you and say: is that reasonable?
B
So,
to
summarize,
the
the
run
image
would
contain
the
stack
packs
and
so
on.
Rebase
the
rebase
operation
would
get
both
the
stack
packs
and
the
builder
to
run
them
from
the
new
run
image
and
during
the
rebase.
In
addition
to
swapping
out
the
run
image
layers,
it
would
also
execute
the
new
stack
packs.
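A purely hypothetical outline of that flow; every helper below is a stub invented for illustration, not lifecycle API, and only names the steps in the summary:

```go
// Sketch of rebase-plus-stackpacks, as summarized above.
package main

import "fmt"

type stackPack string

func swapRunImageLayers(app, newRun string) error {
	fmt.Printf("rebase: swap run-image layers of %s onto %s\n", app, newRun)
	return nil
}

func stackPacksFrom(runImage string) ([]stackPack, error) {
	// The stack packs (and the extender that runs them) ship on the run
	// image, so they stay version-matched with its layers.
	return []stackPack{"apt-stackpack"}, nil
}

func runExtender(app string, packs []stackPack) error {
	fmt.Printf("extend: re-run %v against %s\n", packs, app)
	return nil
}

func upgrade(app, newRun string) error {
	if err := swapRunImageLayers(app, newRun); err != nil {
		return err
	}
	packs, err := stackPacksFrom(newRun)
	if err != nil {
		return err
	}
	return runExtender(app, packs)
}

func main() {
	_ = upgrade("registry.example/app:latest", "registry.example/run:latest")
}
```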
E
I'd say if the run image has stack packs, it has an extender and, you know, the stack packs, all on it together as a bundle. So all you need is a run image in order to do a rebase. Otherwise... so...
H
One, it makes it awkward to distribute that. If you're distributing stacks like we are at VMware, we already have a separate location where we're distributing the lifecycle, and that's really going to start confusing things.
G
Yeah, so I think... are you saying, like, builder, not the builder binary, but the image, right? Yeah, I agree with that. I guess I was a little confused, because it no longer sounds like rebase in the way that we know it, right? It's going back to that conversation about whether or not this is a rebuild to some extent, and then that opens up a different conversation, probably.
E
I
mean
it
is
essentially
it's
a
different
kind
of
rebase.
It's
not.
It
requires
containers.
It's
like
a
partial,
it's
a
it's!
A
it's
abi
safe,
rebuild
right
in
the
original
proposal.
I
switched
it
to
pack
upgrade
and
said:
rebase
fails
if
you
need
to
do
an
upgrade
instead,
so
everybody
just
uses
upgrade
that
which
will
do
the
right
thing.
I
don't
know
what
what
the
proposal
for
the
ux
is,
but
but
I
I
agree
that
it
is.
It
is
quite
a
bit
of
a
different
thing.
I
just
to
me
we're
adding
a
lot
of.
E
We
have
the
potential
for
an
upgrade
operation
to
require
just
as
much
stuff
as
a
rebase
operation
requires.
Today
right,
you
receive
an
image,
you
don't
know
what
builder
built
it
right.
You
don't
have
to
keep
track
of
what
builder
built
it.
It
could
be
in
an
environment,
for
instance,
that's
in
production
way,
far
away
and
you've
deleted
your
original
builder.
That
built
the
image,
because
the
image
was,
you
know
it's
a
production
image
that
was
built
a
long
time
ago.
Right.
I
I
don't.
I
feel
like
we're,
we're
creating
this
kind
of
dependency.
G
Right. Are we talking about this being a thing that actually would be stripped away from the final app image? Because I guess I'm concerned about putting the stack buildpacks in the final app image, because they could create bloat, right? So if you have a stack buildpack that has the offline capabilities, you wouldn't want to...
E
So what if, in order to support simple platforms, right, you can put the extender in there, right? And so, when you do a pack upgrade, it doesn't have to do anything weird, right? But a platform can choose to pull the stack packs from the, you know, builder that was used to build the image, or from the launch image itself, or whatever; the platform can pull the extender from, you know... like, there's an imp...
E
In that case, the kind of only safe place to put it is in the launch image that gets generated, or in a special image that always follows along with the other image. So I think we have a terrible versioning problem regardless, right? You're never going to be able to... I don't know what you're trying to match the extender binary to, if that makes sense.
E
But if you're getting your run images from the newest run image, right, and you're getting your extender binary from the newest run image, then you know that the API between those things always works, and you know that it's generally always the newest version, right? We're not talking about using the extender and run-image binary from the launch image in this particular scenario, even though it may be ideal in another case.
B
Yeah. If, for example, a stack provider wants to include a very, very old stack pack in their run image, they're responsible for providing the extender that works with it, and it's the API version rather than, like... can you imagine someone has a pre-1.0 stack pack that they want to include in their stack, and then my Tekton platform that I've set up has to go figure out where to get that extender from? Yeah, but...
D
The
tecton
templates
that
you
have
are
like
passing
arguments
that
that
old
extender
knows
nothing
about
like
timberly's
point
right,
like
all
the
new
stuff
or
like
it's
trying
to
parse
something
out
of
a
resulting
file.
That
is
not
present
in
that
old
version
that
came
from
this
really
ancient
run
image.
G
We say we're not dropping... sorry, I'm really concerned about that term, because I've never seen that actually happen, where we don't drop support for something. And then to think about, you know, the lifecycle that we currently maintain being the only instance that we're talking about... I can't envision other maintainers of other lifecycle instances, you know, also having that same sort of requirement.
E
I've got to drop because it's three o'clock. All right, I'm not trying to dismiss your thing, but just as a summary, for those who stick on, of what my preferences are: I'd either like the run image and extender binary coming from the new run image, or, the other option, the run image and extender binary coming from the launch image and keeping those things together. That's my preference. Sorry, yeah.
B
All righty, thanks, everybody. I'll post a summary in Slack, because there's a lot that we talked about.