From YouTube: Working Group: 2020-11-11
Description
* Asset Packages: https://github.com/buildpacks/rfcs/pull/81
* Stackify Repo: https://github.com/buildpacks/rfcs/pull/123
A: Hey, it looks like the maintainer lottery is working, Steven, so we can tag up all the rest of them. Awesome. And I know this because you got assigned one.
B: And Joe, did you want me to put stack packs there too?
B: All right, let's kick things off. So, first thing on the agenda: start recording. Did we start recording? I think we are recording. Remember to sign in, if you haven't signed in. I don't see any new faces. Marty, you've been to these before, right? I have not. Oh, do you want to introduce yourself? Hi, I'm Marty. I'm on the core dependencies team in the container build program at VMware.
A: I will speak as another implementation maintainer, albeit not in this bit of it. Emily is pursuing work towards the 0.5 buildpack API in the lifecycle nearly exclusively, so there is a push toward that. I don't think there is a date, but it is now in the plan for something coming. Awesome.
A: We have a new sub-team, that is, the distribution team, and we had a release today: version 3.1.0, to update the default to 0.15 of pack. Thanks to Ben, we're on 3.0. So these other teams, I think, need to catch up, because we're moving at a very quick clip.
B: Cool. Next thing is RFC review. I think I shared my screen, camera to see RFCs, and I'll just go from the top. A stack of five we're gonna talk about today; it's on the agenda.
B: Probably. We'll see what this is: report, Thomas? And this is the platform spec, is that it?
B: Right, oh well, is it? Yeah? Okay, I think that's right: implement platform spec, implementation, something. Cool. And then, any updates on this one? We don't have Danny here. It looks like it just needs approvals. Is that right?
A: Yeah, there's a problem with this, all right, that I'm trying to grapple with how to solve. Oh.
B: Cool, sounds good. Extension spec for builder.
A: I think Emily is, for the... yeah, that's what it is. It got assigned to you through the bot because it's owned by the core sub-team, sure, but I believe Emily has done everything that needs to happen now. You probably need to verify that she's actually opened the issues, but I'm pretty sure I saw her open the issues this morning.
B: Gotcha, cool. Also in FCP: pre-release versions and experimental features for APIs.
B: That's quite a few labels. All right: stack packs, mixins, all on the agenda for today. But stack packs probably needs, you need to assign a, yep, a team to it, a champion. That's core, yeah.
B: And this one is implementation, right? And this is also, does it have the sms backfill packs on it? Nice, okay, and it's by platform, cool. RFC for process descriptor: that's a draft. Layer origin metadata: have we heard back from Paul? I don't believe so. As of yesterday, Emily said she was going to reach out to him. Maybe.
B: Sounds good. And offline buildpack packages is on the agenda, and the last thing is: a draft that should be closed. This draft has been open for a very long time, I believe.
A: No, no, someone would have to own it in the first place. But I believe we decided that this would be subsumed by stack packs, and so we left it open as a draft. At this point, I'm willing to say that the fact that we figured out a way to do it via buildpacks is sufficient for us to close this.
B: Yeah, so I think that the last time we had a discussion about this, there were a couple things that needed to be updated. Really, they were: bringing some additional metadata that we have in buildpack.toml over into this asset entry, and basically pushing the ID and version through all of the metadata, all over the place.
B: So, as things stand, because we're just using buildpack packages to make an asset cache, all you need is either a buildpack.toml file or a package.toml file, which we already have conveniently defined. And we're going to add a little bit more flexibility to let you filter out assets that you don't want, either on a per-buildpack or an individual-asset basis, using this additional file. And this will also let you give a, like, canonical name to your asset package that gets placed in metadata.
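A rough sketch of what that optional filter file might look like. Every key name below is hypothetical, since the RFC schema wasn't settled at the time; it only illustrates the two filtering granularities (per buildpack, per asset) and the canonical name that were described:

```toml
# Hypothetical asset-package filter file (illustrative key names only).

# Canonical name for the resulting asset package, recorded in metadata:
name = "example/my-asset-package"

# Filter out every asset contributed by one buildpack:
[[exclude.buildpacks]]
id = "example/unwanted-buildpack"

# Filter out one individual asset:
[[exclude.assets]]
id = "example/unwanted-asset"
```

The file itself was described as entirely optional: with no filter file, the asset package is built straight from the buildpack.toml or package.toml already present.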
A
I
love
the
fact
that
you
can
build
this
from
pack
toml,
because
that
actually
goes
to
one
of
my
biggest
complaints
around
packaging.
Build
packs
today
that,
even
if
you
like,
there's
no
there's
no
implied
package
toml.
When
you
do
package
build
pack
which
you
could
there
is
no.
A
Thank
you
very
much
for
that
joe,
and
I'm
glad
to
see
that
this
follows
the
same
idea
that
it
like
normal,
normal
build
packs,
already
have
all
the
information
that
you
need.
That's
really
good.
A
B
It
seems
like
platform,
if
I'm
not
mistaken,
but
I
haven't
looked
too
deeply
into
it,
but
I
can
at
least
champion
it
cool
platformer
kind
of
distribution
related
to
you.
B
Are
those
optional
in
that
general
format,
though
those
aren't
are
those
in
the
actual
tamil
yeah?
So
these
are
just
filter
options
right
you,
so
they
should
all
be
optional.
This
entire
file
actually
is
optional.
If
you
really
want
it
to
be,
it's
just
a
way.
B: Yeah, I don't know. I feel like, hopefully, this is the end of this, but you never really know.
B: And for approvals, cool. And because this has been through so many iterations, I forget what this last set of changes was. There's no longer a separate asset package image at all, is that right?
B: I think what I was thinking of is: when you're constructing a meta-buildpackage, this asset package from other buildpack packages, you can construct a new asset package based on all of the sub-buildpacks' metadata from the other meta-buildpacks. You don't have to construct sub asset packages for the sub-buildpacks and then combine those asset packages together to construct the meta-buildpack's associated asset package.
B: In that case, stackify. So, I already got some comments on this one, but basically the idea here is to create a stackify repo that'll, you know, hold a tool to turn images into CNB-compliant stack images. Down the line, hopefully, we'd turn that into a pack create-stack command, potentially.
B
But
for
now
we
wanted
to
leave
this
as
a
standalone
repo
for
now
yeah.
I
can,
I
guess,
talk
a
little
bit
about
it,
so
this
would
be
the
alternative
for
people
who
currently
want
to
extend
our
stacks.
They
or
you
know
any
stack
that
exists
they
have
to.
You
know,
create
a
docker
file
and
extend
it
and
monitor
for
the
new
for
new
stack
images
and
rebuild
their
docker
images
using
dockerfiles.
So
this
would
be
a
way
to
do
that
without
docker
files
and
it
would
be
able
to
add
ca
certificates.
B
It
was
javier,
so
if
I'm
not
mistaken,
some
of
the
features
that
this
has
like
the
ca,
certs
and
more
specifically,
the
package
management,
those
are
very
specific
implementation
to
just
debian
or
ubuntu.
Is
that
right
or.
B
Intention
to
be
more
cross
platform,
I
think
the
intention
would
be
that
it
would
be
cross-platform,
and
so
it
would
figure
out
he'd
be
able
to
do
it.
You
know
add
packages
and
ca
certs
for
multiple
types
of
image
images,
so
I
imagine
we'd
start
out
with
just
debian
based
images
but
or
honestly.
We'd
have
to
even
go
bionic
separately
than
focal
because
those
packages
are
going
to
be
different,
but
the
intention
would
be
that
we
could
do
it
across
multiple
types
of
images.
B
And
the
thing
that
separates
this
from
stack
packs
is
this:
isn't
this
isn't
about
adding
packages
during
build
time
or
in
the
build
process?
This
is
just
about
a
tool
that
lets
you.
You
know
as
a
stop
gap,
maybe
until
you
have
that
more
complicated
thing
and
you
know
set
up
a
pipeline
that
without
having
to
have
a
docker
file
and
complicated
things,
adds
packages
or
ca
certs.
You
know
to
images
kind
of
directly
under
registry.
B: No, that's right! You could do that, so...
A
It
says:
maintain
a
docker
file
and
pipelines
to
continuously
update
images.
How
do
you
envision
this
to
stop
the
need
for
a
pipeline
that
watches
for
changes
to
the
root,
stacked
image.
A
And
what
about
doing
a
docker
file
isn't
good
enough
like
I
would
have
expected
just
naively,
I
would
have
done
from
you
know:
pacquiao
build
pack,
slash,
build
slash
base,
and
then
I
would
have
done
apt-get
install
and
then
called
it
a
day
right
what?
Why
is
that?
Why
is
this
a
better
thing
than
just
sort
of
adding
one
more
layer
that
sits
on
top.
B: So we actually have to think about that. Because, now that I'm thinking about it, I think the way we were thinking about implementing it, you'd still have to use the base image, but at least, you know, you wouldn't have to know what metadata you're expected to put in there.
B
So
we
are
using
the
native
package
management
system
to
install
the
packages,
but
conoco
is
being
used
to
figure
out
the
diff
and
create
a
layer
from
the
diff
of
everything.
We've
done
to
manipulate
the
image
just
kind
of
backing
up
for
a
second
to
frame
this,
maybe
in
a
way
that
makes
conoco
make
sense.
B
The
idea
here
is
that,
like
right
now,
if
you
want
to
go
from
regular
docker
file
base
image
to
cnbi's
stack
image
set
with
maybe
with
additional
packages
on
it
right,
if
you
want
like,
if
you
want
to
go
from
ubuntu
bionic
coming
from
you
know,
canonical
or
ubi,
coming
from
red
hat
turn
those
into
cnb
images
and
add
packages
and
everything
you
have
to
set
up
pipelines
that
need
docker,
which
needs
privileges
to
run.
B
This
can
run
totally
unprivileged
in
case,
for
instance,
but
it's
not
it's
not
a
it
doesn't
do
monitoring
or
anything
like
that.
It's
just
a
sharp
cli
tool
so
that,
regardless
of
what
platform
you're
using
it
makes
it
that
much
easier
to
maintain
cnb
stack
images.
A: Yeah, I think if the goal is to address the, like, not external users, but sort of non-core users, the pack create-stack is really important. And along with that, I think it will be helpful, like Emily described, to see the interface, even if it's really coarse: like, what are the inputs? What are the options? How do you do that?
B: Yeah, yeah, I can definitely update this with that information. The plan would be yes to all of those, although I think...
B
Kind
of
unresolved
what
we
would
do
with
the
stack
id
right
now,
because
we
could
kind
of
only
do
this
with
the
bionic
stack
id
that
we've
defined
already
or
we
could
make
this
a
little
make.
This
can
make
the
stack
id
configurable.
B: Sorry, Quinn. Actually, I was gonna say: I think, Emily, you raised the question of what we do with mixins that are only defined for the bionic stack. And so, like, right now, ideally, if you're using this with the bionic stack ID, we could just generate that list of mixins for you. So maybe we leave that as an option only for stacks that have been defined by the buildpack spec, and for others, like, we just won't set mixins. Maybe that's the way to get around that issue.
B: Thanks, awesome. Look forward to seeing the UX proposal. Next thing on the list, the last thing on the list, is stack packs. Hold on, before...
A: ...we leave, before we leave: who wants to own that issue? What team?
B: Platform could do it. Is it platform? It's platform, yeah. Stacks, yeah. I'll go ahead and assign platform to it. Do we have a platform label? I've seen it, yeah. I already took care of it. Okay, oh yeah, I can't see it because you already did it; that makes sense. Cool, and there's no spec change there.
A: Yeah, so we left off last time talking about...
A: ...rebase. And I'm pulling this up, I'll try to bump it up one point. There was an open question around how to handle env vars and the platform directory at rebase time, and, you know, all the problems that would be introduced by either not having it or having it.
A: And whatnot. So I think what is necessary is to decide on a plan for either segregating the things in the platform directory, like, I think we talked about having a separate, like, platform directory that's specific to extend and rebase; or, I had a proposal that I talked to Stephen about real briefly, where certain buildpacks might just completely opt out of environment variables, or, like, opt out of the platform directory during extend and rebase. But I think Steven had some concerns about that.
A: Yeah, I may have forgotten that. I think maybe we just, maybe we had consensus on that, and it was just necessary to write it up. Does that sound...
B
That's
what
I
yeah.
The
idea
was
if
we
could
have
separate
environment
variables
for
just
for
the
stack
like
dash,
we
say
dash
dash,
dash
end
or
something.
Then
we
can
kind
of
cordon
those
off
and
say
like.
Yes,
we
realize
these
could
change
right,
yeah,
because
I
think
the
theory
was
because
these
would
be
set
by.
B
Then
they're
in
complete
control
of
this
and
there's
no
sort
of
accidental
invars
coming
from
other
platform
that
are
meant
for
the
other
build
packs.
These
are
space,
build
decks
and
then
it's
safe
during
rebase,
because
those
are
the
only
ones.
So
it's
a
special
platform
in
platform
dirt
for
stackpacks
was
that
the
only
reproducibility
concern
we
had.
B: For buildpacks changing between rebase and the original build, got it. Was there anything else? Do we, or should we just go through the examples?
A: More of the doc, yeah, right. I don't think there was, I don't think it was that. I think we needed an example. I think we agreed that we want, like, we talked about, like, we went back and forth on, like, not having mixins at all, and then not having the build plan at all, and whatever. But I think we decided we did want to go this route, and it needed an example.
B: Well, I think a big part of that question was: you don't have to, all the mixins don't have to match, only one of the mixins has to match for it to run. But then, do all the normal build plan entries have to match? If that's the case, and then there's this, like, let me say: probably yes. But then there's weird interplay, like: do stack packs always run, then, or do they not always run? And then we thought maybe they have to always run, but then we were like: no, that doesn't make sense.
B
That
makes
some
things
way
too
inefficient.
So
sometimes
they
have
to
not
run
and
then
what
does
that
mean
on
rebase,
maybe
or
you.
A: All right, cool. So, this is an example of an app buildpack. Like, I've got a lot of questions about "shouldn't it do this, shouldn't it do that?" Yes, this is probably not what we would ship; this is just to illustrate the capabilities. So this app buildpack would have a buildpack.toml that defined an ID, privileged equals true (I think we're still not certain we want to keep that, but that's kind of superficial), we define its stacks, and then it would provide any mixin using the asterisk notation.
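Putting those pieces together, the stack pack's buildpack.toml being described might look roughly like this. This is a sketch of the proposal under discussion, not a shipped schema: the privileged flag was explicitly still undecided, and the ID and placement of the mixins key are illustrative:

```toml
[buildpack]
id = "example/apt-stackpack"   # illustrative ID
version = "0.0.1"
privileged = true              # proposed flag; still under discussion in the RFC

[[stacks]]
id = "io.buildpacks.stacks.bionic"

# Provides any mixin, using the asterisk notation from the proposal:
mixins = ["*"]
```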
A: ...positional arguments, and the bin/detect, yes. And I think that'll become clear in the next example. Cool, okay. And then the bin/build would do some stuff that it always does, and then it might, or then it would, iterate over the entries provided in the build plan.
A
If
they're
a
mix-in
and
install
that
mix-in
as
a
package,
and
then
it
would
yeah.
That's
the
correct
name,
stack
layer
to
the
first
positional
directory,
the
layers
or
output
directory
and
exclude
var
cache,
which
would
ensure,
because
this
same
build
script
is
going
to
run
at
build
and
extend
so
at
build
time.
B: We want to both exclude it, you want to cache it, you want, like, launch equals false but build equals true, or whatever the equivalent is of that, cache equals true kind of thing. Cache equals true, launch equals false, yeah. I think it means that the package database, like, a package cache, isn't going to be exposed in the final image, which is right; and it's not going to be exposed to subsequent buildpacks, which seems right enough; and it's going to get recovered, so, the next time the stack pack runs, in both cases.
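In layer-metadata terms, the combination of flags being described would read roughly like this sketch (using the top-level launch/build/cache booleans from the buildpack API of that era; whether stack layers would use exactly this shape was still open):

```toml
# <layers>/stack-layer.toml for the stack pack's package-cache layer:
launch = false   # not exposed in the final (launch) image
build = true     # available to the rest of the build
cache = true     # restored the next time the stack pack runs
```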
B: So when you say build time, like, subsequent buildpacks. I think we said, what are we going to do, like, dump these excludes after all stack packs? Yeah, so it would still exist for subsequent stack packs, but not for subsequent user-space buildpacks. Is that right? Good question. Okay, I think so. Maybe, what if they overlap? It makes it easier to do it all at the end.
A: Yeah, this might be where, you know, we talked about not needing kaniko snapshotting. But if we're talking about a tarball per buildpack, or per stack pack, you'd, like, let's say they both touch var/cache, you'd want to have the diff. Or you just have one tarball per excludes path, I should say.
B: Got it. I think, for me, the thing is: we can't use kaniko, because, when a new base image comes up, we can't apply, we can only apply a diff to something on the old base image. So as long as we're okay saying we cut out those paths, hard-cut using ggcr and not diff-cut using kaniko, yeah, and we figure out the implementation later and exactly where we pull it apart, I think we're good.
A
Yeah,
that
makes
sense,
I
feel
like
excludes,
is
just
still
a
weird
word
for
the
build
phase,
but
I
don't
know
maybe
anyone
just
bike
shooting.
A
Okay
and
then
finally,
oh
yeah-
this
is
based
on
the
very
very
tentative
proposal
and
the
app
mix-ins
rfc.
This
is
how
a
project
tumble
this
is
how
someone
would
like
use
this
build
pack
right
like
they
would
from
their
application
in
their
project.
Tom
will
define
the
mixins
that
they
want,
and
then
the
platform
would
take
the
mixings
from
the
project
tunnel,
and
this
is
a
again.
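Under that very tentative app-mixins proposal, a user's project.toml might declare its mixins something like the sketch below. The exact key and table are hypothetical; only the idea (the app lists mixins, the platform forwards them) comes from the discussion:

```toml
[project]
id = "example/my-app"   # illustrative

[build]
# Hypothetical key: mixins the app requires. The platform would read
# these and pass them through to the stack pack via the build plan.
mixins = ["libpq-dev", "curl"]
```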
B: So, more generically, from a user's perspective, it's a way of saying "my app requires this," and then it doesn't matter, the user doesn't have to think about the app, the buildpack, or their stacks or whatever. It's like: if that can be delivered, it will be delivered; otherwise, the build will fail and say this thing needs to get here one way or another, right? I think that's a really clean interface for the end user.
A
Okay,
all
right
so
now
this
is
the
ca
certificate.
Stackpack
example,
which
this
example
again
may
not
be
the
way
we
would
actually
do
ca
certs.
A
It
just
illustrates
an
example
of
a
build
plan,
dependency
that
isn't
a
mix-in,
and
this
example
has
two
build
packs,
one:
that's
a
stack
pack
and
one:
that's
a
user
space
build
pack
so
in
the
way
that
same
way
as
the
project
tunnel
kind
of
triggered
the
behavior
in
the
stack
pack
here
in
this
example,
it's
going
to
be
triggered
by
a
user
space
build
pack,
so
the
stack
pack
is
first
built
back.
Tomo
has
privileged
and
that
stuff
nothing
special
here
it
doesn't
provide
any
mixins.
A
It's
detect
will
provide
ca,
cert
so
based
on
the
rules
that
we
described
above,
oh-
and
it
has
the
like
positional
argument
too,
is
the
build
plan
same
as
for
user
space,
build
packs
and
then
based
on
the
rules
that
are
described
up
above
for
how
the
build
plan
dependencies
are
resolved.
A
This
build
plaque
would
run
or
not,
and
I
think
in
this
case
it's
just
whether
somebody
another
build
pack
requires
this
ca,
cert
dependency
and
we'll
take
a
look
at
that
in
a
second.
So
then,
the
bin
build
for
this
stack
pack
would
filter
the
build
plan
for
entries
named
ca,
cert
that
are
not
mixins,
and
then
I
think
yeah.
We
use
the
metadata
for
that
this
again.
This
is
just
an
example:
the
metadata
for
that
entry
to
get
a
path
to
a
file
that
it
installs
as
a
cacer.
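The plan entry being filtered for might look something like this sketch: a non-mixin dependency named ca-cert, with the cert itself carried in the entry's metadata. The metadata field names are hypothetical; the transcript only says the metadata yields a path to a file, and later that the cert contents were put in the build plan:

```toml
# Buildpack plan passed to the stack pack's bin/build (sketch):
[[entries]]
name = "ca-cert"

[entries.metadata]
# Hypothetical shape: a path and/or the inline contents of the cert.
path = "certs/database.pem"
content = """
-----BEGIN CERTIFICATE-----
(cert body, illustrative)
-----END CERTIFICATE-----
"""
```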
A: Yeah, I don't know. In this example, I have a file, and then I actually put the contents of the cert in the build plan. Again, just one possible way to implement this; I'm not sure if that's a really good idea or not. And I've got two of them here, a database and a server. I'm not sure why; I think that's just to show that you can do more than one. And then the bin/build for this user space...
A
A
You
know,
like
the
workspace,
slash
search
or
something
like
that,
so
yeah,
the
user
space
build
pack
doesn't
do
anything
so.
A: Interesting. Yeah, totally, whatever mechanism it needs, basically. I think that's what's trying to be shown. What I'm trying to show here is that, whatever that mechanism is, it's just working through the build plan as normal. Yeah, so when the CA cert is required, it ends up here.
A
So
that's
kind
of
an
important
example.
I
think
it
answers
a
bunch
of
the
questions
that
we
had
and
then
talk
speaks
to
that
section
that
we
put
a
pin
in
so
any
questions
about
that
before
we
move
on
to
the
next
example,.
B
This
would
be,
you
know,
not
a
big
deal
that
doesn't
mean
that
the
build
plan
here
with
in
this
case
the
server
certs,
would
need
to
be
on
a
label
somewhere
so
that
when
rebase
occurs,
you
have
that
same
information
to
install
the
certs
again,
so
the
cert
actually
couldn't
be
in
the
app
directory,
because
the
stackpack
doesn't
have
access
to
the
app
directory
during
build
right,
which
is
why
it's
pulling
it
from
the
middle
factor
here
right.
I
think.
B
Yeah
but
earlier
we
were
like
well,
could
the
user
push
ca
certs
with
the
app
and
have
those
those
ca
certs
get
integrated
into
the
build
plan
build
pack
using
this
model?
Oh
because
it's
gonna
that
cat
there
is
intended
to
mean
this
happens
in
user
space
beforehand,
and
so
it's
just
the
metadata
that
makes
it
over.
It's
not
the
path
that
gets
sent
over.
A: Okay, I'm going on to another example. I think Jason Collins had asked me to add this one. It's a little bit different than the others. This is...
A
Oh,
this
is
yeah.
This
is
showing
that
the
something
like
jq
could
be
provided
as
a
mixin
or
as
a
regular
dependency
in
the
build
plan,
and
that's
that's
really
all
that's
being
illustrated
here.
I
think.
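The jq example boils down to the same dependency being requirable two ways in a buildpack's detect output: as a mixin, or as a plain build plan requirement. A sketch, where the mixin marker is part of the tentative proposal rather than an existing key:

```toml
# Option 1: require jq as a mixin (hypothetical marker in metadata):
[[requires]]
name = "jq"

[requires.metadata]
mixin = true

# Option 2: require jq as a regular build plan dependency:
[[requires]]
name = "jq"
```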
A: Cool, all right. So I tried to capture some future work that I know is not required for this RFC, but that I know we will want to do. For example, I'm fairly certain there will be stack packs that need to branch based on whether they're running in extend or build, like...
A: We can debate the intricacies of that, but there's some mechanism like that. Support for creator: I think what we're proposing is, in the first phase (and this might even be called out explicitly), stack packs would not be supported for creator, because the extend phase can't run in the same container; you have to have a separate environment for that.
A
I
think
we
have
an
idea
of
how
that
might
actually
be
implemented
in
creator
where
you
do
the
canico
thing
of
like
loading,
the
file
system.
You
know,
what's
the
photos
at
steven.
B: If you have a round of edge extension, right, it wouldn't be too bad; it would still make the CLI faster. Yeah, yeah, a big part of creator is a performance optimization, and being able to run the other container in parallel, right, is more performant. And so, you know, it was running a whole bunch of new containers sequentially in the Docker daemon that was slow. So maybe, just even as our first pass, we could just say, you know, the normal extend phase has to happen...
B
The
way
it
would
in
the
multi
thing
it
gets
a
little
more
complicated
because
I
know
we
talked
about
it
before
that
analyze
probably
needs
to
start
becoming
before
detect,
so
it
might
be
instead
of
two
containers,
one
for
build,
and
one
for
extend.
It's
really
like
three
containers,
so
you
do
like
the
or
two
or
four,
maybe
like
analyze,
plus
detect,
and
then
branch
out
analyze
can't
come
before
detect
because
you
don't
know
what
to
analyze
right.
A
All
good
things
all
good
things
in
the
future
yeah
and
then
the
last
one
was
I'm
not
sure
if
this
is
a
certainty
but
snapshot
to
cache
the
build
phase
so
that
you
know
bill
pack
could
say
it's
maybe
item
potent
or
whatever.
However,
we
want
to
configure
that
and
all
the
changes
it
makes
sans
some
excluded
directories
would
be
snapshotted
and
you
wouldn't
need
any
kind
of
snapshot
layers
configuration
file
to
capture
those.
A
It
would
just
sort
of
capture
everything
and
that
would
allow
you
to
like
you
know,
skipped
reinstall
skip
reinstalling
packages,
for
example,.
A: Okay, so: drawbacks of this proposal. I think the main one, there are a lot, I think it's kind of naive or optimistic to have just one drawback here, but the main one is that the end user cannot provide stack packs. They have to be provided by the stack...
A
Let's
create
the
stack
creators
of
stack
providers,
some
alternatives.
A
Cool
yeah,
this
might
be
an
important
one.
I
know
emily
has
asked
about
this.
B: Yeah, I'm not sure. I think there's probably... are there really, like, two bills of materials now? Is that kind of the problem? We kind of sort of have to merge what happened during build and also extend later on? I don't know if you have to merge them, because I think you would only want, for reproducibility, you would only want the extensions to the run image to end up in the bill of materials on the final image, and report.toml is how we're talking about that information.
B
Yeah,
I
don't
even
know
if,
like
it's
a
very
high
level
description
of
that,
I
don't
want
to
get
into
the
weeds
on.
I
think
they're
already
already
in
the
weeds
too
much
I'm
getting
the
model
down
in
this
rfc
yeah
report.
Tamil,
isn't
persistent,
though
right.
It's
only
there
during
the
build
process.
So
I
think
we
want
something
persisted
in
labels.
B
No,
no.
We
we're
talking
about
so
for
the
changes
to
the
runtime
image.
We
want
to
persist
those
in
labels
in
the
final
image,
but
for
the
build
time
stuff
that
we
don't
want
it
to
affect
reproducibility,
because
if
you
have
a
different
set
of
build
time
packages,
it
doesn't
really
matter
if
they
don't
end
up
in
the
image.
So
report
tumble
is
ephemeral,
but
the
platform
can
capture
that
information
and
store
it
separately
from
the
image
and
that's
kind
of
the
point.
B: Or not, I get that... materials, I think, right. I think Stephen's saying you probably do want it, but you probably want it in report.toml, and that's up to the platform to persist, right? I see what you're saying. It's like, in the bill of materials, we, or buildpacks, shouldn't be including things in the bill of materials that don't end up in the image, right? They should use the... didn't we refactor the whole bill-of-materials thing recently?
B
Had
some
insight
in
that,
but
she's
not
here,
there's
like
two
pads
right.
Sorry,
good,
I
don't
know
I
was
just
saying
you
don't
need
to
know
that
jq
is
installed
to
like
perform
your
work
during
the
build
right
unless
jq
ends
up
on
the
run
image.
That's
kind
of
the
way
I
think
about
it,
but
you
can
report
it.
It
just
doesn't
end.
You
can
record
it,
but
it
doesn't.
B: It's like there are two places you can put bill-of-materials entries: one is for the bill of materials, or build plan entries at the end of the buildpack phase, and the other is for report.toml. And so we just direct all the build plan entries that happen during the build phase to report.toml, and direct all the ones that happen during the extend phase into the bill of materials, in the end.
A: Cool, all right. Yeah, we're over time. There are a couple of good questions here, if you want to take a look offline, but, yeah, we're getting pretty close.
A: Yeah, I'll, I think I'll get to that today, maybe, or we can do tomorrow. We can... I'll try to make these changes before the working group tomorrow, and then we'll discuss. Awesome.