From YouTube: Working Group: 2020-09-23
Description
* Asset Images: https://github.com/dwillist/rfcs/blob/offline-buildpackages/text/0000-offline-buildpackages.md
* Experimental Mode: https://github.com/buildpacks/rfcs/pull/115
* Stackpacks
* Default Process Type: https://github.com/buildpacks/rfcs/pull/110
A: I switched to the crappier mic, but whatever works. Have we talked about the experimental API mode RFC yet? I know that's a thing we need for stackpacks as well, right?
D: Oh, I'm here, sorry, my video was off. Yeah, so we do have an update for pack: today, end of day, we are planning on going into code freeze for the 0.14.0 release of pack. We're finalizing some experimental features for the buildpacks registry that we still want to get into the milestone.
D: It got trimmed down; some of the things that didn't make it into this release got pushed into the next release, and we're still making some tweaks to how we manage the milestones overall going forward. We are switching our release dates from landing on Tuesdays to landing on Wednesdays, just so that they line up with the sub-team sync meetings and also these meetings.
E: No big updates from last week. Natalie and Yael are working on finishing up the new release automation; feel free to jump in if there's anything you guys want to share about that, but otherwise same as last week.
B: Cool, so then we'll move on.
B: First thing on the list is "add extension spec for builder". I think I see David here: do you want to present this at one of the working group meetings?
A: I did, I think, last week. I'm happy to talk about it tomorrow; I think there's a bunch of positive reviews. Emily had some questions, and people can feel free to weigh in on them.
B: Sounds good. Sorry, I forgot you presented before. I'd definitely suggest everybody take a look at this; it's a really important change.
B: I'd definitely encourage people to engage with everything in there, because that's a big part of the project that's not specified. It's gonna be a change.
B: Lifecycle experimental API mode is next. Any updates on that, Terence?
B: Oh, you realize, that was the one. You're talking about "build/run image configs must contain PATH". Is Micah here, or anybody from the Windows side with context on this?
D: Yeah, I think in pack we've closed an issue, at least, with the anticipation that this will go forward, but that's as much as we have.
A: Yeah, let's move it to FCP. I can take a look before it closes.
B: All right, Emily, you're gonna add that update to it? Awesome. Distribution.
C: Yeah, just a reminder that I'm ignoring application mixins until we get close to finalizing stackpacks. I think there's some comments on there, but I'll get to them later.
B: Cool. And stack buildpacks: we're also talking about that today, so it's on the agenda and I'll skip it too. "Buildpacks are able to provide a default process for an app": Natalie and I talked about this a bit. Any updates?
A: It would be helpful to talk about those questions, you know, today or tomorrow.
B: Yeah, maybe you could put it at the end of the list for today, and then if we don't get to it you could use tomorrow to dive in a little deeper, if you're available.
A: Yeah.
A: It's actually me. I updated this, I think, two weeks ago or something, or maybe last week before I went out, and a bunch of people voted for it. But since I made changes, I want people to take a look again, so I just re-requested everyone who voted before. Cool.
B: So the ask is just that people should take a look at this. Yep, cool. And offline buildpackages, which is at the end of the day. Cool.
C: I minimized myself, and I lost the ability to undo that when I shared. Okay, can you guys see this?
C: Okay, cool. So at the end of last week we put together a small meeting to try and resolve some of the questions around this RFC. There's still one little wrinkle, which I'll get to, that I probably need to talk about, but I've already incorporated a lot of the changes we talked about into this RFC. The way this has changed a little bit is that we wanted to make asset packages composable. Forest and Orion raised a lot of questions about how the release process was going to work when they have to release a huge number of buildpacks.
C
Are
they
going
to
be
expecting
operators
to
download
one
massive
asset
package
that
will
provide
all
of
the
assets?
So
just
like
an
absolutely
ridiculous
number
of
layers.
That
sounds
really
difficult.
So
one
of
the
introductions
into
this
rfc
now
is
the
ability
to
like
compose
asset
packages
by
using
sub
asset
packages,
and
that
will
hopefully
mirror
the
same
structures
we
have
with
build
packages
leading
to
like
this
equivalent
thing
that
devs
can
use
when
they're
doing
a
build.
C: The other thing that came out of this was the desire to have a one-to-one linking between a buildpackage and an asset package, so that there's some obvious reference: this is the asset that you should use. To that end, there's a new field getting added to the package.toml file used to create buildpackages, if we ever have one of those. And I guess the last big thing was that there was a huge amount of discussion around whether, or how, we would enable people to use both archived and unarchived files during a build, and based on some of that discussion we decided to punt that out of this RFC.
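The field C describes is easiest to picture in the package.toml used to build a buildpackage. As a hedged sketch (the table and field names below are hypothetical illustrations, not taken from the RFC):

```toml
# package.toml for a buildpackage (hypothetical names, for illustration only)
[buildpack]
uri = "docker://registry.example.com/sample/buildpack"

# New: a reference to the single asset package associated with this
# buildpackage, present only when such an asset package exists.
[assets]
image = "registry.example.com/sample/buildpack-assets:1.0"
```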
C: The thing that still gives me pause a little bit, though, is this potential behavior for builders to have buildpackages inside of them where some of them are vendored and some of them are unvendored, and then the ability for a platform to decide whether or not to pull in additional asset packages when creating an ephemeral builder. Trying to get this to work requires you to have a large tree of all possible asset packages that a buildpackage, or any of its sub buildpackages, could include.
B: So, did I understand that this takes out the ability to have a Node.js asset that's a layer, which then flows into the next thing? I'm, yeah, strongly opposed to removing that from the RFC; I think that's a core feature. The linking part I could see deferring on implementation, but I'm thumbs down until we have a solution that works for that.
B
So
if
we
define
our
assets
so
that
we
have
a
an
archive
right
of
like
you,
have
a
node.js
tgz
that
has
a
bin
and
a
lib,
and
all
of
that
in
it
right
we're
going
to
end
up
compressing
that
twice
right.
Once
once
has
this
archive
of
stuff,
that's
you
know
not
rooted
in
any
particular
path
and
then
again
inside
of
the
layer,
blob
and
then
decompressing
it
and
then
decompressing
it
again
on
the
other
side,
really.
B: Really, what we just need is a layer that is Node.js, and if we have that, then we never have to rebuild that layer. We can just link it across; we can just make sure that layer gets exported into the final image. There's a much more efficient thing we're missing where, if you have an asset that's pre-layerized, it's just ready to go. There's no solution where you can define that ahead of time and get it to end up in the final image.
E: Usually it's only when you use an asset for the first time, right? Because if it's a launch layer, you can already reuse an asset without doing any extra uploading. Also, you wind up in situations that add a lot of complexity to the exporter, to get the right credentials in there in order to reuse the asset layer from the builder image, because in order to do a cross-repo blob mount, you need to prove that you have read access to the layers.
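For context, the cross-repo blob mount E mentions is the registry API operation that lets a push reuse a blob already stored under another repository on the same registry; the registry only honors it if the client can prove read access to that source repository. Roughly, per the OCI Distribution Spec:

```http
POST /v2/<target-repository>/blobs/uploads/?mount=<layer-digest>&from=<source-repository>
```

If the mount is refused (for example, because the credentials cannot read the source repository), the registry falls back to initiating a regular blob upload, which is exactly the credential complexity being described.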
D: So is this a platform concern or an implementation concern when we talk about asset packages? Because I'm ultimately thinking about the relation, or association, to buildpackages, and if I recall correctly that's mostly a platform concern. So I'm curious how these asset packages play into the whole system.
E
I
think
it's
mostly
a
platform
concern.
So
what
taking
right
now
we
have
build
packages
that
sometimes
do
the
same,
build
pack
and
there's
an
online
and
an
offline
version
and
they're
just
two
different
build
packages.
So
this
would
change
it.
So
there's
only
one
build
package,
but
there
is
another
asset
package
and
then
using
those
two
images
you
can
make
a
offline
builder
that
has
all
the
pack
the
assets
in
the
builder
image.
D: I guess you were talking about the lifecycle, the export phase, retrieving credentials for assets, and I guess that's where I got confused.
E
The
only
time
the
life
cycle
would
need
to
care
about
assets
or
asset
credentials
is
if
we
enable
sort
of
the
linking
feature
that
steven
is
talking
about
for
performance
optimizations,
and
what
would
happen
in
that
case
is
instead
of
packaging
up
the
asset
in
a
layer
and
uploading
it
to
the
registry.
We'd
say
we're
linking
to
something
that
we
know
is
already
a
layer
in
the
registry,
so
we'll
just
use
that
layer,
but
there
become
problems
when
you
say
who
owns
that
layer
and
do
I
have
access
to
use
it.
B: That could be a lot more efficient than having it per application... yes, for a given application, the first time you generate this layer you don't have to redo it for that application, if the buildpack sets it up so that it caches it correctly. But the idea that we could just have layers that everything can reference, that can get shared between images, seems pretty compelling.
D
I
know
there's
this
concept
of
foreign
layers
and
I
haven't
looked
at
it
at
all,
but
I
wonder
if
that
ties
into
this
conversation
at
all,
do
they
solve
that
that
problem
in
any
form
or
fashion.
B
Foreign
layers
are
about
having
a
layer
locally,
that
you
can't
do
certain
things
with,
because
they're
marked
as
like.
You
can't
push
the
layer,
you
can
only
pull
it,
for
instance,
and
it
comes
from
a
different
registry.
It's
like
how
microsoft
deals
with
distribution
of
windows
based
images.
Essentially
I
think
that's
why
the
feature
was
added.
I
don't
know
if
it
helps
us
too
much
for
this
case,
where
we,
actually
we
do
want
it
to
be
local.
E
I'm
also
not
sure
that,
in
my
mind,
when
I
think
about
the
way
you
just
described
the
problem,
stephen,
it
mostly
applies
to
layers
that
are
launched.
True,
build
false
to
think
you
could
update
something
sort
of
the
way
we
do
a
rebase,
and
in
that
case,
do
we
even
you
know,
need
the
asset
to
be
in
the
builder
container.
Maybe
what
you
want
is
that
a
build
pack
can
just
write
a
file
that
says:
go,
get
this
layer
right
like
maybe
there
are
other
ways
to
do
that.
B
You
know
reuse
like
that,
but
I
don't
want
to
create
a
solution
where
for
node.js
we
everybody
immediately
goes
to
double
compressed
node.js.
You
know
as
ipads
that
becomes
the
solution
when
instead,
we
could
provide
a
solution
that
you
know
doesn't
have
double
compressed
things
that
aren't
reusable
right
like.
B: It's already archived; it can just be the files on the disk, right? And then, if we want to defer the implementation, even the RFC part of the symlinking of that layer into the layers directory, and require copying for now, I'm okay with that. It's just the format: it's like settling on a format and changing it later to something that makes more sense, and I don't see a reason not to just resolve that sooner. But I am okay deferring that larger feature to later.
E: Like, I have no problem with adding both formats; I sort of had proposed some different ways that we could do it and express it. It's not that I have a problem with it, but I feel like the set of people who need to approve this cannot all agree about how to express both of these things, and in some ways that being the sticking point is stopping us from solving a more pressing problem that we all can agree on.
B: That would be the only blocker for me. The layer linking we can defer to another RFC, anything like that; it's just getting the format down initially in this RFC, so everybody doesn't move one way and then get told "oh yeah, but for things like this, you've got to do it this other way", when the decision is just a willingness to make a design decision early on.
B
It
seems
like
it's
just
worth
getting
past
that
point,
but
we
can
also
really.
I
don't
think
there
are
hard
questions
about
that.
Really,
that
seems
like
something
we
can
figure
out.
I
could
propose
a
directory
format
on
github
or
whatever
I
don't
know.
If
we
need
to
keep
chatting
about
that
part,
I
don't
want
to
detract
from
other
parts
of
the
rfc
too
much
during
this
conversation.
E: One of my big questions around the format is what the digest is: you're referring to that directory by a digest in that asset hierarchy, so what is that a digest of? I assume it's a digest of the tar representation of all the files beneath the digest directory, because that's how you could put all that information into something digestible.
E
I
do
think
that
that
might
be
different
than
the
digest
of
the
asset.
You
fetch
when
you
go
to
create
that
layer,
acid
you're,
fasting
fetching
might
be
compressed
stuff
like
that.
So
I
think
we'd
have
to
solve
some
of
those
problems
to
come
up
with
a
format
there,
but
it's
not
impossible.
B
It
could
be
not
a
digest
right.
This
could
be
something
that's
when
you
do
things
with.
You
know,
because
the
digest
is
the
digest
of
the
layer
right
in
that.
In
that
sense,
it's
digested
the
original
artifact
there's
no
original
artifact
right,
and
so
it
could
be
id
inversion
like
was
proposed
before,
and
we
just
need
a
different
solution
for
that.
E
Maybe
just
for
layers,
I
think
maybe
we
could
do
digest
for
file
artifacts,
where
the
whatever
you
put
in
there
is
exactly
what
you
asked
for
right
and
we're
not.
We
don't
care
what
format
is
and
we're
just
sticking
it
in
in
the
layer.
But
if
we
want
to
do
layers,
maybe
we
go
away
from
having
a
digest
and
that
solves
some
of
the
complexity
there,
with
like
name
of
the
layers.
C: Okay, all right, that gives me some stuff to work with. I know from the last conversation I had that Ben is also kind of on the other side of the aisle with respect to some of this stuff. So...
B: I think, as long as Ben gets what's in the RFC right now, I don't think he's opposed to the other stuff happening in addition to that; that's my understanding. I don't know, Emily, if you have a better sense, but I don't think he would push back on additionally doing this. He just wants this.
B: Something I had an unrelated question about: this associates one asset package with one buildpackage, and then lets you combine asset packages into additional artifacts that are then used in meta-buildpackages, if I understood correctly. Is there a reason we only allow a one-to-one association between buildpackages and asset packages, versus having asset packages that you can...
E: You wouldn't want to have to enter a bunch of repository names, one for each of these asset packages; you would want to relocate each one individually, because you need to configure where they all land. So maybe you have to do something to combine them.
E: But at that point, isn't it easier just to already have a combination? And that's just one example; I wonder if in other situations it's easier to think about a single one. Like, in your example: if, instead of using the builder credentials, we used the asset images during export and needed the credentials for that, would it be easier to have one of those in the build rather than having to inject 20 different ones? I guess if they're on the same registry it's still one set of credentials, but against multiple repositories.
D: Right, I see what you're saying. But still, even if that was true: let's say these assets are relatively large, you have multiple different-sized assets, and you wanted to pull very specific ones, or just differentiate between those different assets in any meaningful way.
D
You
wouldn't
be
able
to
at
all
right-
and
I
don't
know
I
haven't,
heard
anything
specific
as
to
why
it's
a
one-to-one,
it
just
seems
like
there's
a
slight
inclination,
but
I
again
from
my
perspective,
I
don't
see
I
I
feel
like
there
would
be
more
of
a
negative
feedback.
C: Yeah, I think the way this is written right now doesn't enforce a strict one-to-one mapping; it's just something that I think the release, or the buildpacks, team wanted. They can definitely have that be one-to-one, but...
B: It was one-to-one if you go to... there's a toml file that had a single... I saw an asset package entry saying that you have one asset package associated with one buildpackage; but then, when you package a meta-buildpackage, you combine all the asset packages of its sub-packages into one asset package that that buildpackage references. But I think it's not quite that.
C: So I think this does let you combine asset packages into one larger image, and that's just kind of an ease-of-use thing, and also an ease-of-release thing. We could have this just be a one-to-one mapping, which I think is kind of the ask from the release engineering team here, right?
D: And so I guess, where is that one-to-one single asset image coming into play? Is that being requested here in the RFC, or is that an implementation detail by their system, or are we expecting pack to do that?
C: Yeah, so it's doing that. Sorry, I think basically you can consider what they want to do as a special case of a one-to-many mapping. They just want to worry about one extra thing that they have to release; they want platform operators to be able to say, "oh, if I want this to be offline, I go download one thing and that's all I have to worry about".
D: Okay, so someone will have to worry about some tooling to enable that to happen, and we're just not sure who yet. Is that right?
C: So I think it's right here: when we make an asset package, effectively we're just making a single image, and the metadata, or the label, that goes along with this image just has a list of the layers that end up inside of it.
D
Okay,
so
then
I
guess
the
way
that
the
build
packs
or
pocato
team
would
use
this
is
they
would
call
package
asset
and
put
a
whole
bunch
of
stuff
in
here.
But
then,
if
you
look
at
the
package
tamil,
when
they
create
the
actual
build
package,
that's
when
they
only
list
one
thing
there,
and
so
it's
a
one
to
one.
Okay,
exactly.
A: So the buildpackage itself references the asset package's registry location.
C: Yes. So there are two ways this could be used, and I think they're right here. One is that, when you're constructing a builder, we have this pull-policy flag: if we're going to enforce that you always pull everything down, we should let you construct a completely offline builder. But additionally we should also probably let you just vendor pieces of this in, vendoring little pieces of your asset or vendoring individual assets.
A: So one concern we have from our platform is that a lot of the end users are relocating buildpackages from one location to a new registry. So if the only link from the buildpackage to the necessary assets was the registry, then to fix that pointer you'd actually have to change the buildpackage's underlying contents and its digest.
A: It almost seems like there should be an asset ID, where the buildpackage could reference the asset ID and there could be a list of assets provided to the platform. If you request ID "node" or whatever (obviously a better ID than that) and there's an asset provided with a matching ID, say "paketo/node", the platform could pull it in. That way it's not tied to gcr.io/paketo-buildpacks/node, if you're a customer that just can't have access to that location, or doesn't trust that location.
E: I feel like it makes sense to include, in the buildpack metadata, identifiers for all the assets it could work with, like digests, so that you can match it up to the correct asset; and then a platform could use an asset package at a different location, sort of in the platform config. I'm thinking about it the way we do run images on builders, where you can always specify your own run image, even though the builder image might contain a hint.
E: I was going to say digests, but I think we want diff IDs, because of compression algorithms and whatnot. And that metadata about assets will exist on the buildpackage, so that, given an asset image, you know which ones were supposed to go with the buildpack; you're never taking for granted that whatever is at this tag is what was intended to be packaged.
E
But
on
that
build
pack
metadata,
you
also
have
a
hint.
That's
an
image
like
where
you
might
go
find
those
layers,
but
you
could
always
add
a
later
point,
specify
a
different
location
to
find
them.
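The digest-versus-diff-ID distinction being discussed can be made concrete: a layer's diff ID is the SHA-256 of the uncompressed layer tar, while the registry digest is the SHA-256 of the compressed blob, so the digest can shift with compression settings while the diff ID stays stable. A small self-contained illustration (not project code):

```python
import gzip
import hashlib
import io
import tarfile

# Build a tiny single-file layer tar in memory, standing in for an asset layer.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"node asset payload " * 512
    info = tarfile.TarInfo(name="asset.bin")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
uncompressed = buf.getvalue()

# diff ID: digest of the *uncompressed* layer tar; it is unaffected by how
# the blob later gets compressed for the registry.
diff_id = "sha256:" + hashlib.sha256(uncompressed).hexdigest()

# Registry digest: digest of the *compressed* blob; gzip level and metadata
# can change it even though the layer content is identical.
blob_default = gzip.compress(uncompressed, mtime=0)
blob_fast = gzip.compress(uncompressed, compresslevel=1, mtime=0)
digest_default = "sha256:" + hashlib.sha256(blob_default).hexdigest()
digest_fast = "sha256:" + hashlib.sha256(blob_fast).hexdigest()

print(diff_id)
print(digest_default)
print(digest_fast)
```

This is why E favors diff IDs for matching assets across relocation: re-pushing through another tool can recompress a blob and change its digest, but the diff ID of the layer content survives.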
B: I think... sorry, go ahead.
A: I guess I'm a little concerned about treating the diff ID as the mechanism to find it; that seems very opaque to end users. We have all these assets, apparently you want node, and the only thing we know about it is that its diff ID is "abq...", and we just search through 100 assets for "abq...". It just seems like a very opaque strategy.
D: I wonder if we really want to look at image mirrors as a possible solution for that.
B: Exactly, that's close to what I was going to say: you could build a solution for buildpackages that looks just like run images with run-image mirrors, where they're registered, they have a tag, and there are alternate locations you can look at. But I don't think we need that. The reason we have run images and run-image mirrors is that there is no exact digest of a run image you can tie a builder to; it's a subscription to updates, and there is no identifier for it.
B
That's
that's
that's
why
we
ended
up
with
a
mirror
strategy
where
it's
sort
of
more
active.
In
this
case,
I
think
really.
All
we
need
is
a
canonical
location,
and
I
mean
I
might
be
wrong
here,
but
a
canonical
location
of
where
the
build
package
lives
like
where
its
official
published
location
is
potatoville
pack,
slash
wherever
right,
on
docker
hubs
or
tcr
or
whatever,
and
and
then
the
digest
of
the
build
package
itself,
because
then,
as
kpac
or
whatever
platform,
given
that
in,
unlike
the
run
image
mirror
case,
the
digest
of
the
build
package
exists.
D: It's a smaller risk, and you might end up with just a handful of different variations, but overall you probably still get that performance benefit.
A
I
I
think
my
concern
was
just
using
the
digest.
It's
just
the
opaqueness
of
it
like
it
works
in
theory.
What
happens
when
you
do
a
dock
or
pull
docker
push
push
your
registry
and
suddenly
the
platform
just
is
giving
you
pic
error
messages,
because
it's
not
finding
the
asset.
You
think
it's
supposed
to
be
because
the
digest
or
dividing
change
somewhere
along
the
line.
E
You
can
definitely
couple
the
diff
id
with
a
bunch
of
other
metadata
right.
This
is
jdk
version
three,
so
that
you
could
put
that
in
output.
But
do
you
think
the
diff
id
itself
is
probably
the
best
identifier
to
use
when
you're
saying
trying
to
validate
whether
it's
the
correct
acid
or
not
all
right.
B
You
can
we
could
put
the
canonical
location,
the
digest
and
the
all
the
internal
diff
ids
on
the
metadata,
the
build
package
and
then
you'll
always
be
able
to
you
know
kpac.
If
you're
worried
about
relocation,
you
could
ignore
that
digest.
Right.
Just
compare
the
diff
ids
or
you
can
use
the
digest
if
you
trust
that
your
asset
packages
haven't
been
tampered
with.
A: Yeah, you'll definitely need the diff ID to construct the underlying image, but it seems like, even if you're transporting that diff ID alongside the buildpackage and then attempting to use it to find the asset in a list of assets, you also need some strong naming or identification mechanism for the asset along with that diff ID, so that when things go wrong it's clear what is missing and what assets are supposed to be there.
C: An identifier, yeah. Okay, that's really doable. In the labels they already kind of use this diff ID to pass some information around; there's already a metadata section. I could definitely pull something stronger, like a name that you have to give, up out of this. We wouldn't be enforcing any uniqueness or anything like that, though, but yeah, it's definitely doable.
B
Oh,
we
just
have
10
minutes
left.
Do
we
want
to
keep
doing
more
questions
about
offline,
build
packages,
and
do
we
want
we
should
we
want
to
defer
the
other
things
to
tomorrow
when
we
have
more
time
or
does
anybody
think
nine
minutes
is
sufficient
to
talk
about
experimental
mode,
rfc,
stack
packs
or
default
process
types.
B
Yeah
and
then
the
stackpack
questions
were
those
kind
of
quicker
questions
or
are
also
better
for
longer
form.
A: Yeah, the stackpack stuff can wait. I should probably chat with folks about the build plan stuff; it's probably not gonna fit in nine minutes.
B
Well,
did
we
wanna
or
the
default
process,
type
questions
quick?
I
know
that's
pretty
pretty
well
decided
or
like
it
seems
like
it's
pretty
far
along
and
we
could
talk
about
that
for
a
little
bit.
D
And
just
as
a
refresher
while
you're
doing
that,
is
this
still
the
idea
that
the
build
packs
will
be
providing
what
they
hope
to
be
the
default
process
type
and
then
some
concerns
came
up
about
like
overwriting
it
or
the
last
scenario,
and
I'm
assuming
questions
around
that
have
been
resolved.
Sorry.
A: So I think we added some additional logic to cover edge cases; I can find the conversation.
A
So
we
had
added
some
some
philosophy
and
some
examples
of
edge
cases
that
when
a
later
build
pack
redefines-
let's
say
a
later
build
pack
redefines
what
web
means
when
an
earlier
build
pack
has
already
declared
web
as
the
default.
A
We
said
that
we
would
clear
the
default
process
designation
and
then,
if
the
later
build
pack
wants
to
redefine
it
and
declare
it
as
the
default,
then
we'll
respect
that.
But
we
effectively
shouldn't
well,
I'm
like
I'm
like
totally
messing
it
up,
but
I
should
just
go
through
the
edge
cases
here
and
the
philosophy
which
is
really
this.
A
If
a
build
pack
attempts
to
redefine
a
process
type
that
is
declared
as
the
default
by
an
earlier
build
pack,
the
default
designation
should
be
cleared
unless
explicitly
set.
So
that
was
kind
of
the
philosophy
that
we
articulated
to
address
some
of
the
concerns.
And
then
I
provided
some
edge
cases
to
give
examples
of
what
that
would
look
like.
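For reference, the mechanism under discussion surfaces in a buildpack's launch.toml, where a process entry can ask to be the app's default. Treat the snippet below as a sketch of the proposal rather than final spec syntax:

```toml
# launch.toml written by a buildpack (sketch of the proposal being discussed)
[[processes]]
type = "web"
command = "node server.js"
default = true   # this buildpack asks for "web" to become the app's default process
```

Under the philosophy A describes, if a later buildpack redefines the "web" process without setting `default = true`, the earlier default designation is cleared rather than silently pointing at the new definition.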
B
Just
really
quickly
a
mental
model
for
that
that
was
helpful
for
me
is
thinking
about
it,
like
the
default
process.
Type
you're
talking
about
your
process
type
when
you
set
the
default
process,
type,
not
another
build
packs.
You
know
default
process
type
and
so
later.
If
somebody
else
redefines
web
right,
that
default
process
type
is
pointing
to
a
web.
That's
no
longer
available
right,
that's
that's
been
overwritten,
and
so
obviously
the
default
goes
away
with
it.
A: Yeah, that is helpful. And then, just to call out what the open questions are: Jesse raised this, which I hadn't thought of, and it kind of takes a minute to understand what's happening. Effectively, we said we'd clear the designation, but what if the later buildpack redefines what "web" means and says "I would like web to be the default process", but sets this override property to false, which...
B: I think with the namespacing, though... if you think about it as namespacing, it makes it really clear that it shouldn't be cleared, because it's not really a clear that's happening. In the previous case, you're pointing towards a process type: the first buildpack said "this is the web process, and I think this web process should be the default", and then another build...
B: ...and then with "override" meaning overwriting whole process types, not overriding the definition of that pointer, which would be difficult to explain, right? Can you override the default process but not the process? I think you do open up some other edge cases.
E: There are cases where you might... So the original intent here was to be like the way we have default and override for environment variables: you can specify an environment variable, but it only takes effect if no one else has specified it; that's the default. And this is similar to that, for the default process.
E: But whether or not we actually want to override what a type maps to is an interesting question. I think where it would get confusing is: would you then apply that to, say, "worker"? Would you not override "worker" if there already was one, even though the override is, like, a default-process concept?
B: I'm now trying to understand the problem, though. If you have another process type and you define override equals... or if you have another process type, another pointer, then the old pointer goes away completely, because you've created a new pointer by selecting what the default is again in the next buildpack.
B: It's an ordered list; you could just say the last thing that says it's the default wins, just like later buildpacks win. We should probably warn people when that happens, but you wouldn't even need a ton of validation in the lifecycle, I don't think.
B: The reason I bring that up is that a lot of people like toml because it lets you append stuff to the end, so you can see appending a process type where you're like "no, I think this should be the default", and then you wouldn't want it to necessarily fail; you'd just want that to be the new behavior. That's something Joey pointed out a couple of times: so many people use toml.
B: The idea is just: how does this work with multiple API versions?
A: Either way, how does that work across multiple buildpacks where, say, I set "web" as default but then a later buildpack sets "foo" as default? I assume "foo" wins and erases the other default. Yep.
D: We did, with the exception of the platform presenting that information.
B: We may also want to assume that old buildpack API versions before this are "override equals true", to match the previous behavior, as opposed to assuming override equals false, which is a little weird. But I think that gets us to the point where you don't have to worry about it anymore.