From YouTube: Working Group: 2020-07-29
Description
* Application Mixins: https://github.com/buildpacks/rfcs/pull/87
* Bill of Materials Updates
* Service Bindings Updates: https://k8s-service-bindings.github.io/spec/#application-projection
* Layer Origin Metadata: https://github.com/buildpacks/rfcs/pull/94
* Any Stack Buildpacks: https://github.com/buildpacks/rfcs/pull/97
* Override Env Vars Default: https://github.com/buildpacks/rfcs/pull/98
* Opt In Layer Caching: https://github.com/buildpacks/rfcs/pull/99
* Decouple Buildpack Plan and Bill of Materials: https://github.com/buildpacks/rfcs/pull/100
* Shell-free profile.d: https://github.com/buildpacks/rfcs/pull/104
A
All right, reminder to sign in if you haven't already. Do we know if we're waiting for Ben or Terence this time?
A
All right, should we get started? First thing on the list — as I already mentioned, sign into the doc if you haven't already. Next thing is introductions and new faces. Do we have anybody here who hasn't been on the call before?
A
I think I recognize both people. Josh, I don't know — have you joined us before? Yeah, I've been here.
B
I was here roughly two weeks ago, in and out. I'm on the same team as most of the Salesforce folks.
A
Awesome. Unless there's anything else on release planning, we'll do our ten-minute review of outstanding RFCs.
A
All right, everybody see that? Cool. So, first thing: Shell-free profile.d. This is one I just opened right at the start of the meeting that I want to talk about today — that's on the agenda. Project descriptor schema is next. Any updates there?
A
I was going to ask if you want to put it on the agenda for today — yes? Yeah, okay, on the agenda for today. Sounds great, awesome. Next one on the list is Decouple Buildpack Plan and Bill of Materials. The next four after that, actually — Opt-in Layer Caching, Override Env Vars by Default, and Any-Stack Buildpacks — I put all of them on the agenda today too, along with the top one. They're all kind of small changes going up to 1.0.
A
Oh no — after that, the RFC for project descriptor flexibility. Terence, Joe, do you know if there are any updates on this, anything we need to do?
A
Sounds good. Do we need to do anything to it, or...?
A
Layer Origin Metadata is a draft. Are we going to close this, or — was this replacing a previous one, or was this replaced by a previous one? I forgot.
A
Awesome. Did you put that on the agenda for today? Do you want to chat about it? Yeah — you can add it to the agenda. I did want to chat a little about it. Cool — feel free to put anything before my five on there. I don't expect we'll get to all of that.
A
Next thing is pack subcommands — Javier? Well, I made a few updates to it, I believe yesterday. I think it's just a lot of minor details that I added to the RFC, but I don't think I've gotten any feedback that prevents this from going in, so I just want to see if I could get some re-reviews on it.
A
Awesome — nothing needs to be changed, all good? Correct. Cool. Multi-API Lifecycle Descriptor — Emily, this is in FCP?
C
It's in FCP. There were still some open questions from Terence after we moved it into FCP, about the experimental APIs, so I just took all the controversial parts out, and I think Terence is going to open an experimental APIs RFC in the future.
A
Do we need to take the FCP tag off, or is it still in FCP? Are we—
A
All right, we'll go with it. Experimental features.
B
Yeah, just take a look when you have a chance.
A
Cool. Application Mixins is on the agenda for today. Inline Buildpacks is not on the agenda for today — any updates there?
A
Offline buildpack packages — do we have Dan on the call? Yeah, I'm here. I still have some changes to make to this from last time. Well, I think we were talking — I was talking to Emily and Ben a little bit — let's all do, like, a SIG meeting on offline, kind of the offline buildpack package interface. They had some good feedback; it'd be great to kind of work together on that. Perfect. RFC for stack metadata — there's...
A
Do I need to bug somebody about the DCO? I've got it — cool, thank you. Add Root Buildpack Interface — that's a draft. RFC for custom CA certs — draft. Image Exposes Metadata for All Layers — this is also a draft.
B
That's the one I moved out to draft by accident — sorry about that, Paul. But if we need to open it, we can.
A
This is the one that's replaced by the other one — is that right, Paul? Are you okay with this sticking as a draft? Do you want me to close it, or is there any way I can help with this? I think it can be closed; I wasn't sure if I should do that myself or not. Either way, I'm happy to click the button now if it's convenient for you. All right — and that's all the way through. Next thing on the agenda is Application Mixins. I'm gonna stop sharing my screen.
B
Yep, so I made a couple of updates based on our discussion last time, and I just wanted to share those. I think the non-controversial one is that the stackpacks will be expected to be in the image prior to creation of the builder, and by default, all of the stackpacks — there will be an implicit order where all of the stackpacks provided will have their detect run, and you can override that in builder.toml. Because of that, I've made a change to the mixin contract. We talked about whether it should be a pattern — and if it's a pattern, how do you tell it's a pattern versus an explicit string to match one mixin?
B
So I've changed this to a boolean: it either provides mixins or it does not — I'll talk more about that in a second. But that changes how the CA certificate example works — and again, a reminder, this is just an example; we could implement a CA certificate buildpack many different ways — but it no longer works through the mixin contract. It'll—
B
—have its bin/detect run just by virtue of it being in the CNB stackpacks — you know, the stackpacks provided with the image — but it otherwise works the same, and uses the bin/detect to provide, you know, a CA cert that might be required by a userspace buildpack.
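For context, the boolean contract being described might look roughly like this in a stackpack's descriptor — the `provides-mixins` key here is an illustrative placeholder, not a finalized field name from the RFC:

```toml
# Hypothetical stackpack descriptor sketch: rather than declaring a
# pattern of mixin names, the stackpack declares a simple boolean --
# it either provides mixins or it does not -- and its bin/detect
# refines the actual list at detect time.
[buildpack]
id = "example/ca-certificates"
version = "0.0.1"
provides-mixins = true
```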
B
The mixin buildpack — one that provides mixins — would refine the mixins that it provides in some way in bin/detect. This is still a work in progress: I don't think that should be done based on application source; I think it should be some input, like a mixins.toml that's available in the CNB dir, or something like that.
A
Sorry, do you mind if I ask a question about that? Yep, I'm done. So, mixins.toml — I'm assuming it comes from other buildpacks, is that right? So another buildpack could request it? It comes from the buildpack—
B
—to install it. It could — it comes from the platform, which will combine, you know, whatever mixins are needed. I was thinking that mixins.toml might have the exact same schema as the build plan, or something like that, and you'd have, you know, ones that are required but not provided, and the buildpack would sort that out, but—
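As a rough illustration of that idea — and only that, since the speaker calls it a work in progress — a mixins.toml mirroring the build plan schema could look like:

```toml
# Hypothetical mixins.toml: mixin entries collected by the platform
# for the stackpack's bin/detect to sort out, shaped like build plan
# entries (the names here are examples only).
[[entries]]
name = "libpq-dev"

[[entries]]
name = "ffmpeg"
```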
A
To me, this is a big change in the architecture of how buildpack detection works, because currently detection runs in parallel and it's declarative — buildpacks don't depend on each other, right? We went back — first we had this, like, pipe of things that were all interdependent, and we changed it so that detection is completely declarative: everything says "this is what I require and provide," and then there's a thing—
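The declarative contract being referenced here is the build plan that each buildpack's bin/detect writes, along these lines:

```toml
# Build plan written during detection: each buildpack declares what it
# requires and what it provides, and the lifecycle resolves the group
# without buildpacks depending on one another directly.
[[provides]]
name = "node"

[[requires]]
name = "node"

[requires.metadata]
version = "14.x"
```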
C
This is sort of getting at one of the things I was lobbying for last week, which is: because mixins are defined by the stack author, and we're seeing these stackpacks ship on the stack image, I feel like there's no—
A
I think I disagree with the framing. The stackpack may be distributed by the person who creates the stack images, but the person who creates the stack images isn't the same as the person who defines the stack ID, right? We have one io.buildpacks.stacks.bionic and many distributors of stacks with different sets of packages, so we could have many different types of stackpack and keep that functionality modularized. I don't think I agree that there should only be one stackpack per stack ID.
B
Limiting — I don't think we're saying that there's only one. Like, you could create your own stack with that stack ID, with your own mixins buildpack or app buildpack, right? But it's really about that contract — what does "mixins" mean on this stack — and I feel like that really is defined by the group, or people, or whoever makes that stack. So when we're talking about patterns and stuff like that, I worry about some real oddballs — you know, matching things you didn't expect to, because there was some—
A
I mean, even a special value of "any" — it's just, we don't do pattern matching, and we have a value that says "I can provide all types of mixins." There's no validation, right? I think the root of the problem for me is: I don't see mixins as operating system packages, necessarily. I see them as things that follow that ABI contract, right — that are rebaseable at runtime — and CA—
A
We can add additional prefixes, right? We already have sets as an example, and—
A
As in, like, if you can only do an "any" or an exact mixin name, how do you define a — you know? I mean, I think they're—
C
I think there's a question about CA certs as mixins. Every other mixin — the name of the mixin describes a known set of changes, so if that mixin is there, then that exact set of changes should be there, right? But CA certs are different, because unless you actually put the cert itself in the mixin name, there's no way for the name of the mixin to describe the exact change we want, and therefore trying to fit it into that interface is awkward.
A
I think my point isn't that I have a better solution for the UX. It's that I like the idea — you know, from where we were before — of figuring out a UX for being able to specify those things that keeps the architecture of how detection works, and things like that, the same. Like, this would be—
A
I think if we do that, then I'm much more okay with saying that there's this idea of ABI-compatible changes that aren't mixins. I worry a little bit about not capturing that in metadata, but that could be solved in a different way. I'm just hesitant — I'm much less strongly opinionated about this, but I'm hesitant to say that we shouldn't describe all the stackpack changes as mixins, because it feels like mixins were the unit of ABI-compatible change that we wanted to use in the past, and so, like—
A
Is
there
a
need
to
create
some
other
thing
here,
because
you
could
have
really
customized
ones
or,
like
you
know
like
when
you
install
a
bluetooth
package,
you're
not
guaranteed
to
get
the
same
result
each
time
it
runs
arbitrary
code
right
so
like?
Is
it
really
that
different
installing
arbitrary
c
certs
versus
you
know?
I'm
I
lean
towards
calling
those
things
make
sense,
but
I'm
not
strongly
opinionated
about
it
as
long
as
we
kind
of
don't
change
the
detection
interface
and
keep
it
declarative.
C
Mixins are ABI-compatible, and stackpacks can only make changes that are ABI-compatible, but I don't think every ABI-compatible change has to be a mixin, because I don't think it fits well in this case. For example, imagine a build image that someone had installed certs on, so then it said it already had the CA certs mixin — because that mixin name doesn't describe the requirements. It just describes that a thing happened at some point.
C
I feel like it breaks down there, whereas all the other mixins describe a declarative end state, not a thing to do.
A
It makes me nervous, but I wouldn't block moving forward on that. Like, if there's consensus that we should just have a "yes, it can be all mixins," or specify exact mixins, or not use the mixin interface and make other ABI-compatible changes — I'd still be willing to approve.
A
If we made the change during detection so that it was declarative — you could just say all mixins, or explicit mixins, or "not going to use the mixin interface but going to make ABI-compatible changes anyway" — I'd be willing to approve that.
B
Yeah, that's good — that's something to go on. I think the pattern matching was what I was really stuck on, in terms of being able to find something that actually worked.
A
I'm not attached to the pattern matching, necessarily, as long as there's — you could even look at it like defining the mixins ahead of time in buildpack.toml, or whatever, is a way to do pre-validation, and then you can turn that validation off, so "any" mixins end up getting filtered through the list of mixins and sent to it, right? That might be a nicer interface than the pattern matching.

B
Okay, yeah.
I think this CA certs example is kind of problematic, because even if this were possible, there are other ways you could do it — you could still do it with mixins, right? This example doesn't mean you can't do it as a mixin, and I'm trying not to take a stand on, like, what are mixins, you know?
A
Somewhat related — sorry — on the mixin stuff. So, two things about the stackpacks: one of them is pedantic, most likely, and the other one is maybe more in line with our conversation. These stackpacks — at some point we kind of viewed them more as being just like buildpacks, right, or just a different type of buildpack, but now it seems like they're a very specific type of thing that operates very similarly to a buildpack, so the naming seems to be a little bit weird. This is the pedantic part: it's stored in this CNB stackpacks, but then we use "buildpack," you know, in the toml itself and within the toml schema itself.
A
Cool. And then the other thing, kind of in line with that, is that at some point stackpacks could have been provided by, let's say, pack — specifying, you know, as part of a build, you could specify a stackpack and we would have run that operation first — and it seems like at some point it changed into being something that's embedded into the stack, yeah.
B
—of scope. I think in the future, if we get enough end-user demand for something like that, we can explore the generic root buildpack concept and all that. You know, I feel like there's no way to get that without giving up something else, and if enough people want it or need it, I think we should explore it — and this doesn't rule that out. It's just out of scope.
A
So then I'm curious why the stacks array comes into play. If it's something that you're embedding into your stack image, why define — or have the capability of defining — multiple stack IDs that it's compatible with? I feel like we're losing that compatibility if it's not something that you could then plug into multiple stacks. Well, I think you could distribute a stackpack that works on bionic and xenial, right, because they both use apt, and then people could pull that into their stack image.
A
You know, for bionic or for xenial, and it's just a validation that that's right. So who's doing that validation, I guess? I'm trying to figure out who's going to do that validation, and where. I think this is a key thing that keeps coming back: the person who defines the stack ID — in many cases that's, like, the CNB project — and the person who creates a stack image is, like, you know, the Paketo team in the CFF, or the Heroku folks. Those are different people, and so you can have a stackpack that's distributed independently, that's compatible with a number of different stack IDs, and that is reusable in those different contexts. The validation would be when you're creating a stack image, or when you do a build: you can confirm that the stackpack, you know, matches the ID, and it's a violation if it is an incompatible combination. We don't have much tooling to create stack base images.
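The stacks array in question is the one in buildpack.toml; a stackpack distributed for multiple apt-based stacks might declare something like the following (the xenial ID is illustrative):

```toml
# A stackpack declaring compatibility with more than one stack ID;
# validation happens when it is packaged into a stack image or used
# in a build, not at distribution time.
[[stacks]]
id = "io.buildpacks.stacks.bionic"

[[stacks]]
id = "io.buildpacks.stacks.xenial"
```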
C
Yeah, I feel like the CA certs and mixins buildpacks could easily be something that the CNB project provides for bionic now, because we've defined what it means to be a bionic stack. Stack image creators would still need to go and package those on whatever image they're building, but I think it could be nice to have buildpacks in the project for the first time.
A
I have one last question, Joe, about the bin/detect outputting "provides name = cacerts." So it's providing a regular build plan entry dynamically, with the idea being that in this model — where we're still doing the build plan declaratively — you can't define mixins there; you can only define regular build plan entries there, and then mixins have to be defined statically in the buildpack.toml-like thing.
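In other words, the stackpack's bin/detect would emit an ordinary build plan entry, something like:

```toml
# Sketch of the dynamic output of the CA-certificates stackpack's
# bin/detect: a regular build plan provide, while the mixins themselves
# stay statically declared in the buildpack.toml-like file.
[[provides]]
name = "cacerts"
```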
B
Okay, I think we're almost there. Just FYI, I've started working on the implementation for this, and I feel like the surface area — the surface part of it that we're talking about — is not the hard part to implement. It's going really well; using kaniko code to snapshot changes is pretty sweet.
B
Joe, I was going to ask — since you brought up kaniko — in the Windows sense, kaniko is not Windows-compatible. Do we feel like—
B
Yeah, yeah, I saw that comment. I think — honestly, I have no clue. Could it be swappable within the lifecycle? Yeah, absolutely. What it would take to actually implement that, I have no clue, and I would definitely be interested in doing the implementation, or, you know, we could find folks who'd be willing to do that. I would just think, from a spec perspective, about keeping that either undefined or an interface that we could build, yeah.
B
Absolutely — kaniko is an implementation detail. I'll send you the code that I'm working on, and I can point you exactly to where — like, I have a kaniko snapshotter that you have an instance of, or whatever the hell it's called in Go — and you can see how you could just plug in a Windows snapshotter or something. Cool, I'll tag up with you — yeah, I'd love to.
A
I'd like to understand — maybe we don't have to have the conversation now — but I'd like to understand more about what the Windows needs are for base image modifications, because I haven't seen that done very much before.
B
Yeah, so, for instance — I'll give you the short version. There is sort of a package equivalent for Windows. There are Windows features — the catch is, there are like four different kinds of Windows features — there are Windows packages, and Windows optional features.
B
Exactly — there's not, like, a singleton concept of what, say, bionic would have for a single package. But yeah, those would be the way that — say I'm, you know, building on top of a Microsoft image and I want to throw in a whole bunch more software, like media codecs, or some dependency for this particular kind of app. In that sense, the stackpacks would potentially be a great fit for putting in that one set of packages.
B
Yeah — well, I'll spare you the details, but it helps out with some of the OS kernel coupling and some other things too. That mixin concept is a great fit.
B
Ben, yeah — I'll give this a quick share; let me find the right document here. Internally at VMware, we have a team that are experts in what it means to have bills of materials across different deployment strategies — things like images, for example. They've done lots of work with, for example, government agencies, and are really tuned into, like, the IETF and ISO specs around this kind of thing.
B
So
they've
asked
for
changes
to
the
cloud-native,
build
pack
specification
to
make
it
sort
of
compliant
with
what
the
rest
of
the
world
expects
here.
I
have
no
idea
what
this
is
going
to
come
out,
looking
like
at
the
end,
but
we
are
working
on
it
now.
This
is
an
open
document
for
anybody
who
wants
to
participate
in
that
discussion.
B
You can talk to Misha as well as I can, to push back and tell her what we think we also need. I know nothing about this space at all, so other than providing the context of how we build dependencies and include them in Cloud Native Buildpacks, I don't even know what we want. Make sense?
B
Yep. The next thing on the list — I'll move this along — is service binding specification updates. The Cloud Native Buildpacks project put together a strawman binding specification probably almost a year ago now, and we've had great success with it. The Kubernetes community noticed it, and as they've started working on a service binding specification, they have adopted the important bits of it, especially when it comes to application projection — but it is not fully compatible.
B
I
don't
think
we
ever
expected
it
to
be
so
a
there
will
be
an
rfc
forthcoming
that
basically
changes
our
binding
specification,
which
was
always
an
extension
in
the
first
place
to
match
up
with
what
the
kubernetes
binding
specification
looks
like
both
in
the
structure
of
the
directories
and
names
of
environment
variables.
This
will
effectively
supersede
what
we
do
with
the
one
exception
that
we're
going
to
define
it
a
little
bit
more
tightly
during
the
build
phase.
Please
don't
be
surprised
by
this.
B
If
you
are
a
platform
that
is
thinking
about
service
bindings
know
that
this
change
is
coming
to
you.
I've
already
done
the
changes
in
the
library,
so
we
will,
for
the
fourth
disable
future
support
both
our
extension
specification
and
the
new
kate
specification
and
then
at
some
point
when
our
stuff
sort
of
ages
out
a
year
from
now,
we'll
probably
remove
that
support
from
the
libraries.
A
Then
we
we
yank
the
cnb
bindings
back
out
of
cnb
and
just
put
a
line
in
somewhere
that
says
and
to
do
service
bindings.
You
should
use.
A
All right, next thing on the list is Layer Origin Metadata. I think this is Paul.
A
Okay, so my main — I haven't touched this RFC in about a week in any, like, meaningful way. Well, I tried yesterday morning. But anyway, kind of how I'm feeling is that the work that remains to be done on it is pretty far outside of my, you know, knowledge when it comes to buildpacks, so it's not clear to me what more I can do.
A
I
was
mainly
asking.
I
mainly
wanted
to
come
in
and
ask
if
there
was
a
suggestion
to
that
end
or
someone
maybe
want
to
work
with
me
on
that
on
at
least
getting
this
thing
out
of
like
draft
form
and
answering
some
questions
before
it
gets
to
the
point
where
we
say
it's
anybody
who
wants
to
can
look
at
it.
Could
you
summarize
what
the
rfc
is?
A
Just
you
don't
have
to
go
into
a
lot
of
detail
if
you
don't
want
to,
but
just
at
a
high
level
like
what
you're
proposing
for
folks
and
so
interesting
people
can
opt
in
yeah.
Sorry
about
that,
okay,
so
layer,
origin
metadata
is
about
being
able
to
like
identify
all
the
inputs
that
go
into
a
layer,
and
there
are
some
kind
of
obvious
inputs
that
we
know
about.
A
Like
the
you
know,
the
version
of
the
application
that's
going
in,
but
other
things
that
are
inputs
are
like
the
state
of
other
layers
that
are
that
that
the
build
pack
executables
are
going
to
see
and
because
those
those
things
matter
and
can
affect
you
know
the
outcome
of
the
build.
We
should
be
able
to
know
we
should
be
able
to
identify
those
those
those
things.
A
So
this
is
about
being
able
to
identify
like
where,
like
cache
layers
came
from
and
and
and
saying,
you
know
that
this
build
pack
was
generated,
and
it
saw
this.
You
know
this,
this
cache
layer
at
that
time.
Things
like
that.
A
So
it's
it's
like
we're
technically
calling
this
something
like
traceability,
like
traceability,
of
of
layers
like
how
they
originate,
and
a
big
thing
in
this
is.
It
covers
build
time
layers
too
right.
A
Yeah — not just layers that make it into the image. Yeah, yeah, because all of those are important.
A
Yeah, I was just thinking about that too. I just proposed an RFC over the weekend that creates a separate bill of materials, essentially for build-time dependencies. It doesn't end up in the kind of layer-label metadata, so it doesn't affect reproducibility of the image. It's not label-specific; it's just a general bill of materials.
C
Yeah, I'll take a look at that. Everything can end up in report.toml — the build BOM and the layer origin metadata — so, like, build-time layer metadata specifically would be interesting. I'm interested in this proposal, and I've threatened to help more in the past — I've been very busy lately, though — so I was wondering if someone else would like to step in and help trying to get this out of draft.
A
Have you brought it up to the maintainers-in-training?
C
They're both out this week. Yeah — this week, I was thinking that might be a good opportunity to get more people involved in the RFC process, but I just wanted to throw it out here too, in case anyone who's on this call was—
A
Yeah, thanks. I didn't want to, like, put all the pressure on you, Emily — that's why I figured I'd bring it up in the working group meeting — but yeah, there's no rush, so we can totally wait till Natalie and Yael — I guess you said — are back. I'm also willing to help figure out how this kind of leads into report.toml, because I'm really interested in that sort of build-time metadata more generically, so I'm happy to provide feedback.
A
I don't have anything else. No? All right, in that case, I'm going to try to go through five RFCs that I opened really quickly. They're relatively small, so I'll go quickly — just stop me if you have questions. Give me one second to share.
A
I think there's something in this for everybody here. It's mostly stuff that I collected from, you know, different requests people had for things they thought were broken, before 1.0. Some of it's stuff I care deeply about; some of it's stuff that I just thought — well...
A
If
we
don't
do
this
soon,
it's
probably
not
going
to
happen
so
starting
at
the
bottom
here,
any
stack
build
packs,
so
this
is,
it
just
makes
it
so
that
if
you
don't
specify
stacks
in
the
build
pack
that
you
can
use
it
on
any
stack
which
is
kind
of
dangerous,
because
it
means
that
you
know
you
could
end
up
with
an
ecosystem
of
build
packs
that
just
never
specify
stacks
and
no
validation
but
seemed
for
things
like
inline
build
packs
which
are
essentially
already
stackless,
build
packs
and
the
ability
to
just
write
a
quick
script
that
runs
in
you
know
any
linux
distro
or
just
contains
a
go
binary
it.
A
You
know
it
seemed
like
it
was
probably
a
good
idea.
The
you
know
like
this
may
be
a
question
of.
Should
we
have
like
a
an
operating
system
flag
like?
Could
you
accidentally?
You
know
run
a
windows
one
on
a
linux
one,
but.
A
Going once, going twice — moving on to the next thing: Override Env Vars by Default. This is a relatively small change to how environment variables are ordered as they're added into the environment, using environment variable directories in layers. The idea is that currently, if you don't specify a prefix, it means append.
A
That's
pretty
specific,
and
it's
like
it's
quite
an
assumption
to
make,
and
it's
not
what
voice
users
would
expect.
I
think
they'd
expect
when
you
create
an
environment
variable
file
without
a
prefix.
The
behavior
is
the
same
as
what
happened.
If
you
did
environment
variable
equals
value
at
a
shell
right,
which
is
that's
the
environment
variable
overriding
the
previous
value,
so
this
would
change
the
default
to
override.
A
This
is
a
huge
breaking
change
that
would
cause
unexpected
behavior
changes
in
build
tanks.
If
you
didn't
know
about
it,
when
you
bumped
your
buildpack
api
version,
that's
definitely
something
we'd
want
to
do
before
one
owl,
but
also
definitely
something
that's
very
confusing.
Right
now,.
B
I definitely endorse this. I have also lost a comment that I'm sure I made somewhere on here, which was: in the "what it is" section, you should probably clarify process-specific environment variables as well — your example only had all-process variables.
A
Got
it,
do
you
want
a
suggester
change?
You
can
do
sign
offs
now
with
the
suggested
change
interface,
so
it
doesn't
break
dco
to
just
do
them
through
the
interface.
If
you
didn't
know
that.
A
Then
I
can,
I
can
take
that
opt-in
layer.
Caching
is
next
sorry
any
questions
on
override
and
vars
by
default.
Anybody
think
it's
a
terrible
idea.
We
shouldn't
make
a
big
breaking
change.
Anything
like
that.
A
All
right
next
is
opt-in
layer.
Caching,
this
is
maybe
the
most
fun
the
this
suggests
that
there's
kind
of
a
lot
of
discussion
on
this
one
too.
This
suggests
that
we
change
the
way
layer.
Caching
works
so
that
a
when
a
layer
comes
back
or
when
layer
metadata
comes
back.
It's
not
automatically
going
to
get
added
into
the
image
without
the
build
packs
knowledge.
A
So
if
you
had
an
old
version
of
a
build
pack
that
had
a
layer
called
foo
and
later
it
was
called
fubar
right,
you
could
end
up
in
a
situation
where,
when
you
upgraded
your
build
pack
to
the
latest
version,
if
your
build
pack
didn't
look
for
any
stray
layers
and
delete
them,
you
just
permanently
preserve
foo
continually
into
the
image
the
whole
time
and
that's
pretty
weird.
A
Even
during
build
time,
it
could
preserve
a
build
layer
like
that
it
could
just
stick
around
and
affect
the
environment
without
you,
knowing
you
know,
subsequent
build
packs
too,
even
more
unexpected
without
you,
knowing
about
it
pretty
big
hole
in
the
original
spec
for
how
layer
caching
should
work.
It's
very
simple
change.
It
just
sets
all
the
flags
to
off
when
layers
are
restored.
It
makes
the
build
pack
set
them
again
to
their
desired
values
and
also
to
prevent
edge
cases
where
other
build
text
could
still
have
references
to
those
layers.
A
It
renames
the
layered
directories
to
dot,
ignore
at
the
end
of
the
individual,
build
packs,
build
and
so
that
it
would
break
if
another
layer,
if
there's
like
a
weird
edge
case
or
another
layer
like
a
subsequent
build
pack,
could
have
been
built
to
link
to
the
previous
one
and
during
that
very
next
build,
but
not
the
build
after
it
would
still
be
able
to
reference
things
on
that
layer
at
build
time
and
so
renaming
the
directory
after
the
build
packs,
build
breaks
those
links
and
makes
it
discoverable
earlier.
If
you
had
that
edge
case.
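Concretely, the flags being reset are the ones in each layer's TOML file; under this proposal they would come back off after restore, and the buildpack would re-assert them every build, roughly:

```toml
# <layers>/<layer-name>.toml: with opt-in caching, these all default to
# false when a cached layer is restored, so the buildpack must set the
# ones it still wants on every build.
launch = true   # include the layer in the final image
build = false   # expose the layer to subsequent buildpacks at build time
cache = true    # persist the layer into the cache for the next build
```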
A
So
that's
a
weird
thing
also
any
any
questions
about
this.
It's
this
also
breaks
all
caching
of
build
packs.
Essentially,
every
layer
on
the
next
build
after
this
is
implemented,
is
no
longer
cached
until
build
packs
start
writing
their
layer,
tommles.
A
And the caching opt-in logic is much simpler, because it's just the same logic you used to set up caching before — compared to cleanup logic that goes through and removes layers that the buildpack doesn't know about. So I think it's a straight UX improvement.
A
Moving
on
to
the
next
one
decouple
build
pack
plan
and
build
materials.
Emily
really
likes
this
one.
The
this
is
a
change
where
so
we
have
this
kind
of
really
interesting
interface
for
the
bill
of
materials.
Right
now,
where
you
know
the
build
plan
turns
into
the
bill
of
materials,
but
you
can
edit
it,
but
then
also
you
can
remove
stuff
and
that
kicks
entries
to
the
next
thing.
You
have
a
file,
that's
like
read,
write
and
it
started
at
a
value,
and
you
made
changes
to
it
and
rewrote
it.
A
It's
kind
of
weird
and
it
doesn't
allow
you
to
separate
build
time
metadata
from
runtime
metadata,
and
now
that
we
have
report
toml,
we
have
a
way
a
place
to
put
build
time
metadata
about
the
image,
and
so
this
kind
of
refactors
that
so
that
the
interface
stays
the
same.
You
still
get
build
materials
in
the
as
a
command
line
argument,
but
you,
instead
of
modifying
that
you
write
it
to
a
launch
toml
and
a
new
file
called
build
tommle.
A
That makes it into the report: build-time metadata will make it into report.toml in the end. build.toml has a new entries section where you can push entries to subsequent buildpacks, the same way you could use removing them in order to push them forward last time, but it's much more explicit that you're grabbing this and pushing it into the next one.
C
My one comment on this was: I think the build plan is already complicated enough that most people who aren't Paketo don't use it, and the push entries, I think, are an additional level of complexity that most people don't use. So part of me wonders if there's a clever way for us just to remove that. But I can also see that that could be a different RFC, because this is an easy win as-is, without tackling the edge cases around whether we can get away without having push entries.
A
B
I think in practice, though, whether it's the complexity or something else, we don't use them at all, anywhere, as far as I'm aware. I'm sure there's a small place somewhere, but as a general rule, we've never actually used them.
A
So they're not useful for published buildpacks that do primary app compilation; they're useful for extensions. When you have a meta buildpack, you can use that push mechanism to create something that conditionally swaps a dependency, by putting it before the meta buildpack as a thing that receives the provide and then either eats it up and does something, or says, no, I'm just going to pass it through and send it to the next thing.
A
So it's useful as an extension mechanism, more than for primary buildpacks that are offered from different vendors. But again, I kind of want to take that discussion and make it a different RFC. I would think it's a bigger change to remove that completely as part of this than just to preserve it in a way that doesn't add very much complexity to what's already being proposed. It goes in the same file as the build metadata.
A
Cool. Any questions about this? Anybody have feedback on the UX we put there? There was an original version of this that had three arguments going in instead; I was going to use it to try to drive out moving everything to environment variables. But I like this. This was actually Emily's idea, the UX, and I like it a lot, but I'm curious if others have thoughts.
A
I think we're going to get to everything. So, very last one: this was opened just right before this meeting, based on a conversation I had with Ben two hours ago. It proposes a simple change to profile.d, to allow profile.d to work without a shell, such that, if a profile.d script is in fact executable, it's not run with a shell; it's executed directly, and it outputs a list of key-value pairs in environment-variable format, with the equals sign, that get set into the environment of the app process by the launcher directly. This would allow things like the run image to have dynamic logic that runs before the main process starts.
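The launcher-side behavior being described could be sketched like this. This is a hypothetical illustration, not the lifecycle's actual implementation; the function names are made up. The launcher would execute the script directly, capture its standard output, and parse each `KEY=value` line into the process environment:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// parseEnvOutput parses "KEY=value" lines, as a shell-free profile.d
// script would print them, into a map of environment variables.
// Splits on the first '=' so values may themselves contain '='.
func parseEnvOutput(output string) (map[string]string, error) {
	env := map[string]string{}
	for _, line := range strings.Split(strings.TrimSpace(output), "\n") {
		if line == "" {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			return nil, fmt.Errorf("invalid line %q: expected KEY=value", line)
		}
		env[key] = value
	}
	return env, nil
}

// runProfileScript executes an executable profile.d script directly,
// without a shell, and returns the environment variables it emits.
func runProfileScript(path string) (map[string]string, error) {
	out, err := exec.Command(path).Output()
	if err != nil {
		return nil, err
	}
	return parseEnvOutput(string(out))
}

func main() {
	env, _ := parseEnvOutput("JAVA_TOOL_OPTIONS=-Xmx512m\nPORT=8080\n")
	fmt.Println(env["PORT"]) // prints "8080"
}
```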
A
There are some open questions about how far we want to go. If we use the executable bit, we're probably going to break a bunch of existing profile.d scripts. If we use the executable bit and also check whether it's a text file with a shebang at the beginning, then we're in some weird territory, but we'd probably break slightly fewer people's things. There are probably still a ton of profile.d scripts that just start with a shebang, because people don't recognize that sourcing ignores it.
A
B
Is there any precedent for something like this? I guess profile.d is part of POSIX or something; is there any prior art? I don't see a prior art section in here. I don't know, it just seems a little odd, and it makes me wonder if we just need a separate construct for it. Yeah, that'd be an argument.
B
I'd be open to that. I don't feel an intense requirement to overlap on top of profile.d; it might be one of the ways we could disambiguate between the two use cases. But I do want the underlying technology to configure the environment that the process will run in without needing a shell in order to do it. Yeah, that makes sense. Yeah.
C
A
A file, because we had a lot of stuff writing specific metadata to standard out before, and we moved back to files, so part of me wants to do that. But on the other hand, you might have a lot of these scripts; is the lifecycle going to create some temp file for each of them and pass it in as an argument? It'd be doable, but, you know...
B
Yeah, but those were all at build time. This is a run... oh wait, wait: is this build time, or is this just run time? The ones where we moved from standard out to files were all build time, right? And it was more about, oh, what about when you need it between these different phases and stuff like that. I don't think we have those same concerns here. So the concern...
B
A
B
I would prefer a file, even with the additional complexity. I think it disambiguates and gives us the logging, but I don't have a strong feeling in either direction.
A
My push... I was going to do a file originally, and the reason I felt weird about it is that this is at launch. So before your app launches, for every single profile.d script, you're going to have to create a file, fopen, write to the file; it's on disk. You know, you could really start to see a slowdown.
B
I've got to drop, but thanks for putting all these together, Steven. I'll take a look at them as soon as I can.
A
Awesome, no problem. So I'll probably keep standard out until someone tells me they hate it. Does that make sense?
B
Yeah, I mean, that's the other thing. The problem with us... yeah, I mean, that'd be even better. The big problem with forcing us to log to standard error isn't that it's incorrect; it's almost undoubtedly correct. But the way our platforms (Stephen's, specifically) log this, they put a big "ER" in front of the lines, and every single one of our users thinks that means an error has happened, and so, like, the...