From YouTube: Working Group: 2020-06-04
Description
* Read/Write Platform Volume Mount: https://github.com/buildpacks/rfcs/pull/85
* Root Buildpacks: https://github.com/buildpacks/rfcs/pull/77
You know, that's not it. I had everything actually built yesterday and ready to go, effectively. This is a request to be able to mount a read/write filesystem into the container in some way. The use case behind this, as we've seen, is a couple of requests from people who specifically are trying to do Maven or Gradle builds; it seems pretty common in the Java world, but it could just as easily be that you needed to get access to a big GOPATH or something like that, something that's local on your machine and is not necessarily in the...
D
It's certainly not something that you want to share across a broader sort of multi-tenant system, but it allows you to effectively have pre-populated build caches. That's the strongest use case for something like this, but there may be others beyond that.
There's danger here: read/write build systems lead to shared, or, sorry, read/write file systems lead to shared file systems. I'm not sure we should be particularly strong on policing how people want to use this feature, but I think this is a very dangerous thing.
A
My proposal is basically the same thing, except: don't contractualize it to the platform directory. Just enable arbitrary read/write volume mounts wherever people want during the build process, and pack, or other platforms, would print a warning that says, like, "danger zone: you have to know what you're doing if you're going to do this." That gives you as much flexibility as you want to share an .m2 cache, Gradle things, whatever. It just lets you say: okay, this directory is going to persist through different builds.
E
I think the .m2 cache use case made sense, but from a buildpack author's perspective: say the user mounts this thing, where do you expect that to go? Does the buildpack have to specifically know that it can use a read/write mount, or is the idea that this is replicating work that the buildpack has already done somewhere, and it just assumes that this came back from some... yeah?
D
Yeah, so the way the Maven buildpack, the Paketo Maven buildpack, is written today, it says: if ~/.m2 doesn't exist, create a new cache = true layer and symlink it to that location; if it already exists, don't do anything at all. And another edition might say: if /platform/unsafe/maven exists, link it to ~/.m2 for me. And then the third option, as Steven says, is: don't do any of that, let people just mount that ~/.m2 themselves, in which case you're back in the very first situation.
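A minimal sketch of that defensive logic, assuming a bash buildpack script and a CNB-style layers directory; the layer name maven-cache and the layer-metadata keys are illustrative, not the actual Paketo implementation:

```shell
#!/usr/bin/env bash
# Sketch of the defensive ~/.m2 handling described above (illustrative,
# not the real Paketo Maven buildpack).
set -euo pipefail

setup_m2_cache() {
  local layers_dir="$1"          # CNB layers directory passed to bin/build
  local m2_dir="${HOME}/.m2"

  if [ -e "${m2_dir}" ]; then
    # Something (a user mount, a previous buildpack) already provides
    # ~/.m2, so do nothing at all.
    return 0
  fi

  # Otherwise create a cache = true layer and symlink ~/.m2 into it.
  local layer="${layers_dir}/maven-cache"
  mkdir -p "${layer}"
  printf '[types]\ncache = true\n' > "${layer}.toml"
  ln -s "${layer}" "${m2_dir}"
}
```

Either way the build that follows sees a writable ~/.m2; the only difference is whether its contents come from the layer cache or from something already mounted there.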
B
I agree with the spirit of Steven's suggestion, and it does work in the Maven case. I was wondering if it introduces more potential for brittle logic in the buildpacks, because they can't check in one specific place to see that they're in this case, where someone is providing something that they would otherwise provide.
A
From my perspective, that's intentional. A buildpack shouldn't assume that there's ever going to be information that's shared between builds of different applications. And if you, as a platform owner or an individual developer, are saying, "okay, I know this build tool, I know how it works, I know this is just a cache directory, and for these apps it's safe to share," then you can explicitly say, "share these things." But a buildpack should never know about, or make an assumption that there is, that kind of directory.
D
Yeah, right: the Maven buildpack knows to look at ~/.m2, the Gradle buildpack knows to look at ~/.gradle in the home directory. But I think the logic actually still sort of has to be defensive, right? Like, imagine this feature; the feature we've already implemented today, from a defensive point of view, doesn't just blindly create a new cache layer and create a symlink, right? The very first thing it should prudently do is determine whether or not this thing already exists.
D
If it exists, nothing happens, right? And so adding the ability to mount to any location doesn't change that logic. It doesn't even change an expectation, from the buildpack's perspective, about whether or not there's a read/write filesystem there. It basically says: use something if it exists, and if it doesn't, here's how to create one.
D
I think it does an existence test before ever passing a directory as an argument. I think there's a more questionable case for places that don't have canonical locations, right? Like: I just need a read/write cache and I don't care what it is, because my tool doesn't care; it's just a value that I'm going to pass in there. If there is no canonical location, we end up with variability, right? It could be anything, anywhere, anytime. But for things...
D
Because I don't think even having, like, a single unified location actually solves the problem for the canonical cache location, right? You're still going to check to see if ~/.m2 exists, and you're going to check in the unsafe directory to see if that directory exists, and if those two things aren't true, then you go off and you create your cache layer. So you've in fact just introduced a second check that you might not otherwise need to do if you could mount to any location you wanted to.
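The extra check being described might look like this: with a canonical unsafe location, the buildpack now tests two places before falling back to its own layer. The path /platform/unsafe/maven and the layer name are hypothetical, taken from the example above:

```shell
#!/usr/bin/env bash
# Sketch of the doubled existence check (paths are hypothetical).
set -euo pipefail

link_maven_cache() {
  local unsafe_dir="$1" layers_dir="$2"
  local m2_dir="${HOME}/.m2"

  if [ -e "${m2_dir}" ]; then
    return 0                               # first check: already provided
  elif [ -e "${unsafe_dir}" ]; then
    ln -s "${unsafe_dir}" "${m2_dir}"      # second check, only needed
                                           # because of the convention
  else
    mkdir -p "${layers_dir}/maven-cache"   # fall back to a cache layer
    ln -s "${layers_dir}/maven-cache" "${m2_dir}"
  fi
}
```

If users could instead mount directly to ~/.m2, the elif branch, and with it the second check, would disappear.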
F
Yeah, I think that's something that worries me overall about how it's currently specified, and we could dive into that a little bit more. But just, you know, the general "hey, there's no real contract, you could write anywhere": whether buildpacks are actually going to leverage whatever one specific platform sets up, right? That contract between said platform, like pack, and then that buildpack seems very brittle, unless what we're saying is we want to try to create some sort of convention first and then let that convention drive things out.
D
Isn't the per-tool convention already being driven out, right? I don't think the Maven buildpack, the Gradle buildpack, the sbt buildpack, those specific ones, are particularly brittle, because the convention they use, .m2, .gradle, .sbt, already exists in the home directory. Now, if I run on pack, and pack has this magic feature that allows me to mount something in, that's great; but if I run on kpack and it doesn't, well, my buildpack...
D
...logic hasn't changed at all. If I go on to Heroku, nothing changes there either, right? So, like, I totally get a convention, but given that I've viewed this particular feature in the context of big tools that already exist in the world, tools that already determine what that convention is, I think that's where we get the safety from.
A
I have an additional, sort of different direction of how I've been thinking about this one, too. Like, I agree with this particular need and I think we should solve the problem, and the reason I'm proposing that it be a little looser of a solution to the problem than the way we've done...
A
...things in the past is to address other feedback we've gotten, which is that by being so prescriptive, by being a little bit inflexible about what you're allowed to do during the build process, we cut off a lot of users early on who want to do things like mount an SSH agent. And, right, I've traditionally been coming from the Cloud Foundry buildpacks background, a very controlled...
A
...environment, and that is probably not to the benefit of our goal of, you know, encouraging widespread adoption. Sometimes, if you want to mount the SSH agent socket into your container during build, it's not going to cause you any problem; it's just going to make things work, right? And so I'd like to be open to that. And so enabling that arbitrary read/write mount into the container, in this case, you know, just for platforms that opt in to be able to implement it.
A
It enables a lot of use cases. Then, if we need to contractualize things on top of that later and refine, right, based on the feedback we get from just enabling that feature: I'm not saying we shouldn't do that. In this case I'm also worried about contractualizing something that's, you know, by necessity unsafe, right, early on, when there's a solution where we could make a small change that could be impactful elsewhere and that could give us feedback.
D
I think about your sort of point in the comment itself, about, well, what sort of is the point of putting this thing in unsafe? You can imagine, in the SSH agent example, the very first thing you'd do is write a buildpack; you'd have to write a buildpack where the very first thing it did was notice...
D
...this thing was in here and symlink it to the appropriate location. And given that all of these things are just sort of symlinks or copies or what-have-you to the unsafe location, or to places outside of the unsafe directory in the first place, adding that extra step, requiring the overhead of another buildpack to do the thing you'd intended to do and could have done with, you know, sort of a straight Docker volume mount, feels punitive.
A
Even in the .m2 case, when a user asks, like, "oh, I want to share my local .m2 cache, you know, in my home directory, with the apps I'm building, so it doesn't take forever on each build," if the answer is just, "yeah, mount local .m2 to remote .m2 in the container," that's so transparent, right? That feels simple in a way, too. I worry that if we move too far away from that, with, you know, contractualization too early, it's, you know, it's risky. Yeah.
E
I mean, I guess some of the danger, though, is, if you do contractualize, you can break stuff that used to work, because you're re-contractualizing. So that's always a danger; not that that isn't the right path, but that, I think, is the trade-off, right? The cost of doing that is that, if you choose to contractualize something, and, because it is so flexible, people are doing it, it becomes a breaking change, which is potentially something to take into account.
E
I guess my other concern, related to Steven's kind of meta point about experimenting, doing things in pack versus other platforms not adopting them, is that people expect things to function because they function in pack, and then it doesn't work when you use another platform. Not that we shouldn't do that generally, but I think that's a thing that also needs to be considered: just, like, the gulf between people's expectations in pack versus, like, actually the rest of the platforms.
A
I don't think we should allow all kinds of features, right, that, you know, go outside of the spec during the build process; like, I agree with that guiding principle. But in this case, because it's about doing something that's relatively specific to a local machine, like mounting a directory through, that's just sharing it, right? Just sharing information into the application, or, you know, into the container.
A
You know, it doesn't feel like it's allowing you to do something that would break your build later when you tried to do it in the cloud, right? You understand that your local SSH agent isn't there in the cloud and that you have to account for that some other way, right? You understand that if you want to mount your .m2 cache into the container in the cloud, then you have to use a solution...
A
...that's provided by your platform in order to do that, right? You're not going to be able to use your local machine's .m2 cache and, you know, that sort of thing. So I totally agree; I think in this case it's safe enough that I'm not as worried about it. I think the benefit of the flexibility, in my mind, outweighs, you know, what we'll lose.
E
I mean, as I've said, like, I've seen people where it works locally but doesn't in production, like, not production, but remote builds: like, "oh, I tried to remote-build this thing, but it doesn't work," and it turns out it's because of, like, some local files, some difference, like, oh, they happened to have this thing vendored or whatever. It's probably not likely with .m2, but there are other build systems where, for sure, it's true: like, more historically, Node builds for sure. Just like...
D
Yeah, I don't want to over-index on that a lot, because, like, no matter what we do with the file system, we regularly see this around networks and proxy configurations, right? Like, there's always going to be a bit of this works-on-my-machine-ism that we can never completely eliminate, and this particular trade-off feels pretty good to me.
A
It's worth noting that part of what I'm proposing is that when you do specify that you want to do this mount, it's on the command line when you run pack build; it's not something hidden in configuration. So if it's different in production, like, you know what command you ran locally, you know that that thing isn't there in the cloud, and hopefully it's very apparent that, you know, if .m2 builds are suddenly slow in production: "oh, it's because I don't have that mount there." That makes sense.
B
Maybe we can just warn for some of those, like, "are you sure you want to do this?" Because, like, every now and then I want to debug something, and pack makes it very hard to play with its inner workings in some ways, because it's stopping me from doing dumb things; but sometimes I want to do dumb things.
F
I mean, I definitely think so, more or less; maybe there are things I have to research first. It seems like, you know, prescriptively, a lot of the languages use the home directory, and so I'm thinking about that, right? Thinking about, like, really we're trying to get to .m2, right? And really, like, from a user's perspective, I just want to be able to cache all these dependencies that the application is going to download, right?
F
So do we give them some, like, obscure, "hey, just mount to this very random thing that your buildpack, you know, is going to be putting them in," or is it more, yeah, I guess, specific to the use case, where it says "cache," right, and then we kind of figure out exactly where that should go through some other means? But I think I definitely have to think about it more. Yeah.
D
I like the idea of even Steven's suggestion of just using the Docker syntax, right? Like, if you were doing, you know, some sort of Docker-y kind of thing, you just use -v, pointing from mine to theirs. And if they chose to mount an entire home directory in order to get access to their .m2, you know, more power to them; or they could surgically link in, you know, my .m2 to its .m2. What's...
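The Docker-style syntax being floated here might read like this; the --volume flag shape simply mirrors docker's -v <host>:<container> form and is a sketch of the proposal, not a settled pack interface (the in-container path /home/cnb/.m2 is also an assumption):

```shell
# Surgically share only the local Maven cache into the build container.
# Flag shape is illustrative of the Docker-style proposal.
m2_mount="${HOME}/.m2:/home/cnb/.m2"

# Plain docker equivalent:   docker run -v "${m2_mount}" ...
# Proposed pack spelling:
echo pack build my-app --volume "${m2_mount}"
```

Mounting the whole home directory instead would just widen the host side of that same host:container pair.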
F
You know, on the other thing... sorry, Steven, were you going to say something? No, no, no, please. Okay, I was going to say: the only other thing is that we currently already have in pack, at least, the volume parameter, which does restrict mounts and maps them over to the platform directory. So there'd be some backwards-compatibility consequences that we might have to overtake there as well.
C
I added a little bit more detail, like the flow for what would happen during each phase and whether a new phase is required. I haven't quite finished that, but it got me thinking about some things that I think are important. I do think this proposal would require a new phase, because at some point you have to load the run image and run the buildpack on top of the run image.
C
The root build on top of the run image. But the interesting thing about that is, I would propose it coming after the normal build phase, which means you could load the workspace directory, and then the root buildpacks could make decisions, or whatever, based on the contents of, like, the app in the run image: so, like, whatever compiled code we have, or whatever it may be, or just dependencies that have been downloaded. Yeah. So I'm curious if anybody has thoughts on that. I also uncovered some interesting things, like with the snapshot layer.
C
If you load the cache and then take a snapshot and it's empty, you don't want to, like, blow away the layer, right? So there are some interesting things that need to happen, like basically merging the snapshot, I guess. Anyway, none of that felt like it needed to change anything about the current proposal; it felt like some interesting details that need to be worked out. So, other than that, yeah, I'm not sure what else we want to talk about. I have...
A
A couple things: are you talking about the build process happening, or the process of extending the images; at what stage does that happen? It seems like that could happen after detect, maybe, but before build.
A
But it's, like, details; I agree. I just wanted to understand that statement, because the other buildpacks need the things that the root buildpacks are going to install on the build image before they could start building, right?
B
The idea was... I think one of the issues I was thinking about before with this is that, let's say you have a buildpack that is requiring a package that a root buildpack provides, like an operating-system-level package. The mechanism through which it's declaring that it needs it is totally separate, and it doesn't interact with the mixin mechanism, right?
A
I want to propose an alternative that's very similar, but tell me if you think this is better or worse. Assuming the base image has all the metadata about what packages are installed already, I don't see a strong reason to duplicate that in the bill of materials. An alternative to explicitly providing all 500 packages in the base image back in the build plan again, for other buildpacks to, you know, match a requirement that another buildpack has, could be that a buildpack would only require a package from a root buildpack.
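In build-plan terms, that "require only" alternative might look roughly like this; the package name and the metadata key are illustrative assumptions, not part of the spec:

```toml
# Hypothetical Build Plan entry: an ordinary buildpack requires an
# OS-level package that only a root buildpack would provide, without
# the base image's full package list ever entering the plan.
[[requires]]
name = "libsqlite3-0"

[requires.metadata]
kind = "os-package"   # assumed marker, not spec
```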
B
It seems like it puts some more of the complexity in the buildpacks, like there are two different mechanisms, but, like, maybe that's fine, because I'm not sure my idea is very simple either. It seems like, with the build plan, there's a lot of potential power, but it gets confusing, and it's hard to change later.
A
I had one question in here: I wanted to chat about when we allow root buildpacks to be used, like where we allow them to be specified. So there was a line in here that said: if the root buildpack was included in the builder's buildpacks or an app's project.toml, the build will fail, and the example underneath is putting a root buildpack in project.toml. I'm okay with the restrictions.
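The rejected example being referenced would look roughly like this; the buildpack ID is made up:

```toml
# Hypothetical project.toml that the restriction rejects: a root
# buildpack listed directly among the app's buildpacks.
[[build.buildpacks]]
id = "example/apt-root-buildpack"   # a root buildpack here fails the build
version = "0.0.1"
```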
A
Actually, I like this restriction. I think it's okay if an app depends on a root buildpack, and you intended for an app to be able to depend on a root buildpack. I kind of liked this restriction that buildpackages shouldn't be able to include root buildpacks, and the reason is I fear a world where everybody in the world has a buildpack...
A
...that starts with the apt buildpack and installs a bunch of packages, and then we, like, totally negate that, you know. And the simple way to avoid that is just: buildpackages can't; there's no way to distribute a buildpack that automatically has apt functionality. You can tell people, "you should use this buildpack with, you know, the apt buildpack, and install these packages," but...
E
A buildpackage may not actually do anything if it detects that the mixin already exists, right? But, like, now I can't distribute this thing reliably across potentially different stacks and things, based on mixin IDs; it goes back to Emily's point. Yeah, exactly, it kind of goes back to the thing we were talking about before. Like, that restriction really hurts if I'm just trying to create a buildpackage that can be more portable: like, "ideally I don't want to run this, but if I have to, I will" kind of thing.
A
So my theory is that if the buildpack itself needs the package, then you shouldn't be allowed to use it with a root buildpack; you should figure out how to get an image that has that package in it. And the apt functionality is useful for supporting application logic that needs the packages. Like, the problem you're talking about, I see as a feature.
A
If it's a buildpack dependency, then you should start with a stack, or create a stack, that has those packages in it as mixins ahead of time. If it's an app dependency, that's when it's okay to take the performance hit, because the individual developer is deciding to take the performance hit for their app; they're not a buildpack developer deciding to apply a performance hit to a large number of consumers.
B
Part of me was wondering: can we just sort of fold all of mixins, basically, into buildpacks and the build plan, rather than having a second thing? Because I feel like there's this litter of metadata without necessarily functionality attached to it. Like, I know most of the stacks that we have didn't have all the mixins on them for the longest time; I'm not sure how thoroughly all the buildpacks are declaring all their requirements, and it took a long time for us to get...
A
...them providing value, at least for the Paketo buildpacks case. Because there are buildpacks like PHP; PHP, I think, is the only case where it actually does need a ton of C libraries and support from the operating system, and so we use the mixin contract to prevent the PHP buildpacks from running on this, you know, stack that doesn't have very much on it. And I wouldn't want something to take those buildpacks and combine them with a root buildpack that installs 50 C libraries, right?
A
Consider the kpack case, right: if this gets pushed, things that can be provided by root buildpacks at build time, rebasing, you know, 50 apps is just going to stop working, right? If any of those apps depend on operating system packages, that rebase process becomes "run a container that runs apt-get."
G
And slow. But back to Steven, your point a bit ago about, like, the distinction between this being an app developer concern and, like, a buildpack concern; I'm sorry, I understand the philosophy behind that, and it sounds pretty sound, but I feel like end users will not be able to see that distinction. It'll be very frustrating to them; like, it doesn't really matter to them that, "oh, this is a buildpack need, so it shouldn't come from an app buildpack." They want ImageMagick, and it just so happens that they want it because the buildpack needs it.
E
So, like, we have an example for Python: Python has a dependency on, like, SQLite, because it's part of the standard library. And so right now, like, for reasons, SQLite is not on the run stack image, so right now, like, the Python buildpack has to, like, vendor their own custom whatever thing, and it kind of sucks; it's caused a lot of, like, churn and pain. And basically, like, it would be nice if the buildpack author could just, like, leverage something else to actually handle system packages.
E
I guess, like, the alternative to kind of what Matt is saying is that, if you can't combine them into a buildpackage, the alternative is, like, the Python buildpack just says: in order to use this buildpack, you need to add this root buildpack to your project.toml, right, to make this thing work.
A
But in the Heroku case, right, where you're saying your Python binary always needs SQLite to be installed, and no Python apps will run at all if SQLite isn't installed: would you rather have a process that runs before every Python build that does apt-get install sqlite, and the user has to wait for SQLite to install, has to wait for the apt databases to update, and so forth? That's...
A
But is that pretty fast, right? Running apt-get update, pulling down the latest apt package cache, and then running apt-get install and installing SQLite and its dependencies: that's going to add, like, a minute-plus to every single build. Wouldn't it make more sense, just for that type of dependency that affects all the apps, because it's a core dependency of the buildpack, for it to be in the base image already? So, like, for Python apps you use a different stack. It seems like it's really beneficial to the end user.
G
Like, think about that: they're just trying to get their image to build, and they're coming from, perhaps, the Docker world, where they're already doing an apt-get in an image to get it. So the perceived cost isn't going to be a major driver in their decision about which stack to choose. If...
A
People are going to make a Ruby buildpack that installs the system Ruby, alright? People are going to... This is the big can of worms we're opening, the one we've always said we didn't want to open from the beginning: if a buildpack can arbitrarily say, "I want these operating system packages to be installed," then every buildpack in the world is going to start using operating system packages, we're going to lose the ability to rebase apps, and everything is going to be very slow, right?
F
Ultimately, you'll get that for the individual, you know, developer that's maybe trying to get their stuff to just work; it would just work, to some extent. But the buildpack providers and, you know, builder providers, they're probably going to go the more performant way and figure out how that works for them, because, ultimately, that's what they're competing against, right?
A
I think, if our goal is to grow a large ecosystem of buildpacks, and any buildpack can install operating system packages, essentially, right; if we allow any buildpack to say, "these are the packages I need," and then builders have an apt buildpack on them and install it, or we allow a root buildpack to be part of that builder, then immediately... like, sure, the Paketo or the Heroku Ruby buildpack, you know, might be really good and install a lot of dependencies itself and not use that buildpack...
G
I don't think, at least, I'm saying, like, we need to provide the ability for any arbitrary buildpack to install apt packages. I just think, without a clear system of interoperability between where buildpacks specify mixins and the apt buildpacks, it's going to be very frustrating to end users, because they are not the experts that we are and won't understand the distinction between what's provided on the stack and what apt buildpacks can provide.
F
My last two cents: like, we just talked about this philosophical change, or shift, right, where we wanted to be a little bit more permissive and let people innovate freely, and then only apply restrictions where it really made sense. And I feel like this is one of those cases where we're still being maybe too restrictive and preventing things from, I guess, maybe wider adoption. You know, again, granted, the risk is there and it's understood, but I could say the same thing about the volumes, right, and we were okay with that...