From YouTube: Working Group: 2020-05-27
Description
* Offline Build Packages: https://github.com/buildpacks/rfcs/pull/81
A: So we are currently working through buttoning up the release for pack 0.11.0. Everything is pretty much set, and we're working out the final things for the release: the notes and a blog post that we'll be publishing in relation to some additional security and performance changes that we've made in this latest release. This should be done here, hopefully, within the next hour.
A: Give me a second here... there we go. So, offline buildpackages: we're about to talk about that, so I'll skip over it. Next on the list of RFCs is a draft titled non-breaking lifecycle support for pre-1.0 API minor versions. I think this is a hairy, controversial RFC that needs a lot more discussion.
B: We have the permissions to. Can you click that and see if you can convert it into a draft pull request? I think there's a link in the upper... it's actually a little bit lower down, below the reviewers.
A: It's a really well-defined, simple RFC that I think is a good thing to do, but, you know, maybe not the highest-priority thing in the entire world, I'll concede. It would be something you could take a look at, Pat; it touches a lot of code in the builder, yeah, the builder API and buildpacks. All right, the export report RFC, and we're done with that one.
A: Moving right on, then, to the RFC for custom CA certs. Was there a comment on there, or is that... So I think Xander is putting this down, and I don't know if we have somebody who's going to pick it up and run with it from here. I think it's kind of blocked on root buildpacks; the latest we decided is that they would prefer to have...
A: Nice, okay. If I recall, there's definitely some really good stuff in here. This is probably worth making an agenda item today. That's what I was going to suggest: Dan and Forrest, both of you are kind of authors of this, right? Forrest, I think... actually, Forrest, you were taking this one, Dan. You were more than helpful and built a thing for us. Do you want to keep chatting about this more today?
A: "Image exposes metadata for all layers that participated in a build." We had some great discussion of this at one of our meetings last week, but we decided that there's an issue with build reproducibility. It also relates to what we just talked about, where, if we put metadata about strictly build-time things on the image, then you sacrifice reproducibility when the tooling changes in insignificant ways. So I think we're waiting on Paul to look at it... I think we're waiting on, yeah, Paul to look at the proposal, but also for Tom.
A: That's a draft! Sorry, it's hard to tell. The draft RFC on app image extensions: is this going to be merged, go away, or something, in favor of root buildpacks? So what I'm going to ask about is: is this supposed to be a separate RFC, or is it the same RFC as that one? Well, I feel like we said both things at one point, and you asked Joe to just make a separate RFC that does similar things. This is the first RFC; this is the second RFC, but it's open as a different PR.
A: I would think the answer is that the other one should be closed in favor of this one, but I want Joe to say that; like, give it a little more time. Okay, I'd like to chat with Joe about it, to make sure that we definitely don't need anything from the previous one, and that it shouldn't be... I know I've seen proposals around root buildpacks, or an RFC that proposes other kinds of app extensions.
A: I think we don't want that. I think we just want to close the existing one, or the old one, in favor of the new one, but I just want to have a chat with him first. Okay, cool, so I think that's it for RFC review. I'm going to stop sharing, and everybody can move back to the doc. The next thing on the agenda, that everybody is very excited about, is offline buildpackages, and Dan, I'll let you take it from here.
C: Okay, so this RFC is kind of designed to solve a couple of problems that we've run into when packaging buildpacks for offline environments. Namely, as things stand right now, the way that we package dependencies for a buildpack, such as Ruby, is that we actually end up chucking them into the tarball. This kind of has some not-great side effects, meaning an online buildpack and a buildpack packaged with these offline dependencies are going to have different SHAs, so they're gonna...
C: So, basically, what this does at a high level is: it just takes all these dependencies that we would normally throw in a buildpack, pulls them out, and puts them in the buildpackage as layers. And as a result, now that they're layers in the buildpackage, the lifecycle has to have some mechanisms for handling them: making them available at build time, and figuring out which dependencies should be added to a build, so that you don't just get every offline dependency in your registry when you do a build with a given buildpack.
C: So I guess, any questions on that so far?
A: Less a question than a little bit of a clarifying point for folks. When Dan's talking about layers existing in the buildpackage and then moving to the app, part of the idea behind this is that a layer in the buildpackage can stay the same layer as it moves across to the application, without ever needing to get rebuilt. So if you want to install Ruby, it's an extremely performant way to get Ruby into the final image, where it might just be able to stay on the registry.
A: I had a quick clarifying question, I hope. So when we're talking about buildpackages, right, they exist in two forms. They exist in the image form, where it's in a registry somewhere and layers are, you know, kind of distributed in a different way, versus a, you know, .cnb file. So in this proposal, when you're talking about packaging the dependencies in the .cnb format specifically, what we're saying is that in that format it'll have all the dependencies embedded in it, so it'd potentially be a really large .cnb file.
A: Is that right? Exactly, okay. But the digest of the buildpack itself wouldn't change between the offline and the online one? So you can imagine extracting the online one out of the offline one and it still being usable, or hydrating the dependencies of an online one to make it an offline one afterwards, and all those operations should work, because the buildpack code itself doesn't change between the two.
C: Yeah, and I guess, going along with that, it's basically just an extension onto a buildpackage, so the last so-many layers are just going to be some arrangement of these extra dependencies that you want added in, right? I think this probably gets us something that Ben's wanted, which is your Java dependencies being able to be deduplicated on a registry, and not having an entire new set of them every time you push up a new buildpackage.
C: So I guess I can keep walking through this a little bit. Here's kind of the meat and potatoes of what's being proposed to you, which is: we want a way for stuff to end up on the file system.
B: The fact that it's getting positioned in this way means that two people, right, let's say, for example, Heroku and Cloud Foundry, both need the same version of the JDK. The way this is set up, you can never dedupe layers, since the paths that they're going to be at are different, yeah. Why not use the SHA-256 of, like, effectively the digest of the directory? Yeah.
C: So this is a great question, and I think we even had a long conversation about why we would want to do this. I think that, at the end of the day, the reason that we would like to do this is so that people who are building dependencies that your buildpack is going to use can know where they're going to live on the filesystem.
A: I guess, to be really clear about what that means: there's a big problem we didn't think about ahead of time when we talked about initially using SHA-256 in the past, which is that everything inside of that SHA-256 directory can't reference the SHA, because it's the SHA itself. So we would suddenly not be able to use absolute paths at all in those dependencies, which lots of things don't work really well without, right? It seems like a great answer, but it just doesn't work, yeah.
C: Actually, it's okay, I can kind of jump to that if you'd like me to. But okay, so we kind of then defined this interface for how these dependencies are going to be given to a buildpack, and it's another configuration file, package.toml, and here we can kind of look at the contents.
B: That's the thing that feels like a perfect illustration of the problem, right? Like, today, the way I believe you do it, and certainly the way I manage my dependencies, is: we can predict exactly what that's going to be named, right? I'm surprised that there's a need for this file if I know that my buildpack.toml describes a bunch of dependencies, right, because it's also going to need to describe where to download them. If the dependency is available, it describes exactly what the SHA is going to be.
B: I can predict, using nothing but my buildpack.toml, exactly what the path to that dependency is going to be, and it feels like this should be possible as well, right? My buildpack.toml describes all the different versions of node that I might have, and given that, I should be able to predict exactly what the file on the file system should look like, and not need to go look up everything else. Why isn't that true? Yeah.
C: So it's the same exact trade-off of whether we're going to use SHAs or not, right? Which is: if I have something that perfectly encapsulates all the bits for one of my dependencies, it's a perfect lookup mechanism, but it also means that I actually can't do this; I can't use absolute paths, right? So...
B: Right. So, given the way we define dependencies today, we define them with both an ID and a version number, right, which means that it maps to this dependency directory's ID and version number. You have to be able to resolve the dependency using nothing but the buildpack.toml in the first place, and given those two things, an ID and a version number, you should be able to predict where on the file system it will exist, use it if it does, and go download it if it doesn't.
A: There's also the question of whether a buildpack should have a way of knowing, without checking the file system, that something was downloaded ahead of time, like it was cached, versus needing to download it dynamically. That's maybe less of an issue, because it could just look it up at a predictable file system path, but because it's a shared dependency directory, it feels like there should be some contract that says: okay, this is your stuff.
B: I'll reserve, yeah, I'll reserve final judgment till I play with it, but that seems like an extra bit of overhead. Like, you're still going to need all the keys and version numbers anyway to look it up inside of whatever this file is; it feels like you could just go to the file system and look up that same information. I'd be much more likely to say the buildpack...
B: ...tomls should describe what dependencies they expect to be hanging around, and have the builder, or the lifecycle, synthesize it from that. Then, to add yet another file: we describe the dependency in buildpack.toml, we describe the dependency when we build the buildpackage, and we describe the dependency in whatever this other package.toml file is as well.
A: So, another question on that: how does the buildpack author understand where CNB deps is? Is that something that you could state as part of the contract: this is an absolute path that is always fixed to be this, and buildpacks are allowed to install anything in there that they claim to have provided? It feels pretty loose, but if we don't expose the fully qualified paths to the individual dependencies, do we need to expose, like, the root?
A: You would have maybe some duplication. You'd have to make a dependency specific to a buildpack, and then the SHA of that dependency on the registry would change if it needed to be used by a whole bunch of different buildpacks. And so, if you wanted to make node just a dependency that installs node, that restriction would suddenly make it so that you have to make one for every single buildpack that you want to be able to use node.
C: This also might be something that's a little too much detail for this as an RFC, but just, like, giving an optional user-facing interface that you could use to package these things up. Again, you have to specify the ID and version in buildpack.toml, but okay. And I guess kind of the final little piece here... I guess the final two pieces. So Terence, I think, asked a question very shortly after I posted this about...
A: Okay, let me... you can do it in a reproducible way, right? It could be that all layers over this size, or smaller than this, or something, get combined together, so you always get the same result. But I'd suggest having one user run into it first. In this case, we might, though... this might be the first time we do, because if you have a builder that has a lot of buildpacks on it with a lot of dependencies, you could be pushing that number.
C: Thank you. All right, I will do that. So, okay, then we have this kind of last little bit, which is some of the plumbing: how the lifecycle is able to determine which layers it should add to a build, given that a certain buildpack is running. And so these additional JSON fields just need to be added so that it can look this stuff up, and also kind of provide this file down here that has the, whoops, the ID and version information.
A: Now, this is the label on an image, the lifecycle image, so it necessarily needs some other mechanism to figure out which of these dependencies belong to which buildpacks inside the image itself. It's kind of a thing, okay. It seems like we're not sure we're going to keep package.toml anyway, right? I wonder if we could defer: like, decide if we're going to keep it first, and then, if we're going to keep it, figure out the details of how we're going to get the metadata from the image into the container.
A: If the things are domain-scoped, then, within a domain, if we produce an io.potato JRE, and then a customer or end user decides to have another io.potato JRE that is, you know, different and has different bits, then they did the wrong thing. They should have called it something else, right? They shouldn't have produced a dependency that they called io.potato JRE that doesn't have the checksum.
C: So I think that we can... this should hopefully work just by virtue of the fact that, if you package up a dependency in the right way, you should just be able to make a symlink to it from your layers path, right? And then, when your image gets exported, it should say: okay, all right, you have a dependency on this layer; it also needs to end up in the run image.
A: Another layer? No, just the layer directory itself, so, like, slash layers, slash buildpack ID, slash node, right? We know that that thing is special because it also has a node.toml, right? If the directory there is instead a symlink to the deps directory layer, then, instead of copying the contents and exporting them into that location, the exporter just preserves the symlink, and preserves the exact same layer from the buildpackage moving across into the final image. Okay.
B: Maybe we could get away with this. It does mean, then, that you are not sort of shipping a default dependency at that point, right? Like, one of the nice things we have today is: there is a tarball, it contains node; there is a tarball, it contains a JRE. We ship that to customers; customers can inspect it and know what it is, and we then get to add things in an env directory to it...
B: ...right after it has been untarred into the image, and those things can still match, right? They're sort of relative to the paths that are being passed in, and across multiple builds this is fine, and stuff like that. But, Terence, I think you might have a view that we are abusing that, right? Like, there's nothing...
B: If you know the layer that you've unzipped MRI into, you could write another layer that has all of the envs that point to that layer's path, right? You wouldn't have to actually code that into the dependency; we'd just do it. It sounds like we definitely do it all over the damn show, but we've just gotten lucky that no one else has a profile directory, or an env directory, that we're writing into, right? None of our dependencies...
A: Something to think about is, you know, where before we had tarballs of dependencies that, you know, don't have an absolute path in them, there's still a concept of a dependency artifact in here; it just looks like a layer blob instead of looking like a tgz, right? And so you can still treat that as the canonical dependency, and then, for modifications to it, you have two options, right?
B: I think, yeah, Terence was driving towards an idea. So when we untar the JRE, we write some environment variables into it, right? Should we actually write those environment variables into an env directory in the deps directory before it goes out? Since the paths are going to be, you know, potentially relative and things like that, we have a problem with how you refer to this directory; we do not currently have any agreement beyond slash workspace.
B: Is that always true? It's unclear to me if we have, or we should probably actually define this, whether these things have to be mounted at a consistent location between build time and launch. There are very few things in the spec where that is true today, beyond: layers will always be in the same place, and the workspace will always be in the same place. So...
A: In this case, the deps directory will be at the same place at build and launch time as it's moved across; its location will stay the same. And absolute paths within the deps directory don't have to reference it as if it were a buildpack layer; they can reference it from the deps directory, and it should work. Everything should just run.
A: So one thing that's interesting about this is that it kind of presupposes that you'll have a tgz ahead of time that's not rooted at the thing in package.toml; it adds a URI to a tgz, right? I wonder if the packaging interface should be more like a tool you can run on a directory to package it up as a dependency layer tgz, right, that's effectively an FS layer blob, and then that's what you include when you do your packaging. I wonder if that feels better, because it doesn't create two artifact formats.
B: The problem is, we have to open that up to almost every kind of dependency, right? Like, some of the dependencies certainly are tarballs; other ones are zips; other ones are, God, somebody had an rxe the other day. But I think the more insidious one is: we get jar files, and I don't actually want to untar the jar file.
A: You wouldn't have to unpack your jar file, right? You'd just run a package-dependency command on your jar file and provide the package-dependency command with an ID and a version, and it would convert your jar file into a tgz, or a .cnbd or something here, you know, an FS layer blob, or even a single FS layer blob wrapped in an OCI wrapper, right, that you could carry around as a dependency file.
B: You know, yeah, I do generally like that idea, and I do generally like this. Danny, you're going to get written feedback on this from me, because I desperately want this to be really, really good, and to start using it before the spec actually supports it. So be aware of that, but this is not me being negative on it, generally.
B: The other question I had: I liked the idea of having the untarred dependency on there and then just linking the layer in, regardless of how we sort of get around to doing that. But one thing I'm really, really worried about, where I'm concerned, is that we're effectively building two different code paths inside of buildpacks: one where the dependency comes to it already untarred, and another one where the dependency hasn't been mounted up and we have to go download it from the internet and do the untarring ourselves, yeah.
B: That's what we do, right? Like, we do the same thing. We basically treat the dependencies directory as attached no matter what: we're always going to get the JRE tarball, and we're always going to unzip the tarball. Whether or not we need to download that tarball is more interesting, right? Like, we can do some indirection there: if this is already in the directory, we're good, but if it's not, or the directory doesn't exist, create a temporary...
B: You know, create a new directory here and then untar it. And it feels like, in all the buildpack implementations, you basically end up with this conditional logic before you do anything involving layers, where you have to decide: am I getting an already-untarred thing, or am I going to download the thing and then have to unzip it? And it's something that is difficult to hide with abstraction, since an artifact could be anything, right? It could be a thing you want to open...
A: ...a tarball. But we definitely don't want to take a tgz and put it inside of another tgz and double-compress it, right? Yeah, and so, I think, definitely things to think about. I know we have no time left; I had one final comment: if you scroll up a little bit, on the build and build persistence, there's an implication. Sorry, I stopped at the bottom; you just scrolled past it.
A: Should a dependency... how should we determine if a dependency really should be added to a build? I think what's proposed there would involve rewriting the builder between the detect and the build step so it has fewer layers. I would just propose that we keep everything on the builder exactly the same, with all the dependencies from all the buildpacks, regardless of whether they were selected or not. I thought that was maybe a controversial opinion, so I wanted to throw it out there.
A: If you build a builder with all the offline buildpackages, you get all the dependencies from every single offline buildpackage in the deps directory. I think what this RFC says is that, between the detect and build phases, you have to generate a new ephemeral image without the dependencies that correspond to the buildpacks that aren't going to participate. I think that's too much work. Instead, we should just keep all the dependencies there; they're already on the node, so there's no extra downloading that happens.
A: It doesn't seem too dangerous to expose. You know, like, the buildpacks are already going to see the other buildpacks' dependencies; why not just let them see everything, and really have a strict contract that says they're only allowed to touch the ones that they're responsible for? It seems reasonable enough to me. Matthew?