From YouTube: Working Group: 2020-07-08
Description
* Cache Types: https://github.com/buildpacks/rfcs/pull/89
G: No, I think pretty much yes. You mentioned we made the release yesterday. Pack 0.12.0 has a lot of trusted-builder, additional trusted-builder commands, and some other changes, more or less, that enable the creator and creator configuration going back to the lifecycle. Is there an expected release, or timeframe, for the next upcoming release of lifecycle? Even a loose one?
F: I would like to see it in the next couple weeks, but it will depend on getting the spec releases out the door. I think the actual implementation of what goes into those spec releases will be fairly predictable, small chunks of work, so the spec is the wild card there. But I would like to see it in the next two to three weeks, if we get there.
G: Cool. So, moving on from cache types over to cache scope. There have been some additional conversations. Overall, there hasn't been a lot of pushback on the idea, but there are some concerns related to, I guess, how it would work, especially the case where a cache is shared across multiple buildpacks, or, sorry, I should say, multiple build runs. And honestly, not being a buildpack author per se on a day-to-day basis, I don't necessarily know exactly where I'd run into this; I'm trying to understand something like this, right.
F: It's interesting, though, because you might have a build that then no longer wants that cache layer, but you don't want to get rid of it, right. I like the overall idea, and I want us to have something like this, but the more I thought about it, the more I think there are some stranger questions. Like, imagine one of these shared layers is an m2 cache; I put this in a comment just today.
F: The way we restore cache layers is we copy them out of storage, whether it's a directory or an image, into the layers directory. Now, if a bunch of builds were all churning the same m2 cache, that might end up being a very large layer, and I feel like there would be performance concerns around copying all that data over. I'm wondering, if we wanted to do something like this, do we want these cache layers to behave differently, like being symlinked in or something, so that they have a different performance profile? That might also...
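The trade-off F is describing, copying a cached layer into the layers directory versus symlinking it in, can be sketched roughly as follows. This is a toy illustration, not the lifecycle's actual restore code; the helper names are invented.

```python
import os
import shutil
import tempfile

def restore_by_copy(cache_dir: str, layers_dir: str, name: str) -> str:
    """Materialize a cached layer by copying it into the layers directory
    (roughly the current behavior: cost grows with the size of the cache)."""
    dest = os.path.join(layers_dir, name)
    shutil.copytree(cache_dir, dest)
    return dest

def restore_by_symlink(cache_dir: str, layers_dir: str, name: str) -> str:
    """Materialize a cached layer as a symlink: constant-time regardless of
    size, but the layer now aliases shared mutable state."""
    dest = os.path.join(layers_dir, name)
    os.symlink(cache_dir, dest)
    return dest

# demo: same cache, two restore strategies
cache = tempfile.mkdtemp(prefix="m2-cache-")
with open(os.path.join(cache, "dep.jar"), "w") as f:
    f.write("jar bytes")
layers_a = tempfile.mkdtemp(prefix="layers-")
layers_b = tempfile.mkdtemp(prefix="layers-")
copied = restore_by_copy(cache, layers_a, "m2")
linked = restore_by_symlink(cache, layers_b, "m2")
print(os.path.isfile(os.path.join(copied, "dep.jar")))  # True
print(os.path.islink(linked))                           # True
```

The copy keeps builds isolated from each other; the symlink avoids moving a multi-gigabyte m2 cache on every build but gives every build a live handle on shared state, which is exactly the performance-versus-isolation tension discussed here.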
F: Well, if it's per image and you're recovering it from a cache, the dependencies you're recovering are only the ones that your app uses. If, instead, you have a cache that has the dependencies of every app that ever used the cache, it could get a lot bigger. It's already a little bit slow to copy layers across, but now you're copying a bunch of things across that you don't even need, because you have this shared layer.
G: So, a concrete example would be something like Spring. You have a Spring app and then you have, say, a JBoss app, with very different, big sets of dependencies, and we're saying now we're essentially sharing something that is twice the size of what typically would have been just one application's worth, or...
A: I guess, on those concerns, I feel like, though I agree that the performance concerns are more apparent in a shared cache, performance optimization is probably a potentially independent concern as well. You're going to have apps that also have this use case where, as has been said, unless the buildpack potentially prunes them, then over time, as you upgrade your app, you're going to have stale things in that cache, right, and it will grow. I have a 64...
G: And I wonder, at least for the use case of a pack user, if by default this was enabled and you shared stuff, but then you got into the scenario of, hey, you have this massively huge thing and it's slowing you down, right. I think that's maybe a day-two concern, or further down the line. At that point, you could have other options, like some flag that essentially breaks it apart.
B: If you think of the way this has always worked, like in CI systems and stuff like that, they never do anything fine-grained. If you find out that there is anything wrong with your cache, right, it's been polluted, it's been corrupted, it's grown too large, you just whack the cache and pay the penalty to start downloading from scratch again. I don't think we need to be particularly sophisticated on that particular thing. We already have a clear-cache flag; it could apply to this cache as well. Yeah.
F: Another concern that I haven't raised in this yet, but I'm wondering: is there ever a case where we want to allow these shared layers to be launch layers? It seems to me like there might be some danger, if different builds could put any old thing in there and then add it to an image. Should these be mutually exclusive?
E: I feel the same danger applies to cache layers. It's very risky: once you have to define what apps are allowed to share bits, and whether they're definitely coming from the same users, you're allowing a buildpack that's building one app, one that could have proprietary source code or credentials or proprietary language modules, to leak into another application rather than be isolated from it. But I don't think that applies only to launch layers.
B: The reason the cache layers are structured the way they are is that any time you share a cache, the overriding problem is there's a security vulnerability there: cache poisoning is a big deal. Cache layers sort of solve this by saying the only thing that can touch a cache is the same buildpack over time, and so, if there's any cache poisoning to be done, it will be done against itself. The 85 proposal is: well, we're using a shared cache, but you are explicitly opting into a file system that you already own.
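The isolation B describes, where only the same buildpack ever touches a given cache, amounts to keying cache storage on both the image and the buildpack. A minimal sketch of that idea (the key scheme is hypothetical, not the lifecycle's actual layout):

```python
import hashlib

def cache_layer_key(image_name: str, buildpack_id: str, layer: str) -> str:
    """Derive a cache location scoped to one image and one buildpack.
    Because the key includes the buildpack ID, a buildpack can only ever
    read back what it wrote itself, so any cache poisoning it could do
    is done against itself."""
    raw = f"{image_name}/{buildpack_id}/{layer}"
    return hashlib.sha256(raw.encode()).hexdigest()[:12]

a = cache_layer_key("registry/app:latest", "example/java", "m2")
b = cache_layer_key("registry/app:latest", "example/node", "m2")
print(a != b)  # True: different buildpacks never collide on the same cache
```

A shared cache in the 85 sense would drop the image component from the key, which is exactly what makes the explicit user opt-in matter.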
E: I think the explicitness that 85 requires from the end user helps tamp down the security threat of allowing layers to be shared between separate application builds, because 85 is just defined for pack, which is totally local to your machine, where you're a single user, right. It's not something where a buildpack can make a decision to share cache layers as a potentially unknown thing to the end user; you have control over exactly what is shared in that process.
E: That's a reason I like 85. 89, though, solves the problem of wanting to do a fast rebuild of a different app on a cloud platform without having to rehydrate the cache; 85 definitely doesn't cover that, right. And so there's definitely a performance benefit to it, I don't disagree with that, but it seems very risky to me. We've had problems on Cloud Foundry with that: there was one time where, due to the way buildpacks were cached...
B: We've seen the same thing in CI over the last decade, right. Jenkins agents used to just share the file system, and so you'd end up with an m2 cache, and someone would download a snapshot of something, and it would then include every single build from every other project on your entire Jenkins instance or Bamboo instance. We saw it as well with shared file systems across applications: there's nothing to stop me from writing...
F: I do think, the way this is written now, you don't have to enable the shared caching feature until you've actually passed a shared cache, and the platform is responsible for determining the scope of them. So each platform can make decisions about what it's comfortable with and expose that risk to the user appropriately. Yeah.
B: For what it's worth, the way the performance issue that I think this really squarely addresses has always been handled in enterprises around the Java ecosystem is that they just put a Squid proxy at the edge, and so every time somebody makes a call to Maven Central, they end up building this giant Maven Central cache, with, call it, a 24-hour expiration or something like that, on-prem. So transferring all the data the next time is incredibly fast, versus having to go and actually download it over the open Internet.
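The Squid-style setup B mentions is commonly wired up on the Maven side by pointing builds at an internal caching mirror, for example in `~/.m2/settings.xml` (the host name below is a placeholder, not a real endpoint):

```xml
<!-- ~/.m2/settings.xml: route Central traffic through an on-prem cache -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <name>On-prem Maven Central cache</name>
      <url>https://repo.internal.example.com/maven-central</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

This moves the sharing out of the build cache entirely: every build still resolves its own dependency set, but the expensive downloads are served from the local network.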
E: ...if a buildpack can ever prune it down to just the dependencies of one application, because it would never know that other applications might not be using them. So I think the pruning ability worries me, and the kind of differences in the build environments, things like buildpack version, also worry me, even if we just ignore the security problem. Also, for some written feedback, sort of logistical stuff: is there...
G: So I guess maybe I want to step into... well, no, I guess that makes sense, because right now they are segmented strictly based off of image name, right. And wouldn't this be the same issue... correct me if I'm wrong, but I think I've heard that we've had an issue where we want to use the same information, or the same cache, with a slightly different image name, right, whether you're talking about latest or a very specific versioned tag. It wouldn't...
E: I'd imagine that, were we to propose something to account for that, you know, there's either an old or new image, or there's a source and target image, and we'd key the cache on some aspect of the old or new image so that it would be difficult to run into that issue. I don't know what happens now, though, if you try to rebuild the app twice at the same time. Do we have race conditions already? Yeah.
B: That's possibly true. This is the single biggest thing, and I responded to it, I think, yeah, in the original 85 proposal. This is a big deal: build systems, not just Java build systems but all build systems generally, just can't reasonably concurrently use their own build caches. It's not a thing that they're designed to do, and they react indeterminately when it happens.
G: So, maybe to step back a little bit and talk about when a cache is being shared across multiple buildpacks, and maybe this is something to look into a little bit deeper, but I thought the layers themselves were isolated to the buildpacks themselves. Is that not true, and does that change in this RFC? I don't think I've made that proposal, so I'm not seeing where different versions of the buildpacks, or different buildpacks altogether, are accessing other buildpacks' information, I said.
E: ...sorry, you didn't want to blow away the layers anyway, because buildpacks are upgraded so much that you'd just never get caching if you didn't at least allow it to roll forward. But it's things like: now we'd have to support forward, back, forward, back, forward, back on the same layers. Sorry, theoretically.
E: We might want to blow away the cache right now when the buildpack version goes down, so that we're not creating a really tough compatibility problem for buildpack authors. That'd be pretty easy to do: just stamp the buildpack ID and version that generated the layer in the metadata, and then not recover it if the buildpack version is older. Yeah.
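E's suggestion, stamping layer metadata with the buildpack that generated it and refusing to restore on a mismatch, could look roughly like this. The field names are hypothetical, sketched for illustration only:

```python
def should_restore(layer_metadata: dict, current_id: str,
                   current_version: str) -> bool:
    """Only restore a cached layer if it was written by the same buildpack
    at the same version; otherwise fall back to a clean rebuild rather
    than supporting forward/back migrations on the same layer."""
    return (layer_metadata.get("buildpack_id") == current_id
            and layer_metadata.get("buildpack_version") == current_version)

meta = {"buildpack_id": "example/java", "buildpack_version": "1.2.0"}
print(should_restore(meta, "example/java", "1.2.0"))  # True
print(should_restore(meta, "example/java", "1.1.0"))  # False: version changed
```

Dropping the cache on any version change trades some cache hits for never having to make newer layer formats readable by older buildpacks, which is the compatibility problem being discussed.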
B: While it's not a great answer to this question, the way we handle this already today is that we're really, really careful about what goes into these caches, into the very few sort of cache-only layers or cache-plus-build layers that we do. So, for example, we fingerprint every single file, right, if we need to cache the result of a build.
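Fingerprinting every file, as B describes, can be done by hashing each file's path and contents into a single digest, so a cached build result is only reused when its inputs are bit-for-bit identical. A sketch (not the actual buildpack code):

```python
import hashlib
import os

def fingerprint_tree(root: str) -> str:
    """Hash every file (relative path plus contents) under root, walking
    in a deterministic order so the digest is stable across runs."""
    digest = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # make os.walk order deterministic
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            digest.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()

# demo: identical trees fingerprint identically; any file change shows up
import tempfile
root = tempfile.mkdtemp()
with open(os.path.join(root, "pom.xml"), "w") as f:
    f.write("<project/>")
print(fingerprint_tree(root) == fingerprint_tree(root))  # True
```

Comparing the digest before restoring is what lets a buildpack trust its own cache without trusting anything else that might have written to a shared location.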
G: So then, the security issues or concerns that we're bringing up: they're not specifically about different buildpacks having access to other buildpacks' caches. It's more about what a buildpack does for one application that then can, by mistake, be used by that same buildpack in a different context, for a different application. Is that the crux? Yeah.
B: I mean, "by mistake" is the charitable judgment of this; there's nothing to stop me. So, right now, if I managed to get a piece of code into your enterprise that could pollute Spring Core and add some vulnerability to it, that can only affect this application, right. Versus, if I can get one in and it's got a shared m2 cache, now I can pollute every single application. I can add that vulnerability to everything in your entire footprint, and that's, for all of the performance improvements you might get...
B: That is a really, really big security vulnerability, and it's why, legitimately, we stopped using shared build agents all over CI systems, right. Everything runs in a container these days, performance be damned, because that sort of shared security vulnerability of cache poisoning is so scary to people.
F: Maybe the solution for pack is that we could actually do something a lot simpler than what we're proposing here, which is just that you can name a cache, and then, if you're using two different image tags but, you know, they're really the same image, you can just pass the same cache in, like, either give it an ID...
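F's simpler idea, a user-named cache so two tags of the "same" image can share, boils down to resolving the cache from an explicit name when one is given and falling back to an image-derived default otherwise. A hypothetical sketch (pack's real cache naming may differ):

```python
import hashlib

def resolve_cache_name(image_ref: str, explicit_cache=None) -> str:
    """Pick the cache volume for a build. By default the cache is derived
    from the image reference, so two tags get separate caches; an
    explicitly named cache lets the user opt two tags into sharing one."""
    if explicit_cache:
        return f"pack-cache-{explicit_cache}"
    digest = hashlib.sha256(image_ref.encode()).hexdigest()[:10]
    return f"pack-cache-{digest}"

print(resolve_cache_name("myapp:v1") == resolve_cache_name("myapp:v2"))
# False: different tags default to separate caches
print(resolve_cache_name("myapp:v1", "shared")
      == resolve_cache_name("myapp:v2", "shared"))
# True: naming the cache explicitly opts both tags into one cache
```

The key property is that sharing only happens when the user asks for it by name, which keeps the opt-in explicit in the way discussed earlier.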
G: But that doesn't solve the use case that I have, as far as the motivation, right. I build a Java app, I build another Java app, and I don't want to see the Maven dependencies that are shared being downloaded again, without having to do anything. I know that's a pipe dream, maybe, but that's kind of the point; if it doesn't solve that, then I feel like it's just a lot more effort for the end user, and I'm not sure that that's really what I want to drive forward. It's...
E: This isn't a thing that Dockerfiles, you know, provide either, and people in containerized environments aren't used to completely automatically sharing cached language modules between different build containers, so I don't think that's necessarily an expectation of somebody coming into the ecosystem. And so the fact that we allow you to do it with only a pretty straightforward set of arguments to pass to pack seems like a big improvement on what everybody else is doing. At least... I get your point about...
E: It would be great if the user could be explicit but not have to be; maybe if there were a sort of easier way to present it as an option to them or something, I would definitely go down that road. I wanted to follow on from something that was mentioned earlier: even if you had a platform like kpack, where you could say this image is explicitly okay to share cache with this other image, I would still... or, you know, even if you made it very explicit...
E: ...when that sharing happens, I would still be worried about edge cases. For a buildpack author to now have to think about these: yes, I want to make this intentionally shared, but now I have to think about, oh, this buildpack, or a different version of it, is going to run with a different app and maybe write to the same location, right. I feel like there are a lot of edge cases there that suddenly make it harder for buildpack authors to not make mistakes, and that's another aspect.
C: Yeah, I do like your motivation here. I like it as a pack-centric, local-machine, quality-of-life sort of thing. I do think that maybe an explicit pack option would be nice, and then maybe it can also live in the pack config, where you have, say, a cache-scope variable of machine or app level by default. So the volumes that pack creates could be machine-level by default, and you just sort of deal with the fallout from that as you build from app to app.
G: I think we explored a little bit of that in, you know, prior conversation, and that led to this. And I guess, ultimately, we realized that the buildpacks are the best placed to know and identify what things would be appropriate to share with other things, and that's why we put that decision in this location, versus it being harder to identify at the platform level, right. And then, going back to maybe Steven's point, that expectation might not be there for other tools, because other, you know...
G: ...build tools don't necessarily have it. I mean, in the conversations that are here, people bring this up, right. People saying, hey, why is this downloading again? So people do have that expectation; they don't expect these things to re-download again, and maybe that's because they don't understand what's really happening behind the scenes, but in some regard I don't think that they should, right. So, with all that in mind, I am curious about the, you know, proxy. I think that'd be an interesting solution that would be pack-specific.
E: For those issues that users have brought up: are there instances where users are expecting different apps, different source code, to share dependencies, or is this the tagging problem, where they want to make a new app with a new tag, and they see everything get re-downloaded, and they don't understand that you have to use the same tag, which is admittedly a confusing thing, you know, because...
G: And then this other person here, right, they kind of pinpoint a couple of different things, but this was one of them that I read into here. So, you know: if I do app 1 v1 and then app 2 v1, they use the same code engine with a few lines of difference, why do I have to recompile everything? He gets a little bit more abstract, but this is kind of what I'm gathering from it as well.
E: In case one, he was talking about different versions of the same app, right. In case two, he does say same code engine with a few lines of difference, so it kind of implies it's the same app and he's building it a little bit differently. But that's also not clearly the same thing: different commits, different build parameters, to generate a different image off the same source code. You know, I don't feel as bad about that, right.
G: And I think I'm putting myself in the shoes of the app developer, right, and the experience that at least a few people have mentioned, from what I hope is their perspective. I guess we could dive deeper by asking them exactly what they're trying to do, but this is what I would expect as well, as an app developer, without knowing exactly what's going on behind the scenes.
E: I guess here's an example of the type of interface that I'm maybe more comfortable with, one that would solve both the problems mentioned there. Say, in project.toml, you have an ID and a version, right. If your project ID doesn't change, you always get the same cache, and that way, as a user, you're explicitly saying: nope, this is the same source code, even if you're picking a different image location. And so you could have app 1 and app 2 in case two have the same...
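For reference, the project descriptor E is referring to (project.toml) includes a `[project]` table with `id` and `version` fields that a cache key could be based on; keying the cache on the ID, as proposed, might look like:

```toml
# project.toml: a stable project ID the cache could be keyed on,
# independent of which image tag is being built
[project]
id = "com.example.my-app"
version = "1.0.0"
```

As long as `id` stays the same, rebuilds under any image name would map to one cache, making the "this is the same source code" assertion explicit and user-controlled.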
G: And I guess I maybe also want to go back to the dependency cache poisoning thing, especially with m2, right. So when we talk about the project ID not changing meaning you could reuse the same cache, I then think about m2, and we're essentially, at least from pack's perspective, in the same domain: local machine, development, building stuff. So the same problems that would appear for pack and this feature would also apply to m2, right, or Maven in general. Is that not true? And is...
B: It would be true, but the big difference is that in 85 a user explicitly opts into that security vulnerability, right. They have decided that they feel enough in control to do this thing. Having it as an automated feature means that a user isn't, every time they run a command, signing off that they understand the security risks of what they're doing, and that is, in truth, in absolute direct opposition to the goal that you have.
B: Me also. Do you mind if I do a quick share?
B: I'm not sure I'd do quite that documentation, but I would probably push it off. The environment variables, I think, are okay there; I would push the documentation about sharing the volume out, because, again, in the default case I don't want any user to do it, right. Even with pack, I don't think everybody should; I think you should just pay the penalty of downloading. But in the case where you want to, you are highly motivated to read some documentation.
E: I do think, if you're talking about the same source code, though, and the thing that's varying is just the app tag or some environment variables you're feeding it at the beginning, then, looking through the GitHub issues, that seems like the general trend of what people are asking for, and I'm all for making that very seamless and providing a bunch of different ways to ensure that you do get the cache back when you're building the same thing. Yeah.
E: It could look really different for, you know, reusing launch layers of the previous version of the app versus reusing just build-time cache layers, too. We could have configurability in that, so that you don't pay the download time twice, but there's no chance that you reuse a version of the binary that was built with a different flag that's in the image, right.
G: I guess I'm having trouble understanding, or picturing, what that would look like. So I don't know if anybody has ideas where they would be able to propose an RFC for a solution to that, but I feel like the demand has definitely been there from a user's perspective, where they just want things to perform faster, right. I...