From YouTube: Working Group: 2021-06-24
Description
No description was provided for this meeting.
B: I, I guess I can give an update on the lifecycle side; we're nearing completion of...
B: I put it as a topic for today just, you know, to quickly kind of go over what is currently in Platform 0.7, consequently being targeted for the next milestone on the lifecycle, and some changes that make sense given the level of uncertainty around stack packs at the moment.
D: Oh, it was muted, apologies. I could speak a little bit on the platform side for pack. It doesn't look like we have a lot of features ready to be shipped out, so I think we're going to need a little bit of coordination or discussion in the platform sub-team to determine, you know, exactly what we're going to try to release, given that our release is scheduled for next week, and so we should be going feature-complete this week.
D: I know there have been some discussions about project.toml as a pretty, you know, requested feature recently, so that might be something we might try to squeeze in and get out.
A: Going once, going twice... okay, no? We'll jump right into our weekly RFC review.
A: The first thing, it looks like, is just dependabot, so moving on to "add issues generation". As per this, it isn't an RFC, right; this is just some proposed changes.
A
Part
of
the
automation,
so
the
first
one
is
remote.
Shell,
specific
logic.
A: Great. I also put "remove stacks" on the agenda at the end, and although I think this is going to be a long discussion, probably... The next one is "add proposal for shared layers directory".
A: That's not on the agenda, I think. Do we have Sam here, or...?
A: Sounds good; it's not an FCP or anything. Okay, cool. "Make build layers read-only", another one for Sam. Should we also skip this one?
A: Seems like there's not too much activity here recently; this one's blocked by...
D: So, let's see. I put this on the agenda based on some discussions that have been ongoing about the future of the project descriptor, and just wanted to bring it up in this forum to see if we could either brainstorm a little bit, discuss it more in depth, or simply just determine what the next steps should be, because it does seem like there are some pretty big issues with the project descriptor that we want to address sooner rather than later.
D: So, based on my recollection of what's been happening in these discussions, we have concerns about how the platforms should incorporate and support the project descriptor. One of them is basically the portability aspect of project.toml, and what the expected output of independent platforms is when they don't support everything within the project descriptor, right?
D: So a very basic example would be pack using the same project descriptor as something like kpack, and, you know, certain parts of the project descriptor not being supported, where the output image is not the right image, or not the same image in a sense, right? So think of something where buildpacks can't be imported on, let's say, kpack for some reason, right, and so you can't use buildpacks that are, say, inline buildpacks, right, that are coming up, or something like that. And so therefore that step is skipped, and that buildpack isn't executed, and the build just, you know, continues to work, but the image that you get differs on a platform that doesn't support something like inline buildpacks.
B: So for that one, I guess: are there cases where... so, the case you described sounded like a case where it actually should fail. Like, if you have a project.toml that's specifying some buildpack it can't use, I would kind of expect kpack not to ignore it and to fail. So are there other cases where it's more like it shouldn't fail, it should actually do something, but it does a different thing?
D: I know we've talked about builder as another construct in the project descriptor. kpack, my understanding is, doesn't support the builder key parameter, because the builders are built into the cluster, so the way that they are defined is very different.
A: Couldn't kpack just fail if it sees project.toml? And, like, the builder's configured in a CRD instead of, you know, in the app directory, in the case of kpack, right? And so, if kpack can read project.toml, couldn't it see, hey, the builder key is filled out, I can't do that, and just refuse to build?
C: I have apprehensions about failing, mostly because, you know, there might be things that are convenient to do for your pack build that you could do in a different and valid way in kpack. But now, if you fill out this exact field, things are going to start failing, especially because it's an extension spec.
B: I mean, I think it's pretty common to have, like, development configuration. So you could have a... I could see having a project.toml that's for local development and a project.toml that's for kpack. That doesn't seem crazy to me. Potentially, I mean, you'd name them differently, but that's allowed. So you could just pass it explicitly, you know, make one the default or whatever, or explicitly define it on kpack.
D: I do believe that there was, or would be, maybe a thought where certain configuration elements within the io.buildpacks domain were expected to always work, right, and work across all the platforms, and in those cases, you know, if it can't be supported for whatever reason, it should fail. But then anything that is maybe specific to, you know, let's say an extension, right, like the builder: that's an extension, that's not technically something required by platforms to support. Anything like that goes into a separate section, right? In which case it's more like, quote-unquote, optional, or specific to a platform, right? So you can have configuration or options that are specific to pack, and configurations that are specific to kpack, and those don't kind of interact together, right, but they've been isolated into...
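The split being described might look roughly like this in a project descriptor (a hypothetical sketch only; the platform-specific table names below are illustrative and not part of any agreed schema):

```toml
# Portable keys: any platform claiming project.toml support would be
# expected to honor these, or fail loudly if it can't.
[io.buildpacks]
builder = "example/builder"

[[io.buildpacks.group]]
id = "example/buildpack"
version = "1.0.0"

# Hypothetical platform-specific sections, safely ignorable elsewhere:
[io.buildpacks.platforms.pack]
# pack-only options would live here

[io.buildpacks.platforms.kpack]
# kpack-only options would live here
```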
D: That's my recollection. Somebody proposed that, I believe it was Sam, and I know he's definitely had some thoughts around this. So it's unfortunate that he's not here.
B
But
I
don't
think
that
ex
that
works
well
for
build
packs
group,
because
that's
not
an
extension
like
builder
right
and
that,
but
that's
still
a
problem
because
kpac
won't
honor.
Your
buildpacks
group.
A: I like that. The domain model there is already complex for simple use cases, I think, something I've been talking about.
B: The thing you just described there, where I want kpack to actually support builder in this way, means that you expect all platforms that choose to support project.toml to support builder as well. Like, you're creating a dependency graph in the extension specs, yeah. I think that was one of Sam's original points early on, when you first brought this up: how much of the project descriptor is required when I say my platform supports the project descriptor extension?
B: Let's use, you know, some other platform that's out there, like Ben's personal platform: you're making the same expectation on that platform as well, to prevent the hard failure, and you're saying you have to implement that.
C: I feel like there are two categories of things that I imagine going in project.toml, and one whole category could be easily implemented by, like, a buildpack that has a contract with the file. It would be very easy for platforms to include this buildpack. It's like, you know, if you want to set process types or environment variables or labels, anything like that, you could have a contract between a buildpack and a file.
C: Then all you have to do is include the buildpack, and I feel like that's actually very clean and nice. Then there are the things that a buildpack can't respect through the buildpack API, that are instead things that, like, a platform is configuring through the platform API.
D: So that's kind of the drawback for that proposal, right. What about the proposal of just providing multiple project descriptors, right? And then, per platform... I'm assuming each platform would have a way for you to specify where your project descriptor is located.
A: I don't think users... I can see... sorry, I don't think users would understand the difference between, like, if you had one project descriptor that had your buildpacks and builders, and another file that had your environment variables and other things in it. I don't know if users would understand it; it feels like it's an architectural difference, and, you know, really...
E: Yeah, with the project descriptor, I get it. I have a separate thing I was going to bring up around that, but I wonder how many people actually are trying to define these builders in the first place. Like, maybe it's really not that much of an issue; it's a very specialized issue for certain people.
D: Even just builder, right, as a very, you know, simple example: if I have a builder defined in my project descriptor and run it with pack, and then I put this project on kpack, but it's configured to use a different builder, then the outputs could very well be different, right?
D: I do wonder, going back a little bit, if something like the idea of profiles would be useful. Some profiles with inheritance, adding complexity to how you define a project descriptor, right, but that's what comes to my mind. I feel like that interface would be better than having multiple files, but I'm not entirely sure if the UX is, like, the...
B: There's a lot of discussion and stuff that probably needs to happen here; I know we're at half time.
D: There are definitely other points, but I don't want to take up the majority of this meeting, especially where I think Sam could have a lot of input as well. So maybe it's worth pushing it a little bit.
A: All right, we will move on to asset cache.
C: In the spirit of throwing up drafts, earlier I put up a draft of an RFC to create a writable cache for buildpacks to store assets, trying to solve a problem that people have been complaining about since the dawn of time, where, when you're building multiple images... I mostly talk to Java users, and they're like, why do I have to download this JDK seven times when it's the same JDK?
C: We don't want to make our layer cache shared across images; there are a lot of good reasons not to do that. But I think there's an easy case where platforms might want to share, you know, like, a structured, verifiable asset cache across images. I'm proposing it being something that platforms can opt into, and platforms can make a decision about how widely to share a particular cache. So on pack, I think, because the Docker daemon is sort of inherently single-tenant...
C
I
think
it
makes
sense
to
share
the
same
cache
across
all
builds,
but
other
platforms
can
make
different
decisions
or
decide
not
to
implement
it
at
all
in
order
to
avoid
cash
poisoning
or
other
concerns.
So
everyone,
please
take
a
look
at
that.
A: Sorry, can I have the nuance, like, explained to me, about how this is unique to Dan's asset packages RFC?
C: So the asset packages RFC is about a way to vendor assets into a buildpack image or a builder image. This is not that, but it uses sort of a similar structure. So, in addition to having the cnb assets location, where you'd have vendored assets and it's not writable, you'd have another location, the cnb asset cache, where, in a case where these things haven't been vendored in, it's a place where the buildpack could download the asset, and then it could be shared with other builds.
E: We do something like this for the Google buildpacks, because we run, like, in Skaffold, so it's shared, and we run on a per-user basis, and so people generally want to share assets between different builds, and it makes a huge difference. We're looking at doing it a bit more broadly, so we can share assets that are downloaded, like JDKs, but we're going to also add support for, like, your Maven repository: sharing that in a certain location that will be shared across the different images as well.
C: So I haven't tried to solve those problems in this RFC, just tackling the asset stuff, but if you had ideas for how that all could be rolled together, you know, I'm curious. But I thought this was at least going to scope the problem down to the least controversial parts, definitely.
A
So
the
because
app
code
can
get
executed
during
the
build
process.
Even
if
your
bill
packs
are
controlled,
you
know,
so
I
I
think
that's
okay.
As
long
as
you
just
want
to
call
out,
you
should
be
really
clear
in
the
rfc
that
should
only
be
enabled
for
local
builds
with
a
single
trusted
tenant
and
you
shouldn't,
even
in
that
case
you
shouldn't
build
source
code,
that
you
don't
trust
locally
and
then
build
source
code.
You
do
trust
afterwards,
because
you
know
I
built
this
thing
off
the
internet.
A
Now
I
created
a
malicious
version
of
the
jdk.
Now
I'm
going
to
build
my
you
know:
production
application
up
now,
that's
leaked
into
there
like
they're
they're.
Definitely
it
worries
me
from
a
security
perspective.
They're
they're
definitely
platforms
that
are
chosen
to
you
know
not
implement
this
type
of
interaction
anywhere
because
you
know
like
even
at
the
cost
of
performance,
because
it's
a
risk
doesn't
mean
we
shouldn't
do
it.
I
just
want
to
feel
obligated
to
say
something.
C
Yeah,
I
think
we
should
add
a
bunch
of
caveats.
It
should
be
optional
and
I
think
we
should
have
a
lot
of
strong
recommendations
for
build
packs
using
this,
like
you,
shouldn't,
pull
an
asset
out
of
the
cache
without
double
checking
its
digest
stuff
like
that.
But
I
think,
because
this
is
so
structured,
it's
not
arbitrary.
Build
packs,
hopefully
should
be
able
to
double
check
that
whatever
they're
using
is
what
they
want.
D
Is
there
I
was
going
to
say
like
since
we're
using
or
we
have
a
checksum
right?
Couldn't
we
in
some
form
or
fashion,
enforce
that
checksum
via
the
lifecycle
at
some
point
and
so
like
mitigating
the
risk
of
mutation
between
builds
essentially.
C: Performance really scares me, because I see this asset cache as sort of growing until someone clears it, and the amount of time... like, taking checksums is already the slowest part of our build process. If we're going to go and take checksums of a bunch of large files, it's going to be too slow. Is there a way to...
A
Could
you
like
check
the
cache
at
the
end
of
a
build
and
then
you're
like?
Is
there
a
way
you
could
hide
the
time
because
it
doesn't
necessarily
have
to
happen
immediately
before
the
next
build,
as
long
as
you're
willing
to
like
trust
that
the
volume
is
coming
back
or
nothing
modified
the
volume
in
between
builds
or
something
like
that.
I
don't
know
trying
to
think
yeah.
C: It's just, like, security theater at that point, right? Like, I think buildpacks just need to... a buildpack can only pull something out of this asset cache, the way the structure is laid out, if the buildpack knows the checksum of the thing it wants. So I think it then makes sense for the buildpack to check whether that's true or not. And since you can turn it off, like, I'd rather just do this simple thing and then let platforms turn it off if they are uncomfortable with it.
E: I like the concept. I kind of wish the lifecycle could do a bit more; like, I wish...
E: If it already knows where it's at, we could abstract it away, like, a little bit more. But that sort of ties buildpacks to be using those helpers, or whatever it is.
E: It depends, right. So if you are in detect, the buildpack says, yes, I can, I can do this, and here are the URLs I'd like, and the lifecycle could take that, download it, and...
D
See,
I
think,
the
complexity
that
we're
talking
about
trying
to
resolve.
We
should
maybe
understand
where
that
complexity
lies.
If
we're
trying
to
offset
that
complexity
from
ourselves
or
the
implementation
of,
let's
say
the
the
life
cycle
onto
the
build
pack
authors,
I
feel
like
that's
the
wrong
way
to
do
it
right,
just
because
we
don't
want
any
complexity,
we're
just
asking
them
to
do
a
whole
bunch
of
stuff,
but
I
think
it
should
be
kind
of
the
inverse
like
we
should
be
doing
more
so
that
they
don't
have
to.
C
Is
the
goal
here
to
stop
a
build
pack
from
doing
the
wrong
thing
like
when
we're
asking
for
these
features?
Is
it
like,
so
we
can
make
this
safer
and
have
a
security
guarantee,
or
is
it
to
make
the
pack
authors
lives
easier
because,
yes,
we
could
have
these
helpers
in
the
life
cycle,
and
maybe
we
want
to
do
that
to
make
bill
pack.
Authors
lives
easier,
but
I
think
the
way
our
permissions
model
work.
It
wouldn't
stop
a
bill
back
from
doing
the
wrong
thing.
C
It
just
makes
it
easier
to
do
the
right
thing
and
if
it's
more
of
a
convenience,
then
a
guarantee,
then
I'd
be
inclined
to
say.
Let's,
let's
do
this
to
begin
with,
especially
because
it's
you
know
opt-in
and
then
and
then
we
could
add
more
conveniences
later.
C: Part of this is that a platform can provide this writable cache location, or it can not, right? So no one's going to be stuck with this insecure feature if they don't want it. So I imagine some platforms, like large multi-tenant platforms, just might not want to introduce this at all, but I think pack should introduce it. Maybe pack provides a way to turn it off for people who are uncomfortable with it, but...
A
I
mean
only
in
the
case
that
you're
using
tecton
to
build
a
bunch
of
applications
where
it's
okay,
if
their
build
configuration,
can
contaminate
each
other
right.
It's
like
in.
E: I would actually say maybe pack should opt out of this by default, so it's something that a user has to opt in to, but my plan is to enable it by default in Skaffold, so people who are building a microservices-type application will be able to share the JDK across that build. And I'm not sure about whether we want to create a separate Docker volume per, like, build location, or whether we want to do it on a per-user basis.
A: I'm definitely supportive of this as it's specified in the draft.
E: That's my number one concern with this. It seems like it would be easy for a buildpack to accidentally bind itself to, like, I guess, using this feature, and if a platform doesn't have it, I guess the lifecycle would fail gracefully, I guess, in that case.
C: I don't think the danger of buildpacks depending on it is any worse than it is for asset packages. It's like, we already have this optional place where a buildpack could look for a vendored asset if it wants it, and now we have another place where it could look for a cached asset, or put a cached asset, if it wants it. I feel like any buildpack that's using these things is going to have to deal with the case where they don't exist.
E
It
just
hardens
my
idea
for
the
helpers
like
it's
like
if
you're
gonna
have
three
different
ways
of
like
figuring
out
whether
something
already
maybe
exists
in
these
optional
places
in
every
real
pack
like,
we
should
probably
be
trying
to
solve
that
for
the
authors
at
some
point.
But
I
I
like
the
idea
are.
A: Just on that topic of there now being a whole bunch of places to check: I think it's worth calling out that this is, like, the fourth caching mechanism. Now we have, you know, layers that can cache; now we're talking about the build layers being read-only, or the ability to write across shared layers between buildpacks. So it's like, you have an individual buildpack cache...
A
We
have
a
cross,
build
pack
cache
now
we're
going
to
have
a
cross
image
cache
and
now
we're
going
to
have
a
cross
everything
that
uses
the
same,
build
pack
cache
and
that
with
that
asset
caching-
and
they
are
all
totally
separate
interfaces,
like
maybe
there's
a
little
bit
of
similarity
between
the
types
of
layers,
but
talking
about
those
as
different
directories.
This
is
kind
of
a
similar
asset
thing
in
the
last
two,
but
those
are
still
separate.
A
You
know,
if
we're
kind
of
said
we
want
to
tackle
reducing
complexity
in
the
project,
get
rid
of
terms
that
are
confusing
for
users.
You
know
kind
of
make,
build
techs
do
what
they
do
best.
You
know
having
four
types
of
caching
not
saying
we
shouldn't
do
it
just
it
does.
Does
worry
me
a
little
bit.
B: Oh, I'm, like... I'm somewhere around where Steven is. Like, I'm not... I don't want to block the feature. I wonder if there is a way we can... so, I think there's also additional complexity on the buildpack author, right? Like, here's yet another thing; it's like, oh, please learn about the four ways you can store stuff for your buildpack. Like, I'm very much in favor of the feature, just, I wonder if there's a way we can do it without making it as complex for the buildpack author, but...
E: It's hard. So we looked at the asset cache and we're trying to think about how to adopt it, but some of the things that we have within App Engine... historically, people didn't like that we had a restricted platform, you know, you had to run this JDK and you run this version, so we're much more open now. So we download the latest versions by default of, you know, whether it's Python or the JDK or whatever, and so pre-hydrating all these registry cache descriptors is kind of problematic.
E
We
can
do
it
for
some
of
the
major
versions,
but
we
want
to.
We
don't
want
to
box
people
in
so,
and
we
also
don't
want
to
download
like
three
different
versions
of
the
jdk
for
java
11,
java
8
and
maybe
java
15
right
cause.
That's
just
a
one
gig
that
we're
going
to
be
downloading
over
and
over
again,
so
we
kind
of
prefer
to
do
it
on
demand
based
on
what
the
user
wants.
A: Like, you know, when that's not the situation and you don't have things pre-hydrated, it's not free to recover the bits, right. But I wonder if, if you combine the two ideas together and then are careful with the permissions between the two, you know, does that give you a thing where, yep, buildpacks can download whatever they want and share them with other buildpacks, and...
A: Cache that's coming from your images, yep, that can show up there as well, and then it's only one more type of caching, and the type of caching is, you know, generally felt to be cross-image caching, even if some of the cache can go further than that, right. And then, you know, if the build read-only layers thing is a little more like the other layers thing, then maybe it's only two kinds of caches.
C: I feel like you need two locations, though: like, if you want to build things into the image and have a shared volume, they can't be the same directory, right. But I think, because we're laying the same directories out, hopefully it's still, like, a single concept. And if you want to make it easier for buildpack authors... like, you know, now that we're promoting these things into the spec: before, I would have implemented this in libpak and it would just work for all the Java buildpacks.
C
If
we're
moving
this
up
in
the
spec,
you
know,
maybe
we
can
put
in
libc
and
b
and
then
anyone
using
those
bindings
we'll
get
it,
and
you
know
maybe
the
bill
pack.
Author
tooling,
provides
a
bash
script.
That
does
the
same
thing
like
I
don't
know
if
we
need
to
solve
this
in
the
life
cycle,
if
we
solve
it
with
bill
pack,
author,
tooling,
that
makes
it
easier.
A
I'm
always
a
little
bit
skeptical
of
like
make
the
api
complex
and
that
solve
it
on
the
client
side,
but
the
you
know,
I
think
that
definitely
reduces
complexity
for
those
users,
but
back
to
the
like,
they
have
to
be
separate
directories.
Could
you
just
sim
link
create
root
on
some
links?
You
know
between
the
image
based
asset
cache
sha's
into
the
shared
asset
cache
when
that
mode
is
enabled,
and
then
then,
when
the
mode's
enabled
the
directory
is
writable
and
when
it's
not,
the
directory
is
read
only
at
the
top
level.
A: You don't have to care where the things came from in that directory; you just have to know your assets, right, and sometimes, if the platform allows you to, you can write assets into it as well, right? That seems definitively simpler to me from the buildpack author's perspective, although certainly not from an implementation perspective on the lifecycle side, you know, if we have to create symlinks that are owned by a particular user. You know, but maybe...
E: One issue with complexity from the implementation side: we're requiring SHA-256s, and for some of the downloads that we request, we don't actually have a SHA-256; it's either a SHA-1, or sometimes it's a SHA-512, I've seen, once. And so now I've got to recompute values. So, in my implementation, we're just using the buildpack ID and the buildpack version, creating a directory under there, and stashing things under a known name from that point. Since we're the ones downloading anyway, we're not really that worried about the cache-poisoning side.
E: In terms of the tooling that I'm providing within our team, it's what Emily's describing: we have, like, a library, and we've got a download, and there's a download-and-extract. You provide the URL that we want to download from, an optional hash, and a directory where you want the file, or, you know, the directory for extract, because we can be extracting to a layer. And in that library, we'd look at the various places, and otherwise we download it.
C: To do "remove shell logic" quickly: I just want to plug that everyone look at this RFC to remove bash-specific logic from the platform, from the lifecycle itself.
C
We've
talked
about
this
in
working
groups
before
what
I'm
proposing
here
is
that
every
process
is
direct
and
that,
if
you
want
a
shell,
you
have
to
explicitly
include
bash
in
your
process.
I
think
it
will
make
it
easier
for
folks
to
understand
what
is
happening,
because
there
aren't
as
many
cases
of
special
logic.
C
In
order
to
do
this,
we
need
to
remove
profile
script,
support
from
the
life
cycle,
because
profiles
are
inherently
bashy
and
we've
come
up
with
a
you
know,
command
prompt
version
of
profiles,
but
I
feel
like
this
already
creates
surprises
for
users,
where,
like
my
dot
profile,
you
know
happens
to
work.
If
my
build
pack
created
a
shell
process,
but
it
doesn't,
if
that
same
process
moves
to
being
a
direct
process.
E
Thoughts
we
support
a
c
and
b
shim
for
v2
to
cnb,
I'm
wondering
just
hypothetically.
What
would
it
look
like
to
shim
something
in
like
this?
Would
you
think
we
could
create,
like
an
exact
d
to
like
wrap
these
profile
scripts?
Is
that
kind
of
how
something
like
this
might
just
like
high
level
work.
C
Then
I
think
you
know
we
can
more,
rather
than
having
these
implicit
dependencies
on
bash
any
build
pack,
that's
you
know
either
wrapping
a
user-provided
profile
script
in
an
exact
d
or
you
know,
creating
an
exact
d
that
is
bash
based,
can
then
declare
a
dependency
on
bash
rather
than
sort
of
having
it
be.
This
implicit
feature
of
the
life
cycle
that
you
know
sometimes
doesn't
even
work
because
the
stack
image
doesn't
have
bash
on
it.
B
I
guess
I'm
wondering
with
this
one
as
opposed
to
the
last
rfc
right.
You
were
saying
that
the
java
folks
are
complaining
to
you
voice
seriously
about
it.
I
guess
I'm
wondering
I
you
know
I
understand
what's
going
on
here,
because
I'm
wondering
if
you're
getting
the
same
pressure
just
to
have.
You
know
the
full
picture
around
this
one.
C
On
the
java
side,
we
actually
already
moved
all
of
our
processes
to
be
direct,
because
we
got
complaints
about
some
of
the
weird
argument
handling
if
you're
using
a
shell
process
right.
The
first
complaint
we
got
was
that
you
needed
a
way
to
append
additional
arguments.
C
We
solved
that
well
for
direct
processes
and
with
some
hackery
for
shell
processes,
and
people
turns
out
that
hackery
was
too
clever
and
people
don't
like
it.
So
we
moved
all
of
those
things
to
be
direct,
because
whenever
someone
encountered
a
shell
process
that
worked
like
this,
they
were
confused
by
it.
So
I
think
we
should
just
take
it.
C: Yeah, I think that's the one thing, the one convenience, that people really need, and I'm proposing a very simple version of it. It's, like, kind of bash-like notation, but with nothing fancy, other than, you know, if you have a dollar and curly braces, we will replace the thing before running it, so kind of similar to what Kubernetes does. At first I wanted to use the parentheses notation that Kubernetes uses, but I realized that actually doesn't play well if you're running in Kubernetes, because it will get replaced too soon.
C: Yeah, and also Kubernetes doesn't replace based on env vars in the config, only ones that are in the env section of the pod spec.