From YouTube: Working Group: 2020-09-16
Description
* Stack Buildpacks
* Builder RFC: https://github.com/buildpacks/rfcs/pull/116
* Asset Caching: https://github.com/buildpacks/rfcs/pull/81
* Mirroring/Proxying Images in pack: https://github.com/buildpacks/pack/issues/821
A
Cool, first thing is new faces. I definitely don't see any new faces this time. Next up: release planning and updates.
E
Similar over here: we're not planning to start focusing on feature-complete work until next week, so we'll have more of an update there next week.
A
Cool, let's do our weekly outstanding RFC review. Give me one second to share my screen.
A
So first thing is a new PR: add builder to the distribution spec. Anything actionable on this one to talk about right now? Is this on the list? Yes, this is on the list, but we don't have David here. Is that right?
A
Maybe he just lost connection; we'll move on then. RFC: lifecycle experimental API mode. This is a new experimental API thing from Terence.
A
Cool. Next: "build/run image configs must contain m.path."
A
A lot of people are on vacation and such. Anybody have any status on this one? Let's just move on.
A
Cool, I can reach out to them.
B
Distribution team: I was waiting on some small changes from Terence, but nothing is waiting on me, and nothing actionable. Sounds good; I think we got some approvals.
A
Just waiting for Terence then. Application mix-ins.
B
Yeah, just FYI, I'm not paying a lot of attention to this one. I know there are some comments, but I probably haven't addressed them. I'm waiting for the mixin stuff from stack packs to sort itself out, and then I'll come back to this one.
A
That makes a lot of sense; let's definitely agree that we should focus on stack packs first. Is stack packs on the agenda this time? Yes, it is. Awesome, so we'll talk about that. And then "buildpacks should be able to provide default process types for apps." This is super interesting; there's a lot of good discussion about this one. Natalie, I think we were going to chat about it at some point, just based on the feedback...
A
...we got on Slack. Anything actionable for this group right now, or anything you're looking for? Cool, I don't think so. I guess if people have opinions about that, please put them in the thread, because it's good to get as much feedback as possible on what we should do there. "Relax mixin contract."
A
This one is in FCP, and is it the 17th yet? Nope, that's tomorrow, so that'll go out tomorrow. Sub-team updates?
B
Yeah, there are two different things. I think the changes were largely around how it's messaged: sort of how a user knows that something's experimental, and how we deprecate it or kill things, you know.
A
Cool. Anything actionable right now? No; Terence and I should take a look. Cool. Draft for mixins, and the offline buildpackages RFC. This one is on the list today, yeah, asset caching, so we won't talk about it right now.
B
Yeah, let's see, I'm sharing. The first thing on the list is a question about the creator dropping privileges, but I'll just let Jesse talk about it.
G
Yes. Right now the creator drops privileges pretty much instantly, and it drops them permanently. So for the creator to be able to work with stack buildpacks, we'll need to be able to elevate back up for the stack buildpack execution as root, and then drop permissions again for the remainder of the creator cycle. I just wanted to make sure that sounded reasonable.
D
I have a question about the creator and stack packs in general, because after detect you also want to kick off a run of the stack buildpacks on the run image before you export. So can you actually use the creator at all, since it does everything in a single process?
A
I was going to suggest that as well: does the creator make sense for these more complicated, multi-image workflows, or can you, you know, tell people to just run the normal workflow?
A
We don't have to make a decision about what it ends up being eventually, right? We can keep it out of the creator at first, you know, until we get a working workflow with the full thing, and then come back and add it to the creator later if we find that people want it there or there's a use case for it.
E
With stack packs it's like Tekton, right, where you select which one of the two tasks you want to execute. It becomes a very difficult choice to understand when you should be using certain optimizations and when you shouldn't, so restricting stack packs from working in the creator workflow seems pretty...
E
...you know, not ideal for very specific use cases. But I do agree that, maybe just for getting it out the door and getting some sort of feeler for the feature as a whole, it sounds like a good idea to have it not be in the creator on the initial pass.
A
For Tekton this is going to get way more complicated, right? The purpose of the creator is that it's a one-shot: you can drop it into an existing template if you want to, and it's very simple to set up. But here you have to have multiple containers no matter what, and so if we added stack pack support to the creator Tekton workflow, would that be a third template that sits in between the two templates, right?
D
Especially because this can be done dynamically in the build plan, right? Deciding whether you're going to install a mixin means you need to make the decision about whether you're running in one container or multiple up front. So basically, if you have stack packs at all, you'd never use the creator, so we've sort of lost that optimization. Maybe that's worth it for the feature, but it's a trade-off.
A
As long as, like... I got that it might be tricky to find a way to know if you're going to need stack packs ahead of time. But if there's a reasonable way to rule out stack packs early on in the build, then having a trusted builder still go through the whole process to do a build doesn't seem like a terrible thing, right? It's a little bit slower, but it's not so bad.
A
I don't know if this is worth bringing up, but we could challenge that: we could use the same container. Kaniko does this when you do a kaniko Dockerfile build: nuke everything in the container, load the run image layers into it dynamically, and then do the run-image build all as one thing. That's technically possible.
B
Well, you know the problem; I've been thinking about the parallel thing too. That's going to get really weird in terms of, like, output.
B
Yeah, in any case, I kind of like doing the kaniko-style thing of wipe and load. We don't have to do it in the first iteration, but given what the creator is meant to be, that actually sounds pretty appropriate.
A
But then we don't get good caching on the build image, because there's no place to put it temporarily. The big disadvantage here is that now you have to download the layers on every build, right? Probably remotely.
A
Now, could you do some hybrid kaniko thing where locally, when you're running the creator in pack with the Docker daemon, because it can access the Docker daemon, it loads the uncompressed layers through the Docker daemon into the container, so you can use your Docker daemon cache? You know, there's a whole bunch of stuff you can look into.
A
I guess it's probably good if it goes through the RFC process, but it could just be, you know, figuring out what the best way to do it is.
G
The only problem I have with that is: I think we probably need to do something to make something like pack fail if there are stack packs. Because if you have a trusted builder, or if it's untrusted and you trust it, it will still build, but it will not build the same thing anymore, right?
D
Is it a fail, though? You could just fall back on the multiple images; you don't have to use the creator.
A
If you're using, you know, one of the Cloud Foundry, Heroku, or Google builders, right, the normal thing is to use the trusted workflow. That would make me hesitant to add stack packs to my builder, because it would slow down my pack builds. So I'd almost rather it try and fail, if that makes sense: if you're using a trusted-builder workflow and you try to do a package installation, you get some message that says, hey, you've got to slow down your builder and use this other way of doing it.
A
If you want to. It wouldn't be great; it definitely hurts the UX for end users. But it either hurts these users, you know, insidiously and persistently, right, where everything slows down on a release in one direction, or it hurts the UX with a sudden "oh, I've got to run it the slow way if I want to do this kind of more complicated thing."
E
Yeah, I was going to say: as far as the release planning for this feature goes, I think we might have discussed it, but it sounds like, for one, having a creator seems to be even more important, as you explained it, Steven, where the stack author probably wouldn't want to include some of these...
E
...if we're going to be slowing down the build process by nature. So I think that elevates the need for the creator work to be done. And then, on the other side, if we're having a lot of stuff that we're really just wanting to implement for experimental purposes, is that kind of the plan: for it to be experimental behind the scenes, and then people opt in to these functionalities?
A
I think we should take the rollout of this feature in general very slowly, because it's so complicated and we need to get a lot of feedback on it. So I don't want to defer the creator thing until, as you say, it's ready to go and everybody in the world goes to use stack packs. It just seems like getting one working workflow that covers all parts of the RFC out for untrusted builders first, releasing that, and then getting feedback on it...
A
You know, it's experimental, with a flag you have to pass that says "use stack packs and untrust your builder when it runs." Then getting feedback on that, and then coming around and adding create-builder and 20 other things I'm sure we didn't think about, right, and then releasing it all together, is good. And I think that's going to happen over a long enough period of time that, you know, by the time we say "everybody in the world, use stack packs"...
A
That was never officially supported, because it wasn't really supportable, right?
A
Your packages... so that wasn't an ABI issue; it was just that most packages you install are going to get installed to different paths that just won't work, and so, like, 50% of things fail in crazy or silent ways. It just wasn't a thing we could ever safely say is usable in production, right? And that's why, even though we finished it really quickly, we didn't want to, you know, say that this is something people should use if they could at all avoid it.
A
In this case, this is fully supported. You know, there's no danger to using it at all, right? It's completely safe, and that's why there's so much complexity in the process of getting it out there. So once it's usable, people should really start to use it, if that makes sense. And then once we say it's ready to go, it's not hard for us to say: yes, this is an officially supported way of doing things.
J
Well, sorry, I don't know if this is useful, but I just wanted to point out that the Heroku and Google builders are all trusted by default. So if we're doing something that is, you know, expecting the user to explicitly opt in, we should just keep that in mind, because if we ever put stack packs on those builders, then it'll be broken by default.
H
Yeah, which is almost the same as passing in --untrust-builder; not trusting the builder just follows the normal workflow. How can we do that? Like, how can you opt into using a specific buildpack? I guess we could manually do that in pack, and include it or not include it in the list of buildpacks we provide to the lifecycle.
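The "list of buildpacks we provide to the lifecycle" is the detection order the platform hands to the detector. As a hedged sketch of the idea, not something decided in the meeting, a platform could opt a stack pack in or out simply by including or dropping its entry from that order (all IDs and versions below are made-up examples):

```toml
# Hypothetical order.toml handed to the lifecycle's detector.
# Dropping the stack pack entry from the group is one way a platform
# could avoid running it; IDs and versions here are illustrative only.
[[order]]

  [[order.group]]
  id = "example/java"             # ordinary buildpack
  version = "1.0.0"

  [[order.group]]
  id = "example/apt-stackpack"    # stack buildpack; remove to opt out
  version = "0.0.1"
  optional = true
```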
E
I do have a question about where we are right now and how this is being developed. I'm assuming there are changes being made to the lifecycle; are changes also concurrently being made to pack to test it out, or are there different methods?
B
No. Also, the changes are being made in a branch; we're not putting anything in master until the RFC is done. This is mostly just to validate what we're saying and then to see if we can uncover stuff that we didn't anticipate, yeah, like crazy things not working. Jesse's been running the lifecycle manually; I don't know if you wanna talk about that at all, Jesse.
G
Yeah, we're in a lifecycle branch where, you know, we've got custom builders with some stack packs, testing the build plans and how they interact, the detect phases and ordering, and writing a bunch of tests around that stuff as well. So that's currently kind of where we're at.
G
I think we're to the point now where, and I think it's probably on Joe's list, we're going to do some cache-restore stuff next. We've got the detect phase and build phase sort of doing what we think they're supposed to be doing, and now we'll get to the restore and cache part, and then exporting will come after that. So we haven't even gotten to the point where we're creating the extended run image yet.
B
And Jesse's added some acceptance tests, which, you know, have been pretty helpful in validating it. But nothing in pack yet.
B
I was going to suggest that we do some breakout time tomorrow to talk about the extend phase, and actually, I think the rebase stuff is very related to extend. If we talk about that here, we're not gonna talk about anything else, so unless anybody objects, I'll put that on the schedule for tomorrow. There is one thing I don't want to talk about for too long, but I think Jesse is probably going to run into it.
B
The stack already has a mixin, so I think there are two possibilities. One is that it's like a generic input from the platform; the platform says, don't worry about where these came from, these are provided mixins. The alternative is that the detector is sort of aware of where to look for the stack metadata to see what mixins are already provided. So I guess it's a question of whether the detector is aware of the stack mixins, or stays generic, like the build plan mixins.
A
We also have the precedent of having the stack ID set in the environment of the stack image. I wonder whether having this mixin list set in the environment of the stack image makes sense or not. I kind of think you said in the past that we're not actually sure we want to expose that list to buildpacks anyway, so stack.toml, maybe, may be the right answer. I just wanted to...
B
Not filter, but like... the detector would start with a list of provided mixins, and you can think of it as: it just wouldn't care where they came from. Maybe the platform, you know, maybe there's some other mechanism that ran before the lifecycle that enhanced the stack; who cares, right? Would you prefer that it's just a generic "provides mixin equals true," or the other two options like you were describing, with an environment variable or stack.toml?
A
The thing to figure out, I think, is: should we wrap the stack into this more generic mixin contract, so that the thing the detector is getting is a build plan that's coming from the stack? But then there's the question of what generates that build plan that's coming from the stack, or whether we just invent some other concept that looks like it.
D
Right now, if a buildpack provides something and it is not required, that's not a passing configuration of the build plan. I think this is getting back to the build mixins "any," because, to describe in the build plan that you can provide more than one mixin, if it isn't "any"...
D
Maybe we have a nice mechanism for "any," but for anything besides that, short of making a special case for "any," you need to create a set of provides, or the combinatorial explosion of all of the mixins you could provide and/or not provide. I wonder if we just want to relax that requirement for mixins, or evaluate the contract differently.
B
Something worth... yeah, the static ones I think I know how to deal with. The dynamic ones should be fine: I don't see a stack pack adding provides if they're not actually required, so that seems fine. It's really just the static ones, and we can handle those separately.
B
Yeah... oh, and I don't think we need that. If you're going to provide something generically, like "any," you're going to do it statically. I don't see a stack pack saying "I provide anything" dynamically; that seems wrong to me. If it's doing it dynamically, it needs to be explicit about what it's providing, based on what was required, or some list that it grabbed from a URL or something, I don't know. You know what I mean.
B
Yeah, maybe it's both, right? I thought we were saying that we wanted to, and that's sort of what I was getting at: the detector gets the list of stack-provided mixins and uses that as part of its resolution for resolving the build plan.
A
So stack packs can't dynamically output the mixins they provide, is that right? They're pre-defined in buildpack.toml.
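To make the "statically pre-defined in buildpack.toml" idea concrete, here is one possible shape it could take. This is only an illustration of the proposal being discussed, not the spec: the buildpack ID, stack ID, and mixin names are invented, and the exact key for stack-pack-provided mixins was still under design at the time.

```toml
# Illustrative buildpack.toml for a hypothetical stack buildpack that
# statically declares the mixins it is able to provide, so the platform
# can validate them without starting a build.
[buildpack]
id = "example/apt-stackpack"
version = "0.0.1"

[[stacks]]
id = "io.buildpacks.stacks.bionic"
# Example only: a static list the detector could match against requires.
mixins = ["ffmpeg", "libpq"]
```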
A
I thought we had a use case for that a while ago, but we said that it's too hard to validate things if the mixin list is determined dynamically, because you have to start the build first. Is that something you raised? And so, if we keep that static, then we can say yes, we are going to relax this.
A
It's like there are sinks, sort of, for those requirements that allow the requirements to pass, but on the provider side you don't have to match those mixins because they're statically defined. And then, if no mixins match on the provider side, we choose not to run the stack pack, which is just like "optional," right? It feels very similar to what "optional" means for the other buildpacks. So I think that all works out well. You could even do it dynamically, but I thought we said we didn't want to do it dynamically.
D
When it was the two mechanisms, it was bothering me, and I wanted the static one to be a pre-validation but the dynamic one to be the source of truth during detect. But if we don't have dynamic provides, I'm okay using the static one in the actual detection. It's the combination of the two that I feel isn't clean.
B
Yeah, I guess the thing I was thinking was: you have the apt buildpack, say; statically you say you provide anything, and then either a buildpack or an application requires a package or a mixin, ffmpeg or whatever, and something would put a provides in the build plan. No?
A
I have two problems, two unrelated problems, actually. One: this list of mixins we're going to pass into the detector will need to include runtime mixins that aren't available on the build image as well. And we can't statically bake the run-image mixins into the build image, because we've said in the past that they could change for a given build image in the future. So I don't think we have a canonical source for this.
D
I think when you create the builder, that's the time we validate mixins, when we validate that the two images even work together. And if you're adding mixins to the run image after that, dynamically, you should be recreating the builder anyway, because...
A
That's the next problem I was going to get to. We had another thing we were going to chat about tomorrow, which is that there are some big disagreements on offline buildpackages, where some folks on the Cloud Foundry side who work on buildpacks feel strongly that a separate image with all the dependencies in it is very difficult to manage, release-process-wise. I wanted to make sure that Emily and Ben, especially, could talk to them.
A
Just a logistical point, I guess. Dan, would you be okay reaching out to folks and scheduling a separate thing for offline buildpackages, so we have the time tomorrow to do stack packs? Yep, definitely. Cool.
H
Builder RFC. I put this up yesterday, so I don't know if people got a chance to look at it or not, but Emily, thank you for reviewing it, and Joe also. I just had a few questions I wanted to discuss; Emily actually brought up most of them in her review, so that was great. First of all...
H
I think this was mentioned already in the discussion issue on the spec: whether people had strong opinions on where a specification for builders should live, and whether it should be an extension spec.
H
I kind of thought that to some degree. I wasn't considering versioning in this, but at the very least it seemed very much tied to the distribution spec, so I would definitely want it to at least show up there. But whether it should have its own standalone extension spec... right now I just wanted to, I guess, open it to the floor and see...
B
...what people thought. It being an extension doesn't mean it would never be in core, either, but I'm kind of in favor of it being an extension at first, at least.
A
I don't see builders as something that goes away completely, because they're useful in that Tekton-like context: in any case where you want a pre-baked image that does buildpack builds, right, we have this great solution that can do it for you. I think the discussion there is more about whether builders should be the primary way people do buildpack builds with something like the pack CLI.
A
Given that, you know, they are restricted, in the sense that their buildpack versions are locked in. Or should the pack CLI migrate to a model that doesn't have baked-in builders or buildpacks, where builders become, maybe, something that doesn't exist in pack, or something that's kind of behind the scenes, where really you're pulling the buildpacks?
I
Yeah. So the recent release of kpack doesn't consume builders, but it still plays a role in builders, because it creates what should be spec-compliant builders to manage the builds. So it seems like this RFC is useful in that sense, because we'd also be trying to adhere to the same spec of what the builders are. I agree; I'm interested in exploring what it looks like to break it down so that users of pack, and perhaps other platforms like kpack, are not treating builders as their primary entry point. That's what we're doing in kpack now: users choose buildpacks or the stack images and then provide an order to create a builder, and then utilize that to build images.
E
So the definition of the extension section, if I remember correctly, ultimately meant that a platform or any given system would be Cloud Native Buildpacks compliant if it adhered to the core specs, right, but not the extension specs. And I think maybe that's what ties into the reasoning for the extension. Is that right?
A
I would vote for an entirely separate thing. I also think the core specs should be broken down into, like, three different layers of specs that make sense: a basic API that could even be its own tooling for, you know, layer manipulation in the way we do layers, and then all these rules about how the environment gets set up and all that stuff. I feel like it's healthy to have more modular...
A
...you know, less cross-referential concepts. So I would support the builder being its own dedicated extension spec, without saying in the core spec "and then when you're using a builder, these other things apply," right? Just trying to keep a nice architectural boundary in the writing.
H
So I guess that leads to the second question, because I wasn't quite sure how that would work. Emily had also brought up versioning concerns, like: should there be an API for builders, so we can better enable stable changes to that? I mean, are there currently... I know we just started releasing different versions of the spec.
E
I would go further and say that I'd like to maybe ask that we start pushing more towards the OCI media types and fields, to leverage those to more explicitly detail, you know, that this is more or less what we should be looking for, or at.
E
As part of the OCI spec, I believe there are certain areas that go into different media types, and different fields within different parts of OCI where those media types are then used. So I know we have a distribution spec, essentially, that talks about OCI layout and how buildpackages are distributed.
E
I think we found that there's an OCI artifacts spec that kind of goes into detail about using media types. I think we should latch on to that and thereby get those media types cataloged in a way where they're recognizable and more thoroughly used within the ecosystem.
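For context, the OCI artifacts approach works by putting a custom media type on the manifest's config object, while keeping the manifest itself a standard OCI image manifest so most registries still accept it. A hedged sketch of what that could look like for a buildpackage; the custom media type string is invented for illustration, and the digests are placeholders:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.cnb.buildpackage.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 233
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 16724
    }
  ]
}
```

Note how the `v1` in the custom config media type is where a version could be carried, which is the point E makes below about versioning being baked into the media type.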
A
I might be misremembering, but we talked very early on with Steve Lasker, who's kind of part of that artifacts group, about, you know, how we should label our buildpackages, whether we should use a different media type for non-runnable artifacts, and all that. I thought the thing we ran into is that when you start doing things that registries don't think are very normal, right, when you stop using media types that are really normal...
A
...you lose a lot of registry support; registries start behaving strangely. We're already a little bit restricted: we need cross-repo blob mounting and some of the more modern registry features. Anything we can do to push forward and be more compliant with how OCI should work sounds really good, as long as we're testing on every registry to make sure that it continues to work, because it doesn't seem worth it if, suddenly, half the users can't use it.
A
Azure supports that, because they're largely driving the effort. It's things like ECR that were really further behind in their implementation back when we looked into this. I would actually be very surprised if the GitHub registry were super up to date with it, although I guess they are owned by Microsoft now, right, so maybe. Yeah, I mean...
H
Just because that registry, the GitHub registry, left beta... or no, it went into public beta as of two weeks ago. So it's still fairly fresh.
A
As long as we're testing against everything, right, then I totally agree it's the right thing to do. I just worry about dropping support just for some nice metadata.
E
On versioning: the way I tied the media type to your question, sorry, is that as part of the media type you specify the version in there as well.
H
Okay, so that would be baked into the OCI type. Okay, that makes sense. And then the third question, which Joe had brought up, was whether we should be speccing a builder.toml as well. We seem to use that in pack as a kind of artifact for how to construct a builder, but I was wondering what people thought; I don't think kpack uses a similar thing.
B
Yeah, that makes sense. I think the bigger question around it is whether we want to spec the lifecycle of a builder, like how it's created and that kind of stuff, but I did not feel very strongly about that.
A
I think the question of whether the builder.toml file goes into the spec, and is thus versioned along with the specification, is maybe a different question from whether the builder.toml file should be a versioned file, right, the way buildpack.toml is. I think that's maybe a different question. I'm not sure what the answer to it is, though; it doesn't matter that much.
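For readers unfamiliar with the file under discussion, the builder.toml that pack consumes describes how to assemble a builder: which buildpacks to include, the detection order, the stack images, and the lifecycle version. An abbreviated sketch follows; all IDs, image names, and versions are examples, and the exact schema is pack's, not (yet) the spec's:

```toml
# Abbreviated, illustrative builder.toml as consumed by pack.
[[buildpacks]]
uri = "docker://example/java-buildpack:1.0.0"   # buildpackage to include

[[order]]
  [[order.group]]
  id = "example/java"
  version = "1.0.0"

[stack]
id = "io.buildpacks.stacks.bionic"
build-image = "example/build:bionic"
run-image = "example/run:bionic"

[lifecycle]
version = "0.9.1"
```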
A
The asset-caching conversation: I'll just mention the pushback, but then let's actually use most of the rest of the time for the next item on the list, because we don't have the people here who feel most strongly about this. Having one big release image that has all your dependencies in it is really hard when you're releasing 12 different buildpackages at the same time, because you have to keep that thing up to date with just the right versions, or with many versions, of all the dependencies. It actually makes the release problem much harder.
A
So, you know, for asset caching, if we need separate asset-cache images for every single image, it gets really complicated. On the separate-image method, there was some strong feedback that it would introduce a lot of complexity in other places that were unexpected. So that's the general feedback. When we get the folks from the team that handles that release process on our side in here, who have those stronger opinions, we can chat about it separately.
A
Or relocation, things like that: how does that work? How do you keep everything in sync? They had some very good points; I just don't want to try to represent them worse than they could. So let's defer that to the next time, that's okay, and then we can talk about mirroring/proxying images in pack.
E
That essentially takes any request from any registry and, essentially, aggregates different registries so that you can just point to one, and then it automatically caches them and does a whole bunch of, you know, scanning and other stuff. In an enterprise setting this is very common, I guess you could say, for all sorts of dependencies, so OCI images, being a dependency, kind of fall into that same bucket of a use case.
E
The request that they had was ultimately, in a simplified fashion, to just replace the host of the image name with that of their specific registry or artifact store, so that it itself can then retrieve the images for you. Which images? This would be for all images.
D
This is for run images; for the app image you can name it whatever you want, right? So it seems like builders are the sticking point here.
A
Images... I think, for this feature request, maybe let's take a step back. I think this is talking about proper Docker registry proxies, right, like the feature that's in a Docker registry. The weird thing about that feature is that it requires client-side support, because the name is going to be different when it pulls through.
A
In addition to the, you know, Docker-daemon one. This is a common Docker feature that lots of Docker tooling has to support; you know, if we don't support it in pack, we should. But I don't think it requires things like rewriting the run image in our image definitions, or run-image mirrors, or anything like that. It's a much lower-level kind of rewriting of which registry you're talking to for a host name: you don't actually change the hostname part, you just talk to a different registry for that host name.
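For comparison, the daemon-side version of this is the `registry-mirrors` setting in `/etc/docker/daemon.json`: the client keeps using the original image name, and the daemon transparently tries the mirror first (the mirror URL below is an example):

```json
{
  "registry-mirrors": ["https://mirror.internal.example.com"]
}
```

Note that the daemon only applies these mirrors to Docker Hub pulls, which is part of why a registry-aware client like pack would need its own support rather than inheriting the behavior.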
E
I agree that this feature exists. I believe I looked into this a little bit, and you're right, it wouldn't work from GGCR's perspective, which I think is where it comes up. I'm not entirely sure it's really the Docker daemon that's taking care of this; I'm assuming it probably would have the feature enabled, but I think there could be a little bit more research. I'm not hopeful that that underlying feature is the exact same feature we're looking for here, if that makes sense.
E
No, I don't think that's what it's going to solve. I know it is a proper Docker proxy, right, but I don't think that the solution that the Docker CLI or Docker for Mac has will be the solution that works for pack. We're going to need to do some sort of work in order to make it function.
A
I guess I'm saying: do we know for sure that GGCR doesn't have the feature built in to support that proxy? GGCR has been around for a long time and it's pretty mature at this point; I'd be really surprised if it didn't also have it, and once GGCR has the feature, then it's covered. I don't know the answer these days; it's just the first place I'd look before we start.