From YouTube: Working Group: 2020-06-17
Description
* Branch renaming
* Launcher RFC: https://github.com/buildpacks/rfcs/pull/84
* Root buildpacks: https://github.com/buildpacks/rfcs/pull/77
A: I was wondering if we should cut a lifecycle patch. We got a contribution from Lucas that fixes an issue where, if you're using a cache image as your cache destination and your cache image is identical between builds, we delete it. We have the fix in there, so I was thinking about cutting a patch for that, if that suited everybody.
B: Cool. Offline buildpackages RFC: I don't see Forest or Dan here for this one. Last time I talked to Dan, they were going to clarify some things, like whether the digest really stays the same or whether it's just a different digest, just the digest of the buildpack layer; so there's some confusion. They mentioned it at the last working group meeting, which I wasn't at, and they're going to keep making changes to it. It's not on the agenda for this time; I think we plan to talk about it tomorrow.
A: I think my question about the digest, and we can talk about it tomorrow, is how an online and an offline buildpack of the same version and ID can live in the registry when they have different image digests. That's sort of what I meant with that digest question. So hopefully we can talk about that tomorrow.
B: This hasn't had any progress for a while. Are there outstanding actions?
B: That's the draft of "image exposes metadata for layers that participated in the image's build." This is Alvarez's thing. He was going to rethink it, because you don't want to include stuff about the build layers in the final image, since that breaks reproducibility. I don't know if he's made any progress. Emily, were you going to reach out to him?
C: There has been some expression of a desire throughout myriad development communities to stop using master as the default branch in git repos due to its racial connotations. We've had a discussion inside the leadership team, and we are going to do that for the repos controlled by the buildpacks organization.
C: This is sort of supported by the fact that GitHub has come out and said that they're now in the process of making it so that master won't be the default when repos are created. And from my own personal side, the Spring team has committed to a very large effort to change every single repo in the Spring ecosystem as well, and I think that's...
C: ...indication enough that there's a widespread conversion under way. We're going to change it from master to main and take on the responsibility of making sure that all of our CI gets fixed up and things like that. So that's more an announcement than anything else; be aware this is coming.
C: The Spring community has also been grappling with "allowlist"; the pedants over there really don't like that it's not an English word. But I think the way it's come down now is they're going to go with allowlist, because there is a broad enough groundswell in the rest of the community. For internal stuff like this, though, I don't think it matters at all.
A: For those of you who didn't make the Thursday working group, we discussed making changes to the launcher argument RFC so that, instead of using the CNB process type environment variable to select which process type we're running, we're going to make the launcher a multi-call binary and create symlinks for the different process types. So you can actually just put your process type in the entrypoint as a way of configuring which process type you're using, or put the launcher itself in the entrypoint, and that means: don't launch a process type, launch a custom command.
A: Even the last version of this RFC still used the two dashes to signify direct versus shell parsing, so you never had to explicitly say bash; what changed is the final piece, the arguments, which could be broken out separately because it's independent of the rest. The launcher resolution change is how we execute a process in bash when you're given multiple values in the image config. So we used to run bash -c, and then the first argument was the script and subsequent arguments were arguments to bash.
A: The philosophy is to make CNB images behave in a way that is intuitive and similar to other OCI images. For many, I would say most, images that run an app, the command that actually runs the app is set in the entrypoint, and the command field in the OCI image provides additional arguments to the app. So this is changing our default behavior to be similar to that. Now, there's a complicating factor, in that we need the launcher itself to be in the entrypoint.
A: In an ideal world we'd like to specify the launcher and the process type in the entrypoint, but that gets unwieldy with docker run. So the clever solution is to make symlinks named after each process type that point at the launcher, and then we can use the zeroth argument inside the launcher to determine which process type we're running.
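A rough Go sketch of the multi-call idea described here, assuming hypothetical names and paths rather than the actual lifecycle layout: the exporter would create one symlink per process type pointing at the launcher, and the launcher would branch on its zeroth argument.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // filepath.Base(os.Args[0]) is the name the binary was invoked as,
        // e.g. "web" when run through a web -> launcher symlink.
        invokedAs := filepath.Base(os.Args[0])
        if invokedAs == "launcher" {
            // Invoked directly: treat the remaining arguments as a custom command.
            fmt.Println("custom command:", os.Args[1:])
            return
        }
        // Invoked through a process-type symlink: launch that process type
        // and pass the remaining arguments through to it.
        fmt.Println("process type:", invokedAs, "args:", os.Args[1:])
    }

The symlinks themselves could be created at export time with something like os.Symlink("/cnb/lifecycle/launcher", "/cnb/process/web"); both paths here are illustrative.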
A: So you can do docker run --entrypoint web <image> and then provide arguments to it after that. You can toggle between process types using the entrypoint, and if your entrypoint is just the launcher, or basically anything that doesn't match a process type, then you're launching a custom command.
C: Sorry, I can't tell you how happy I am about this. I think this is actually a really, really good outcome, specifically that we go back to using entrypoints, something that every single Docker user is familiar with; it's a standard Docker way of doing what we want to do, rather than the CNB process type variable. I think that's a bigger win than even putting arguments in.
E: Yeah, I don't agree, actually. I think that the expected Docker thing would be to have the launcher as the entrypoint, sort of always, and I think that the process type should be an argument to the launcher. But I also think that that third case there is a huge improvement over the previous proposal.
A: I really like being able to put my process type in the entrypoint, and I like that when I'm running a process type, the default interpretation of subsequent things is args. I am okay with letting a process type be an argument to the launcher if you're not using the multi-call functionality.
B: Now you've opened up this additional place for information, right, being able to configure it using the entrypoint; you use argv[0] of the entrypoint in order to do this. I think we can account for all these use cases because we have that extra dimension. So you could have a special process type name, for instance, such that when you symlink the launcher to it, it does take the first argument as a process type instead of as an override command.
E: And it goes beyond that: I want people to be able to reference their custom process types or whatever. But I mean, it's fine, it's a good compromise. My thing is just that I don't see how it's less ambiguous when the process type name is not an argument to the launcher; to me it feels very ambiguous when the only arguments to the launcher are flags, profiles, and stuff.
A: Actually, the executable... so the difference between, say, case 4 and case 6 is that in case 4 the launcher is setting up your environment for you, and sometimes that's the environment you want to be in: you want to be in the shell that looks like what it would look like when you executed your process. But other times you just want to bash directly into the container and you don't want the launcher, and that's case 6, and that always works regardless of what we do.
B: In that case, what's the order of precedence? That seems like it could be... I mean, it's better than what we had before for the overriding; it worries me less. But adding all the process types to PATH for the whole execution of the container seems like it could lead to really unintentional behavior later, when someone chooses a common process type name in their app. Here's a really common one: you have a web app and you called your main process web. Right now, web doesn't...
A: We could require it to be an absolute path; I'm open to that. My suggestion was going to be: we put them all in a place like /cnb/process in the exported image and we prepend it to the PATH, but when you run the launcher we scrub it from the PATH before we execute other things.
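A minimal sketch of that PATH-scrubbing idea, assuming a hypothetical /cnb/process directory for the process-type symlinks: the directory would be prepended to PATH in the exported image so the entrypoint resolves, and the launcher would drop it again before executing anything else.

    package main

    import (
        "os"
        "strings"
    )

    // processDir is an illustrative location for the process-type symlinks.
    const processDir = "/cnb/process"

    // scrubbedPath returns PATH with the process-symlink directory removed.
    func scrubbedPath(path string) string {
        sep := string(os.PathListSeparator)
        var kept []string
        for _, entry := range strings.Split(path, sep) {
            if entry != processDir {
                kept = append(kept, entry)
            }
        }
        return strings.Join(kept, sep)
    }

    func main() {
        os.Setenv("PATH", scrubbedPath(os.Getenv("PATH")))
        // ...resolve and exec the selected process with the cleaned PATH...
    }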
B: Pretty good. Scrubbing it from the PATH makes me feel a lot more comfortable. I still have some reservations, but I would not block that from happening, if that's the way everyone wants to go. I had some questions more about the argument parsing. So when you do, like in case 3, when you're passing that skip-profiles thing: what I really liked about the 1.3 version of this RFC was that each argument you pass gets fed directly into the process that's running, as argv[1], argv[2], and so on.
B: There's no glommed shell parsing; if you tried to put symbols in there, like a dollar sign, those would not get interpreted by a shell along with the other things, so it's very direct, if that makes sense. And then the indirect mode is just reserved for "hey, I want to skip the profile scripts" kind of stuff. I'm not sure I understand how that works in this latest version.
A: First, well, this isn't any different from the previous version; the thing you imagined was in that middle version wasn't actually there, this is the same. But what it does is guarantee the behavior it would have if you just executed those arguments in bash in the container after the profile scripts are sourced. It does a little bit of magic to get there, by...
A: ...adding all of the arguments as arguments to the first wrapping bash, which then get added as arguments to the second bash, which then get interpolated into a command that evals them in double quotes. It looks a little complicated how it works, but the behavior is very intuitive. So I guess the question is: do you want people to have to specify... do you want to remove the meaning of direct and not direct?
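A loose Go sketch of that non-direct (shell) launch path, with hypothetical helper names and a simplified single layer of bash rather than the double wrapping described above: the point is that the command runs inside bash after the profile scripts would be sourced, while each user-supplied argument stays its own positional parameter instead of being re-parsed as shell.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // launchWithShell runs command inside bash; args become "$1", "$2", ...
    // and are therefore not re-interpreted as shell syntax.
    func launchWithShell(command string, args []string) error {
        // Profile-script sourcing is omitted for brevity; it would happen
        // before the exec inside this script.
        script := fmt.Sprintf(`exec %s "$@"`, command)
        bashArgs := append([]string{"-c", script, command}, args...)
        cmd := exec.Command("/bin/bash", bashArgs...)
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := launchWithShell("my-server", []string{"--port", "8080"}); err != nil {
            os.Exit(1)
        }
    }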
B: So each individual argument is evaluated independently, right? Each of the arguments is evaluated separately; they're not evaluated together and then pulled apart with some kind of string munging, right? We preserve the positionality, the independence, of each of the arguments? All right.
B: In order to use that, and then put a totally different command over there, the user would have to specify bash -c and then do a more complex thing. They couldn't, you know, use arbitrary shell logic and shell primitives to create a complex command inside the command field; you've got to specify the command and the arguments independently. And if you want to do something fancier and introduce more complex logic, you have to do bash -c, and then you can pass all of that as an argument, right? Yeah, I'm just making sure that behavior was preserved.
B: You also evaluate the command parameter, but you don't evaluate it into something that's going to get passed to something else later. You just ensure that the command parameter is executed in the context of bash and that the arguments are provided to it after being independently evaluated. So if that thing is just one binary, then the rest of the things get passed to it and it works out, and if that thing is a script, the rest of the things get passed to it and that also works out. Something like that.
B: I don't understand how you could achieve both the backwards compatibility and the new behavior, because with bash -c and then the binary name followed by the arguments, those arguments aren't actually going to get passed to the binary; they'll just be ignored, because it'll be a little script in there.
E: We don't... I mean, we can spend time on it or not. I just want to make sure that everyone who cared has seen the comment I added trying to summarize what we discussed last week, when we had a little breakout session. I'm trying to find it... where is it? Yeah. There are a few things that came out of that. One is that we established that in order to use a root buildpack, it must essentially be provided by the stack, so stack maintainers can choose to include root buildpacks that they would support.
E: It's really the builder, right, but it doesn't have to be in the order; it can be in the order, but it doesn't have to be. But those are the only root buildpacks that you can include in your build's buildpacks or project.toml. And then, the part I think I'm still a little fuzzy on, is that the run image also needs the root buildpacks, but we're going to do some magic so that they're wiped from the final app image.
E: You can't use arbitrary root buildpacks without somehow adding them to the builder. So in this comment, which I can probably link to, I'm saying the thing that I still want to figure out before we move forward is the UX around extending the builder so that you can add whatever root buildpacks you want. I want to make that easier if possible.
B: The main restriction is just that a buildpack author can't, you know, distribute a buildpackage that has a bunch of root buildpacks in it and slows everything down, right? Or that we don't end up shifting the community over to everything being a root buildpack and all builds being very slow, right? The primary way people use buildpacks should be through the buildpack interface and not by doing whatever they want.
B: I assumed that, like, you'd be able to pass a root buildpack at build time; it says here there's a mechanism that would extend the builder on the fly with every buildpack specified at build time. I don't think there's any opposition to that. It's just, you know, at least to start, making sure that root buildpacks are about sort of stack-level functionality, in the sense of "these types of users on this platform can install Ubuntu packages," right, and that can be very seamless; that could happen automatically when a...
B: ...buildpack requests, you know, packages via the build plan interface. I think one thing I don't see called out here is that we talked about using the build plan interface and mixins, so that either a stack could provide mixins or a root buildpack can provide mixins, and on larger stacks the root buildpack can do less work, to kind of help with some of the performance issues. But the only restriction is that we can't make it so all buildpackages everywhere have an apt root buildpack in them and install packages.
B: Well, I think there are also some issues with root buildpacks and the run image where, if we have to be dynamic... I'm not sure. Emily, you had some thoughts about dynamically adding root buildpacks to the run image, and that being much harder than having them pre-baked into a builder-like run image. That's one thing we talked about.
A: Correct me if I'm wrong, McNew, but I feel like it'd be very hard for kpack to have a builder that had root buildpacks and then execute the flow as it's described in this RFC, because you need an image that has, at its base, their run image with root buildpacks on it, and has some sort of...
A: ...root-buildpack-runner lifecycle phase in order to generate the run layers that you want on your final image, and not every platform can dynamically make an image on the fly; it would be very hard on Tekton as well. So the idea is that create-builder, if you have root buildpacks, can spit out two images, and then, when you finally export, what you're using as your base image is the run image.
F: I guess we could talk through that case, but to me that doesn't require the stack restriction; that seems separate to me. Because in the case where we're just executing it locally, without it coming from the stack, you're going to be using the pack platform: you can create that ephemeral builder and those ephemeral run-image builders, for lack of a better term. But you're not going to be doing that on Tekton, so even on Tekton you could be using the builder built with root buildpacks inside, I think.
A: The stack restriction, and I need a better word for it, if I'm understanding correctly, and maybe I'm interpreting this differently than Joe is, the point is just that people can't ship meta-buildpacks with root buildpacks in them, because then they would proliferate everywhere. You're trying to be a little bit more deliberate about how you add them.
B: For the upgrade to work, the root buildpack needs to be included in the run image, and I don't think we want it included in the application image itself, right? So the run image you're rebasing against has to include the root buildpack, which means that it has to kind of be baked on initially, right? It can't be something that's dynamically provided; we don't want someone to have to dynamically provide it in order to rebase. And so any kind of dynamic...
B: ...you can't do a rebase. If you modified the base stack, rebase stops working, because your run images don't have those packages and you can't swap layers across a boundary where you made arbitrary changes, right? We can only do what we do because the layers that buildpacks create have this contract: they only go into empty directories and don't fight with existing things. So rebase isn't a safe operation on those images.
B: You have to extend the original image with the root buildpack again, and then you can do the rebase afterwards. That's the operation, upgrade, that would replace rebase. So if they're just doing a rebase, they can very safely only ever run upgrade: in cases where a rebase is appropriate, it does a rebase, and in cases where it needs to do more than a rebase, where it needs to extend the run image with the root buildpack, it would do that first and then reapply.
D: Can I get a concrete example of what that would look like, what exactly would break? If, for instance, I have an OS stack image that has something installed in it, and you're saying the root buildpack could either delete or add additional packages, right, you're saying that I can't swap the lower base layers anymore, because...
B: It's not even ABI compatibility; it's much worse than that. I'll give you a really concrete example. Imagine you install a package that adds a user to /etc/passwd, right, and then later the run image gets a security patch that, say, removes a password that was accidentally added to a user in the base image, or makes another change to /etc/passwd to improve its security posture. When you do the rebase, you either lose the lower changes or you lose the modifications to the new thing.
B: Yeah, right, rebase does not do a merge. The reason that only we, and Jib and ko, use this technique of being able to rebase layers is that we've been strict about, you know, the contract of the layers we generate. The rebase operation becomes a more complex operation when you talk about installing Ubuntu packages, and there are all these technical limitations that apply. And so the reason to restrict root buildpacks to the stack is kind of technical things around...
B: ...you know, dynamic image generation being difficult on some platforms, upgrade needing to continue to work in a way that's not complicated, and not wanting to proliferate... not wanting to throw away all the work we did to build the buildpack API and just have everyone write root buildpacks that don't have these sorts of performance and metadata benefits.
B: We talked about this a little bit. It's safe as long as the root buildpacks are safe, which was another reason it's nice that they come with the builder: if they're approved by the builder authors and will only make ABI-compatible changes, then it is safe, right? So if we make those restrictions, we absolutely can have our cake.
E: This is where we kind of start to part ways, in that I am kind of okay with people having a footgun where they use a root buildpack and then break ABI compatibility, and rebase does something but the final image doesn't work, or whatever. And I think, Steven, you're saying that you don't want to give them that footgun.
B: That, for me, addresses it completely. But the root buildpacks needing to come on the stack, like when you build a builder there's a run builder and a builder, like we talked about before, is so that rebasing is preserved, right, so that we can keep doing those operations. You know, the technical limitations put in the additional restriction that it's going to be tricky to make it so you can dynamically select root buildpacks immediately at build time without a lot of additional images being generated, but I'm happy to make it...
B: ...so you can extend a builder to include additional root buildpacks if you need to, right? That's just a lot more flexible than when we talked in the past. You can extend a builder with a subset of packages, right, or you can extend a builder with just, generally, the ability for apps to install packages at build time. That seems like a very good path forward, so that's a lot of flexibility, also because the root buildpack isn't a buildpack that installs a preset set of Ubuntu packages.
B: It's often a buildpack that could install any of a bunch of packages, and 99% of our use cases are installing Ubuntu packages, right? Maybe with the addition of making it possible in a builder for some root buildpacks to always run regardless of what the user selects, I don't see a use case that we're not accounting for by doing this.
B: Yes, something I mentioned last time that wasn't reflected in the update is that, if needed, I think a feature that says some root buildpacks run before all other buildpacks no matter what would make it so the developer doesn't have to touch that interface at all, right? Either their stack has it pre-built and it works and it's performant, or their stack doesn't have it built in and the apt root buildpack installs the additional packages, but...
C: In that SQLite case that Joe was talking about, as a buildpack author, if I have that dependency, whether it's a mixin or a thing that I want to enable, what is my interface as a buildpack author to deal with that? Do I just list that as a mixin and then, if the stack happens to have the root buildpack... I guess the problem with mixins was that you have to list them ahead of time, right?
B: What we talked about is making mixins a more dynamic thing that's determined in the build plan during detection. As opposed to only statically declaring mixins on your buildpack, you could also require them in the build plan, so that it just works out: if it's satisfied statically by the base image, the root buildpack doesn't get the requirement; if it's not satisfied by it, the root buildpack installs the package, and it happens seamlessly without the user's involvement.
E: That's kind of why I didn't address that in the comment; I think there's still a lot to think about there. But to Terence's point, my experience is, the question, whether it's Python installing SQLite or a Rails buildpack that installs ffmpeg when you use Active Storage, is that it can ask for the mixin, but then I think you would still need to tell the users of that...
E: ...buildpack to, like, manually go add something if the stack doesn't already provide it, right? Or are you thinking... well, I guess if the stack that you're using has its own apt buildpack, it can see that request for that mixin and provide it. But if the stack you're using doesn't have SQLite or whatever, and also doesn't have an apt buildpack, then you have to go manually do something, and...
B: There are three options, right? First of all, I think most builders are going to include a generic apt buildpack or yum buildpack or whatever it is that can install packages, if they're comfortable with allowing users to install operating system packages dynamically, right? Like the open source ones; for Paketo I'm sure you would do this, right? But if they're not, then the user sees that, hey, no stack or root buildpack satisfied this requirement, and then the user would know:
B
I
either
need
to
satisfy
this
by
extending
my
builder
to
include
additional
packages
or
by
adding
a
root,
build
pack
to
the
Builder
and
allowing
the
functionality
of
arbitrary
act
build.
You
know,
package,
installation,
I,
think
if
they
want
to
restrict
it
to
just
okay,
you
can
install
Ubuntu
packages,
but
you
can
only
install
these
would
go
to
packages.
They
could
create
a
custom
rebuild
pack
that
you
know
just
provides
that
small
selection
of
mix-ins
right
and
install
that
on
to
the
Builder
image.
B: The buildpack that needs SQLite would say during detection, "I require SQLite." If that's satisfied by the base image, then everything's good: the root buildpack doesn't get anything, everything passes, there's no problem. If it's not satisfied by the base image, the root buildpack, you know, sees that during the build phase, gets that package, and installs it, but...
F: I'm thinking about, like, our case, where we're building builders pretty dynamically. Right now we're able to provide very quick feedback if somebody attempts to, say, put a buildpack on a stack image that doesn't have a mixin, but in a world where mixins don't get satisfied until detect time, we're not able to give the user feedback until much later that perhaps they're trying to build something on a stack that'll never work.
B: Like, what if you wanted something to be able to run on a... like, hey, you're going to know if your builder has an apt buildpack; you could allow it. Yeah, I see what you mean: you're saying, why would you put mixins on a buildpack statically, restricting it from being run on a smaller base image, when dynamically the package could just be added? Yeah, I think that's definitely something to think about.