From YouTube: Working Group: 2021-04-28
Updates

B
We're getting ready to ship a patch release of the lifecycle to fix an issue with caching for buildpacks on the 0.6 buildpack API.
B
So far, not many, if any, buildpacks that are out there, at least on the suggested builders, have upgraded. So we think the impact is minimal, but we're getting the fix out as soon as we can.
A
All right. Oh, this is sorted so there are no subteam RFCs here, and no drafts. So: "add build write flag for layers." This is from Sam; we've been talking about this a lot, Sam. Anything you need to get this going forward? Seems like you've got an approval from Joe.
B
I should go... I should go back and look at that. Cool.
A
So, almost there on this one: "Allow setting default command arguments that can be overridden by the user." This one was the more controversial one. Looking at Emily.
D
Yes, we need a controversial RFC. This came up again today, about how the way we handle shell things is weird. I wonder if you could put something on the agenda for later, because, Sam, I think part of our plan, which we had sort of verbally discussed and which still needs to be captured, would involve removing support for profile scripts, and there's some feedback that the exec.d interface is maybe not the friendliest for people.
A
All right. Oh gosh, the one about what we recommend for different build/runtime user IDs. I think this one is going pretty well; it's missing an approval.
A
But I thought I approved this one. I guess not. Oh no, this already passed; sorry, I missed the FCP label. Terence is the shepherd for this one. Terence, are you there?
A
Sure: passed FCP. Cool, thank you. "Disambiguate layer metadata files from app metadata."
C
I updated the alternatives with all the ones that were proposed. I think this one came out at the top, and there are four other alternatives, if people want to take a look at that.
A
So, just waiting for more review on this one. Yeah, cool. "Guidelines for accepting component-level contributions" is mine. I thought at one point I went through and addressed a bunch of feedback. Was there more feedback I was supposed to add to this?
B
I think you accepted a thing I suggested, cool, so that's what happened. Now I probably just need to look over it with all the changes in, but there's something probably blocking it in my mind.
D
Speaking of forgetting things that we talked about last time: Natalie had brought up the great idea that we should perhaps have a notetaker for these meetings, and I kindly volunteered to do it. I had forgotten to mention at the beginning of the meeting that we had that conversation. But I wonder if we should start taking better notes during working groups, because I think we are all less good at remembering things than we think we are.
A
Good. You need an RFC for that... just kidding. All right. I guess, Natalie, definitely feel free to take notes, and I think it should just go into the same doc, right? I don't think we need, like, different docs for that.
A
Yes, Terence, you're the shepherd for this one; maybe you could work with Javier. Seems like he was... oh, Javier volunteered to help with the issue creation from last time. Yeah. So, I don't know, you should figure it out.
D
This simplifies a lot of things in the lifecycle and would give us a chance to clean up some of the weirder things we've done to support arguments for bash scripts, because there's no great way to randomly append arguments onto a script and evaluate them correctly, but know when certain things are not supposed to be... when certain quotes are supposed to be preserved, stuff like that. The usage for some of the features that are most useful, like appending arguments, is weird in bash.
E
Would that mean that variable... what do you call it, interpolation... wouldn't happen? Like if you put, say, $PORT in your...
E
It's pretty common for tools, PORT in particular, where they won't read it from a file or the env var, and you've got to pass it in on the command line, kind of thing.
A
If we wanted to reduce our dependency on bash but still allow a mode where you don't have to change your command and you can still do variable references, you could have a flag you can turn on for your process, something like "substitute", and then it would do substitutions. But it would depend on envsubst existing there. There are also Go libraries that'll do that too, that implement the same logic, so we wouldn't even have to depend on the Unix thing; you can just do the variable substitution yourself.
D
I don't think the launcher would interpolate values for you in this world, right? If your command had $PORT in it, we're not going to interpolate it; your command would have to be `bash -c` plus a command that has $PORT in it, and that would just work. And then, if someone wanted to set the port, you know, you could just set the environment variable, but you couldn't pass a new flag that has the environment variable in it.
A
I was saying, in cases where you don't want to `bash -c`, right, or you just want to specify a command and you want an environment variable to be in there: we could add an extra flag to the process, like "substitute", and then we can use our own substitution logic. That kind of substitution for variables is actually fairly standardized. There's, you know, a Unix command called envsubst that'll do it, but there are Go libraries that do the same thing, and so we could still provide just that functionality, optionally, without having to even rely on a shell, which is nice. So you could write a buildpack that isn't shell-specific but still has that functionality, which is maybe even a little better than it is today, in some ways.
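A minimal sketch of the substitution-only behavior being floated here: expanding `$VAR` / `${VAR}` references in a process command against the runtime environment, without invoking any shell. The `substitute_args` name and the flag it models are illustrative, not a shipped lifecycle option:

```python
# Hypothetical "substitute" behavior: expand $VAR / ${VAR} references in
# process arguments against the runtime environment, with no shell
# involved. This mirrors what envsubst (or a Go library such as
# os.Expand) would do.
import os
from string import Template


def substitute_args(args, env=None):
    """Expand $VAR / ${VAR} in each argument; unknown variables are left as-is."""
    env = os.environ if env is None else env
    return [Template(arg).safe_substitute(env) for arg in args]


if __name__ == "__main__":
    print(substitute_args(["serve", "--port", "$PORT"], {"PORT": "8080"}))
    # → ['serve', '--port', '8080']
```

Because `safe_substitute` leaves unknown references untouched, a command with no variables passes through unchanged, which matches the "you don't have to change your command" goal described above.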
E
Basically, everybody writes a buildpack, you know, and these things trip them up. And I assume this is also related to profile.d and stuff like that, right? Which... no, like, I'm happy to keep discussing that; we talked about using exec.d instead, but, you know, maybe there's also another path there where it's not bash but we can still... I guess, what kind of script do you put in your profile.d if you're not using bash? I think there's definitely a trade-off here in user experience.
C
I think this was also something that bit me today, where I was using `direct = false`, and the variable interpretation was being done in cases where it didn't need to be, and that was causing things like required quotes to get stripped off.
C
Sorry. I had to set it back to `direct = true`, and then, as a consequence, I had to move all of my profile.d scripts to exec.d and change my entrypoint scripts (the process commands and whatnot, which were using environment variables directly) to convert that into another bash entrypoint script, and that was a huge pain.
A
It's like we're all aligned on the same goal, which is a better user experience for, you know, putting code in different places, writing start commands. And if we can achieve a good user experience there that doesn't involve a hard dependency on bash, then, you know, to me that's like a big win-win. But I definitely agree that we can't just rip bash out and say you're on your own; it has to be a well-thought-out thing, right? I really liked what you linked about how k8s does it. I had no idea. You know, it's not even exactly the same environment variable format as bash (it uses parentheses instead of curlies), but I had no idea, and it makes so much sense that they would want to do that to improve their user experience for solving what's essentially the same problem: providing commands.
C
K8s only evaluates the environment variables which are passed in the spec itself; it doesn't evaluate everything, as far as I know. So it won't evaluate the things that are inside the container. It will do the evaluation before the container is launched.
A
It wouldn't have to be a binary, right? It could still be a shell script; it would just have to be an executable shell script, and that's what's happening with exec.d.
C
There was one thing: the file descriptor, having to test that. Previously I could more easily test those profile.d scripts: even if I'm not using them directly with, like, a buildpack test or something, I could just verify those bash scripts by running them locally. But with exec.d I need to first create a file descriptor and check what the outputs are, when I'm just developing the profile.d or exec.d script, not developing the buildpack itself. So that's another layer of, like...
A
Can you explain that a little more? So in the profile case, you can access environment variables because you're sourcing the profile script and they're set in the current environment. In the exec.d case, you should just be able to set the environment variables, make sure they're exported, and you should see them in the exec.d when you're testing it. What was the difference in testing?
C
Whereas for exec.d, I first have to... I mean, it's just one step, but it's, like, creating a file descriptor and then checking that the values that were output to the file descriptor were correct or not.
E
Yeah, these are existing profile scripts that I use, and again, this is not, like, just about me, right? But I think the things I'm doing in creating these buildpacks are pretty indicative of what a sizable number of buildpack authors would want to do, especially, again, if they're writing their buildpacks in bash, which I think we've seen is common.
E
Yeah, I'm gonna have to, like, experiment with moving them over. Like, I don't know; there are so many funny things about profile.d, like the order they run in is, I think, technically non-deterministic. Like, don't `set -e`, because you don't know if something else is gonna not honor that and then screw the whole thing up. So there's just a bunch of little stuff like that I've learned over the years, and I don't know how it translates, you know.
A
I was just going to say: in fact, the file you've written here, you didn't need to put a shebang at the top (the `#!/usr/bin/env bash`) in the profile case, but that turns it into something that's executable, and the file appears to be executable in git, and so it would just all work, I think. Like, you've inadvertently written this as an exec.d.
A
It's because we wanted to preserve log output, and so standard error and standard out are still connected to logs. Because it's just kind of part of your running process, we didn't want to hide, you know, log output that would have to happen first. So we kept, you know, one and two connected to stdout and stderr and said: okay, we'll open three, and three is going to be where our environment variables go.
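A sketch of what an exec.d-style executable looks like under that contract: file descriptors 1 and 2 stay wired to the log stream, while environment variables are written as TOML `key = "value"` pairs to file descriptor 3. The variable names and the local-fallback behavior here are illustrative, not prescribed by the spec:

```python
# Illustrative exec.d-style executable: stdout/stderr remain ordinary log
# output; env vars are emitted as TOML key = "value" lines on fd 3,
# which the launcher opens for the process.
import os
import sys


def render_env(env):
    """Render env vars as the TOML key = "value" lines the launcher reads."""
    return "".join(f'{key} = "{value}"\n' for key, value in sorted(env.items()))


if __name__ == "__main__":
    print("computing runtime configuration", file=sys.stderr)  # goes to logs
    output = render_env({"PORT": "8080", "WEB_CONCURRENCY": "4"})
    try:
        fd3 = os.fdopen(3, "w")  # fd 3 is provided by the launcher
        fd3.write(output)
        fd3.close()
    except OSError:
        # Outside the launcher fd 3 may not exist; fall back to stdout so
        # the script can still be exercised locally (the pain point above).
        sys.stdout.write(output)
```

The fallback branch is one way to recover the "just run it locally" workflow that sourcing a profile.d script used to give you.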
D
I feel like there's an argument to be made that we should have taken output on standard out and then standard error for logs, because that's sort of how small, sharp Linux tools work a lot: standard out is supposed to be the output that you can then pipe to other things; standard error is where you just write information. But I feel like nobody does that in practice, and everyone would log to standard out and then it would all break.
D
We could instead pass, like, a path to a file. I think that would make the most sense to people from an intuition perspective. It's a bit slower to have to write things to disk when there's no reason to, though.
B
For the output, we don't describe what it has to be, but maybe when we said key, we meant value here in the spec. When we say "basic string", does that mean they can't include env vars? Like, will they be interpreted, or not interpreted... or interpolated, rather? Just looking at Joe's favorite example, which is...
A
In the exec.d case they're not interpolated. It's key = value (I think it's actually TOML key = value format), and it's not interpolated, in the same way that profile.d is not interpolated: you can do the interpolation locally in your profile.d script, but in both cases you can't put a variable in the variable and then expect it to be interpolated later. So the behavior is the same as it was before.
A
Joe, something you said a bit earlier was about the order. The order for profile.d is enforced: it's the layer order, lexicographically, and then it's the same ordering rule for the profile.d scripts themselves. And I think exec.d has the same defined order as profile.d does, so it wouldn't change, shouldn't change, the order they're executed in; it just changes the directory name.
E
Yeah, actually, I forgot that. That solves, like, 90% of my problems. Actually 99, probably, because within the same buildpack I don't usually care: like, if there's a potential conflict between two of them, I can solve that. The problem is always, like, some other buildpack's profile.d script that I didn't know was going to run.
A
There might just be... it might be a little hiccup during the migration, where somebody switches, on a lifecycle that continues to support bash, from profile to exec.d, and that could make their thing higher priority than other buildpacks' things. That would be something to think about.
D
Basically, have the launcher act as a wrapper around a profile script that it runs; then it takes whatever changes it sees in the environment and sends them back. It'd be slightly different from how profiles work now, in that they wouldn't actually all be sourced in the same shell, but it sounds like it would actually solve a lot of the problems you're running into.
E
Yeah, yeah, yeah, like I said. And again, it's not, like, just about me; I just think that the things I'm doing are representative, so that's why I'm, you know, trying to suss all this out.
A
I think something else I like about getting rid of bash is I kind of get the feeling people are moving away from bash in different contexts. So on a Mac, if you install, like, the latest, I don't know, whatever it is, you get zsh. And on Ubuntu they like dash now more than bash; you get both in the Docker image, but they seem to think dash is...
E
Yeah, like I said, I don't actually care about bash, it's just... and dash would align with, yeah, most of the Docker ecosystem, probably. It's just user experience that I care about.
B
Sorry, can I, for my sake, can I get, at a high level, how we feel about this convention just in general? Right, like, there are plenty of other tools in, you know, our space that do enforce, like, sh or something like that. I guess I've never really heard this questioned up till now.
B
Just the idea that you have an executable binary as the default, and you don't have to opt into anything. That's the...
A
Do you mean, like... there's a convention in some tools where, if you provide a command string as opposed to a command and args, that'll get... like in Docker, if you do an entrypoint that's a long string instead of an array, it'll run `/bin/sh -c <that thing>` automatically for you. And then sh, like, tries to be this POSIX-compliant shell, so that, you know, all the shells are supposed to kind of behave somewhat similarly if they're executed through that link. You're talking about that kind of convention?
B
Yes: that you don't have to specify your executable or your shell; it's implicit. It's implicitly defined.
A
I think we're talking about a model where there is no shell, right? The operating system executes the thing directly, and if you want to have a shell there, you have to, you know, kind of explicitly specify it. And so, like, I think that is part of the decision. It's like:
A
Should we keep a mode where it implicitly puts `/bin/sh -c` somewhere? Right? You know, but it's complicated, because we have these profile scripts that should kind of match that shell, and they're currently not in POSIX shell; they're in bash, for bash specifically, and they all have to get sourced by the shell first, before the commands run. And so part of wanting to drop it is that that interface means bash is mandatory, separately from all that. Or maybe I'm not answering your question. Sorry.
D
I think there are, like, two cases to think about. The one we've been focusing on is, like: a buildpack contributes a process type, and so in some ways it's telling you what it wants to run no matter what. It's, like, a case where someone is saying what they want the process to be, but it's just about whether we stick bash in front of it or not by default.
C
I think, as far as I know, Docker themselves don't recommend the string form of entrypoint. They tell you to prefer the array form over the string one, because the string one has some other issues: like, any extra arguments you specify on the command line are dropped, which is, again, not intuitive.
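For reference, the two entrypoint forms being contrasted look like this in a Dockerfile. The shell form gets an implicit `/bin/sh -c` (so `$PORT` is expanded, but `docker run` arguments are dropped), while the exec form runs the binary directly; the `./server` command is an illustrative placeholder:

```dockerfile
# Shell form: Docker wraps this in /bin/sh -c, so $PORT is expanded at
# runtime, but any extra arguments passed to `docker run` are ignored.
ENTRYPOINT ./server --port $PORT

# Exec form (the documented recommendation): no implicit shell and no
# variable expansion; `docker run` arguments are appended as args.
# (Commented out here because only the last ENTRYPOINT takes effect.)
# ENTRYPOINT ["./server", "--port", "8080"]
```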
A
I think if Docker could, you know, make this decision again today, I don't think they would choose to do any of the `/bin/sh` stuff. I think they would take the approach k8s took, which is to say: here's your command, here are your args, fill them out. I don't think they would call them entrypoint and command. You know, to me that feels like a historical decision that maybe made a lot of sense at the time, but, you know, I wouldn't keep it.
C
So you find this out by experience and not through documentation, which is another thing. Like, it hits you when someone's using your buildpack, tries these random things, and then they come back to you and tell you, "Hey, this thing doesn't work," and then you have to go look into the launcher code to figure out what's happening, which also isn't great.
C
Like, like how this interpolation works, where it gets rid of quotes, or... oh.
D
There's probably some nod to it in the spec, I swear, but maybe at the time we weren't as good about capturing everything. I'll look for it. I just did a bad job describing it, is what I'm guessing happened: it's in there, but you would never know that that is what it's trying to tell you.
A
Sam, I think these are both things you brought up, so I'm looking.
C
I've documented it in the discussion, just in case we don't get enough time, but here it is; the case is described there. It starts off with three buildpacks. You have one buildpack that's a generic system package manager that provides, like, generic system packages.
C
Now, the provisions and requirements for each of them are also noted in that discussion. So the system package manager, for example, provides system packages. Go provides go and requires... like, it can request system packages, with different versions of go; let's say it supports 1.14, 1.15 and 1.16. And go mod just reads the go requirement in go.mod and requests that from the go buildpack. And currently there is no way for such a system to communicate and exist.
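In Build Plan terms, the provides/requires shape being described might look roughly like the following. The dependency names and the version metadata are illustrative, not taken from the actual discussion (each buildpack would emit its own plan; they're grouped here for comparison):

```toml
# system-package-manager buildpack: offers OS-level packages
[[provides]]
name = "system-package:golang"

# go buildpack: provides "go" and would like to require the system
# package at one of the versions it supports, but it only learns which
# one is needed during build, after the plan is already resolved
[[provides]]
name = "go"

[[requires]]
name = "system-package:golang"
[requires.metadata]
versions = ["1.14", "1.15", "1.16"]

# go-mod buildpack: requires "go" at the version read from go.mod
[[requires]]
name = "go"
[requires.metadata]
version = "1.16"
```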
C
That's sort of the issue. Now, these problems go away if the go buildpack is directly aware of the system buildpack and requests packages from there directly, or if your go buildpack itself provides the go binaries in the build process rather than requesting them from the system package manager.
C
But then you lose the flexibility of having a generic package manager, which I guess you'll get with something like stackpacks, and of having, like, these shim distribution providers: something that provides python or go specifically and configures some extra environment variables for them before being required by some other downstream buildpack.
B
This is the kind of place where it falls apart, because either of the first two buildpacks here could provide go, and you don't know which one is providing which version.
C
Where it falls apart is that you can't ask... like, the go buildpack will never be able to request the current system packages, because it will only get to know about them during the build process, and by the time it's gotten to its build process, the system package manager would have already installed everything.
C
Yeah, so the issue there was, like: let's say you have a go mod buildpack from something like Paketo, which doesn't need to know about, doesn't need to depend on, a specific system package manager. The go mod buildpack just requires go, and you could have that go mod buildpack running on, like, RHEL or bionic or whatever; you can have, like, a cross-stack go mod buildpack. But by introducing a direct dependency on the system package manager...
D
Good, yeah, I'm gonna jump in and try to explain, just to make sure I'm understanding it, and you can tell me if I'm getting it, Sam. It's like: the go mod buildpack wants go. The way the system is set up, they'd like the system-package version of go, but you don't want the go mod buildpack to directly declare that, because then it couldn't work with other providers. So, basically, like, a proxy buildpack that looks for the request for go and then asks for the system package.
C
Yeah, so, like, the go mod buildpack doesn't care where the specified version of go is installed from; it just requires that specified version of go. And then, if you directly request, like, go 1.16 from the system package manager, you'll now introduce this direct dependency on apt, or on RHEL, like, yum, or, like, some other thing, where it doesn't need to be.
D
I think this gets weird with stackpacks in particular, right? If this was just one regular buildpack providing a dependency, you would just use the same name, right? And you wouldn't need this sort of translator buildpack that takes go and turns it into...
C
I mean, that's not great, but I guess you could do that; you could either directly hard-code, like... So let's say you add another system package manager. So, apart from Ubuntu, let's say in the future we have a RHEL stack, and that has a different set of package names. So now you have three plans: one where it requests it using yum, one where it's requesting it using apt, and one where it's just requesting go, for example.
C
So that's why I wanted this proxy buildpack: so, like, if you're on any of these stacks, the proxy buildpack is the one responsible for providing go, and the go mod buildpack can just be reused across everything.
A
In a way, stackpacks introduced this problem, because before, we said that all base images, and therefore operating system packages, had to be rebaseable, LTS, ABI-compatible bits, and so you'd never really want go from your operating system, because it would be go from, you know, 2018, with some security patches backported to it, right?
A
It's not really what you want to build your application with, most likely. Although there are arguments for... you know, I shouldn't speak for every app in the world, right? But in general, people didn't want to do that. Now, with stackpacks, you really might want to get your go from an RPM distribution, right, that the stackpack is going to install. But I wonder if... you know, stackpacks are going to be a lot less performant; they're gonna break rebase. Like, it's...
A
It's never gonna be preferable, to me, to use stackpacks over a buildpack that installs the binary. So I wonder if this limitation is actually a good thing in the end. It discourages... it makes buildpack authors, you know, write buildpacks that work on top of the API in places where you have that kind of application-level dependency, and relegates operating-system dependencies to a separate thing that's, like, LTS C libraries, but maybe more of them than you have right now.
A
Yeah, I think you want this to be analogous to the case that you made earlier, with, it was, like, node packages or something, and that example felt... like, I felt like there was a real limitation there, if that makes sense. It's just, with this example with the system packages, I kind of don't want to solve this example version of it, if that makes sense.
D
So it's like, you could say, "Oh, my conda buildpack provides go" (like, it can), and then my go mod buildpack requires go, and you don't need a middleman. But the problem is: do you want to list literally every single thing a person could install with conda? Probably not. And then, also, the way the build plan works, you can't say "I can install any of X"; you have to say "I can install exactly this set of things, or exactly that set of things." You'd have a combinatorial explosion that would make it...
C
The other one was having, like, a resolve stage that runs after the detect stage and runs in the opposite order from the build order, so the one that's getting built at the end would resolve at the beginning and then pass information forward and make modifications to the build plan, so that by the time it comes to building, it has all the information it needs.
C
So it's like a two-pass system. And the last option was having something run before the detect stage that can modify either, like, environment variables or, like, a version of the application directory, which the detect API then runs on.
A
Maybe capture those in the discussion thing, because we're out of time.
D
I worry that a resolve stage would really slow down detection, because now, instead of running detect for every buildpack once and figuring out what groups of things can work together, you have to go back and rerun them for every plan. I think, because we do it so efficiently, it's not obvious to people, like, how many permutations of buildpacks we are attempting to figure out (whether they would work) before we give a group the thumbs-up.
A
In any case, we should call time, though. Let's pick this back up. I think we have the thing scheduled for tomorrow, like a demo, but we can pick this back up either in the second half tomorrow, or...