From YouTube: Working Group: 2021-09-09
A
All right, so first thing is introductions. Do we have any new faces here? I don't think so. So, move on to release planning and updates.
B
All right, I could speak to the platform side. pack is being released today, or tomorrow at the latest. We've had an RC out there; no issues found or reported as far as I'm aware. That's what we've got going on.
C
On the learning team side, we finally had CNCF follow up on the Katacoda issue. Looks like we can embed the tutorials right in the website, so we'll probably start doing that soon.
A
All right, moving on to the first item on our agenda: the SBOM RFC.
C
Okay, I think the only updates that I have here are from Terence's comments about having the ability to see what SBOM formats a buildpack supports on the registry.
C
Maybe so — I added the sbom key to the buildpack.toml, which is an array of values. Currently we only allow — or I propose allowing — CycloneDX and SPDX, but we could add more in the future. We can also use this metadata to add warnings from pack or other platforms: if, for instance, there are multiple different formats in the same build process, it can warn that the SBOM may not be complete.
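For reference, a minimal sketch of what that buildpack.toml entry could look like — the key name and the media-type values here are my reading of the in-flight RFC, not a finalized schema:

```toml
# Sketch of the proposed buildpack.toml addition (key name and media types
# are illustrative; the RFC is still under discussion).
[buildpack]
id = "example/node-engine"
version = "1.2.3"

# SBOM formats this buildpack can emit; platforms and the registry can
# inspect this, and warn when a build mixes buildpacks whose declared
# formats don't overlap (so the merged SBOM may be incomplete).
sbom-formats = ["application/vnd.cyclonedx+json", "application/spdx+json"]
```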
C
The other comment that Patrick made this morning is that CycloneDX apparently has a concept of BOM references. So, in the final merged CycloneDX BOMs that you're generating, if we see an SPDX BOM for a specific layer, we could actually just refer to that and say: instead of a CycloneDX BOM describing that layer, here's an SPDX BOM that describes it. That could potentially be used.
C
I think the only real blocker, or thing to figure out at this point, is: how do we restore the bill of materials across different builds? But apart from that, I think the RFC is pretty much done.
D
This sbom array is just to indicate additional SBOM formats outside the legacy CNB format, because I assume we're gonna keep the legacy CNB format — I'm not going to say forever, but "forever" seems like the best word I can use. So this is just additional SBOMs on top of the legacy format.
D
All right, that's good to know, because I thought there was some discussion at some point of keeping the older legacy format, because in some ways you can have an enhanced amount of data, since it's such a free-form field. But if the plan is to ditch that as soon as we make the API swap-over, I'm just as happy with that.
C
I think the only reason I mentioned the legacy format here is for backwards-compatibility reasons: if a buildpack cannot update to the new API and the new format, we should still have a way of preserving that data and showing it to the user somehow, if they want to make some sense of it. But I don't think it makes sense to support the legacy format in future APIs, because there are no real tools out in the wild that support that format.
C
So, anything from the core team? I think yesterday we wanted to keep that functionality, even though it's not implemented, and have a way to restore the SBOMs, as opposed to leaving it to the buildpack to also figure out how to do it.
C
Yeah — I think even if you put it on disk, how would you have that available for the next build?
A
Makes sense. Does it block the RFC, or is it just an implementation detail?
A
But given that, it feels more like an implementation decision to decide exactly how we recover that layer, and it might be platform-dependent. For me it doesn't block it, as long as we say that, yes, we do need the correct behavior — but whatever people feel.
D
If it's worth anything: currently, for most of the Paketo buildpacks, we regenerate the bill of materials every time we do a rebuild — as soon as we're done with the logic that selects the dependency, we create a bill of materials, and then do the logic of whether or not we actually install it. So having the ability to restore it is only really valuable in the case where we're doing something like scanning our node modules, which we use a third-party plugin to do. So it's not a huge priority for us right now.
C
But I don't know if you want to do it as part of this RFC or the other one, because there are also some open questions on how we store the SBOM in general — apart from the restore/rebuild question, how do we store it at all? Currently it's on a label, and that won't be feasible, I think, with us generating SBOMs from a bunch of these tools; for example, the node modules one sometimes generates SBOMs which are a couple of MB in size.
A
Any other open questions about this? I know, Terence, a couple of times here you brought up some concerns — which, you know, I really shared initially — about supporting multiple SBOM formats. How are you feeling about it?
D
I put a comment the other day that Sam addressed, where I said that you convinced me we needed more than one, just because there isn't a winner, and different use cases favor different SBOM formats. But I think the buildpack.toml changes were the kind of thing I was looking for. It's not the most ideal thing, because in theory you could lie or whatever, but I'm assuming buildpack authors — you've got to trust them to some degree.
D
So I think they'll just help platforms: if a platform only supports one SBOM format or whatever, they can at least know ahead of time that this buildpack won't provide them the stuff they need, and they can either put out a warning or something. All right, got it.
A
Is the list in the RFC — do you specify whether the list is valid on a meta-buildpack as well as a regular buildpack? You probably don't want to allow it on a meta-buildpack, since you can calculate that from the component buildpacks' buildpack.tomls; a separately declared list could be less accurate.
D
It looks good to me; it just seems like Sam and Emily need to touch base and figure out whether it's a blocker for her or not. Cool.
A
The Dockerfile one at the end — I don't have anything specific for it, and this seemed more — I don't want to say controversial, but there's more to figure out, if that makes sense. So I think we put this one first, but if you'd like to skip it, that's okay too.
C
So this RFC tries to solve the issue where the buildpack, or the build process, wants to export layers which do not include the /layers directory or the workspace. There are some use cases for it where you want to install software in specific places, those places are rebasable, and you want to preserve them in your final application image. Some examples that I have here are /opt directories, or lambda extensions, or other common standalone software.
C
I know some of the RPM packages can also be installed as a standalone thing in their own directory, or inside /opt. Similarly, if you want to preserve some configuration or settings in the home directory for a user, rather than regenerating it during runtime through some exec.d hooks —
C
— we also do that. The main idea is to have these kinds of changes be made without relying on hacks like creating symlinks in the stack, which are then populated during the build process.
C
The main blocker here was the implementation of it, and how this might go in a different direction from what we're trying to do with removing stacks and the new Dockerfiles RFC. So we just wanted to figure out if we can preserve this functionality, but maybe implement it in a different way, so that it's compatible with what we're doing with the other ones.
B
I don't know what your thoughts on that are. Are there alternatives that could work differently?
C
So, if it's compatible — you can still choose to support it: "hey, you don't have volume X, that's fine, I can write my stuff someplace else"; or make it so it's compatible, but it works better if you have volume X. So a buildpack could still do that if it needs to. It's sort of similar to the whole hooks proposal with the Dockerfiles, where a buildpack needs a specific requirement which can only be installed either by a hook or via the stack.
B
If they have a way to not need these volumes, then why are we introducing the complexity of providing these volumes? And if you're saying that they do need it, or work better if they did have it, then it goes back to that: now this buildpack would essentially fail, or would not pass, and so therefore it is kind of coupled to the implementation of a stack.
C
That also makes sense. So a buildpack would declare a list of additional paths that it wants to be able to export layers from, and then during the builder-creation process, or when you're running analyze, figure out if — well, I think it might be easier during the builder-creation process, if the platform could just provision all of those volumes, or declare them somehow in the builder metadata, and then when it's actually doing the build process it could provision them.
D
Yeah — I guess I'm just thinking you need the list of buildpacks at build time to know, first, right? Like Sam was saying, you're not always necessarily gonna have a builder that has all the buildpacks; you might be downloading the buildpacks at build time.
B
Yeah, I definitely see the complexity, because, for one: are we talking about all of the buildpacks available on a builder, and aggregating all the volumes that could potentially be used? Or is it after detection, saying: okay, these were the detected buildpacks, and these buildpacks want these volumes, or these exported directories?
C
I could potentially imagine — so even with platforms like Tekton, I could potentially imagine the platform being able to do that with a single volume: if, before it loses its privileged permissions and moves on to the build flow, even with just a single volume, it can create the appropriate symlinks so that the data is preserved at the end for the lifecycle to export all of that out into the appropriate places, that would still work. And then for a platform like pack or kpack, which can run things in individual phases —
B
Something slightly different, but when we're essentially providing these volumes, I'm assuming this is ultimately replacing the directory that the buildpack is requesting, right? So if you do, you know, the root /opt directory, then basically anything inside of there vanishes from the build process.
C
The alternative implementation would be: the buildpack declares the paths it needs the lifecycle to export, the platform plus the lifecycle have some safeguards to make sure that they are actually exportable, and the platform makes sure to provide those volumes, however it can, during the build process.
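A rough sketch of that alternative, purely illustrative — neither the table name nor the allow-list behavior exists in the buildpack spec today:

```toml
# Hypothetical sketch of a buildpack declaring extra exportable paths.
# The lifecycle would refuse paths outside an allow-list (e.g. nothing
# under /cnb or /layers), and the platform would provision volumes for
# them however its environment allows.
[buildpack]
id = "example/opt-installer"
version = "0.0.1"

[[export-paths]]
path = "/opt/example"                # rebasable standalone software

[[export-paths]]
path = "/home/cnb/.config/example"   # per-user configuration to preserve
```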
C
My use case — typically it's one buildpack looking at one specific directory. Okay, I can model that; and even if it's not, it's easy to break up my large buildpack into smaller buildpacks that each map to one directory that's visible to them, which can all be different. That also works for me.
D
Yeah, yeah — I guess either one.
D
No, I guess I'm just asking about — yeah, sorry. As you were describing the multiple app directories, I was questioning whether we're actually talking about the workspace with the app source code, or just arbitrary empty directories that can be like layers, you know.
A
I think we're talking about the app directory — whatever directory ends up with the user's source code, in the end, when the build starts. I mean, right now that's coupled to the base image: your build-time base image says "this is my app directory", and that means, if you added multiple app directories, the base image would have to say "these are the approved app directories".
A
I wonder if, going that route, you decouple that — remove that complexity — it opens it up to the ability to specify more inputs into a build, right? Maybe they're not called app directories anymore, which could be a little less specific than, like, you know, buildpacks claiming volumes — except for this other one, that's special, that's the app directory, right?
A
If you get rid of all the app directories in the base image and then say: during a build, you're allowed to provide source code in any number of locations in the image; the first one you specify is passed to the buildpacks as the app directory, and the rest are accessible one way or another. Then, is that cleaner?
C
It opens up more use cases than what I need. I don't need dynamic selection for the source code; I just need X places to write to — whether it's decided by the stack or by the buildpacks, I don't care. I just need some way to specify that these locations are available for the buildpack to write to, and that they will be exported.
A
I think it's always bothered me a little bit that there's exactly one directory that has all of the input at build time. That kind of makes it feel like it's very centered towards building a specific kind of application. Whereas if that were a default — there's one directory, but actually you can, you know, mount your tests over here, you can mount your source code here — you can have multiple source inputs.
A
If your app, you know, has multiple source repos that build it, and we're not being restrictive about where those appear — that sounds like a thing that a Dockerfile will let you do. So, you know.
A
I worry that it creates a lot of complexity. It's like, now buildpacks are executing in an environment that has some stuff, and then may also have some stuff later — especially because currently it lets you replace the whole base image if you want to. What would that look like, right? I worry that we added a lot of complexity with the original stackpack RFC, and we cut it down with the most recent changes; this sounds like a way to add a bunch of complexity back. All right — it'd be good to deliver on the ones that run before, before we deliver on the ones that run after, if that makes sense. But multiple app directories doesn't feel wrong to me. That feels like it's —
A
It may solve some problems that I've seen come up a few times, where people are forced to consolidate things into a single repo with submodules, or whatever. And then, if we decouple it from the location in the base image, it allows flexibility where you can say: actually, here are all my inputs, in different places in the image, and I'm going to contractually constrain the image.
D
In that case, do the buildpacks then have to track each app directory? I assume you would probably change the name of that input — maybe it's not the app directory. But if I'm a buildpack, and you have potentially — like, you split apart a monorepo, for instance, into subdirectories as different inputs — then does each buildpack need to inspect each of those input directories to see which one it cares to process, potentially?
A
I think in the normal case — so right now the working directory for a buildpack is the app directory, and so in the most common case the first app directory you specify is probably the buildpack's working directory, right? So the buildpack gets source code like normal and doesn't have to know anything about the other inputs. But then, if we had some way of saying, "hey, this buildpack supports additional, maybe named, inputs" — like, CNB_APP_TEST or something, based on the name of the thing, becomes the location of the test input — then now, during the build, there's a new directory that contains tests that's accessible to all the buildpacks, right? It's just one environment variable that the buildpack expects to receive, and if buildpacks don't expect to receive a test directory, then they don't care about it. Something like that.
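As a sketch of that named-inputs idea — the `[[inputs]]` table and the `CNB_APP_TEST` variable are hypothetical, just the naming convention floated in this discussion, not part of any spec:

```toml
# Hypothetical sketch only: a buildpack advertising the extra named
# inputs it understands, beyond the app directory. A platform that
# knows nothing about "test" would simply never set the corresponding
# variable, and the buildpack would ignore it.
[buildpack]
id = "example/test-runner"
version = "0.0.1"

[[inputs]]
name = "test"       # surfaced to the build as $CNB_APP_TEST
required = false    # the primary app directory stays the default input
```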
A
I actually didn't have all that much to chat about on the Dockerfile one — I was just going to call for approvals. Does anybody feel strongly, as a group, that we should move on from this one?
B
I think there were a couple of action items that I still haven't seen. I think the big one is that we were still having issues loading the page, so I thought we were going to create a new PR.
A
I fixed it — I thought I fixed it. Okay, if you go to it, it should load now. It had completely stopped loading in your browser, but still loaded on mobile if you refreshed enough times, and so I used that to close about half of the things, and so now I think it loads on desktop again.
A
Yeah, okay — so, moving on to the Dockerfile one. Do you have any last comments? Okay. If you want to sync up on the — maybe a way to frame it is as, like, multiple inputs? That seems really interesting to me.
A
I think there are use cases there that I've seen — we could enable some more generic things that are interesting, maybe around testing or whatever, that are close to what you have. But I think keep the coupling away, and frame it more like multiple outputs. Cool, okay. Unless anybody has anything else on export layers, we can move on.
B
No, I'm not in a particular rush. I just want to make sure that, if there are people that are interested in it, we keep the conversation going — timeboxed, like we hope to do. But yeah, I think the things that I mentioned are the things that I've noticed thus far that are still missing. I don't know if there are comments associated with it, because I haven't gone through all of them, but it does seem like there's still a decent amount of comments that are open.
A
Yeah, the comments that are open are mostly addressed and just waiting for other people. I see Joe has one new thing here, but I was just waiting for other people to say, "yep, we're okay with the answer," and close it. I didn't see anything actionable, but I could be wrong — you listed some things there. Sorry, I just forgot — it loads now; what's the other one?
B
So yeah — now that it loads, I could probably go in there and add my comments about some of the naming stuff we discussed in the last meeting.
D
Yeah, no, I agree with that. One of my concerns with "hook" was that, to me, it implies something a little bit different — like a webhook, a sort of asynchronous fire-and-forget thing. But more important to me is whether we can tie it to something that already exists. Actually, I was thinking either —
D
I was reading through the docs for Dockerfile FROM and stages, and they describe it as: the FROM instruction creates a build stage that defines the base image, and so on. So there are a few terms and concepts there that we could use — like "stage", or "base-image", or even "from" — though, I think, you risk tying it too strongly to something that isn't exactly the same thing. But yeah, even like "from": just cnb/from as the directory, from.toml as the file.
D
Yeah, I thought about that too. I think that seems fine.
C
So, at least to me — if we had bin/detect for everything, and then — I mean, this is also going a bit back into buildpack territory, but you could have a detect binary, which is common, an extends binary, and the build binary. The detect is common, so if a buildpack needs something, you can still do that, and then the build part is the normal part for a buildpack.
C
I think the idea was to — so I'm imagining a lot of these would have repeated detect logic in the actual buildpack, and we were trying to get around that by having a single thing that does the detect, then leaving some parts to extend and some parts to build. But that might get too complicated.
A
Maybe I'm not totally clear on the workflow. The way I imagine this working is: because they participate during detect, another buildpack could require something — they can only provide, they can't require, right? So they get requirements from detection, and then they pass them on to their build script, and they can modify the base image.
A
Maybe we could do this first, and then come back later and add the, you know, other kind of build functionality. I guess then you're saying the name of the script is weird. I don't care what the name of the script is for stack packs — if we wanted to call it bin/extend instead of bin/build for now, that's okay with me, but I think other people had strong opinions about it.
A
Maybe, if you want, put the suggestion for bin/extend in the RFC and tag Emily, and if she doesn't say no, then I'll change it. How about that?
C
The other thing I wanted was: the build — or the extend, whatever that binary is — if that had a way to write out the Dockerfile arguments that are passed to the build process, that would be great. Like, it writes a toml file, or some file out there, with the key-value pairs of the build argument name and value. That would be beneficial, because then I don't have to worry about incorrectly generated Dockerfiles because of this whole templating business.
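As I understand the ask (the file name and keys here are illustrative, not from the RFC), the extend binary would emit a plain key-value file that the platform then turns into --build-arg flags when it applies the generated Dockerfile, instead of templating the Dockerfile itself:

```toml
# Hypothetical "build args" output file written by the hook's binary;
# the platform would pass each pair as a --build-arg to the Dockerfile
# build it runs afterwards. The Dockerfile itself stays static, so there
# is no risk of a bad template producing an invalid Dockerfile.
[[args]]
name = "CNB_USER_ID"
value = "1000"

[[args]]
name = "EXTRA_PACKAGES"
value = "git curl"
```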
B
Can this actually be inserted into an order? And it doesn't work just like a buildpack — so I had to go back to the RFC and be like: okay, no, not at all; it's not even included in any form or fashion. It just happens to be used in the same process, but they're defined completely separately. But then they still look very similar, so my mind is kind of toggling back and forth between the two.
D
I think it's a virtue in this case, actually, because we're not calling them buildpacks. I know where you're coming from, and I think we really had to be careful with that with stack buildpacks, because we were calling them buildpacks. But in this case I don't think it's on the table that we would rename hooks to "hook buildpacks" or something, right? And so, in that case, I feel like the parallels are a virtue, because you can kind of quickly —
B
— you know, write the binary or script. But then I'm having to think, in some sense, of the context of what's possible within this detect, or how it operates — or the same thing for the build — within the context of "am I writing this hook versus am I writing a buildpack?" And if I look at documentation for how to work with buildpacks, that could very easily overlap with documentation that may actually be applicable only to the hooks. That's why changing the file names was one of the first things that I proposed — to at least eliminate some of that context.
A
We're at time. I just wanted to say: it seems like the two outstanding things are the name of the thing, and how close it looks to a buildpack. If somebody feels like it looks too close to a buildpack and wants to suggest we rename some things, can you actually suggest that on GitHub, in the commit, and then we can —
A
You know, ideally with a proper suggestion, so I can hit the button, if that makes sense, if we decide to do it — and we can have a discussion there. And then, if somebody has a really concrete idea for the name — like, "it will be called a staging-something," or whatever — just be really concrete about: this is what I think it should be called, this is how it should be used, and this is how the directories should look.
A
I feel like it's a very large RFC and there are a lot of different ideas — very good ideas — but I want to make sure that we keep moving forward with something concrete when people want to change things.