From YouTube: Working Group: 2020-08-27
Description
* 1.0 Breaking Changes
A
A
B
C
D
I don't think so. Do we want to have a look at the agenda?
D
A
Fine, thanks. Google Docs.
B
I guess I put some comments on there and he was responding to them, and then he just pushed some commits, I think earlier today, so maybe he'll be ready to present. Maybe he'll think he's ready to present. I don't know; that's the first time I really helped him out. So I have no idea exactly what we're expecting, but I just wanted...
A
A
Yeah, I just wanted to make sure we weren't being the sliders on our end after committing to helping him.
B
D
Let's see... I don't think so. If folks have things they want to talk about, definitely put them on the agenda. A few people just joined, but we may not have too much this time.
D
D
I guess: should we talk about stackpacks, or do you have any questions on that we could go over?
A
I did open a draft PR where I tried to split out the stack buildpacks part of the app mixins RFC.
A
So I guess, if you get a chance, take a look at the draft PR, just to see if you think that split makes sense. I haven't made any changes to the app mixins one yet, and I'm thinking I might leave that one the way it is and close it, so we keep all the comments and stuff like that, and then open a new one for app mixins.
A
A
I guess I can say I just pushed up a change to address your comment, Terence, on the RFC labeling thing, where whoever does the weekly review labels new RFCs. So I think that's pretty much good to go. Cool, yeah. I think I already approved it, and it didn't depend on that change, but I figured, since we covered it in the working group, I'd leave that comment.
A
We could always talk about stuff related to 1.0 now, or other things. I know there's your breaking changes doc, Steven; are there still things from there? Because I know you only opened, like, three RFCs, and there was a handful of other topics on there as well.
D
D
C
C
C
This would be necessary to do that, to know what the file format is, but nobody's actually doing that as far as I'm aware, so practically it doesn't have a lot of consequences, and getting rid of it will make that whole situation easier to deal with anyway. And then the second thing: I wanted to do a reorganization of labels on the app image. I feel like they grew sort of organically, and the data is not great; the key names are bad.
E
What about the builder specification? I feel like we're still pretty attached to builders and how they're composed, but we don't have a spec for it.
C
D
D
A
C
Oh, and the last thing is that, right now, in your app image metadata, when you see the buildpacks that participated, you don't see meta-buildpacks if they were there; you only see the leaf buildpacks. I think we can put that under labels, because that information would end up in a label, but I'd like to include that, rather than purely reorganizing the data we have. I think that data is nice to have on display later.
D
C
D
I think, to support that: if we went full reproducibility, right, we wouldn't be putting the buildpack versions in that metadata, also because, you know, buildpack versions are going to bump a lot, and if you really care about reproducing the same image (even though the buildpacks would all install the same versions of things), you know, we wouldn't be including that. So I don't think it's a big issue; I just wanted to call out that it would theoretically have that problem.
C
D
We can move all the buildpack names and versions to report.toml instead of putting them on the image, right? And then, like, we could really refine the buildpack metadata that goes in the image to just describe the things in the image, so that, if those bits change, that's the only thing that would trigger a change in the thing. But if we're not willing to go that direction, then it probably doesn't make sense. So I think...
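The report.toml idea above might look something like the following. The table names here are only illustrative, since this file shape was still being proposed at the time:

```toml
# report.toml: build metadata kept outside the image (shape is hypothetical)
[image]
tags = ["myapp:latest"]

[[build.buildpacks]]
id = "example/node-engine"
version = "0.1.0"
```

Keeping names and versions here, rather than in an image label, is what would let the image metadata describe only content that actually changes the image.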
E
I do wonder, when we talk about reproducibility, if it would be worth maybe determining exactly what scope we're referring to, right? And maybe, if we just say reproducibility of very specific layers that we associate with the application, and then say we're excluding these other, you know, attributes, then those are things that could be modified outside of it, which would be, more or less, annotations.
D
Like, there are things like image signing that, you know, I think... while they're talking about whether they want to do layer signing on layers in Notary v2, whether that's a concept that should exist, that may make it so you could treat reproducibility at the layer level differently. Like, I think most people build an image and then sign the digest, and then, if the digest changes, it, you know, should be re-signed, and, you know, really...
E
D
The config. So, even if all the fs layers are the same, the config blob can affect the environment, which, because you can do LD_PRELOAD and things like that, you know, really does affect the security story. So I think figuring out what limited reproducibility would mean, and what the advantages of it are, would be an important part of figuring out where it makes sense to move in that direction.
E
Yeah, and I think that's ultimately what I'm proposing. I don't have the answer, right, but I think it would be worth trying to determine whether or not we want to say reproducibility equals X minus these fields, right, and then whether or not we want to attribute something else, like add a different label that then says: okay, this hash is exactly the same, so therefore all the contents of the meaningful parts are the same. That might be worth, you know, setting up.
D
Makes sense. An RFC where a lot of people could comment on it and provide perspectives would probably be a good path forward. I kind of suspect that reproducibility doesn't mean very much unless you're reproducing that image digest at the end. I didn't always hold that perspective, but having seen how people do image signing and things like that, I'm worried that, you know, the config blob is where the labels go.
C
Yeah, and practically the diff ID of the config blob is the right place to do reproducibility, rather than the digest of the image, because of things like: if you pull and push, you get different compression algorithms and whitespace and things that don't matter, but everything functional is in the config blob. But practically, people look at the image digest; they don't ask what the diff ID of the config blob is until they pull into Docker, and then Docker uses that as its identifier, right, I think.
D
C
E
D
Annotations are, like, a generic OCI concept, and you can put them on lots of different kinds of objects. Or, like, the label format: they are annotations, but they're in the config, in the label field of the config blob. The OCI spec treats this very weirdly, but my understanding is that people don't put metadata about the image on the manifest, because it doesn't always get preserved, and the config blob is the safe place to put that information. But I don't...
E
And I guess I'm totally, like, relating this to a very different conversation now, but the project.toml, or project descriptor, came up in Slack, and one of the conversations we had was about putting that information into standard OCI annotations instead of putting it into specific labels. And I think that's where my disconnect, and maybe just misinformation on annotations, comes in, and wanting to know whether or not we should be leveraging them a little bit more.
D
D
C
The one place the spec has moved, well, not from manifest fields, but toward annotations: I think they're config annotations, not manifest annotations. Some of what used to be standard labels, like org.opencontainers author, whatever the spec used to recommend, were in labels, and now it recommends them as annotations. But I think that's still at the config layer, not the manifest layer.
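For reference, the pre-defined annotation keys from the OCI image spec look like the following when they're carried in the config blob's label field; the values here are made up:

```json
{
  "config": {
    "Labels": {
      "org.opencontainers.image.authors": "Example Maintainers",
      "org.opencontainers.image.source": "https://example.com/org/repo",
      "org.opencontainers.image.created": "2020-08-27T00:00:00Z"
    }
  }
}
```

These same `org.opencontainers.image.*` keys are what the spec now documents under annotations, which is the source of the config-versus-manifest confusion being discussed.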
E
D
I think this is just the generic definition of what annotations are, but then, if you go into config and you look at labels, it says, like, look at the annotation spec; or at least it used to be very confusing about this. So, like, it's not the backwards compatibility with Label Schema. But if you go into... sorry, I don't have to keep talking about this. But if you go into config, and then you look under labels, it points back to the annotations thing for what you should put in that field.
C
E
D
Should I add that to... I'm adding the things we're talking about to the breaking changes doc; that's why I'm not putting them in the other doc. I'll add "reconsider annotations."
D
There were some other breaking changes, or some other changes that were sort of breaking or near-breaking, that we haven't gone through yet, just to run through. At some point we talked about renaming, at least, launch to export and build to expose for the flags, because, like, you know, launch layer versus build layer: people just don't really mean those things. Build means "expose this to subsequent buildpacks," and launch means "put this in the final image."
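For context, these flags live in each layer's TOML file under the current API; the export and expose names are only the proposed renames:

```toml
# <layers>/<layer>.toml as it stands today
launch = true  # keep this layer in the final image (proposed rename: export)
build = true   # expose this layer to subsequent buildpacks (proposed rename: expose)
cache = true   # restore this layer's contents on the next build
```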
D
That'd be really easy to make post-1.0, although we'd just continue to support the old flags. It's a fixed schema, you know; it really wouldn't be a hard change, so I don't consider that breaking for 1.0, at that level.
D
Seems like no. We said at some point we wanted to revisit the positional arguments for bin/build and bin/detect, but again, it's the same type of change: we can introduce environment variables that have the values later, and then, in a million years, stop passing those things, because it's very easy to ignore the arguments that are coming into the files. So again, I didn't put that one out as an RFC.
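A minimal sketch of the env-var convention being floated here, assuming hypothetical CNB_* variable names that are not part of the spec:

```shell
#!/bin/sh
# Read build inputs from env vars, falling back to today's positional
# args. Adding a new input later would not change the binary's arity.
build_inputs() {
  layers_dir="${CNB_LAYERS_DIR:-$1}"
  platform_dir="${CNB_PLATFORM_DIR:-$2}"
  echo "layers=$layers_dir platform=$platform_dir"
}

build_inputs /layers /platform                                  # positional convention (today)
CNB_LAYERS_DIR=/layers CNB_PLATFORM_DIR=/platform build_inputs  # env-var convention (proposed)
```

The point made in the discussion is the second call: a buildpack that only needs one of the inputs can read just that variable and ignore the rest.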
A
D
C
A
It's just, like... 1.0 is as much a marketing thing as a kind of stake in the ground for us, so I think a lot more... well, my hope, if we're successful anyway, is that more people will look at buildpacks when we do 1.0, because of the marketing and other stuff. And so if we put a foot forward of "this is how you write buildpacks, and it's done this way," and, you know, we don't...
A
D
So, since you're so interested in that, for positional args: are either of you willing to open an RFC to move it to environment variables?
D
A
D
If... yeah, that makes sense. I don't have terribly strong opinions, but versus the positionals, it seems like it becomes very easy to add additional args and not worry about the arity of the, you know, kind of binaries we're creating in the future. So I think it's probably worth thinking about env vars, but I'm also actually not very strongly opinionated about it either.
A
A
"What is positional, what's an env var?" I think that is actually much more confusing than anything else, whereas, like, having the env vars gives you this sort of... maybe "progressive complexity" is not the right phrase, but it's like you don't really need to even think about anything until you need it. You know, I also hate copying and pasting that same, like, prefix on every buildpack, where it's like, I named them as env vars. So, yeah, I think a few months ago I was, like, indifferent, but more and more I feel strongly about the... yeah.
A
The thing I like about the arity is that there's a cost to adding an argument. Seems to be a theme for me.
A
No, I think there's going to be... there's going to be things like, I think, we're going to need a stack type for the stackpacks, you know, but stuff like that is just going to keep coming up. And a lot of that, like the stack type, does not belong as a positional argument; it's way too infrequently used.
A
A
A
Yeah, I mean, I think the point... maybe this is just discussing the RFC, but, like, the point is mostly that it allows you to highlight certain things as more important than others, and I guess you lose a little bit of that distinction. So, no, I agree with that; I don't think we're doing that well today, is sort of my...
D
...position, yeah.
C
D
I could totally see that as an option. It'd be still easy to do that, because of the first argument, I think, and so you could still support the rest. That's... yeah, makes sense. That's a pretty good compromise.
F
For what it's worth, as a note: across the Paketo set of buildpacks, we've probably got, like, 40 or 50 buildpacks. I think it's probably, like, at least 10 or 12 of those that do not create layers at all. So it's, like, a not-insubstantial number of buildpacks that don't care about where the layers directory is.
F
D
You'd still need a place to put the env vars on disk, so they can be read by the launcher at launch time, and so you'd end up creating some kind of layer for the env vars for each buildpack, like you do. The lifecycle would end up doing the same thing that the developer would end up doing when they created the dummy layer, yeah.
F
So I don't... I don't want to, like, dive... Steven and I have been having an ongoing discussion about this particular API, the launch-process environment variables API we have. For these specific buildpacks: we have a bunch that just set, like, launch commands, and some of them maybe want to set some environment variables, but the buildpacks themselves don't have any layers.
F
So you end up with this, like, kind of weird Rube Goldberg machine API, where we say, like, here's what the launch processes are, and then: create this other layer here that doesn't really have anything in it except these environment variables. And then, I don't know... I can't remember off the top of my head where the launch.toml goes; it goes in the layers directory. If it does, then why isn't there just a launch directory with an env directory inside of it, right next to the launch.toml file?
F
C
I think in some ways it's about reuse, so you don't have to rewrite them every time; you can say the important things here haven't changed, and whatever I put in there was good the time before, yeah.
A
A
A
If you have, like, a Node.js engine buildpack, you would put together a layer for installing the Node runtime, right? And you'd want the env vars associated with the data of the contents in that layer, with the kind of environment that's needed to set up, like, Node, so a subsequent buildpack that's requiring it, that needs Node at build time, can just pull that layer in and have that kind of stuff available to it.
F
Yeah, so, like, in that case you're not even... there's the generalized case of: I have a buildpack that installs a Node runtime but itself doesn't set any sort of launch process, because I could use Node in just build-time configuration. I might use Node to do something like, you know, some transpiling of some static TypeScript or something into a static JavaScript file that then, maybe, is served by nginx. So Node doesn't ever actually end up in the launch environment itself, so, like, it doesn't set any process.
F
How would it even go about saying... like, it knows, ultimately, that this launch-environment, process-specific thing needs to be set, but it doesn't know that the default process is going to be called web or worker or anything along those lines. It's just kind of, like, wildly guessing at that point. Or do we just set it as, like, a regular env.launch environment variable, at which point, like, it begs the question of, like, why the process-specific environment variable?
C
F
Yeah, I guess I just... you have this coupling: the process is highly coupled to the launch.toml file, but its environment variables are highly coupled to, like, some other arbitrary thing, and I'm, like, not connecting the dots between how that makes sense. Like, why isn't it that, say, you know, these process-specific environment variables are connected somehow to the launch.toml? Like, for example, a launch directory that's right next to the launch.toml, because that is where the processes are defined.
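A sketch of the two layouts in play here: the per-layer env directories that exist in the API, and the launch-adjacent directory being asked about (the process-specific subdirectory under `env.launch/` was itself only a proposal at this point):

```
<layers>/
├── my-layer/
│   └── env.launch/        # env vars the launcher applies at launch time
│       └── web/           # process-specific form (the proposal under discussion)
│           └── PORT.default
├── my-layer.toml
└── launch.toml            # where the process types themselves are declared
```

The complaint is that the process lives in `launch.toml` while its env vars live under an unrelated layer directory.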
D
F
D
F
The answer was, like: in order to create a process, you were, like, creating a layer, and it was part of the layer's metadata where you said this was a launch process. That totally is also a reasonable avenue. I think there's just this kind of disconnect in the API of, like: yes, layers want to set environment variables, and processes want to set environment variables, and we now have this set of features that spans across those two things, and, like, the API for it just seems a little bit disjointed. It just...
F
D
I think, if you view the process types as, like, highly, highly conventional, right, like people use web or worker, or, like, a very predefined set, it feels a little bit less weird. It doesn't not feel weird, but it feels a little bit less weird. But I'm not suggesting you should necessarily keep it the way it is; it's just, I think that, you know, maybe the initial RFC kind of played down my worries about that a little bit.
D
Like, you might want to make a buildpack that contributes JVM configuration, but only to debug servers, regardless of what those debug servers are, or something like that. I could see the use case there.
D
It's definitely very flexible, what we have right now, even if it doesn't make the most sense to, you know, users when they approach it.
A
I guess I'm just... I'm still trying to understand the use case of what you're talking about, Ryan. So you have a buildpack that writes a process in launch.toml, right, of some sort, and you have env vars that are used in that launch.toml, and so you want to associate those env vars with that launch.toml without necessarily writing them in their own, like, weird layers directory that only includes those env vars. Am I understanding that correctly?
F
Yeah. So I guess, like, to me, it's just, like, the coupling, I guess, of the process to the launch.toml, but then its environment variables being coupled elsewhere in the kind of, like, file API for the sets of, like, things you can specify. There's kind of a... to me, like, it doesn't feel cohesive, as far as, like, how I would think: oh, I need to set an environment variable for this process.
A
F
C
Even if you could get around the issue you're describing for environment variables, it's still going to exist for profile scripts, right? Because those are actual files that need to be contributed to the file system, and have to be in a layer, and you still need process-specific profile scripts and things like that. Even if we solve the environment variable problem, I feel like it's sort of impossible to solve globally, because you're still going to need file-system diffs that are associated with processes, unless...
F
Yeah, that actually might be, like, the more reasonable route to go. It might not be that, like, we create this separate set of ways for specifying things like profile scripts or environment variables that are specific to processes, but instead say you can relate a process type to a layer, and when you go about declaring the layer, as part of the metadata for the layer, you can say this layer has a process type attached to it, and declare it there, instead of in the launch.toml.
C
A
I mean, as a potential alternative suggestion, because I think the use case makes sense: would it just be helpful to have, like, a launch layer that is specific, that includes, like, the profile stuff I'm describing, and env vars that are tied to process types?
F
D
Would that... we'd have to define special rules about whether that comes before or after; it doesn't follow the alphabetical ordering, right? There's nothing special about the letter L. So we'd want to define a special rule for where that should go, and then we wouldn't have to subtract anything from the rest of the API, just as long as we added stuff to that. So you could have that kind of association; that way you could have a layer that contributes just a web-specific whatever.
F
Yes, I would not suggest that we remove the APIs that are already there. I think that, like, the existing API still has, like, a possible use case.
D
C
Is this a specific launch layer, or do we just want a way that, in a generic layer, you can specify a process type? Like, maybe the fact that processes go in launch.toml is the weird thing here. Put processes within layers; then everything that's left in launch.toml is coherent, and the processes move into layers, where the other launch config goes right now.
D
It does feel a little bit weird to have the ability to add process types per layer, because they're so global and definitely override each other. It's like... if at some point we did, you know, process-type wrapping, where you could refer to another process type in a process type, then having them in the layers could be a really powerful abstraction, but right now they feel pretty global, in a way.
A
D
A
A
F
That is kind of ultimately what we have today with the API, which is to say: like, I would put the process type information in the launch.toml, but then I still have to make, like, a process-specific layer that really is just setting the environment variables for the processes I define in the launch.toml. And that's because the buildpacks that do set launch processes don't have layers that are doing other things, right? So it is kind of already what the API is doing; we just don't have it.
D
Maybe... maybe this is really all purely additive. Yes, you can specify process types per layer; yes, there's a special directory for launch.toml, and you can keep specifying process types. We can make all of that a non-breaking change, right, and let you choose between this kind of global layer, which is already reserved anyway, right, because we have launch.toml in it (it reserves the launch layer), and, you know, specific layers. And it gives you a default layer name for those weird cases where you want to add environment variables and, you know, just call it something, right.
A
If we do it in the layer, then we hopefully move in that direction; or if we keep launch.toml where it is, like, I can see that for backwards-compatibility reasons. But I think we should be pushing one way or another, unless there's, like, a big benefit to having two ways to do it because it's solving two different real use cases.
D
Here's... here's an argument. So if we start looking at layers as distribution artifacts, right, with the, you know, asset RFC, you could just symlink to them directly, and those layers also come with start commands, right? You can imagine a buildpack... say the launch... I'd expect launch and launch.toml to be last, right? So the buildpack could pull in a dependency that comes with its own start command, right, isolated into one layer, and then you can use launch.toml to override that with something custom.
D
You could, but, you know, then... then you're, like, coming up with a random layer name that's later in the alphabet, you know, trying to prepend an underscore, when you might have this very convenient launch.toml that always runs at the end; it's kind of defined that way, right, and the launch layer directory.
D
D
A
Well, I guess, like, as a buildpack that's running, I don't actually know when in the order I'm running, necessarily, if I, like, just own and write this buildpack. So I wouldn't know, without basically doing math to figure out, like, what number to actually prefix a layer with, if I wanted to do it that way.
C
D
D
This sounds like... I put it on the breaking changes sheet, and this seems like it'd be a good RFC that could kick off a lot more discussion. We can keep talking about it. There are a few other sort-of-breaking changes that we've mentioned over time that I could talk about also, if we want to move on. Cool; I don't want to end discussion now if there's more stuff there.
D
Cool. So, with the, you know, changes to arguments, where we do bash parsing of each of the arguments: we had talked about whether that was a little bit too... something, or if we want to be able to turn off bash parsing of arguments when bash is there. Or one idea I had mentioned was you could use envsubst instead of bash, so it wasn't too dynamic and you didn't get too much weird behavior. I'd written that down. Is that something we care about?
C
C
Or you can have, you know, multiple parsed arguments. But I think, instead of doing that based on whether there's one or multiple, we should just have, like, a script boolean: do you want me to treat this whole thing as a script to run, and then arguments get passed to the shell, rather than turned into a script? Or do you want these to be arguments?
D
I think I agree with that. It was like: we wanted the ability to specify environment variables from the outside that don't get interpreted in the outer shell, right, the one you're running the docker run in, but do get interpreted inside of the process. And then the interplay between that and having isolated arguments that get passed directly to the process gets very confusing. One option: there's a command called envsubst that'll do environment-variable interpolation without other shell magic.
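A minimal sketch of the tokenization question raised here: a value that expands to several words behaves differently depending on whether the command is treated as a script (re-split by the shell) or as pre-tokenized arguments. The example value is made up; envsubst (from GNU gettext) would do only the `$VAR` substitution, without this word-splitting or any other shell evaluation.

```shell
#!/bin/sh
# Count how many arguments actually arrive under each convention.
ARGS='--verbose one two'

count_args() { echo $#; }

script_count=$(count_args $ARGS)    # script-style: shell re-splits into 3 words
literal_count=$(count_args "$ARGS") # argument-style: stays 1 word

echo "script=$script_count literal=$literal_count"
```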
D
C
We can do that. I think that alone doesn't solve all the problems, because of a tokenization issue, right? I think there's a question of... I think, if you say this is a script, you're taking responsibility for quoting things in a way that, when they get evaluated, the tokenization is correct. And if you say it's not a script, however you originally tokenized it, it's going to stay tokenized that way; like, the fact that your environment variable evaluates to multiple items... they're still going to be treated as one.
D
I think I agree; I just wanted to bring it up as an option. Anything else on that one? Or... I had two more that are pretty fluffy, sort of, I guess, but...
D
Cool, next one. There was a lot of talk about a development API at one point, when Tilt did their big demo with Cloud Native Buildpacks, you know, integrated Skaffold, integrated cloud into buildpacks and all that. Do we need to make any change? Like, I'm pretty sure the answer is no, but do we want to do anything around that before 1.0? Or are we worried about the current API not sufficiently supporting changes that, you know, we need to make in order to allow for development use cases?
A
A
B
Yeah, they basically... each buildpack understands an env var of, like, "Google dev mode equals true," and they basically branch the process start stuff. So, like, you can imagine, like, a Node buildpack: instead of just doing node index.js, they're doing, like, nodemon.
D
B
And so they just... they contribute different layers and different processes because of that flag.
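A hedged sketch of that branching; the variable name and commands are illustrative, not the actual Google buildpacks implementation:

```shell
#!/bin/sh
# Pick a launch command based on a dev-mode flag env var.
launch_command() {
  if [ "${GOOGLE_DEV_MODE:-false}" = "true" ]; then
    echo "nodemon index.js"  # dev mode: restart the process on file changes
  else
    echo "node index.js"     # normal launch process
  fi
}
```

In the real buildpacks, the branch would also contribute a different layer (e.g. one that installs the watcher tool), not just a different process command.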
D
It's a really easy buildpack feature to be able to build a development image, but the use case the Tilt folks were talking about... and I don't know how Skaffold works, but Skaffold could do something like this... is like, you know: maybe to define a generic concept called a dev orchestrator, right, that sits in the container, running live, and then, you know, sends code back and forth until it gets some signal like, "oh, this is too much code."
D
"I can't do this" signal, in which case it, like, tells the client to do a rebuild and, you know, create a new image, or syncs even more of the image, like that. The use case is, like, you know: you're a cloud-native developer, right? You have minikube running on your workstation, and your app inside of minikube connected to a whole bunch of microservices, and you're typing, and you type button for button and see a website update live as you're typing, with your code updates, right. That was the Tilt demo.
B
At Salesforce, we do an in-container, like, binary that we call "develop" that basically just reruns the buildpacks on watched changes. So it's responsible for watching, like, a bind-mounted volume of your source code, and then basically shuttling that over to the workspace and then executing the buildpacks. It works okay, you know, for, like, quick things. We can envision tying multiple running containers together, kind of like Compose, but, I don't know, some...
B
Some of the challenges that we have right now are, like, you know: some buildpacks just aren't made to, like, rerun again safely; like, they just assume an empty workspace, which means you kind of have to, like, dump the workspace and copy back over. So those are the sorts of things we could think about in this, like, grand scheme of: if we want to really support develop, we could start talking about buildpacks being in a certain mode, so that restorations are maybe different in this mode.
D
E
Sorry, I added in the chat one of the issues that GitLab brought up, and I know there's been some conversation with some clients in regards to how to run the unit tests as part of the build process and fail the build, or the creation of the image, if those tests were to fail. And my understanding, right (I wasn't here for the legacy buildpacks era), was that such a phase existed where testing would occur, and in GitLab moving over from the herokuish implementation to the CNB implementation...
E
...they're missing that part of it, and they're still asking for it. And what I think is happening is we're conflating, maybe, the testing aspect versus the development aspect, right, where, in my mind, they're very different. One of them is about getting very fast feedback and, you know, iterating quickly during the development process, and the other one is ensuring the contents, right, or the integrity, of the build, within the same environment in which that build occurred.
E
D
I think people... I've seen people run unit tests during the build process and fail the build if they don't pass, just, like, on top of the existing API we have today. You know, I've heard people complain that it doesn't work very well when you're running integration tests during the pack build, for some reasons, and then I told them not to do that. Yeah.
A
Sorry, I mean, yeah. So I think Jesse also helped work on the testpack stuff, but I know, from how it was done with Heroku, that the big differentiator was that you can do it, but it requires you to kind of custom-roll your stuff, whereas if it was part of an API, it would allow every buildpack author to write, like, a way to do...
A
...testing for your buildpack in kind of a standardized way, and I think that's kind of the big difference. So, like, if you are a platform like GitLab, and you want to be able to say... let's say, for example, like, Paketo supported this test API as part of the CNB, because you support, you know, everything, right, for it. And it meant that, like, no matter what language or whatever I'm using, when I push this thing and I build it, I can run it through GitLab.
A
I can also run the tests associated with it, and every buildpack author kind of defines how that's done, specific to their set of buildpacks. Some buildpacks may, like... if I'm just setting, say, like, the launch process stuff that Ryan was talking about, maybe I don't have to support that, because there's not, like, a thing specific to that that really needs to be tested as part of that buildpack.
A
But maybe, if I, like, have a Node thing that has Node tests, like, for doing that, like, I can have that part of that buildpack define a way to kind of run it in that environment, and pull in maybe separate dependencies or whatever, and kind of figure out how to do that in its own defined area. And every buildpack has the opportunity to kind of opt into that where it makes sense.
B
C
Go ahead... so I'm just going to quickly plus-one what Terence said: like, you can do it through our existing API; I don't think there's a gap. But to figure out how to do it, you'd have to go read the documentation for each buildpack and pass whatever it needs, on a one-off basis, to get the tests to run. But if you could do something like "pack build --test"... if you just defined what the flag is, or the environment variable is, to turn it on.
D
B
Yep, that gets you part of the way, but some buildpacks are really aggressive about, like, cleaning up stuff in the workspace, and, like, those are the sorts of things where, like, you want to skip over buildpacks that aren't explicitly marked for tests, I think. Or at least that's what we do in, like, Heroku: we skip any buildpack that doesn't work for tests now. That may not be, you know, the same API we want to do here.
B
I think... but, yeah, like, you just do extra work that you don't need to, for buildpacks that don't have anything to contribute to the test process, essentially. And the same is true for, like, if you've got something that's kind of just getting rid of everything after a go build, because all you need is the binary; like, now you can't run go test if, like, you bring that in later. Now, obviously, you could do it today with the API, but it'd be just a bit cleaner...
B
...if everybody knows it's going to be test and it works, you know.
A
Maybe... I also mentioned talking about it as well. I know that sometimes you have, like, dependencies that you specifically pull in during development for testing, and then you would probably want those excluded from the final build as well. That means we would then have to, like, run, like, whatever supplies our modules, or whatever, twice, then, if we wanted to run, like, a set of unit tests and then actually run the build.
A
That's kind of how we do it on Heroku: we realized that some people definitely want that clean build, where the same image is used for build and tests, but we found there are also a lot of people who want to do just what you said as well, right, where it's like: I'm pulling in all these dev dependencies that I don't need in production, but that I needed to do testing, right.
C
So, are you creating an image and then running a test process on the image? What I was imagining was sort of tests happening during the build, and then you get your normal built launch image at the other end, because you can always install test dependencies into layers that are launch = false, right?
A
Yeah, I think we have more options today in CNB than we did in non-CNB land.
D
This... this seems like a great discussion; pick it up next time or put it in an RFC. We are out of time; I had one more thing on a very long list.
D
That's very silly, but, just for completeness: at one point we mentioned doing something like having detect images that act as manifests that defer out to other builder images, so you could run an auto-detection with things that use different stack images, and there's a whole very complicated proposal around that that I put on the list of things that we would definitely want to do before...
D
I don't know if we're going to do it; I don't think we're going to do that. But isn't that John Johnson's suggestion, or something from the dinners this came from? I thought this came from... Josh Collins had some questions on how to use it.
D
C
I think the reason we kept them separate, when we combined export and cache, was there was some idea that we would move analyze before detect and do some crazy, like... detect could tell you whether or not to restore cached layers, to improve performance, type thing. But we're really not going to do that in 1.0, and that'd be a big breaking change anyway. So if we're going to go to 1.0 without doing it, and we're not doing it till 2.0, I think we should combine these things for 1.0.
D
Got it in the doc. I'll link the breaking changes doc again. And with that, we're over time; thanks!