From YouTube: CNB Weekly Working Group: 2022-01-20
A: All right, do we have any new faces here today? No new faces, so...
D: My name is Gabe. I work with Aidan and Sambhav. Internally, we've been doing a lot of buildpack stuff.
A: Happy to have you. Up next on our agenda we have release planning and updates. Do we want to start with the implementation team?
D: For Tekton, there were some changes that we looked at regarding security context. I think the OpenShift people contributed something in that regard, and so I think we're going to tack on a couple of little things and come up with a new Tekton version soon for some of our tasks. Outside of that, for pack, I don't think there's anything scheduled. I did reach out to David, and I think what we're looking to do is maybe see if we could make a request for these sorts of updates to happen in Slack, as sort of a reminder, so that we could present them in this meeting.
A: I'll take that as a note. So, moving on, circling back to our lifecycle release: I think Natalie is going to talk about a strategy around releasing a version of the lifecycle that is a bit more conscientious about the SBOM migration. We migrated from the old SBOM format to the new SBOM format, but, unlike in most of our other migrations, we did not provide a way for the BOM that used to be available on old platforms to still be available on old platforms.
E: And so there's a spec PR linked in the working group document. I think Emily summarized it pretty well, but basically the platform API and the buildpack API are supposed to be able to function somewhat independently. When we released the new SBOM feature, though, we created a situation where old platforms that expect to find a BOM in a label can't upgrade their buildpacks without that BOM just going away, and that could be a problem for some platforms.
A: Which, I think, makes sense: you're not going to upgrade to that platform API until you're ready to change the format you get the BOM in. At some point we have to start removing things, or we'll never get out of this situation where we're breaking something. But I think the platform API can be a good opt-in situation there, right?
C: Well, I guess I want to say it's not always the buildpack author's choice. It's not like the Paketo situation, where you control the buildpacks. If you're on a platform where the buildpack authors don't work at that company, it's not always an "I opted in to this platform being upgraded"; the platform operator decided to upgrade independent of my choice to upgrade, right?
A: When you change platform APIs, you're getting a different feature set from the platform, and I think we need to get to a place where we can remove it, because these large labels are taking down Kubernetes nodes and causing a lot of problems.
B: In terms of the current migration plan, for example, the way we migrated was: update the platform first, while keeping the old lifecycle around, and once you have the new buildpacks, just update the lifecycle that ships with the builder. Or am I missing something?
A: If we allow buildpacks a way to supply both formats, then you can still get the old behavior on old platforms and the new behavior on new platforms. You're never going to get the new behavior on old platforms. We have tried really hard to keep these APIs orthogonal, but in this case they're not perfectly decoupled from each other, right? So we can't upgrade things in the exact ways that we have before, because there's an interaction here, and I don't think there's any way to totally undo that interaction.
A: I think we don't need to patch the spec if we're talking about allowing the buildpack to supply both, because nowhere in the spec does it say you can't set extra fields. But I think in the next version we'd like to be more explicit about it and include it in that deprecation section: here's a way to supply things in the old format.
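As a rough illustration of what "supplying both" could mean for a buildpack (the table and file names below are a sketch based on how the legacy and new SBOM mechanisms are commonly described, not quotes from the meeting or the spec): the buildpack keeps emitting the legacy BOM tables that older platforms surface via the image label, while also writing the new-format SBOM files.

```toml
# Hypothetical sketch of buildpack output supplying both formats.

# launch.toml: legacy-style BOM table (deprecated; what old platforms
# roll up into the image label).
[[bom]]
name = "node"

[bom.metadata]
version = "16.13.1"

# Alongside this, the buildpack would also write the new-format SBOM
# file(s), e.g. a CycloneDX document such as <layers>/launch.sbom.cdx.json,
# which new platforms consume instead of the label.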
A: Our proposal is to patch the lifecycle so that we can unbreak people in a way that is compliant with the spec, but maybe not fully described in the spec, and then add more detail to the spec so that people know what happened.
A: That is one way to do it. We're proposing doing it in the other direction, because we have actively created a problem where buildpack authors are stuck and cannot move forward, and the change we want to make to the lifecycle is not going to break anyone. It will fix people, and it is in compliance with the existing spec. Although we'd like to add more details that describe the deprecated behavior, we can make this change while still being 100% compliant with what's in the spec right now. The lifecycle has to do everything the spec says the lifecycle has to do, but the lifecycle can do something more if it wants to. Maybe we also want to put that in the spec, and we will, but I think getting a patch out to fix people sooner rather than later is the responsible thing to do here.
C: This happens, right? This is the thing that happens. I guess it strikes me as a little weird to be, like, underhandedly, under the table, basically knowingly doing this thing and underspecifying it. Sure, the lifecycle could add any functionality that does stuff that's not in the spec. I don't know, it just feels like a weird stance to take.
A: If the behavior that we say exists and is deprecated in 0.8 also exists and is deprecated in 0.7, fine. I think it might be perennially confusing if it's missing in 0.7 and then is reintroduced as deprecated in 0.8. I think that's a worse long-term position for us to be in, even if the process did not unfold as perfectly as we'd like it to.
F: We could formally denounce that version when we release the next version, and then document that we're providing a migration strategy from it.
B: No, you can update your platform to 0.8. Have your buildpacks be on 0.5 or 0.6; they'll still put out the old format, and that still gets put on the label. So you updated the platform to 0.8: it's fine, it's running. Then you slowly update your buildpacks from 0.6 to 0.7. No information is lost in the middle. That's how we did the upgrade, and it was painless for us.
B: I mean, that's the issue with anything, right? Unless you've updated your lifecycle, or unless you've updated your platform API... how can we introduce new features in the buildpacks API while expecting them to work on all past versions of the platform API, when we have forever frozen that platform API?
A: Yeah, we've actually managed to do that to a large extent, as long as you have the newest lifecycle. There's no world where you can freeze the lifecycle and update other things; the lifecycle always has to be out front. But up until this point, we've made a lot of breaking changes to the buildpack API, and none of them removed things that were then visible to the end user through the platform. It's been decoupled up until this point, but this is a case where it's not decoupled.
E: Had we done this change the way we did other changes in the past, we would have taken the data in the SBOM file format and, on older platforms, put that in the label. But that would have definitely created labels that are too large, so that path was just by definition cut off for us, and that's kind of how we got here.
A: You'd have to know what kind of platform it's running on. So it will write both, and the old stuff will just get dropped on the floor on a new platform.
C: Right, the buildpack doesn't know what platform you're on. I guess that's the idea: as a buildpack author you don't have to. But presumably I did work to write that BOM, and the fact that it just gets dropped on the floor is a thing that, if I were the author of that buildpack... you're saying I should care, but then not care.
A: Yeah, I mean, we can, if people feel really strongly about it. Part of me feels like the warning is very useful if you only have the old one, because clearly you haven't realized that you should be migrating and doing the new thing. But if you're doing both, I feel like: yeah, you're doing the thing that we expected you to do here.
A: I can speak from the perspective of Paketo, and I feel like it would apply to a lot of other buildpacks as well. Those buildpack authors want to make the jump so they can provide the new stuff on the new platform. The problem is that people use their buildpacks on all platforms and expect the old behavior to work; they don't want to break all of the people on those platforms, and they also don't want to have to start maintaining two versions of all the buildpacks.
A: I don't want to continue talking about it explicitly, but I think we need to know whether it's okay to move forward with actions after this meeting. So if someone has, you know, a show-stopper problem with it, maybe come talk to me after the meeting if we don't talk about it any more here.
G: I think someone needs to write down what is being proposed here, because it's not entirely clear to me, and I'm basing what I understand off of what's in the spec PRs. I don't know if that's an RFC, or a clarification in the spec, or what, but let's clarify that. I think that's the next step.
B: I still have to make a few changes, but I will try to summarize what Stephen, Terrence, and I discussed a while ago on this RFC, and see if that makes sense. Okay: the idea is that, as a buildpack author, you may want to create outputs in the final application image in paths that are not /layers or /workspace.
B: The use case being that certain pieces of software require files to be in certain paths and they're not relocatable, and these paths are technically rebase-safe. You're not doing the whole buildpacks stack-Dockerfiles RFC, where you're changing the system-level layers and adding new files to folders where there already were files; you're not making changes to /usr/lib or something like that. You're making these kinds of changes in a separate directory, maybe something like /opt/my-extension.
B: You want them to be persisted in the final image, and a single app directory is not enough, because it just fixes you to putting everything under /workspace. So the idea would be that a buildpack can choose which of the extra workspace directories it wants to output things in, and the way it can do that is by declaring the kinds of workspaces it's compatible with.
B: Names are subject to change; these are just placeholders. But the idea is that these are the names of the additional workspace directories, so the default one would just be called "default". But now let's say you want to have an aws-extensions workspace directory, or a user-config directory, and so on.
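A minimal sketch of what such a declaration might look like in buildpack.toml, using the placeholder names from the discussion (the `workspaces` key is hypothetical and not part of any released spec; the names are just the examples mentioned above):

```toml
# buildpack.toml (hypothetical sketch; "workspaces" and the names below
# are placeholders from the discussion, not spec'd fields)
api = "0.7"

[buildpack]
id = "example/aws-extensions"
version = "0.0.1"

# Additional workspace directories this buildpack wants to write to,
# beyond the default app directory.
workspaces = ["default", "aws-extensions", "user-config"]
```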
B: So, as a buildpack, you're defining: okay, these are the kinds of workspaces that I want to write to and export out. And then on the stack side, or whatever your base build image or run images are if you're removing the concept of stacks, you can define what these workspaces map to in terms of actual file locations.
B: So you can say that the aws-extensions workspace maps to, say, /opt/aws or something; maybe your user-config maps to /home/cnb/.config; and so on, in the base build and run images. If they are not defined, then the lifecycle will just use /workspaces (with an additional "s" at the end) plus the name of the additional directory, so it will be /workspaces/aws-extensions, and so on. And the way it would work is that the lifecycle will simply take whatever is output into these workspace directories and put it in the final image.
B: I also added a few compatibility clauses on how you could potentially implement this in platforms like Tekton, which have a single volume. Instead of mounting the workspace as a volume, you'd mount /workspaces as the default volume, and the lifecycle, before it has dropped privileges, would just create some links between the appropriate /workspaces/&lt;name&gt; directories and the directory names that were mapped in the base build and run image, and then export these things out when it's running the exporter task. That's the gist of it.
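A tiny shell sketch of that single-volume shim, with temporary directories standing in for the mounted volume and the image filesystem root (the paths and the aws-extensions mapping are illustrative assumptions from the discussion, not spec'd behavior):

```shell
# Stand-ins: $volume is the single mounted /workspaces volume,
# $root is the image filesystem root.
volume="$(mktemp -d)"
root="$(mktemp -d)"

# Before dropping privileges, link the named workspace to its mapped
# location (here: the aws-extensions workspace maps to /opt/aws).
mkdir -p "$volume/aws-extensions" "$root/opt"
ln -s "$volume/aws-extensions" "$root/opt/aws"

# A buildpack writing to the mapped path lands in the single volume,
# which is what the exporter task later reads.
echo "some-tool" > "$root/opt/aws/cli"
```

Anything written under the mapped path ends up inside the one mounted volume, which is why a single-volume platform like Tekton could still export it.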
B: For the first question, I don't have a strong preference, but there were arguments that if you're an aws-extensions buildpack, you know that you want to output something in /opt/aws, so why would you put it in /workspaces or wherever, where that might break things? So that was one argument for having defaults set by the buildpacks, which can be overridden by the stack side. And the other one was about multiple buildpacks sharing a common workspace directory.
B: The larger question was the default mappings for these directories to exact file paths. The pro of this is that it allows you to output things after the build process, as opposed to the current Dockerfiles RFC, where that happens before the build process. So if you want to decide how to construct the output image after the buildpacks have run, you can now do that. All of these changes are rebase-safe, so you don't need to worry about what would happen if, say, I added some extension and now I'm rebasing the base image: would I lose some of the mixins, or something like that?
B: So that's what the proposal was. There were a few other extensions to the proposal which I have not yet included in the RFC. For example, a common problem with buildpacks right now is that when you mount the app directory, the first buildpack that decides to wipe the app directory can do that, and then all your subsequent buildpacks have no way of reading the original source code or performing some other build processes on it. So let's say you have a Go buildpack and a JavaScript buildpack. Your JavaScript buildpack is responsible for building the front end, and your Go buildpack is responsible for building the back end. The Go buildpack, for whatever reason, just wipes the entire app directory after compiling its assets, and then, when your JavaScript buildpack tries to take a look at the app directory, it just says: oops, there's nothing there.
B: As a buildpack operator, you have to be very careful about the order in which you keep your buildpacks, and you have to introduce conventions such as a BP_KEEP_FILES variable or something like that to prevent your Go buildpack from removing certain directories. So the other part of this RFC was: what if we could mount a read-only version of the app directory in a separate volume that's used for detect and as an input to build, while the buildpacks output their final artifacts in the normal workspaces/default directory? They can choose to do that. The read-only volume will not be exported out to the final image, so buildpacks have to selectively choose to keep files, as opposed to everything from the mounted volume being kept by default. There were again some controversial opinions on this: what if I want to keep some extra files? Now I need to set an environment variable to tell my buildpack to keep all of these extra files in the output image, and so on, which could be solved by a project descriptor.
F: A question on the original RFC: what do the buildpacks see? Right now the current working directory is the workspace directory, and now there are going to be multiple workspace directories. Is this a big breaking change where they see a bunch of folders inside of /workspaces in the current working directory? Are they moved inside of the new one, and if so, how do they find the other workspace directories?
B: They would continue to see that as an app directory. If you're on a newer buildpack API that does support this, you'll be passed an additional environment variable with the workspace name and the path to where that is. Similar to the recent RFC we have on removing some command-line arguments to the build API and moving them to environment variables, we'd follow something along those lines.
B: We can figure out whether we want to pack all of these additional workspaces into one environment variable that's just a JSON string, or whether we want it to be something like CNB_WORKSPACE_&lt;workspace-name&gt;, holding the file location where the buildpack should output. And the buildpack can expect, from its buildpack.toml, the list of names that it declared it should get access to, so it can use that to map the environment variables that it needs to read.
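As a sketch, a buildpack's bin/build script could consume such a per-workspace variable like this (CNB_WORKSPACE_AWS_EXTENSIONS is a placeholder name from the discussion, not a spec'd variable; a temporary directory stands in for the real workspace path so the snippet is self-contained):

```shell
# Fall back to a temp dir so the sketch runs outside a real build.
out="${CNB_WORKSPACE_AWS_EXTENSIONS:-$(mktemp -d)}"

# Write an artifact into the declared extra workspace instead of /workspace.
mkdir -p "$out/bin"
printf '#!/bin/sh\necho hello\n' > "$out/bin/mytool"
chmod +x "$out/bin/mytool"
echo "wrote $out/bin/mytool"
```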
G: Got it, yeah. So what I expected you to say, but I don't think you did say, was that you could have one environment variable that pointed to the workspaces (with an "s") directory, and then the directories under that would map to the names that you put in the buildpack.toml.
B: If you write to one, you automatically write to the other; it's just that if you use that workspaces/&lt;name&gt; convention and you're writing config files or whatever, you have to be careful. Let's say you set a PATH variable, or one of your software packages expects /opt/aws to be on the PATH: now you have workspaces/aws-extensions on the PATH instead of /opt/aws. That was the only confusing bit, if you're setting environment variables.
C: What does this look like in your aws-extensions example? You have the value of /opt, but there's other stuff potentially already in /opt on the base image, so how do you know what's changed? Because in the normal case, if it's assembling to an actual directory, how are you going to know what's been created by the buildpack?
C: Like, if I created a new binary and put it in this aws-extensions directory that symlinks to /opt, and there are a hundred other files in /opt and I create a new file, how do I know what's changed for the lifecycle to export?
C: Did you consider just having extensions run after the build process? Would you be able to get the same value out of that and, you know, reduce complexity?
C: Just a time check: we have nine minutes left. Do you want to keep talking about this, Sam, and punt the other topic? I know you wanted Steven around for the runtime... the run-image selection thing. Are you okay punting that to next week?
G: I feel like I need some time to read the RFC. I haven't read it yet, so maybe we do that and then come back to it, unless I'm the only person that still needs to read it. But no, this is great. I was just going to say I feel like you're describing problems I kind of knew I had, but the aws extension thing is actually something I'm dealing with right now and just kind of fudging in the build and run images, so yeah.
F: This still leans a little towards adding a new special thing, to me, as opposed to extending. I think the last time we talked about this, we talked about whether there is a way to make the concept of one app directory a little more generic, and whether we can use that in order to say, well, /opt could be an app directory on some platforms. I think this is better than past proposals, and moving towards that.
F: It's a simple API, especially if, like Joe said, we have a workspaces environment variable and the buildpacks can find their directory under it. I like the simplifications; I think it's heading in the right direction. I still worry that it doesn't quite feel like... we had an app directory before; this isn't adding complexity, it's just turning that into a more generic concept.
F: I think it's probably possible to get further in that direction with maybe just some name changes, and maybe the ability for other directories to be supplied at build time, just like the app directory is. I think that could maybe help. I also do wonder, because we support rebase-safe extensions...
B: ...we could split up the chunk of work that is there to support the whole Dockerfiles RFC. You originally had two RFCs, even: one was for dynamic run-image selection, and the other one was for Dockerfiles. But we found out that we could just do dynamic run-image selection through the Dockerfiles thing, using the FROM instruction: the last FROM instruction wins, and that becomes your new run image.
B: The idea was that we could first build out the parts of the lifecycle that simply do the orchestration of the extensions, so they run detect, build, etc. for the extensions, and then coordinate passing the provisions and requirements between the buildpacks and the extensions, and so on. But the extensions can only output Dockerfiles with a subset of the Dockerfile instructions.
B: You can't use COPY, you can't use RUN, but everything else seems fine to me, like setting environment variables, setting the base image, and so on. So let's say you set a new run image. It's not about changing the build image, that's sort of outside the question; it's only about changing the output run image for the app image.
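A sketch of what an extension's output might look like under that restricted instruction subset (the image reference and values are placeholders, not taken from the RFC):

```Dockerfile
# Allowed under the subset discussed: switch the run image, set env
# vars, add metadata.
FROM example.com/run-images/minimal:latest
ENV MY_EXTENSION_ENABLED=true
LABEL com.example.extension="aws"

# Not allowed under the subset discussed: COPY, RUN.
```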
B: So let's say the extensions decide that. The common use case: you have a single builder that can run different kinds of buildpacks, and one of those is Go. When you're running the Go buildpack, you want the output base image to be a scratch image; when you're running anything else, you want it to be a proper operating system.
B: Now, in the current situation, what you have to do is create a separate scratch builder including just those specific languages, like Go or native JVM or whatever, and then you include the same buildpacks in your normal builder, even though, for most people, the scratch builders are technically the better option. Now you have two builders, and the user has to be cognizant of the fact that, hey, I'm building a Go application.
B: If it finds any non-compliant instructions, the lifecycle simply fails, and that can happen before the build step. It won't be expensive, since all it's doing is recording some config values for the output app image and then recording the run-image argument to be passed to the exporter and the analyzer, I guess.
B: That was what people were worried about the last time we discussed it: you're introducing the Dockerfiles format, but then you're just allowing people to set environment variables and, like, arguments and base images. But if you want to build towards a common API, and if you've settled on Dockerfiles, it seems weird to introduce a new API just to say that.
F: You know, it's a common pattern to treat a Dockerfile as just a way of specifying a file, and so I'm not opposed to it, because of that. If it were more strictly defined, I'd feel pretty weird about it, but because of the prior art it doesn't bother me too much. It still...
B: That's the other question we had: does this need to be an RFC, or does it just need to be, take your RFC and merge it, and this is just how the lifecycle team decides to implement it over the course of different buildpack and platform APIs? The RFC would be implemented as a whole at some point in the future; it's just that in the middle, this is one chunk of work, and then you do the next chunk of work.
D: Wouldn't the spec need to be able to say that it supports certain Dockerfile instructions? Yep, so amend it on that, right. Also, since we're talking about different things, and I know we're at time, but what about labels? Is LABEL something that would be supported in Dockerfiles already?
B: I think we can include a conditional clause that whatever parts of the RFC are implemented must work on our standard Kubernetes clusters under a normal security profile, so that at any point when it's implemented, that would cover all of our bases. So this part can be implemented, and it'll work on all standard clusters with a normal profile; then you get to when you want to implement the exporter.
B: I think Natalie already has a request up to do the orchestration bits, or at least it's in draft. So that's why I was thinking that would be an easy and quick win.
B: It's not something that we have to prove out; the kaniko build implementations might take a while, and this is just orchestrating binaries, which we've been doing with the buildpacks API.
F: Sounds good to me. I can add a note in the RFC that says these parts will be implemented first. If you wanted to make it go faster, you could PR some notes to my RFC there, yeah.
F: It'd be a pretty confusing RFC to split it into "here's the whole extensions proposal, but with no file system changes" and then "here's the extensions proposal with file system changes"; if you came back and read the RFC later, it would be pretty unusual. And also, I think we're pretty much ready: with the implementation thing resolved, I think I'm ready to approve, at least... or, I guess, it's my RFC, so I'm ready to say people should look at it.
A: Well, we're quite a bit over time now. Should I wrap up and come back to this?