From YouTube: CNB Weekly Working Group - 2022-06-16

B
Nice, all right. I saw that Javier kicked off the live stream; thank you very much, Javier. I'm going to skip over introductions, because I think everyone's been here before, and move on to release planning and updates.

B
I know Natalie is going to join us later, but she's running late today, and I don't see anyone from the implementation team, so we can hold updates there. Maybe we'll skip right to platform. Javier?

C
All right, yeah. Let's see what we're working on. pack is scheduled for a release; we had an RC sent out last week. We're working on delivery pipelines, specifically for Linux, and trying to get that all squared away before we actually ship. We also got a report on Windows that we're investigating and would like to get taken care of, to make sure our pipelines are nice and healthy on that side.

B
Moving on to our first agenda item today: user-provided processes in a post-shell world. I threw this on here because we're working on rolling out the implementation of this RFC. Let me share it for a second.

B
There is a lot of detail in this original RFC about migration steps for buildpack authors, and migration steps for platform authors, dealing with the removal of the .profile functionality. I think the one thing we probably did not flesh out, and we should have, is the migration steps for users who are providing the process on the fly, at the last minute.

B
Right now we have a special syntax for indicating that you're creating a direct process, but the default is a bash-evaluated process. I think we wouldn't want that to be the case here; it'd always be a direct process. But moving from our double-dash syntax to not having it would be a change to the user interface, so I wanted to work with this group to talk through the migration steps. One of the things I'd like to propose is that...

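A minimal sketch of the two current behaviors being referenced, as a platform user sees them when overriding the container command (the image name and commands are placeholders):

```
# Default today: the provided command is bash-evaluated, so shell
# features like variable expansion work.
docker run --entrypoint launcher my-app 'echo $HOME'

# Double-dash syntax today: everything after -- is executed as a
# direct process, with no shell involved.
docker run --entrypoint launcher my-app -- echo hello
```
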
B
That's the upside of it. The downside is that the syntax we chose does have some limitations. I don't know in what case you'd be starting your process with a double dash in the real world, but there are limitations on what you could do there. So it's: do we want to remove this thing entirely, which is ugly, or do we want to provide compatibility in this case? And then, how do we want to roll out the changes? I feel like...

B
But I wanted to talk that through with platform authors, to make it clear that that sort of platform API upgrade would carry all the way through to users. We've done launcher interface changes like that in the past, but it's always unfortunate when platform API changes leak through to users that way. I'm not sure what the other options are, other than having some explicit way of choosing between the two behaviors, but then it'd be harder to remove later, right?

B
Right now, if we deprecate and remove that platform API, that's the day we can rip out all the shell logic for real and get to a place where we can have a 1.0 with no bash logic baked into the launcher. If we didn't encapsulate it in the platform API, that gives you some more options, but then there's more thought that needs to go into deprecating it.

B
It could be viable. We would need to investigate letting users set that at launch time, so you could downgrade it and get the older behavior. I think we need to look into whether that would work with everything else that's happening with the files baked into the image. We said that was possible last time we changed the launcher interface; I think we need to evaluate whether it's still possible in the future.

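A sketch of what that could look like for a user, assuming the lifecycle's existing CNB_PLATFORM_API environment variable were honored by the launcher at run time, which is exactly the open question here:

```
# Hypothetical: pin the launcher to an older Platform API at run time
# to get the previous shell-evaluated behavior back after an upgrade.
docker run -e CNB_PLATFORM_API=0.8 --entrypoint launcher my-app 'echo $HOME'
```
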
C
It is, but it isn't, right? And I think the mention you made of an environment variable, or something you could pass to it to determine its behavior, actually seems ideal, because as a platform you expect a certain interface with the launcher in the final image you're creating. I mean, I could bring up hypothetical scenarios in which these cases might be affected.

C
If
you
are,
you
do
have
an
intent
platform
where
you're
building
source
code
to
an
image
and
then
running
that
image.
Within
that
same
platform,
you
might
allow
users
to
have
a
gui
interface
to
pass
additional
arguments,
and
that
in
itself,
might
you
know
hide
the
fact
of
like
this
double
dash
scenario
and
all
that
stuff,
and
that
could
very
well
potentially
break
right
if
they're
doing
something
there
and
the
problem
is
we
don't
know
what
platforms
made
out
may
actually
be
using
that
sort
of
workflow.
B
Let me come back to the environment variable thing if we need it. That definitely adds complexity, though, so I don't want to do it unless we actually need it: from a technical perspective, if the platform API is good enough, just use that; but then also, from a platform perspective, you don't want to be solving for a problem people don't have and adding complexity.

C
Yeah, I can definitely see that. I see the attractiveness of having this sort of contract where you're able to pass in a platform API and it actually switches behavior, but I understand the complexities behind it as well.

B
Deprecating,
sorry,
we
need
to
start
deprecating
older
platform,
apis,
making
a
plan
to
actually
remove
them.
So
we
can
give
natalie
a
break
on
supporting
everything
under
the
sun
in
the
life
cycle.
E
That's an end-user-facing, app-developer-facing API. They expect those arguments to be interpolated by a shell when passed in a certain way, and not interpolated by a shell when passed in a different way, and now we're introducing a third behavior, where arguments will not be interpolated by a shell but environment variables will be substituted.

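The three behaviors, summarized as one sketch; the third line is the proposed one, and its exact syntax is not settled:

```
docker run --entrypoint launcher my-app 'echo $HOME'     # today, no dashes: bash-evaluated
docker run --entrypoint launcher my-app -- echo '$HOME'  # today, double dash: direct, $HOME stays literal
docker run --entrypoint launcher my-app echo '$HOME'     # proposed: always direct, but the
                                                         # launcher substitutes $HOME itself
```
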
B
I guess I'm hoping it'd be nice if you could build with one platform API and then maybe change it at launch and get whatever launch behavior you want, so that we don't need to think about a third API and the evolution and combinatorial overhead of people considering that.

B
Across the entirety of the lifecycle, I think it's always been a little bit weird that there's special shell behavior built in. Every time we want to do something on Windows, it's "what's the Windows version of this?" It's weird that the lifecycle has a dependency on a specific version of bash.

E
I think a proper migration path for this would be to rename the launcher: keep the old launcher, which is a symlink, the same on the path, and then have a new name that has the new behavior, so people know what they're getting into. Then, if they're using this new API, they're not calling it the launcher; they're calling it...

E
I'm not saying another binary, I'm just saying a different symbolic name, and the behavior changes based on that. The same way we have different process types, and launcher is one of those process types. Well, it's not a process type, but every process type has a symlink to the launcher, and the behavior differs. So this thing would just be something new, and if you invoke the launcher directly, it says: invoke this new thing instead.

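A sketch of the idea: process types are already symlinks to the launcher, with behavior keyed off the invoked name, so a second name for the launcher itself could carry the new semantics. The new name below is purely hypothetical:

```
/cnb/lifecycle/launcher                                    # the real binary
/cnb/process/web    -> /cnb/lifecycle/launcher             # existing process types
/cnb/process/worker -> /cnb/lifecycle/launcher
/cnb/lifecycle/launcher-direct -> /cnb/lifecycle/launcher  # hypothetical new name
                                                           # carrying the new behavior
```
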
B
Yeah, I need to think about the migration steps for the launcher more clearly. I both understand what you're saying and why, and kind of agree that it's necessary, but I also wish it were less complicated than that, and I think if we're doing something like that, we're going to need to plan it carefully. Yeah.

E
I think every time we merge an RFC, we should go through the spec and see all the changes it would need, and if something is not reflected, we should make sure we talk about it during the RFC process. Because I see this as a big part of shell removal, but we just forgot about it, and I think at this point we probably need another RFC to figure out how to deal with it.

B
It doesn't have to happen in one go: we could roll out just the buildpack-facing parts of this, the argument-handling parts, and then anything that bubbles through to the user, like the environment variable syntax and the removal of the dashes, we could roll out separately.

B
What have we got here? Natalie, want to talk about the Dockerfile spec PR? I think any time we talk about Dockerfiles, it can take the remainder of the time. So, if we wanted to get it in: do you know if the project scaffolding topic is a fast one or a slow one?

D
Yeah, no, I was just complaining about all of my woes getting set up. I have my laptop on a pile of books right now. But let me share my screen: where is it... this one?

D
So you can see GitHub... oh no, I don't have it. Okay, just to give an update on where we are with Dockerfiles. This doesn't really show you anything, but basically we had said that Dockerfiles would roll out in phases, and the first phase would just allow extensions to provide a run.Dockerfile that switches the base image that would be used during export. And so, you know, we had a proof of concept.

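For reference, the phase-one case is the simplest possible Dockerfile: a run.Dockerfile whose only job is to switch the base image used at export time. The image reference is a placeholder:

```dockerfile
# run.Dockerfile: phase one only allows switching the run base image.
FROM my-registry/my-custom-run-image:latest
```
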
D
I wanted to share the things that I've learned and get feedback, so it probably makes sense to start with the platform spec. I know Javier left some comments. What am I showing? Yeah, it's changed a little bit since I first opened this PR a couple of months ago, so I wanted to kind of go through those changes. The first sort of big change from when I showed the proof of concept is that I realized we don't actually need a generator.

D
As far as a separate entry point for the platform, that is: detect runs, you get a group of extensions, and then in user space you can just run each of those extensions. Like detect, generate does not require privileges, right? So an extension has a bin/detect just like a buildpack, and it has a bin/generate, which is similar to a buildpack's bin/build.

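So an image extension's layout mirrors a buildpack's. A sketch, with the descriptor name assumed by analogy to buildpack.toml:

```
my-extension/
  extension.toml   # id, version, API; assumed analog of buildpack.toml
  bin/detect       # runs unprivileged, like a buildpack's bin/detect
  bin/generate     # analogous to a buildpack's bin/build; emits Dockerfiles
```
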
D
And then you have the extender that comes in, takes the generated Dockerfiles, and applies them. That requires privileges, but the actual extension doesn't, right? So to make it easier on platforms, and just for everybody, I thought: okay, the detector can just take some extra arguments.

D
It needs, and I discuss this further down, an output file containing metadata describing what was generated. And then this I just thought would be nice: to actually get the Dockerfiles, if you want to look at them, you need to put them somewhere the platform can save them off.

D
All right, I'll keep going. Here is where I describe what each of those is for, and maybe it makes sense at this point to look at the schema changes as well, because you're kind of stepping through what a platform is providing and what it's getting back.

D
I wanted to show those, so let me scroll down to the bottom; I know Javier had some comments. The first would be: a platform, as its entry point, is going to provide the order.toml. Right now it would provide an order that just contains buildpacks and an order that just contains extensions.

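A sketch of what that order.toml could look like, with the two lists side by side; the exact key names were still under review at the time of this discussion:

```toml
[[order]]
  [[order.group]]
  id = "samples/java-maven"
  version = "0.0.1"

[[order-extensions]]
  [[order-extensions.group]]
  id = "samples/curl"        # an image extension, not a buildpack
  version = "0.0.1"
```
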
D
This order for extensions, you can think of as a meta-buildpack for extensions, right? It's just going to be placed in front of every buildpack group when we run detect, and we see if we can resolve a valid build plan by doing that. After detect runs, you then get two files. You get your plan, which is now going to contain, in the providers, a field to tell you whether that provider is an extension or not.

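A sketch of such a plan entry with the new provider field; the field name is assumed:

```toml
[[entries]]
  [[entries.providers]]
  id = "samples/curl"
  version = "0.0.1"
  extension = true   # new field: marks this provider as an extension
```
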
D
Javier had the suggestion that you could have a separate group for extensions, which I think is nice: you have a separate order for extensions, so that parity feels good, but I would need to update the code to reflect it. You're going to get these as outputs of detect no matter what, and then, if extensions detected, we'll run bin/generate for each of the extensions in order, and the output will be this generated.toml. Where, I was actually thinking today...

D
Instead of the kind = "run" field, it might be nice to group them by run and build, so that you can say: do I have any build Dockerfiles? Oh, I don't even need to run the extender on my build image. Anyway, this schema, I think, could be made better, but the idea is to say what happened, right?

D
What Dockerfiles actually got generated, the extension that did it, and the path to find it. And then finally, the output directory, as I mentioned, would contain those actual files.

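A sketch of a generated.toml along those lines, grouped by run versus build as floated above; all key names are assumed, since the schema is explicitly still open:

```toml
[[run]]
  extension = "samples/curl"
  path = "run.Dockerfile"

[[build]]
  extension = "samples/vim"
  path = "build.Dockerfile"
```
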
C
Sorry, the files: there was a file where you made that comment specifically?

D
I thought about this a little bit just now. You don't have to worry; there's no point at which they would collide. But an earlier version of the RFC proposal said that extensions should be able to provide SBOM files, and that would have been a problem, because we copy those to layers/sbom/build/<buildpack-id>, right? And in that case, sorry, we only have one spot, but...

C
Yeah, I think trying to prevent those from colliding is actually the wrong approach. Again, wherever we use a buildpack ID, we should be talking about a buildpack. We should never have to think: is it a buildpack or an extension? If we're having to think about that, it's just going to create a lot of confusion.

B
Would we ever want to put them in the registry? Right now the registry is just for buildpacks; I don't think there's a desire to do extensions in the registry. That's another area where you might want IDs to be global. I'm kind of okay with them being scoped to different things; we can migrate the registry to have buildpacks in one section and extensions in another. Just trying to throw out all the questions here.

B
Maybe we want to do this instead because it's better than our directory-structure API. But if we just wanted to limit inputs and files and things to think about: is there a world where we don't need this, and instead we just write all the extensions' output to a path that includes the extension ID, or maybe starts with numbers so you know what order to run them in, that kind of thing?

D
We would know the order. As you're speaking I'm thinking: we would know the order based on the group already, right? And the way this proposes it, the output directory... oh gosh, I don't think I put it in, but it would be...

D
I need to make this more explicit, but it would be the output directory, and then it would kind of mimic the way the layers SBOM works. So it would be like output/generated, then maybe run/<extension-id>, and then there'd be a Dockerfile in there. So yeah, you could do away with generated.toml; you'd just have group.toml and the directory structure. I like it.

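That layout, sketched out, mirroring the layers SBOM convention; the directory names are assumed:

```
<output>/generated/
  run/
    samples_curl/
      Dockerfile
  build/
    samples_vim/
      Dockerfile
```
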
D
The only other thing I wanted to call out here is that the run image the analyzer writes into analyzed.toml is a digest, and the generate phase doesn't have registry credentials, so you can't provide a digest, but it's sort of...

D
Right, but that would always be the case. So, what happens after the generate phase: you build, then you export. At no point did you actually read anything off of the run image until the export phase, right? The only time it would be important to know the digest is when you're going to start to mutate the run image, at which point...

B
Yeah, I think at the end of the extender we need to lock the output down to a digest, right? And you can imagine cases, thinking about some of the proposals for how to roll out stack removal, where, at least in the case where we're not running these extensions, there are suggested labels that we need to read, or we need to read things about the image: what's the OS version, what's the platform, so we can check compatibility with buildpacks.

C
The directory structure at the end, right. Before, I guess, we removed the generator, I was actually going to propose the same thing for the lifecycle: instead of there being an analyzed file and a generated file, what if you just give it a directory, like: here, lifecycle, put all your crap here. Because the platforms don't actually care for those files.

B
I think what you're saying about the output directory really resonates with me, but I feel like we have it to some level, and it has the unfortunate name of layers. Do we just want to put this in the layers dir, because it's nice for platforms to mount one volume to the one place?

C
There's a counterpoint to be made there, if you wanted to break that into multiple volumes: if you wanted the contents within it broken up into different volumes for different reasons. Let's say we put order into that layers directory, which it is, right? But if you didn't then have an option to say an order path, that becomes impossible.

B
Yeah, no, I think as long as the default is something in layers, which I'll admit I didn't notice the first time we were going through it, then I'm actually totally fine with it: we have these options, but it all defaults to being within this one volume, and everyone already knows about the one volume, so the default is simple.

D
The platform spec doesn't make mention of a generate phase as a distinct entry point, but for the buildpack spec I think it does make sense. This is just a whole lot of changes, but I'll also point out that I'm going to update the terminology, because of our old attempt to describe a buildpack that is not a meta-buildpack, which we're also not using anymore. But I think the actual hard questions are kind of buried in this sea of word changes. I don't know how much we'll be able to get to here, but I could maybe go through and highlight them, if that would be helpful to reviewers: here's where I really need your feedback on the technical questions. Although it mostly reflects what's in the RFC.

B
I
have
to
think
harder
about
this
like.
Is
it
the
order
definition
for
the
build
pack
or
is
it
the
order
and
it's
the
platform's
order
right?
I
feel
like
when
you're
talking
about
the
order
resolution
and
the
build
packs
back
it's
about
how
like
the
build
packs
order
can
be
expanded,
but
then
it
also
applies
to
the
ordered
tunnel
and
it's
not
in
the
right
place,
and
it's
like.
Oh,
my
brain
hurts.
I
need
to
read
it
more
carefully
if
only
we
had
cleaned
up
the
spec.
That's
what
we
say
all
the
time.
D
The
one
piece:
sorry,
I'm
just
okay
yeah,
so
this
is
the
image
extension.
So
basically,
we
said
that
image.
Extensions
are
they're
like
build
packs
except
they're,
not
build
packs,
but
they're
like
build
packs
right.
So
it's
this
kind
of
weird
relationship
that
they
have,
and
so
what
I
have
done
is
I
pulled
out
a
separate
file
just
to
describe
the
image
extensions,
but
it
says:
hey.
D
I
go
along
with
this
buildpack
api
version
and
it
points
back
to
the
build
spec
where
applicable,
so
it
basically
just
calls
out
all
the
ways
that
image
extensions
are
different
from
build
packs.
This
was
one
thing
that
I
inferred
from
the
rfc,
but
wasn't
written
like
explicitly.
D
I
think
stephen
had
written
that
when
you
have
a
when
you're
missing
a
bin
build,
you
should
treat
the
extension
root
directory
as
like
just
pre-populated
and
grab
the
docker
files.
That
might
be
there
already,
and
I
thought:
okay,
if
you're
missing
a
bin
detect
you
are.
D
You get an output directory, not to be confused with the platform output directory, although they are the same; it's just specified to buildpacks instead of to platforms. And then, if it's pre-populated, the lifecycle will copy the Dockerfile out of the root directory into the output directory that was provided. And that's it, pretty much.

D
So
I
don't
know.
I
I
just
like
really
at
this
point
just
to
just
kind
of
summarize,
where
we
are,
we
have
some
prs
that
are
making
their
way
through
the
review
process.
D
We
are
planning
to
ship
this
in
a
real
api,
but
as
an
experimental
feature
that
platforms
have
to
opt
into,
and
I'm
anticipating
that
we'll
have
a
life
cycle
release
candidate,
binary,
just
sort
of
out
there
and
you
know-
would
be
looking
for
feedback
on
that
before.
Actually,
shipping,
the
spec
and
the
life
cycle
itself.
D
Can we have a pre-release version of pack that consumes a pre-release version of the lifecycle? Because without a pack that can do this, how are you going to try it out?

D
Exciting. And then, just to speak a little bit further, because I mentioned the different phases: phase one is the easiest part, right, you just switch the run image. I think the next easiest will be to extend the build image, in-container: you don't have to pull it or push it anywhere, you just execute the Dockerfiles and then run buildpacks. I think that would be the next milestone to tackle. The extending of the run image is going to be hard.

D
We're
gonna
need
to
do
some
stuff
to
get
the
run
image
like
basically
into
the
registry
or
potentially
pull
in
new
rom
and
run
images
if
they're
referenced
by
dockerfile.
So
it's
just
a
little
messy
and
I
anticipate
that
that
will
take
longer,
but
there
you
have
it.
A
Yeah, in the testing I've been doing, I've generally been using an image that was a valid run image, because selecting from any image is great, but then you end up with images that don't have any of the other labels you expect on a stack run image. If the CNB user ID and group ID environment variables are missing from the image you pick, then obviously things don't work very well at runtime.

A
So
I
think
there's
some
of
that,
but
I
think,
on
the
other
hand,
when
you're
talking
about
your
weary
over
it
because
there'll
be
multiple
run
images
possible.
Well,
it's
only
going
to
get
worse
once
we
start
allowing
the
run
image
to
be
modified
by
the
dockerfire
one
and
allowing
the
builder
image
to
be
modified
by
the
dockerfile,
but
they're
all
things
that
need
to
happen.
If
we're
able
to
install
dependencies
via
apps
or
via
rpm.
So
we've
got
to
figure
out
how
to
deal
with
them.
Somehow.
A
Yeah, and I think it's one of those ones where, Spider-Man quote, with great power comes great responsibility. Once you get the Dockerfile capability, yes, you can absolutely shoot yourself in both feet simultaneously; but on the other hand, if you're careful, you can actually achieve great things and still maintain rebasability. It's just a matter of being cautious about what you do and understanding the implications.

B
I
think
it
will
really
matter
like
how
we
talk
about
this
feature,
like
I
think
I
see
its
power
sort
of
like
most
clearly
in
the
case
where
you're
like
having
a
smart
system
installing
runtimes
from
rpms
or
selecting
from
different.
You
know
base
images
that
are
already
built.
I
think
we
should
be
careful
about
like
trying
to
lean
into
this
is
like
an
escape
hatch
for
the
masses,
like
do
any
random
things,
I
feel
like
the
chances
of
all
of
that
working
out
correctly.
A
Yeah
I
mean
there's
great
potential
for
disaster.
There's
no
two
ways
about
it,
because
once
we
start
allowing
the
the
customization
in
the
docker
files
beyond
just
the
run
image
switch.
What
if
they
do
change
cnb
user
id
to
something
completely
daft,
what
if
they
replace
all
of
the
cmb
lifecycle
binaries
with
something
else.
You
know
it's
there's
a
there.
A
Exactly,
but
I
think,
that's
that's
kind
of
acceptable
in
a
way
because
it
does
provide
it.
I
mean
the
alternative
was
going
back
to
the
stackpack
stuff
that
was
like
two
years
ago,
so
yeah,
and
that
was
in
some
ways
more
restrictive
in
what
it
allowed
to
happen,
because
you
didn't
have
the
opportunity
to
switch
images.
You
only
have
the
opportunity
to
execute
root
level
commands
on
existing
run
or
builder
images,
but
we're
this
far
down
this
path,
and
I
don't
really
want
to
have
to
go
backwards
now,
because
no.
A
I think some of that will come down to how we document this once it leaves experimental. If we provide examples that show: these are the kinds of scenarios we thought this would be needed for, here are some simple examples that show it being used to solve that problem; please note you can go off and do whatever you like with it, but bear in mind here are some downfalls we can think of, and there are many, many, many more; off you go, and see where people get to.
