From YouTube: CNB Weekly Working Group - 24 Feb 2022
A
All right, should we check things off? First thing is introductions: do we have any new faces today?
A
I don't know — looking at the chat, I don't know if I've seen one of these before.
B
Hey, yeah — hey everyone. I've been here the last two weeks; I was at the last meeting, wasn't there before that.
A
All right, great. Next thing is release planning and updates.
A
Seems like no, so we'll move on to — sorry, any other subteam updates? Distribution? Learning? Nope.
C
Cool — on the learning side, we did discuss the possibility of adding friendly badges for contributors, especially as a way to encourage contributors or even mentees in programs like LFX or GSoC: being able to say, "hey, we did the mentorship with buildpacks," or "we contributed to buildpacks." So probably that.
A
Awesome — I guess some discussion for Slack.
A
All right, we'll move on to the first agenda item: Syft as a moving target.
D
This one — so I know everyone's favorite topic is SBOM compatibility.
D
I feel like one of the things we've been running into lately — I'm not sure we thought it through as a group when we were planning out these formats — is how to deal with the fact that these formats themselves are young and evolving through multiple major versions, like in our buildpack spec.
D
You know, we've said these media types are going to describe very specifically how you write a format that's for Syft, or a format that's for CycloneDX. But when those formats themselves have major versions, I was wondering: have we thought about whether we need a way to express those different major versions in the file names? There could be a compatibility window where you support both of them. Is this something anyone else has run into?
C
So I can't say much about Syft itself, but SPDX and CycloneDX have a strict backwards-compatibility guarantee around major versions, and they also put the version numbers in the document itself.
C
Any CycloneDX 1.x BOMs will always be backwards compatible. So if you use one of their official libraries to decode or encode things, you will always be able to decode a future version of the BOM with a past version. You'll be missing fields, but you will still be able to decode it and encode it back. There might be loss of information if you're using an older version, though.
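C's point about within-major backward compatibility can be sketched in a few lines: a decoder that only knows an older 1.x field set can still parse a newer 1.x document, silently dropping fields it does not recognize, and can re-encode what it does know. The document below and the "known field" set are illustrative assumptions, not real CycloneDX schemas or library APIs.

```python
import json

# Hypothetical newer 1.x-style document (illustrative, not a full BOM).
NEWER_BOM = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"name": "libfoo", "version": "2.1.0",
         # Field added in a later 1.x release; unknown to older decoders.
         "releaseNotes": {"type": "patch"}},
    ],
})

# Fields a hypothetical older-schema decoder knows about.
KNOWN_COMPONENT_FIELDS = {"name", "version", "purl", "type"}

def decode_with_older_schema(raw: str) -> dict:
    """Decode a newer 1.x BOM with an older field set: unknown fields
    are dropped rather than causing a failure, which is the
    backward-compatibility behavior described above."""
    doc = json.loads(raw)
    doc["components"] = [
        {k: v for k, v in c.items() if k in KNOWN_COMPONENT_FIELDS}
        for c in doc.get("components", [])
    ]
    return doc

older_view = decode_with_older_schema(NEWER_BOM)
```

Within a major version, only additive fields appear, so the older view is lossy but never invalid.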
D
So I feel like the problem might be more exaggerated in Syft, which is slightly younger. But even taking the CycloneDX example: when you move to 2.x there might be a breaking change, and that schema version is in the BOMs, so you can look it up and see that there's a breaking change.
D
I think what I'm curious about is: we've been pretty prescriptive about how you specify the file name for something, right? So let's say I want to add support, as a buildpack author, for the new v2 CycloneDX BOM, but I know there are integrations that people are depending on that are built around my buildpack producing the 1.x format.
D
As a group, we've actually tried very hard to make buildpacks very stable, such that you can depend on their output and we can still upgrade things. But now — I think, first of all, we haven't thought through what will happen when there's a breaking change in SPDX; we should figure that out early. But then I think there's another question, which is: by adding the Syft media type, we're sort of encouraging people to create this format.
C
The reason we introduced Syft in the first place was because of its usage with Grype. As of next month, Grype will be supporting both CycloneDX and SPDX as input formats, so the need for outputting Syft BOMs will hopefully become lesser and lesser. And as of last week, they also introduced a way to encode additional properties — things specific to Syft — onto both CycloneDX and SPDX. So I think the Syft folks at least want people to use the standard formats instead of the Syft JSON.
A
Going back to that more general question about how we manage major version changes in the SBOM files: I wonder if the problem isn't that buildpacks needs to come up with a way to support multiple major versions of different formats, but just that we've added support for providing multiple SBOM files, yet you can only have one SBOM file of a given type in some cases — like layers.
A
Right. Maybe it's not that we need to introduce a "<layer-name>.spdx.json.2" or something, but really that we need to introduce the ability to use a layer prefix, or a layer directory that is named the way a file would be and then contains multiple SBOMs. Because then you could put multiple Syft SBOM versions of the same stuff, multiple CycloneDX SBOM versions, right?
A
They could share the file extension, but that would be a change that, in the end, puts more of the onus to manage this on the buildpack authors. That's kind of the direction we're going with this whole thing anyway: we're not going to try to merge SBOMs. We want to give buildpacks kind of a maximum API they could use to produce these things in a way that we can deliver into something you can attest to.
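A's layer-directory idea could look something like the following sketch: instead of a single SBOM file per layer, a directory named the way a file would be holds several SBOMs of the same format side by side, distinguished only by name. All file and directory names here are hypothetical, not part of any spec.

```python
from pathlib import Path
import tempfile

# Hypothetical layout: a "<layer>.sbom/" directory rather than a single
# "<layer>.sbom.cdx.json" file, so a buildpack could ship e.g. both a
# CycloneDX 1.x and a CycloneDX 2.x document for the same layer.
def collect_sboms(layer_sbom_dir: Path, ext: str) -> list[str]:
    """Return the SBOM file names in the directory that share one
    format extension, sorted for deterministic ordering."""
    return sorted(p.name for p in layer_sbom_dir.glob(f"*.{ext}"))

with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "v1.cdx.json").write_text("{}")   # illustrative 1.x BOM
    (d / "v2.cdx.json").write_text("{}")   # illustrative 2.x BOM
    (d / "notes.txt").write_text("ignored")
    names = collect_sboms(d, "cdx.json")
```

The shared extension keeps the format detectable while the prefix carries the schema version.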
C
It could. I think the reason we didn't initially add the version numbers to the file name was because each of these formats has the version number inside the file itself, so you could figure out what the version was. But we never considered the case where a buildpack would want to output both v2 and v3, for example. I like Stephen's solution of just giving the buildpacks the ability to output multiple BOMs of the same format; that also gets rid of —
A
Sounds good; maybe someone will open an RFC. Next one is the cosign RFC from Sam.
C
One: I've completely removed the dependency on the existing buildpack phases and introduced a new phase, signer, which runs after export. It can be part of the lifecycle repository, or it can live somewhere else, like pack or wherever. It's just a Go binary — or any binary, really — that takes the following inputs: the report.toml produced by the exporter when it's run in OCI registry mode, and your cosign config.
C
There were a few comments around the cosign config containing the password and the private key inline. I removed those and made them paths to files, which can be mounted as volumes from secrets or whatever. I've also made the SBOM attachment option more generic.
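A minimal sketch of what the signer's config might look like under C's description, with secrets referenced as file paths so they can be mounted from volumes or secrets rather than written inline. Every field name here is an illustrative assumption, not the RFC's actual schema:

```toml
# Hypothetical cosign config for the proposed signer phase.
# All keys are illustrative assumptions, not a spec'd format.
[[sign]]
key-path = "/secrets/cosign.key"            # mounted, never inline
password-path = "/secrets/cosign.password"  # mounted, never inline
attach-sbom = true                          # generic SBOM attachment toggle
```

The signer would combine this with the output tags and digest in report.toml to build its matrix of signatures and attestations.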
C
Based on the cosign config and the output tags in report.toml, the signer will try to construct a matrix of the signatures and attestations it needs to put out. It will then reuse registry credentials the same way the lifecycle does, and export the launch bill of materials from the output image in the registry as an attachment or an attestation.
C
It will read the final SBOM from the output image, since that's the only thing we currently spec out in the platform API. The internal places where the lifecycle moves it around are not really something the platform should concern itself with, so that's why I just wanted to keep it to the output image itself.
C
But I think the lifecycle has a few utilities that would make dealing with multiple platform APIs easier. The other thing this raises is: where does this go in the spec — or does it go in the spec at all? And if it does not go in the spec, how do platforms with different platform APIs interact with this?
D
— the model of distribution, where it makes a lot of sense to put the lifecycle on the lifecycle image or on a builder image, because you have buildpacks there and you want to make sure the lifecycle is new enough to run the buildpacks that are there. Whereas I feel like the platform could be responsible for pulling in the version of the signer that's right for it. I feel like that one makes sense to be totally within the platform's control.
C
That's true: it's completely decoupled from the buildpack API and just dependent on the platform. What would have been nice, if we could ship it with the lifecycle image, is that we could have bundled it with builders — like pack create-builder — and you wouldn't have to worry about adding this signer binary in a trusted fashion. And ideally, this signer binary should also be run in that restrictive mode that we run analyze, restore and export in.
C
It depends on report.toml, but that's just a consequence of the platform API — it's entirely dependent on the platform API, but it's independent of the buildpack API, which is what the lifecycle currently mediates between. This is purely platform-facing, which is a first: I think we've never had to touch the platform API as a standalone thing; we've always used the lifecycle to tie the buildpack API to the platform API.
C
Once published, the RFC does a bit of that, so I think that would be a nice takeaway when we later talk about it. But as this RFC stands right now: any thoughts on whether this should be a new repository, or in lifecycle, or how to bundle it? How will we introduce it in the platform API, if we want to?
E
From pack's perspective, or from a platform perspective: if it's an optional thing that hypothetically doesn't need to exist for other distributors of a lifecycle — because, again, that is a possibility — then I would say it probably shouldn't be included in the lifecycle image, and should be treated as a standalone utility that pack could then fetch from wherever it may live.
E
Platforms like pack would know that it is within that location, as opposed to always fetching it from somewhere else if need be. I think that's a better approach.
E
I was going to say — I think we've talked about this — pack potentially needs both. It definitely needs the Go library aspect; that would speed things up and make a lot of sense. But in some cases I know pack is used as a development tool for a lot of these components, so being able to specify "hey, I want to use this cosigner that happens to be on this other image that I'm working on," right.
C
I mean, I think this would be in a very similar place as the rebaser, yeah — where it can be imported both as a Go library and as a standalone binary. The standalone binary would be useful for platforms like Tekton that just want to run this.
F
But I can see, thinking through how it's going to do the registry authentication: if it's going to bind itself to using the existing lifecycle CNB registry auth, then maybe it could belong in the lifecycle. But I guess I don't have a strong opinion on it; I think I still lean towards separate.
F
I'm not sure; I think they're internal, or at least some of them are.
E
Yeah — given that it's a different output, the thing that it's producing, and that way it could also have more autonomy for any updates it needs to make, and not have to be tied to a very specific lifecycle release, it might be smarter to go with a separate repo.
C
It's just the trust aspect of it: once you've pushed the image out to the registry, you need a trusted way to know the exact digest of the image that was published. And the input — the report.toml — would hopefully have been produced by a trusted export step and captured by the signing step. So it's just maintaining a trust boundary in how you're doing things.
F
You're
using
this
as
a
library,
you
would
like
the
platform
would
read
and
report
tom
will
potentially
not
have
to
pass
it.
A
report.
Comma
file
is
kind
of
what
I'm
getting
at
like.
If
we
have
sort
of
the
either
or
option
of
passing
a
file
for
like
tekton
sort
of
implementations,
and
then
platforms
can
pass
into
different
arguments,
then
that
feels
pretty
good.
D
I
feel
like
passing
in
a
report
tommle
versus
reading
a
report
tomorrow
and
passing
in
the
digest,
is
not
materially
different.
Trust
stamps,
like
the
fact
that
it's
got
the
same
file
name
and
schema
doesn't
mean
it
was
really
produced
by
the
exporter
and
if
it
keeps
the
you
know,
apis
easier
just
to
pass
in
that
one
piece
of
information.
Let's
say
we
do
that
instead.
A
I just want to interrupt for a second and say we're a little past half time and we have five things on the agenda. Can we move this into GitHub or Slack, or pick it up during office hours?
B
This one, okay — you can see GoLand right now, Chrome, okay. All right, so I think most of us are familiar with the support-Dockerfiles RFC from Stephen, and what I will show is the product of effort from myself, Jesse, Ozzy and Charles (who's not here) to implement, at a high level, what's described in the RFC.
B
So we made a test script to do that, and I'll just walk you through the script so you know what you will see. Right now we're assuming we're using a registry as our mechanism for interacting with images, and we build the lifecycle off this branch.
B
So we're just using Docker as our very simple platform, and I will show the fixtures over here. We're mimicking what the container will look like, with a cnb directory that has buildpacks and extensions; there's an extension that will install curl.
B
We
also
have
an
extension
that
really
does
nothing,
but
it
sort
of
demonstrates
the
concept
of
a
pre-populated
output
directory.
Where
the
extension
author
has
already
created
the
docker
file,
it's
just
hello
world,
and
then
we
have
a
build
pack
that
uses
curl.
That's
this
one
just
to
print
the
version
and
demonstrate
hey.
I
I
have
curl
right
so
when
we
run
detect
we're
going
to
mount
in
that
cnb
build
pack
cnb
extensions
directory,
we'll
also
mount
in
a
layers
directory
that
has
an
order.
B
Tomml
and
you
know,
here's
where
we're
saying
okay
run
detect
on
these
build
packs
and
extensions,
and
then
we
invoke
the
detector
and
at
this
point.
B
It's
actually
before
we
jump
into
the
demo
I'll
just
walk
through
the
other
phases.
So
the
next
thing
we
do
after
we
run
detect
we
get
as
an
output,
the
group
and
the
plan
we
are
going
to
provide
that
to
the
builder,
but
we're
going
to
tell
the
builder
only
run
build
for
extensions.
B
That's
what
this
step
is,
so
we
actually
the
binary
we're
invoking,
is
the
extender.
But
then
the
extender
afterward
is
calling
the
builder.
I
can
show
that
and
then
we
extend
the
run
image,
which
is
just
simply
basically
replacing
or
placing
into
the
registry
a
new
image
that
will
provide
to
the
exporter
later.
B
So
you
can
see.
This
is
the
extended
image
and
we
provide
that
as
an
argument
to
the
exporter.
When
we
run.
C
B
And
then,
finally,
we
validate
that
our
image,
our
app
run
image
is
extended.
B
So
I
think
to
show
I
guess
to
prove
that
this
works.
What
I'm
gonna
do
first
is
comment
out
this
samples,
curl
extension,
so
when
we
run
the
build
the
build
pack
that
needs
curl
should
not
find
it.
B
All right, so now — I think we just saw the build happen. It was really fast, but it completed successfully. Now we're extending the run image; again, it's slow, since it's all happening fresh.
B
So
that's
that's
where
we
are
there
if
anyone's
interested,
there's
lots
of
comments-
and
you
know
kind
of
notes
about
all
the
uncertainties
and
things
still
to
be
done,
but.
A
This is awesome. I had one question: I saw earlier, in the shell script where you were running the different lifecycle phases, that you were invoking the builder phase twice, as opposed to having a different phase.
A
Is
there
like
the
phases,
aren't
separate
binaries
they're,
all
sub-commands
of
the
same
binary?
Is
there
a
reason
to
reuse
the
same
sub-command
like
if
we
just
need
to
reuse
the
code
that
runs
build
packs
or
something
like
that
you
know,
could
you
is
there
I
don't
know,
would
it
make
sense
to
make
a
separate
sub
command,
slash
phase.
A
Got it — that wasn't an intentional design decision; because the APIs are similar, it was just the fastest way to get the demo out. Yeah, that makes sense.
A
Is there any difficulty switching back to the default user? I think the environment variables should still be there, right?
E
Yeah, I guess my question is: is it the responsibility of the extension to set the final user, or should the lifecycle have the opinion that it'll always switch back, so that the extension author doesn't have to worry about that? I would rather the latter — if I'm writing an extension that just wants to install a package, I don't want to have to remember, "hey, I have to switch back to this other thing."
A
Maybe there's a middle ground where we could pass the user ID and group ID as build args instead of relying on the environment variables. That way they would be the actual values of the user right before whatever Dockerfile executes, so you can reliably switch back to something that you know another buildpack didn't change to something else. If that makes sense — that might be safer than relying on the environment variables, and at least it would give you a really consistent way of switching back and forth.
A
I
I
don't
know
if
I
would
go
as
far
as
to
have
it
switch
back
automatically
because
you
know
otherwise,
like
the
whole,
the
point
of
the
docker
file
integration
is,
you
can
do
whatever
you
want.
You
can
switch
to
a
different
entire,
entirely
different
base
image
right
using
a
multi-stage
docker
file,
and
so
it's
possible
that
the
I
see
some
smiles
about
that
one.
A
But
it's
possible
that
the
you
know
attempting
to
switch
back
would
switch
back
to
something:
that's
not
not
the
same
user
right
with
different
permissions
in
the
file
system.
That
could
be
a
security
issue.
I
think
I'm
not
sure
if
I
would
try
to
interfere
too
much
with
what
those
docker
files
are
doing.
G
I
think
sorry,
I
think
it'd
be
nice
if,
if
there
was
a
way
as
an
extension
author
that
you
could
have
a
user
directive,
that
would
set
it
back
to
what
it
was
before
the
thing
began.
If
you
say
I
mean
because,
as
an
extension
author,
if
I'm
writing
something
that
just
needs
to
run
root
to
install
a
runtime,
I
don't
really
care
what
the
user
id
was
before.
I
did
that,
but
I'd
like
it
to
go
back
to
being
that
after
I'm
done
now
either.
G
That's
something
I
cooperate
in
by
having
a
user
statement
that
references,
build
args
or
references
environment
variables
that
are
available
to
me
or
it's
something.
That's
an
option,
that's
automatic,
but
I
don't
think
it's
safe
to
rely
on
everybody
to
have
to
figure
out
what
the
original
intention
of
the
run
image
was
regarding
user
id
reality
and
the
other
part
that's
coming
up
with
that
is.
G
There
was
a
discussion
over
on
the
spec
channel
about
the
fact
that
the
run
image
may
have
a
different
uid
gid
at
runtime
than
the
build
image
will,
because
that
way
you
know
you've
got
a
user
id
that
doesn't
have
permission
to
go
right
to
certain
areas,
whereas
the
the
build
image
needed
that.
E
Good,
I
was
just
gonna
throw
my
case
one
more
time.
I
think
it
like,
as
somebody
trying
to
use
the
build
packs
even
just
very
recently,
and
thinking
about
this,
like
again
for
the
use
case
that
I
have
where
again,
I
just
want
to
install
something
the
less
I
have
to
know
about
the
internals
of
what's
happening
behind
the
scenes,
the
better.
So
if
I
just
want
to
switch
over
a
route
execute
my
thing
and
you
know
fire
and
forget
that
would
be
ideal.
E
When
we
talk
about
those
escape
hatch
type
scenarios,
I
shouldn't
have
to
know
what's
happening
or
that
you
know
I
just
affected
the
the
runtime
user
aspect
of
things.
So
I
propose.
Would
it
be
difficult
to
when
you
specify
the
order
without
an
extension
somewhere
where
there's
a
config,
that
the
default
is
that
it
automatically
switches
back,
but
then
there's
some
sort
of
configuration
element
where
you
could
make
it
more
or
less
sticky.
A
I
I
don't
know
if
I
agree
that
the
obvious
thing
to
do
is
to
switch
the
user
to
root
and
then
switch
it
back
to
what
it
was
before,
because
if
you're
using
docker
files
you're
used
to
extending
a
base
image
that
you
you
know
have
a
deep
enough
understanding
of
that,
you
can
modify
right,
metadata
and
contents.
You,
like
you,
know
people
don't
write.
A
Docker
files
with
variable
base
images
right
and
like
we're
doing
with
you
know,
taking
the
build
argus,
the
name
of
the
entire
image
right
and
then
expect
to
use
that
with
you
know,
expect
to
run
commands
on
generic
images.
That's
that's
like
a
it's,
not
a
common
pattern
right
people
use
rpms
to
install
packages
or
use
them
to
install
packages
right
and
so
generally,
if
you're,
starting
from.
A
If
you
know,
you're
starting
from
a
base
image
where
the
user
isn't
root
right,
you
would
you
would
expect
to
change
the
user
to
root
and
then
you
know
have
to
change
it
back.
You
know
if
that's
what
your
base
image
was
if
you're
starting
from
a
basement
that
is
root.
Obviously
you'd
expect
not
to
need
to
do
that,
and
so
it
seems
kind
of
different
from
how
docker
files
normally
work
right
to
change
the
user
to
one
thing
and
then
change
it
back
again.
Kind
of
at
the
end.
D
I feel like there's a subset of things where we add requirements that are different from what you can do in the Dockerfile case, even with this new escape hatch. Like, the spec still says that the group of the default user in the run image has to match the group at build time. So, to the extent that our spec says this must be true, I feel like we should either make it true or fail if it's not — but I think that's a small set of things.
D
On the idea of doing build args, if we want to provide them to the Dockerfile: I feel like when it comes to setting things back, the environment variable isn't as important as the actual UID/GID of the starting user, right.
C
The use case — the main reason we started out with this thing in the first place — was people who wanted to do things as root. The first thing they're going to do when they use Dockerfiles is use the instruction USER root, do stuff, and then change it back. I think that's imagining the most common use case.
G
Usually,
if
you're
doing
this
in
a
multi-stage,
build,
you've
got
full
visibility
of
the
docker
files
that
came
before
you
in
this
case.
If
you're
writing
an
extension,
you
don't
you're
only
you're
one
of
many
extensions.
You
don't
know
what
came
before
you,
you
don't
even
know
which
run
image
you're
running
against
or
what
user
id
that
may
have
left
you
to
run
as
so.
If
you
don't
know
what
user
id
you
came
in
now,
and
then
how
are
you
supposed
to
put
it
back.
A
Like the multi-stage example, right: say the new base image you're pulling from uses different user IDs from the original base image, and the user that was the CNB user ID on the original image is now a user that can write to some privileged location that you definitely don't want written to during execution. Or — I think we require the CNB user ID to be numeric, right? We don't let them be names that might not exist.
A
It's
a
little
bit
better,
but
but
I
still
think
there's
a
there's
a
risk
that
we're
doing
something
magical.
If
that
makes
sense
that
the
person
who
authors
the
extension
may
not
understand,
is
going
to
happen
right
and
that
because
this
is
very
much
an
escape
hatch,
do
whatever
you
want
like
here's
your
base
image,
you
know
we're
we're
hands
off
now.
You
know
it's
no
longer
rebaseable
up
to
you
right,
I
feel
like
it's.
A
It
feels
a
little
risky
to
me
to
start
going
back
the
other
direction
and
saying
oh
we're
going
to
clean
up
the
interface.
So
it's
a
nice
way
to
write
it
a
docker
file
when
the
purpose
of
this
is
kind
of
like
nope.
You
know,
you're
on
your
own
you've
got
your
base
image.
You're
gonna!
Do
whatever
you
want
right.
A
I
think,
because
we've
opened
that
amount
of
flexibility
up
trying
to
go
back
and
then
paper
it
over
with,
like
oh
we're,
gonna
switch
the
user
ids
back
afterwards
and
we'll
do
all
these
nice
things
for
you
right.
I
think
it
has
to
be
possible
to
switch
back
to
the
same
user
id
that
you
started
with
through
something
like
build
argus,
but
I
don't
I'm.
I
hesitate
to
suggest
that
we
should
do
it
automatically
for
the
user
without
you
know,
at
least
without
really
making
sure
that's
the
behavior.
They
want.
C
The other option — I don't remember if this is just me making things up, but RUN maybe had a flag in the Dockerfile syntax where you could specify the user; or was that for COPY? I'm forgetting. But for one of those instructions you could specify the user along with the argument and just change it for that step. I don't remember if that was for RUN or just for COPY.
A
I
think
if
we
decided
to
introduce
functionality
where
we're
going
to
switch
the
user
back
and
forth,
that's
not
doesn't
require
the
user
to
explicitly
put
things
in
a
docker
file.
At
least
it
shouldn't
be
the
default,
and
it
should
be
something
that,
as
the
extension
author,
you
have
to
explicitly
put.
You
know,
root,
dash
and
dash
back
you
pull
through
somewhere
to
really.
G
So
the
only
reason
I
think
that
we
might
want
to
provide
something
like
that
is
because
the
order
of
extensions
isn't
guaranteed,
because
there's
no
there's
no
way
to
specify
an
order
for
extensions,
there's,
no
way
to
know
which
extension
would
have
run
last.
So
if
an
extension
ran
that
flipped
it
away
from
something
you
wanted
it
to
be,
there's
no
but
there's
no
way
within
an
extension
to
ever
control
what
user
idea
ends
up
as
because
you
can't
control
the
order.
G
Yeah,
I'm
fine
with
that.
That
would
be
a
daisy
chain
connection
of
user
ids.
That
would
flow
through
the
extensions
at
execution
time,
regardless
of
the
order
of
execution
of
the
extensions
everybody
has
the
opportunity
to
potentially
contribute
and
if
they
can
change
it
on
the
way
out
as
well.
I
don't
know
it
gets
messed
up
there,
but
we're
going
to
end
up
talking
in
circles
and
we're
hitting
near
the
hour.
So
I'm
going
to
go
quiet
now.
C
We should also limit the syntax of the Dockerfile stuff we're supporting — we're not going to support every instruction out there. I think we should be very explicit in that RFC, or in the spec, about which instructions we do support. I don't think we can support anything and everything; I certainly don't want this to support ONBUILD or VOLUME or stuff like that. So let's limit the scope to what we support as well. It should not be implementation-dependent — not "Kaniko supports this, so that's why we support that."