From YouTube: CNB Weekly Working Group - 31 March 2022
A
Well, we're signing into the document. I'm gonna skip over new faces because I feel pretty confident that I know all of you. We can move on to release planning and updates. Natalie, I think, can give some updates on the lifecycle.
C
We're close to shipping 0.14. I think we're waiting for the specs to be released. That should be soon, hopefully.
D
I probably should have prepped for this question. Top of mind, something I'm working on is a Windows-related issue with samples. It might not actually require a new version to be released, but it might be too early to tell at this point in time. If it does, there might be a new patch release for pack.
C
Correct. Okay, so I made some updates to this RFC just before this meeting, just to make it a little more opinionated: let people speak up if they don't like what I propose, instead of offering a bunch of options. So I'll just quickly go through it, and then I'll linger on the areas that are uncertain for me.
C
So the idea is that platforms can provide SBOMs for run images. We are then going to put those SBOMs in a specific location in the final exported image, and there's a bunch of detail about how that would work in different cases, rebasing, that kind of thing.
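For reference, the location under discussion builds on where buildpack-provided SBOMs already land in the exported image; a sketch of the layout (the `base-image` reserved ID is the proposal being discussed, and the exact file names here are illustrative):

```
/layers/sbom/launch/<buildpack-id>/<layer>/sbom.cdx.json   # buildpack-provided SBOMs (existing convention)
/layers/sbom/launch/base-image/sbom.cdx.json               # proposed: run-image SBOM under a reserved ID
```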
C
This directory is special because normally this is the ID for a buildpack, so this would make base-image a reserved buildpack ID. And, oh no, I didn't refresh since I updated, so let me go back. This is approaching the place where I had questions. As originally proposed, we'd have a label containing the diff ID of the layer that has the run image SBOM, and it would be a separate layer from the one that has the buildpack-provided SBOM.
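As a sketch of the label shape being described (the label name and field here are illustrative, not settled):

```json
{
  "io.buildpacks.base.sbom": "sha256:<diff-id-of-the-run-image-sbom-layer>"
}
```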
C
I propose that we do some validation around the file types. Like, you know, looking at the file extension and just saying: is this an accepted media type?
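A minimal sketch of the extension-based check being proposed; the function name is hypothetical, but the media types are the ones the CNB project already uses for buildpack-provided SBOMs:

```python
# Hypothetical sketch (not the actual lifecycle implementation): validate a
# platform-provided SBOM file by extension, mapping to accepted media types.
ACCEPTED_MEDIA_TYPES = {
    ".cdx.json": "application/vnd.cyclonedx+json",
    ".spdx.json": "application/spdx+json",
    ".syft.json": "application/vnd.syft+json",
}

def sbom_media_type(filename):
    """Return the media type for an accepted SBOM file extension, else None."""
    for ext, media_type in ACCEPTED_MEDIA_TYPES.items():
        if filename.endswith(ext):
            return media_type
    return None

print(sbom_media_type("run-image.cdx.json"))  # application/vnd.cyclonedx+json
print(sbom_media_type("notes.txt"))           # None
```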
C
And then this is new; I'm just bringing in stuff that existed in Anthony's pre-existing RFC that this one supersedes. Right now, buildpack-provided SBOMs are designated in the lifecycle metadata label; they have a special key there. But for parity with the base SBOM, we should duplicate this information in another label.
C
So I'll just add that as a comment. I personally, I mean, I don't really care what the label is called, or whether it's in lifecycle or not, but I will try to sort that part out. I updated this RFC since the last time we all looked at it, to explicitly spell out what happens when you rebase.
C
You know, just that you replace the layer and you also update the label. Then there's this idea of what happens when a platform provides a run image that already has the SBOM baked in.
E
Like, I don't get the point of this. If pack, for example, can already source it... like, let's say we leave this part out of this RFC and we make it pack-specific: pack sources SBOMs from known formats, like cosign in-toto attestations or the cosign SBOM format, or something else, or it just scans the image with a certain tool. What benefit does having this in the lifecycle offer that you couldn't just solve in pack, which could just do this and then provide the file to the lifecycle using the flag?
D
I don't think that's the concern, right? I think what we're saying is: what if the stack provider already has that information, should we be removing it proactively? And I think the answer to that, hopefully we would all agree, is that we should leave it alone.
A
Right, hang on for a second, and folks can correct me if I'm getting any of this wrong, but the Paketo project's been really excited about providing a full SBOM for everything for a long time, right? And I think there was a point where it seemed very likely that we were going to do this in the image layer, so there's basically already functionality, feature-flagged off, that could bake these things in. And it's not saying that, oh, Paketo did this, so we should make it a top-level feature.
A
It's more like: what is going to be the convention, and how long would it take to roll out in pack? So does not having this mean it's many, many months before we can provide a full SBOM? Versus, if we had this, it's not a ton of work on the CNB side, and then for people who wanted to provide it, it would work.
E
No, but we are introducing this label, we are introducing the attachment format. Like, you can already attach the SBOM using the attestation format, and I could just download that and provide it to the lifecycle.
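The attach-and-download flow being referred to maps onto cosign's existing SBOM attachment subcommands; an illustrative sketch (the image name is hypothetical):

```shell
# Attach an SBOM to the run image using cosign's SBOM attachment format
cosign attach sbom --sbom run-image.sbom.json registry.example.com/run-image:latest

# A platform could later fetch it and hand it to the lifecycle via a flag
cosign download sbom registry.example.com/run-image:latest > run-image.sbom.json
```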
C
It would. So where should it go in the final image? I guess that's my question, because that is presenting our own standard, right? Like, that's us.
E
The output image, I think... the proposal that's there, with the output format and location in the image, that's fine for now. We can change that in the future if need be. I'm just talking about the inputs.
C
So it's either the case that the platform provides a run image that already has the label, already has the SBOM baked in, and they don't actually provide a run image SBOM flag, you know? Then what do we do? Because, like, do we say: oh, we're gonna strip that label off, we're gonna remove that layer?
D
Yeah, before I lose my thought: I think I found the counterargument for why we shouldn't try to have the stack authors provide that, and I think it's based on the idea of the platform API. At that point, the stack author would have to adhere to the platform API outcome, because all of this is going to be defined in the platform API.
D
If I'm not mistaken, right? And so let's say it's platform API 0.10, and then you end up with a lifecycle that now supports 0.11, and we happen to change where the labels are located, or where the location of the SBOM ends up. Then the stack author would have to... that just wouldn't work, right? The outcome of something that the platform expects would no longer be accurate in a way that aligns to the original input.
A
I think one of the things that's interesting here is: have we thought, in the rebase case, about what merging looks like? Because we said we were going to do that for certain SBOM formats, like CycloneDX, and it doesn't exactly work.
A
No, you need to actually do something with the data. But that got me thinking, and I'm not sure if I'm ready to stand behind this idea yet, just throwing it out here. It's sort of like: I think the part where we really need exactly one way to do it with the SBOM stuff is for layer-specific SBOMs, and it's because it's not just an output from the platform for users to consume; it's also an input to a rebuild.
A
We have to restore it to the layers, so it gets, you know, coupled into the buildpack workflow. But I think the run image SBOM is not exactly like that. So maybe it's worth opening up the question: the full result of everything for the app image, that's the run image SBOM plus the other ones. We need the layers in the image for the buildpack-generated stuff, for rebuilds.
A
But do we want to take all of that and then let the platform put it somewhere else, similar to what we were talking about for the run image, like in a cosign attestation, rather than putting it in the image? And then a rebase would just generate a new one of those; it wouldn't be rebasing it somehow, you know.
E
My original plan was: if you were not baking all of this into the image, the rebase case was trivial in some cases. Like, let's say we don't even talk about merging things; let's say we just talked about storing SBOMs separately and having some other tool for merging and uploading them later on.
A
But when it comes to regenerating the final product on a rebase, I don't think we actually need all of the previous products. If we had the things that we already have in layers, like the buildpack-provided SBOMs, and then we had the run image SBOM from wherever it is, you could create a new attestation without having to find the other one, as a platform. So we're not depending on... I'm trying to avoid depending on having a real registry as an input to make an accurate build.
E
I mean, there's nothing stopping a platform from doing that right now. Oh, that was also the point of the cosign RFC: it already proposes that the cosign binary, or whatever we generate, do that. So the idea is: once the lifecycle has exported everything, we know where these SBOMs are kept by the lifecycle, and then you can just attach them in whatever format is needed. The lifecycle just stores them there for internal bookkeeping or whatever, and you don't really care.
A
Guaranteeing that the layer SBOMs are actually the layer SBOMs is why we end up baking them in, because you could get really weird results from your build if you restore the wrong SBOMs with your layers. So I think that's why those particular ones need to be baked into the image the way we're doing now, but I don't think the same requirement holds for the run image.
E
We don't need to store it in some way; we don't need to handle it on our own. We just need something to piece all the metadata together and output whatever needs to be outputted, and do so in a convenient fashion that is well defined. I think that's the actual problem, rather than trying to store this SBOM.
E
But it's like, for things like the random tarballs that you've downloaded and want to provide an SBOM for: that was the main reason behind providing such functionality. Like, discovering SBOMs within the image; you could potentially use it to have Syft automatically support these SBOMs in the way they're currently stored. But, yeah, it also would mean: let's download the entire image and scan through the whole file system, rather than just downloading that specific layer. But that's a...
A
I think those result in kind of different trade-offs. Maybe, if you're the one who built the stack, you know certain types of things about it more accurately, so you can make a more accurate SBOM. But I do worry that that is sort of fighting the introduction of the Dockerfile features and stuff like that, where we're changing things on the fly, so we need to be able to dynamically generate the SBOM on the fly as well.
E
Pack has an `sbom download` command that merges and provides the complete SBOM to you, and the way that could work is: if the run image you're using has an SBOM attached to it in one of the formats pack supports, it just uses that. If it doesn't, and you really want a complete SBOM, it can just run an SBOM generator on the run image and generate it for you.
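For context, the pack subcommand mentioned here exists today; an illustrative usage sketch (the image name is hypothetical):

```shell
# Fetch the buildpack-provided SBOM files from a built app image
pack sbom download my-app-image --output-dir ./sbom
```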
E
You can do that, and then, if you create a new stack with another image that has an SBOM that you have provided, and that's more accurate, it sources from that instead of trying to generate it. And at the end of the day, there's clearly a handful of platforms; it's just, like, three platforms. I never get why we think we should generalize this problem for n platforms. There are just three platforms, out of which pack is the only one.
E
kpack has a CLI tool that, again, is not a platform; the CLI tool is not a platform, it's just a tool. And kpack, again, is not going to handle all of this for you, like downloading and extracting these things and stuff. At the end of the day, it's just pack.
F
But if you're putting it somewhere else, if you allow platforms to put it somewhere else, what's the spec here? You know, what will be the specification? What are you trying to achieve at the end? Because, yeah, I mean, I look after a Java implementation that's sort of parallel to the Spring Boot one as well.
F
I can't really go exec something, because I don't know what system I'm running on at that point, and I don't really want to spin up a container to do that, because if I'm doing that, essentially I'm calling a lifecycle binary, right? So then are you saying that the tool you're going to provide would be akin to a lifecycle binary, i.e. it's another phase that I would run that would do SBOM generation?
E
Not just SBOM generation, but just fetching. I think we are well beyond the scope of a buildpacks platform in terms of fetching SBOMs. A buildpacks platform is responsible for providing the environment and orchestrating the build process, not fetching things, not creating things. That's something that pack does, but pack is also more than a platform; it's just a CLI of useful utilities we packaged together. Like, why would the Spring Boot plugin need to fetch SBOMs, for example?
F
The Spring Boot plugin has to meet the specification of what a buildpack platform is. It has to meet that specification. So if that specification contains stuff saying that genpkgs needs to be run in order to generate SBOMs and stuff like that, then it has to invoke that, and therefore it's going to be playing in that field.
E
This is not genpkgs, though; that's for the dynamic run image stuff. This is a static base image, and as an end user you want the full SBOM for your output image. This has got nothing to do with how you're generating SBOMs or whatever; at least this specific RFC has got nothing to do with that. It doesn't at all say anything about genpkgs.
F
I mean, that's just an example, you know; that's an example of stuff that is in the platform spec at the moment. You know, so it has to be met. So I just get twitchy when you start saying things like: it doesn't have to be run, or this isn't part of this, or we can let it be baked in, or we could make it something for the platform to decide. I don't like that last phrase.
F
Reproducibility: that's where I struggle, because that should be the case. If you've got a project directory that contains the same input and you're running it with the same buildpacks, it shouldn't matter which platform you ran it against; you should get an equivalent output image. There's been a lot of effort put into trying to make that happen, right? That's why all the dates inside the image are set to 40 years in the past.
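The "40 years in the past" refers to the fixed normalization timestamp that buildpack-built images use so that identical inputs produce byte-identical output; a minimal sketch of the idea in Python (the constant is the well-known 1980-01-01 epoch value; its use here is illustrative):

```python
from datetime import datetime, timezone

# Sketch of timestamp normalization for reproducible images: every layer and
# the image config get a fixed creation date instead of "now", so the same
# inputs yield the same bytes. Buildpack-built images use 1980-01-01T00:00:00Z.
NORMALIZED_EPOCH = 315532800  # seconds since the Unix epoch

fixed_created = datetime.fromtimestamp(NORMALIZED_EPOCH, tz=timezone.utc)
print(fixed_created.isoformat())  # 1980-01-01T00:00:00+00:00
```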
A
I feel like there's room for some decisions for platforms around the edges of some of this core stuff. Like: which major version of Syft JSON do I want my SBOM to be in, something like that. It's the kind of thing where you can imagine a platform trying to make different choices about some of this stuff. But I do mostly agree with Ozzy here, in that the idea is that platforms can provide end users with different...
A
...user experiences, and maybe with different additional features, or different ones fit better into different workflows. But the idea of having the core spec in the center of it is that, if you're running the same app, same buildpacks, same stack, you should basically get the same thing out the other side.
D
So I think the question in my mind is very similar to project descriptor, though, right? It's whether or not we want that decision to be inside of the platform spec, or outside, as optional, sort of additional platform tooling.
E
I don't know of a single platform that can reproduce the other platforms exactly, and it's not even minor changes; I see some major changes between two. For example, kpack puts the input git commit and the project source information in the output image, whereas pack doesn't do that, which is a huge thing for me. That's an important piece of metadata about the input that I'm not capturing in the output.
C
So there's nothing preventing this; we can't stop stack authors from doing that, right? Someone could think: I'm being very efficient here, I'm just doing some work for you. And so that paragraph isn't to say that we should advocate it, or recommend it, or advertise that this is a way of giving the input. It's just saying: what do we do in this edge case?
C
So should we just say that a run image with an SBOM baked in is unspecified behavior, just like everything else that we just mentioned, you know?
E
Why would you even bake that in? It's like trying to guard the spec by saying a run image with a /layers directory is unspecified behavior; we don't do that right now. We don't say a run image with io.buildpacks.lifecycle.metadata is unspecified behavior; we don't put that anywhere in the spec. Why make a special case for this?
E
I mean, that's what we do for everything else right now. There are so many things where we don't say what happens: if the run image includes a /workspace directory, for example, we don't say anything about that. We don't say anything about /cnb. We don't say anything about /cnb/lifecycle, because run images can have that.
F
Yeah, you raise a good point: we're not checking it at the moment, and we don't worry about it for other things; therefore, why should this be the first point where we care about it? I'd argue in response that if this is the first point where we've become aware of just how many places that kind of problem exists, then fair enough, let's not deal with it in this case, but we should raise an issue to visit this again across all of the different places where it exists, because it's like this one.
F
It's not right that this one gets a free pass because everything else got a free pass. Once you know it's wrong, you've got to go back and fix the wrongness. How that gets fixed... you're right, how that gets fixed isn't something that should be part of this item; that part I will concede.
F
Yeah, I came slightly unstuck when I wrote my platform, in that I didn't use the same names that everything else expected, and some stuff did fall apart until I realized what was going on. So the problem does exist out there. But you're right: because it's configurable, it's so much harder to figure out. You can't just release a list of directories people should avoid, because that list would be different per platform, and maybe even per the arguments passed to a given platform as well.
E
At least for this RFC, I would be happy if we can just expose that flag for the platforms to provide the SBOM. We can store it in the output image, as we said we would, so that there's a consistent way to put things in the output and to download them. And then whoever wants to do things apart from that, like attaching it or storing it in some other format, can do so with some other tool.
C
Nobody said anything, so yeah. In other words, we could have actively discouraged, as an input, putting the SBOM in the run image.
G
So in the Dockerfile proposal, there's a binary that lives in the image, right? Is that still there; did we change that? It always gets run after every Dockerfile, and it's responsible for regenerating the SBOM, so that keeps it up to date. Is there a concern about that portion, or...
C
You know, if we specify the location of the SBOM in the exported image, it's possible that a run image could come with that SBOM baked in, and then what do you do? Do you fail? Do you ignore it? Do you overwrite it? And I had sort of said that we should just take it unless the platform specifically overrode it with a flag, and then we'll update it as if we were rebasing it.
C
But let's say we don't allow this. Let's say we just say it's highly discouraged; no one would ever think of this, right? Maybe if I hadn't said anything, this would never come up. So if we proceed to Dockerfiles: right after the run image is extended, you have a way of generating your own SBOM, which you know is up to date. And I guess in this scenario we wouldn't update this label on the extended run image.
G
I want to maybe back up a little bit, to the user experience that we want to see in the long term. Right now we have an app image that has pieces of an SBOM inside of it, and I think we kind of view the app image as something where, when we generate this thing, contractually it's gonna have some things in it that are specific to the project, like fragments of an SBOM distributed in different places.
G
It's definitely safe for us to call that app image artifact a buildpack-generated, buildpack-managed app image, where there's a private API and a public API, with the kinds of metadata we add and how we interact with it. In the past, we've said stacks are the same thing: we've said a stack image is also a buildpack-managed artifact; you probably use a buildpack tool to generate it; it has buildpack-specific metadata on it.
G
Right, so it would feel consistent with that API for it to have fragments of an SBOM inside of it too. Maybe they cover the different layers if you ran the different Dockerfiles against it, to be consistent with what we're doing in the app image. I could see an implementation... this is like Anthony's original RFC for what a run image should look like: it has...
G
...you know, fragments of an SBOM in it, and that looks similar to an app image. But I think we're moving away from the stack image being a buildpack-managed artifact; that's my instinct. I opened the RFC that says we're getting rid of stacks: we're just going to call them base images, and they're just going to be, you know, that format. And that makes using that same buildpack-specific API for storing an SBOM in a stack image seem like we're doing the wrong thing now. I think it comes up in a few places; like, Sami keeps pointing out: now you have to construct the image in a special way, and if you want to extend it with a Dockerfile before the build, it leads to all these problems. And so I would say we're really moving in the direction that a stack image can be any base image, and we're going to start accepting Dockerfiles.
G
You know, you really don't have to do anything special, or the only special things you have to do to that base image are requirements that already exist for a Dockerfile, like setting the user ID to a non-root user. That doesn't feel very buildpack-specific, right?
G
Then I would say we shouldn't include the SBOM in the run image at all. But in general, do we feel like that's a feasible end goal? Are we aligned that base images should not be buildpack-specific; that they should be whatever Docker base image you want to bring into your build, with extra responsibilities that aren't buildpack-specific but may be important for our API, like API compatibility and setting the UID and GID numerically?
A
I feel like our remove-stacks RFC does not get us 100% of the way there, but the difference between some random image that we picked up off the street and an image that would work is small. So you can imagine smart platforms handling that, or eventually us making it something that could be handled dynamically on the fly.
A
I think the requirement to bring a binary called genpkgs feels onerous. When we were talking about it coming on the base image, it seemed like maybe that's putting the control in the wrong place. Like, what is genpkgs? It's basically just Syft, right? And should the platform just handle running that?
E
I think the last time we discussed this, between Ali, Stephen, and me, we decided that we'd ship that with some extended version of the lifecycle. If the platform cares enough about genpkgs to replace it, they can do that, and then we'd provide a way for the base image to have an attached SBOM, in case we are just reusing the base image as-is without extending it.
E
So the idea was: the project provides some binary, genpkgs, that conforms to a specification, and so on. If someone cares enough about it to change it, they can write one and bundle it. And for stack authors or app developers to provide it, they can attach it to the image in one of the other attachment formats, and then you can just use that. That was a month or so ago, when we discussed it.
G
So, if we're trying to keep our run image totally generic, so it doesn't need anything special or buildpack-specific, no internal API against the lifecycle, I think that makes sense. I worry about, like, a distroless-like image: there are things that Syft is just never going to deal with. An implementation of genpkgs that just runs Syft against it is not going to work for a lot of existing use cases. So there needs to be some way for...
G
...you know, genpkgs to understand that there is at least a pre-existing SBOM for this image that isn't going to be replaced, or something like that. There needs to be more functionality than just genpkgs-as-Syft, where Syft runs every time and replaces the existing SBOM.
C
I think this is pointing toward a long-standing question that I've had from the Dockerfiles track of work. I wonder if we could maybe pivot a little bit toward that, because we'll probably end up circling back around to this anyway. Let me find it. So I have this; this is the spec PR that kind of introduced...
C
...you know, more formally, the changes we want to see. And one of the questions I've had for a long time is: extensions are supposed to...
A
Because it's just different enough, right? Like what we're doing with launch.toml and build.toml: we're calling them launch.toml or build.toml, but they're serving a different purpose, and therefore we end up asking these questions. I don't know exactly what the answer is for the SBOM question, but I feel like we end up framing it as "a regular buildpack can do X," and the things that are happening are different enough in these cases that I worry that saying "it's just like the buildpack API, except for seven exceptions" is not the most helpful way to think about it.
A
I think it gets awkward because, when buildpacks are making layers, they're all making totally disjoint layers, so they can each make an SBOM that describes what happened in their layer, and then all those SBOMs together describe everything that happened in the layers directory, or the other changes that buildpacks can make.
G
Like, Sam, to answer your question of how you can write an SBOM here without access to the actual files: maybe the answer is, this is the place where you can contribute an SBOM that doesn't require the actual files, but there's another place to contribute an SBOM that does require the actual files, which is, you know, kind of how the...
G
I can just imagine this common use case: I need this extra dependency in my application, and it needs to be installed into /usr/local, so I curl something and install it into /usr/local. But now, again, this isn't a base image I maintain; I'm not going to replace genpkgs on it. There's no way...
E
The issue there would be... so, again, the way we deal with this right now is: whatever artifacts we're untarring and installing in the image contain a folder with the SBOM, and we just reuse that. So even if the thing that's doing the installing doesn't know about the actual SBOM in the thing, it can still produce an SBOM. So the idea would be: if there's just an SBOM file there, genpkgs would pick it up, because here, whatever the extension is, it's just writing the Dockerfile, right?
E
It's not actually executing the steps. So at this point it has to both write the Dockerfile and, if it's putting tarballs in there, fetch the appropriate SBOM as well, whereas in this case it just has to write the Dockerfile, and if the tarball contains the SBOM, we just pick it up.
G
If you're installing a specific version of a piece of software using a Dockerfile, I think it's going to be easier to write the Dockerfile that installs the specific version and then write the SBOM entry, not inside the Dockerfile but outside of it, in order to capture that thing. But...
G
Like I'm saying, in cases where you're installing a specific version of a piece of software using a Dockerfile: if there's not a snippet of SBOM configuration somewhere that covers that component, it's probably easier to do that in the buildpack than the Dockerfile, and so I could see a use for keeping this API available.
F
That's right. Take the example that we're most likely to end up using here, where the extension will be writing a Dockerfile to install a system package via dnf. So if it's installing a JDK, we're not going to know the exact version of it; we'll just know that it's a Java-11-capable JDK, and the patch level is taken care of by whatever the upstream repository is. So there's not going to be a particular SBOM.
G
But that doesn't work through this API, though, because this API doesn't give you the version of the thing. This API would only be useful if you know exactly the version of the piece of software ahead of time and you don't have a pre-generated SBOM somewhere for it.
E
I mean, this specific API is for a really, really small edge case: the thing that's applying the extension knows the world accurately enough that it can produce an SBOM, but the original artifact didn't have one. Yeah, it's an artifact that you downloaded as a random file off of somewhere.
G
It's that it exists in the buildpack API already, and so I'm just thinking about the experience of a new extension author who knows the buildpack API but isn't familiar with the extensions API. Any similarity to the buildpack API is going to be good, and so if it's easy and obvious how, for their one weird Dockerfile that grabs data from somewhere and puts it in the image, they contribute an SBOM for the change they made...
G
...and it just looks exactly like the buildpack API, that feels pretty good. But I agree that in most cases they're not going to have enough information to contribute a realistic SBOM entry, because many of our Dockerfiles aren't going to be "curl a binary from the internet and put it at this place."
C
Where it should go, basically. Instead of putting it... so if we put it in layers/sbom/<extension-id>, extension IDs can collide with buildpack IDs, right, and that seems not optimal. So we could put it in, like, layers/sbom/extensions, or layers/sbom/launch/extensions, or, you know, just have some top-level directory holding all the extension-contributed SBOM files, and then we just have a reserved buildpack ID — it's like, you can't make an "extensions" buildpack.
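For context, the layout options being weighed might look something like this (a sketch only — the `extensions` segment and exact nesting are the open question, not settled spec):

```
<layers>/sbom/launch/<buildpack-id>/...             # existing buildpack-contributed SBOMs
<layers>/sbom/launch/extensions/<extension-id>/...  # option: nested under a reserved name
<layers>/sbom/launch/<extension-id>/...             # option: flat, but risks ID collisions
```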
G
So you're talking about in the Dockerfile that's being generated, right? Yeah — like, you're not talking about outside of the Dockerfile; you're talking about: now the Dockerfile is running, and it's running on a build image, and you want to store it in the layers location, or the... sorry.
C
C
G
So the way the RFC is phrased — and I don't know if the changes we've talked about change this at all — there is no... it doesn't make sense to have an SBOM contributed by an individual extension, right? What happens is each Dockerfile runs, and then gen-packages runs and replaces the previous SBOM completely, right? And so the question there is: if one of the Dockerfiles wants to contribute something additively, it's not going to rely on gen-packages to run and then regenerate the SBOM, right? There's additional last-minute configuration.
G
B
C
C
C
C
G
I think we can take out that part of the API. That wasn't... I don't think that part of the API was in the original RFC, so I'm pretty sure — at least, I forget, but sorry. It seems reasonable to me to take it out, but I think there's another... it's going to come back, right? There probably needs to be a way in a Dockerfile — like, maybe there's some hook, right, where the gen-packages binary knows about a special location in the image.
G
You know, the one that it's running in, where it pulls additive SBOM entries and appends those — includes those with the SBOMs that gen-packages generates. But maybe that's more about the gen-packages API, as opposed to, you know, the lifecycle API with the extension. Maybe the idea here is that the SBOM is handled completely outside of the lifecycle in this case, except for, you know, the output of gen-packages running every time against every intermediary generated image — something like that.
C
Okay, I think that helps. I mean, I think we're not closing any doors, right, to, you know, future improvements that we might want to make. I did actually have one more question in this area, which was around what happens if I'm just using Dockerfiles to switch the run image.
C
G
Oh, that's interesting. So if you... I kind of feel like we shouldn't special-case changing the base image, right? Like, if you said "this is my run image SBOM" — because we've said that it's going to be an input to the lifecycle, right — if you say "this is my run image SBOM" as an input, and one of the many Dockerfiles changes the base image, right, we're not going to request that the user give us another SBOM for that image, right?
G
That's on the image, I guess, you know. If there's a special SBOM format that gen-packages just can't pick up, or there are additional changes to the image, right, when you swap the base image into something else — it needs to be possible to provide additional entries, something like that.
G
F
Yeah, because we'll be looking at doing something like this. Because, you know, for the golden path where it's just a Java application, there are already UBI 8 images out there that have a JDK installed. So it would be easier for us if we could just swap the run image straight to one of those, and we don't need to worry about it.
F
But we'll know what's in that image in advance, because effectively, at the moment, we'd have to knock it up as, like, a whole bunch of parallel stack run images, so they'd all have the CNB lifecycle stuff already in them, ready to roll. And it's just a question then, during the build, of writing a Dockerfile that swaps the run image from our empty one that handles the generic case to the pre-existing one that we know handles this particular case.
F
G
So I would say — that's interesting — I would say, if the lifecycle is going to take the SBOM as a separate argument from the run image, then we need to keep the buildpack API, like, the external SBOM, because when you're swapping that run image for a different run image, you need to be able to provide the SBOM for that run image outside of the context of the Dockerfile, right?
C
Well, not necessarily, right? Because we're returning the reference to the run image back to the platform, right? So we could still say, hey, it's the responsibility of the platform to figure out the SBOM for this run image, and so all we do is say, here's what the final run image was, right? Yeah.
F
But — what I was going to say is — practically, there's no way to do that, because you'd need to be able to predict which extensions ran to know which final image you ended up with, so that you could predict which SBOM you now need to apply. It's, again, the knowledge that we need has already happened inside the magic box.
C
I mean, I'm thinking of after the... so I'm thinking after the extend phase ran, right? So after you applied all the Dockerfiles — and, like, let's say the last Dockerfile was "FROM my-new-run-image", right — then the result of the extend phase is like, okay, well, I didn't actually have to build anything, you know, but here's the image I ended up with. Oh, it's one that already exists in your registry, right?
G
Like, say you have a bunch of extensions, and, like, you know, one in the middle swaps the image with a different image, right? And then the other extensions keep building on top of that, right? The platform is just going to get back, like... at some point the base... maybe the platform would get back — and I think this is actually more information than I think we should expose — but maybe the platform would get back...
G
F
For that, essentially, the rebaseable flag is covering that distinction. Because if rebaseable is still true as you come out the other end, then you know you've swapped to an image that didn't have any further modifications from extensions downstream. Whereas if rebaseable has gone false, then you know that you still have to apply all of those things if you ever want to build a new run image.
G
F
G
Right, so... so maybe, but maybe the solution here isn't "we should provide this open API at the buildpack level because we've provided the open API on the lifecycle side". Maybe the solution is we shouldn't provide the open version of the API on the lifecycle side; we should standardize on — like, you know, Sam was saying — pick, you know, the cosign attestation format, right, as the way that the SBOM will be specified, right, and then now we're allowed to integrate.
G
Reading the SBOM from a new image directly into the extensions API, right. And so, like, when you do a lifecycle build, the lifecycle pulls, you know, the image, right, sees if the image has an SBOM associated with it, and at every stage of extension the lifecycle can look at the digest, right, see if there's an attestation co-located with that digest and pull that, and then automatically know, as the run image changes, that its SBOM changes. I think that's got to be the only way to do this.
G
Does that make sense? Like, if you change the... if you change the image in the middle — like, you have a whole bunch of extensions that are running, right, and you change the image in the middle to a new one, right — all you have to do is, at every stage of extension running, you take the digest, right, and you go back to the registry. You see if there's an attestation associated with it, you pull that SBOM, and that SBOM becomes the new one.
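For reference, cosign's convention for locating these artifacts is tag-based, so a lookup like the one described needs only the digest (the repository name below is a placeholder; the tag names follow cosign's documented scheme):

```
registry.example.com/run@sha256:<digest>        # run image after an extension step
registry.example.com/run:sha256-<digest>.sbom   # where `cosign attach sbom` stores an SBOM
registry.example.com/run:sha256-<digest>.att    # where `cosign attest` stores attestations
```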
F
F
G
F
Yeah, because I already think there's a big kind of "buyer beware" sticker if you try changing the run image multiple times from multiple extensions, thus losing all of the stuff that's been aggregated onto a previous run image along the way. But this is, you know... you've got to figure that the person building the builder image — that assembles which extensions are possible to be in play — is responsible, in a way, for creating a universe where you can't screw up that badly. Where, you know, when it runs, you don't have an extension that says...
F
"I think this is an application of type X; I'll swap the run image to one that provides X", and then another extension runs: "no, this is an application of type Y; I'm swapping to that", and you're like, okay, well, X is going to be very unhappy now. And we can't guard against that — no, no, that's, you know... I don't see there's any way to fix that. From a much bigger-picture kind of perspective, I think that the SBOM stuff plays into that same problem when you get there.
F
G
Right, right, totally. Okay, so strip SBOM out of the API entirely, right? If gen-packages is... or, if there are only... there are two ways the lifecycle knows about base image SBOMs. One way is a standardized, image-external format — probably cosign attestations, but we should argue about that more, because cosign also has a separate SBOM format, right. Actually, I'm curious, Ozzy, if you have — I don't know if I've heard — if you have preferences on SBOM storage.
F
G
So we'll argue with Sam, or whoever, or somebody, about whether it's the cosign attestation format or the cosign SBOM format, but we'll pick one of those two things. And I guess we could do both — you know, we could have, like, some order: it tries to look for the specific cosign SBOM first, and if that's not there, then it looks for an attestation that has an SBOM in it.
G
Maybe — I don't know, we'll figure it out. Then: use that external format, get rid of the internal API for SBOMs. The two ways the lifecycle knows about it are: it's going to look for that external format every, you know, at every time, right? Like, even if the Dockerfile runs and changes stuff, it's just: every time, take the digest you have and go look it up, right? Because it's always safe — because it's by digest. If somebody said "this SBOM is for this digest", we should use it.
G
Maybe it's a reproducible Dockerfile, right, or something like that, and we can reuse the SBOM, you know, from many previous runs and not have to run gen-packages again, right? So, like, at every stage the process — maybe even at the first stage, for the first run image we picked up, right — it's always the same process: it's just go out to the registry and look for an SBOM associated with this digest.
G
C
G
Right. So in the daemon case, there's still going to be... so it's like, in the daemon case, if you started your... you're going to build an image at the end, right? Like, at the end of each extension, you're going to have an image output. Do we not ever get it? It would be too expensive, right, to recompress the image between each step and try to calculate a digest for it, just to have an ID that would be a consistent way of pulling SBOMs, right?
G
It seems like that's a bad idea, so I think in the daemon case we need to come up with an analog to pulling the SBOM from the registry. That's probably a file or something like that, right? Like, if it's the daemon, you're talking about local storage for your images anyway, maybe — and so in that case, is it like: we, you know, take the image ID, right, and then we look for a file on disk in a certain place.
G
That has the, you know... and we just replace that operation of "reach out to the registry" with "look for a file in the SBOM folder", right — in a location — and pull the SBOM from there if it exists.
G
Oh, sorry, one last thing: maybe when we implement the stuff in pack that's going to do more mirroring of the daemon into the registry — kind of like getting rid of daemon support, right — part of that could be: if you're doing builds against the daemon and the images came from the registry, the SBOM already gets pulled into, like, .pack/sbom and associated with the image. And so we preserve the SBOMs that you've pulled from the registry remotely, and it just works.
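A local layout for that daemon-side analog might look something like this (purely a sketch — neither the directory name nor the file naming is specified anywhere yet):

```
~/.pack/sbom/<image-id>/sbom.json   # hypothetical local mirror of an SBOM
                                    # fetched from the registry, keyed by the
                                    # daemon's image ID instead of a digest
```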
G
C
I do wonder... I mean, I think this all makes sense, but I wonder if... at least in my mind it was all a little bit simpler, and I wonder if I'm missing something. So it's like — I'm thinking from the lifecycle's perspective, right? The lifecycle has a bunch of Dockerfiles. It knows, right... the lifecycle knows, at the end of running the extend phase...
C
...did it create a new image or not, right? Because, right, if all that happened was you switched the FROM image, right, you know, like, then it knows: okay, I didn't create something new, right? There was something that existed before. In that case, it can just, like, spit the reference back to the platform: okay, give me an SBOM, right?
G
I think it's the API where, in the middle of the build, the lifecycle has to — because in the end we want that SBOM to be incorporated into the app image, right — saying that in the middle of the build, after the extension phase, right, the platform, like, gets the image back and has to make some decision about it. I guess that's not terrible. If it's like... could you overload gen-packages? So gen-packages runs either in the context of the...
G
...image, right, and then, you know, it can generate an SBOM for the image; or, if you provide gen-packages with an image digest, right — or, like, you know, a fully qualified image digest — then it pulls the SBOM. And it's a package that's distributed with the lifecycle, and that's the way the platform can re-query, right? And then we could open up the API on the other end again and say that you don't have to standardize on a format, right?
C
I know we're coming very short on time. I wonder if it's possible, just in the last couple of minutes — there are, like, a few open questions. I feel like we've been kind of approaching something that may work with this SBOM stuff; I just want to make sure that I get input on other things that could unblock.
C
In the short time that we have left, I just want to ask anyone who cares to please look through the list of allowed Dockerfile instructions that I've put here. I think that was an outcome of a previous conversation.
C
We should be explicit about what we allow and what we don't allow. So, just the philosophy that I used to try to come up with a list was: you know, with run Dockerfiles, it's like pretty much anything goes — who cares, right, you're just building a new image — but for the build extensions, because we're kind of keeping the context around after they've run in order to run buildpacks right after, in the same container, a more limited set of instructions makes sense.
C
F
G
G
If we keep this a "must not" for now, and then we come up with a platform that can reliably... like, if we say for now all the build Dockerfiles have to start with this, right, and then later we have a platform that has a reliable way of implementing swapping the build image through a Dockerfile, right, and we see that it works — then we can change it to a, you know... we can even just get rid of the requirement, right?
G
What I worry about is, like, allowing all extension authors to do this, but then the extensions that do this don't work on any platforms, right — because the lifecycle isn't going to support it. You know, like, I wonder if we can keep the strong requirement in for now, and then, if a platform shows up that says, "hey, we'd actually like to support this", then we change the API and say, okay, now extensions can do this.
G
F
Yeah, because, I mean, the current proof of concept supports this, all right, so to be clear here: I agree with the "must begin with ARG base_image, FROM base_image". It's the second one — "must not contain any other FROM instructions" — that one. Because it's the ability to start from the base image you've got, then swap over to another one, and then, as I did, use a COPY to bring the cnb folder over from the original one, so you've still got access to the stuff you're supposed to.
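The proof-of-concept pattern described here might look roughly like the following sketch (the image name is a placeholder; the `ARG base_image`/`FROM base_image` opening is the requirement under discussion, and the second FROM is exactly what the "no other FROM instructions" rule would forbid):

```dockerfile
# Required opening: start from the stack's base image.
ARG base_image
FROM ${base_image} AS original

# The contested part: switch to a pre-existing image that already has
# what the app needs (e.g. a UBI image with a JDK installed)...
FROM registry.example.com/ubi8/openjdk-11-runtime

# ...then copy the CNB folder across from the original base image so
# the lifecycle's files survive the swap.
COPY --from=original /cnb /cnb
```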