From YouTube: CNB Weekly Working Group - 14 April 2022
A
Well, I think that should be all for release planning. Do we want to move on to our first agenda item?
B
Yeah, I think it's mostly Sam, but I'm definitely interested in it. Sam brought up a question about the cosign integration RFC in particular: it talks about creating another, what is it, a build operation. The idea was that this would be an independent utility that platforms can use to sign, using cosign, and do everything necessary there. The question that came up is where this can live. There are some complexities.
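A minimal sketch of what such a standalone signing utility might look like, assuming the cosign CLI is installed and a key already exists; the wrapper shape here is illustrative, not the RFC's actual design:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// Illustrative wrapper a platform could invoke after export: it shells
// out to the real cosign CLI rather than re-implementing signing.
func main() {
	if len(os.Args) != 3 {
		log.Fatalf("usage: %s <key-file> <image-ref>", os.Args[0])
	}
	key, image := os.Args[1], os.Args[2]
	cmd := exec.Command("cosign", "sign", "--key", key, image)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("cosign sign failed: %v", err)
	}
}
```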
B
We could build this outside of the buildpacks namespace and then migrate it over to the buildpacks namespace later, but we're essentially wondering if there's a way that we could begin building it within the buildpacks namespace from the start, and what the concerns there are: attributing it as maybe experimental, and the ownership being more independent, as opposed to being part of any particular team. That's more or less the train of thought. I don't know, Sam, if I did the explanation justice.
C
I think last time it was suggested that the platform team might want to own these things as part of the platform-author tooling components, but the current platform maintainers were not comfortable with extending their maintenance responsibilities to these components, given that they're already stretched.
B
Yeah, so effectively we want to build these things; individual contributors want to build these tools. The concern is what happens if, for some reason, that individual is no longer able to support or maintain that tool. Then I feel like we've just added more stuff to our plate that we can't support or maintain in the long term.
D
I worry about pre-optimizing a little bit. If somebody wants to build something now that belongs in the project, where we have RFCs accepted or whatnot that say yes, this is something we'd like to offer, and somebody's willing to work on it... if, in the future, we end up in a state where it's hard to maintain, to the extent that we can't even look at PRs and merge them in...
C
I think that's sort of what we were trying to get at. During the time that it is experimental, we can put a prominent warning on the README that says it is experimental and only meant for use at your own risk. Every open source component is technically use-at-your-own-risk, but let's make it prominent and clear.
C
I think in general, at some point we might want to discuss how we decide which team owns a component, even in the case of components that are donated to the project and come with existing maintainers. For those components, do they get assigned to a team, or does the project just live in our repository with those maintainers having access to that repository to maintain it and merge requests? Just general things like these.
C
Like,
let's
say
like
I,
I
want
to
work
on
this
sign
or
stuff,
but
I
I
don't
want
to
take
on
the
rest
of
the
responsibilities
of
maintaining,
like
the
other
platform
components
right
now.
There's
no
good
way
to
enable
me
to
do
that
same
thing.
Let's
say
like
we
were
discussing
like
getting
things
like
other
testing.
C
Libraries
like
auckland
to
be
donated
to
the
buildbacks
project,
like
the
current
maintainers
of
that
wanted
to
be
involved
with
the
ongoing
maintenance
of
that
repository
and,
like
the
number
is
large
enough,
like
they
have
five
six
maintainers
that
all
want
to
have
the
ability
to
maintain
those
repositories
without
necessarily
being
maintainers.
For
the
entire
team
yeah.
B
Yeah, I think that's the main problem. We're trying to attribute ownership of repositories to a large group instead of individuals, and I think we're finding that to be challenging, and maybe even slightly unclear as to what the expectations are, because according to the community doc, the platform team manages these repositories, and the GitHub rules are set up in the same fashion.
B
But
I
feel
like
this,
along
with
akam
and
other
things
right,
like
it's
really
more
of
independent
maintainers
for
very
specific
components
is
more
or
less
what
we're
hoping
could
be
possible.
C
It seems like a lot of overhead that if you want to maintain a set of repositories, you need a new team for it, or you need to assign it to an existing team. That was one of the reasons why some of these teams were a bit reluctant to donate things: do all of our maintainers now get to maintain this new project? Do all of our maintainers now also have to maintain these other projects that the team is responsible for?
D
Is there any issue if a new component enters a team? Is there any issue with the team-level maintainers also having access to push to that repository, in addition to the hypothetical individual repo maintainers?
C
Given the maintainers that we have in the project, I would trust them not to do anything that the repository maintainers don't want. I can't speak for the repository maintainers, but if they trust the project enough to donate something to it, I think they should trust the maintainers not to make malicious changes to the project that they've donated.
C
So I think that direction works fine, in my opinion; I can't speak for others. What worries me is the other way around: maintainers for a specific component now being responsible for a wider set of repositories.
D
It was probably something we never really thought about back when the project was, you know, platform and pack and the lifecycle implementation.
B
Could we do both of those at the same time? Could Sam start the signer repository within the buildpacks namespace and then have the RFC to update the governance, or do we want to wait on the governance RFC?
E
If it's just a mechanical thing, we have plenty of repos under buildpacks that we don't consider maintained or belonging to a team. Maybe in this case it's worth calling out explicitly that it is an experiment and that we're figuring things out, but I think it should be fine to essentially start an experiment, even if we have to back out of it later.
F
For most of the buildpacks we'd like to specify the registries; sometimes, for buildpacks that install, typically, the Go binary, we would also need to change that to hit something that is internal. That means a lot of work forking all of those for something that is basically configuration, so we thought about how we could fix that.
F
One of the things that seemed natural to us was to allow builders to provide settings to specific buildpacks, such that the builder owners, which would probably be an operator team inside a company, would be able to say what the defaults are and what configuration should be used by each and every buildpack that is running in this environment.
F
So there might be other ways, but we really think the builder level might be the best way of allowing, typically, enterprise users greater control over buildpacks without having to fork. Did I forget something, Sam?
C
Yeah, I can also provide some other context from folks who were in the Buildpacks team meeting when we discussed this. The other use case we gathered from other companies, for example the Heroku and Salesforce folks: they have the same set of buildpacks, but with slightly different settings for enterprise versus internal usage, and it would be nice to be able to set things like the registries, or default to certain settings during detect and build time, using environment variables or other configuration.
D
Does the builder let you set environment variables? If you just set environment variables on the builder image, do they make their way into the build? I thought they used to, at least. Do we clear the environment now?
C
The lifecycle strips out every variable in the builder except for an allow-listed set of environment variables, which typically includes basic Linux variables like PATH, LD_LIBRARY_PATH and so on. It doesn't even include variables that start with BP_ or BPE_ or whatever. So even...
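A rough sketch of that stripping behavior; the allow-list contents here are illustrative, not the lifecycle's actual list:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Illustrative allow-list: the lifecycle keeps a small set of basic OS
// variables and drops everything else set on the builder image.
var allowed = map[string]bool{
	"PATH":            true,
	"LD_LIBRARY_PATH": true,
	"HOME":            true,
}

func main() {
	var kept []string
	for _, kv := range os.Environ() {
		name := strings.SplitN(kv, "=", 2)[0]
		if allowed[name] {
			kept = append(kept, kv)
		}
	}
	// Even BP_-style variables are dropped, matching the behavior
	// described above.
	fmt.Println(strings.Join(kept, "\n"))
}
```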
C
Yeah, I think there were two pieces we discussed. One was changes to the environment; the other was changes to the provisions or requirements, the build plan essentially. So let's say there is a buildpack that requires Go to be present, it requires Go as a requirement, and typically there's a buildpack that provides it. But now let's say you want your stack to provide the Go distribution.
C
So
it's
not
doing
anything
apart
from
just
making
small
changes
to
the
build
plan,
so
that
is
also
something
that
can
currently
be
achieved
by
a
buildback
like
you
can
attach
appropriate,
build
packs
either
to
the
beginning
of
the
group
or
to
the
end
of
the
group
to
modify
the
build
plan.
So
it's
not
as
much
of
a
concern.
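For reference, a minimal detect binary that provides go in the build plan might look something like this (a sketch against the buildpack API, where the build plan path is the second positional argument to bin/detect; the TOML content is illustrative):

```go
package main

import (
	"log"
	"os"
)

// bin/detect is invoked as: detect <platform-dir> <build-plan-path>.
// Writing a [[provides]] entry lets a later buildpack's [[requires]]
// for "go" be satisfied without another buildpack installing Go.
func main() {
	if len(os.Args) < 3 {
		log.Fatal("usage: detect <platform> <plan>")
	}
	plan := []byte("[[provides]]\nname = \"go\"\n")
	if err := os.WriteFile(os.Args[2], plan, 0o644); err != nil {
		log.Fatal(err)
	}
	os.Exit(0) // pass detection
}
```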
C
The main thing was just these environment-level settings. The other common use case was downloading binaries from some place, but I think that can also potentially be a convention we agree on rather than something that's specified, because there's nothing in the spec preventing you from doing it. So if it's just during detect, we could expose the environment variables to certain buildpacks through the builder.
C
I was imagining an interface similar to our default, override, prepend and append interface, so that if the builder wants, it can override an environment variable, it can default it, and it can prepend or append to certain environment variables as well.
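A sketch of how those four operations could compose, modeled loosely on the lifecycle's existing .default/.override/.prepend/.append env-file semantics; the builder-level source of the settings is the hypothetical part:

```go
package main

import "fmt"

// Op is one builder-provided modification to an environment variable,
// mirroring the existing default/override/prepend/append semantics.
type Op struct {
	Kind  string // "default", "override", "prepend", or "append"
	Value string
	Delim string
}

func apply(current string, op Op) string {
	switch op.Kind {
	case "default":
		if current == "" {
			return op.Value
		}
	case "override":
		return op.Value
	case "prepend":
		return op.Value + op.Delim + current
	case "append":
		return current + op.Delim + op.Value
	}
	return current
}

func main() {
	// Hypothetical builder-level setting aimed at one buildpack:
	fmt.Println(apply("https://proxy.golang.org",
		Op{Kind: "override", Value: "https://goproxy.internal"}))
}
```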
C
I think Emily's suggestion was that system buildpacks would be the first place where we are doing builder-level configuration.
E
I wanted to check: there are a lot of RFCs that I wasn't sure needed discussion. They haven't been brought up, and I haven't given them a lot of attention. Is there anything that we should be paying attention to asynchronously?
B
I
think
you
were
absent
joe,
I
don't
know
if
this
is
the
the
right
time,
but
we
could
definitely
bring
it
up
in
regards
to
the
project
descriptor
stuff.
Instead
of
work.
There
was
a
an
rfc
for
pac
tunnel
and
I
think
that
has
a
lot
of
sort
of
discussion
going
there
and
I
think
it
kind
of
swung
back
and
forth,
but
I
just
want
to
let
you
know
that
that
is
now
an
independent
rfc
from
the
whole
rest
of
the
project.
B
That's the one that you wrote, the project.toml one?
B
Yeah, and maybe we could just sync up asynchronously over Slack, make sure that we're clear on what's happening there. Sure.
E
Yeah, this one is tough. I feel bad about this one because it's, I think, an important thing, and I know the folks that authored it did some good work on it, but there are still a lot of challenges that, just looking back at it, I'm not sure have actually been addressed or not.
B
Yeah. I know I put my final thoughts on there, which were basically the stance that we shouldn't decompose the URL or URI, and that we could simplify the different images.
E
I think it's probably not a straight no, and more of a "one day we would like to do something like this", but there is a decent amount of burden involved in it, and there are workarounds, even if it's just running more builds.
A
I wanted to make a quick comment about the Dockerfiles RFC. I think at some point we had talked about a phased implementation, where Dockerfiles are used just to switch the run image dynamically, and I think that makes sense. I know the spec PR that I put up kind of did everything, but as I'm looking more closely at the implementation, I think it will be better if we do it in pieces.
A
So I made a comment on that PR, and I'll probably follow up with another PR to break it out.
A
I think the switching alone will deliver some value, so it would be nice just to have that at a minimum. In the meantime, we're working through some questions about how kaniko will behave, how we provide images to it, and how we save images after they've been extended. I think that will take some time, and it will also pair well with other work that we're doing.
D
I had a question. What's the status of the discussion about whether we're going to integrate cosign SBOM attestations more directly into the different phases, especially with the Dockerfile tracking thing, versus keeping that interface open and then implementing the SBOM format on top of that? It seems like, for cosign attestations, at least the person who wrote the cosign SBOM integration is saying they don't think it's going to work anymore, or that they're even moving to the attestation format.
C
So
there
were
still
two
things
right:
there
was
generating
the
s-bomb
and
just
fetching
it
from
the
attestation
there's
like
generation
or
fetching
versus
attaching
it
at
the
end
of
export
right.
So
I
think
the
export
attach
we
can
sort
of
consolidate
towards
other
stations,
at
least
for
the
initial
like
cut
off
the
signer.
I
think
it
seems
fine
to
just
support
like
address
adjustations
or
nothing
at
all
and
in
terms
of
fetching
we
can.
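For the fetching side, a minimal sketch using the cosign CLI; the image reference is a placeholder:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// Illustrative: fetch any attestations attached to an image.
// "cosign download attestation" prints the DSSE envelopes to stdout;
// a real signer would then verify and decode the SBOM predicate.
func main() {
	image := "registry.example.com/app:latest" // placeholder ref
	cmd := exec.Command("cosign", "download", "attestation", image)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("attestation download failed: %v", err)
	}
}
```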
D
But you still want the logic for fetching it to live there. Okay, that makes sense.
A
I don't know if this is helpful, but I made a comment on the spec PR with a little diagram showing the interplay between the Dockerfiles and run image SBOM features. I think they're somewhat orthogonal, but obviously to get the complete picture you need both. Those little squares are all, to my eye, discrete chunks of work.
D
If we make that more swappable, so you could replace it with something that isn't syft, something more specific to your base image, then it feels a little weird to build the retrieving of the SBOM into that binary.
C
Again, we needed gen packages because we wanted to get the run image SBOM in the app image, right inside the app image, not attached to it. But in the last meeting we decided that it was fine to just leave it out of the app image, right?
D
I don't know if the reason it needed to run in the image was so that the SBOM ends up baked into the image. I think it needed to run in the image so we can have a user-provided way of generating the SBOM without an external tool that has to pull all the image layers separately from the build process.
D
It's
like
if
you're
running
in
an
environment
like
tekton
right,
you
can't
you
don't
want
to
point
a
tool
at
an
image
in
a
way
where
that
tool
has
to
pull
the
image
again
separate
from
kubernetes
docker
data
right.
You
want
to
execute
the
tool
in
the
context
of
a
running
container,
and
I
think
that's
why
we
ended
up
with
the
architecture
where
it
runs
where
it
needs
to
run
on
the
local
file
system.
Instead
of
you
know
something
where
you
point
at
edit
image
and
does
the
analysis.
D
So I worry about this: if gen packages is going to be something like syft, I don't want to point syft at an image that has the buildpack layers in it, because it'll pick up on the same things twice; the buildpacks are also going to be running syft on their own layers. So you want to run syft on the run image, not on the whole generated app image. And if you did that, you could do it out of band, not inside of the container. But for a platform that's container-based, where you're relying on the platform itself to cache the layers of the different containers you're running, it's going to be a lot less efficient to run the SBOM generation tool from inside one of those containers against a remote image, where on every build it has to re-pull all the layers.
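To illustrate the scoping point, a sketch that runs syft against just the run image, reading it from an OCI layout already on disk so nothing is re-pulled; the path is a placeholder, while syft's oci-dir source scheme and cyclonedx-json output are real:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// Illustrative: generate an SBOM for the run image only, from an OCI
// layout on the local file system rather than a remote registry, so
// layers the platform already cached are reused.
func main() {
	runImageLayout := "/layers/run-image" // placeholder path
	cmd := exec.Command("syft", "oci-dir:"+runImageLayout, "-o", "cyclonedx-json")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("syft failed: %v", err)
	}
}
```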
D
That's my big worry. I'm thinking about this across many builds on an application platform, and any step we have which says "I'm going to pull an entire image in the context of a container", without relying on the platform's ability to cache layers, is going to be a big performance hit.
C
The ability to scan inside the container itself: the only issue with that is bundling this binary somehow into the builder. The issue I see is that if this is a standalone container image that's shipped by the project, that's signed and verified, and you give it registry access, it's similar to our other phases, which are trusted and currently have access to these things.
C
If
you
invoke
it
inside
the
builder
you're
running
all
of
this,
essentially
in
untrusted
mode
and
giving
it
access
to
the
entire
file
system
and
then
you're
also
giving
it
access
to
the
registry
to
attach
the
other
stations
or
oh,
no
wait
we're
just
outputting
it
to
some
folder
right.
D
Well, in the original proposal, gen packages is something you include; it comes with the run image. Then we decided to move it outside of the run image, so that you don't have to modify your run image as much. So if you want to get it back in the run image again, then you need to, say, do a volume mount of...
A
I think there are two binaries. I'm going to paste a diagram that I made, which again may or may not be helpful. Where is this thing? There are two binaries: one binary that's like, I know how to locate an SBOM; give me an image and I'll see if there's an attestation. And then there's another binary that knows how to generate an SBOM, and they kind of have to communicate. So let me paste this diagram.
A
I've
been
thinking
about
this
for
a
little
bit.
Oh
my
gosh.
That's
a
long
link!
There's
a
shareable
link.
A
Yeah
the
the
main
thing
that
I
I
can
share
my
screen.
If
that's
helpful,
let
me
see
you
see
this
thing
the
flow,
so
the
idea
is
like.
Okay,
you
have
analyze,
you
have
the
text,
you
do
the
generate,
you
generate
docker
files
and
then
somebody
probably
the
platform,
basically
needs
to
look
for
an
s-bomb,
and
you
can
kind
of
ignore
this
local
registry
thing
for
a
moment.
A
But
it's
like
the
idea
is
like
I
know
if
the
run
image
is
just
getting
switched
to
another
image
that
exists
in
my
registry
right,
so
I'll
go,
look
for
an
s-bomb,
that's
associated
to
it,
and
if
I
find
it
then
when
I
run
the
extender
I
can
tell
it.
Oh
you
don't
need
to
run
gen
packages
right.
I
already
have
an
s
bomb.
D
Just
to
stop
there
for
a
second
is
there
a
reason
that
it
matters
if
the
extender,
if
the
run
image
change
or
the
extender
ran
or
not,
it
seems
like
in
the
end,
at
the
end
of
the
extend
phase,
you
always
end
up
with
a
digest
that
you
could
have
ended
up
before.
Even
if
you
remain
a
doctor
file-
and
it
probably
always
makes
sense
to
check
for
an
s
bomb
associated
with
that
digest
just
so,
you
don't
have
to
generate
the
same
s-bomb
again
for
no
reason,
if
that
makes
sense.
A
I thought about this little dance. So kaniko has a way, I don't think it's going to work for our purposes, but it has a way that you can provide base images just on the file system. So you could pre-pull all of the base images that you'll need, and you just give them to kaniko, and then it can apply the Dockerfile.
A
If we did that, the final image could be an OCI layout on the file system, and you never need to talk to a registry at all. Or you could spin up a local registry just to use as the interface, but it doesn't matter; at the end of the day we're using the file system to shuffle the bits in, do the extension, and pull them out. It would be great not to have any registry involved, like a secure registry where you have...
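A sketch of that file-system interface using go-containerregistry's OCI layout support; the reference and path are placeholders:

```go
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/layout"
)

// Illustrative: pre-pull a base image into an OCI layout directory so
// a later step (e.g. applying a Dockerfile with kaniko) can work
// purely from disk, without talking to the registry again.
func main() {
	img, err := crane.Pull("registry.example.com/run-image:latest") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	path, err := layout.Write("/layers/run-image", empty.Index) // placeholder
	if err != nil {
		log.Fatal(err)
	}
	if err := path.AppendImage(img); err != nil {
		log.Fatal(err)
	}
}
```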
D
Oh, you didn't modify anything; it's the same thing, though. My point is just that all that matters is the digest of the final thing you end up with, because if it changes ten times, the earlier ones don't matter. So my question is just: instead of having logic to figure out whether things changed or whether Dockerfiles were run as part of determining the SBOM, can we just always take that final digest?
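A sketch of that digest-keyed lookup: resolve the final image digest with go-containerregistry, then ask cosign for an attestation attached to it; the reference is a placeholder:

```go
package main

import (
	"log"
	"os"
	"os/exec"

	"github.com/google/go-containerregistry/pkg/crane"
)

// Illustrative: key the SBOM lookup off the final digest alone,
// regardless of whether a Dockerfile extended the run image.
func main() {
	ref := "registry.example.com/app:latest" // placeholder ref
	digest, err := crane.Digest(ref)
	if err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("cosign", "download", "attestation", ref+"@"+digest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		// No attestation found: fall back to generating the SBOM.
		log.Printf("no attestation for %s: %v", digest, err)
	}
}
```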
A
I think the problem will be doing all of that in one container. You could just run gen packages, and then you can still look for an SBOM in a registry and prefer that one to the generated one. And in that case, why are you running Dockerfiles at all? Why don't you just switch? Why are you going through the process of building a new image that's exactly the same as the one that you already have?
D
It's
hold
on
sorry,
the
problem
is
the
thing
you're
worried
about
is
you've
done
this
extension
phase
inside
of
the
image.
Maybe
it's
changed
some
stuff,
and
maybe
it's
around
some
docker
files,
and
at
the
end
of
that
you
don't
want
to
run
something
that
then
needs
to
reach
back
out
to
the
registry,
but
you
would
need
to
reach
back
out
to
the
registry
in
order
to
know
whether
or
not
you're
supposed
to
run
gem
packages
on
the
system
right.
C
I guess we're introducing all this complexity because of optimizations. We could just start with the signer looking at the run image digest at the very end; it doesn't need to scan the entire app image. It will take the app image's SBOMs as they are (you can fetch those from the layer), it can get the run image digest from the lifecycle metadata and run the scan on that. If the run image has an attestation, it attaches that to the app image; if it doesn't, it generates one.
D
Yeah, I understand what you're saying. I worry about the simple case in that situation, where you are starting with a run image that isn't extended with any Dockerfiles and it doesn't have an SBOM attestation. In that case, you have to pull the run image every time.
C
I think we can't just have partial supply chain security, where you're trusting run images not to have SBOMs and stuff and then just relying on our build process to create one for you. I get that case for when they're extending the image, where they can't provide one. If you're just switching the image, I don't see why people would do that, and if they're doing that, they'll still end up with the correct SBOM; it's just going to take them a while.
D
Oh, I'm not advocating for not generating the SBOM for the run image. I'm just advocating for an implementation where we pull the run image first and then run gen packages. Sorry: we pull the run image using the platform's tool for pulling images, and then we run gen packages in that running container, in the context of the container, instead of just running the tool against the bits in the registry. It's the same outcome: you always end up with the SBOM of the generated image.
C
They can get cached by the platform without needing to run that during the extension phase. For example, syft has a volume where it stores cached layers. If the platform wants to, it can just reattach that volume during the SBOM generation phase, and it will look at that instead of rebuilding the layers.
C
How would you cache-poison it? We wrote that trusted tool so that it ensures it's generating the right SBOM; it's looking at digests, and if the digest matches, it's just skipping a bunch of steps.
D
You can rely on the platform's cache, if it's a container platform, of whatever images it's pulled, and you can make a pretty strong security guarantee there; we just rely on Kubernetes to do the isolation. Now you're introducing a cache that's shared across tenants and that relies on buildpack tooling to preserve tenancy, and that worries me.
C
It's
up
to
the
platform
to
make
that
decision.
You
can
provide
them
like
it's
the
same
thing
as
the
volumes
that
are
mounted
during
build
time
right,
like
there's
nothing
preventing
a
platform
from
actually
mounting
shared
volumes
across
multiple
builds.
It's
just
that
the
platform
may
choose
to
not
do
that.
D
Yeah,
but
the
right
now,
it's
just
at
the
application
layer
right
where
there
may
be,
or
it's
just
application
layers
that
you
would
you
know,
have
to
decide.
Oh
do
I
want
to
trust
this
tool
more
right
to
separate
it's.
It's
not
the
base
images
that
potentially
larger
base
images
that
the
builds
were
on
top
of
or
that
or
they
you
know
they
run
image.