From YouTube: Office Hours; 2021-08-12
B: I don't think I have any specific urgent requests for it. I just wanted to sync with folks — Natalie especially, I guess, because she has been most adjacent to it. I'm still waiting to find ARM resources to test out the lifecycle binaries that would be released. I believe the current state is we're building them in CI, but not yet including them in the release artifacts.
B: Natalie's signing-images-in-the-release PR, I think, also includes the ARM binaries, but they wouldn't be included in the release artifacts; they would only be included in the release image. I'm not sure if that's true, but either way I wanted to give a status update: ARM is still on track, as soon as we can find resources to test it out manually. My main question for this group is: how do we get resources to test them in CI?
D: The latest that I heard — I know Javier had done some investigation about getting the CNCF to pay for some resources, and I had pinged the ticket back in July and they were going to check again with the finance folks, and then I pinged it again today.
B: Would that be separate? Is that going to be GitHub Actions runners on ARM, or is that separate? I don't know how pack's ARM support is tested either, but there is, like, a separate system outside of GitHub Actions that gets triggered by GitHub Actions or something.
B: That's great; I look forward to hearing more. Do you have a link to the ticket with the CNCF, if it's a public one, just so I can follow along?
B: Sure, sure. Okay, great. So the remaining blockers are some manual testing, and then some CI testing so that every commit gets tested. I don't anticipate big changes, because after the one change that blocked ARM support in the first place, the rest of it is relatively platform-agnostic — but I definitely don't want to release things without testing them. So, great. Thank you.
B: Yeah, it would be helpful to have some tests in place, just to detect assumptions that are being made deep in the bowels of the code about what platform to pull, or whatever.
B: I guess the next one is also me. I filed a feature request on — I think the spec is the right place to do it.
B: I don't know how you want me to file this request, but the OCI added annotations to the spec to specify base image information — similar to, but different from, how buildpacks specifies base image information — and I wanted to get feedback, not necessarily right now, right here, but on that issue, about making it a recommendation for buildpack implementations to set that, in addition to the labels that are already being set. Like I said, you don't have to decide or approve right now, unless...
B: ...in which case, great. But I'm generally curious about people's thoughts.
B: Right, right. So, right — I don't think I would want to recommend that buildpacks' rebase implementation change at all. You're doing fine; you're doing something that works, and I don't want to change that. I think a goal would be to specify this in a standard way, so that other tools that aren't buildpacks could detect out-of-date base images and potentially rebase — again, if they're safe. I think buildpacks is safe, great work, but not everything will be safe.
B: So it's sort of dicey out there, but in general, just gathering more adoption for this annotation, so that more tools can interoperate, would be, I think, a useful one.
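The annotations being discussed are the base-image keys the OCI image spec added (`org.opencontainers.image.base.name` and `org.opencontainers.image.base.digest`). As a minimal sketch of the recommendation — a tool recording them on a manifest's annotation map, with the image reference and digest values hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// The two base-image annotation keys added to the OCI image spec.
const (
	baseNameKey   = "org.opencontainers.image.base.name"
	baseDigestKey = "org.opencontainers.image.base.digest"
)

// annotateBaseImage returns a copy of the manifest annotations with the
// base image's name and digest recorded, so that tools other than
// buildpacks can detect out-of-date base images.
func annotateBaseImage(annotations map[string]string, ref, digest string) map[string]string {
	out := map[string]string{}
	for k, v := range annotations {
		out[k] = v
	}
	out[baseNameKey] = ref
	out[baseDigestKey] = digest
	return out
}

func main() {
	// Hypothetical base image reference and digest, for illustration only.
	a := annotateBaseImage(nil,
		"docker.io/library/ubuntu:20.04",
		"sha256:0b897358ff66")
	b, _ := json.MarshalIndent(a, "", "  ")
	fmt.Println(string(b))
}
```

The idea in the conversation is that a builder sets these once at build time, and any later tool can compare the recorded digest against the current tag to decide whether a rebase is warranted.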
B: You have a pure push mode, right, where you don't even need to write to the local daemon — you can just push straight to a registry? Is that right, or am I misunderstanding?
C: And the problem is that, like, the manifest is not a first-class concept in the Docker daemon: you need to have already pushed an image to a registry, and then use their experimental features in order to create a manifest that would have the annotations. I think that's sort of what I had been proposing — maybe what we want to do — because the daemon is a bit weird as a destination in a lot of ways, and causes us a lot of pain.
B: So, just to summarize, to make sure that I understood: the concern would be that, without larger architectural changes, this would introduce a divergence in behavior between publish mode and Docker daemon mode, and that makes you rightfully uneasy. It would be a small change, but it's infinitely bigger than the current divergence, which is zero. So I get it, yeah.
C: Small thing to add, but it might be a larger architectural change for us, because in our helper libraries we don't expose ways to set annotations on manifests — because we haven't been treating the manifest as a first-class concept in those interfaces, because we never use them, because we're trying to have parity between our two targets.
B: Yeah, yeah — that is an entirely reasonable answer to the question of whether we should do this. I'm curious about the size and roadmap for that larger architectural design change, but yeah, I mean, this isn't an urgent, hair-on-fire need. It would just be great to be able to do this; if this ends up being one more log on the fire of reasons to make this architectural change, I'm happy with that too.
A: It's not just a difference in implementation, though, right? I've seen a lot of people do a pack build and then a docker push afterwards, instead of doing pack build with publish, and then you kind of create inconsistency: when you're doing that, you don't end up with a label, and when you use publish or a cloud tool, you do end up with a label. So, right, if we don't want to create that inconsistency, it feels like something we should solve.
F: Sorry, I was gonna say: could I ask where all the known places for the daemon use case are at? Because obviously I'm aware of pack using the daemon use case. Are there others?
C: I think few platforms other than, well, pack, and the Spring Boot and maybe even Gradle plugins. So basically all the local workstation use cases tend to have a daemon mode, but all of the, you know, Kubernetes-space platforms don't, because that wouldn't make sense. But I think, for most people's daily use with pack and these plugins — almost nobody is using publish. There's a strong community of people who don't even know publish exists; they just use the daemon mode.
F: Yeah, I guess, because I was proposing — I know we've talked about it in the past — but if pack were to switch over to not use the daemon case and use an ephemeral, you know, registry, let's say, for the sake of the transaction, that would solve it there. But then, I guess, the question is: Spring Boot will still require it, and there would still be other cases. I'm thinking of somebody like GitLab.
A: Pack wouldn't even have to use an ephemeral registry; it could just store the images on disk, and then we could kind of change the pack workflow a bit, so there's a way to pull it into the daemon, but that's not the default way of generating images. I think we have some good architectural options.
A: It could be fairly automatic — like Emily was saying, a --daemon flag could do the import automatically, as long as it's very explicit that that's what's happening, you know, and that you would lose information. Then, right, you could see that in the definition of the flag. You'd go: "Oh, how do I get this into my Docker daemon? Oh, okay. Oh, but I'm going to lose my annotation, so I probably should push this afterwards." Right. Yeah.
B: My feature request has both ballooned in size, and my excitement for it, because I'm really excited about buildpacks and pack not requiring the daemon anyway. But it doesn't sound like there's any sort of fundamental opposition to the feature; it just would require sort of a larger architectural...
B: I'm totally fine with that. When you all are done with that architectural change, I'll come back, I'll put some annotations in there, and we'll all be happy.
A: You know, other tools could read them and determine if the image is out of date and all that. One use case that, you know, I mentioned before — that it's important for us to keep that top-layer digest, so that we don't need the previous base image: the really specific case I've seen is Artifactory, especially, which tends to be configured not to preserve previous manifests. So as soon as somebody pushes a new run image, you know, to the same repository, if they didn't label their previous run image, right...
A: If these things are, you know, just security patches, sets of packages, you might not be tagging every single time you push, right? Or you might be re-tagging something over and over again — you lose the previous one. And so, a lot of times, I've seen workflows where the previous run image isn't available in a registry; just the new one is. And then you'd have this reference and you wouldn't be able to find — you know, you could still tell that...
B: Absolutely. Like I said, I don't intend to propose changing how rebase works for you all, where it works completely fine. I don't want to break something, or fix something that's not broken, anyway. This also came up some in the discussion of the base image annotations in the first place, and I think the use case is entirely reasonable — I mean, obviously reasonable to support, if this becomes a more serious issue.
B: I think it would make sense to add yet a third annotation, which is basically the top-layer digest, like you all use. I just didn't want to include it — it already took six months to get these two annotations in, and at that rate it would be, you know, sometime next decade to get a third one. But now that we have the first two, I think it would be easier to motivate the third one with a specific use case of, you know, expiring digests.
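The proposed third annotation doesn't exist in the spec; as a sketch under that assumption (the key name here is purely hypothetical), it would record the digest of the base image's topmost layer, so a rebase tool can find the base/app layer boundary even after the original base manifest has expired from the registry:

```go
package main

import "fmt"

// Hypothetical key, NOT part of the OCI spec: the discussion above proposes
// a third annotation recording the base image's top-layer digest.
const topLayerKey = "org.opencontainers.image.base.toplayer"

// recordTopLayer copies the annotations and adds the last (topmost) layer
// digest of the base image's manifest, if there is one.
func recordTopLayer(annotations map[string]string, baseLayerDigests []string) map[string]string {
	out := map[string]string{}
	for k, v := range annotations {
		out[k] = v
	}
	if n := len(baseLayerDigests); n > 0 {
		out[topLayerKey] = baseLayerDigests[n-1]
	}
	return out
}

func main() {
	a := recordTopLayer(nil, []string{"sha256:aaa", "sha256:bbb"})
	fmt.Println(a[topLayerKey])
}
```

This mirrors the buildpacks label mentioned in the conversation: with the top-layer digest in hand, a rebaser can split the rebased image's layer list without ever fetching the previous base manifest.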
B: Yeah, yeah — I think this is absolutely an area that we all want to do better at, and I think the trick is going to be bite-sizing it so that it only takes six months to make progress. But absolutely: a top-layer, or a top-history, or whatever we all end up deciding to call it or have it be specified as — I think it would help. I would either do this proposal with your help, or help someone who is interested in pushing this through the OCI.
B: I know things I would do differently now, to maybe shave a month off, but yeah. I think the idea is still something we all agree on, which is: these should be standardized things. Tooling that is not pack rebase, and not in the buildpacks ecosystem, should be able to consume this information and, you know, perform rebases successfully — also so that more stuff that wasn't built by buildpacks could be rebased using the same approach that buildpacks uses anyway.
B: Valid. I mean, it absolutely is a deficiency in the current annotations, and I thought about adding it as a third one, but then we were already three months into the process and I didn't want to reset the clock. So we can add it in the next one, once we have regained the ability to push things through the spec, I think.
B: That would absolutely help speed the process along. Some part of the delay was: "What are you using this for in production already?" And there was some delay because no one's using it because it's not specified, and it's not specified because no one's using it — a chicken-and-egg type problem.
B: I think you all are in a very unique position, because you are doing it in production right now, and you would just be proposing moving where that data is — so your operational production experience is absolutely going to help that be quicker. But I think you're absolutely right, Emily, that using the annotations that are there today, and then having the production experience of "there's a gap here, there's a deficiency"...
B: ..."this would solve the deficiency" — that would make it much faster and easier to get through. But yeah, I'll come back when all of the annotation stuff is settled, and then we can go from there.
B: I would love to help — I would love to be able to help, but yeah. It's very exciting, though, that you have your eyes on not having Docker daemon storage as the default, or as any kind of default. That would be great.
A: Before we move on to the next one — there was something ARM-related I forgot to mention. I have an RFC open for removing the concept of stacks and mixins, but it also introduces build targets for buildpacks and would let you do cross-compilation. So it would give you information about the run image and the build image, even if the run image is a different architecture.
B: I'm absolutely interested in better and more multi-architecture support. I think the future is multi-architecture, and we should prepare and lay groundwork for that now. I don't think I know nearly enough about stacks or mixins to weigh in knowledgeably, but I'll take a look at it and see if anything...
B: All right! Well, thanks! I don't mean to monopolize the time, but thank you for your feedback.
D: Yes, I put that there. This is actually kind of related to the discussion we were just having.
D: So this is: you're running the lifecycle, and it has a connection to the Docker socket, but you're actually communicating with Podman, because it provides the same API. And I guess Podman is closing the connection after 20 seconds, which can be a problem, because we establish the connection as root and then we drop privileges.
D: We do a build that might take longer than 20 seconds, and then we try to export, and it's... terrible, yeah. So there are some possible solutions that have been proposed, but given, I guess, you know, how seriously we're taking elevated permissions and that kind of stuff, I thought I'd surface it to a larger group.
A: You know, that kind of thing — pack would maintain its own list of images, and then you can either go daemon or you can go registry, and that way, when you're rebuilding, it never makes sense for the daemon to be the source of a rebuild — because you mentioned you also read from the daemon at the beginning — because pack's source would always be either the registry or its local cache.
A: Yes, exactly. Instead of exporting the images as, like, files in the app directory or something like that, I think pack should maintain its own, like, on-disk registry.
F: Does the lifecycle need...? I guess that's the part where I'm getting a little bit confused, because, you know, there is the OCI layout format, which is pretty much the registry on disk.
A: Yeah — instead of letting you export OCI images to disk at specific locations (I think it's okay if we support that too), I wonder if the default should be to export images to disk in, you know, dot-files in your home directory, right — a .pack-something — and then have pack keep its own, you know, separate registry of images in that format. Okay, so...
C: And I was already pre-optimizing, thinking about, you know, maybe spending time copying them into containers and copying them back out. We could do some sort of bind-mount thing, but that can be slow. So, is there a case where we just want it in a volume to begin with? But, I guess, if you want to support that, it's hard when pack just wants to manipulate it: it's easier for the lifecycle, but it's bad for pack.
F: And I do think that there is a platform that has the opposite requirement. If I remember correctly, for Bitbucket — or their CI, I forget what it's called — they actually can't have named volumes, so they can't have the Docker volumes; they have to have bind volumes, and it has to be based off of the current working directory.
D: So, I guess, just for the short term — because we are talking about a larger effort — is there anything that the lifecycle can do to make this problem less painful right now? Or do we just say, you know, "sorry, this is going to be broken until we do this big change"?
D: I think the thing that's probably the most doable is to just keep pinging the daemon at regular intervals, to avoid it closing the connection. Like, you know... sure.
F: Is the timeout configurable, I mean? Is that the workaround for Podman?
F: Let me think this through — or maybe help me think this through. When we're talking about replacing the daemon in the long term, right, what does that mean for Podman? Obviously, you know, from pack's perspective, still being able to use Podman — we'd still be able to do that, right? And in this case, are we talking about the trusted scenario or the untrusted scenario? Sorry, right — I guess maybe that's the part where I become confused.
C: So what it does is: it does everything it needs to as root and then drops privileges. Like, trusted versus untrusted is not about root versus not-root; it's about whether the container that has buildpacks in it is the same one that has registry credentials in it.
C: We could encourage these folks to contribute a keep-alive as well; like, I feel like we would accept that, even if we... there.
A: I see you posted something about SBOMs. There were a couple questions about SBOMs in the spec channel after the working group meeting this morning — is any of that worth going through? I left a comment there mentioning — at least what I kind of thought was — that we could strip SPDX out until we can figure out how to merge it, and just do CycloneDX.
E: That's what I still don't know. So we haven't decided yet on how to store these things, right? If the community as a whole ends up sort of going along with the sigstore/cosign way of storing SBOMs, we don't really have a problem with SPDX as it is right now.
E: It's actually the perfect use case for us, because we, I think, are the only container-building tool that can actually generate SBOMs that are layer-specific — which makes sense; for the other ones it may or may not make sense, since it's just arbitrary operations. So, in that sense, I think it's fine to support SPDX, given that other cloud native tools seem to be supporting it, and there's so much development going on on the SPDX front.
E: So I just don't want us to skip out on all of the integrations with other components in the CNCF because we chose not to support SPDX right now — especially since the Security TAG working group is creating their whole architecture document right now on how all of these pieces connect together. And I think sigstore, SPDX, buildpacks — it would be nice if all of these things connected together for them.
A: I'm not saying that it's not easy! I just want to make sure that, you know, there is a workflow for SPDX that's reasonable. And then the other point, on the other end, is that if buildpacks all output CycloneDX, you can end up with one CycloneDX SBOM, you know, that's scannable with a single grype command.
A: That makes sense. If we say we support both formats off the bat — we support CycloneDX with a merged one, and we support SPDX, without merging, in cosign format — is that going to be a problem? I know we talked about outputting a single file in the daemon case. Is that going to be a...? How are we going to handle that in the SPDX case?
A: We probably don't want to create another image in the daemon that's going to be missing annotation metadata; it wouldn't be in cosign's format. When you push it, how are we going to take all the SPDX bits and do something reasonable — or useful — with them, in the...?
C
Compromise
solution
here,
but
I've
always
wondered
if
it
would
make
sense
to
have
build
packs,
be
able
to
output,
have
random
reports
and
outputs
that
get
copied
back
to
the
user.
So,
like
the
thing,
a
platform
like
pac
would
copy
out
of
the
container
at
the
end
of
the
build
would
sort
of
be
a
an
archive
of
stuff,
and
in
that
stuff
we
could
have
specific
stuff
like
report
tomml.
That
has
a
schema
that
you
can
build
an
integration
against.
You
have
a
merged
cyclone
dx
bomb
that
people
can
build
integrations
against.
E: So I was thinking you could just take that whole SBOM folder, tar it up, and produce that as an output — or you can just copy the entire folder out, which can then be uploaded to the registry using the cosign SBOM upload; it just takes in JSON files on disk to upload. And I can think of one other integration where this would help.
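For context on that upload path: cosign stores an attached SBOM in the same repository as the image, under a tag derived from the image digest (the digest with `:` replaced by `-`, plus a `.sbom` suffix). A tiny sketch of that tag convention:

```go
package main

import (
	"fmt"
	"strings"
)

// sbomTag maps an image digest to the tag where cosign stores its
// attached SBOM: "sha256:<hex>" becomes "sha256-<hex>.sbom", in the
// same repository as the image itself.
func sbomTag(digest string) string {
	return strings.Replace(digest, ":", "-", 1) + ".sbom"
}

func main() {
	fmt.Println(sbomTag("sha256:deadbeef"))
	// → sha256-deadbeef.sbom
}
```

The point of the convention is that any tool that knows the image digest can find (or push) the co-located SBOM without extra metadata.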
E: So there's this thing called in-toto, or whatever, which is also supposed to have integrations in this whole cosign supply-chain-security thing, where each step leaves behind certain outputs, which can then be recorded against a transparency log and put somewhere in a metadata store — whether it's a registry or something else. And I think even there, the way they're currently generating SBOMs is something similar, right? They first generate the container, then they run a container-scanning tool that outputs a BOM on disk, and then they record that.
E: So, if we could produce these files on disk, it would also satisfy these other container-scanning integrations that people have currently built out, where they first build a container, then in the next step they scan it and produce this BOM on disk, and then they do whatever they want with it — like, they can upload it either to a registry or some other place, like some metadata store.
C: I want to do, like, layers/outputs, and then maybe sbom underneath it, so that we can start a pattern of platforms being able to say "I take outputs" — and then, you know, we can expand what we put in there, but we won't have to add more things that the platform needs to copy out if we add similar features that aren't SBOMs.
C: The reason it needs to be in the image is that it contains all the process types for the launcher to use. Like, that whole system could probably use some refactoring — we should be moving those process types somewhere else, you know, like CNB process types, a launcher.toml or something, rather than coupling it with the BOM and a bunch of other stuff that's happening. But that is why it's in the image.
A: I'm definitely interested too. You mentioned a second ago how we're gonna restore the SBOM when you're doing rebuilds — maybe this was answered — but can we just store it in, like... can we put the layers config in the final image, make a layer out of that, and then pull that layer, and that's how we restore it?
E: That's one option. Some people were concerned about shipping the SBOM with the output images, so we'll see if there's a better choice.
F: Yeah, I mean, I'm personally against it, but I don't have enough, you know, data to back that up. Couldn't we do the cosign solution, or a co-located image?
A: If the co-located image isn't there, then your next build is slower, because you have to rebuild all the layers from scratch — because you can't pull their SBOM information — or we need to implement some complicated logic that looks for missing SBOM pieces and knows that those layers have to be rebuilt in order to rebuild their SBOM. Is it just a security concern, or a...?
C: Kind of weird, and Sam wrote a great RFC to fix that, but...
E: It wasn't a concern from my side. I think, the last time I brought it up, people were opposed to putting the build BOM on there, and I think it was mainly around — I don't know what's better — more transparency, to help people find vulnerable components, or putting this transparency out in the world so that others can exploit it.
F
That
you
want
to
retain
as
much
information
as
possible
from
the
final
app
image
right,
especially
when
we're
talking
about
the
build,
the
things
that
were
used
to
compose
the
image
itself.
A
Senses
so
I
still
understand
so
you're
worried
about
I'm.
Not
I
like
literally
don't
it's
not
coming
coming
together
in
my
head.
So
is
the
worry
that
we're
putting
the
asp
if
we
put
the
runtime
and
build
time
s-bomb
in
the
final
image
that
you
could
exploit
a
running
process
to
get
information
about
what
the
image
looks
like
you
know,
return
back
through
some
api
and
then
that
could
lead
to
you
know
discovery
of
other
security
vulnerabilities.
Is
it
something
like
that
or
is
it?
Is
it
something
that
I'm
not
thinking
about?
E: I can imagine that as one of the possible use cases. The other thing the NTIA notes is, like: if you're putting licensing information in your SBOM, will that attract patent trolls? And then they explicitly state: oh, it won't, because you were already in violation anyway — at least this would make it transparent.
A: That's funny. I...
C: I guess I'm worried about the case where people are... From a security standpoint, if someone is shipping the container and the potential hacker already has access to the container, I don't think that putting the build BOM in there is gonna change something, 'cause they can already figure out what software is in there if they want to.
A: Here's the case: you're a COTS vendor, right? You're producing images and then handing them to other people, and you're using buildpacks to build your images, and you build the images using a whole bunch of internal language modules with a private — say it's a Node.js app and you have a whole bunch of private npm repos. That SBOM will definitely contain the locations of those private npm repos, but you're not intending to disclose the details of how your, you know...
A: ...big blob of JavaScript that you're shipping in the end is constructed — especially if the local registry locations are private registries. You know, it discloses things about your infrastructure, right? It's...
A: Well, well, that feels weird, because then you did one build with no BOM, and then you do a build with a BOM against a no-BOM image. Is it going to fail? Is it going to still not generate a BOM, surprisingly, and miss security vulnerabilities, right? You have to be really... that worries me — that particular no-BOM/BOM combination. Sorry.
A: I'm convinced there should be a way to ship an image without a BOM. I just want to make sure that the ability to produce a Cloud Native Buildpacks image with a BOM, and then feed it back into the build process, doesn't lead to people who think they have images with a valid BOM on them when they don't — if that makes sense.
C: We could store the BOM in the cache. And right now, if you clear your cache, you don't get your cache layers back, but you can still reuse launch layers because, right, they're in the registry — but actually, the way we do things, we try not to reuse launch layers, so as not to confuse people, once upon a time. If we just stored the BOM in the cache instead, then it would always get restored, unless you cleared your cache — and then you'd have to rebuild everything.
A: So I'm worried that it means that, on a rebuild, right, if you don't have that separate SBOM image available — which is especially going to be weird locally, when you do it in the daemon case, where those files may, like, be output in the app directory, or, you know, we don't have a way of storing those necessarily — that on a rebuild we're... not... if you want a BOM, and your previous image was generated using a BOM, but that BOM is no longer available.
C: That's not... daemons, daemons, daemons, yeah — that's the real big problem, because I think we should do this before we do the daemon stuff. I think this is something we should review and ship, which I think means, maybe, in the great order-of-operations problem, it's like: first we solve structured BOM, but we don't restore it — we keep our existing it-doesn't-get-restored problem.
A: But the restore problem is really bad for launch layers that don't get regenerated, because then the buildpack doesn't know what's in the layer — it just has some information about maybe what's in the layer — and then it has to use that to generate an accurate SBOM for it on a rebuild. That feels like... it's like you couldn't scan your node modules in every case and generate a valid SBOM for them, if your node modules are in a launch = true, cache = false layer. I'm...