From YouTube: CNB Weekly Working Group - 3 March 2022
B
Would you mind? I guess you already kicked off the live stream.
B
D
D
C
Hey everyone, I'm Suraj. I just joined Ben's team, and I'm working on the buildpack side of things in our team as well. So yeah, that's me.
E
C
C
Cycle, I think we're just waiting on a couple of things to be finalized in the platform API before we can wrap up all the items for the next release. So.
E
B
All right, thanks everybody. Moving on to our topics for today, we've put "support Dockerfiles" on the agenda.
C
B
With that, or faster, do I just move this to the end? Then yeah, so we'll come back to that one. Let's talk about the run image SBOM. Hopefully this will be a shorter topic, I feel like. Maybe the point of this is to corner Steven here to talk about our desire to switch to providing the run image SBOM to the exporter, and sort of kicking responsibility for managing it to the platform for now, instead of baking it into the image. I know you had mentioned some hesitation about that. So maybe.
F
Right, I don't think I have a preference for it being baked into the image itself, in the way that we bake it into the app image so that we can reconstruct it, but I wouldn't be opposed to that either, so it'd be easier to, you know, pull that SBOM the way we, you know, construct the SBOM for the app image.
F
What's the issue with consuming the run image with the SBOM attached to it in some way?
A
A
It could read that off from the run image and make it available to the exporter, or if it wants to inject other things in there, we could do that as well. So it removes the requirement for us as a project to maintain the specification for also attaching SBOMs to the base images.
F
So if I'm running, say, Tekton, and I do a build just using the lifecycle, and that uses a run image from Docker Hub, and that run image has, you know, a cosign attestation for the SBOM in it: how do I make sure that the app image I get has the base image SBOM?
A
Yeah, the latter. So in the case of Tekton, they could... there's a prepare step anyway that does a bunch of things. It would just run cosign download sbom against the base images and put them in a directory on the volume, the common volume that's mounted across everything, and the lifecycle would read it.
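For illustration, a hypothetical Tekton prepare step along those lines might look like this. The image reference, parameter name, and workspace path are all assumptions, not part of any spec; `cosign download sbom` is the cosign subcommand for fetching an SBOM attached to an image.

```yaml
# Sketch of a prepare-style Task step: fetch the run image's attached
# SBOM and leave it on the shared workspace for later phases to read.
steps:
  - name: download-run-image-sbom
    image: gcr.io/projectsigstore/cosign   # illustrative image reference
    script: |
      #!/bin/sh
      set -e
      mkdir -p /workspace/sboms
      # RUN_IMAGE is an assumed Task parameter holding the run image ref
      cosign download sbom "$(params.RUN_IMAGE)" \
        > /workspace/sboms/run-image.sbom.json
```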
A
F
B
Some of the reasons we brought this up (we had this discussion before): number one being that it changes the tooling that you use to create your SBOM, right? You make your run image, and you can't just use a Dockerfile anymore. You now need to use this tooling that creates the image and then modifies it, because the image itself is referencing a digest of the layer, right, or the diff ID.
B
Number two: people modify these images with Dockerfiles to make new versions of them, and once you've baked the run image SBOM in, it's fallen out of date, and at build time we don't know whether people didn't know that it was there and now we're using the out-of-date thing, or if it's, you know, being updated or still valid. And then number three: we don't want to pick a winner, like we sort of had to in the layer metadata, the layer BOM, because it's important that we restore it.
B
F
So those same reasons not to, or most of them, seem like reasons not to encode the SBOM in the same format that we have it in the app image, right, like inside of the image itself. But most of those wouldn't apply to an SBOM that's external to the image, because it would be referencing a digest that would change if you use a Dockerfile. So are there any disadvantages to just...?
F
F
F
Are we putting other things in the prepare phase? Like, do we need to integrate more of the prepare phase into the creator? Would that kind of solve the problem?
A
I think it sort of ties back to the conversation we had last week in the group, right? This is again something that's very specific to the platform. The buildpack will never see the SBOM attached to the build image or the run image, so we can include it as a common binary that we provide as a project. It doesn't need to be part of the lifecycle.
F
I mean, I think it's not about whether it's... I just care about the user interface for the lifecycle, right? There's a really simple use case here, which is: you have a run image on the registry, you're building an app image, and you want to end up with a valid SBOM in the end. And then, you know, to implement this, a platform has to execute a bunch of steps outside of the lifecycle.
F
Just because we said, well, technically, you know, we don't want to spec this part yet, right. You know, I think it's okay if we're saying we don't want to spec how a platform needs to consume that, but maybe that means that, for the single-phase lifecycle run, right, where users expect to be able to just invoke one thing to do a buildpack build in the container, we integrate more of those external binaries (if they're external) or extra lifecycle binaries (if they're in the lifecycle) into the prepare phase.
F
A
They would do two things. One, even apart from that, if you're consuming a run image, for example, that won't have the SBOM: what if, as a platform, you want to make sure that, regardless of whether the input image had an SBOM or not, you produce an additional SBOM? How do you do that? And the other thing was: regardless of that, I thought prepare was going to run as a separate phase, or a separate container entirely.
E
B
Maybe... I think the lifecycle serves a very particular purpose, which is providing this compatibility layer between platforms and buildpacks. So we don't want the platforms to have to deal with the complexity of orchestrating the buildpacks themselves, but we have this well-defined buildpack API, so we need the buildpacks to be orchestrated in a specific way. The lifecycle bridges that gap, I think, for things that are just platform features.
B
F
I think it doesn't bother me as much if the lifecycle goes outside the scope of what's in the spec, right? It's the tool people are going to use. There's behavior that we're saying, you know, is part of the contract, right, and there's behavior that, you know, isn't defined as part of that contract. But I don't have a very strong opinion about it.
F
I think if we want the lifecycle to, you know, err away from a clean, useful user interface and more towards an implementation of the spec as it's written, in as generic a way as possible, then I think we should pull stuff out of the lifecycle in order to build that cleaner interface for platforms that, you know...
F
...need that, like Tekton, right. And so I'd say again, it's fine if we don't, especially because I agree that we can't really pick a winner so far for the attachment. So it's fine with me if we don't, you know, in the lifecycle itself, or at least in the spec, consume the run image's SBOM. But for a platform like Tekton, you know, the default should be:
F
If there's an SBOM attached to the run image, you know, we should carry that through, and so maybe that just means we should pull creator out.
E
E
That being said, I do have a concern. I know at some point there was a proposal that the run image might be dynamically selected through the FROM Dockerfile syntax, yeah.
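For context, the proposal being referenced lets a Dockerfile swap the run image via a build argument. A minimal sketch follows; the arg name comes from the image-extensions discussions and should be treated as illustrative rather than the final spec:

```dockerfile
# run.Dockerfile sketch: the platform passes the current run image in
# as a build arg, and the FROM line can select a different one.
ARG base_image
FROM ${base_image}
```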
F
I don't think it would be too difficult, because you already need a separate container in order to build the run image, unless we're going to implement something where we replace the entire current container with the Dockerfile build and then swap back. I think at some point you're going to have to execute another lifecycle binary, and that would give the platform an opportunity to hook in, grabbing whatever selected run image came out of the Dockerfile build and using that.
F
Sorry, yeah, it's just feedback. I think that gives us an injection point. The only case is unless, you know, we're planning a creator flow where, in the middle of the build, it uses kaniko to replace the entire current container with the run image container, does a build, and then swaps the whole container back to the builder.
F
F
B
You know, we've talked in the past about how the implementation team might be poorly named, right? There are many things that are implementations. I'd like to think about it as, sort of, platform author tooling: what tooling do we provide so that people can construct platforms? And I think the lifecycle is just one piece of that, and it's a special piece.
B
It's the piece that every platform must use, because otherwise we cannot make guarantees that the buildpacks work the way that they were intended to. And then we should think about, sort of, what interfaces we want to provide platforms to build on top of, without always coupling that to what we want to add to the lifecycle interface, because I think these two things could be different.
E
B
The implementation team right now is actually like platform author tooling: it's producing something for platforms to use. And then what we're calling the platform team right now is really, if we reframed everything in terms of its consumers, more like directly end-user-facing tooling, including platforms.
A
B
F
Like, you know, in an out-of-band context, right, when we don't want to version them with the platform API, the platform might want to use those tools separately, and the versioning doesn't matter. I worry that if we scope the lifecycle down to, you know, "this just implements the spec as it is" and really avoid adding any other tools, there are tools where, you know, we're going to have this other contract that we create between the lifecycle and this other tool.
B
I don't think we should have a contract between the lifecycle and the other tools, which is why I was sort of pushing back on Sam's, when we were talking about the cosign RFC, right, like: should it take in report.toml or something like that? But I feel like you don't need to create new interfaces as long as the platform is mediating.
F
F
I think that's what we end up needing to do if we want to create a good interface for platforms to be able to do image builds. Like, you know, I don't think you should have to use pack if you're on a CI platform that provides containers and doesn't provide Docker, right? There's no reason you couldn't just execute a binary and do a buildpack build right in the context of, you know...
F
...wherever you are, right? It seems important to be able to preserve that, you know, being able to build without Docker functionality, without requiring somebody to deeply understand the buildpack API and implement all these kinds of things outside of the lifecycle, because they, you know, technically fall outside of some very generic spec, right? Like, I think we're working against...
A
A
B
B
B
C
Can I interject that we're at half time? Just calling that out here. I feel like we have had a great and important discussion, but I feel like we have moved away from the original RFC discussion, even though it is related to it and other stuff.
F
I think I'm hesitant to say we should proceed with kind of continuing to move stuff out of the lifecycle implementation until we have a better solution for platform authors who want to, or sorry, for, oh yeah, people who want to do a buildpack build without Docker, right. I feel like we're moving too much out of that implementation.
F
We can call time, but you know, we should figure this out eventually.
B
Yeah, like maybe we should... I feel like we should come back to this sooner than next week, because I'd love to get the SBOM stuff to a place where we can really stand behind it as having everything you need in there. So we need a solution for the run image to do that, and therefore it seems like we need to get through the philosophical conversation to get back to the practical one, right?
G
Yeah, that's mine. Let me share my screen.
G
Well, I just want to answer two things, so we can try to use the time. So first of all, here it is: I updated my previous RFC, the one I did for the publish operation, and I changed it, based on what we discussed in the last office hours, to...
G
...just add into the exporter the capability, when the daemon flag is enabled, to save the exported image to disk using the OCI layout format, and then update report.toml with some metadata that these new tools can use to complete some operations and verify that everything is fine, more or less. That's the idea, and that's what I tried to put here. You can take a look at the draft RFC if you want, but I want to focus on two things. The first one is: yesterday in the sync meeting...
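For reference, the OCI image layout mentioned here is just a fixed directory shape on disk: an `oci-layout` marker file, an `index.json`, and content-addressed blobs. A minimal stdlib-only Python sketch of writing that skeleton, to make the discussion concrete (the manifest body below is a placeholder, not what the exporter would actually write):

```python
import hashlib
import json
import os

OCI_MANIFEST = "application/vnd.oci.image.manifest.v1+json"

def write_oci_layout(root: str, manifest: bytes) -> str:
    """Write the fixed on-disk shape of an OCI image layout and return
    the manifest digest. Blobs are content-addressed by their sha256."""
    os.makedirs(os.path.join(root, "blobs", "sha256"), exist_ok=True)
    # Marker file identifying the directory as an OCI layout.
    with open(os.path.join(root, "oci-layout"), "w") as f:
        f.write('{"imageLayoutVersion": "1.0.0"}')
    # Store the manifest as a blob named by the hex of its digest.
    digest = hashlib.sha256(manifest).hexdigest()
    with open(os.path.join(root, "blobs", "sha256", digest), "wb") as f:
        f.write(manifest)
    # index.json points at the manifests contained in the layout.
    index = {
        "schemaVersion": 2,
        "manifests": [{
            "mediaType": OCI_MANIFEST,
            "digest": "sha256:" + digest,
            "size": len(manifest),
        }],
    }
    with open(os.path.join(root, "index.json"), "w") as f:
        json.dump(index, f)
    return "sha256:" + digest
```

A separate tool can then read `index.json`, push the blobs, and compare the resulting digest against the one recorded in report.toml.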
G
...Jesse suggested this thing. I had expressed the feature as being used only when the daemon flag is enabled, because, I mean, that's actually the use case that's having trouble right now, with the annotations and the cosign stuff. But Jesse pointed out: why can't we just enable the export into OCI format...
G
...no matter whether you are using the daemon or not? And so I just wanted to ask everybody: does anybody see any reason why we can't just export the image to disk if the user wants to? I believe the only reason we tied it to the daemon was because the daemon was the one that was causing some trouble by losing the metadata...
G
...if we sign the image or we add annotations. But does anyone see any case, for someone who publishes the image to a registry and actually also wants the image exported on disk, that would prevent us from enabling that?
A
F
G
F
Yeah, I'm just thinking about: if we already have an interface where you can export to multiple registries, right, does it make sense to view this as, like, an extension of that? Because, just so I understand, you're talking about adding exporting to disk as a thing in addition to a registry or the daemon, right?
F
F
B
I do think maybe in the past we've been too restrictive on people in order to prevent them from doing really slow things. I guess if people want to do really slow things, they should be able to, but there is a communication challenge there, where I think then people will just assume...
B
So there's a trade-off between, you know, sometimes people have a good reason for wanting to do this and they don't care that it's slow, versus...
F
We should support any number of targets, specified in any order, for any location, and then, you know, use a URI or something to denote the daemon versus, you know, OCI on disk. There are some tools that have conventions for that already that we could use. And then, if you do multiple registries, we do a warning and say: oh, you know, this is going to be slow no matter what, because the run image has to come from one place or another.
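A sketch of what such a URI convention could look like, borrowing skopeo-style transport prefixes; the function and the exact prefixes are hypothetical, not an agreed pack/lifecycle interface:

```python
def classify_target(target: str):
    """Map an export-target string to (kind, reference) using a
    hypothetical skopeo-like convention:
      oci:/path/to/layout        -> OCI layout on disk
      docker-daemon:app:latest   -> local daemon
      ghcr.io/me/app:1           -> plain registry reference (default)
    """
    scheme, sep, rest = target.partition(":")
    if sep and scheme == "oci":
        return ("oci-layout", rest)
    if sep and scheme == "docker-daemon":
        return ("daemon", rest)
    return ("registry", target)

# A platform could then, for example, warn when the chosen mix of
# targets forces layers to be downloaded from a registry.
targets = ["ghcr.io/me/app:1", "oci:/out/app", "docker-daemon:app:latest"]
kinds = [classify_target(t)[0] for t in targets]
```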
B
We do say things like we'll never redownload a launch layer from the registry; you would not be doing that anymore, right? Like, if you're using a run image in a registry and one of your export targets is the disk, you're going to download a lot of layers, and that's fine. I feel like we just, like, literally write that down and say it in a lot of places.
F
So the platform has to decide: if the platform has multiple images it cares about, in multiple locations, it has to decide first where it's going to pull metadata from, where it's going to, like, pull cache from, and so it doesn't really matter at the level of the exporter. None of these are special. Yeah, okay.
C
C
The OCI URI scheme to control this, and being able to do multiple: that would be great, because, yeah, I could see that easily feeding into what we have now. When we remove, like, daemon support, we can just update pack to just pass in an OCI layout URL scheme to accomplish this.
G
G
It's just a basic idea I had, but I would like to hear some feedback about it. So the idea is: if we enable the flag to export to disk, then what we were discussing was, okay, the idea for this information is to be the input for another tool to do something with the image exported in OCI.
G
And then, when that tool pushes that image to a registry, it can verify the consistency, right, of that image versus the one that I pushed. So what I did was: okay, so we need the digest, right? That's what we were talking about, and the digest must be calculated based on the compressed layers, which is the expected one to be used when we push the manifest.
G
Okay, I saw that in the previous TOML file, and then what we discussed yesterday was: okay, Javier pointed out that, for example, if someone uses a different library than the GGCR library, or uses some other things, maybe the calculation could be different, right? So we said: okay, we need to specify that this information is, if you use our tool, you use this library and everything. That's what we are trying to suggest, right?
G
So actually I tried to put these things in, right: we are using this library to create this image, in this format, with this compression algorithm, and everything else that can be useful. So, more or less, this is the draft idea, and I put it in a different section because, I don't know, maybe we export that thing in a different format and it's not OCI; then we can add something else there.
F
So I'm trying to understand those last four fields. If the goal is, like, to be able to reproduce the digest in the first field, I don't think it's enough information, because it's not, you know, it's not just the compression algorithm that's used. It's, like, the compression level, and even specific, you know, different implementations of gzip can... you know, it's like, okay, there's not a very strong guarantee unless you're using the same code, right, and you have the library, you know, you have...
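The point about compression settings can be made concrete: a registry layer digest is the SHA-256 of the compressed bytes, so the same content compressed at different gzip levels produces different digests. A small stdlib Python illustration (the variable names are illustrative):

```python
import gzip
import hashlib

# A layer's registry digest is the sha256 of its *compressed* bytes,
# so it depends on the exact compressor settings, not just "gzip".
layer = b"layer contents " * 10_000

# Same uncompressed content, two different compression levels
# (mtime=0 keeps the gzip header itself deterministic).
fast = gzip.compress(layer, compresslevel=1, mtime=0)
best = gzip.compress(layer, compresslevel=9, mtime=0)

digest_fast = "sha256:" + hashlib.sha256(fast).hexdigest()
digest_best = "sha256:" + hashlib.sha256(best).hexdigest()

# Both decompress to identical content, yet the layer digests differ.
assert gzip.decompress(fast) == gzip.decompress(best) == layer
assert digest_fast != digest_best
```

Different gzip implementations (or the same one at a different level) will therefore not reproduce the digest recorded by the exporter, which is the concern being raised here.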
F
G
The goal for this information, if I understood correctly... yeah, if I understood it correctly, was: okay, so we are trying to use this one when you are using the daemon, right? So we create this new image in this format on the disk, so you can use the other tool to complete the push of that thing and have everything you wanted there.
G
G
F
So when you do a build and you're targeting the registry, or sorry, when you do a build and you're targeting a daemon, right: why not just store all that information, like, you know, everything you need, including all the bits, right, outside of the daemon as well, so that you don't have to reach back into the daemon and pull it out and then rerun these operations on it? Like, why not just cache everything outside, and then, when you push, you're just uploading stuff from the cache?
A
I think the original idea was, like, we can do two things. So we can put the layers in compressed format on disk, or we can leave them uncompressed, just sort of what the lifecycle already does, and reuse it for the launch cache. So we were trying to satisfy two things. One was, like: the lifecycle already sort of, kind of, puts the output in OCI-ish format for the launch cache, but it's uncompressed, it's not compressed. Could we reuse those bits for efficiency?
A
The inputs of it... sorry, the things that it sort of took as input matched what the lifecycle was expecting to publish out, if it would have published it out to the registry. Like, the goal, at least from when I was imagining it, was not reproducibility but just correctness: like, you just need the digest and the manifest to make sure that what you're pushing is the exact same thing as what the lifecycle expected. The others...
G
A
G
A
You will not have the same, and if it's not the same, you just fail. But if you want to have a compliant publish (and it's unlikely that there are multiple different versions of publish), you need to be somewhat in lockstep with what the lifecycle looks like.
G
G
Otherwise, it's up to you. But that's what, more or less, we discussed yesterday, right? Why do we need to put this digest and stuff for something that didn't happen yet? It's like predicting the future, right: if you push this thing, you are expecting to have this digest.
G
A
E
Yeah, when I tried to reproduce it: it's not really GGCR, it's Go, right? It has to be Go, in the gzip part, but yeah, like, there was no other way I could get the same digest using various libraries or languages.
F
So I want to back up a little bit from how we would reproduce the digest. Why are we trying to reproduce the digest? Like, this implies that we're going to compress every layer twice, right? Instead of doing that, like, why are we trying to generate a digest when you're publishing into the daemon? If we need to store it uncompressed on disk, why don't we just store it uncompressed on disk, put it in the daemon, there you have your image, and then, you know...
A
F
I would either do that, or, if you're going to compress all the layers initially anyway, you might as well just store them, even if you want to keep the uncompressed versions around for caching, right? Like, if you're going to spend all that CPU on compressing what could be, in the case of some images, you know, 500 megabytes, you might as well just put that on disk, yeah, and so you don't have to deal with trying to reproduce all those bits later, right?
G
G
You can use it if you want, and not mix it with trying to keep... and the other one is trying to reuse the cache, to implement the cache to handle this thing, and then we will have the problem, I believe. My idea, I believe the more practical idea, is just that, right? Just save the final image that you're going to push to the registry on disk, and that's it; you can do whatever you want with it, right?
F
I think, if we want to... I'm less convinced that it's important that we store both the uncompressed and the compressed version on disk for performance, because gzip decompression is really relatively fast compared to compression, and less CPU-intensive.
A
B
Yeah, boring people to death with our details, yeah. I think we should try to figure out how to make this meeting about things that matter more to a wider array of people, and not these very real and important, but very boring, problems.
A
We have two more items on the agenda. Do we want to say we can't...?
A
D
Yep, sure, so that's my item. And so, I don't know if everyone here is aware, but, like, the init system, like PID 1 on Linux, has some specific responsibilities, mainly about reaping zombies, so processes whose parent processes are dead and that weren't reaped before. And this is important because on a Linux system you have a limited number of PIDs available. If you don't have any PIDs anymore, you can't create new processes, so that means you can't open a shell to restart your machine.
D
D
However, this only works when you have the shared PID namespace between all the containers in a pod, which in some cases is not done, for security concerns. Otherwise, basically, you need your PID 1 process in your container to actually, like, do it for you. Most applications obviously don't do this, because they are not expecting to be run as an init system, and thus, like, we are in a case where, on some platforms, we need to ensure that we have a valid init program at the start.
D
It seems to me, after a few discussions with some people, that the best place to actually have this PID 1, like, reaper would be in the lifecycle launcher: instead of directly exec'ing the entrypoint that was defined in the buildpack process, we would, like, install the launcher itself as PID 1 and then, like, fork and exec the entrypoint itself. And, before creating an RFC...
D
...I wanted to get a bit of the room temperature, like, whether that seems like a valid plan, whether there are any concerns or things that would make this not feasible.
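As a rough sketch of what the launcher change would mean, here is a minimal fork-and-exec reaper loop in stdlib Python (POSIX only; `run_as_init` is a hypothetical name, and a real PID 1 would also need to forward signals to the entrypoint, which this omits):

```python
import os

def run_as_init(argv):
    """Fork/exec the entrypoint, then keep reaping children until none
    remain, the way a PID-1-style init would. Returns the entrypoint's
    exit code. (Orphans only re-parent to this process when it really
    runs as PID 1 inside the container.)"""
    child = os.fork()
    if child == 0:
        try:
            os.execvp(argv[0], argv)  # child: become the entrypoint
        finally:
            os._exit(127)             # exec failed
    exit_code = None
    while True:
        try:
            pid, status = os.wait()   # reaps any terminated child
        except ChildProcessError:
            break                     # nothing left to reap
        if pid == child:
            exit_code = os.waitstatus_to_exitcode(status)
    return exit_code

# e.g. run_as_init(["sh", "-c", "exit 7"]) returns 7
```

The point of the proposal is that this loop lives in the launcher, so every buildpack-built image gets zombie reaping regardless of platform.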
D
B
I think my first reaction is, like, slight, vague hesitation, because we've done this exec'ing, rather than fork-and-exec'ing and managing, on purpose, so that the container looks more like what people expect, right? Like, a lot of times...
B
A
One part of it was to make sure that each and every build pipeline that you have, that you want compliant with this, either introduces its own subsystem that handles this for each process that it creates, or we inject it in the launcher, because there's no way to create an additional buildpack that can take previous processes defined by other buildpacks and modify them with some logic to inject other things. I think the common use case of taking other processes and modifying them has come up a bunch of times in our past conversations.
A
The goal here was actually: regardless of which buildpack we were using, we wanted some assurance that this would never happen, like the zombie processes being just there forever, nothing cleaning them up, regardless of the platform, regardless of where it's used. The container images produced by us should just be pristine and follow best practices.
B
That makes sense. Let me, let me think about this more, because I feel like, in general, I'm loath to take on more responsibilities in the lifecycle if there's another way to do it, right?
B
I'm more warm on process modification, but I know we've talked about it before and I did not come out of that with, like, a slam-dunk way to do it, so yeah, I guess I'm not, like, totally writing it off either. I'd prefer a great solution to process modification, if we add one, but I'm willing to consider it. I need to think more about the init process launcher.