From YouTube: CNB Weekly Working Group 2021-10-28
B: All right, next thing on the list is introductions, new faces. I don't see anybody here who hasn't been here before, so we'll move along to release planning and updates.
A: For pack, we're trying to get an RC out this week. There's a bit of package restructuring happening for the library. We were informed of a bug where, right now, the client isn't really usable for some of the options, and so we're trying to get that addressed before we do this release. So at some point this week we're hoping to have that RC.
B: Yeah, for the distribution GitHub Actions, there must have been a change in the GitHub API that we use to automatically approve new namespaces.
B: So it was failing every time for a couple of months. We just released a new version with a majorly updated go-github library, so just FYI.
C: The implementation team is actively working on the SBOM track of work, which should be the majority of the next platform and buildpack API. So we hope to have a new minor release in the next week or two.
A: So if you click on that link, Jamail is working on some changes to basically define how a lifecycle is distributed. I think in our initial take it was about the OCI image itself, something that we already produce but never really quite specced out or documented. A lot of platforms are kind of relying on it, though, and so we're trying to put that into the distribution spec, with the end goal of defining what an overall builder looks like.
A: So a builder has the buildpacks at a certain place and locations, with certain labels, and then the lifecycle in a certain place with some of the other labels. And so our idea is to create this composition of all these components whose distribution is already defined (buildpacks, lifecycle, and build image) and then say: okay, you put all these together and that's called a builder, or something like that, and it might have some additional data.
A: So we kind of got into that and we started working through some of this. I think the latest thing that came up was about the lifecycle descriptor, and how that opened up a conversation around how we distribute the lifecycle outside of images, and whether that's something we also want to spec out. Because, again, pack, for instance, does consume the lifecycle as a tar, and so then it expects the lifecycle descriptor there, and so forth.
A: So I think that was Natalie's point. Having slept on that for a little bit, I did want to bring up or talk about other aspects of it. The primary suggestion that I have depends on this all going into a distribution spec.
A: This distribution spec is going to be versioned, and if we're distributing the lifecycle in a tar, how do we determine that distribution version? What I'd like to have is something that's consistent across all the different components, so that you know exactly where to find that sort of information for the lifecycle, and where you can find it for a buildpack or buildpackages.
A: And so one of my initial thoughts was: okay, buildpacks are distributed in buildpackages, which, in a tar format, would be OCI layouts. And that kind of opens up my question: would we be opposed to releasing the lifecycle as an OCI layout tar?
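To make the idea concrete: an OCI image layout tar, per the OCI image-layout spec, is just an archive with an `oci-layout` marker file, an `index.json` pointing at one or more manifests, and content-addressed blobs under `blobs/sha256/`. The sketch below builds a minimal one with the standard library; the file contents and inputs are made up for illustration and are not anything the project has specified.

```python
import hashlib
import io
import json
import os
import tarfile
import tempfile

def oci_layout_tar(path, config_bytes, layer_blobs):
    """Write a minimal OCI image layout tar: an `oci-layout` marker,
    an `index.json` pointing at one manifest, and content-addressed blobs."""
    blobs = {}  # digest -> bytes

    def add_blob(data):
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        blobs[digest] = data
        return digest, len(data)

    cfg_digest, cfg_size = add_blob(config_bytes)
    layers = []
    for blob in layer_blobs:
        d, n = add_blob(blob)
        layers.append({"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
                       "digest": d, "size": n})

    manifest = json.dumps({
        "schemaVersion": 2,
        "config": {"mediaType": "application/vnd.oci.image.config.v1+json",
                   "digest": cfg_digest, "size": cfg_size},
        "layers": layers,
    }).encode()
    man_digest, man_size = add_blob(manifest)

    index = json.dumps({
        "schemaVersion": 2,
        "manifests": [{"mediaType": "application/vnd.oci.image.manifest.v1+json",
                       "digest": man_digest, "size": man_size}],
    }).encode()

    with tarfile.open(path, "w") as tar:
        def add_file(name, data):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        add_file("oci-layout", json.dumps({"imageLayoutVersion": "1.0.0"}).encode())
        add_file("index.json", index)
        for digest, data in blobs.items():
            add_file("blobs/sha256/" + digest.split(":")[1], data)
    return man_digest

# Demo with fake content standing in for lifecycle binaries.
path = os.path.join(tempfile.mkdtemp(), "lifecycle-layout.tar")
digest = oci_layout_tar(path, b'{"os": "linux"}', [b"pretend lifecycle binaries"])
with tarfile.open(path) as t:
    entries = t.getnames()
```

The appeal of this shape, as raised above, is that the same tar can be pushed to a registry as a normal image or consumed directly from disk, so one format covers both distribution paths.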
C: Well, so my initial thought is: oh, that's more work, for what benefit? I guess, if what prompted this question is "I want to do pack create-builder with my homegrown lifecycle"...
C: ...if you know the URI to the lifecycle in the same way, then you don't even have to worry about the tarball. That's just some pack-specific oddity that we don't care about, from the purpose of enabling others to bring their own lifecycle.
C: Yeah, because, to my knowledge, the only consumer of that tar is pack when creating a builder, and we've had other requests for "I want to be able to just give you a lifecycle image; why do I have to go get this tar from GitHub?" You know, just cut out that step.
A: Yeah, so my idea came more from a speculative request. I think I was on the side of just doing the OCI image, but it does kind of put us in a strange predicament, maybe, with this lifecycle descriptor that still then wouldn't be defined anywhere in the spec.
D: Someone said this point earlier, that it's more work. If you want to use a developer version of the lifecycle, which should be the use case, it's more work to make it an image and host it somewhere and use it. It's not a big deal as far as work is concerned; it's just that, if we're talking about the use case, it doesn't help. It doesn't give us much, is what I think we said in a different conversation. Sorry.
C: Yeah, I think there are arguments to be made for and against both sides; there's no clear winner. I mean, I'm not opposed to this, you know, like a .cnb file for the lifecycle. It's just more work. For our pipelines, we'd have to figure out how to do that in the context of GitHub Actions. So it's like, I'm lazy; that's the only real reaction I have against it.
A: Your opposition is just not wanting to do the work, yeah.
C: Because the thing is, with pack we got asks for that, for "can I give my lifecycle as a URI to an image", for other reasons, so it would make some people happy. And we don't really know who's bringing their own lifecycle, you know, and so when that person emerges from the theoretical, then we can help them with "okay, I don't want to have to make an image."
A: I mean, I guess, to me: I look back and think we're a spec project at the core, at least that's what the CNCF says we are, and it seems a lot more important to spec out how we distribute components, so that you could piece them together from wherever they may come. Because, again, we may not know, but GitLab, Microsoft, whoever, they might actually be trying to do their own lifecycle. Actually, I think I heard that maybe Red Hat at some point was trying to do their own thing.
A: But anyway, that's the thing. I get that, from a tooling and implementation perspective, it might be a higher level of effort, but I do wonder what our primary objective should be, and whether that's the spec. Because I feel like for a lot of things, like the lifecycle descriptor and stuff like that, we had an RFC, but then Terence went to go look for it and it's not defined anywhere in the spec.
A: And so it doesn't necessarily give us the sense that we're pushing all the way through on some of the intentions.
B: I don't think it's incongruous with the spec or anything if we still want to continue to distribute lifecycle binaries, and even have the pack CLI add lifecycle binaries from a tarball on disk, and things like that. You could view it as: we're distributing them so you can follow the spec and create a lifecycle image that looks like what the spec says. So I don't see a conflict there in dropping in and saying "this is the way you're supposed to distribute a lifecycle."
B: So I wonder if, for distributing a couple of binaries, we should think about what the format should look like. But the other thing is, I don't think we should use .cnb as an extension, because we're using that for buildpacks, and so I think the expectation is that such a file looks like a buildpack does. Unless we want to think about that differently; but so far I think that's how we've talked about the file extension.
A: Yeah, I think, for simplicity's sake on the distribution spec, the way I would see it is that anything that can be distributed as an OCI image should also be able to be distributed as an OCI layout tar. If we keep it that simple, it makes it really easy to digest, and there's already tooling that helps you create these sorts of packages, so we don't have to do our own homegrown thing. So, Natalie, to your point about implementing this...
A: It would be using, what is it, crane or something? I forget which one of the go-containerregistry tools it is that basically lets us create OCI layouts.
A: Yeah, I'm a huge proponent of .tar. I really don't see the benefit of the file extensions, but that seems like bikeshedding again.
C: I'm good with... I mean, yeah, we can do the OCI layout distribution for the lifecycle, but I think, by the same logic, I also want pack to accept both formats, because then it's translatable across the board.
A: When you say both formats, which are you specifically referring to?
D: Not the runtime bundle, not that one. I mean, because if I create the image according to the OCI format, then I have, you know, the index, the manifest and whatever, but I can't run that; I have to create the runtime bundle to actually run the thing. So what will...
D: What will pack be accepting? Because I can have the OCI image, then I create the bundle, which I can then put into a tar and push to a registry, and it's going to work, in theory; it's a container image that I can run. So which one are we expecting pack to handle: the one that I can execute, or the other one, the actual OCI image?
A: So, Steven, you brought up the term OCI artifact. Could you elaborate on that a little bit, just for clarity on everybody's part?
B: Yeah, so I think maybe what people are imagining here is something like how a buildpack is distributed right now, which is, in the buildpackage format, a container image that has metadata on it and looks like a runnable container image (I don't think we changed the media type, unless I missed something), with filesystem layers for where the buildpacks live, but without any of the bottom layers. If you tried to run it, it would be garbage.
B: You know, you wouldn't be able to start it. OCI artifacts are a little different; this is what cosign is doing for signatures and SBOMs. Instead of using the media type for the blobs that means "filesystem layer", you use a media type like the one for a CycloneDX SBOM.
B: It's application/cyclonedx or similar, and so then Dr. David won't try to run it. But also, you then wouldn't usually upload the thing as a tarball with a filesystem path containing the artifact; you'd upload the artifact directly, usually uncompressed.
B: At least, that's what they're doing for the signature and the SBOM, and so it's a little easier to get the binaries directly from the registry; the registry just becomes a blob store with a Merkle tree on top, and sometimes that's helpful. In our case, I wonder if it's less helpful, though, because the reason we didn't do buildpacks like this is so we could just manipulate the references to the tarballs and end up with a new image that's ready to go.
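The image-versus-artifact distinction being described comes down to the media type on the manifest's blobs: the manifest shape is the same, but a runtime treats filesystem-layer blobs as something it can unpack and run, while an artifact blob (cosign-style) carries a content-specific type such as CycloneDX. A minimal sketch, with placeholder digests:

```python
# Real media types from the OCI image spec and the CycloneDX registration.
FS_LAYER = "application/vnd.oci.image.layer.v1.tar+gzip"
SBOM_BLOB = "application/vnd.cyclonedx+json"

def manifest(blob_media_type, digest, size):
    """Same manifest shape either way; the blob media type is what tells a
    consumer (a runtime, a scanner) how to treat the content."""
    return {
        "schemaVersion": 2,
        "layers": [{"mediaType": blob_media_type, "digest": digest, "size": size}],
    }

# Image-style manifest: blobs are filesystem layers a runtime can unpack.
image_manifest = manifest(FS_LAYER, "sha256:" + "a" * 64, 1234)
# Artifact-style manifest: the blob is an SBOM document, so nothing tries
# to run it, and it can be fetched straight out of the registry blob store.
sbom_manifest = manifest(SBOM_BLOB, "sha256:" + "b" * 64, 567)
```

This mirrors the trade-off discussed: artifact blobs are easy to fetch directly, while layer blobs can be re-referenced from a new manifest to compose a runnable image.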
B: I just think it should be a conscious decision, one way or the other. The main thing I like about the proposal, which I meant to say earlier, is that it means you can get all the bits that pack is going to use to do a build from the same host. You don't have to go out to GitHub to get a lifecycle and then somewhere else to get buildpacks and whatever. It really makes it easy, regardless of what workflow you're using, to pull everything from Docker Hub.
A: I think even cosign doesn't use artifacts by default; I think you have to enable it, for this particular reason. And I remember that one of their biggest pain points was trying to push registries to allow artifacts.
A: I think yesterday Sam brought it up, when we talked about artifacts. Sam might have more insight here. Okay.
C: Can I raise a quick point about the image, just to go back to what you were saying? Anthony asked the question: is this going to be based on a scratch image? From pack's perspective, wanting to create a builder, it doesn't care what the base is; I mean, it could be a scratch image. But as an implementation contributor, I'm thinking: okay, well, we ship the lifecycle image; is it actually intended to be run?
A: I think we do care, though, because the way I'm envisioning it, everything's layered on top of each other: there would be a build image, and then you would throw the lifecycle set of layers on top of it, and theoretically, if you have OS layers in there, they would essentially be overriding the build OS layers.
A: We can do that, but then it gets a little more complicated, because then we need to know which ones are the lifecycle layers and which ones are the base layers, similar to what we do for rebase: we need to identify the layers. So we could do that, but that means we have to figure out a mechanism for it, most likely a label.
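The label mechanism being floated here can be sketched abstractly: record which layer diff IDs came from the lifecycle, so tooling can separate them from the base layers and do a rebase-style swap underneath. The label name and data shapes below are made up for illustration; nothing like this is in the spec.

```python
# Hypothetical label: records which diff IDs the lifecycle contributed,
# so they can be told apart from the build-image (base) layers underneath.
LIFECYCLE_LAYERS_LABEL = "io.example.lifecycle.layers"  # made-up name

def split_layers(image):
    """Partition an image's layer diff IDs into (base, lifecycle) using the label."""
    lifecycle = set(image["labels"][LIFECYCLE_LAYERS_LABEL].split(","))
    base = [d for d in image["diff_ids"] if d not in lifecycle]
    top = [d for d in image["diff_ids"] if d in lifecycle]
    return base, top

def rebase(image, new_base_diff_ids):
    """Rebase-style swap: replace the base layers, keep lifecycle layers on top."""
    _, top = split_layers(image)
    return {**image, "diff_ids": list(new_base_diff_ids) + top}

# Two OS layers from the build image, one lifecycle layer stacked on top.
img = {"diff_ids": ["sha256:os1", "sha256:os2", "sha256:lc1"],
       "labels": {LIFECYCLE_LAYERS_LABEL: "sha256:lc1"}}
```

This is the same bookkeeping pattern rebase already relies on for run images: the label makes the layer provenance explicit so the swap can be done safely.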
D: Let's talk about the end user here, the person who's doing this. I'm only asking you to be sympathetic to how I think a normal person is going to build a lifecycle image, which is not from scratch. They're going to need some instruction or documentation or whatever that says: okay, the lifecycle image has to look like this to be used in this workflow, or you have to run make packages or something. That's the only reason I asked the question.
A
It
sounds
like
that.
We're
gonna
need
some
tooling
around
this
now,
so
definitely
things
to
think
about,
but
I
wanna
be
cognizant
to
the
time
which
turns
through
out
there.
B: Yeah, do we want to come back to this async, on what we want the image to look like, at least on the issue? I'm happy to punt on the other spec things. I don't want this whole meeting to be about the first topic.
A: Yeah, so I was looking at it a little bit more in depth; let me see what my notes say about this.
A: Okay, so my comment... let me see if I can share this real quick.
A: One case in point is the builder; another one is, I think, there have been talks about putting in the image name as well. I would consider this pre-build: before we execute the build, we need some information from the project descriptor, and so pack, for instance, in this case would use this quote-unquote binary as a library. We would have to be able to consume it as a library to do the transformation, to then be able to read from it in just one format.
A
The
one
downside-
and
I
think
it's
very
similar
to
the
rebase-
is
that
there's
a
sort
of
expectation
that,
because
it's
part
of
the
life
cycle
so
like,
if
you
were
to
say
pack,
build
right,
dash,
dash
lifecycle
image,
you
would
expect
potentially
that
it
would
be
using
the
logic
inside
of
there
to
also
do
the
project
descriptor
conversion,
because
it's
all
part
of
the
life
cycle.
A
According
to
this,
you
know,
rfc,
it's
a
a
component,
that's
a
part
of
the
life
cycle
or
shift
with
the
life
cycle,
but
in
reality,
because
we're
going
to
be
using
it
as
a
library,
we
wouldn't
actually
be
doing
that
at
all
right,
we'd
be
using
the
most
likely
an
older
version
of
the
converter
in
that
particular
case,
and
I
think
that
could
be
a
sort
of
like
negative
experience
where
we're
not
applying
the
logic
in
theory.
The
way
that
it
should
be
expected.
B: I think, in theory, the schema version for the descriptor is really what should determine differences in behavior, not the version of the lifecycle. Well, I was going to say it's a bug if they're different, but there inevitably will be differences, and so, to your point, there still could be differences even if we don't intend them.
A
Yeah
so
I
mean
I
think
we
could
chuck
it
up
as
a
minor.
You
know
inconvenience
or
concern.
I
think
I
had
another
comment
in
that
regard.
C: I could be wrong, but it doesn't seem like people would care. I don't know: as an end user, do I really care which version of the lifecycle is being used to translate my project.toml from one schema to the other? That seems like something that somebody developing on the lifecycle, or developing on pack, would care about.
A: I think my concern is coming through as complexity, where (I know we had this discussion in some other scenario or case) it's starting to become really complex to determine where things are coming from. Like, we had an issue about CA certs that came up, and it's like: okay, the CA certs are being handled by Paketo and they're doing all this work, but the CA certs don't work in the analyzer phase, because they haven't been injected at that point. So you're trying to decipher, okay...
A
What
what's
going
on
through
this
sort
of
facade
of
you
know
of
a
system,
and
I
think
in
this
similar
vein
right
for
rebase.
It's
like!
Oh
we're,
actually
not
using
the
lifecycle
image.
Even
though
you
provided
a
lifecycle
image,
we're
actually
using
something
built
into
pack
right,
and
so
then
you
ask
okay.
Is
it
pack
that's
doing
it?
Is
it
the
life
cycle
and
from
even
a
maintainer's
perspective,
to
keep
everything
in
line?
It
becomes
very
hard.
A
My
ultimate
goal
right,
going
back
to
this
kind
of
notion
of
of
being
spec
and
and
providing
just
simply
tooling,
it's
not
so
much
about
the
the
user
and
whether
they
care
or
not
it's
about.
If
I'm
trying
to
work
within
this
ecosystem
and
provide
a
life
cycle
right,
then
I
expect
that
I
can
replace
the
life
cycle
in
in
my
entire
process
within
the
use
of
pack
right.
A
So
pac
shouldn't
have
this
like
preconceived
notion
of
a
very
specific
life
cycle
that
I'm
using,
which
is
the
one
that
the
cmb
project
maintains
I
feel
like.
I
should
be
able
to
replace
all
of
life
cycles.
No,
you
know
notion
with
the
one
that
I'm
building
internally
right
so
again,
kind
of
modular
pieces
that
all
we're
doing
is
figuring
out
how
to
piece
them
together,
and
this
is
one
of
the
points
where
I
feel
like
we're
kind
of
going
against
that,
because
we're
really
much
embedding
them
for
the
sake
of
a
better
user
experience.
C
So
but
then
so
the
other
the
only
way
around.
That,
then,
is
if
you
it
whoa
two
points,
but
one.
I
don't
want
to
take
this
like
down
a
wrong
path,
but
I
think
it's
worth
noting
that
that
confusion
already
sort
of
exists
today
in
that,
if
I
do
pack
build
dash
dash
life
cycle
image
that
life
cycle
image
is
only
used
for
the
three
phases
right
it
it.
C
It
doesn't
replace
the
life
cycle,
that's
used
for
pro
detect
and
build
right,
and
that
could
I
mean
we
know
that,
but
the
average
user
might,
especially,
if
we're
doing
this
thing,
where
you
know
you
can
create
a
builder
and
provide
a
life
cycle
image.
You
know
it
just
creates
more
potential
for
confusion
right
so,
but
I
like
I'm,
just
pointing
that
out
as
like
a
separate
problem
that
maybe
we
don't
need
to
discuss
immediately,
but
the
other
thing
that
you
mentioned
was
okay.
C
I
want
to
just
replace
the
life
cycle
for
everything
right
and
if
we
were
doing
this
project,
descriptor
parser.
The
only
way
to
do
that
is
to
introduce
a
new
phase
or
to
call
it
as
part
of
like
right.
You
have
to
have
a
new,
like
pack,
will
spin
up
a
container
just
to
run
this
one
thing
and
that's
a
poor
user
experience
because
it'll
be
slow,
but
that's
the
only
way
that
I
can
see
that
we
can
solve
it.
A
Yeah-
and
I
mean
I
feel
like
we've
contemplated
that
for
rebase
as
well
for
the
same
reason
right,
it's
like
there's
no
way
for
you
to
replace
the
rebase
functionality
within
pack
right
now,
and
so
this
has
come
up.
Obviously
we
haven't
taken
action,
item
or
action
on
it,
but
I
think
again
it's
just
I'm
not
saying
that
we
should
solve
it,
but
it's
kind
of
again
putting
that
out
there
that
we're
not
very
decomposed,
as
maybe
we'd
want
to
be.
A
Let's
see
the
other
thing
that
I
think
is
is
pretty
important
here
and
I
don't
think
the
rsc
really
goes
into
too
much
detail
about
maybe
some
of
the
the
handling
of
this,
but
it
has
to
do
with
the
platform
api
right.
So
in
this
particular
case,
we're
now
associating
the
project
descriptor
to
a
platform.
Api
are.
A: So are we saying in the spec that now, in order to be compliant, this thing needs to exist in a lifecycle? Let's think about lifecycle distribution again: for every lifecycle distribution, do we have to have this converter in there?
C: I don't have a super strong opinion, but I did have a more basic question about this whole thing, which is: if you look there, somewhere, if you search for the question mark... I was trying to do the mapping from one schema to the other. Does this make sense? The 0.2 project.toml has a concept of an inline buildpack; the 0.1 project.toml did not. So I just threw that inline buildpack into the 0.1, you know, the translated...
C
You
know
01,
but
like
really,
if
my
platform
so
say
my
platform
supports
the
01
schema
and
I'm
an
app
developer,
I
provide
the
o2
schema
and
I'm
using
an
inline
build
pack
sure
I
could
translate
the
schema
over
to
o1,
but
my
platform
is
still
not
going
to
know
what
to
do
with
that.
Inline
buildback.
So.
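The translation problem being raised can be sketched as follows. Assume a downgrade step from a 0.2-style descriptor to 0.1: fields the older schema has no concept of, like inline buildpacks, cannot be carried over meaningfully, so a translator can at best surface them rather than silently pass them through. The table and field names here are approximations for illustration, not the exact project.toml schema.

```python
def downgrade_project_descriptor(v02):
    """Sketch of translating a 0.2-style project descriptor down to 0.1.
    Inline buildpacks (defined by a script rather than an id/URI) have no
    0.1 equivalent, so they are returned separately instead of being
    silently dropped or blindly copied across."""
    group = v02.get("io.buildpacks", {}).get("group", [])
    translatable, inline = [], []
    for bp in group:
        (inline if "script" in bp else translatable).append(bp)
    v01 = {"project": v02.get("project", {}),
           "build": {"buildpacks": translatable}}
    return v01, inline

# Hypothetical 0.2-style input with one ordinary and one inline buildpack.
v02 = {
    "project": {"name": "demo"},
    "io.buildpacks": {"group": [
        {"id": "example/node", "version": "1.0"},
        {"script": {"api": "0.6", "shell": "/bin/sh"}},  # inline buildpack
    ]},
}
v01, skipped = downgrade_project_descriptor(v02)
```

As the discussion notes, even a faithful field-level translation doesn't help if the platform behind the older schema has no behavior for the feature, which is why a platform API bump may be the honest answer.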
A: ...world, in this world.
B
So,
if,
like
you
know,
there's
the
inline
case,
but
potentially
you
add
feature
x
right
like
it
is
not
part
of
that
translation
thing,
so
it
would
require
a
platform
api
bump
to
support
it
right.
A
Yes,
so
then
it
may
not
be
worth
translating
it
over
is
what
maybe
I'm
I'm
hearing
is
that
right.
A: Which one's the final one, Natalie, the one that we would get? Is it based off of these?
A: Yeah, I'm a little bit confused on that one, and that's why I only looked at this for reference.
B: I guess, just for a time check, we have about 13 minutes left, and I want to make sure Anthony gets a chance to talk about his RFC, which we promised him. It seems like you and Amy are the people interested in this; are you both available during office hours?
D: Okay, okay. I did just want to introduce this RFC once again, for anybody who hasn't seen it already. It's supposed to be basically an addition to the previous SBOM RFC that got accepted, but I guess that original RFC only sort of specified the buildpack interface.
D: This is basically trying to say, for what we know as stacks today, or really just the run image: hey, put this label on it, put this label on it with the digest of, you know, the SBOM describing the packages installed on the container image. That's basically it at a high level. I know some comments were given just this morning, and I haven't had a chance to look at them, but I did want to field some questions.
B: So is there a way we could move that SBOM for the stack either into, if we're going with this idea that it should be close to the artifact it describes, maybe cnb/stack or something like that as a location, or move it close to the other SBOMs? But that feels a little weird, because I think the layers directory is empty by default.
D: Yeah, that's a good idea. The only thing I was concerned about was putting it basically in... you know, that was my first thought, yeah, putting it in here, but I know at least pack sort of clobbers everything, at least at the beginning of the whole process. So I think it's slightly dangerous to put it in here, at the risk of it totally getting wiped out; but yeah, cnb/sbom or something makes sense.
B
I'm
just
thinking
about
what
the
final
image
looks
like
and
if
it
like,
I
think
in
the
context
of
it's
a
stack
base,
image
that
has
s
bomb
at
the
root
and
that's
the
s
bomb.
That
makes
sense.
But
then,
when
you
build
the
image
it's,
if
it's
going
to
be
at
the
same
place,
then
I
think
it
looks
like
it's
saying
it's
the
whole.
It's
the
s
bomb
of
the
whole
image,
but
the
image
has
changed,
and
so
maybe
maybe
just
like
moving
into
that
cmb
directory
would
help
that's
my
only
feedback.
D: Yes, it does talk about merging: if it's CycloneDX, we'll merge it.
D: That's a good question, yeah. I don't think I thought that one through ahead of time. I think I just said "merge it", but we can decide right now.
A: Could someone educate me a little bit on what merging entails? The way I envisioned it, it's taking all these SBOMs and just literally putting them together at a high level, but when people describe merging as not supported, I get the sense that it's maybe slightly more complex, where there's deduplication involved or something like that.
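One plausible reading of the difference: plain concatenation versus deduplicating components across documents. A toy sketch of the deduplicating variant for CycloneDX-style documents, keying on the component purl; real merge semantics (which the group defers to the CycloneDX maintainers here) may be subtler, for example reconciling conflicting metadata for the same purl.

```python
def merge_sboms(sboms):
    """Naive merge of CycloneDX-style documents: concatenate the component
    lists and deduplicate by purl, keeping the first occurrence."""
    seen, merged = set(), []
    for doc in sboms:
        for comp in doc.get("components", []):
            key = comp.get("purl")
            if key in seen:
                continue
            seen.add(key)
            merged.append(comp)
    return {"bomFormat": "CycloneDX", "components": merged}

# Two documents sharing one component: an OS package appears in both.
doc_a = {"components": [{"purl": "pkg:deb/ubuntu/libc6"},
                        {"purl": "pkg:npm/left-pad"}]}
doc_b = {"components": [{"purl": "pkg:deb/ubuntu/libc6"},
                        {"purl": "pkg:npm/express"}]}
merged = merge_sboms([doc_a, doc_b])
```

Even this toy version shows why merging is lossy: once documents are combined, you can no longer tell which original layer contributed a given component, which is exactly the rebase concern raised later.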
D: I don't think anybody on this call has quite intimate knowledge of that. We've sort of left it in their hands, because we wanted them to suss out any of these nuances that we probably don't know about.
A: I haven't looked at CycloneDX, but does it have sort of an idea of categories? Essentially, being able to say this is an OS-level dependency versus an application-level dependency, that sort of stuff.
B: I think it differentiates... it has different types. I don't think it considers an OS-level thing different from an application-level thing; it just considers an Ubuntu package different from a node module or something like that, because they have purls in them that reference the...
A: The question about rebasing is maybe what comes to mind: how would that happen, especially if we only then have a merged version?
C: I just want to bring up a quick point, because, unless I missed it, there was sort of an oversight in the earlier RFC for buildpack-provided SBOMs. It says that the diff ID for the layer containing the merged SBOMs will be present in a label, but it doesn't say what that label actually is.
C: So, you know, it should be done with that in mind. But, that being said, to your point: if we just throw them all together and we only have one merged one, it's going to make rebasing impossible. So we need to keep... my suggestion is we keep everything separate, a separate layer for each.
C: No, no: one layer for the buildpack-provided SBOMs, one layer for the stack-provided SBOMs, and then, if and when the lifecycle does merge them, it does a third layer with all the merged stuff. And that way, when you rebase, you bring in the new layer for the stack and you replace the old layer with the merged stuff.
B: The way I interpret what Andy was saying, or, sorry, what Anthony was saying, is: leave all the buildpack SBOMs in those layers directories where they are; those never get deleted, and so you have all the SBOMs for every single layer, all separate, permanently in the image. Then leave the stack one where it is, you know, by its label reference; leave all that the same. And then, on rebase, you just make sure that you replace both the stack one and take...
B
The
new
stack
one
of
the
build
pack
ones
and
combine
them
together
and
add
another
merged
label
underneath,
and
so
now
you
have
do
have
two
copies
of
the
s-bomb
one.
That's
completely
broken
apart
and
one
that's
all
joined
together.
B
I
think,
and
then
you
have
three
labels
right:
one
label:
that's
built,
s-bomb
layer,
one
label
at
the
final
image,
one
layer,
that's
the
stack
one
and
one
label
that
points
to
the
merged.
One
is
that.
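The scheme just described (separate layers for buildpack and stack SBOMs, each referenced by a label, with a merged layer regenerated on rebase) can be modeled abstractly. The label names below are hypothetical; the RFC under discussion had not fixed them, which is the oversight raised a moment ago.

```python
# Hypothetical label names; the RFC being discussed hadn't specified these.
BP_LABEL = "io.example.sbom.buildpacks"
STACK_LABEL = "io.example.sbom.stack"
MERGED_LABEL = "io.example.sbom.merged"

def with_sbom_layers(labels, bp_layer, stack_layer):
    """Record the buildpack and stack SBOM layer references, and derive a
    merged reference from them (a stand-in for a real SBOM merge)."""
    labels = dict(labels)
    labels[BP_LABEL] = bp_layer
    labels[STACK_LABEL] = stack_layer
    labels[MERGED_LABEL] = bp_layer + "+" + stack_layer
    return labels

def rebase_sbom_labels(labels, new_stack_layer):
    """On rebase: swap the stack SBOM layer, keep the buildpack one, and
    regenerate the merged reference from the surviving pieces."""
    return with_sbom_layers(labels, labels[BP_LABEL], new_stack_layer)

labels = with_sbom_layers({}, "sha256:bp-layer", "sha256:old-stack")
rebased = rebase_sbom_labels(labels, "sha256:new-stack")
```

Keeping the disparate layers permanently, as proposed, is what makes this recomputation possible: only the merged reference is derived state, so rebase never loses the originals.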
B: I worry a little bit about the content addressability, and the ability to sign things in the future. We may want to be able to sign the original buildpack SBOM; you may want to know that the original buildpack SBOM hasn't changed after a rebase operation, and then you would lose that context from the original buildpackage, because that SBOM is...
B
A
kind
of
a
trail
of
you
know
something
you
might
want
to
keep
a
tight
train
of
custody,
for
I
think
it's
safer
if
we
do
kind
of
append.
Only
in
that
sense,
but
I
don't
know
if
we
I'm
a
little
tempted
to
say
I
don't
know
if
we
need
to
merge
the
s-bombs
together
and
then
write
that
into
a
final
s-bond
that
lives
in
the
image
at
all.
B
I
wonder
if
we
could
put
the
merging
off
until
we
talk
about
cosine
integration,
because
that's
when
we
would
actually
merge
them
and
then
put
them
in
a
separate
image,
and
then
your
image
wouldn't
need
two
copies
of
the
s-bomb.
It
would
just
have
your
app
image
would
have
the
disparate
copies
of
the
s-bomb.
An
analysis
operation
on
the
image
could
do
the
merging
at
analysis
time
right,
so
pac-inspect
image
could
pull
all
the
s-bombs
and
merge
them
together.
B
For
this
rfc
say
this
is
how
the
stack
s
bomb
is
specified,
and
then
we
can
push
that
off
in
the
future,
or
you
could
turn
this
one
into
one
that
talks
about
the
cosine
integration.
It
gives
us
that
I'm
not
super
opposed
to
having
a
merged
one
in
the
amp
image,
pre-merged
either,
but
I
maybe
lean
a
little
bit
towards
doing
the
merge
operation.
At
the
same
time,
we
write
in
the
cosine
format.
A: So, just to circle back: I think merging was definitely out of scope for this RFC and this conversation. I think what Natalie was mentioning is just deriving a sort of format for labels, to be able to identify where these different SBOM references are. And so, if we're good with that, then I think we can punt on the merging and the final SBOM until later.
A: Is that a single reference to all the SBOMs in JSON format, or just a stack SBOM? Just a stack SBOM.
B: We need a separate label for the stack one, but we don't necessarily need a separate label for the buildpack one, because it's already there. We could have one, though; I don't have a strong opinion personally. But whatever... sorry, I just had to think about where everything's coming from. All right, thanks, everybody.