From YouTube: Working Group: 2021-08-12
A
Please remember to sign in on the attendees list. Actually, wait. Maybe one more minute, to see if more folks show up before we get started.
A
I can try to take some time. All right, thanks, Jesse. And do we have any new faces?
A
Yep, pretty sure. All right, we'll move on to release planning and updates.
B
I think we said yesterday that platform 0.7 of the spec is about to be ready to release, or it's going to be released soon, and the next lifecycle will support it, but will not support bill Pakkerson.
C
On the back team side, we had a meeting to discuss the new version of libcnb and sort of the features it should support. So I think we are looking at the existing Go bindings that other organizations maintain, so the Paketo one, the pack one, and the GCP one, and seeing which features are common and should be provided as utilities by the project.
C
That's pretty much it.
A
Seems like no for this week. So before we jump in and move on to the RFCs, just as an overview:
A
We're going to spend 10 minutes talking about supported utility buildpacks and try to time-box that, another 10 minutes talking about the structured BOM format and try to time-box that, and then use the remaining time to talk about the remove stacks and mixins RFC. So keep an eye on the clock.
A
First one is officially supported utility buildpacks. I think that's Joe.
D
The first point is around some of the criteria that I added for accepting a utility buildpack into the project, and in particular, that any utility buildpack we maintain would provide or implement some behavior that's part of the specification. My thinking there is that the buildpack itself is just an implementation detail: the specification just defines some behaviors that a buildpack, lifecycle, or platform would support, and buildpacks are one way of providing that behavior.
D
But other platforms, other lifecycle implementations, could do that a different way. So I kind of expected that to be a little contentious and require some discussion, but I wanted to start there, because some of the feedback I got on the RFC was that we needed pretty strong criteria for how and when we would accept utility buildpacks.
D
So actually, as an example, that would probably mean that we would not accept a profile buildpack into the project as something we maintain. But it might mean that we accept something like the Paketo environment variable buildpack, because I could see that being a thing that we spec, like a certain way to inject runtime environment variables or something. The other point of discussion is stack support.
D
I had said that any buildpack we accept into the project should support all stacks. Again, I can see that could be contentious, and if stacks are going away, that's also a point of discussion. So I think there's a little bit of this that hinges on the thing we're going to discuss later today, and I think that's part of why I'm asking about the wildcard stack support.
A
I think those buildpacks can be very well specified by the project, as in: this is behavior that's expected of something that implements the buildpack API. The only part of it that I'm maybe not sold on is whether that should be specified in the same document that specifies the lifecycle component. And I get the argument that, well, what if you implemented a lifecycle that didn't do those things as buildpacks, where it implemented them itself?
A
You know, whatever profile support, as part of the lifecycle itself. But I think we get feedback that the specification is confusing because it doesn't describe each component separately, and I don't want to contribute to that. Here we have a document, and here we have a piece of software, but they don't really match up, and it's hard to tell how do I use this. Because we don't have very many lifecycles, right?
D
So yeah, definitely not everything. Some of these would be extension spec stuff, right? Like, I could see us owning project descriptor things or something like that. That definitely would not be in the core spec; that could be an extension. And when I was saying that some things might go into the platform spec or the buildpack spec, I'm not even sure where different things would fit.
D
But I would imagine it could be different. So I think I'm fine with separate files for different parts of the spec, but the part of that that worries me is a one-to-one mapping between a component and a spec. It might make sense to split it out, but that should be because it makes sense to split it out, not because we've implemented a separate component to do that. And if we can do that, then I think I'm on board with what you're saying. I don't know if that makes sense.
A
Maybe there's a separation between an API and a spec. It's like: I agree that this should all be in the buildpack API, and that API should be something you can consume easily, as in, this is the API that a buildpack has to follow. But then the specification that describes the behavior of the parts that you're going to implement, I think that can be divided. Right now, a problem with that is that it's all, you know, one thing.
A
For the different stages and those parts, can we talk about a concrete example?
D
That's what I'm saying, though. That's an implementation detail, the fact that it's a buildpack, right? The buildpack is there to provide or support or implement something that we spec, and, you know, Acme Platform could come along and say, yeah, we're gonna build our own lifecycle and it's just gonna have all these things natively, so you don't need the utility buildpacks to do it. Maybe I'm bleeding into the system buildpack RFC here a bit, but.
E
You know, four implementations doing the same thing in the world, and we really want one. We're trying to move in the opposite direction, where there's a single implementation of some of these behaviors. Unless you're saying that it is a really strong goal to have the flexibility to implement this feature in either the lifecycle or a buildpack, and I would need convincing that that is a valuable type of flexibility and not just extra complexity.
D
Yeah, I'm describing it that way, but that's not really the motivation. The motivation for me is that, as a buildpack user, I don't have to worry about the fact that this is implemented by a buildpack. I just know that BPE_ whatever is how I can do the runtime environment variable thing, and the fact that it's a buildpack is just not something I'm concerned about. It just works, and I think that's it.
E
I agree with the goal there, that people using buildpacks shouldn't have to care about where it's implemented, but I also think that people using buildpacks shouldn't have to read the specs either. Is this just a how-we-document-it question, rather than us specking things just because people use the spec for documentation, because we haven't figured out how to communicate that in the documentation?
A
I wonder if we're trying to use the spec as both a spec and an API definition. We want one unified API, but when we're talking about how things are specified, imagine a spec in the future that has separate documents for, you know, the underlying layer exporting process and the caching interface, right? That's totally separate from the part of the buildpack API that's provided by the builder, which provides, you know.
A
So imagine you have different documents that describe: here's how the layers move around; here's the API the builder provides to the buildpacks. And in that builder document, maybe there's a section for mandatory (and this is a bad name for it) mandatory extensions and optional extensions. Then in mandatory extensions it talks about runtime environment variables and has a particular interface, and in optional extensions it talks about whatever we want to add that you're allowed to not put into a build, right? That could be utility buildpacks.
A
Also, then it's not saying this has to be a buildpack or it doesn't. It's keeping those parts separate, right? Those extension things could even say, you know, this is strongly recommended to be implemented at the buildpack layer instead of at the builder layer. And then we're still talking about the specifications describing the behavior of components.
A
That's not an API definition, and maybe it's a problem that it also creates an API out of it too, and that's something else we should solve. But the underlying problem to me seems like our spec right now serves as kind of documentation for things we haven't documented. It definitely serves as an API definition, even though it's not an API; it's the thing that people view as specifying components.
A
Then it serves as a very poor thing that specifies components, because it's trying to do those other things at the same time. So I think we agree on the underlying point: these things should be required, it should be obvious to buildpack authors, and builds shouldn't be able to not have this functionality.
D
The BPE environment variable is something that I would want as part of the API, but I think you're saying that should be independent of how we spec the components.
E
Think about which API we're talking about here as well. We want BPE_ whatever; if you set it, this is what happens. That's part of the app developer API when they're using a platform. It's not part of the buildpack API, because it's not part of what a buildpack needs to do in order to.
E
Well, I don't exactly want to do this. I don't think we need to spec these things, but if we were going to spec it, it makes sense to me sort of as an extension of the platform spec, an extension spec that platforms can choose to implement. It's like: if a platform wants to provide environment variable support, it needs to include a buildpack that does X. That's describing what a platform needs to do in order to expose an optional feature to end users, and the interface that is exposed to end users.
D
Yeah, maybe I didn't follow all of that, so I think I can take a stab at writing this up and then I'll probably do some back and forth with y'all. But I think that's a good next step. And I would say for the other part of this, the stack support, let's punt on that until we figure out the stack RFC. I think it's probably better just for our time to move on.
A
Sounds good. There was one other thing you mentioned there, I forgot what it was, around wildcards. I think we could bring that up when we get to remove stacks, if that makes sense. Or, I'm happy to, you know... I don't wanna reserve 40 minutes to talk about the other thing, so.
A
Okay, so, sorry, I lost it. What's the next one in the list?
B
I think, as of yesterday, the discussion came up where, really, I was just being curious as to what the state of the RFC was, and it sounded like we were leaning toward CycloneDX being the only supported format, if I'm not mistaken. And I think the question that came up is whether the RFC is ready with that sort of change in place.
C
So currently the RFC states that we'll support both the SPDX and CycloneDX outputs from a particular buildpack, but the lifecycle will only merge the CycloneDX outputs; the SPDX outputs would still be there. And in the future, once the lifecycle does manage to support merging SPDX documents, the buildpacks wouldn't have to update: they could just update the lifecycle and they'll get the merged SPDX form.
C
I think the open questions were how it gets merged with the stack BOM, or basically the base image's, like, how does the BOM from that get merged with the ones from buildpacks, and the format we store it in, and how we handle restores or rebuilds with BOMs.
C
When we last discussed this, we said it was fine to address that in a separate RFC, and for this RFC just to specify that we will be supporting these BOM formats and this is how the lifecycle will merge them. As to where it will store them and how it will restore them during a rebuild, that would be a separate RFC, since we need to move that from a label to some place else anyway.
C
I don't think it directly talks about it, but I've linked that NTIA document there, and the general consensus is that, as of right now, there's no default or best BOM format, and typically, when you're consuming these files downstream, your tools may support different kinds of formats. So it's not just about generating an appropriate BOM; it's also about consuming them in an appropriate way.
C
So we want to leave enough flexibility in the system so that if, internally, your tools only support SPDX, you can still take the individual files, merge them together, and put them somewhere, if your buildpacks want to give you that information. Because, as of right now, conversion between the different BOM formats is lossy, and there's no good mapping between them; for certain fields there cannot be a mapping, because one BOM format doesn't support it. So only for a minimal set of fields is it possible to convert between the two.
C
But if you are looking at a specific use case, or if your organization is more interested in a compliance use case rather than a security use case, one might be better than the other.
C
It also gives us an easy migration path, because we will not convert between the old legacy format and the new CycloneDX format. Those files would still just be there as they were, and if you have the appropriate tools, you can take your legacy buildpacks that are producing the BOM in the legacy format together with some new buildpacks that are producing it in CycloneDX format.
B
Great. I think Terence just added that question about SPDX and CycloneDX to the RFC. I'm assuming we want to maybe have it written down on there as to why we're aiming to support both.
A
I think I agree with the goal of eventually supporting whatever SBOM formats become popular that can be merged together and turned into one SBOM format, or one SBOM for the image. I guess I have two questions. One is, I see a merge... there's a thing in there about a separate merge-BOM binary the lifecycle calls out to that does the merging automatically. But then, maybe I'm just missing something here, but it also says that SPDX merging isn't supported.
C
I think that's a leftover commit I haven't pushed; that merge-BOM binary should have gone away.
A
But for the SPDX one, buildpacks could still output SPDX, but then what happens to that output? Does it end up... I thought it said it ends up in the runtime image, just next to the actual layers. Or what did I misunderstand?
C
If we want to put that somewhere else, like using the cosign SBOM artifact format, we can do that.
E
Just leaving it as a file in the build container for now, not in the app image, but in the build container, and then the platform can decide how to handle it. Because I think if we decided to standardize on cosign, we'd have a bunch of problems with, like, manifests in the daemon case that I don't think we're ready to deal with.
E
It also just ends up as a file in the image, and then a platform could make that available in different ways. Maybe in pack it copies it out to a path that the user provides with a flag. Or maybe a platform like kpack, which already has some built-in support for cosign with those conventions and always uses the registry, could push it using the cosign conventions. But I think if we leave it to the platform, that gives us a good path forward.
A
I think I'm a little worried about supporting SPDX without a path to turning the collection of SPDX files into something consumable. But cosign actually has a way... you can specify an SBOM in a cosign manifest in a way that lets you associate an SBOM with each layer, and so there actually is a format that could capture it.
A
So that makes me think: well, should we actually output these in OCI format and then just not specify how they get exported into the image or whatever, because of the daemon case? Or, you know, maybe it'd be better to specify this so that SPDX could be added in another RFC easily without breaking changes, and then scope it to CycloneDX for now. There's another option I can see too, but I worry about the in-between state.
C
I think, even as it is right now, we have the added benefit that each layer corresponds to a buildpack ID and then some layer underneath it, except for app slices.
C
So when we are generating these SPDX BOMs, we are generating them under that path, so we know which layer each maps to. So even if we leave them on disk, when we eventually support SPDX, when we export it in the cosign format, we'll know which layers those files on disk map to.
E
What if, just to make the migration easier in the future, we created a directory that was called something like outputs, and that's where we put report.toml and the merged BOMs? And if we wanted to put layer-specific BOMs, we could put them in there with a conventional name, like layer.bom.whatever. Then in a platform like pack, we could always copy out that whole directory, which gives you more options for how you could merge things in the future.
B
I was going to ask, I think related to this, what the value is of the lifecycle doing the merging, as opposed to the individual platforms, if they needed it. Maybe I just don't understand the merging aspect of the BOM, specifically because it only relates to CycloneDX right now.
E
I still feel that way, mostly. I think it's less of a strong argument now that we're not trying to combine legacy and CycloneDX together or do more complicated things. But I do think it puts a pretty big burden on the platform to have to keep track of, like, here's the stack BOM, and here's how you name a BOM for each layer, and here's all the different extensions; and then each and every platform has to implement merging. It feels a little bit rough to me.
B
I feel like that's maybe even something for a very specialized platform that wants to say, I'm only going to support CycloneDX, and they do the merging themselves and specialize in that format; or they're only going to support SPDX and, you know, display it in some other format. So I guess maybe I'm just not entirely sure that there's an alignment between what the lifecycle is doing and what the expectation of the platform is.
E
Another argument for putting it in the lifecycle is consistency. When we're saying, oh, maybe one platform only cares about SPDX and one platform only cares about CycloneDX, well, that gets tricky when those platforms are not always providing all their own buildpacks, right? We have a goal of buildpacks running in different platforms and producing consistent results. So if it was on the platform to merge the BOM, you could get a really different BOM depending on which platform you run on, which could be weird.
A
Pack is going to want to parse this and then do validations against it on rebase and things like that. It's not just some metadata that a platform owner cares about; the data may get used extensively by the tooling the project provides, and having it separate makes that tooling more complex, because now we have to have the tooling parse multiple files for each layer, or.
E
The lifecycle doesn't really generate an SBOM; it only merges, right? The stack brings an SBOM, buildpacks write SBOMs for the different layers, and then the lifecycle sort of just stitches it all together. The lifecycle doesn't know what's in the image. Well, it could describe the launcher; maybe it should do things like that, but other than that, no.
B
I guess that's kind of my rebuttal to Steven's statement, right? If the buildpacks are providing it, then now, as a platform, I have to care whether they're in SPDX or CycloneDX, and that seems to add a little bit more complexity if I'm really going to try to use them.
A
So the next one is mine: remove stacks and mixins. I'll go ahead and share my screen to make it easier.
A
The big overarching changes: it still gets rid of the words stack and mixin, and it still replaces mixins with more canonicalized metadata about Linux distributions. But instead of using custom labels for each field, for the OS and architecture and architecture variant (which I realized was missing), it uses fields in the config blob that already exist, that hold that data, and that are mandatory; they must be there for every image. The labels would go in the config blob also; I see no reason for those to be labels.
A
I think they can just be fields in the config blob. The config blob does have version and features fields as well that I didn't use, because version is used by images already with values that aren't exactly these values. Currently, these three fields always hold Go values: they're defined in the OCI spec to be GOOS, GOARCH, and, you know, the equivalent of GOARM, because variant is only used for ARM right now.
A
The mandatory fields are os and architecture, and variant if necessary, but it doesn't have to be there if it's a non-variant supported architecture. Other fields are optional.
A
Sorry, going down here a little bit further, trying to remember what's going on. I took out all the packages logic, I think in the revision before this one, just as a reminder. So target really is only that metadata above.
A
One question, I think it's somewhere in here, maybe it's not a comment on the text anymore, is what if you don't specify any targets. I originally interpreted this as sort of Joe's wildcard question.
A
I think it'd be really easy to extend this to support it, because these targets are individual images that get built, right? And so this could just be a list in that case, just like name and versions here, and this is distributions, not distribution, right?
A
But my concern about doing that is that in an OCI image those fields are mandatory at the top and must be populated. So I'd kind of rather generate multiple copies of the same buildpack, marked for each architecture and OS if necessary, and then it'll just work automatically, because manifest indexes capture each one independently in the same repo, and the right one would get pulled for the architecture you're on. And so I don't like it.
A
Yes, we could build one for x86 and one for ARM, and then put x86 on it and have the config blob lie about it, and capture that it also works on ARM in a label. But that feels kind of complex, and it's something we could add later if we needed to. But I think the actual concern Joe had, maybe I misunderstood, was more: what happens if you don't have this targets list?
A
Can we make it easy for people who are just writing simple buildpacks? And I have that, and I think it's fine if we just assume this: if the targets list is empty, just put Linux x86-64 in there, and then a buildpack is, by default, a Linux buildpack for x86.
D
There's no secrets, and, like, sort of forcing this, making it mandatory, is a limiting factor in a sense. And I could definitely see buildpacks that just start copy-pasting this, the Linux and then Ubuntu bit, and it's like, oh, now I can't use this on CentOS just because somebody copy-pasted the Ubuntu thing in. So I think that's why I favor having it as optional, even if that doesn't mean wildcard necessarily.
D
Yeah, and you'd even have a different script, though, too. I was actually wondering, like, I haven't done any Windows buildpacks, but I have written tests for them, and you can just have bin/build and bin/build.bat together. And so I wonder if we want mechanisms where it's like, yeah, I just spit out.
A
The lifecycle, or pack, maybe, when it runs create package, right? That's okay.
B
Okay, that makes sense, yeah. We could do that, right, and that could be like a utility. The utility itself could determine what it should be, but because those are mandatory, we could at least have pack say that that's the default, and it could get smart if they wanted to. That makes sense.
A
If you don't specify distributions at all and just say Linux x86, it'll build a copy of the buildpack for Linux x86 corresponding to that target. Okay, because, to Joe's point, I was just thinking about: if you get a drive-by PR for your buildpack, because someone has tried it out on ARM and it works fine and they add a target, are we going to stop adding the Linux x86 one if they add ARM? Is that like a weird upgrade path?
E
One thing I think might be good to do: I wonder if we could break the packaging parts of this out, mostly because we don't, in the distribution spec, talk about multi-arch buildpacks yet. We need to define that. Like, what does it look like when you're running packaging with package.toml and pack? I feel like that gets a little confusing. Like, now do we need a way to provide multiple compiled versions of the buildpack? I think there are details there.
E
We need to work them out, and I want to do it, but I wonder if this RFC is better and can go through sort of just as a description of buildpack compatibility with the stack, one that doesn't touch on how multiple versions of it get packaged.
A
I think I agree that the RFC can leave the complexity of packaging as an unresolved question. The problem I have with not mentioning packaging at all is the targets list. I would use a different format if targets wasn't intended for packaging, because targets lets you repeat: like, if you wanted to build two buildpacks, one that works on 18.04 and 20.04 and the other that works on 14.04 and 16.04, you would list multiple targets, right, with the different version combinations.
A
So the format kind of describes the packaging; it doesn't describe the compatibility. But as long as we're okay saying yes, it describes the packaging, but we're gonna punt on how that packaging happens, right.
E
It also describes the compatibility, right? Well, it describes both, because if it only described the packaging, I would say we could be moving it to package.toml and replacing the platform field that we have there; it would be enough to get rid of that anyway, in this case. I feel like you'd need to then move the package.toml stuff in here to make this something coherent and implementable, or you'd need to move this over there. But it should be here, because it's also for compatibility; it's not just for packaging.
A
We always said we wanted to get rid of package.toml eventually, once we had a registry, and now we have a registry. So I wonder if that's something to think about.
B
And just so I'm clear, kind of along the same lines: we're defining this, but also with the actual distribution of it in mind, right? Like, it ultimately landing in an OCI registry, and that's why we're looking at these?
B
You know, how the config is going to look and all that. And then, when we talk about this multi-arch, just so that I'm clear, we're saying that we would have different config blobs, and a manifest list that basically points to all these different architectures where it could be. But then, from an optimization perspective, we would still leverage the OCI aspect where the binaries themselves live in a single blob, right? So the manifest is pointing to maybe just a handful of blobs that are the same across all these different manifests.
A
You know, separate buildpack blobs get produced for each architecture, sorry, for each target, because if they're binaries, you're probably going to want to build different versions of the buildpack for each architecture, right? They're not all going to run on the same... the same bin/build written in Go isn't going to run on Windows versus Linux, or Linux x86 versus Linux ARM. So the idea is that they are separate buildpacks.
A
Okay, cool. That's true, yeah. There would be different manifests, and they would have different config blobs, but if the bits were the same, they would happen to point to the same bits on the registry, because, you know, we zero the timestamps, make sure the timestamps are zero, and all of that stuff. It's not like we're gonna be re-packaging it. Yeah, definitely.
B
Yeah, I definitely want to talk about it. I'm not sure exactly how feasible it is to kill it, but it'd be interesting to discuss.
B
So when I want to do packaging of a buildpack, then I could basically say, okay, go look it up based on this other mechanism, and that mechanism could then say, okay, if you're looking for this name and version of buildpack, you're actually going to find it here, right? Like, do lookups. I guess, are we thinking of a Linux-repos sort of mechanism, where you can have lookups based on repos?
A
I see it like a Gemfile. You know, in a Gemfile you see rubygems.org at the top and then you just list your gems. You don't specify where they all live, but you can override their locations, to point to, like, a specific git repo to pull a gem from, right? That's what I imagine buildpack.toml looks like: it points to the buildpacks.io registry by default.
E
I've got one last thing I want to throw out before we run out of time here about this proposal, if it's okay if I change the topic briefly: rebase and CycloneDX. I think we should not.
E
The io.buildpacks stack ID, because of what the IDs are referring to... I think on some level we need to keep the word stack around, but I'm willing to talk about other words for it; I hate the word, I think. But I think it's fine if it's optional, but you should have to force unless they match. Like, optional means: if the thing is set and they match, that means you can rebase it without forcing; otherwise, you always have to force.
E
I like that, because it's simple, and now that the stack ID isn't doing a lot of other work, you can just use it as a way of indicating that any two images can be rebased out from under each other. Maybe it doesn't tie build to run; maybe they're not the same ID. It's like: any run image with this ID could be the base layers. They could all be swappable.
A
All this says right now is that if there is an SBOM, if there's a merged SBOM, pack is allowed to use that as a convenience feature, not part of the spec, in order to warn you if you're doing a rebase that's bad. I left the CycloneDX example in there to show: yes, it can have packages in it; yes, we could warn like this. But it's in the RFC solely because it's saying we had validations before, and now.
A
We basically don't, but here's how a platform could take advantage of metadata that may or may not be on the image to warn you if you're doing a rebase that's bad. But it's not specified; it's not something that anybody has to implement anywhere. It's just a convenience feature for pack. So then, separately from that, target ID is not related to rebase validations.
A
It is just information that goes from the run image to the... sorry, I didn't mention it, but I exposed all those things from the run image into the build image during build time. It's just information about the run image that got selected. It's not for rebase; it doesn't have to do with validations. This still has zero validation. Zero mandatory validation on rebase, that is.
A
I think they're kind of separate use cases for me. So one is: somewhere it actually says that all this metadata is checked. I don't know if I included target ID above, so I should be clear that, yes, the idea was, if any of this metadata changes, there's a rebase complaint; it's a problem. So I think that should be part of it; 100 percent sold.
A
No, I think it's not even... it's not clear, because it says base image metadata. It doesn't say whether it includes this optional piece, or if that's only the runtime metadata. Okay, anyways, that's fine. This is different, though: the SBOM check is just... you know, occasionally you're publishing run images you're getting from upstream. If you were to mess up, and your run image was different than it was in a previous build but had the same metadata, it's an optional way pack could validate that.
A
Yes, that you really have something that claims to be ABI compatible with the next build. It's just extended validation, right, in addition to validating this stuff. It doesn't need to get implemented first; it could be optional, it could be pack rebase --strict, right, if we wanted to. I just wanted to mention it.
A
It was very interwoven in the RFC initially, so I understand the resistance, but I tried to take it out as much as possible in this most recent revision. I know we are here quite a bit over time, so I think we should call it. Thanks, everybody.