From YouTube: Working Group: 2021-04-14
A
All right, it seems like we should kick things off. I don't see any new faces today, so we'll jump right into release planning and updates.
A
All right, let me see... cool. First thing is additions to the RFC template.
B
Someone had a comment about using the name of the RFC for some of this stuff, and I thought that was a sensible suggestion. But maybe we could update this to also be a change in our process, where we sort of enforce that the name is good, you know?
C
Yeah, thinking about the names of the issues. So, like, if we're releasing the spec... I do this on other work, like on Paketo, where we use the titles of issues and PRs to auto-generate release notes.
C
And I think that could work for us and our components, because we might want slightly different titles if we're describing even the same result of the same RFC, but how it affects, like, the lifecycle or pack or different specs.
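The title-driven release-notes idea above can be sketched roughly like this. This is a minimal illustration, not how Paketo's actual tooling works; the item titles, numbers, and component labels are invented:

```go
package main

import (
	"fmt"
	"strings"
)

// A merged PR or closed issue, as it might come back from the GitHub API.
type item struct {
	number    int
	title     string
	component string // e.g. a "component/lifecycle" label, prefix stripped
}

// releaseNotes groups item titles by component and renders a simple
// markdown changelog, so PR/issue titles become the notes verbatim.
func releaseNotes(items []item) string {
	byComponent := map[string][]item{}
	var order []string // remember first-seen order, since map order is random
	for _, it := range items {
		if _, seen := byComponent[it.component]; !seen {
			order = append(order, it.component)
		}
		byComponent[it.component] = append(byComponent[it.component], it)
	}
	var b strings.Builder
	for _, c := range order {
		fmt.Fprintf(&b, "## %s\n", c)
		for _, it := range byComponent[c] {
			fmt.Fprintf(&b, "- %s (#%d)\n", it.title, it.number)
		}
	}
	return b.String()
}

func main() {
	fmt.Print(releaseNotes([]item{
		{101, "Add BOM to layer content metadata", "spec"},
		{102, "Disambiguate layer metadata files", "lifecycle"},
		{103, "Support wildcard stack IDs", "spec"},
	}))
}
```

This also illustrates the point about titles: the same RFC might yield differently worded entries under different components, because the entry is whatever the PR title says.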
B
This one's also mine. It's really for the implementation team, to kind of get some process around how we do it.
B
Yes, this is mine. I have gotten some feedback and I've got some questions. I'm still responding, but... sounds good.
A
Good. Next: add BOM to layer content metadata.
A
Two merged there. So this is... this is just yesterday, so we're just waiting through the final comment period.
E
We should definitely come up with a tag. I guess we could talk about it in one of the Slack channels. Sounds good.
A
Okay, there we go: disambiguate layer metadata files from that metadata.
C
Yeah, these are just, like, the first options that seemed like obvious ways we could get another directory on the same volume.
D
Should I update the proposal with this, or do you want to start, like, a new one that supersedes this RFC?
C
I think it makes sense to update this proposal with putting the config in different directories, which is what we decided. I feel like the next question after that is: in order for anyone to implement this, they're going to need to know what directories.
C
And I think we should update the proposal with that as well, if you're open to it. But I wanted to put something concrete here so that people could react against it, or bikeshed early if they want to, because I think it's very easy to converge on a different directory in the abstract. But when you look at what our actual existing directory names are, and where they are, it is less obvious what a non-ugly way to do it is.
A
Okay, just looking for feedback still, then.
A
Next one is guidelines for accepting component-level contributions. There was some feedback in here that I addressed before the meeting. "Did you ever reach out to the CNCF?", I think that was the one. I have not, but I did add a statement clarifying that the guidelines are not intended to be anything more than what, you know, the core team decides is acceptable. They don't override the Linux Foundation's or CNCF's legal approval review, because any kind of contribution is subject to those kinds of IP questions.
A
It
seems
like
it's
a
little
out
of
scope
of
this
to
say
to
kind
of
document
the
you
know
linux
foundation's
policies
about
that
in
here,
but
I
could,
I
can
definitely
reach
out.
I
just
don't
know
which
blockchain
I
see
like.
I
think
it
should
be
a
question
that
comes
up
when
a
component
is
proposed
for
contribution.
B
Okay, I guess I have a comment of: we should just make it clear who's, who's responsible for doing that.
B
Just, like, an additional sentence that's like "and for acceptance, so-and-so will..." I can add that.
A
No, we're not quite there yet, but good to go after that's added.
A
Issue generation: this is in FCP. This closes today, looks like. Terence, you want to merge this one?
E
Yeah, I think we have to put the automation somewhere. I could help with some of that.
C
When I merge these myself, I usually just know I'm about to merge it, so I create the issues with the links that will exist. But that's definitely not something you can do if you're trying to do this asynchronously with someone, right? You just put a TBD in there and then fill it in after you've learned it.
D
Yeah, this was something I put up. This sort of started as a Slack thread where we were discussing... so, currently libcnb falls under the implementation team, and that Slack thread was about whether the implementation team should continue owning libcnb or not, or whether there's a need for a different sub-team to be created which owns buildpack-author-related tooling, or whether this should be owned by some other sub-team.
D
The, the main issue, I guess, was that these tools should be kept up to date and should have proper documentation, and someone maintaining them actively.
C
In my mind a separate sub-team makes sense, because there are already affinity groupings of people who care about different things. And I would say the group of people who are most engaged in libcnb, and the group of people most engaged in the lifecycle and imgutil and other implementation concerns, don't have a lot of overlap. And also, for folks who might be interested in contributing more to the project, I feel like there's a pretty natural division: you know, if they're a frequent buildpack author, they may very well want to get involved in libcnb and not in the lifecycle.
C
I almost feel like the way that conversation unfolded in some ways proves my point, because he's not going to come to the implementation sub-team meetings, where we talk about the lifecycle all the time. So I have to go find him on my own to talk about libcnb, maybe proving that we need a forum to talk about libcnb.
C
The argument was that it is, in fact, an implementation of the API, so it's naturally under implementation. I wonder if "implementation team" would really mean "lifecycle team", and maybe we should just be more explicit about that, because that's how we organize in practice.
C
For the same reason: it's an implementation of the platform spec, a consumer of the platform API. If we actually fixed our spec to be more clear, as Ozzie kindly suggested a long time ago, I think this whole conversation would be easier, because we'd have the concepts in our heads that made it make sense, right? But at the least, since we've had a hard time getting around to spec refactors, because everyone's super busy, the least we could do is organize the teams in ways that make more sense.
A
If you made a new lifecycle, you know, you could follow these rules and make one that's also compliant, and so we called that sub-team the "implementation" sub-team, because they're implementing the spec. But I agree it probably doesn't make sense anymore, and also that, to me, libcnb really doesn't feel similar to the lifecycle in what it's trying to accomplish, right? One is the thing you have to use, no matter what, when you're doing buildpacks.
C
My goal for organizing sub-teams is not particularly philosophical; it's very practical. It's like: are these groups of people useful for getting the right group of people in a room to talk about issues, and is this governance structure useful for getting the right people to put eyeballs on things before approving them? And I think, using that framework, it makes sense to kick libcnb out into a buildpack sub-team, and then potentially think about moving other buildpack-author-focused tooling there, if there were any separable things.
B
Yeah. Do we want a new sub-team? Do we want... I mean, we don't have to decide this or discuss it now, but I did think the "distribution" idea Joe had was also an interesting one as well, because that is just, like, the buildpack ecosystem. But yeah, I mean, we can reposition the distribution team as an "ecosystem" team or something else too, because a healthy, like, Go binding, and a set of tools like that, like Sam's, is all about growing the ecosystem of buildpacks and making that healthy.
B
Cool, I'll pull a proposal together; you can hash it out there. Thanks.
A
Cool. Next thing in the list is SBOM: the GitHub discussion about bill-of-materials formats. I am really interested in this.
D
Yes, so this was also something we were discussing in one of the previous office hours, where one of the things we wanted to do was investigate the various SBOM standards, figure out their pros and cons, and see if we can suggest one of them as, sort of, the go-to format. In the GitHub discussion, I started the proposal with three formats: SPDX, CycloneDX, and SWID. And the discussion was also geared more towards recommending one, rather than putting it in the spec.
D
But I saw Stephen was in favor of just putting it in the spec, so that tools like pack can use this information and offer features like CVE detection.
D
I guess the only reason I wanted to bring it up in the working group meeting is: if anyone has any opinions about the different formats... or, I think the last comment someone left was: what are the pros and cons of the different formats, and why should we use one over the other?
D
From the small amount of investigation that I've done, it seems like SPDX is a Linux Foundation project. CycloneDX was something that was started because of shortcomings of SPDX, by a set of folks from the OWASP area, and it seems like it is geared more towards tooling, with, sort of, CVE detection as, like, a first-class citizen, which SPDX is planning to handle in the v3 spec. But CycloneDX seems to have some existing tooling, like Dependency-Track, also offered by OWASP. And that's my current investigation on these different formats.
A
The last comment in there is left by Nisha, who maintains Tern, which is, like, a Linux Foundation project that does, kind of, container bill-of-materials generation. And she was wondering if we've talked about the kind of metadata that needs to be collected, kind of, to represent dependencies in that format.
C
Unintuitive ones too; like, I feel like we want to catch the CNB launcher in there as well, stuff like that that people might not think of. I wrote a manifesto about all the entries that I think a BOM needs, not in a public place, but it's appropriate to share it publicly, so I can try to copy that over. But I think there are two hand-wavy parts of the manifesto; I filled out the part that was what entries I think we need...
C
It sounds like SPDX is the more widely adopted standard. I don't know if that's true, but let's say it was for a second, because it's the older one, and CycloneDX is the newer, more fully featured one. It might make sense to do CycloneDX if you could then translate it to SPDX. Like, if translation only goes one way, we should pick the one that can be translated to the other one.
A
For me, the most interesting question is: what's compatible with the most possible security scanners? What are security scanners going to support in the future? It kind of seemed like CycloneDX had some uptake there; I linked to the Trivy issue. But, you know, it's going to be hard to pick a winner when there are a bunch of different options; just picking the one that gives us the most utility might...
A
I, I think the most valuable thing we can deliver around this is tooling that reads that metadata and does things like give CVE reports, right? Like, imagine when you do a pack build, if at the end you just saw the CVEs that affect your app, right? There's so much power in being able to parse that data.
D
Do we also plan on providing some tooling around it? So, I guess, if it's just pack-specific, that's one platform implementation, whereas if you put it in the spec, then all the platforms, I guess, would have to end up using that. Which is why the original proposal was: just put it in the best-practices thing, and then pack can figure out, if it finds tags that conform to the schema that it expects, it can display that information.
A
I think, in the end, we're probably going to want features we build into pack to support one of those formats, right? Like, it's going to be really annoying to implement, you know, "pack scan" for three or more different things. And, in the end, that kind of becomes the de facto format, but then we've left everybody who didn't pick the one we picked out in the cold with their images.
A
Even so, you can make a buildpack that's usable by different buildpack ecosystems. If your buildpack does SPDX, but then you want to use it with, you know, a Paketo buildpack that does SPDX and a Heroku buildpack that uses CycloneDX, then their metadata, you know, mixes together in different formats, right? That's going to be a big problem.
E
I think that's the problem; I think that's the research we need to do, right, as part of what I would expect to be an RFC. I think those sorts of findings should be depicted there. I think there are a couple of really good questions that came out through this whole conversation, but I did want to add one thing from a platform perspective, right? When we're trying to implement or integrate with the lifecycle, it would be nice to have a very specific common format, to be able to then leverage additional tooling for different specific...
E
...you know, platforms. So for pack, where we use Go, we want to be able to use that; but on something like Tekton, we might want to be able to use some CLI tooling that does additional parsing and whatnot. So I think something that has an ecosystem and tooling around it would definitely be more beneficial than a home-grown solution that pack then translates into a standard.
D
The other thing is: do we want to go with the format that currently has the most support, or the one that has the potential to have the most support? Because those may also lead to different formats. From what I could see, SPDX is taking these things into account, like the feedback they've gotten, and they're creating a new version of the spec, whereas CycloneDX already supports a bunch of these things. But it's not a Linux Foundation project, and I don't know how that translates into other organizations adopting it.
D
So it currently has a lot more tooling, but SPDX is also aware of their drawbacks, and they seem to be investing effort into it. So I don't know how we want to tackle this issue.
E
I think that goes to, like, the question... or the answer to whether or not we could transform something from CycloneDX to SPDX would be helpful, right? If the answer is no, I think that would have us take a closer look at that decision. But if the answer is yes, then, you know, it seems like it would be a winner to some extent, at a glance.
A
I think one thing about this, like... part of the reason why I think we should drive towards a single format, you know, trying to get buildpacks to produce that consistently, is because I think this could really be a killer feature for the project. If you could reliably get a CVE report, with a high degree of certainty, you know, for every Cloud Native Buildpacks image, one that doesn't require scanning the image, it just requires, you know, processing that bill of materials... that would be really, really nice.
A
I've taken notes and left a bunch of questions that were asked in the discussion; feel free to add more things there if I left anything out.
A
Nisha kind of said she might write it. I don't know what timeline she was thinking about for it, but we'll, we'll make sure that that happens. Okay.
D
Yeah, this is also something we've discussed a lot of times in the past. It's mainly around the fact that people currently try to, like, use buildpacks from Paketo or Heroku, and then they discover that they're only supported by certain stack IDs, and then they get confused why such a restriction exists in the first place. If you are an operator and you're creating a bundler, sorry, builder, with specific buildpacks and a stack, how likely is it that you've not tested anything?
D
And I, I don't know... there's another conversation, in the general channel, where someone posted that they couldn't reuse the buildpacks, because it has specific restrictions that are based on string validation, rather than some actual validation of what the buildpack actually requires. It's just, like, a heuristic; anyone can say they're compatible with this buildpack.
C
This is a top priority for me. I want to change stack IDs so that they are meaningful even if we hadn't defined specific ones, using a format that captures, like, the OCI platform information in the stack ID. I have it on my to-do list, or I did an RFC for this.
A
I'm curious where the current stack IDs, and I'm not talking about mixins, make sense... if we're just talking about stack IDs, I'm curious where we run into conflicts now, because the only case where your stack ID doesn't match should be because the base images you're using are, you know, very incompatible. There was an RFC a while back...
A
That
said,
you
know
if
it's
based
on
ubuntu
bionic,
you
should
have
the
same
stack
id
as
everything
else
based
on
ubuntu
bionic
and
then
build
packs
that
you
know
like
unless,
like
I
can
see
some
weird
cases
like
bionic
and
focal
being
different.
Maybe
even
though
you
know,
there's
no
abi
compatibility
guaranteed
there.
It
seems
dangerous,
but
you
know
maybe.
C
Java, right? Like, there are a lot of Java buildpacks that should be able to work on any Linux. The buildpack... ABI compatibility between different Linux distros doesn't matter, because we're not going to let you switch between stack IDs in a build. But there are many buildpacks that could work on different distros of Linux that we are artificially restricting to Ubuntu right now, with the stack IDs in buildpacks.
D
I guess, like... we sort of had a similar conversation on the Paketo side, where they sort of categorize their buildpacks into things that install dependencies, and then buildpacks that configure your application based on those dependencies. And for the latter, which has a lot of complicated logic, you're completely stack-agnostic most of the time; or, if you're limited, you're probably limited by the operating system, like Windows versus Linux or something like that. And I don't think that's captured anywhere right now.
A
I think I'd support improving this. What I'm worried about is, and I think you brought up the JVM as an example, I'm worried about encouraging buildpack authors to do more static linking to try to make their dependencies... like the, you know, language runtimes, compilers, really language runtimes, that they install, you know, more statically linked, so they work across a lot of different distributions, because then you launch...
A
But, but in a lot of cases the JVM isn't built during the... or whatever runtime you're talking about isn't built during the build process. And it's not just important that it stay the same; it's that the artifact distributed with the buildpack is pre-built to be dynamically linked against a particular set of LTS packages.
A
And then I worry, if we get rid of the stack restriction, or if we loosen that up too much, right, then we end up with buildpack authors that are trying to make buildpacks that are compatible with many different kinds of, you know, like both UBI and, you know, Ubuntu Bionic or whatever. And to accomplish that, they, you know, statically link libc and the whole world into the things they're building, and then we don't get the benefit of that update.
F
There's no way that, when we're doing UBI-based ones and we're installing things from our RPMs via stack packs, we're ever going to expect those to run on any other, like, an Ubuntu-based system. Because, you know, in our mind that will be the ecosystem: we're going to have to build over UBI images, and I don't envisage that the buildpacks there are going to be transferable to another ecosystem.
C
I feel like stack packs definitely won't. But, like, if I have a buildpack that's installing a dependency that I'm fetching from an upstream provider, and that provider provides one version of that dependency and it's called x.linux, and all I do is install it, I should be able to install it on any Linux. And I think buildpack authors can use expressive stack IDs, and should be aware of their dependencies, to be able to say which of those situations they support, and also in our, like, assets...
C
That's what I want to do. I want to expand our, like, stack ID definitions and use wildcards. So if I run on any Linux, I should be able to say, like, io.buildpacks.linux.*; if I run on any Linux, but only amd64, it should be, like, io.buildpacks.linux.*.amd64. And then you could do flavors, like "any Ubuntu", stuff like that.
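The wildcard matching being proposed could be sketched as simple segment-wise comparison. The io.buildpacks.linux.* naming is the speaker's proposal, not an existing convention, and the fixed-length segment rule here is one possible interpretation:

```go
package main

import (
	"fmt"
	"strings"
)

// stackMatches reports whether a buildpack's stack pattern (which may use
// "*" for any single dot-separated segment) accepts a concrete stack ID.
func stackMatches(pattern, stackID string) bool {
	p := strings.Split(pattern, ".")
	s := strings.Split(stackID, ".")
	if len(p) != len(s) {
		return false // require the same number of segments
	}
	for i := range p {
		if p[i] != "*" && p[i] != s[i] {
			return false
		}
	}
	return true
}

func main() {
	// "Any linux, any arch" vs. "any linux, but only amd64".
	fmt.Println(stackMatches("io.buildpacks.linux.*.*", "io.buildpacks.linux.ubuntu.amd64"))     // true
	fmt.Println(stackMatches("io.buildpacks.linux.*.amd64", "io.buildpacks.linux.ubuntu.arm64")) // false
}
```

A later speaker's point about negative filters ("every Linux except Ubuntu on PowerPC") shows where this string-based scheme gets awkward: a wildcard can only say "anything here", not "anything but X".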
C
If we came up with a schema where we haven't just defined one stack ID that some people share, but we've defined a plan for how to make an expressive stack ID, then I think we could get more collaboration between stacks and buildpacks.
F
Like a JSON object with key fields in there and that sort of stuff, so that you can query that. Because, when you start getting into platform... you know, if you say you want to run on every Linux, but you don't run on Ubuntu on PowerPC, then creating a filter that says that in the negative, for that one case, for a string-based thing, gets extraordinarily difficult as the number of permutations and combinations starts to stack up with the things that you're trying to validate against.
F
That would make sense for some of it, I think, because, you know, the sort of, the processor architecture and that would be better encoded at that layer, because, you know, the OCI image is going to be able to run only on the PowerPC system anyway. So it's already going to be tagged in some manner at that level.
D
That was something I was imagining: like, instead of an ID, which imposes some hierarchy or limitations on how you can express these types, you can, you can have, like, stack types, for both the stack domain and the buildpack domain, that can describe various attributes, and you can match them against the stack. So on the buildpack side it can be a list of, like, attributes it is compatible with, whereas on the stack side it will be a single string for each of those attributes. So you can have, like, platform, operating system, flavor.
D
If you add, like, Linux flavors or something as a new attribute, you're not breaking and reimagining the whole ID again; you're adding an additional attribute which, if absent, will just skip validation. If it's present, it should be present both on the buildpack side as well as the stack side. So you can move forward without breaking existing validation, and you don't impose this hierarchy.
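The attribute-based matching described here could look roughly like the following. Attribute names such as "os" and "flavor" are invented for illustration, and the skip-unknown-attributes rule is one reading of the proposal, not an agreed scheme:

```go
package main

import "fmt"

// stackAttrs: one concrete value per attribute on the stack side.
type stackAttrs map[string]string

// buildpackAttrs: for each attribute the buildpack cares about, the set
// of values it is compatible with.
type buildpackAttrs map[string][]string

// compatible checks every attribute the buildpack declares against the
// stack. An attribute the stack doesn't declare is skipped, so new
// attributes can be introduced without breaking existing stacks.
func compatible(bp buildpackAttrs, stack stackAttrs) bool {
	for attr, accepted := range bp {
		value, declared := stack[attr]
		if !declared {
			continue // unknown attribute: don't fail old stacks
		}
		ok := false
		for _, v := range accepted {
			if v == value {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	bp := buildpackAttrs{
		"os":     {"linux"},
		"flavor": {"ubuntu", "debian"},
	}
	fmt.Println(compatible(bp, stackAttrs{"os": "linux", "flavor": "ubuntu"})) // true
	fmt.Println(compatible(bp, stackAttrs{"os": "linux", "flavor": "rhel"}))   // false
}
```

Because matching is per-attribute rather than against one opaque ID string, there is no fixed hierarchy: adding a new attribute on one side only participates in validation when both sides declare it.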
A
I haven't heard anything that's not already supported by "stack ID equals star". So the, the platform and the operating, like, base, like Linux versus Windows, those are both encoded in manifest lists, right? That stuff already works. There's already... you know, when you say you want to build an image on top of a Linux x86 build image, you can only do that with buildpacks that are compatible with Linux x86, right? That's already, already built into the OCI spec. So, like, I don't think it... I don't know if...
A
We
did
an
rfc
that
says
that
it's
not
how
stack
ids
are
supposed
to
be
they're
supposed
to
be
linux.
Distribution
flavors,
like
bionic
with
with
one
one
set,
I
o
build
pack
string.
That's
I
o
build
pack
stacks
bionic,
for
instance,
it
says
this
is
this
is
what
bionic
is
when
it.
So
I
heard
one
thing
that
was
different
or
that
we
don't
support,
which
is
like
saying
ubuntu
as
opposed
to
motu
bionic,
but
I.
C
I guess the way we did stack IDs used to make sense; they made a lot of sense to me on paper when I first did it. But, having lived with them for a while, I think they're cumbersome and they kind of prevent things from working together. Like, first of all, people don't actually use mixins to indicate what their buildpack requires. They just don't.
C
They
just
assume,
like
you'll
run
it
on
my
stack
image,
I'm
too
lazy
to
even
figure
out
what
set
of
mixins
it
needs
and
then
number
two,
I'm
less
worried
about
a
situation
where
a
build
fails,
because
it's
missing
a
package
than
I
am
about
a
situation
where
we
prevent
a
bunch
of
builds
that
could
succeed
from
succeeding
artificially.
C
I feel like having someone have one failed build and open an issue, and then someone tell them, like, "oh, try it on this other stack", is, like, not great. But I think having a world where no one can use a whole swath of buildpacks, because they are running on, you know, an image with stack ID heroku-20 and they're a bunch of Paketo buildpacks, is worse.
A
Are there so many of those things within each group that it would be better to have a designation that's, like, "Ubuntu" or "Debian-based" or something like that, as opposed to just having people list the different flavors of Linux distribution, that are Debian-based, that their buildpack supports? Like, even versions of apps in LTS, you know, Ubuntu distributions can change a lot; like, the command-line interfaces can break. So...
D
I guess my point was, like: each time we come up with a common grouping, we'll have to figure out a way to express that, and that would mean we'll have to change the format of these stack IDs each time. So, like, you can have an arbitrary number of commonalities between different Linux distributions, and it's not possible to capture all of that in one stack ID. And some specific buildpack can choose any particular group of these stacks and decide, "hey, I want this group of stacks that I'm compatible with."
C
I think this needs to change; like, we need a way, an extensible object. I think Sam's idea with objects is better than my wildcard stack ID, where you could describe exactly what you need, and then it is optimistic. And I think we should err on the side of letting something fail every now and then, rather than preventing a bunch of things from succeeding.