From YouTube: Working Group: 2021-04-01
A
Great, first item on our agenda: decoupling stack IDs and compatibility guarantees.
B
But then I thought about it, and it seemed very weird to me that the stack IDs, which currently seem to be io.buildpacks or something (and some organization may want, for example, com.organization-name), are not just an ID to uniquely identify stacks; the ID is coupled with what buildpacks are compatible with that stack.
B
It allows you to get away from some add-ons, but the fact that the main compatibility guarantee is still tied to the ID seems weird to me: the identifier of the stack is somehow an indication of what it should be compatible with, and then you have this overloaded ID that everyone has to use if they want to use most of the buildpacks out there, which is the io.buildpacks bionic stack. That's sort of where I'm coming from: the ID should just be a way of identifying the stack, not what it is compatible with.
B
So even if we were to implement that proposal we were talking about earlier, wildcard stacks, like Linux stacks or Ubuntu stacks, whether it's bionic or something else, you're still left with the fact that if a buildpack says "I am compatible with anything that supports Debian-like things", so you have Debian, you have Ubuntu and you have some other OS, you still have to individually name them.
B
I guess you'll have to change the taxonomy each time you discover a common aggregate. Earlier your aggregation may have been something like linux.ubuntu; now you have to do linux.debian-like.ubuntu. Which again bothers me: each time someone comes up with a new aggregation, you have to figure out where you want to put it in that taxonomy of domains.
B
And that makes tags seem like an easy way out, because they're literally just tags. You could have a platform mixin, like platform:linux or platform:whatever, or you can define new sorts of prefixes, so you can have something like languages-supported:python. That is easier to compose and to define contracts with, because then you can just take x, y and z and put them together without assuming there's a hierarchy between them.
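The tag-based model described above comes down to a simple set check: a stack advertises a flat set of prefixed tags, and a buildpack is compatible when every tag it requires is present, with no hierarchy between tags. A minimal sketch; the tag names and prefixes here are illustrative, not part of any spec:

```python
def is_compatible(stack_tags: set[str], required_tags: set[str]) -> bool:
    """A buildpack runs on a stack iff every tag it requires is provided."""
    return required_tags <= stack_tags

# Hypothetical tags a stack might advertise.
stack_tags = {"platform:linux", "distro:ubuntu", "languages-supported:python"}

# Requirements compose freely: take x, y and z and put them together.
assert is_compatible(stack_tags, {"platform:linux"})
assert is_compatible(stack_tags, {"platform:linux", "languages-supported:python"})
assert not is_compatible(stack_tags, {"distro:debian"})
```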
A
Basic shared features like OS, architecture, distribution, distribution version, and maybe we can leave some room to grow into arbitrary things like distribution flavor, something-something. Then have everybody use one of those stack IDs and move the domain-specific stuff into mixins. So we could define some mixins that are well known, like OS packages: if you say package:something and you specify a distro, it's the package with that name using the package manager of the distro you specified. And then when people want to do other things, they could namespace the mixins, so it'd be something like io.paketo:some-mixin. I feel like in some ways that would solve a lot of problems, because by not allowing definitions of arbitrary stack IDs we're forcing everyone to have a stack ID that tells us something, so we know when things can be compatible. But then you can still use the mixins to declare a specific requirement that wasn't defined in the...
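A minimal sketch of the namespaced-mixin naming floated above, assuming a single colon separates the prefix (a well-known key like package, or a reverse-DNS namespace like io.paketo) from the mixin name. The separator and all the names here are assumptions for illustration, not an agreed syntax:

```python
def parse_mixin(mixin: str) -> tuple[str, str]:
    """Split 'prefix:name' on the first colon; no colon means an unprefixed mixin."""
    prefix, sep, name = mixin.partition(":")
    if not sep:
        return ("", mixin)
    return (prefix, name)

# A well-known, spec-defined prefix vs. an organization's own namespace.
assert parse_mixin("package:ca-certificates") == ("package", "ca-certificates")
assert parse_mixin("io.paketo:some-mixin") == ("io.paketo", "some-mixin")
assert parse_mixin("curl") == ("", "curl")
```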
C
Yeah, I think it's mostly around buildpack authors currently knowing what's available and what's not; mixins weren't really a thing for being able to do some of that. I don't think we're against it, we just haven't... well, we did explore it. We did try to get to bionic for heroku-18, but it did present some challenges as far as compatibility. And yeah, it wasn't that there were major hurdles that I remember. I think we probably could get there for focal for sure, especially when the mixins are there.
A
But I know on the Paketo side there are buildpacks that only exist on our full builder because in reality they require mixins, but I don't think the buildpack.toml actually declares those requirements; people just pull down that builder, or read the docs that say "use this buildpack with that builder". And of all the, you know, many growing pains we've run into as both of these projects grow, that actually hasn't been one of them. I've never come across a situation where someone...
B
Because the thing I'm getting at is that it makes buildpack reusability really hard. If someone wants to reuse a buildpack, even if they're sharing almost all of the code, they still have to create something new, create a new buildpack, normally define a bunch of things, when they are literally just pulling down the same code. And I think once there is something like stack packs, or something like that which allows you to specify "these are my requirements", a lot of these concerns would be alleviated, and we wouldn't need the stack ID being the compatibility guarantee, I guess. Rather, it could be just a system of requirements and provisions, similar to what we have with different buildpacks; the buildpack would say that before I run detect or build, I require something else. I don't know.
A
That, I thought, would be in the OCI spec, and that's implemented by GCR and the like. What fields are there, to figure out whether we could actually just use these OCI fields instead of stack IDs?
D
So, see, I think for me there's still something worth having; you know, I guess you want to call it a namespace instead of an ID or something like that, but something that represents that. The mixins, which I kind of find synonymous with labels on resources and stuff like that: the definition of what these labels are could change between, you know, this image and this other image. And I think if we go too far in one direction, where compatibility is defined by something else, like all these other parameters of what the platform is, the architecture, the Linux distribution and all this other stuff, we probably still want a way to be able to say: okay, these mixins, these labels, mean this; but they don't mean the exact same thing if you're talking about this other combination, this other namespace or domain.
D
So I guess, I know it was brought up at some point, and I think it was the idea that buildpacks could support multiple stacks and define their compatibility with multiple stacks. That's not a thing that's possible right now, right? They could have the "any" stack.
A
I do wonder if, you know, maybe we don't use the actual platform field in the OCI spec, but if we created io.buildpacks stack IDs that included all of the same elements, and you could stick wildcards in the ones where you could support any value. Would that sort of be a nice combo, where it's reusing known concepts but still available in one label?
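That wildcard idea could look something like component-wise matching over dot-separated stack IDs, with * matching any single component. A hedged sketch; the ID layout and prefix are hypothetical, not an agreed format:

```python
def stack_matches(pattern: str, stack_id: str) -> bool:
    """Compare dot-separated components; '*' in the pattern matches any one component."""
    p_parts = pattern.split(".")
    s_parts = stack_id.split(".")
    if len(p_parts) != len(s_parts):
        return False
    return all(p == "*" or p == s for p, s in zip(p_parts, s_parts))

# A buildpack supporting any Ubuntu-family amd64 stack, under an invented layout.
assert stack_matches(
    "io.buildpacks.stacks.linux.ubuntu.*.amd64",
    "io.buildpacks.stacks.linux.ubuntu.bionic.amd64",
)
assert not stack_matches(
    "io.buildpacks.stacks.linux.ubuntu.*.amd64",
    "io.buildpacks.stacks.linux.debian.buster.amd64",
)
```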
A
If I'm like: oh, I want almost everything that's being done here, but I'd like this one, you know, function add-on, it just doesn't work, and then you have to PR to add in your stack ID, but the mixin namespace is totally different. This is very difficult. Actually, if the buildpack author was designing for a different stack, it by default won't work on your stack, and trying to get it to work is a little bit ugly in some ways.
B
Which is why I think, so I get the fact that you want mixins to have a specific meaning based on the namespace, but why not push that down to the mixin itself? Which is what Emily was suggesting: you have a prefix, then the mixin name. What this gives you is having multiple mixins from multiple namespaces be part of the stack, rather than, once you choose an umbrella, all the mixins having to be from that namespace.
D
Yeah, I think my concern is really just overlap, or conflict of meaning, of the mixins. But I think that's where maybe something more concrete of a proposal would shed some light on what that would really look like and what the drawbacks would be.
D
Because, again, I'm coming from the resource and cloud management aspect of things, where mixins are labels, and how labels are structured and what they mean is more or less defined by the operator, within that context, within that domain, and not so much by everybody agreeing: hey, this is what we're gonna fix this to be. Because otherwise they would just be fields, right? They wouldn't be arbitrary labels or mixins.
A
I think it's true that you can't add a mixin requirement when you're saying you work on any stack, like the total wildcard stack that we support. Maybe you could do that if you're using any wildcard in the stack, or we could be more specific about it: only if you've gone down to a distro can you have mixins, but otherwise you can't.
B
I think this might also be something that the registry could help with. Currently we show buildpacks, but if you could aggregate them by the stacks or the mixins they're compatible with, or list out all the mixins from the buildpacks and say that, okay, mixin xyz has so many buildpacks, then you could at least have some reference as to what other people are using to indicate compatibility, rather than inventing your own new thing.
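The registry aggregation suggested here amounts to a frequency count over declared mixins, so authors can see which conventions already have adoption. A small sketch with invented buildpack names and mixins:

```python
from collections import Counter

def mixin_usage(buildpacks: dict[str, set[str]]) -> Counter:
    """Map each mixin to the number of buildpacks that declare it."""
    counts: Counter = Counter()
    for mixins in buildpacks.values():
        counts.update(mixins)
    return counts

# Hypothetical registry data.
registry = {
    "example/ruby":   {"package:libpq", "package:ca-certificates"},
    "example/python": {"package:ca-certificates"},
}
assert mixin_usage(registry)["package:ca-certificates"] == 2
assert mixin_usage(registry)["package:libpq"] == 1
```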
B
So it's not direct: you're not directly saying that this is the definition of something, but it's definition by the people who are using it or implementing it. So if five buildpacks say "I'm compatible with mixin debian-python-3", then you know that if you want to be compatible with a Python 3 that was distributed as a Debian package, you can define that mixin.
B
So maybe it's also the lack of information: people don't know how many buildpacks use a stack ID, or how many buildpacks depend on a certain mixin, which forces them to be very restrictive. Or they might choose a stack ID where at first they thought, okay, I'll invent my own thing; but if they could see that all of these buildpacks support this stack ID, and these are the buildpacks I want to use, then as an operator you can say that this is the stack ID.
A
I think it's weird that we just defined bionic in the project, and just in an RFC at that, and didn't produce a document that's more discoverable, or a stacks repo. If we had guidelines around how you could specify any stack, then at least we could shepherd people towards a norm more easily. I feel like at this point it's one RFC back in history, and it's only because we talk to each other that there's any overlap, right?
A
I'd love to specify, like, okay, you know, maybe I'm getting worried about this, so we don't get rid of different stack IDs. But let's say you want to specify a UBI image: we should have a generic schema where, if you say io.buildpacks.stacks and put other things after that, it means something. So you'd have io.buildpacks.stacks.linux.ubi.8.amd64, and the meaning follows from the places of all those identifiers.
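A sketch of that generic, position-based schema, assuming a hypothetical field order of os, distro, version, arch after an io.buildpacks.stacks prefix; the positions are exactly the kind of convention such a schema document would have to pin down, and are not an existing format:

```python
FIELDS = ("os", "distro", "version", "arch")  # assumed positional meaning
PREFIX = "io.buildpacks.stacks."

def parse_stack_id(stack_id: str) -> dict[str, str]:
    """Read positional components of a structured stack ID into named fields."""
    if not stack_id.startswith(PREFIX):
        raise ValueError(f"not a structured stack id: {stack_id}")
    parts = stack_id[len(PREFIX):].split(".")
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} components, got {len(parts)}")
    return dict(zip(FIELDS, parts))

assert parse_stack_id("io.buildpacks.stacks.linux.ubi.8.amd64") == {
    "os": "linux", "distro": "ubi", "version": "8", "arch": "amd64",
}
```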
C
Unbelievably annoying, because everyone depends on ca-certs, but no one realizes it, and then every time you change it you basically have to go through that process again. Because, I mean, in our case we've got folks who are writing just bash build scripts, and these are carried over from a decade's worth of work, so there are probably certain paths that rely on things that just no one even knows anymore. And so accurately describing a buildpack and all of its dependencies was basically impossible.
D
Yeah, see, because I wonder if, in that particular case, if we had dependencies... let's talk specifically about packages. If we had a different mechanism for defining packages, and each package just had a way that it constructed a dependency tree. So you know that you depend on, let's say, Python, and Python depends on ca-certs, and I'm sure there's a whole bunch of other dependencies, but somehow that was encoded somewhere. Then you could create this dependency tree, and you could validate against the dependency tree.
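The dependency-tree idea sketched in a few lines: given a map of direct dependencies, expand a declared package into its transitive closure, so a requirement on python would implicitly carry ca-certs along. The package names and edges are made up for illustration:

```python
def transitive_deps(pkg: str, direct: dict[str, list[str]]) -> set[str]:
    """Walk the dependency graph, collecting every package reachable from pkg."""
    seen: set[str] = set()
    stack = [pkg]
    while stack:
        current = stack.pop()
        for dep in direct.get(current, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Invented package metadata: python pulls in ca-certs and libssl; libssl pulls in libc.
graph = {"python": ["ca-certs", "libssl"], "libssl": ["libc"]}
assert transitive_deps("python", graph) == {"ca-certs", "libssl", "libc"}
```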
A
It's the same problem, though, because right now, if you put a dependency on the OS Python package, you don't need to put a ca-certs one if, in fact, Python depends on ca-certs, because then Python won't ever be there without ca-certs. So tracing down the tree doesn't matter; it's just that actually figuring out what packages you depend on is not a trivial exercise for everyone, all the time.
C
And it's dynamic as well: your buildpack might require, you know, Python in this one situation, but maybe it doesn't in another, and so there's no way to express that. And you don't really want to; if you're talking about making tiny images at some point, the Go buildpack might require all this stuff, like the main heroku stuff, but then on a tiny image maybe it gets away without doing it, as long as you meet these certain requirements.
B
I mean, I could imagine a use case where you have some meta-buildpacks that, let's say, define multiple stack IDs. I want to use, say, nine out of the eleven buildpacks included in that meta-buildpack, and I don't want to define the order or something again and again. There's currently no way to do that, other than just taking all of them apart and then individually defining a meta-buildpack with the correct order.
C
It's even trickier, like we talked about with the Ruby one. I believe there's the question of: do you list something like the Postgres libraries? Because most folks are running Postgres for Ruby, but not everybody, so you don't want to restrict stacks if someone's using MySQL with the Ruby buildpack. You don't want your buildpack to say that it requires MySQL, because the buildpack itself doesn't require Postgres or MySQL, but most of the apps it builds do, and so that gets kind of awkward as well, because you can't really... yeah, it's hard to express these very dynamic conditions. That's basically all there is to say.
B
I think this is the main thing: all the other ecosystems have a nice way of specifying dependencies, which allows them to be reused, whereas with buildpacks, from what I've seen, it's very easy to proliferate them. If someone wants a new buildpack, they have to create it rather than reuse some of the existing ones, or parts of existing ones. So it's very hard to define atomic buildpacks and then aggregate them together in a way that others can use, because in creating a buildpack, no one thinks about how some other person is gonna use it in some other aggregation. They just think about their specific use case and their specific order or aggregation. So, like, when...
A
We could do a lot more documentation on the Paketo side that would actually make it easier to reuse, but that's also basically just a buildpack-author namespace, where different buildpacks can use the same keys to mean different things.
A
I feel like we left some of these contracts really loose, which has had some benefits: we can do cool things that weren't necessarily planned out exactly for us. But I feel like we've gone back and forth on this a lot, and Ben was a big proponent of keeping things loose. I wonder if tightening things up, and stronger distinctions between when something is supposed to be understood by everyone and when it's domain-specific, so that things are more likely to be interoperable, could create a better...
B
I mean, I think this is where some standards or guidelines would help. The spec could still keep it loose, but: if you want maximal compatibility with this huge ecosystem out there, you can define your things, or use things, according to these guidelines; if you don't want that, you can still do whatever you want, but then that means you'll have to do extra work. Currently it doesn't stop people; it makes it harder for people who do want to use things.
A
Like, if the thing you require in the build plan, if what you expect to come out of it is an executable to be on the path, you should name it the name of the executable or something like that; or maybe we'd say exec: followed by the name of the executable. Just some guideline, even if we don't define a specific one, so there's a higher probability of things working.
A
So, I talked about that one for a long time. Sorry, I'm not doing a good job facilitating this tightly; Javier shouldn't have entrusted me with the power. Should we move on to a standard format for the BOM and security scanning?
B
So that was also something I put up, which is along the same lines: we have a BOM which contains all this metadata, but the real usefulness there is being able to have some common tools that can parse it and then identify information from it which we can act upon. Otherwise, it's just random metadata.
A
You're picking all my favorite topics; these two topics are the two things that are giving me heartache these days. "I feel like we should use the SPDX format instead of having our own standard BOM format" is sort of the sentence people have been throwing around.
A
I don't know enough about the format to know how much of a good idea it is, but I feel like if anybody has defined a standard that isn't just the one we made up, it'd be nice to use that instead: to generate data in that format, and maybe have a place to aggregate things that don't fit. But if there are a couple of known activities that people want to do, like gathering OSS license information, or creating...
A
But if we formatted it uniquely, both in the overall structure of the BOM being something we made up, and in each buildpack having so much freedom in what it puts in there that it's not standard, then you might have all of the information you need, but not in a way that's useful. You're so close to having everything you need, but you don't quite.
B
I mean, it doesn't have to be something enforced by the spec; it can just be a suggestion: hey, maybe if you use this, you get some other benefits from these other tools, or something like that. But currently, because it's a free-form field, the metadata table, you can just put anything there, and people will reinvent formats and then reinvent tools, and then, when they try to connect them to some existing tools, they'll see: okay, no, my format is not correct.
A
Next on our list: intro video workshop. Yeah, I added it. I created the discussion a few weeks ago about creating a one-minute video to introduce what buildpacks are. There are many thoughts about what should be in this video; since it's so short, it should be very clear.
A
So after some discussion with some people, we decided it would be great to have a one-hour workshop, meeting, whatever you call it. I think at least the first meeting will be some brainstorming on what this intro video should include. I'll probably schedule it for next week on Tuesday, once I know the times for sure; I'll also post it in the general channel, so everyone can join. Yeah, that's all I wanted to say. Cool.
B
So the easiest case is when you have a monorepo where the sub-path is the project you want to build, and that's easy to do. The more difficult one is, let's say, you have different folders, and each of them has to be acted upon by different buildpacks in the same build process; or, let's say, you have dependencies in a different folder which you want to include in your build process, but you're building something in a subdirectory.
A
You point the buildpack at the one you want to build and it just builds that one; and then, if there's a build process that builds multiple artifacts, you also give it something like a matcher, so it can pull the right one out of the target directory. And I know other buildpacks, like the Paketo Go buildpack, include an option where you can specify the package you want to pass to go build, stuff like that.
B
So you could technically push down the detection: let's say you want a particular buildpack to detect against a specific subfolder. You can do that by specifying a key, so that the buildpack looks at that key and then runs its detection logic in that subfolder, instead of at the top-level directory. Yeah, okay, yes, that makes sense.
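A hedged sketch of that subfolder-detection pattern: the buildpack reads a user-provided key (an environment variable here, with the illustrative name BP_PROJECT_PATH) and resolves the directory its detect logic should inspect, defaulting to the app root:

```python
import os

def detect_dir(app_dir: str, env: dict[str, str]) -> str:
    """Resolve the directory detect should inspect, defaulting to the app root."""
    sub_path = env.get("BP_PROJECT_PATH", "")
    return os.path.join(app_dir, sub_path) if sub_path else app_dir

# Without the key, detect looks at the repository root; with it, at the subfolder.
assert detect_dir("/workspace", {}) == "/workspace"
assert detect_dir("/workspace", {"BP_PROJECT_PATH": "services/api"}) == "/workspace/services/api"
```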
A
I wonder if that makes sense as an extension spec, like an optional extension spec. I don't think we're gonna put too much more onus on what it means to be a compliant buildpack, but it would still provide guidelines.
A
The other thing would be log level. I know you can change the log level of Paketo buildpacks, and all of them accept the same environment variable, but I'm sure we didn't pick the exact same one that other people did. So now you've got to set three environment variables if you're using a set of buildpacks in a build that don't all have the same conventions.
B
Because now that you have users using these buildpacks, for any specific buildpack the user has to go and read that buildpack's specific detection process in order to use it. There's no convention they can rely on, such that, okay, if I'm using buildpack X and I specify this variable, I know that buildpack X will use this version, or it will use this log level, or it will use this directory. You have to go and read the implementation of the detect step, or whatever documentation they've provided, to figure out how it runs today. It's just a problem I've been facing in trying to teach people how buildpacks would work in their projects, because now they have to learn a whole new way of specifying what things they want, or how to get the buildpack to do what they want.
A
I know on the Paketo side we've tried, within the tools available to us. In detect, most of the time you don't get output unless it's verbose, or nothing detected and something went wrong. So our place to write output, if we want to give users hints for things, is build.
A
But all that information is coming from fields in buildpack.toml that could be put into a label containing all the buildpack metadata. If there was some standard format for that, then you could, you know, "pack document" a buildpack, and it could just print out what all the options are, stuff like that.
A
I like the idea of making it a little bit more structured, so that pack can make a really pretty output for the CLI, but also, you know, the buildpack registry could eventually make pretty web pages that display the same thing.
A
Yeah, you could also say: here are the build plan entries I provide or require; here are the names of the keys I'll put in the BOM. You know, if we fix that format, maybe even that too.
A
All right, I think we're at time. I know I gotta run.
A
There are a lot of big ideas here, and I think all of them would be really valuable. It's hard to think about how to make progress on them with all of the other stuff in flight, but I'm glad we're talking about it. I wonder which of these we want to approach first; that might be a conversation we could have later.
D
Yeah, I do wonder about the first step. I know typically it's like: oh, create an RFC. But I do wonder if at least starting a discussion on the discussion board would be a good place to start, or an issue on the RFC repo. But at least somewhere we could track the...