From YouTube: Working Group: 2021-06-17
B: I don't think so, so we'll jump right into release planning.
A: Cool. And for pack: yesterday, during our platform sync, we decided that the next release will be on June 30th, and so because of that we'll be going into feature complete next week. So if there's any changes, we want to get them in before that.
B: Going once, going twice... seems like no. Next thing is our weekly RFC reviews. Give me one second, I'll share my screen.
B: So the first one I just opened last night. It's kind of a stackpacks alternative that removes the concept of stacks. It's a little bit of a sweeping change. I think... Sam, did you put that on the... or someone put the stacks alternative there.
C: Not really, I think we had some discussions going. I don't know if...
B: Cool, so I'll move on to the next one, and feel free to put that on the agenda at the end if you'd like. "Make build layers read-only for subsequent buildpacks": I think this one is on...
B: This is still blocked. I think Emily...
A: On the bash buildpack: actually, just before this meeting I put up a draft, I think, for the bash buildpack. It was most of the content, but it's not corralled into...
B: Do you want to put that in the agenda? Is it worth chatting about? Just because I'm really interested in that one. My remove-stacks RFC that I put together late last night is probably worse than what you did with bash, so I wouldn't worry too much about it. All right, next thing is "disambiguate layer metadata from BOM data". So this is on the intended SBOM RFC, but I don't see updates on this one. So, any updates?
C: So, a couple of updates since the last time. Sophie and Forrest from the Paketo side helped me out with doing a comparison of the different current container scanning tools: what kind of output they generate in different BOM formats, how complete they are, what kind of fields they output, and how much time they take.
C: I think they also put forward some conclusions from their initial testing. I think the parts that are left are trying out individual CycloneDX scanning tools, which scan the source code instead of scanning the final image. But in general, if anyone wants to take a look, there's this repository that Sophie and Forrest put together around comparing what sort of metadata pack currently outputs as a result of building the application using Paketo buildpacks.
C: So I think if you go in you can see the actual time, which I think may also be a concern for us, because if we're doing this during build time, we do not want build times being affected a lot, where it's spending, like, two minutes just on scanning and generating an SBOM. And there were some interesting conclusions about the kind of fields that we are concerned with.
C: As far as I can see, both CycloneDX and SPDX have appropriate places to put data in. So there's this field comparison that was put together with them, on where the existing fields that are used in Paketo are and how they map to the CycloneDX and SPDX fields.
C: A couple of things to note which are not here: CycloneDX has this concept of pedigree, so you can basically say that you took a source distribution, which is what Paketo or Heroku often does, and you modified it slightly to have that buildpack directory structure layout.

That's actually a concept that CycloneDX has as a first-class thing: you can define a library or a component or application which points to the original source URL, then have another component and connect the two of them through this pedigree, and say that, hey, there were no modifications. You can also specify what kind of modifications were made in the middle.
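The pedigree idea described above can be sketched as a CycloneDX-style JSON fragment. The field names follow the CycloneDX component schema, but the component names and URLs here are made up purely for illustration.

```python
import json

# Illustrative CycloneDX-style component: a repackaged tarball declares its
# upstream source via pedigree.ancestors, with a note describing the change.
component = {
    "type": "library",
    "name": "example-runtime",   # hypothetical repackaged distribution
    "version": "1.2.3",
    "pedigree": {
        "ancestors": [
            {
                "type": "library",
                "name": "example-runtime",
                "version": "1.2.3",
                "externalReferences": [
                    {"type": "distribution",
                     "url": "https://example.com/example-runtime-1.2.3-src.tar.gz"}
                ],
            }
        ],
        # Free-form description of what changed between ancestor and component.
        "notes": "Repacked into the buildpacks directory layout; no source changes.",
    },
}

doc = json.dumps(component, indent=2)
```

The ancestor carries the original source URL, while the outer component is the artifact actually shipped, which is exactly the source-plus-slight-modification relationship described above.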
C: So, although currently it says that the source is an external reference, which means you can't put it there, I think this might be a better way of specifying these kinds of modifications we often have to make when we're shipping tarballs in buildpacks format.
C: The other thing to note is CPEs. CycloneDX can only accommodate one CPE field, whereas SPDX has this thing called package external reference, which is just a list of external references. You can put different kinds of references there, like security references or package URLs, and you can put any number of CPEs, package URLs, SWID tags, and other things.
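The SPDX side of that comparison can be sketched like this: a package's `externalRefs` list can carry several CPEs and a purl side by side. The field names approximate the SPDX 2.x JSON layout; the package and its identifiers are hypothetical examples.

```python
# Hypothetical SPDX-style package: multiple CPEs plus a purl can coexist
# in the same externalRefs list, unlike CycloneDX's single cpe field.
package = {
    "name": "zlib",
    "versionInfo": "1.2.11",
    "externalRefs": [
        {"referenceCategory": "SECURITY",
         "referenceType": "cpe23Type",
         "referenceLocator": "cpe:2.3:a:zlib:zlib:1.2.11:*:*:*:*:*:*:*"},
        {"referenceCategory": "SECURITY",
         "referenceType": "cpe23Type",
         "referenceLocator": "cpe:2.3:a:gnu:zlib:1.2.11:*:*:*:*:*:*:*"},
        {"referenceCategory": "PACKAGE-MANAGER",
         "referenceType": "purl",
         "referenceLocator": "pkg:deb/ubuntu/zlib1g@1.2.11.dfsg-2ubuntu1"},
    ],
}

# Collect every CPE attached to the package.
cpes = [r["referenceLocator"] for r in package["externalRefs"]
        if r["referenceType"] == "cpe23Type"]
```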
C: It's "Common Platform"-something, I'm forgetting.
C: Okay, thank you. It's a way that NVD uses to identify packages. It's another alternative way of specifying: here's the package version, the hardware tag, the distribution tag, etc., and which ones are deprecated. That database is maintained by NVD. And there are two other formats: one is purl, which there was again some discussion about in the RFC itself at the bottom, and there's this other one, which is...
C: Software Heritage. There's also SWID, I don't know what that is, but they're also another way of identifying the package version plus any modifiers, and the security scanning tools typically map these tags to versions that are affected. And yeah, so with CPEs you can have, like, some...
C: That's why you might have multiple CPEs per component. This can also be noisy at times, when you put a lot of fuzzy matching logic in there and your scanner then matches against it. But it might also catch things which you might otherwise miss, if the scanner doesn't have information about a particular CPE that you've specified.
C: It's because sometimes you can have CPEs that match a range of things. Like, the bug was in the source distribution, and then you package it for different architectures and different operating systems, and all of them have the same bug. So that's when you want some wildcards to say all of these things are affected.
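The wildcard idea can be illustrated with CPE 2.3 formatted strings, where `*` means "any value": one CPE with wildcarded OS and architecture fields covers every per-platform repackaging of the same source. The matcher below is a rough sketch for illustration, not NVD's actual matching algorithm.

```python
def cpe_fields(cpe: str) -> list:
    # A CPE 2.3 string is "cpe:2.3:" followed by eleven colon-separated
    # fields: part, vendor, product, version, update, edition, language,
    # sw_edition, target_sw, target_hw, other.
    return cpe.split(":")[2:]

def matches(pattern: str, concrete: str) -> bool:
    # '*' in the pattern matches any value in the concrete CPE.
    return all(p in ("*", c)
               for p, c in zip(cpe_fields(pattern), cpe_fields(concrete)))

# One wildcarded CPE covers builds of the same source for every OS/arch.
vulnerable = "cpe:2.3:a:gnu:bash:4.4:*:*:*:*:*:*:*"
linux_x64 = "cpe:2.3:a:gnu:bash:4.4:*:*:*:*:linux:x64:*"
```

Here `matches(vulnerable, linux_x64)` holds because every non-wildcard field agrees, while a CPE for version 5.0 would not match.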
C: So that's what the CPE thing is. An interesting thing to note is: CycloneDX has marked this as deprecated, with it being marked for removal sometime in the future in one of their specifications, but I don't know when. Patrick, who's one of the core team members of CycloneDX, commented here saying that NVD will deprecate CPEs at some time in the future and replace them with SWID tags.
C: So again, this is another aspect of it: apart from the SBOM format itself, the unique identifier that's used is also something we may have to consider. Anyway, what I have currently put here is this: I propose that we pull out the SBOM into a separate file, which sort of mirrors the layer TOML, launch.toml, and build.toml. So you'll have a layer .bom.cdx.json, a launch .bom.cdx.json, and so on and so forth.
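Under that naming scheme (the exact extensions here are one reading of the proposal, not settled), a lifecycle could recognize a buildpack's SBOM files purely from their names, alongside the existing TOML files:

```python
import re

# Hypothetical layer directory after a buildpack runs; names follow the
# proposed <name>.bom.<format>.json pattern next to the usual <layer>.toml.
files = [
    "ruby.toml",
    "ruby.bom.cdx.json",    # CycloneDX SBOM for the "ruby" layer
    "launch.bom.cdx.json",  # SBOM entries associated with launch
]

BOM_RE = re.compile(r"^(?P<name>.+)\.bom\.(?P<fmt>cdx|spdx)\.json$")

# Map each SBOM file back to the layer (or launch/build) it describes.
boms = {m["name"]: m["fmt"] for f in files if (m := BOM_RE.match(f))}
```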
C: This is simply because a lot of these BOM documents need to be in a certain format, and we cannot put them in that exact same format in these TOML files, unless we want to create a mapping between the metadata table there and the original SBOM document. And the idea would be that the buildpack would generate these documents, and the lifecycle would be responsible for taking all of them, merging them, and then storing them in some appropriate place.
C: Originally, I included the fact that all of this metadata should be moved from a label to a file, but at the last working group meeting it was suggested that that should be a separate RFC, which should be linked to this one and implemented at the same time, with their separate concerns. So, yeah.
A: We talked about that last time, but I cannot remember. There's no restriction, but there are reasons not to have large labels, because when they get too big they can cause bugs on Kubernetes nodes. Like, when the kubelet is asking containerd for the list of all the containers, it's using gRPC, and there's a message size limit. And if you have a bunch of containers with really big labels, you can take down your node.
B: It doesn't... it's not a problem for me, but it...
A: But we probably can't do both... I mean, we have to do both: if the BOM gets a lot bigger because we move to this format, we also need to solve the label size problem, right? Yeah, and that's the whole thing that Sam was saying we're committed to doing: moving it back to a file, just not as part of this RFC.
C: And in terms of the alternatives, there are two options here. One is: support SPDX instead of CycloneDX, the main reason being that it is, again, a Linux Foundation project, and there are people who are pushing for it to become the standard. And it is trying to invest a lot into tooling itself and into filling in the gaps, because SPDX was originally meant to be used for more compliance use cases.
C
It
does
license
scanning
and
storing
metadata
about
licenses
better
than
it
does
about
provenance
and
security
scanning
and
supply
chain
analysis.
So
I
think
spdx
is
trying
to
catch
up
there,
and
I
don't
know
if
cyclone
dx
will
want
to
like
fulfill
the
other
side
of
the
thing
where
they
currently
do
have
a
licenses
field,
but,
like
spdx,
has
things
like
declared
license
versus
what
the
inferred
license
was
or
like
license
comments
from
each
of
the
files
etc.
C: So in this case the responsibility falls on the platform, or something like pack, to take these individual files, merge them together, and produce an output in whatever format is required. And this merging and pulling things from the image could be a library or CLI that's provided alongside pack, so that other platforms can also use it.
C: Because a lot of the time, you would probably want this SBOM elsewhere. Like, in this last discussion with Dan, they were thinking about putting the SBOM up as a separate OCI artifact, or you might also want to extract this BOM and put it into a separate database that's used by a scanner. So...
A: Even if the lifecycle is responsible, you've got to pull all of that data out, merge it into a single file, and put it back somewhere so that the exporter then runs against that same thing, right? Because if it needs to be an artifact in a layer, it has to have been created and put back onto the filesystem in the right place before the exporter is run.
B: You know, I think I lean that way, but our client is already going to be pretty complex, because it's got to look up a manifest and then look up a config blob and then find a label that points to a blob and download the blob. You know, it may not be that different. If we have strong reasons for keeping it separate, we're already...
C: So the reason to keep it separate... like, why not merge it in the lifecycle, and keep it in the platform instead? The only reason was: currently, one of the things that I've proposed here is that, since this whole SBOM conversation is still up in the air, people might want different formats, right? They might want an SPDX-formatted version, or in the future something else comes up and they might want that.
A: You know, pulling all of these things off the filesystem and merging them would also have to change in lockstep, and I just can't see that as a possibility. I think we're better off basically making the lifecycle aware of everything it could be, and then somehow normalizing that into whatever it thinks the internals should be. Otherwise it's totally not dependable, right? Like, suppose you're a downstream client trying to use this data. If you can't depend on it being in one of these formats, then what?
A: What are we even doing here, right? We might as well write our own YAML format at that point, because they don't know, when they look at it: is it going to be SPDX? Is it going to be CycloneDX? I don't know. Especially if you have a group of buildpacks that are making different decisions, and then you've got to merge them together: that makes for a very complex client.
C: The only reason I put in this alternative was that there were some concerns around introducing this as a first-class thing in the lifecycle: whether we want to stick with one format and have to support it forever, essentially.
A: But I mean, I think we should, right? This is actually a promotion of the idea of an SBOM to a first-class construct inside of the Cloud Native Buildpacks project. To date, all of the BOM stuff that we've done has been just sort of: yeah, whatever, you can fill it in, you cannot fill it in. There might be enough information to go figure out what your CVEs are; maybe not, who knows. But we're saying this is a valuable thing.
A: This should be a first-class construct of a standardized SBOM in Cloud Native Buildpacks, which I think puts it on the table that we should have first-class support inside the lifecycle for it. As far as supporting it forever: yeah, okay, but if we want to make a change at some point in the future, and we want to switch from A to B...
A: ...the lifecycle already has all the code for A. It can just hold that for whatever we think the length of the compatibility window is, while also adding B, making sure to convert them to whatever the canonical representation is inside of the project at any given time. And then we eventually drop support for A, you know, when you go to CNB 2.0.
C: Yeah, that was the only question: do we want to go through all of that effort, or should we just move all of that complexity to the platform? That's easier, because there are more platforms than there are lifecycle implementations, so people can still choose.
A: It lowers compatibility, though. That's fundamentally my argument about it: okay, great, we have now made SBOMs a first-class construct inside of Cloud Native Buildpacks, and you have no idea what format a buildpack is going to provide to you. So good luck: you actually need to write support for all three to five different versions. And if we're going to write the library to help them with that, then we might as well just write it into the lifecycle, right?
B: Sorry, thinking about that ability to switch between different formats: we're talking about migrating the buildpacks over to produce different formats anyways. I could see an argument for the format on the image at the end, you know, being allowed to be in some fixed number of formats, and you can look at the name and then you know. And I think there are disadvantages to that, but because no format has really won...
C: But I was imagining this as small. The lifecycle would just verify a couple of things. It would verify against a list of formats that it accepts, so the extension at the end, like spdx.json or cdx.json, is the only thing it has to check; it wouldn't worry about the contents themselves. And then those are the sort of two options that a buildpack can choose to provide things in, and we have good tools to convert between either of them and merge them together.
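That verification step could be as small as suggested: check the file suffix against an allow-list and reject anything else, without ever parsing the document body. A minimal sketch, with a hypothetical allow-list:

```python
# Formats the lifecycle would accept, identified purely by file suffix.
ACCEPTED = (".spdx.json", ".cdx.json")

def accepted_format(filename: str) -> bool:
    # The lifecycle only inspects the suffix, never the document contents.
    return filename.endswith(ACCEPTED)
```

So `accepted_format("launch.bom.cdx.json")` passes, while a YAML or XML SBOM would be rejected before any merging happens.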
C: We could simply add support for SPDX and that other format, and there are already tools for converting and mapping the fields between the two, so it wouldn't be that bad. But again, it's more a question of what we decide to do: whether we want to put more complexity into the lifecycle, whether we find it's worth it providing buildpack authors the flexibility between two different BOM formats, whether it's worth it providing conversion tools as first-class citizens in pack, and then, if you're providing conversion tools...
B: If we need to do a migration to SPDX later... I agree that if buildpacks could output SPDX now, there would be, you know, less work for buildpacks that chose to do that, right? But I think we can provide that tool later. If we really need to do that migration, we could add that functionality later, and buildpacks would have as much time as they want to convert over, because it'll always be possible to convert anything that's in CycloneDX to SPDX before merging it together. So it seems like we can.
C: Okay. So, in terms of putting it in the lifecycle: that's something we do want to do, because I don't want to write an RFC where I'm describing a bunch of these things and then the lifecycle is not really responsible for them; it's just a platform concern, or something else can do it as a post-processing step.
C: That's it. I think we also have an office hours later today where we can ask more of these questions, but yeah, we can move on to the next topic.
B: So it wasn't one field for external references. Okay, that makes sense. And in the SPDX case you can have as many references as you want, but you have to dump all the CPE, purl, and all that into the same kind of group. Is that right? Makes...
C: Yeah, yeah. It was the same argument that Emily gave: that you should have the most pinpointed version of it to match against, rather than giving false negatives. But there are other issues with, like, how good the data is. So...
B: That makes sense. Okay, that's all I had, thanks. To move on, the next thing on the agenda is the stackpacks alternative, so that's mine; I guess Sam put that up there. So did you have specific questions you want to lead with? I was going to leave it up for a little bit first, but I'm happy to talk about it here too.
B: Talking about stackpacks and the complexity stackpacks add: we've also been talking about the general complexity of the project. I've had a lot of conversations with folks who feel like a lot of the terminology in the project kind of makes it hard to understand how things work, because we don't always use the terms that the rest of the ecosystem uses to describe...
B: ...you know, concepts that are very similar, right? Like stacks versus base images, for example, or, you know, what are mixins. So this kind of gets rid of everything at that stack level and replaces it with well-known constructs in the ecosystem, like base images and Dockerfiles and OS package references, and it also merges that with the bill-of-materials stuff, at least more so than it is now. So basically, the summary is: we replace mixins with a CycloneDX-formatted list of packages. That's it. The run image and build image...
B: ...both have CycloneDX-formatted lists of packages on them. It's also the SBOM; the label for those is io.buildpacks.sbom. If you look at the base image, it just looks like we took a regular Docker base image and put, you know, a list of packages on it. I think this would use your RFC to move it to a different file. Also, the... sorry, go ahead.
B: Yeah, it's in there. Also, it uses purl: you put a purl URL (it's down here) into whatever entry is appropriate in CycloneDX for doing that, so you have a reference to them.
B: It's done on the build image and the run image before the build happens. But it also introduces an idea that I think Ozzy brought up a couple months ago, where you can have multiple run images in a builder, and kind of the smallest run image would get selected automatically, based on packages that buildpacks request. And then it also just replaces the idea of a stack with very standardized, canonicalized OS metadata. So I'll try to go over those things one by one.
B: ID and VERSION_ID from, you know, os-release, right? Like ubuntu 18.04, for instance. This is what's used to match, instead of stack IDs.
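That ID/VERSION_ID matching can be sketched with a tiny parser for the standard /etc/os-release key=value format; the comparison policy shown is just an illustration of the idea, not the proposal's exact rules.

```python
def parse_os_release(text: str) -> dict:
    # Parse the key=value lines of an /etc/os-release file, stripping quotes.
    out = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            out[key.strip()] = value.strip().strip('"')
    return out

build = parse_os_release('ID=ubuntu\nVERSION_ID="18.04"\n')
run = parse_os_release('ID=ubuntu\nVERSION_ID="18.04"\n')

# Images match when ID and VERSION_ID agree (overridable with --force).
compatible = (build["ID"], build["VERSION_ID"]) == (run["ID"], run["VERSION_ID"])
```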
B: These always have to match. But if you're doing, like, a rebase, or if you're building with a different custom run image, you can pass --force to either pack build or pack rebase, and, you know, these things don't have to match anymore. So all the compatibility enforcement is overridable and is kind of lighter weight than it was before.
B: It would get rid of mixins entirely. Mixins would no longer be validated between the run and the build image, so you can use any run image and any build image you want. But if you try to rebase with a run image that has fewer packages, then you have to pass --force to pack rebase. So there's a little bit of protection against, you know, doing something that's unsafe in that case. This uses the OS ID, like I mentioned before.
B: During detect, buildpacks can output a list of packages. When you build a builder, you can specify not just a run image and run image mirrors, but an ordered list of run image and run-image-mirror combinations, and the first one that meets all the requirements, i.e. matches all the packages the buildpacks output, is used.
B: ...while the buildpack build process is happening, in parallel, if a platform wanted to put a lot of effort into making it really fast. And then this is, you know, kind of the replacement for stackpacks plus specifying packages in project.toml: it would just be a build Dockerfile and run Dockerfile in the app directory. But these would also be the same Dockerfiles that you would use to create a new stack or to extend an existing stack; the format would be the same.
B: You could reuse Dockerfiles from different contexts to do those different things if you wanted to. So if you had your app with the Dockerfiles and you wanted to build a custom stack, instead of doing that directly on build you could, you know, reuse them in the same context. They're basically Dockerfiles with a parameterized FROM and just a special build ID flag.
B: So if you ever wanted to ensure that every build rebuilds from scratch, or rebuilds from a certain point, you can use the build ID to, you know, create a different instruction as a cache-busting mechanism. And there's a special label you can add that will make rebase require --force to be passed to pack rebase for it to work, if you're making changes in your run image that aren't ABI-compatible. And so we still get that, you know, compatibility guarantee.
A: I think, well, for one, right: how difficult would it be for the stack authors, and I guess buildpack authors, to kind of contract out based on CycloneDX BOM notation?
B: So there's something, sorry, there's something I left out, which is when you're creating a stack. It's in here; I thought I had an example that did a copy. Yeah, here you go. When you're creating a stack, you have to provide some code that understands how to parse the stack's package database, or whatever determines the packages that go into the stack, and output that CycloneDX-formatted list of packages with purls.
B: This is something we could provide as part of the project for Ubuntu, UBI, whatever, but it would automatically get rerun anytime a stack is extended. That way, you write your Dockerfile, you can install packages like this, and you don't have to worry about generating a list of mixins or a list of packages or anything. The stack author is responsible for providing that code on the stack, again.
B: A project could create this logic, but when you create a stack, if you want it to always keep its metadata up to date, you, you know, have to copy that into the stack image. And so the idea is that that's how we solve keeping it consistent and the formats up to date. That's distribution-specific.
A: I guess the other part of that, right, for me, was: would there still be a use case for mixins? Because, you know, I know that we've talked about mixins a lot in regards to OS dependencies and packages, but I feel like there was a superset of functionality, which was just that you could add attributions that then made it match very specific things. And so I do wonder if there are other things that aren't part of the SBOM, right, that could make it useful to still keep mixins.
B: And so, if you added other things to the stack, you could add other entries to the CycloneDX-formatted list, as long as you can come up with a purl that uniquely identifies it. Purl has some generic fields to it, so, you know, you're not limited to npm packages or deb files or whatever; there's a generic key. So you could add things to that list that aren't Ubuntu packages and then expect things to match against it.
A: You know, if the other topic was... if we're ready to move on from that, I just had a simple one about the app run Dockerfile. I guess I'm just having trouble seeing what component runs that, right? Like, is it up to the platforms? Is it literally up to pack to run that?
B: So the lifecycle would handle, you know, using the Dockerfiles to extend the base images in the different contexts. The lifecycle, when you're using a BuildKit frontend, would probably use BuildKit to do it; when you're on a platform that doesn't have a daemon or BuildKit, we'd probably use kaniko to do it, right? Depending on your build strategy, you could do it different ways.
B: I didn't want to get into the implementation too much, I wanted to keep it kind of higher level, but, yeah, I imagine there's another phase that happens before export that, you know, generates the run image, and there's probably another phase before detect that extends the build image. In the creator case, you're probably going to, like, use kaniko to run the Dockerfile against the build image live, before you start the build, so you don't have to have another phase there. You know, there's a lot of...
B: These Dockerfiles can be used... so you can use a build Dockerfile and run Dockerfile, that format of Dockerfile, to make a new stack or to extend an existing stack, both outside of a build, right, or during an app build. They can run immediately, you know, before and/or during the build in order to create the image. They never run during rebase: rebase either fails, if you have this label, with the ability to override the failure and just do it anyways, or it just works.
C: How would... whatever the equivalent here is... like, you have a buildpack that requires certain packages, and it requests them. If it can't satisfy them as it is, how does it take that purl and convert it into a format so that it can install those dependencies? You know how the other one had star mixins, where the buildpack could say "I require everything" and the stackpack could say "I provide star", and then it could just install, like, all the appropriate packages?
B: I'm not very attached to this format, but currently the buildpacks output purl URLs without the version or qualifiers, because they're optional in the purl format. Although it's kind of using purl for querying, which is not great, I couldn't think of a better alternative for matching. But, you know, it makes a big list of those from all the buildpacks, and then it just runs through the list of run images and picks the first one where they're all satisfied; and if none of them are satisfied, it just fails the build. It doesn't...
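That version-less purl matching could look something like this rough sketch, with made-up image names and package lists: buildpacks request bare purls, each candidate run image advertises full purls, and the first image satisfying every request wins, otherwise the build fails.

```python
def satisfies(required: str, provided: str) -> bool:
    # A version-less required purl (e.g. "pkg:deb/ubuntu/libssl1.1") matches a
    # provided purl that adds a version or qualifiers after "@" or "?".
    base = provided.split("?")[0].split("@")[0]
    return required == base

def pick_run_image(requirements, run_images):
    # run_images: ordered list of (name, provided purls); first full match wins.
    for name, provided in run_images:
        if all(any(satisfies(req, p) for p in provided) for req in requirements):
            return name
    return None  # no image satisfies everything: fail the build

requirements = ["pkg:deb/ubuntu/libssl1.1"]
run_images = [
    ("tiny", ["pkg:deb/ubuntu/libc6@2.27-3ubuntu1"]),
    ("base", ["pkg:deb/ubuntu/libc6@2.27-3ubuntu1",
              "pkg:deb/ubuntu/libssl1.1@1.1.1-1ubuntu2"]),
]
```

Here the "tiny" image loses because it lacks libssl1.1, so "base" is selected even though it is larger.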
B: It doesn't install them. There's no functionality for buildpacks to install packages during a build anymore; this proposal cuts that out, because I think that's what led to most of the complexity of the stackpack proposal. It lets you have different-size run images and it'll pick the one that's suitable. And as an app developer, if you need a package for your application, you can use a Dockerfile to install it. So...
C: Now, if you go through this way, your run images or your build images would have to have a combination of all of these, right? If you want the most minimal run image, you would have to accommodate each of those combinations, right?
C: If you're not Python... you put one with libc and, like, all the other basic bits and dependencies, but then if you want to support, like, just Python and Go, or some other language, but not support Java, you can't; you would have to accommodate each and every one of those combinations if you want a minimal image.
B: That's correct. If you want to get every possible minimal combination for every combination of languages selected, and you want it to be a minimal image in all those cases, then you have to supply every combinatorial version of that. So instead, I imagine people would create a run image for each of the languages, and then maybe one really big one at the end. So when you're using multiple things together, you don't get a minimal image.
B: You get one that has pretty wide support. But there's an alternative to this, which I think is really... you know, this was so you could support a builder that has a lot of languages on it, that has some different, you know, run image options. I think the real alternative is to have the app developer specify packages they need for their very custom application in a Dockerfile.
B: Instead, because the Dockerfile runs first, it'll extend the thing with the packages that are on there, and then during the run-image selection process it can select the more minimal one that includes the stuff the app developer added. And so the app developer can use a run Dockerfile and a build Dockerfile in order to create a more minimal image at the end.
B: That means that if there were a lot of requirements... yeah, so I don't think you could use this to generate a perfectly minimal base image, because the run image will be selected before the run and build Dockerfiles are run. So we could come up with something else that would let you, you know, maybe run this before detect on a more minimal one, if you wanted to. I agree they don't work together in lockstep; the intent was more different use cases for solving different problems.
C: The only reason I don't like the app developer specifying all of this is that we break our most essential contract, where the app developer does not have to know about... like, this is one of the most basic things, right: providing dependencies for the buildpacks themselves to run. Now the...