From YouTube: Working Group: 2021-07-22
B
I guess on the lifecycle side: yesterday in our sub-team sync we talked about shipping a patch release just to get in a fix, because we noticed that we were logging sensitive data that we shouldn't be. So we want to get a patch out for that, and then we'll probably be cutting a release candidate for lifecycle 0 in the next couple of weeks.
B
Any updates on the platform side? I guess we don't have any here. 0.20 went out yesterday, I think.
B
All right, any updates from the distribution or learning teams?
C
Distribution, nothing. Nothing I can recall from learning either. We do have a new contribution page on the website — I know people in the past have asked us about the current contribution docs. They were hidden in the community repository, so we moved them to the website. Someone who's new to the project contributed the initial content, and then we formatted it to make it look good, but it's something people have been asking about for a while. That's it.
A
Maybe the last update to note is that we're close to being — it just turned pink, I'll fix that in a second — we're close to being finished with platform API 0.7 and the spec, and we'll probably get that release out in the next week.
B
Awesome. I think that's all the updates; we'll go right into Anthony's RFC on the state field.
B
Yes, I just wanted to highlight a new RFC in case y'all were busy and might have missed it. It's pretty straightforward. In the meta section of every RFC, I want to put a state identifier, and formally introduce a new state for an RFC being on hold. I think we've had a couple of RFCs recently where we don't want new contributions at the present moment, until, you know, stuff has been resolved. So that's sort of the two-fold proposal here.
B
Looks awesome. I think we can introduce more states in the future too. Okay, so we can kind of clear that up and make it easier for people to look back at those existing RFCs. Oh sorry, I keep forgetting — is it "state" or "status"? I honestly don't mind at all; I was flipping between the two when I was writing it, so I have absolutely no strong feelings about it.
B
Maybe I lean more towards "status", but I don't think I have a strong opinion. Any other thoughts about this?
B
These are all very good comments. I would appreciate it if you put them in writing, though, as opposed to right here. Thank you.
C
So I just wanted to share a couple of updates on this RFC. I've been speaking with Stephen, and I've also been observing what's happening with the new Kubernetes releases and other things in the CNCF ecosystem — how they're adopting SBOMs. At least within Linux Foundation projects, it seems to be largely tipped towards SPDX, which means that although it may be easier for us to adopt CycloneDX now, and it also suits us well, we might alienate ourselves from the rest of the CNCF community.
C
I don't know what's going to happen there, because the last time we presented to the secure supply chain working group — right after that I think there was some conversation on SBOMs — and it seems like there's a lack of consensus in the CNCF as a whole on which SBOM format they're leaning towards; they're also trying to figure that out.
C
At the same time, Dan, who works on sigstore, has been working on a proposal for attaching bills of materials to images and also signing them, and I've been working with him to figure out how to incorporate our use cases into that as well. So currently I've updated the RFC with a couple of suggestions — I don't know how others might react to them, but here it is. I've updated it to include SPDX alongside CycloneDX as an SBOM storage format, because those are the two formats that sigstore currently supports, and I've also given up on trying to merge it in the lifecycle itself.
C
So what this means is: let's say we have multiple different buildpacks, each providing different output formats — say buildpack one provides SPDX and buildpack two provides CycloneDX.
C
You could still upload all of them individually and point to the layer that they generated. At least within the buildpack — since they control the layers and they control the BOM format — there's a very easy one-to-one mapping. In terms of a bill of materials that describes the entire target object, I've sort of taken some inspiration from Stephen's gen-packages binary and proposed a merge-BOM binary.
C
So the project would provide a default one, where the input to the merge-BOM binary is this layer directory structure, which contains the bills of materials produced by each of the buildpacks, plus output file locations for the merged build and launch BOMs. So the merge-BOM binary will take these three things in as input; it can read from this directory and output to the build and launch BOMs.
C
A platform could override this merge-BOM binary with one that it supports, and it could map things — like the source SHAs it currently supports, or whatever — to appropriate fields in either CycloneDX or SPDX, and the lifecycle would just use that to do the merging. If such a binary is not provided, the lifecycle would just skip the merging and upload the individual BOMs, which will point to the appropriate layers. And that's pretty much it — these SBOMs can be in different formats.
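The contract being described — a layer directory in, two merged BOM paths out — can be sketched roughly like this. Everything here (binary name, file names, layout) is illustrative, not from the RFC:

```
# Hypothetical invocation:
#   merge-bom <layers-dir> <build-bom-path> <launch-bom-path>
#
# <layers-dir> holds the per-buildpack BOMs, possibly in mixed formats:
#   layers/
#     buildpack-a/launch.sbom.spdx.json    # SPDX from buildpack one
#     buildpack-b/launch.sbom.cdx.json     # CycloneDX from buildpack two
#
# The binary reads every per-buildpack BOM from the directory and writes
# merged build and launch BOMs to the two output paths. If no binary is
# provided, the individual BOMs are uploaded as-is, pointing at their layers.
```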
C
And I've also left some other comments on how we could incorporate build and launch BOMs.
C
So that may be one way of storing SBOMs with different formats: trying to merge them into a singular format, accommodating future versions or types of BOMs, while getting rid of this dependency from the lifecycle itself. And the project could provide a default merge-BOM binary which pack could include during create-builder or something, for the formats we do support.
B
Yeah, I think I have two comments. One is — you mentioned having the separate layers like in the sigstore format. I think using the sigstore format is a really good idea. I'd be really happy if we just dropped everything we talked about for SBOMs — you know, labels, or a label that points to an image or a layer or whatever — and just did exactly what they did upstream and said: yep, we're just going to use cosign to do labeling.
B
I think that would — why make another standard, right? Totally on board with that. But the way they have it, they let you do an SBOM for separate layers, and my interpretation of that is more like: because they're separate objects, you might be using OCI artifacts, and they might be separate pieces of software or individual things. Because upstream they're not thinking about the semantics of the SBOM, it makes a lot of sense, right?
B
In our case — I'm not pushing back hard against this, but it feels like the ask from users is an SBOM for their app that includes all the parts of their app, right? One that may use features of the SBOM format to show that the parts have relationships to each other. Separate SBOMs for each layer in the application — that's further away from the semantics of the application.
B
So I think I'd advocate more for one, although it's not a strong preference; we'd still want to label each of the entries with the thing. And the other thing is very related to that, with the merge-BOM binary. The reason we put gen-packages on the base image, or have stack-provided binaries, is to form a contract between the stack and the API — where the stack could be really different, we need some kind of translation layer that's provided by the stack author.
B
In this case, the merge-BOM binary would be doing something lifecycle-like, but it's generic, right? The lifecycle supports two formats; we're converting — or merging — groups of two formats together in a way that should be as consistent as possible. So I don't see a strong reason to have that binary not be part of the lifecycle process.
C
The main reason I didn't keep it as part of the lifecycle was backwards compatibility. There are a few fields where we have a clear mapping between CycloneDX and SPDX, but beyond that there's no consensus on what field maps to what. So as a buildpack author, or as a project — for example, Paketo has its own BOM format, with standardized language from the previous BOM format that we supported — they now want to map those fields to SPDX or CycloneDX.
C
And this goes back to introducing more hooks in the lifecycle. I know we assume the lifecycle is the unified mediator between all of these things, but that also prevents extensions like these, which would be useful.
B
What happens if you use the CycloneDX-to-SPDX conversion — there's a CycloneDX-to-SPDX converter, right? Is there data loss in that direction? — Yes, because SPDX doesn't have a generic catch-all field the way CycloneDX does. — What about the other way? I know there aren't tools to convert from SPDX to CycloneDX, but could you take all the SPDX fields — is there a way to do this without losing data?
C
There's also data loss the other way. For example, CycloneDX only supports a single CPE field.
C
SPDX supports multiple of them: according to CycloneDX there's only one true CPE field, but according to SPDX you can have multiple, like, approximately accurate CPE fields, and if you convert one to the other you lose the other approximations. Or, for example, there are some annotation comments, or file-specific comments or whatever, in SPDX that you can't convert to CycloneDX.
A
Yeah, sorry, we're trying to stick to the schedule. So I wonder if this was a good introduction to the changes, and maybe we could, you know, circle back for a longer discussion here or in office hours.
B
What's the next RFC on the list, Sam? We should keep chatting — I think we're close, but there are definitely still some things to figure out. Read-only layers.
C
I think so — something we discussed in the RFC roundup in the core team sync yesterday was this RFC. I added one other alternative here. Currently, this RFC sort of exports the layers beforehand, but if some other buildpack modifies a layer, it doesn't actually warn that buildpack that any modification is happening, or that they may be making changes which may not end up in the final image.
C
So the other alternatives I was thinking of: one is that the lifecycle could change the user randomly between each buildpack step and restore it at the very end. The other was that the buildpack itself could specify, in its buildpack.toml, a UID and GID for the user it wants to run as, and the lifecycle would just run that specific buildpack with that user ID.
C
So since the entire creation and generation process runs as a separate user, it avoids anything modifying the layers later on, and that way it also sort of follows the Dockerfile paradigm: if you want to do operations as a certain user and have certain files owned by a certain user, you can switch the user in the middle and then go back.
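The second alternative — a buildpack declaring the user it wants to run as — might look something like this in buildpack.toml. The table and key names are hypothetical, made up here for illustration; the RFC doesn't fix a schema:

```toml
[buildpack]
id = "example/python"
version = "0.0.1"

# Hypothetical table: the user this buildpack's build step should run as.
# The lifecycle would switch to this UID/GID before invoking the buildpack
# and restore the original user afterwards.
[buildpack.run-as]
uid = 1234
gid = 1000
```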
A
I do like any solution that still lets us sort of kick the layerizing out into export, because although I'm open to doing it for all the reasons stated in the RFC, I have some fears that there will be moments where the builder would be hanging between buildpacks trying to finish layerizing. Even if we do everything under the sun to make it as parallelizable as possible, I think there still could be moments where it's making the build phase longer, and that would create the impression of a longer build.
B
I think I definitely prefer not to elevate the builder's privilege. I'm more concerned about the API where all the buildpacks run as a user that the buildpack specifies, because it either—
A
Yeah, I would sort of want us to run them as random users, but all in the same group, and then we can set a strong convention that — what's nice about this is that then, by default, nothing can modify anything it's not supposed to, unless the buildpack goes out of its way to make the things it's creating group-writable, which I think is the convention we want to have for allowing the app to modify things. So it might all fit together nicely.
A
I'm not worried enough not to do it, but worried enough that if there were an alternative, I would be excited about it. Maybe we just want to, like — could we, in a much faster way — so we're not calculating checksums and stuff like that — just have a very fast parallel process that goes and checks the mod times on all the files and makes sure nothing has changed? Yes, a subsequent buildpack could change something and change the mod time back, but at that point you're on your own.
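The mtime check described here is cheap to sketch: snapshot every file's mtime after each buildpack runs, then diff snapshots to warn (rather than fail) when a layer changed. A minimal illustration of the idea — not the lifecycle's actual implementation:

```python
import os

def snapshot_mtimes(root):
    """Record the mtime (in ns) of every file under root -- fast, no checksums."""
    snap = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            snap[path] = os.stat(path).st_mtime_ns
    return snap

def changed_files(before, after):
    """Paths added, removed, or whose mtime changed between two snapshots."""
    paths = set(before) | set(after)
    return sorted(p for p in paths if before.get(p) != after.get(p))
```

As noted in the discussion, a buildpack that modifies a file and then resets its mtime would slip past this check — the point is a fast warning, not tamper-proofing.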
B
Warn or something.
A
Maybe just warn. People are already doing this now, so it at least wouldn't break people — which I know is another concern, that this would happen with buildpacks that exist now — but it would help people know that they're violating the spec, and we could eventually turn it into a failure. For now we could just warn, as a way of trying to corral everything.
B
It is a build tool, right? I think sometimes we look at the thing we're designing as if each part were part of a tightly controlled platform, but with everything that's happening in the buildpack API, buildpack authors will do whatever they want anyway, right? So that doesn't feel like a terrible idea. Do you want to repeat that again for Sam?
A
My suggestion was that, instead of a parallel layerizing process, we just run a process in between builds that checks the mod times on all the files as quickly as possible and warns if something changed. Then we don't have a hard break for people who are already doing this, but we're starting to enforce the spec here.
C
The whole reason I had this was: if you run something like the Python interpreter, it just has side effects unless the layer is read-only — if the user that's running the interpreter has access to the place where the interpreter is, it just creates random cache files, and then you have to go through a lot of effort to turn those off. I imagine you can also have this with other things.
A
Because then you're not keeping it around, right? It's about the restore — that's the problem. If you're saying you don't want to fail when people are modifying these things, the crux of the problem is that when it gets restored, there are changes that the buildpack is not aware of, right? Maybe we could add, like, a dirty flag to the layer TOML to say: this is what you said it was, but I know someone else modified it, so, buildpack, you're on your own here.
B
Before we commit to something — if that makes sense. All right, we will move on to Dockerfiles. This one is mine; let me share my screen.
B
My most recent change is a small change to — I guess I'll just do an overview of the whole RFC first, in the state that it's in, because I don't know where everybody's at in thinking about it, so maybe I'll actually start over here. So the first suggested change — there are two RFCs; I want to talk about them together. The first change is: get rid of stacks and mixins completely, and don't replace mixins with anything. Then, in buildpack.toml—
B
Instead of a list of stacks, you have a list of target platforms. These aren't, like, "it supports this OS, architecture, and version, and also this OS, architecture, and version" — these are actually the things that get built when you do pack create-buildpack or create-package.
B
So, like, this would build one for Linux x86 for Ubuntu 18.04 and 20.04, and a separate one for Linux x86 for 14.04 and 16.04, right? I used the same OS and architecture here to make clear that these are different targets, but you could use this to build for different ones — for ARM and for x86, and so on.
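A sketch of what such a targets list in buildpack.toml might look like. The field names here are guesses for illustration — the transcript doesn't state the final schema:

```toml
# Two targets with the same OS/arch but different distro versions,
# producing two separately packaged artifacts.
[[targets]]
os = "linux"
arch = "x86_64"

  [[targets.distributions]]
  name = "ubuntu"
  versions = ["18.04", "20.04"]

[[targets]]
os = "linux"
arch = "x86_64"

  [[targets.distributions]]
  name = "ubuntu"
  versions = ["14.04", "16.04"]
```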
B
So I left in the CycloneDX BOM just to show that that's possible, but that's the only semblance of mixins that's left. We just have target platforms. There's no more validation between the run images at all — between the build and run environment images at all, is what I meant to say. If platforms want to do validations, they can, but it's not part of the spec anymore. Keep it really simple.
B
Does that mean that that section is completely optional? This section — oh, this section, yeah, targets. If it's optional for platforms to validate, then does that mean it's not required? I haven't had a chance to read it; you may say this in here. — Mixins are optional to validate because they don't exist anymore; like, the packaging — oh okay, packages are optional.
B
Yeah, definitely mandatory — but if you don't specify them, they're not really mandatory, because if you don't specify any of them, then your buildpack is the equivalent of an "any stack" buildpack. It just works, no ifs or buts about it, if that makes sense. It'll just try no matter what; this is just for your benefit. If you don't do anything, it creates one for you, probably for your current architecture.
C
If these are absent, then when you're finally running this and trying to validate it, couldn't you just use the values from there? Like, if you're trying to use a buildpack with a different architecture than the base image and you're trying to create a new builder, couldn't you just fail there or something?
B
Yes, in that case I think you'd definitely use those values. This is just for when you're creating — this is when you run pack create-package, when you're packaging the buildpack; this defines the targets that it gets packaged for. These values in buildpack.toml, I imagine, would never be used after that point, because you'd have to reach into — you'd have to download the buildpack in order to be able to do the validation.
A
This is checking docker inspect, which does have OS and arch, but it seems like — when you go read the — I'm going to just double-check some things; let's continue with your conversation.
B
Anything else on this RFC? Maybe I should just take a look at the comments before we move on to the Dockerfile stuff on top of it.
C
...the stack specifying which OS it is and which version it is, or something — like, if we're going to 14.04, I'm assuming there's a label or something, right?
A
Okay — the OCI image config spec does include OS and arch, but it does not include OS version and the full platform specification that exists in the image index. So I think we can set useful values on images even if we're not creating a manifest list, but we can't set all of this data. So I think we need to preserve it elsewhere, so that we can access it in the case where there is no manifest list, like the daemon case.
A
I think my last comment on this is about — I like the idea of removing mixin validation. If the Paketo team is making a bionic stack, you put something in that says "it's this one", so that when you're trying to rebase — you normally want a newer version of the same thing — it would be a way to indicate that a new run image is a successor to a previous one, but in an arbitrary way. It's not about trying to understand all of the compatibility, just about understanding the intention, and it's something you could use a force flag to override if you didn't want it.
B
Like an image ID — I mean, not like a Docker image ID, but, you know, some way of identifying the image. I think that goes well with the idea — that might be useful, because run images will no longer be linked to build images. Buildpacks deal with the stack ID right now; buildpacks may want to know about what run image got selected, if that makes sense. I think I said this in a kind of similar comment, but I wonder if that could be a separate RFC: a run image identifier that we could enforce has to stay the same, and has to get passed into the buildpacks as an environment variable. Buildpacks could then use it, for instance, to say: oh, I know the run image is a scratch image, so I'm going to statically link everything, or something like that. Really supportive of that idea too.
B
Sorry, for clarification — that's a separate run image ID from a stack ID? — Yeah, the stack ID would go away, because there are no more stacks. There would just be a run image ID, not related to the build image, that gets fed into the build process and also used to validate rebase — combining the two ideas, the one that was in the comments here with the one Emily just brought up.
A
Cool. I think it makes sense with how people talk about these things in the wild. Like, Paketo has two stacks, full and base, that share a stack ID, and people are always talking about: is it the base stack or is it the full stack? That's kind of how people are naturally thinking about it — this is a line of images that is produced for a purpose, and I'm getting the next one.
B
Yeah, agreed, makes sense. I think I'm comfortable with having an ID for the run image; I just don't want to link it to the build image anymore. And I'm not convinced that we need it for the build image, necessarily, because now you can use many different run images with the build image, if that makes sense.
A
I could imagine maybe wanting a different one for the build image, because theoretically we could be rebasing builders, even though we don't — it's like a newer version of the same build image. I feel like it makes sense, even if it's not as obviously useful all the time.
B
Subsequent RFC for that one, for sure — how about that. All right, I'd like to move on to the Dockerfiles part. — Oh, sorry, someone else — I was just trying to understand how validation realistically takes place. I think in your comment you said builder authors would just look at the documentation of the said buildpacks to find out which stacks would work with them.
B
Could you expand your thoughts on that a little more? — That was just about packages. So it's like: if your buildpacks require packages that aren't usually installed in the base 14.04 or Ubuntu bionic — or, you know, the Ubuntu trusty image — then you have to put that in your buildpack's documentation, so people know that they, you know, either—
B
There's no way for the platform to give you any help on selecting a base image when constructing a builder. — Well, that's the next RFC, sort of. — Oh. There would be no way for a platform to realize that a given buildpack needs a specific package. — That aside, yes, that's true — unless, you know, there were a separate contract for buildpacks defining required packages, you know, for—
A
Rather than requiring everyone in the world to put accurate metadata everywhere so we can do that magic — which I feel is asking a lot — I wonder if a better way to do it would be to have a buildpack be able to provide hints, basically. Like, if we had build image and run image IDs, a buildpack could say: here are base images that I know I work with.
B
I think that makes sense. We've only got eight minutes left and we haven't talked about the Dockerfile part yet, so I kind of want to move on unless people feel strongly — but I agree we could do more to hint there. We're just trying to remove unnecessary complexity for buildpack authors. Okay, so I'm going to try to go through this and get some feedback really quickly. Basically:
B
This introduces a new — instead of allowing buildpacks to install OS packages, which I think introduces a big risk: as soon as a popular buildpack says "I'm going to apt-get install curl" or something like that, then nobody in the ecosystem who's using that buildpack can use rebase anymore, because it modifies the base image, right? With stackpacks we kind of solved this by saying:
B
Well, there's a special kind of buildpack called the stackpack, and it can install those packages, but separately. Instead of going that route and trying to integrate it into the buildpack API, the approach here is: make it so that the things you would do as a builder author or stack author to create a stack that matches a buildpack — things you could always do before, when creating the builder — can happen before the build process, dynamically. And so everything described here you could sort of read as:
B
The hooks that this introduces to install packages — you can always remove those hooks and create a more narrowly scoped builder image, and none of the buildpacks have to change; everything works the same way. So it's just a robust mechanism for dynamically ending up with the run and build images that you could have pre-baked, right, but now it can happen immediately before the build process, with Dockerfile-like caching. The advantage here is that it doesn't interface with the buildpack API at all.
B
It just lets you end up with the perfect images. You sacrifice a little performance at the beginning of the build to end up with perfectly sized — if that makes sense — run-time and build-time base images that match the buildpacks you're selecting, but the buildpack API doesn't change. And the way this looks is: inside of builder.toml you can specify — this isn't a final thing, this is just an idea for what it could look like —
B
You can specify hooks that are essentially Dockerfiles, or executables that output Dockerfiles — you can choose which — and you can map those to specific buildpacks or not. If you don't map a hook to a buildpack, it just always runs; the Dockerfile always gets applied. If you map it to a buildpack, it only gets applied when that buildpack is participating in the build.
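The hook-to-buildpack mapping being described could look roughly like this in builder.toml. Purely illustrative — the speaker notes the format isn't final, and these key names are assumptions:

```toml
# A hook with no buildpack mapping: always applied.
[[hooks]]
dockerfile = "hooks/always.Dockerfile"

# A hook applied only when the mapped buildpack participates in the build.
[[hooks]]
dockerfile = "hooks/install-ruby.Dockerfile"
buildpacks = ["example/ruby"]

# Alternatively, an executable that outputs a Dockerfile.
[[hooks]]
exec = "hooks/gen-dockerfile"
buildpacks = ["example/node"]
```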
B
Eventually it could be LLB JSON instead of Dockerfiles, if we wanted to get really fancy. As an example of implementing app-specified Dockerfiles — so apps could have Dockerfiles — you could do this with a hook, because the Dockerfiles execute in the context of the app directory; if they're executables, the hooks execute in that context and could read Dockerfiles from the app directory. So this is an example of how you could let application developers specify their own build and run Dockerfiles.
B
Alternatively, say you have different buildpacks that require operating system packages to be installed. You could put hooks on the builder that, for those buildpacks, do something like install Ruby or install Node.js — and so, as a builder author, you can now kind of create builders.
B
The Dockerfiles would be in this format where the base image is injectable, because it's chained through an ARG; there's a build ID you can use to expire the cache; and there's a gen-packages binary, explained here as a way of generating an SBOM — we talked about that in the previous working group meeting, so I won't go into too much detail, there's not much time left. Eventually — so I opened this other RFC for dynamic run image selection.
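The hook Dockerfile format being described — an injectable base image via an ARG, plus a build ID for cache busting — might look like this. A sketch under those two stated features; the ARG names are assumptions:

```dockerfile
# The platform injects the current build or run base image here,
# so one hook Dockerfile can apply on top of any base.
ARG base_image
FROM ${base_image}

# Referencing build_id invalidates the cache for the layers below it,
# so the package install can be forced to re-run on a new build.
ARG build_id=0
RUN echo ${build_id} && apt-get update && apt-get install -y curl
```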
B
We wouldn't have to add more complexity to do that at all. We've removed — there are no more package lists in buildpacks at all. Instead, that could be implemented by a new hook format called "ref" that just changes the ref, and because you can map that to buildpacks, that gives you dynamic run image selection: if this buildpack is present, use this run image. And so this kind of hook API can be used to do anything you want to end up with the right build-time and runtime base images before the build happens — that's the idea. But again, the big point is that it's—
B
—not integrated into the buildpack API. There are definitely some alternatives here. One idea Emily had is that we could integrate this more into the buildpack process — like, by having the hooks participate in the build plan instead of feeding buildpacks.
B
I think that gets a little dicey, because you have to feed the requires into the hooks, and suddenly they start looking like buildpacks, and there are a bunch of restrictions — like, they'd have to be provide-only; you might want the provides to be static, but I guess they wouldn't have to be. It starts adding the complexity that, you know, got us to stackpacks. But I think there's probably a line in the sand we could draw where that kind of works right on top of the Dockerfile API.
B
It does reduce a lot of the complexity. Something Emily and I were talking about yesterday: output the Dockerfiles as instructions from the buildpack, instead of having the buildpack — or the stackpack — execute directly in the context of the container.
B
I think one of the risks, or maybe the trade-offs — with regard to buildpacks not being able to install their own apt packages or whatever — is that this may perpetuate builder-specific buildpacks: the buildpack that needs to install something, that needs to do a thing, whatever — the builder has to support it.
B
So it seems like it's not unsolvable — I think the buildpack IDs would let you, as a builder maintainer, say: oh, I'll add this hook to support that buildpack. But I think that's potentially a risk. — The idea is that anything hooks can do, you could do to your base images instead, and so this shouldn't perpetuate anything that isn't already perpetuated by people having the ability to use different base images, if that makes sense. And I'd—
B
—imagine that you'd create a tool that pre-applies hooks, right? So: your build is slow because it uses these hooks; you run some tool on your builder and it outputs a builder with all the hooks pre-applied; now you have a faster builder, because all it's doing is automating the process of creating a more narrowly scoped builder for a set of buildpacks. Any outcome of this — well, I agree it makes it easier to customize builders in some ways, so you could certainly perpetuate that, right.
B
Because there's no feed — you're not feeding in the detection output of the buildpacks or anything like that. As a builder author, you could say: I'm going to put these buildpacks for this app here and run these commands ahead of time; or you could say: I'm going to put hooks here and it'll slow down the build process, but that stuff will happen dynamically before build.
B
Charles, who's been chatting us up in Slack — I think one of the drawbacks he's brought up with Dockerfile, or LLB JSON, or other mechanisms, is the inability to write tests; like, a Dockerfile isn't a testable thing in terms of writing unit tests for it. But, I don't know, I would argue that the job of any of this stuff is massive side effects that are difficult to test anyway, and I think your testing mechanism is: run the Dockerfile, or whatever the thing is — if you're going to test it at all, it's got to actually do the thing.
B
Would it run the hooks more than once, or would the hooks already have been applied from the first build? — I think it'll use Dockerfile caching, so if you use this build ID somewhere, it'll start rebuilding from there. Because it supports app-specified Dockerfiles, and the app directory could change — since you can read the app directory — you kind of need to reapply it, but there is the Dockerfile caching: if nothing changes, it should be very fast.
B
Empty directory — the tarball sent to the Docker daemon is empty. One idea I thought about, as an alternative down here: we could use directories here instead of the names, and then these could be the context of the Dockerfile, if you wanted. I wouldn't be too opposed to that; it'd be a little weird, because we'd be feeding another Dockerfile into that.
B
Oh yeah, I've got to drop because we're a couple of minutes over and I'm late to something. If people have more comments, please leave them on the RFC. Glad to hear nobody is extremely freaked out by the idea and thinks it could never work — at least probably. I was, you know — I wasn't really sure.