From YouTube: Office Hours: 2021-07-01
A
So I opened up a bunch of RFCs last night that are split out from the original remove-stacks RFC. Let me share my...

A
So these three... I think a bunch of people here have had time to add comments to them. Maybe I'll go through them really quickly and see if people have questions. I'm going to start with this one. The first one basically just removes the stack and mixin concept: it renames stacks to base images, replaces mixins with just a CycloneDX-formatted list of packages on the base images, and moves to standardized metadata for, you know, matching on stacks.

A
Here's an example of what it looks like to define compatibility with different images for a buildpack, and this also kind of introduces multi-architecture buildpacks. It would let you build a manifest index from a buildpack that works on different architectures when you pull it.
B
Yeah, I have a general question about this set of RFCs. How do they play a role in the ongoing work on stack buildpacks as a whole? Right, like, there was a pre-existing RFC that said stack buildpacks are gonna work this way, and the implementation team, I believe, broke it down and said, you know, this is what we're gonna start working through, and this is our projected completion period.

B
It seems like some of this definitely plays a part in that. So I am curious to know, maybe at a high level, what the expectation, or maybe the form of moving forward, looks like in regards to stack buildpacks as a whole.
A
I think these defer stack buildpacks, or any additional functionality aside from an escape hatch (you can use Dockerfiles to extend your base images), until after we deliver something more simple; then we can come back and consider whether we need a more complex abstraction. You know, that could involve generating Dockerfiles. It could involve using the same APIs we created in different ways, or making that interface more generic. It could involve...

A
...you know, a BuildKit frontend that's different from Dockerfile, but where we support Dockerfile-like files that let you add OS packages. But I think it's easier to start from a place of "let's deliver this simple baseline functionality and then revisit stackpacks later." So I don't know if that means we want to withdraw the stackpack RFCs, if people like this direction instead or want to go this direction first, or defer them officially somehow, but I would imagine that we would put down that work.
B
Okay, so by, I guess, approving or accepting these RFCs, it would be pretty disruptive to the ongoing work on stack buildpacks that's currently happening, right? Or to the trajectory, at least. Fully disruptive.
D
I only have a question about introducing CycloneDX in this RFC versus something else.

D
I mean, if the point is simply to replace mixins and stacks, I think we can do that with just purls and this new distribution thing, platform thing, and that would give us the exact equivalent. I think a CycloneDX thing is an addition; it's not an alternative.
A
I lean against creating a separate list with the same information, just because I think CycloneDX is just a JSON list. I don't think it's that hard to parse. I'm not convinced that we need to duplicate the data, but I'm not very strongly opinionated about this. I wouldn't let it block if others felt...

A
...you know, that we really should keep a separate list of purls there; I'm just not convinced. The separate list of purls could go on a label, but I think it could be too long, so we might have to move that into a separate layer. Also, that makes me feel like: okay, we're already going to have to grab a digest, download a blob, open it up, open a file, and now we're talking about two files, where one is a flat JSON list and the other is a more complex object.
D
Wouldn't they just put the additional things that they think go in there? And if you're just taking an existing stack with mixins and you're trying to convert it to this format, I think the easiest thing would be to take that stack, convert it to that platform-distribution thing, take your mixins and find the equivalent purls for them, and that's it.
A
I don't think there's any way to conveniently go from current mixins to this format, and I think, if we're going to make people change the format, then the nicest thing we can do is use CycloneDX. Because then they can just run a tool against their image that spits out the exact thing they need, in the right format, and paste that onto the thing, as opposed to having to figure out how to parse their APT database, pull out just the package names, and remove things that have the wrong prefixes.
A
I would say just the package names are required for this RFC. I don't want to create... I think the other RFC, around a standardized metadata format, would be a better place to assert that there are things besides the purl-formatted package names. But, I don't know, as long as it's valid CycloneDX, I think we should be pretty open to allowing, you know, whatever, as long as there's an identifier that can be matched against for vulnerability's sake.
C
I'm on the fence here, but I'm a little bit inclined to want to decouple this stack-validation stuff from the SBOM stuff, both from a moving-forward perspective, so we don't have a giant tree of dependencies between our RFCs, but also to make life easier for platforms and other tooling, so that not everyone needs to get a tool to parse CycloneDX to pull out the relevant information here.
A
Agreed. If you're a platform and you're willing to pull the label and parse the JSON on it, and the platform isn't creating stacks, they're dealing with existing stacks, then that format could be easier. I don't think it's much easier than a JSON-formatted list like the existing one, though, because it's also a list with purl names in it. But I don't have a...
C
Yeah, I'm on the fence. I feel like we should be keeping these things separate, so if we ever want to, you know, change the SBOM format... SBOMs are this nice-to-have that some people always do more thoroughly than others, for traceability and licenses and vulnerability scanning, but I'm a little bit leery about making it an integral part of the behavior of a build. Something about that feels wrong to me, but I can't quite put my finger on why.
D
It would be decoupled from how it's represented in the file, which is the SBOM. So I guess you could also argue that, if that happens in the future, the lifecycle could just take a look at the CycloneDX form and convert it to the SPDX one, or whatever. But, I don't know, it just feels like maybe this RFC should just say that purl is the identifier for packages, and then leave how this file is generated as a separate thing.
D
I mean, we need one anyway for the stack SBOM thing, right? I was imagining that's where it would go. But we still have to figure out how we use this SBOM, right? We haven't figured that out yet: how this would be used when you do a rebase operation or a build operation, and how you update the final SBOM of the image.
A
Rebasing is interesting, because we need to preserve the top, the buildpack layers, in a separate file, right? For sure, yeah, rebasing is a good one; I should go there. I'm okay with blocking, so: this one's blocking the other two, and I'm okay with blocking this one on the SBOM one if we feel like we need to decide on CycloneDX first. But I want...
A
...that... like, what if we change the format? Because we're telling buildpack authors: this is the format, you have one format, and this is what you're supposed to output. Because otherwise, any buildpack that doesn't output it spoils the SBOM, right? Or, if it chooses to use a different format, you're just never going to get the data you want. I think we have... we say: yes, everything outputs in CycloneDX. But the way you've...
A
I like the way you've kind of orchestrated the other RFC, or the architecture for what the file names look like and everything, because it means we could introduce another format in the future, right? But we would control how that's introduced. So we could say you can do SPDX, but CycloneDX is still mandatory; or we could say you can do CycloneDX and we'll convert it to SPDX, maybe even the other direction. But we, as a project, can control how those additional formats are introduced, and not kind of leave...

A
...create this thing where you never get a valid SBOM at the end, because, you know, one of your 30 buildpacks doesn't output in the same format, right? Or maybe it's: okay, let buildpacks do that, but we'll convert in some directions in the future, right, and there'll be a way to control it also.
D
Yeah, I mean, I think it would be nice if we could show users a minimal example of the CycloneDX SBOM that we expect. I think it would be fairly easy to construct: you just need the version number, and then, for each of these dependencies, you need an object with the identifier, which is the purl, and that's it. I think everything else is optional, so it's just a list of objects with purls.
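A minimal sketch of what such an SBOM, and a consumer pulling the purls out of it, might look like. The field names follow the CycloneDX JSON format; the package names are made up for illustration, and treating everything beyond the purl as optional is this discussion's assumption, not a spec guarantee.

```python
import json

# Minimal CycloneDX-shaped JSON: a format/version header plus a list of
# components, each carrying a purl as its package identifier.
sbom_json = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.2",
    "components": [
        {"type": "library", "name": "curl", "purl": "pkg:deb/ubuntu/curl"},
        {"type": "library", "name": "openssl", "purl": "pkg:deb/ubuntu/openssl"},
    ],
})

def purls(doc: str) -> list:
    """Pull the package identifiers (purls) out of a CycloneDX-shaped JSON string."""
    return [c["purl"] for c in json.loads(doc).get("components", []) if "purl" in c]

print(purls(sbom_json))  # ['pkg:deb/ubuntu/curl', 'pkg:deb/ubuntu/openssl']
```

This is essentially "a list of objects with purls" as described above, wrapped in just enough structure to be valid CycloneDX.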
A
I like that plan, because then we're not coupling it to CycloneDX; we're just coupling it to some JSON that happens to look like... that will work with CycloneDX in the future, should we move forward, and then it doesn't block. So that's not a bad idea.
D
Yeah, and I think that would also help when we write the migration guide. We can tell people: here are your mixins, and even if you don't want to use the CycloneDX tool, this is the minimal format you have to convert to, which is technically just a list of objects in JSON with one additional field. And that should be the minimal thing we validate against. So that would be a good migration path also.
A
Yeah, so it's kind of explained up here, but more vaguely. Basically, each entry in platforms creates a new, separate buildpack image that's part of a manifest index, with the OS and the arch. So OS is like linux; architecture is, you know, the machine architecture; and then there's a name for it. And then, optionally, below that...

A
Right, optionally below that: this creates a single image, right? This image is compatible with Ubuntu, and then, in the case of 18.04, requiring these packages in the build image, and, in the case of 20.04, requiring these packages in the base image. But this is just one image that gets created, and this...
A
Because packages can change across versions, right? It's actually also available under distros, in case your distro has the same package names for all of its versions. In this example, for 14.04 and 16.04, I pretended that they have the same curl package, so you can just assert across both of them and don't have to repeat it. But if, say, the 18.04 and 20.04 packages are different, you can specify different lists and they'll combine.
A
Oh, like this, up here. Yeah, so Emily had a question about... I do like... maybe, because Debian and Ubuntu do share some package names, should you be able to, you know, have an entry here that's like a name, right, and then "debian", and then it'll match against the ID_LIKE field in os-release, and let you do that? So that could be one way we could solve that. But otherwise, no, I didn't envision it.
B
Yeah, I think where I've seen it more is with commercial software. Some companies will publish just one thing, and they'll call it "debian", you know, but it works for Debian and it works for Ubuntu, anything Debian-like.
A
So, up here, all these fields are canonicalized against os-release. Or at least these two are os-release, and these two are kind of the standard; I used the ones from Go, you know, the architecture and OS names. So that's os-release: on any Linux distribution, it gives you an ID that's really consistent and a VERSION_ID that's also really consistent.
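For reference, os-release is a plain key=value file, so pulling out the ID and VERSION_ID fields being referred to takes only a few lines. This is an illustrative sketch, not the project's actual tooling:

```python
# Parse the key=value lines of an os-release file into a dict.
# ID and VERSION_ID are the fields os-release defines for the distro
# name ("ubuntu", "debian", ...) and its version ("20.04", ...);
# ID_LIKE lists related distros.
def parse_os_release(text: str) -> dict:
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        fields[key] = value.strip('"')  # values may be double-quoted
    return fields

sample = 'NAME="Ubuntu"\nID=ubuntu\nID_LIKE=debian\nVERSION_ID="20.04"\n'
info = parse_os_release(sample)
print(info["ID"], info["VERSION_ID"])  # ubuntu 20.04
```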
C
For what it's worth, the Go enum values for GOARCH and GOOS are also what's in the OCI spec for os and arch. So it's not just Go conventions; it also matches what the OCI conventions are.
B
I think it'd probably be easier to do the OCI one, just based off of feedback. I mean, I don't really care where the list comes from, but I do think people are trying to get away from "we just do things that way because Go does." There's a sentiment of not wanting to have to be a Go expert.
A
Technically... okay, so that's what I thought of first. Emily and I had this long, long chat about whether you can use package URLs to query things, and, technically, these are valid package URLs, because the version in package URLs is optional. Now, the intent of that version being optional... originally I thought it wasn't for querying, that it was for, you know, things...

A
Sometimes things don't have versions, and you have to use qualifiers to describe them, which are also optional. But there are issues open on package-url, with maintainers who talk about the ability to use wildcards in purls, and they clearly feel like it is a thing that you can use to match, which I didn't expect. Emily had convinced me that we shouldn't do this, until I saw that.
A
I would prefer it if these were valid package URLs that are used to query, because it seems like a thing they talk about upstream. But if people feel strongly that these should be extended a little bit more to support wildcard matching of the distribution, and that we need a way to generically match something across distributions...

A
...I would be okay with that, but I think I'd prefer ID_LIKE instead, up here, right? Along with that, rather than not specifying the distribution at all; not specifying the distribution seems... I have very little... You could convince me, maybe, that Debian's and Ubuntu's curl should both be called "curl", because this package comes from here, and it's inherited by this one and built over here, and, you know, that makes sense. But, you know, between Ubuntu and Fedora, is that curl going to be the same?

A
They have an ID_LIKE, like, they're... actually, I don't know about Fedora, but at least...
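A rough sketch of what treating a version-less purl as a query could mean. This simplified matcher ignores qualifiers, subpaths, and percent-encoding, all of which a real purl implementation would have to handle:

```python
# Match a version-less purl "query" against fully-versioned purls:
# pkg:deb/ubuntu/curl should match pkg:deb/ubuntu/curl@7.68.0-1ubuntu2,
# but not a purl from a different distro namespace.
def purl_matches(query: str, candidate: str) -> bool:
    trimmed = candidate.split("?", 1)[0]        # drop qualifiers
    base = trimmed.split("@", 1)[0]             # drop the version
    if "@" in query:
        return query == trimmed                 # exact-version query
    return query == base                        # version-less query

assert purl_matches("pkg:deb/ubuntu/curl", "pkg:deb/ubuntu/curl@7.68.0-1ubuntu2")
assert not purl_matches("pkg:deb/ubuntu/curl", "pkg:deb/debian/curl@7.68.0")
```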
C
I think ID_LIKE could be useful, but I'm sort of interested in punting on it for the first version, because I'm thinking about, you know, what we put in the platform fields of these images and how we pull them down. There are very canonical ways to do this for OS and version, and therefore to select the image you want out of the manifest, but for this it's not as obvious how to do it.
A
It'd be very easy to tack all this on top of the current RFCs. Even if somebody wanted to open an RFC pointing at this RFC, before this one is through, that says we should do this, I would be supportive of that. I just want to get consensus on the basic thing first. This already adds more functionality than what we have right now; right now, you have to specify the list.
C
The last thing I sort of wanted to see in this is what we're going to do with assets, because we built stack fields into the assets in buildpack.toml when we approved asset packages. Do we then replace that with a platform, so we move them under different platforms?
C
There are some really good things in there, but I've started to have existential worries that, you know, as we've talked about simplifying things... at least moving assets around in their own images, I think we made it too complex, to solve too many problems, just like we did with several other things.

B
You know, I feel like the eyes are on me, but I have not...
A
So it seems like maybe there hasn't been much ongoing work on that one. I don't know how much it matters.
C
I think, if we don't want to do exactly what's in there now, it will affect the lifecycle release. I feel bad for the lifecycle team, like, ripping out all the things we told them to ship, but it's probably the right thing to do if we're gonna change it. I think we need to talk about asset packages, and that is a whole different conversation, I guess.
B
Yeah, so if the question is, in general, what the status of asset packages is, there are two aspects to it. There's the work that was necessary on pack to actually provide that functionality, most of which, if I recall correctly, is complete, right? They put a lot of effort into getting that to a finalized state. The only hang-up that I recall was essentially the other part of it, which is the changes to the lifecycle.
B
...wouldn't be there, that's right. So, if my understanding is correct, I believe Anthony was taking up some of that effort that was remaining.
B
But again, I can't really speak to that. Yeah, I did gather a lot of context from Dan before...

A
...he took his absence.
B
Whether he'll be contributing or not... you know, so far, it doesn't look like that's the case, right?
B
Absolutely, right. I'm just saying, for my part, I also want to know that it is of value to people other than Dan as well, right? Like, if that's the case... Some of the other issues that he was partaking in, absolutely, those other ones I'm spearheading, I'm taking ownership of. This one in particular, I haven't really heard anybody speak about other than him, so that's sort of the missing piece for me.
A
Excuse me. It lets you use Dockerfiles to extend stacks, and it proposes what the tooling looks like around that. I think the key feature is that it lets an application developer, if their platform allows it, put Dockerfiles in there, do a pack build, and have the Dockerfiles execute before the build, to give them a customized build and/or run image.
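As a rough illustration of the shape being discussed (the ARG name and package choice here are assumptions, not the RFC's exact contract), an app-provided extension Dockerfile could be as small as:

```dockerfile
# Hypothetical app-level run.Dockerfile: extend whatever run image the
# platform supplies with one extra OS package before the build runs.
ARG base_image
FROM ${base_image}

USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*
```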
A
Dan has some really good points here about security. I have some examples of Dockerfiles in here, if people are interested, and there are really good questions from Charles.
B
Yeah, yeah, no, I'm super supportive of exploring this. I think I mentioned yesterday, in just another chat: when I first proposed stack packs, I actually called it root buildpacks. That was about a year and a half ago, and the intent was actually very similar to this, but at that time I wasn't comfortable having, like, Dockerfile support, or integrating with Dockerfile.
B
I think that had a lot to do with optics and where we sat in the ecosystem. A year and a half later, after understanding what it takes to do stack packs, right, and with a change in our maturity and how we're perceived in the ecosystem, I think people look at buildpacks today and ask how buildpacks fit in with everything else, not "do I choose it or something else?"
B
I think in the earlier RFC I asked, you know, what if we supplemented this with sort of the original root-buildpack idea, where bin/build just runs as root and there's nothing else to go with it. And I'm pulling back from that, because I actually think the challenge there is perception about what kind of guarantees you have. If you're going to have a bin/build like that, I think you're implying that rebase works.
A
We'll have a lot more data on how people are using it, and we won't be blocking people. You know, right now, all the time you get: "yeah, I can't use buildpacks for my app; I guess I'm going to go to Dockerfiles," right? And the reason isn't that the app isn't a good app for buildpacks. It's like...
B
I did have one question. I know you responded, but I didn't have a chance to look at it: the gen-packages thing. I think I asked why that wasn't optional; can you just kind of cover that?
A
I don't know if that was in this one, or maybe it was in the previous one; I forget where that was. You did have a comment on gen-packages; I definitely remember that.

A
Yeah, yeah, I think maybe it was in the other one. It's right here, right, the one you're talking about? Yeah, it was in the other one, because I remember... I said "stack"... or maybe it was in an earlier version of this one; it said "stacks" everywhere, and I replaced it with "image" instead, because...
A
So it is optional, but it's optional in the sense that you still have to acknowledge that, you know, you're skipping something. It's supposed to output a CycloneDX-formatted list of packages. It can just no-op if you want it to, but I didn't want to make it so that you can make a stack and have no idea that, even though all the other tooling around you is contributing metadata, a whole bunch of your stuff is going to get missed.
A
The reason it's executable is so that, when you apply the app-specific Dockerfiles, or extensions, people don't have to worry about this at all. Only the original stack author has to care about it, and they can care about it by grabbing tooling from the project that understands the distribution and how to generate the CycloneDX listing.
B
Absolutely, okay. Oh, so go down... yeah, I get it. Go down to your example real quick, yeah. So the first example is the OG base image, right? Exactly. And this... but does that still take base images, or is that just an arbitrary ARG, not necessarily one that's...
A
Yeah, so I imagine that there'll be a pack create-stack command that's used when you're creating or extending stack images, right, and that it'll take these Dockerfiles as input, and that this gen-packages thing will be made available for distributions through the project, or we'll publish locations, an easy binary you can grab to do this. If we wanted create-stack to automatically copy that in at the end, either with a flag, or automatically if it's not there, if it can detect it based on os-release, I am totally supportive of that.
A
I think that's kind of impossible in some ways, because, in the end, it's a Dockerfile running, right? So there's not much we could do. We could take a checksum of it ahead of time and make sure it doesn't change, but I feel a little bit like, at the point where you can control anything in the image, even if we take a checksum of it, they could set LD_PRELOAD to replace it with different code, right?
A
It's not a solvable security problem. You could make something that, like, you know, tries to prevent you from doing that. But also, in your app-specific Dockerfile, maybe you, you know... maybe you install Nix, and then you install Nix packages, and you want gen-packages to execute the Ubuntu gen-packages and then the Nix gen-packages afterwards, to, you know, work together there. I don't see a reason to restrict the flexibility of subsequent Dockerfiles being able to understand and even...
B
Yeah, I get what you're saying. Maybe a warning or something, you know, when it does change.
A
That is something that the project handles entirely for you. And in that sense it is top-down, in that the original stack author, you know, definitely creates that binary, and nothing replaces it unless somebody replaces it really explicitly, in a Dockerfile that's running as root.
A
Right, yeah. Or, like, I mean, if you made one that had an original list on it, it would run at the end and stamp a thing on it. If you made an extended version of it, it would re-run it and replace it. If you did an app-specific one, it would use the output of the app-specific Dockerfile's gen-packages and not the earlier ones.
D
Okay, whatever is running this, like docker build... couldn't it just run a random binary that installs the packages? Like, does it have to run this, take this Dockerfile and run whatever instructions are there, versus if it just executed a specific binary in the stack after detect?
A
It's an important thing to talk about, because I don't feel like... I think this has kind of already been said: I don't feel like it's functionality that shouldn't exist at all, automated installation of packages with some interface. It's just, I think we spent like a year and a half discovering that buildpacks are really good at operating at this layer. They provide a ton of value at that, you know, strong contractual dependencies, run...
A
...and language modules, on top of OS packages, right? And OS packages are a really hard problem to solve, right? And so this tries to solve the easy problem that people have first, and then says we can come back later for the OS-package problem, if we really think we need to solve it. But no one has ever solved that problem.
A
There are no good solutions for it, aside from distribution-specific package managers, and their operating-system packages are functions that operate across the whole file system. There's no way to rebase with them, right? Each package is just an arbitrary change. So we should think about it more, but my goal was to scope it: put that as part of the things we said we're not going to solve right now, put that out of scope, if that makes sense. But I don't think it's something that we should never solve.
B
I'll add, yeah, I'll add that what you described is the target we were trying to hit with stack packs, and I think what we've done here is change the target. You know, so, like, it's not that... Everything you said, I have very similar feelings, but it's a hard problem, and what Steven's proposing, I don't think is... you know, it's just not meant to solve that particular, or that larger, problem.