From YouTube: Working Group: 2021-04-15
A: Hey there. Nope, just... I'm here.
B: Great, great to meet you. Yeah, great to meet you. Feel free to put anything on the agenda if you want to chat about it, yeah.
B: Awesome, all right. I mean, we should get started. So the first thing is: issues with multiple build plan entries and "or". Oh yeah, I saw that last Wednesday. What's up?
B: It was definitely to not break backwards compatibility. We didn't add a new top-level list, so things that are still written at the top level don't have to be nested under "or". So you're allowed to have top-level entries, and then there's a special entry called "or" that's now a list of objects, you know, additional alternate build plans. But if you don't have a top-level entry, you can just put everything in "or".
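For reference, a detect-phase build plan combining top-level entries with the "or" list looks roughly like this; the dependency names are invented, and the exact keys should be checked against the buildpacks spec:

    # plan written by a buildpack's detect phase
    [[provides]]
    name = "node"

    [[requires]]
    name = "node"

    # an alternate plan, used only if the one above cannot be satisfied
    [[or]]

    [[or.provides]]
    name = "node-runtime"

    [[or.requires]]
    name = "node-runtime"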
C: Okay, so that's what... like, this issue may be very specific, because I was writing a binding for the API, and I noticed that most bindings just ask for a list of build plans, and then they convert it to the structure where they take the first build plan out.
C: They put it in the top-level one, and then they convert the others to this list of "or" entries. And as a buildpack author reading this, it feels like the first build plan is special in some way and the others are not. It also is sort of a weird disconnect between your API bindings, which are just a list of build plans, whereas when you check the spec or something, it's now converted to this one top-level object plus a list of things under "or". It's just a nitpick.
B: I think you don't have to use the... if I remember right (the implementation could have changed), originally you didn't have to use that one top-level one; you could just use "or" on its own, but we still support using the top level in addition to "or". If that's not true, it should be true, and, you know, I think we should file an issue about that, because it definitely feels wrong if you have an automated...
B: I think most build packs don't use "or". "or" was added after the build plan API had already been developed, you know, based on feedback. An example is: you might want a build pack that provides either a JDK or just a JVM, and you couldn't say it provides both, and you couldn't say it provides neither, because the build plan doesn't give you that much power. And the answer to that was: you should just have separate build packs for the JVM and the JDK, because that would work as expected.
B: But then we got a lot of feedback that it was too much overhead. I think Ben was a big fan of "or", or felt like it was really necessary, because he developed Java build packs that needed to use this in some contexts. And so we tacked on this: okay, there's a special top-level key called "or" that's a list of objects; you can put alternative build plans in it and they'll be treated as alternatives.
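A rough sketch of that either/or case, with assumed dependency names:

    # a Java buildpack's detect output: try to satisfy "jdk",
    # or fall back to providing just a runtime
    [[provides]]
    name = "jdk"

    [[or]]

    [[or.provides]]
    name = "jre"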
B: Just as if you put the build packs vertically in the build plan, but they'll run the same build pack code in the end. So when you're parsing "or", think of it like a way to do the vertical axis, if that makes sense, in a two-by-two, but within one build pack: a bunch of different alternative things. And so the whole group is evaluated, and the first one... it's actually a little dynamic programming algorithm I wrote in there that does fast resolution of that.
B: The thing on top overrides the... and so that was the motivation for it. I agree it's a little ugly, but I don't know if we'd want to break the API, and you can just ignore the top-level one, I'm pretty sure. Or if you can't, file an issue and we should take care of it. Someone should take care of it.
C: Yeah, that was it. So I guess I'll just use the "or" thing if I have multiple plans, and if not, I'll just switch to the one single entry. Okay.
B: And let me know if it doesn't work. You can also just do one entry in "or" if you really wanted to pretend that "or" is all there is; I think you can do that, if that makes sense. But if you feel like it should change, definitely open an RFC. Like, you know, my opinion isn't the last word; I only get one vote.
B: Any other thoughts on "or"? Sorry, I didn't mean to take over that conversation. Cool: unintended buildpack layer modification issues. Yeah, this...
C: This is a random thing I've been facing where, if you provide a build layer, for some reason future build packs run some executable which makes some random modifications to the original build layer, even though they're not really needed. For example, you run Python or something, and that creates some random cache objects.
C: Now that invalidates the original layer, even though there were no modifications to it. Invalidates it in the sense that I have to push a whole new layer out to the registry, and those layers, pushing them out and pulling them in, can be really expensive, and nothing has really changed apart from these random files. And I've tried a lot to get by these issues by disabling some random specific settings.
C: Like, typically you have some system executables which are in some protected paths, and you can run them, you can execute them, you can read them, but you can't write back, so that works. But in this case, since there's only the option of "build" as a boolean, true or false, any future buildpacks can then unintentionally modify previous layers, which seems weird to me.
B: Is this for a build = true, launch = true layer you're thinking about? Because it's a layer that needs to go to the registry but also needs to get exposed to other build packs. And the inefficiency is when a later build pack... it's not that a later build pack could do this; it's actually when a later build pack writes into that layer, it then invalidates it, yeah. So the API says that those are contractually read-only.
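For context, these are the layer flags under discussion; in the 2021-era Buildpack API they are booleans in the layer's TOML file (exact key placement varies by API version):

    # <layers>/my-layer.toml
    build = true   # exposed to subsequent buildpacks at build time
    launch = true  # included in the final app image
    cache = true   # restored on the next build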
C: But I can't control it, right? So if I'm using, let's say, Python or GCC provided by one buildpack, then in a future buildpack I can't control the modifications happening there. Like, I had to do a lot of deep diving, do a lot of cleaning at the end, to make sure those things were not happening, but it shouldn't...
C: I guess it's also an issue with certain executables which are creating these random files in the same layers directory. But I would imagine that others may be facing this and may not even know about it, because technically your cache is not invalidated; you're just pushing a new thing out to the registry, which is, I guess, inefficient, but that's about it.
C: Yeah, because it wasn't even easy to investigate this whole thing, like what was actually causing the layer digest to change. So I had to do an individual file diff for all of the layers and figure out which build pack was actually responsible for making these modifications, and then correct all of the future build packs so that they don't modify the original layer, which was, I don't know...
C: I think a check of some kind would really help, so that if some future buildpack is modifying a previous one, it warns, saying: hey, this buildpack modified this previous one. So at least I know, okay, this is my... this is the...
B: I see a couple of other options too. One is we could create the layers, like create the layer tgz's, iteratively during the buildpack process, instead of all at the end during export. That way, if another build pack later does make a change to it, it doesn't actually end up in the image. Which may be... that may be worse, right?
B: It may be better to prevent that from happening, or throw some error. But then we'd have to calculate a checksum in the middle and then, you know, store that checksum and make sure that... or, you know, make sure we're not calculating a checksum of all the bits twice. If we didn't also compress it during the initial step, it seems like the implementation would be a little harder, or it would hurt performance, right, to take iterative checksums of everything and then also checksum at the end...
B: ...in order to export. So that'd be an option I could pick up to consider. The other option is we could make everything root-owned, kind of as we go along, right? So the builder could... you know, we could say the builder runs as root, which I don't like; I kind of don't think we should do that. And it could, you know, after each build pack execution, switch the permissions from the build-time user to root, progressively. We've kind of talked about how it's, you know...
C: Yes, it's also sidestepping the fact that currently you technically have three users. One is whatever user created things in the stack, one is your build user, and one is your runtime user. And I mean, there are a lot of cases where you need more than just root and a non-root user, and I definitely wouldn't want to trade away the fact that there is a separate build user, with fewer permissions than root, that owns these files.
B: We'd also have to be really careful about setuid and setgid type things, right? As soon as you're changing something that a regular user wrote to be owned by root, it could introduce a security vulnerability. So I don't love that solution either; I just wanted to throw it out there, since, you know, it would also solve the problem without having to deal with, you know, anything computational, at least, because...
C: I think I technically also have use cases where I do want the other thing as well, where I want future build packs to modify it. Like, if I set an environment variable in a specific build pack for the future buildpacks, and they expect that thing to be writable (a spec violation).
C: I thought we mentioned... like, we discussed this last time. The idea was that if a build pack exports, like, "here's an environment variable that holds a folder", and that thing is marked as a build layer, why should the future build packs care? Like, what's wrong with future build packs modifying that, when the intention of that build pack is that others can write into it?
B: I think how that intent is communicated would be very important, because oftentimes... so, like, we set environment variables like PATH, right, that point into bin directories of previous build packs automatically, and in that case the contract is definitely that you're not supposed to write to this; this build pack took a checksum of it.
B: It seems like it's always going to be risky, because there's this process of storing metadata about the layer in the registry and then recovering it and comparing it. If another build pack could write into that build pack's layers, its metadata would suddenly not match its bits, and then, you know, it would increase complexity a lot, right? We'd have to have some kind of management system for ensuring that the build pack intended it to be writable.
C: What I was thinking of proposing: like, instead of your build flag being a boolean, true or false, it's false, "read", or "write". In which case "read" just exports it out immediately, and "write" makes it available for future writes as well. I imagine that would be useful, at least to me. I don't know whether other people on the call have had a use case like this.
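Sketched out, that proposal would look something like this; these values don't exist in any spec, this is purely hypothetical:

    # <layers>/my-layer.toml (hypothetical extension)
    build = "read"   # exported immediately; later buildpacks see it read-only
    # build = "write" # later buildpacks may modify it before export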
B: If we did the middle option, right, of exporting the layers in the middle, between each build pack (moving the responsibility for creating the layers out of the exporter into the builder), then when it sees write = true, it actually just means: defer creating the layer until the end of the whole build process. Then we can do it without any performance hit. It would guarantee that only write = true layers...
B: ...you know, actually end up getting exported with writes, and then it would, you know, guarantee... But it does have the problem that it means changes are allowed and then blown away at the end, as opposed to, you know... but it would be efficient, and that doesn't seem too terrible to me. That would be a way to implement that "write" functionality.
C: Cool. I'll... let's see if I can propose that.
A: Yeah, Lexi... again: I'm working for a small company and we are looking for a way to adopt build packs. Currently we use BuildKit, and I think I found this issue. I want to understand what the current state of it is, and whether you have some estimations, planned work, or maybe deadlines.
A: I could speak to it a little bit. I'll throw whatever I can out there, and then, Stephen, if you have any input, you also can.
A: Yeah, I guess first of all, I want to start off by saying that this is something that made it onto the roadmap for the project, right? So it is something that has a little bit... you know, maybe not in a direct way, but us trying to get that Docker integration story a little bit better suited is a reason why we're pursuing this quite heavily.
A: That being said, on the thing that you linked to, the issue: there's an individual from Bloomberg, Eric, who created this proof of concept, and he's been iterating on it. We got some really good insight from that POC, and ultimately what it looks like is that it's fairly achievable within a reasonable time frame. The only drawback is that it does require some changes to the lifecycle, which is kind of like the engine, right, that runs all the build packs underneath. So it would require some changes there...
A: ...in order to be able to properly, I guess, integrate the exporting, right? Like, how we are able to take the outcomes of the build packs that go into the registry and instead put them into the BuildKit engine or system. So that's kind of where we're at right now. If we dive a little deeper, right, I think where we... where I'm personally at with this is: I talked to Emily about how we would do that...
A: ...the BuildKit and lifecycle integration, how we could make that actually work. And we landed on supporting OCI layout as an exported output that we can then read through BuildKit, to, again, put it back into the BuildKit system through the BuildKit frontend. So that's going to be an RFC process, and it would require essentially a spec change to finally get done. But it's still top of mind; it's just hard to estimate exactly how that would all work out.
B: Another thing to point out is that there's one aspect of our current caching model where, you know, especially if you're not doing a local build, if you're doing a build against the registry, some layers don't have to get regenerated locally at all, or be present locally at all, to, you know, recreate the final image, if they're already in the previous image. And that requires a bit of a part of BuildKit that hasn't been implemented yet.
B: There's, like, a merge op that they have an issue for upstream. So some of the efficiency of when you're building, especially when you're publishing directly to a registry, you know, will probably only work in the old lifecycle.
B: That makes sense. I think it's important to note that you don't need pack to do a build pack build, or a Docker daemon at all, right now. The lifecycle by itself runs in a container, on any container coming from anywhere: it could be on k8s, could be a local Docker daemon in the case of pack, right, but it could be in a GitHub Action. Anything that can give you a container can do a build pack build entirely in user space, with no namespaces, and generate an image and export it to a registry.
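In practice that means invoking the lifecycle binaries directly; a rough sketch (the flag names here are from memory and should be checked against the lifecycle docs for your version):

    # run detect, build, and export in one shot, in user space
    /cnb/lifecycle/creator -app=/workspace registry.example.com/my-app:latest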
B: So our kind of security model, and number of dependencies, is actually lower than BuildKit's, which requires some special capabilities; like, I think it still needs user namespacing and things like that. So if the reason you're looking at BuildKit is that you want to be able to avoid the old Docker daemon, you can avoid that today, actually. You know, it's only the pack CLI interface that uses the Docker daemon, and that's not using docker build underneath.
A: Yeah, just to add to that tidbit: if you are curious how this would look, we do have the Tekton integration, where it works more with, you know, almost direct execution of the lifecycle, which again is like the engine, and you can kind of see how that's structured or architected. So, again, you can build anything around it, right? The reason we're pursuing BuildKit is more so that we could have it in front of the end user.
A: I did reach out to Eric, and we're discussing the potential of him coming to one of these office hours to give a presentation on, you know, what he's been working on, his discoveries, and some of the challenges and stuff like that. So hopefully we're able to schedule that soon, and we can get, yet again, more traction on it and more discussions around it.
C: This is... so let me try and give some background. Let's say you have a build pack which provides a generic package manager, like apt, but one that works non-root, so you're using that as a way to install dependencies, typical system dependencies, and you're using future buildpacks to sort of encapsulate specific things.
C: For example, you have specific package names for python3.7, python3.8, etc., whereas your Python build pack just wants to provide "python" and have the metadata contain the version, which future build packs can request.
C: Now, the sort of issue I'm facing is that this particular Python buildpack has to request from the original package manager: I want this list of packages. And that list of packages depends on the version, which is determined by, let's say, the future buildpacks.
C: So let's say a future buildpack tries to read some random files and gets back to the Python buildpack saying: hey, I want Python 3.8. And during the build process this Python build pack isn't doing much; it's just, like... hey, during the detect process...
C: ...it figures out that it should request these packages from the original package-manager buildpack, and it pretty much just provides a small encapsulation: it has the knowledge of which packages to request. So, like, python3.7 could consist of five or six different packages, and I don't want to duplicate that logic each time in the future build packs that are actually requesting the specific version.
C: Since detect runs once, as a whole, there's no resolution process that happens before build, and by the time we've gotten into the build for this Python-specific buildpack, the package manager has already done its entire processing, so it can't re-request the packages it wants. So the issue is that the resolution is currently coupled with build, which means that if you have a case like this, which I don't know if a lot of people will have, there's no real way of...
B: So, like, two things: one, to answer your question directly, but not without expressing some concerns. First, the way people usually solve this is that the detection API lets you send information backwards, so a require can contain, like, version information in the build plan, and that version information gets sent to the build packs that said they were going to provide it. So if your providing build pack says "I need a package list" or something like that, right, like just a key...
B: ...a build pack later can say "package list" and then provide information about the individual dependency. All of those get collected for all the build packs that say they require that name, and that happens during the detection process, in parallel, so it'll be really fast, right? You can have all the build packs figure out what things they need from that package-management build pack, and that information will be available, during build, to the package-management build pack when it does the installation. And so that's... so it's kind of intentional.
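In build plan terms, that backwards channel looks roughly like this (all names invented):

    # detect output of a later buildpack that consumes the package manager
    [[requires]]
    name = "apt-packages"

    [requires.metadata]
    packages = ["libxml2", "libyaml"]

The providing buildpack just declares a [[provides]] entry named "apt-packages" during detect, and at build time it reads the merged metadata from every matching require out of its buildpack plan.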
C: So now, if the future build packs request python3.8, I can't modify the package list going from my Python buildpack to the package-manager build pack to include the new set of packages. Does that make sense?
C: So... it's... so let's take a simple example. You have an apt buildpack that provides apt packages. Your Python build pack requests python3.7, python3.7-pip, python3.7-venv, python3.7-dev; so there are four packages it's requesting. And your future build pack that uses Python says: I just want "python" with version 3.7, that's it, or with version 3.8 or whatever.
B: Let me restate: three build packs. The first build pack is the apt build pack. The second build pack is the Python build pack; the Python pack depends on some packages, some operating-system packages, that are going to get installed by the apt build pack. And then you have a third build pack that depends on Python, right? And in the build plan you would see: the apt build pack provides some things, the Python build pack requires operating-system packages, and the third build pack requires python. Did I get that right?
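In plan terms, the gap is that the version in the third buildpack's require can't feed back into the package list the Python buildpack already requested; a sketch with illustrative names:

    # python buildpack's detect output
    [[provides]]
    name = "python"

    [[requires]]
    name = "apt-packages"

    [requires.metadata]
    packages = ["python3.7", "python3.7-pip", "python3.7-venv", "python3.7-dev"]

    # third buildpack's detect output
    [[requires]]
    name = "python"

    [requires.metadata]
    version = "3.8"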
B: I think so. Could you... I want to talk about other concerns with the operating-system package thing, but I'll put that to the side for a second. Okay, so could you have different, alternative python names that reflect the different, you know, kinds of version or package information? Sorry, I...
C: I could do that, and that was the alternative answer I came to. But that seems a bit weird to me, because then I have to sort of list out all the possible versions and also make sure that my future packages know the version string. Like, any extra metadata, I have to concatenate it onto my, like, provision name, which seems weird to me.
C: It's not great. This was an example, okay, the simplest example I could think of, but you can imagine the same case happening with other things which encapsulate the system packages that they resolve, and some future build pack just assumes: hey, I have Python, I have Go, and that's all I require, with this version. And then you have this intermediate binary provider that's, like, plugging requests into the system package manager from these individual configuration build packs that require the specific binary.
B: Yeah, before we try to solution that, I want to ask about the operating-system-package installation build pack. So if you have a build pack (people have made build packs like this in the past) that's, like, an apt build pack that downloads apt packages from Canonical or wherever, right, and writes the packages into build-pack-style layers: that only works for a very small subset of packages that don't have any hard-coded paths in their binaries, and it can actually lead to some pretty nasty security issues.
B: If, like... because something looks for a file that's important, that has some configuration in it, and that's now missing in the file system. So, you know, I think this problem still happens with stack packs, which are the thing that will solve this. Like, stack packs...
B: They let us introduce an apt build pack that installs apt packages into the base image, right, and then subsequent build packs will need to depend on, you know, mixins that get installed by the, you know, stack pack. Then, if another build pack depends on that build pack, the stack pack can adjust the mixins it needs to install. So I think we still have the same issue with stack packs, but I just wanted you to be aware that it's usually a bad idea to use a build pack that installs operating-system packages into layers.
C: So I guess that's an issue, and my question was sort of what you were getting at too, which is: how would stack packs solve this? Like, is there a resolve process in the middle that would figure this out? Because I imagine you would have the same problem once you get there.
C: And I would imagine this would be a very common use case, where you have one thing that encapsulates all your dependencies. Because then, like, you have this whole slew of config-only buildpacks, right, that only care about Ruby being present, or Go being present, or Python being present. They don't care about where it's being fetched from, and now they have to know: okay, I require this from the package manager, instead of just depending on a python dependency with this version or a go dependency with this version.
C: So you're breaking this nice modularity you have, and I don't see a nice way of solving it, apart from that one solution you mentioned, or just appending all the metadata to the list of... like...
B: So if a provide... that's... if a provide is met, right? Like, you could somehow put information in the build plan, like, say: if a provide is met, and it's met by a require... Right now a provide just has one name in it, right? So this is like an idea for an RFC. Instead of putting metadata in the provide (because that doesn't make sense; metadata comes from the require), you could have, like, matching strings against the subsequent metadata...
B: ...that then imply additional requires. And that way we could handle it all during the detect phase, declaratively, in parallel, without introducing another resolution step. You make the provides match strings coming from the requires, and then the result of a match could be additional requires that get added to that build pack's list of requires. Does that make sense? I think that would solve the problem.
B: Okay, sure. So the apt build pack first, right; we'll use your example: apt, python, something that needs python, right? Mind, this is an RFC; this functionality doesn't exist, right? Have the Python build pack, and it has something that says "provide python", right? Right now that provide only has one key allowed, "name", right? Let's add another key, "filters", right, under its "provide python". In "filters" it's a map of things, like... I feel like...
B: ...maybe it's a map of... well, some data structure where it would match something like version = "3.7.*", right, and then, you know, there are additional "requires" fields under there, and then, you know, libvirt, you know, musl, I don't know, whatever, I'm making up C libraries. And then, in the case that that provide matches that require's other metadata, it would imply additional requires on the earlier stage.
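As a sketch of that hypothetical RFC (again, "filters" does not exist in the spec):

    # hypothetical: a provide whose filters imply extra requires
    [[provides]]
    name = "python"

    [[provides.filters]]
    match = { version = "3.7.*" }
    # if a require for "python" matches, these extra requires are added
    # against the earlier package-manager buildpack
    requires = ["python3.7", "python3.7-dev"]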
C: I guess it would also make this whole... like, I imagine it would also make explaining build plans to newcomers extremely, extremely cumbersome, like, "but you have this additional thing". But, you know, that would solve the use case.
B: I've always thought that, you know, there's something about provide being a little too... like, it's only one name. Like, why do you have a list of names here, and there's this metadata that gets sent back? Should we do version matching? Like, should detection say, you know, "this is the Python 3 build pack, this is the Python 2 build pack, so this one will get that and this one will get the other thing"? And I always come back to, like, in the end...
B: You know, I don't know if it's a good idea. At least it would be, like, kind of progressively complex, right? You wouldn't have to deal with it until you had this really weird use case, kind of like "or", right? No one knows that "or" exists until they need it to solve the problem. So it doesn't bother me too much, if you really felt like it's something you need.
C: Yeah, this... yeah, I mean, that makes sense. I'll see if I can write this up; it would be really useful. And I guess this would come up more often once we have stack packs and you have other use cases where build packs can actually request system packages, because then this would become really common.
B: Makes sense. It'd be great if somebody had invented a kind of stage-based dependency management system, so we didn't have to invent a new one ourselves over here. Right, like, we did a lot of research into whether, you know, the existing package-management systems provide some kind of functionality we could use here, and in the end the problem is: we wanted build packs to be swappable, right, and to run sequentially, like they're build stages; they're not like things that depend on each other. And so package management... we couldn't make a solution fit.
B: During detect, that's the only way. During build you can push stuff forward, obviously, with layers, but yeah: detect always goes in reverse (build packs always communicate backwards), and then in build, build packs always communicate forwards, and at least that keeps the data flow simple, right? If that makes sense, because...
C: This was, like... I mean, this is one use case I've imagined. I've not used it, but I've seen it in Paketo in a couple of places, where it's used slightly differently, but you sort of have these converter packs in the middle. So let me give you an example.
C: I tend to fall back to Python because I'm most familiar with it. But let's say you have a Python buildpack that looks at requirements.txt and installs those packages.
C: And now you are using some alternative dependency specification, like you're using some other package manager which uses some lock file, like poetry.lock or a Pipfile or whatever, that can then be converted to this requirements.txt stuff, yeah.
C
So
now
you
you
want
to
maybe
modify
the
app
door
so
that
this
this
future
build
back,
which
has
no
idea
that
this
exists
can
still
detect
and
run
as
intended,
but
it
it's
sort
of
focusing
on
on
build
packs
that
don't
know
about
this
other
ecosystem
or
whether
it
exists,
and
they
have
limited
their
way
of
asking
for
provisions
or
requirements
just
to
like
specific
files
or
they
they
only
support
your
specific
api,
and
you
want
to
write
converters
between
one
api
to
this
popular
build
pack
that
supports
a
specific
api
now
in
in
that
use
cases,
I've
not
found
like
a
way
to
make
this
buildback
detect
like
even
in
get
past
the
text
state,
so
that
during
the
build
stage,
it
then
picks
up
these
things.
B: All right, I'm going to try to say this back to you again. So you're talking about the case (this is like a common case in Python; I vaguely remember it from working on Paketo a little bit) where you have your app directory, and it's, like, a Poetry app, or, I think at the time it was Pipenv, a Pipfile...
B: Whatever it is. A Pipfile, I think, in the app. And it's useful to convert the Pipfile.lock, whatever it is, into a requirements.txt, Python's original format, in order to process it with the normal, standard Python tooling, because other tools use that. (There's, like, the whole Conda ecosystem that's different, and I'll put that to the side for a second.) But, you know, you're generating that requirements file, and then, at the end, there's a build pack that, you know, does whatever it is, pip install, on the requirements.txt. Okay, I get that.
C: With requirements... so the installer, the requirements buildpack, the last build pack? Yes, yeah. Okay, and that only passes the detect stage if it finds the requirements.txt.
B: Okay, so, stopping there for a second: the pip build pack, that last pack, shouldn't look in the app directory for requirements. It should output a build plan that requires "requirements" and make itself optional, right? And so another build pack can say it provides that in the build plan, right?
C: Isn't that usually the... yeah, that's a solution. So that's, I think, how Paketo implemented it: like, they removed the detection on just requirements.txt and added, like, an optional buildpack that always passes, and added a provision which has to be satisfied by a future buildpack so that it can be included.
C: It always passes, but with only a provides, and then, because the whole detect stage runs over a set of build packs and the provisions have to match requirements exactly, only then is this build pack selected. Now, this is something that took a lot of time for me to explain to other people: okay, this is why it works, and this is why you have to always make it pass, with optional and with the provision, so that future build packs can then require this, and then, like...
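That pattern, roughly (buildpack IDs invented):

    # converter buildpack's detect output: only provides, so it always passes
    [[provides]]
    name = "requirements"

    # installer buildpack's detect output
    [[requires]]
    name = "requirements"

And in the order definition, the converter is marked optional:

    [[order]]

    [[order.group]]
    id = "example/poetry-to-requirements"
    optional = true

    [[order.group]]
    id = "example/pip-install"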
C: Yeah, I mean, these are all edge cases which I think I've had a really tough time explaining to people when I've tried to explain the whole detect-and-build flow to them. It's easy once you understand the whole process, but for someone new... yeah, it's easy enough to tell them: okay, detect is a series of transformations, and each detect works in isolation.
B: Like, on separate things, right? Like, on the... If we have edge cases in the build plan that make it hard to use, we should figure out how to fix them in elegant ways that don't make it too complicated, if we can. So, like, I'm really interested in RFCs about that. But we also have talked about (Terence and I have especially talked about this) how really the build plan solves a particular problem; it's, you know, kind of designed to allow this modularity API, right?
B
It
has
like
the
goal
is
good,
but
it
definitely
is
like
you
know
you
can
kind
of
ignore
it
if
you're
just
making
a
build
pack.
That
needs
to
do
something
in
the
middle,
but
if
you
really
want
to
integrate
into
this
kind
of
modular
ecosystem,
there's
a
big
learning
curve
there
that
we
haven't
figured
out
how
to
explain-
and
I
don't
know
if
the
answer
is
like
we
need
a
really
good
page
on
build
packs.
B: ...buildpacks.io, that's, like, "build plan", and it has a bunch of images and, you know, shows the data flow model, and you just get it after you see that for five seconds or whatever. Or if there's, you know, some way we can simplify it; you know, if there's, like, complexity in here that doesn't really need to be there, because...
B: No, no worries. The original build plan would actually set up pipes between all the build packs, and they'd all run in parallel, but they could read standard in to pull information. And then it was this very, you know... it was, like, kind of a white box. And then the requires/provides things started happening, where later build packs would say, "I need something from earlier ones", and so the current build plan model is a simplification of a much more abstract thing that kind of landed...
B: ...originally. It's definitely evolved a lot over time; like, there have been breaking changes to it in the past. You know, if there are ways we can improve it to make it easier for users, I'm really interested. It's not something where, you know, we said "this is the perfect way of solving this problem" and went with that. It's more, like, you know... it's really been a long process of making changes to it over time.
C: Like, you make the app directory writable during the detect phase and run buildpacks sequentially. That would break the parallel detection thing, and may possibly slow things down and encourage buildpacks to do build steps during detection.
C: Not ideal, but that's, like, one easy way to get out of this. Like, I mean, they would only be able to modify the app directory, but that's still, like, a lot of freedom during the detect phase, yeah.
B: So that... there was actually... I was talking about how it used to look different. So before, they ran in parallel, but they could all output to standard out, and subsequent build packs could see that, and build packs could read from standard in. So they'd all run in parallel, but you could kind of block on the build pack before you.
B
If
that
makes
sense,
so
you
could,
you
could
read
from
standard
and
then
wait
until
standard
enclosed
if
you
needed
to
get
information
forward
during
during
build
during
detect,
but
that
got
like
that
was
like
more
complex,
because
then
you
have
this.
You
know
like
you
have
to
know
about
how
the
you
know
whether
standard
in
is
open
or
closed,
and
you
know
you're.
You
know
also
passing
all
the
data
forward
meant
that
it
had
to
either
accumulate.
If
you
put
a
bill
pack
in
the
middle,
it
was
much
harder.
B
C: Actually, the model that you're talking about, where you can pass data forward, would also fix the stack-package thing: if you move this random third build pack between the package manager and Python, where it can request Python version 3.x, then the Python buildpack can read, okay, it's requesting 3.x, so I can now request these packages from the original package manager.
B: We could keep the current model intact and then just introduce a forwards communication mechanism, and then break the parallel thing, right? Or even, you know, introduce some named pipes, and you can read from the named pipes if you need information about previous build packs, right, and then... we could keep the parallel thing, right? But I worry about complexity there too. All right.
B: It was done in parallel because switching to the declarative model meant that the build packs no longer depend on each other, and so we could get that performance benefit. And also because, you know, sometimes, like, if you're using the Paketo builder and it lists 10 different language ecosystems, and each of those expands out into this, like, you know, 100-row grid, right, and if the build pack you're selecting is near the bottom of that, then it's a long time to wait for all those processes to execute, you know, individually.
B
So
it
does,
does
save
a
little
bit
of
time
right.
The
rows
themselves
don't
execute
in
parallel.
Only
the
build
packs
across
here
right.
So
there
is,
there
is
a
bit
of
a
performance
benefit
to
it,
but
it's
not
huge
right.
So
that's
you
know
you
can
do
some
testing.
You
know
to
take
out
the
multi-threading
there.
I
really
get
to
drop
to
another
meeting.
This
is
great,
though
yeah,
please
open
rfcs.