From YouTube: Working Group 2021-04-21
A: All right, we are six minutes in, so we will kick off with introductions and new faces. I definitely don't see anybody new here. Release planning and updates.
A: All right, I can speak a little bit more on the platform side for the pack orb, which is a CircleCI tool. I'm currently working on making some changes based on some feedback from the community. I anticipate that there will be a release coming shortly after those changes. There's no set release cadence for the pack orb, so I just wanted to call it out here.
A: Oh, I can speak a little bit to the lifecycle. We're continuing our initial work on stackpacks, I think. Last week we had scoped this epic to a bunch of issues, and this week we are driving forward one issue that will unlock a whole bunch of other stuff. So.
C: I can actually give a quick spec release planning update as well. We're going to look for a broader forum to try to share this message, but because stackpacks is so much work and is going to be the focus of the implementation team, we're trying to keep the next releases of the spec pretty tightly scoped to things relating to stackpacks. The point is not to bundle too much stuff into the releases alongside stackpacks and overload the implementation team; that's not to say that no other changes can come in.
C: But we would like the implementation maintainers to be focusing on stackpacks, so other changes would be coming in as external contributions, or they'll be coming in after stackpacks, I think, is the plan. So if you have an RFC that is approved, the implementation and core teams are not necessarily going to pick it up and add it to the spec, or then add it to the lifecycle, right away. That would be delayed until after stackpacks.
C: We're generally pretty trusting, and we don't have a wide amount of anonymous contributions to the spec. So it's more like: if you're going to make a spec PR to Platform API 0.7, and you give us a thumbs-up that you're also planning on making the lifecycle PR for Platform API 0.7, then we're full steam ahead. But if it's "I'm going to make the spec PR describing a very complicated thing and leave it to the implementation maintainers," then we might park it for a bit and let stackpacks go through first.
A: Yeah, to be clear, I don't think this is a significant change. We're happy to take any PR at any time about anything, so send away. This is just warning everybody that if you're waiting for us, you're probably going to be waiting a bit longer.
C: And then, hopefully, after stackpacks we're back to business as usual, which is a little bit slow, but not quite as slow.
A: First thing on the list is "discrepancy in result JSON". This is a new one, opened four days ago.
A: It probably needs some labels. This looks like a distribution one.
A: So doing that, getting a sherpa, that's definitely a thing, and then I think we just want approval here. I don't think it gets... no, I don't expect anybody; it's being picked up as we speak by the tools.
B: Yeah, I put this in, I think, last week, after a discussion in the office hours around this functionality. The only thing, maybe we can talk about it later, but I had a few open questions.
A: Could you add the buildpack label, I believe, or the implementation label?
A: This would be a spec change to the buildpack spec, yeah.
A: Awesome, dude! Is this a... I don't recall, Anthony, is this a sub-team RFC? I think we, yeah, I believe we said this was going to be platform. Cool. So then, maybe...
A: Yeah, Steven, do you mind adding the label for sub-team RFC? Am I on the platform team? Sometimes, though, I thought I was just on the... the script just picked you, yeah. Well, now I know. That might be why I thought it didn't work before; I forgot that it was wrong. I would still have a team there.
A: Are you asking me? Yeah, I mean, I did get the thumbs-up or down; I'm done with that. So I guess I'm waiting for it to be approved or not, right, whatever steps need to happen there. I'm eager to implement the issues myself, so I'm kind of waiting for next steps, maybe I should say that.
C: I know sometimes I get confused personally when something is like a big idea, for, you know, what could be a bunch of features that go into it: whether the RFC process is for approving just the big idea, or whether we should be diving into which features go into it.
A: I would say, if an RFC doesn't say how something exactly should look, right, then, when there's an issue or a PR that says "it should look like this," that could get kicked back up to the RFC process, if somebody says this is too big of a decision to make for this feature and we need to discuss it, or we need to get core team approval for it.
C: I guess what I would love to do for this is make it clear what we're approving: yes, there should be a pack interact. And then maybe the details of what and how we display things... I don't even think those need to be an RFC that goes up to the core team, but it might just be easier to have conversations around them for interested parties if we broke them out into smaller discussions.
A: I agree. The only thing is, if, you know, Anthony wanted to start building this out, or if other folks wanted to start building this out before figuring out those details, right, if it's easier to develop this kind of graphical thing without a lot of planning ahead of time, I wouldn't want to say: no, you can't do that, you have to get the RFC for all the details approved first.
C: I agree with that. Maybe we just make it clear that this is a "yes" vote that we should have pack interact, and we're going to create a bunch of sub-issues. And then, if people have, you know, an opinion about one particular sub-issue, we can move it there instead of bogging this down. With that, I guess I have some micro-opinions that I'd love to have a place to discuss, but I don't want to put them on this RFC, because I don't want to drag the RFC down with the micro-opinions.
A: Sorry, I just want to make this a productive exchange, right. Like, are sub-issues the right place for more fleshed-out details? I agree it's not fleshed out yet, right. Or is it maybe threads where we can get a breakdown, or what? So, I was going to suggest this maybe offline, I was going to maybe just ping Emily, but since it's being brought up: my suggestion would be to throw in as many details as you want on here, right, and then, as part of the discussion, they could be deferred.
C: That makes sense to me. So we can throw a bunch of details on here, but have a one-sentence disclaimer that's like: this RFC is proposing the introduction of this command; all of these are examples of what we could do, with the details to be worked out later. Because I don't want to make this seem like the source of truth; sometimes people go back through RFCs to see exactly how things should work, and it's on us to keep clear what's uncertain and what's decided in the RFC.
A: All right, I'm going to move us along to the next one, just in the interest of time: add BOM to layer content metadata.
B: Outstanding issue: there were some suggestions by Emily, which I updated the RFC to reflect, but there's one outstanding issue that we pointed out, around layer reuse and the fact that you could have all the layer flags set to false but the BOM would still be persisted, at least through one lifecycle.
B: When you restore the layer, but all the flags are set to false, you would still export the BOM to, like, report.toml, and yes, that sort of breaks RFC 52, where we decided not to do that.
C: Those things were still there, and also we don't know whether you actually used the tool in that layer to do something, even if you didn't choose to opt into exposing it to subsequent buildpacks, caching it, or exporting it, right?
A: Yeah, a layer that a buildpack creates, that's just intended for it to use during that run, seems like a first-class idea, right? It's not...
A: Yeah, and I think that's the way to approach it: you can't really track what was in your environment when you did that build, right. Its sheer existence can have influence. You can imagine a library on the LD path, for example: whether or not you intended to use it, it does have some sort of effect. And so I think, when we see the requirements that come from companies about being able to track provenance even at build time, a lot of it is around what the ambient state was.
A: Yep, the buildpack made the decision that it would get added in the next run when it said cache = true on the current run, right. The buildpack explicitly said: yep, cache this layer, I want it back on the next run. And so the decision for it to show up on the next run was sort of already made at that point.
A: We probably shouldn't use our RFC review to, you know, make decisions about the questions in the RFCs. So... I just got excited.
A: We'll go ahead and move on to the next one. Sam, if there are more things than that, please add them to the agenda, but it seems like we're good there.
A: Cool. "Propose the creation of best practices and guidelines."
A: Should I put it into FCP, or do you want to wait for Ben? Did you want to look at this one? It's in FCP.
A: Sorry, I missed the label; it was to be merged yesterday. Javier, you're going to get this one? Brian, me, and Ben are working on some issues with the pushing and merging. Cool, you should be good to go now. I'm not... I'm thinking about it.
A: "Pack cache options." This one hasn't seen a lot of review. Javier, where are we with this one?
A: This is a sub-team one, so yeah, skip it. Should we make a rule that we skip the sub-team RFCs during the review? I haven't...
C: I don't know. I wonder if we can update the link in the doc to exclude that label.
A: I've updated that link, by the way. I'm not actually sure it's a good idea, especially skipping the drafts one, but it's in there now. Cool.
A: Thank you. Next thing is the build-write flag for layers. Sam?
B: So this is the sort of situation which I discussed last time, where you want a layer that cannot be modified by future buildpacks, and there are cases where you want a layer that can be modified by future buildpacks, for instance through an environment variable that the buildpack has set. The use case is when you want a collaborative workspace which is cached across runs.
B: Ideally, I would have wanted the build flag to be an enum, which was like off, read, or write, but that would break a lot of things. So if anyone has any suggestions on how to deal with that, that's one thing; and the other thing is, I don't know what others think of this idea.
A: We do have an option if we did want to not make it a boolean. We talked in the past about build, launch, and cache being kind of bad names for the flags; that they should be something more like expose and export, and maybe cache is okay, but there are like three different types of cache and that's just one of them. So I'm not really sure about that.
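For reference, these are the flags as a buildpack sets them today in its layer's TOML file. This is a minimal sketch; the layer name is illustrative, and in the Buildpack API version current at the time of this meeting the booleans sit at the top level of `<layers>/<layer>.toml` (newer API versions later moved them under a `[types]` table).

```toml
# <layers>/my-layer.toml -- "my-layer" is an illustrative layer name.
build  = true   # expose the layer to subsequent buildpacks at build time
launch = false  # include the layer in the final runnable image
cache  = true   # persist the layer and restore it on the next build
```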
A: We could keep the other two the same; we could just fix build, because build is the worst one, right? Like, launch: yeah, okay, you know the layer is going to be a launch layer. Cache: okay, it's going to be cached. But build? What does the word "build" have to do with "other buildpacks can access this dependency"? It just feels very out of place.
C: Cache is also kind of bad, because there are layers that you want to reuse that are launch layers. A, I don't want to say naive, a reasonable person might assume: oh, I want to reuse it, so I should cache it. It should maybe be called something like "restore"; "do I want it back?" is the question.
B: So the idea was that whichever buildpack is intending to expose... oh, this flag is also sort of poorly worded, because I couldn't find a good way of describing it. But it means that if a layer is set to build-write = true, it would be converted to a tarball that's exported at the end of the entire build process, whereas a layer where build-write is set to false would be exported immediately.
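As a sketch of the proposal being described (the flag name `build-write` is explicitly provisional in this discussion, and none of this is in the spec):

```toml
# Hypothetical layer.toml under the proposed RFC; flag name and
# placement are not final.
build       = true   # existing flag: expose to subsequent buildpacks
launch      = false
cache       = true
build-write = true   # proposed: later buildpacks may write to this layer;
                     # it is turned into a tarball only at the end of the
                     # whole build, rather than exported immediately
```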
B: At least this states explicitly what the intention is, whereas currently you can just do whatever, and you couldn't reliably say whether the metadata on the layer is correct or whether it's been modified. At least here, if you say build-write = true, you agree to exposing yourself to changes by other buildpacks, including saying that, okay, some other thing may modify the metadata; you agree to that. But right now this is the default behavior, and we don't explicitly forbid it in the spec.
B: But you can do it anyway, and you can do it unintentionally, which is worse. That was the main issue because of which I created this. What was happening was that future buildpacks were using some binaries on a specific buildpack's layer, and executing that binary had the side effect of creating additional files on the same layer, which was causing that specific layer to change even though it shouldn't have, and it was causing a push to the registry because the layer ID was changing.
A: I had a lot of concerns about the metadata too, but what sold me on it was that the buildpack that writes the metadata opts into other buildpacks also writing to it, and so there's never a situation where somebody expects metadata not to change and somebody else changes it. It's actually, like Sam was saying, stricter than the way it was before, in a sense, because then there isn't this bug in the implementation that lets you violate the spec.
C: I feel good about the strict part, although, I need to work out the details, it'll change the Platform API a lot, right?
C: I wonder if there's a more intuitive way to express the less strict part. Like, I don't want to introduce more directories, but do we want a shared layers directory that every buildpack gets? That's where you can look to even see if there is a thing in there, instead of having to know the path to it, kind of thing.
A: You know, using that as the mechanism that allows for discovery seems kind of like the most normal thing. And if you created a shared layers directory, telling buildpacks they have to look around for a particular named layer under a particular buildpack ID would be really not great, because buildpacks aren't supposed to know about each other's IDs. So I kind of liked keeping it in the same layers directory, using the same conventions.
A: No, so in this case, the normal way right now, when a buildpack says build = true, right, is that it adjusts all the POSIX environment variables to add bin and lib and all of that stuff, so that the buildpack doesn't have to know about the other buildpack's ID; it can just black-box the path: yes, this is something on an LD_LIBRARY_PATH or PATH that I'm allowed to see.
A: If that makes sense. The other mechanism that's really common is that the buildpack sets an environment variable, using the env directory, that points at that directory, and that's how it gets exported forward. In this case, it just means that those directories, when they're exported forward through the current means we have, become writable, right. Whereas right now they're always writable, but we pretend they're not writable and tell people not to write.
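A minimal sketch of that second mechanism as it works today. Paths follow Buildpack API conventions; the layer name `mytool` and the variable `MYTOOL_HOME` are made up for illustration, and a real `bin/build` receives the layers directory as an argument rather than creating a temp dir.

```shell
# Normally a buildpack's bin/build receives the layers directory as "$1";
# for this sketch we use a temp dir instead.
layers_dir="$(mktemp -d)"

# Create a layer named "mytool" and mark it for use by later buildpacks
# (top-level flag placement per the API version current at the time).
mkdir -p "${layers_dir}/mytool/env"
printf 'build = true\n' > "${layers_dir}/mytool.toml"

# Each file under env/ becomes an environment variable for subsequent
# buildpacks; MYTOOL_HOME is a hypothetical variable naming this layer.
printf '%s' "${layers_dir}/mytool" > "${layers_dir}/mytool/env/MYTOOL_HOME"
```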
B: I still don't know what to do with the flag; that's still an open question. So if anyone has ideas on how to deal with the fact that it's only applicable... it's also incorrectly worded, because that's not what's happening exactly; it's just postponing the layerization to the end of the build process, instead of at the end of the build phase of a specific buildpack.
C: I wonder if that's getting into, like, the same reason our current things are confusing. Imagine coming into that fresh: it's already hard to know what a build layer is supposed to be, and, like, is it a write or a read one? And you're forced to make the choice, because it's an enum, and you're like, well, I'm writing to it.
B: My idea behind that RFC was to not have those delta layers, because of those unintentional writes. I wanted to get rid of those unintentional writes somehow. So, even if you did add that, you would still have those unintentional writes persisted in the image if it was set as launch = true, which I did not like. That does not seem sane for any user: that there's some other random buildpack modifying it because of side effects of some binary.
C: I've got to think about it more, guys. I think you're right about the general need.
A: Well, one thing that might appeal to you about this is that it doesn't have any performance hit. We talked through a few different implementations involving checksums and things like that to get to making it read-only, or sorry, to get to making it safely writable, and this one I liked the most because it didn't introduce any performance hit whatsoever. It takes exactly the same number of cycles as before.
B: Yeah, this is also something we were discussing in the last office hours, around the current limitations when it comes to a couple of detect-and-build scenarios. The first one, to give you an easy example: imagine that you have a stackpack which can provide any system packages, and you have an intermediate buildpack that requests mixins from the stackpack. So, to have a more concrete example: you have an apt stackpack that can provide any system packages.
B: Now you want to set things like the requested Python version through the pip buildpack. So let's say you're doing some detection logic in the buildpack which allows you to figure out the Python version. You can think of something similar for Go: say you have a go.mod buildpack which figures out the version constraint from the go.mod file, and you have this Go buildpack that just provides Go.
B: There's no easy way for the go.mod buildpack to tell the Go buildpack "this is the version I require," and then for that Go buildpack to request the mixins from the stackpack.
C: Oh, because the mixins it requires are different. But consider: there is a way, with the metadata, when you make a require in the build plan, to tell the providing buildpack something else about it. We actually used to have version as a first-class construct on that until it was removed.
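That existing mechanism looks roughly like this in the Build Plan a buildpack writes during detect. The entry name and version constraint here are illustrative, and, as noted, version travels as free-form metadata rather than a first-class field:

```toml
# Build plan written by a hypothetical go.mod buildpack during detect:
# it requires "go" and passes the version constraint as metadata for
# the providing Go buildpack to interpret.
[[requires]]
name = "go"

  [requires.metadata]
  version = ">= 1.15"
```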
A: Scott, we talked about this last time; I had a crazy proposal for it. I'm pretty sure this isn't possible using the current API. We talked through a bunch of different options, or like interpretations of this, and I think we can't do it using the current...
A: ...you know, what we have. But one option would be: right now the provides API is really simple, it's just one name, right, and there's room to expand that to be more useful. You could have something a little weird in a provide that says: if a require matches this provide with certain metadata, I think is what we said, then it implies additional requires. And so a provide can then specify requires if it gets matched.
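A sketch of that idea. This is purely hypothetical syntax: the `implies` key does not exist in any spec, and the package name is invented for illustration.

```toml
# Hypothetical: a Go distribution buildpack's provide that, when matched
# by a require, implies further requires (e.g. system packages from a
# stackpack). The "implies" table is made up; nothing like it exists today.
[[provides]]
name = "go"

  [[provides.implies]]
  name = "libc6-dev"
```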
B: The issue I had with that is that last time we were discussing doing some, like, string matching for this, and it's not as simple as that. Because, let's say, your go.mod buildpack requests a Go version greater than or equal to 0.30, and your Go buildpack can only provide versions 15 and 16.
B: Simply because, let's say you want plug-and-play buildpacks, or you want some abstraction that abstracts away from the system packages, because you can provide Python using either a tarball, or through apt, or through yum, or through whatever else. You need some abstraction in the middle, rather than directly depending on apt to get your Python, because otherwise you have now forced this buildpack, which could technically be stackless, to be dependent on one specific stack.
C: This is a problem with our stackpack plan, not a problem with the build plan as a whole: we forced everything into mixin names, and therefore it can't expose agreed-upon abstractions. But that's also bad.
B: But I'm currently facing this without stackpacks. So: I use a package manager that does not require root and that can provide all sorts of packages, let's say conda, for example, which is a package manager that does not require root and can provide any sort of package, not just Python packages. And I'm using that as the way to sort of request these intermediate dependencies to provide them, say, for Go...
B: ...well, like a go.mod buildpack, and you end up with the same problem there. It's not a problem with stackpacks; it's a problem that will become more evident once we have stackpacks, but it is a problem with the spec right now.
B: So typically what happens is, let's say, again going back to the Python world, you have something like poetry.lock, which is used by Poetry, a separate package-installation tool from pip, but you can use it to convert to a requirements.txt, which is installable by pip. That's how the Paketo community Python buildpack currently works.
B: So the issue there is that you need to teach someone how to construct this whole flow, with a detect step that always passes, where what fails is the matching of the provisions and the requirements, and then you have to account for this whole thing, which is not the most intuitive thing.
B: Alternatively, if there was some simple API where this Poetry buildpack could be in front of the pip buildpack, it could modify the app workspace so that it converts the poetry.lock file to a requirements.txt file, so that when it gets to the pip buildpack, it just sees the requirements.txt file and the detection logic is simple.
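For context, a sketch of the detect side of such a Poetry buildpack under the "always offer, fail on plan resolution" pattern described above. It is written as a function for illustration; the lifecycle normally invokes `bin/detect` with the build plan path as an argument, and the plan entry name `requirements` is made up.

```shell
# Hypothetical bin/detect logic for a Poetry buildpack.
detect() {
  workdir="$1"    # app source directory
  plan_path="$2"  # build plan path (normally passed by the lifecycle)

  if [ -f "${workdir}/poetry.lock" ]; then
    # Offer a requirements.txt to a later pip buildpack; "requirements"
    # is an illustrative plan entry name.
    printf '[[provides]]\nname = "requirements"\n' > "$plan_path"
    return 0
  fi
  return 100   # detect fails: nothing to do
}
```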
C: I definitely know the issue you're talking about, though. A lot of times, like on the Java Paketo buildpacks, we basically just make everything pass always, and provide what it could provide. I can't think of a situation where, based on what I found in the app, I wouldn't run; I will always provide something, and then someone else can force me to run. I wonder if just having pass true/false was in some ways our mistake. Like, should it always just have been...?
B: That was... so that's why, like I said, this was not an issue; it's not a limitation, it's just how people understand the detect process. It makes it really hard for them to think of it in terms of matching provisions and requirements, rather than just one buildpack doing its own thing, where it detects, and if it detects true, it builds, which is sort of what all of our presentations currently sell: there's a buildpack that does detection, and then, if it passes, it goes to build.
C: I think even in the Paketo community we have a split, right? Java just has one giant group, and whatever runs is about matching provides to requires, but in some of the other language families there are many groups that each have a subset of exactly which buildpacks could possibly match together, and they basically use true/false detection.
B: So, like, you do detection from the first buildpack to the last buildpack, and resolution from the last one to the first one, which is, again, extremely complicated, but it's an optional extra phase that you could opt into. And if you do have the resolve step, you can make modifications to the build plan, and then the build step runs as it is.
B: That still keeps the logic within the buildpack, and you can have arbitrary logic there; and it's an extension which people don't have to worry about if they're not using it.
B: And you also don't have to worry about running the same thing multiple times, because only the things that have detected all the way through can then run resolve. So, like, with detect there was this other thing, that it all has to run in parallel so that you can do multiple things at once; but only once everything has passed through the detect stage do you sort of have a resolve stage, which is like part...