From YouTube: CNB Sub-Team Sync: BAT - 25 Feb 2022
A: Hello, everyone, and welcome to the BAT team sync. We do have a document attached to this meeting; please sign in and add any items to the agenda.
B: Sure. I'm Rob Lytle, and I'm an engineer at Google working on our buildpacks offering, which is used by the services Cloud Run, Functions, and App Engine, plus another variation of App Engine called Flex. But anyway, that's what we're using it with.
B: Okay, Aiden.
C: Hey Rob, I'm Aiden. I work with Sambhav at Bloomberg, and we maintain the machine learning platform, so we use buildpacks to build lots of images for machine learning purposes.
E: I'm Emily. I work at VMware on a variety of stuff related to open source, our open source investment in buildpacks, including the Paketo project. I'm a core team member of the Cloud Native Buildpacks project, and I help out with efforts like kpack, which is a VMware open source project that uses Cloud Native Buildpacks.
F: Hi, good morning, I'm Johnny. I work for Salesforce; I manage the languages team. We developed the buildpacks, the legacy buildpacks, and now we're developing the Cloud Native Buildpacks for Heroku's forthcoming transition onto CNBs, as well as Salesforce Functions.
G: I'm Forest. I work at VMware, and I'm a maintainer on several of the Paketo buildpacks.
C: Hi, I'm David. I work at VMware too, on buildpacks, mainly the Cloud Foundry Java and Cloud Native Java buildpacks.
D: I work at Salesforce, in Heroku, with Johnny. I'm the architect over kind of the build and languages side over here, helped co-found the Buildpacks project, and I'm on the core team along with Emily.
B: Great. Juan?
C: Hi guys, I'm Juan. I also work at VMware, on the same team as Mikey, on the CNB project, contributing to the lifecycle and all this stuff.
C: Hey, I'm Javier, VMware as well, working with Juan and Mikey. Let's see, I'm a platform maintainer that used to primarily focus on pack, but now I'm just kind of floating around with different ideas and stuff.
B: Cool, and then our host... I forget your name, and it's not in the Zoom.
A: On to our next agenda item, which is status updates, I think, from our last meeting.
A: I believe we've now updated the buildpack API guides in the docs to 0.7. Thanks to Aiden, we've also documented the SBOM stuff from pack, because it was used in our hypothesis guide, which is a great thing. I think we'll be starting on the documentation of the Go library soon after, but yeah, that's pretty much it from me.
A: And then the next item is RFCs. I have one that I wanted to discuss; it's also on the agenda. It's the draft RFC for a testing library for buildpacks. It's still in draft state, because I haven't yet fleshed out all the implementation details, and I just wanted to go through the idea with the rest of the folks in this meeting to see if it makes sense or not.
A: The idea is twofold. One: provide a general Go integration testing library, similar to occam, which provides a wrapper around pack and some utilities for testing the output container and some of its properties. The other idea was to actually provide this as a CLI driven by some config file, either YAML or TOML or whatever, so that you can declaratively say: hey, build such-and-such image for me, and then run these kinds of tests on the output container.
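As a rough sketch of the first idea, a Go wrapper of this kind might look something like the following. The helper names (`buildImage`, `lookupEnv`) are hypothetical, not part of occam or any existing library, and it assumes `pack` is on the PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// buildImage shells out to `pack build` for the given app directory
// and builder, surfacing pack's combined output on failure.
func buildImage(image, appDir, builder string) error {
	out, err := exec.Command("pack", "build", image,
		"--path", appDir, "--builder", builder).CombinedOutput()
	if err != nil {
		return fmt.Errorf("pack build failed: %v\n%s", err, out)
	}
	return nil
}

// lookupEnv finds KEY=VALUE in a list of environment entries, such as
// the Env list from a built image's config.
func lookupEnv(env []string, key string) (string, bool) {
	for _, kv := range env {
		if k, v, ok := strings.Cut(kv, "="); ok && k == key {
			return v, true
		}
	}
	return "", false
}

func main() {
	// Example assertion against a stand-in env list; a real test would
	// read this list out of the built image's config.
	env := []string{"PATH=/usr/bin", "PORT=8080"}
	if v, ok := lookupEnv(env, "PORT"); ok {
		fmt.Println("PORT =", v) // prints: PORT = 8080
	}
}
```

In a real test, `buildImage` would run before the assertions, and the env list would come from inspecting the output image rather than being hard-coded.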
A: The idea is pretty similar to Google's container-structure-test library, where they sort of have this YAML where you can define what properties you want to test for, the only difference being that, ideally, this tool would also be able to test for CNB-specific things, like which buildpacks ended up being in the output image, or certain environment variable values.
A
So,
since
buildback
sets
these
environment
variables
during
runtime
as
opposed
to
what
these
are,
the
libraries
just
for,
which
is
just
the
config
being
able
to
test
for
them
and
doing
tests
for
things
like
small
contents
and
one
other
thing
which
I
think
like
really
frustrates
me,
is
like
checking
for
rebuild
logic
like
listing
out
various
scenarios
and
checking
how
your
caches
are
being
reused,
so
providing
some
easy
method
for
declaring
those
kind
of
tests.
A: The reason I wanted both the Go library and the CLI tool is: one, the Go library would be useful for people using libcnb, but as a project, so far I don't think we really offer any tools for testing buildpacks that work across languages. Go is a great candidate for creating standalone binaries, so if we're writing this library in Go, it might make it easy to create a standalone CLI tool that can also be used by other languages for testing buildpacks.
A: I have a few libraries here that already do parts of what we plan to do, so our implementation can either be just augmenting some of these libraries, using their functionality directly as exposed and then adding any buildpack-specific things in our version, or just writing things from scratch, which is something that occam does, as opposed to something like testcontainers, which again provides something very similar to occam in the sense that you can write tests with it.
A: So those are the sort of three libraries that I've currently been investigating or looking at, but yeah, this is still in draft. I just wanted to float the idea and sort of the motivation behind why we're doing this. As of last week, we approved the utility buildpacks RFC, which means that the BAT team is now going to own actual buildpacks, and although they will be small and hopefully not very complex, we would still want to test them, because they'll probably be used in production-quality builders.
A: So far, the current open questions are whether we want just the Go library or the CLI as well, and, if we go with either of those, do we choose to use existing tools and wrap them, or do we want a sort of different kind of interface? I know we have maintainers for some of these tools here in this meeting, so I'm just curious to understand what they think about these other tools, like testcontainers or container-structure-test.
D: I'd be interested to hear from the Paketo folks, because they're also maintainers of occam, on their thoughts, and whether we chose to... well, I guess there have been open questions around potentially donating occam wholesale as a starting point for some of this stuff. If that was not a path, does that mean we would have to pick?
D
One
of
the
other
options
would
probably
be
like
one
of
my
questions
and
then
what
is
the
likelihood
of
like
say
we
build
this
out
regard
the
solution
of
other
people
like
we're,
building
this
for
build
pack
authors,
so
are
teams
like
paquetto
interested
in
using
something
like
this,
even
though
they
have
something
like
aqua
today,.
E
I
can
speak
this.
A
little
bit
came
up
at
the
kitto
working
group
earlier
this
week,
sort
of
the
subject
of
whether
folks
would
be
comfortable
donating
outcome.
I
don't
think
that
it's
totally
off
the
table,
but
I
feel
like
we
would
need
to
do
some
work
to
make
that
group
of
folks
feel
comfortable
that
they
would
be
able
to
make
changes
to
it
once
they
donated
it.
You
know
in
a
short
enough
time
frame
to
enable
their
use
cases.
E
I
think
everyone
kind
of
agrees
having
a
shared
testing
library
would
be
good
for
the
project,
but
right
now
there's
a
very
short
feedback
loop
into
getting
changes
into
occam
and
it
would
sort
of
like
block
picado
folks
who,
like
want
to
ship
new
features.
If
the
feedback
loop
was
long
after
donating
it,
so
I
think
well,
it's
a
live,
possibility.
A: Daniel isn't in this meeting, but the other library that Paketo depends on is libcnb, and I think we've been relatively quick in terms of cutting releases and getting things out quickly. But I don't know if that inspires any confidence for the Paketo team in what sort of iteration cycles they can expect.
E
I'm
you
know,
I'm
very
invested
in
sort
of
this
bill
pack,
author,
tooling,
sub
team
and
and
pro
donation.
I
can't
speak
for
everyone
else
in
picado
and
it
would
definitely
have
to
be
a
decision.
That's
beyond
me.
So
it's
I'm
feeling
like
it's
a
little
bit
unfortunate.
That
forest
has
this
sign.
E
That
says
I'm
away
right
now,
because
I
think
it'd
be
good
to
hear
perspectives
from
some
of
the
other
potential
folks
who
have
a
bit
more
hesitancy
there,
especially
because
you
know
not
to
dive
into
like
internal
pochetto
decisions
too
much,
but
I
feel
like
there's
a
group
of
bill,
packs
and
picado
that
depend
on
libsy
and
beam
and
the
folks
maintaining
those
are
like
would
be
comfortable
with
this
because
we've,
you
know
historically
used
tooling
out
of
the
cmb
project,
but,
as
we've
talked
about
in
this
group
in
the
past,
there's
also
like
the
packet
library,
which
is
a
different
set
of
language
bindings.
E
That
paquetto
maintains,
and
I
think
folks
that
are
used
to
using
that
are
comforted
by
having
packet
and
all
the
tooling.
That
is
like
crucial
to
them
under
their
control,
and
I
think
that
you
know
they'd
want
more
influence
in
the
governance
of
this
sub
team
in
order
to
feel
comfortable
letting
go
of
some
of
that.
D
Is
there
a
possibility
of
building
a
thing
on
top
of
akamon,
the
like
like
say
it
gets
donated,
but
like
the
keto
team
is
still
able
to
basically
build
stuff
on
top
that
then
potentially
gets
upstreamed
without
as
much
like,
assuming
that
they
think
it's
too
slow
to
upstream
or
to
like
get
features
in
initially,
as
we
kind
of
work
out,
some
of
those
kinks.
E
Yeah,
I
think
that
all
makes
sense.
I
think
what
we
should
do
is
like.
Maybe
if
we
want
to
talk
about
arkham
donation,
specifically,
we
could
have
a
meeting
with
some
of
these
folks
and
ryan
from
the
pacquiao
team.
I
know
he
would
like
to
be
more
involved
in
this
group,
but
this
meeting
is
at
a
time
that
he
can't
make
being
on
the
west
coast,
so
I
feel
like
if
we
could
either
shift
this
meeting.
E
C
A
C
A
C
A
A: There are also some other issues with occam as it currently stands. It has a hard dependency on packit and a few other things that we as a project would not be willing to accept. It explicitly tests for some things that are very opinionated about the way Paketo creates buildpacks, and at least those parts would need to be separated out; at the very least things like the jam packaging tool that they have, and also some dependencies on other libraries.
A
I
know,
for
example,
it
uses
freezer
which
is
still
under
forest
username,
so
it
it's
it's.
There
are
a
few
things
that
we
would
have
to
figure
out
before
we
can
donate
autumn,
but
it's
it's
not
like
those
can't
be
resolved.
It's
just
that
we
will
have
to
resolve
them
either
way
either
like
whether
we
as
a
project
want
to
depend
on
autumn
or
whether
the
cato
wants
to
donate
auckland.
I
think
the
the
differences
will
have
to
be
resolved
either
way.
A: And on the "more work" part, I still think it's not a decision that we can make; the Paketo folks obviously have to figure out how to decouple occam from packit and all the other dependencies it currently has, and make it generic enough that it can be used either by buildpacks using libcnb or by buildpacks using packit.
A: The other thing is that the container-testing aspects of occam currently do just two things: it wraps pack and provides some nice utilities for building the images, and then it also has some wrappers over Docker for testing the output images. Just from personal experience, testcontainers is a very well-maintained project; I think the Java folks would agree.
A
The
eco
library
is
also
something
that
have
used
and
it's
more
versatile
and
flexible
than
the
container
testing
capabilities
that
occur
currently
has
it
also
allows
you
to
do
other
things
like
load
the
testing
configurations
through
a
docker
compose
file,
and
it
also
has
like
better
management
around
cleanups
of
test
containers
and
the
resources
they
consume,
so
that
that
was
my
other
hesitancy
as
well
like
do
we
also,
as
a
project,
want
to
take
ownership
of
writing
a
docker
raffle?
That
does
all
of
these
things.
A
So
can
we
take
an
established
library
like
test
containers
and
just
use
that,
because
auckland
currently
does
the
docker
wrapping
part
as
well,
apart
from
the
back
wrapping
part,
and
the
last
concern
was
like
strictly
around
how
easy
does
it
make
it
to
write
generic
cli
tool?
If
that's
something
we
want
to
do
as
well,
so
that,
regardless
of
the
language
you're
using,
can
you
write
scenario
based
tests
for
your
buildback,
like
checking
for
like
rebuild
logics
or
the
output
container
stuff?
Is
previous
language,
agnostic.
A
Like
that's
like
a
stretch
goal,
but
it
would
be
great
if
we
could
reuse
whatever
we
were
creating
for
not
just
for
google
based
build
packs,
but
for
buildbacks
written
in
any
language
just
being
able
to
easily
test
your
output
containers
from
bill
packs
and
checking
the
rebuild
logic.
Especially,
I
think
that's,
where
most
of
the
works
that
I,
as
a
buildback
author
encounter,
are.
D: I guess I'm wondering: do we have a preference on which of these options? Or do you, Sam? You listed out multiple things, and I mean, we should definitely explore them, but say we're not going to pick occam, because of potentially wanting to use one of these other tools that are more Docker-focused, so we don't have to work on those pieces, like Emily was saying. I guess, if you could pick, assuming you could get any of these things, which one would be your preferred?
A: The occam bits are really nice; they wrap pack adequately enough to expose the sort of flexibility you want for building the images. But most of the logic is: build the image, then test the output properties, so we sort of need both. So in my ideal world, it would have been the pack testing logic from occam.
D: I guess, in that vein, how much are you exposing in the CLI? Like, what's the...
A: The CLI is a completely different story that would require us to explicitly have both bits, right? It would require us to provide an interface for building containers and then testing them, which is a very generic domain. That's why I didn't want to include it directly as an outcome of this RFC, because the team may not feel comfortable owning that whole thing.
E
It
seems
like
the
big
value
proposition
of
having
a
cli,
in
addition
to
a
library,
would
be
for
sort
of
language,
agnostic,
testing
right,
and
I
in
my
mind,
because
you
know
we're
trying
to
be
only
take
on
take
on
a
limited
scope,
so
we
can
very
responsibly
maintain
it
like
whether
that
is
worth
it
or
not
would
really
depend
in
and
how
broad
the
interest
is
from
folks
writing
bill
packs
in
other
languages
that
would
like
to
use
it
like.
I
think
we
need
a
critical
mass
of
people
that
are
like.
D
I
mean
I'm
selfishly
interested
because
we
do
have
some
testing
stuff
that
we've
written
in
rust
and
it
would
be
nice
to
basically
shed
a
bunch
of
the
container
stuff
and
use
a
thing.
That
is
not
our
hacking
things
around,
basically
docker
and
pack,
and
then
I
imagine
for
all
the
bash
bill
packs.
I
assume
there
is
not
a
comprehensive
bash
testing
thing
for
testing
those
basketball
packs.
A
That
declares
that
here's.
What
my
build
parameters
are
here
are
the
list
of
things
I
want
to
test
for.
So
I
just
want
to
share
how
dana
structure
test
does
it,
so
this
just
does
testing
on
the
output
container.
It
doesn't
actually
build
it,
but
you
can
describe
things
like
hey.
I
want
to
run
this
command,
but
before
that
I
want
to
set
up
some
things,
and
then
this
is
my
expected
output.
A: And if I run that command, you can test for other things, like the existence of certain files and their contents, all the fields in the image config, and the environment variables. But we could potentially add more logic here which is buildpack-specific: find out which buildpacks were detected; find out the actual environment variables that were set by the launcher, as opposed to simply looking at the image config; find out what the output SBOM is, whether it has a specific component or not, and things like that.
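For reference, a minimal container-structure-test config looks roughly like this; the commented block at the end is a hypothetical CNB-specific extension of the kind being discussed, not part of the actual tool:

```yaml
schemaVersion: '2.0.0'

commandTests:
  - name: "node is on the path"
    command: "node"
    args: ["--version"]
    exitCode: 0

fileExistenceTests:
  - name: "app workspace exists"
    path: "/workspace"
    shouldExist: true

metadataTest:
  env:
    - key: "PORT"
      value: "8080"

# Hypothetical CNB-aware checks (not supported by container-structure-test):
# cnbTests:
#   detectedBuildpacks: ["example/nodejs"]
#   launchEnv:
#     - key: "NODE_ENV"
#       value: "production"
```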
A: And then have something that allows you to declare input arguments for building the image: declare which buildpacks you want, which builders you want, whether you want to rebuild it and check the same set of tests again, just having some option like rebuild = 2 or something like that, or being able to say: invalidate this specific buildpack's cache and rebuild.
A: I want to see whether the same set of container tests would still pass or not. Just being able to declaratively say all of that would be great. You know, Go can be very verbose for a bunch of these tests, which at least prevents me, at times, from writing rebuild tests as comprehensive as I would like. But yeah.
B: We have something less mature but similar, I think, similar to occam; I'm just kind of browsing it right now. We wrap Docker and pack, we build the image and then run a container, and then we usually run an app inside the container and test that the app responds in some way. So that's something we would definitely be interested in replacing, I think, yeah.
B
I
would
caution
when
I'm
when
I
hear
about
talk
about
the
cli,
it
sounds
complicated
right,
whereas,
like
the
the
kind
of
go
solution,
similar
to
outcome
is
maybe
easier
for
us
to
integrate
with,
because
you
know,
obviously
we're
gonna
go
so
maybe
start
there
and
then
tack
on
the
cli
later
I
don't
know,
but
it
might
make
sense
to
have
at
least
one
first
class
language
that
doesn't
have
to
go
through
that,
like
kind
of
wrapped
experience.
B: So yeah, we would be interested, for sure.
F: Terence, it sounds like what he's describing they're doing at Google is similar to what we've just kind of undertaken or implemented in the Rust library. Is that accurate? ("Yeah, it's pretty similar.") Yeah, so I mean, if we wanted to integrate with this, we'd have to wait for the CLI solution and then weigh it against what we're doing. I guess I need to understand better what the trade-offs there are, what we stand to gain versus what we have now.
D: Yeah, and it's an opportunity to contribute stuff upstream as well; having unified tooling somewhere allows you to basically share both problems and code upstream, to make it better for everyone.
F: Yeah, I get the virtue. We'll just have to figure out how that works with the current model that we've adopted to do stuff in Rust, and whether or not that's something the team is willing to transition out of and make contributions on this, because it sounds like we would be the first to show interest in the CLI, so we might have to participate in getting that done.
B: I think there would be... I'm just looking at occam, and it seems way, way better organized and more mature. So I don't know if this is what we'd want, if this is what, you know, libcnb would want, but for sure, yeah.
A: I'll remove the CLI bits from the RFC for now and focus just on the Go library, and...
D: Yeah, you could probably just move it to like a future work section or something; I think we used to have that in RFCs before we made them all gigantic.
D: I definitely would love to see and hear your ideal path, assuming all the stars align between all the various parties, of what you would like to see it look like.
D: I didn't have anything. I know Emily brought up potentially trying to move the meeting to accommodate at least one other person, if not more.
D
How
much
interest
there
is
in
that,
seeing
that
everyone
else
here
can
make
it
to
this
time
slot.
But
I
remember
when
sending
out
the
doodle.
It
was
a
nightmare
trying
to
find
a
time
slot
at
all,
and
this
was,
I
think,
like
one
of
the
only
options.
So
I
don't
know
if
that's
something
we
want
to
revisit,
but
I'm
always
open
to
it.
B: Yep, okay. So we use buildpacks to create Docker containers that are then run on Google Cloud Build to build customer applications, and Cloud Build doesn't give us a lot of options for making that fast. So when the container is big, it gets slower.
B
So
one
of
the
things
I
was
looking
at
was
the
number
of
binaries
that
are
created
when
you
use
build
packs
kind
of
the
way
they're
supposed
to
be
used,
where
you
create
lots
of
small
binaries,
basically,
and
so
like
as
an
experiment.
I
looked
at
our
node.js
build
packs,
which
we
have
like
five
or
six
of
them.
B: Each one is like eight to nine megs, and I created a single buildpack that called the main function of each one, just to see, and it was almost the same size as the largest one, just slightly larger. So I'm kind of curious: is there anything in the project for this, or is there any appetite for potentially combining a bunch of buildpacks into one, and then having some way for them to know which one they're supposed to run?
B: Or, if you want to keep them in separate layers somehow, something like that. So I was wondering if anybody else is interested in this kind of use case, and, yeah, if there's anything out there for it.
A
You
can
talk
about
a
few
things.
I've
noticed
one
people
tend
to
combine
the
detect
and
build
binary
into
one,
and
we
as
a
project,
provide
that
option
so
based
on
the
execution
path
the
library
can
detect
whether
detect
binary
was
called.
The
binary
was
called
and
then
dispatch
it
accordingly.
The
other
option
I'm
seeing
is
for
exact
d
binaries.
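As a sketch of that dispatch idea (a minimal illustration of a multi-call binary, not the libcnb API): both `bin/detect` and `bin/build` would be symlinks to a single binary, which picks the phase to run from the name it was invoked as.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// phaseFor maps the invoked binary name (argv[0], typically a symlink
// like bin/detect or bin/build) to the buildpack phase to run.
func phaseFor(argv0 string) string {
	switch base := filepath.Base(argv0); base {
	case "detect", "build":
		return base
	default:
		return ""
	}
}

func main() {
	switch phaseFor(os.Args[0]) {
	case "detect":
		fmt.Println("running detect")
		// A failed detection would exit with status 100, per the
		// Buildpack API's detect contract.
	case "build":
		fmt.Println("running build")
	default:
		fmt.Fprintf(os.Stderr, "unknown phase %q\n", os.Args[0])
		os.Exit(1)
	}
}
```

Because both phases live in one binary, the per-buildpack size overhead the Node.js experiment above measured collapses to a single copy of the shared code.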
B: Yeah, I've never looked at how the binaries get built or where they get built, so I guess I don't know enough to know if this would be something our Go buildpacks would add support for, like libcnb and all that.
A: I think the main issue would be how pack would package these buildpacks, so that your symlinks all point to that multi-call binary with the appropriate path, and you can figure out which buildpack was called. I don't think pack currently has a way of putting the same binary, via symlinks, into different buildpack paths. You can construct a builder like that manually, without pack, using Docker; you'd potentially just have to mess around with the file paths.
B: Okay, so we are doing the thing where we combine build and detect, but I don't think we use libcnb for implementing it; we have a few lines where we do it ourselves. So that's interesting, I should figure...
B
I
should
see
how
the
like
blessed
way
of
doing
it
and
we
should
drop
what
we're
doing,
but
if
we
were,
if
we
wanted
to
change
pack
and
the
appropriate
things
in
build
packs
to
support
this,
would
that
be
something
the
project
would
be
on
board
with
or
or
would
it
need
to
go
through
more
review?
Or
what
do
you
think.
G
I
I
I
if
I
had
to,
I
can't
say
that
the
project
would
endorse
it,
but
I
would
say
this
is
an
issue
that
I
see
like
the
size
of
builders
is
already
inside
the
piketto
project.
Even
with
you
know,
supporting
what
I
would
say
is
a
pretty
popular
set
of
languages
is
already
enormous
to
the
point
where
we've
come
up
with,
had
to
come
up
with
solutions
to
make
smaller
builders,
because
our
other
builders
are
too
big
and
they're
slow.
G
So
I
would
be
interested
in
seeing
something
where
you
could
reduce
the
size
of
builders
and
still
have
lots
of
build
packs.
B
Yeah
that's
interesting.
We
recently
like
have
been
making
like
we
make
decisions
about
which
dependencies
to
bring
in
because
we
bring
in
a
dependency.
That's
like
large.
You
know.
Maybe
it
adds
a
few
megs,
but
it's
a
few
megs
times
like
n
programs,
so
it
ends
up
being
a
lot.
So
that's
interesting.
You
have
the
same
problem.
H: Yeah, we definitely have that too. There's like 100 to 200 megs of stuff that's just the build binary, or the main binary, that gets used across all the buildpacks. I'd be interested in ideas on how you could tackle that, because you have that buildpack image; you'd have to consolidate everything into one image, almost, otherwise it's hard to cross that boundary. Possibly something in libcnb.
H: I mean, what we do with the exec.d binaries is: the library provides an interface, and then it provides kind of a general runner that will run a specific implementation of that interface based on what the binary name, or the symlink, is called.
A: I don't think libcnb can solve this specific issue, because the main blocker there would be the buildpack directory layout. We could technically combine the binaries, but there's no way for us to deal with the folder organization that the buildpack API expects: being able to put the appropriate symlinks and buildpack.tomls in the appropriate places so that you can package it up using a standard tool like pack. That's not possible.
E: There might be some complication on the spec side as well, if we're talking about how folks are distributing these buildpacks as buildpackages; in the spec, sort of, each buildpack is a separate... it's a standalone layer, packaged up in a specific way, right? Well...
A: That's what I was imagining. Actually, now that I think about it, you can also do it with pack: you just put the multi-call buildpack binary somewhere on your stack, and then, for each of your buildpacks, hard-code the symlink to that exact path on the stack, and then use pack to create the builder. So you can do it with pack as well right now; it's just a bit hacky, that's it, yeah.
E: I'd have to think hard about whether there's any situation where it breaks the spec. It definitely would work around the spec, and it would have some requirements which would be impossible to express in a compliant way: you'd be depending on a really specific stack image, or a really specific buildpack, in a way that you couldn't do in a first-class manner.
E: We did that, and it helped, but then we ran into situations where they wouldn't run if you were emulating AMD64 on an Arm Mac, so we had to undo it all.
D: Cool. Well, I think we're over time. I don't know if we have next-step action items for that particular topic.