From YouTube: CNB Sub-Team Sync: BAT - 2021/12/17
B
That's a great question: yes, I was just setting up the livestream. One second.
B
Should be okay. Hello, everyone. There is a document attached to this meeting. Please sign in and put any agenda items on there, and we'll get started.
C
Yeah, sure. Hi, I'm Manuel Fuchs. I work at Salesforce, where I maintain the Java buildpacks, and I've recently been spearheading, you could say, our Rust framework for writing buildpacks.
B
We can also introduce ourselves. I'm Sam, one of the maintainers on the Buildpacks project, and I work at Bloomberg on their ML platform.
C
I'm Terence, also a maintainer on the Buildpack Authors' Tooling team. I work at Salesforce with Manuel, like he said.
D
I'm Dan. I also work at VMware, on Paketo buildpacks.
C
And I'm David. I work at VMware too, also on the Paketo buildpacks, and on the Java buildpacks specifically.
B
Let's get the ball rolling. Status updates: I don't think there have been any significant ones on the libcnb side. I know Forest put in a couple of PRs to update the v2 alpha branch.
B
A
On
that
front,
I
finally
did
get
the
opportunity
to
try
and
convert
one
of
our
build
packs
and
packet
itself
to
using
lipstick.
I
have
a
branch
pushed
up
on
packet
right
now,
that
is,
can
be
exclusive
branch.
Oh
it's
using
anything
like
that.
It's
not
a
hundred!
It's
not
a
hundred
percent
perfect
because
I
was
just
trying
to
spike
a
bunch
of
stuff
out.
A
So there's, very obviously, no SBOM usage right now, and there are still some other assumed structures, but it was not as frustrating as I was concerned it was going to be.
B
Let's see, I don't see anything new here. There is this additional exportable layers RFC, which I updated. It sort of spun off from the idea that we should be able to export additional layers, apart from just the layers directory and the workspace directory.
B
Apart from the app directory that you'd like to export, the original motivation was this, but then some other use cases came forward. There are often times when you have multiple buildpacks reading from the same source directory, and one buildpack may go ahead and wipe the entire source directory after it's done building, and then the next buildpack has nothing to work on. So let's say you're trying to build an application that has a JavaScript front end and a Go back end: Go buildpacks will often go ahead and wipe out the entire source directory, whereas the Node buildpacks might need it to do other things, like npm install or whatever. So you have to take care of things like preserving order, and knowing when to do workspace modification, so that the rest of the buildpacks are still able to detect and build upon the remains of the app directory.
B
So the idea was that you could declare these additional volumes by name. You could say: hey, as this buildpack, I'm declaring a volume called aws-extensions, or user-config, or something like that. And then, at the run image or the builder image level, you can set labels that map these names to a specific value.
B
So you can have the buildpack, or multiple different buildpacks, say: I just want to write to the user-config directory. But based on the stack, that might be /home/cnb/.config for one stack, or it might be some other user's directory if the stack is entirely different.
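The declaration and mapping being discussed might look something like this sketch. None of these keys exist in the current buildpack spec; the field and label names are made up to illustrate the proposal.

```toml
# buildpack.toml (hypothetical): the buildpack declares a named volume
# it wants to write to, without knowing the concrete path.
[[volumes]]
name = "user-config"

[[volumes]]
name = "aws-extensions"

# Builder or run image label (hypothetical key): the operator maps each
# declared name to a concrete path for this particular stack, e.g.
#   io.buildpacks.volume-mappings = { "user-config" = "/home/cnb/.config" }
```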
B
This mapping is optional and, if not provided, it just defaults to a new directory: workspaces/ plus the volume name that's specified. So the idea would be that, instead of a singular workspace directory where we mount everything and output everything, there would be a workspaces directory, each of these volumes would be a subdirectory under there by default, and the current workspace directory would just be equivalent to workspaces/default.
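The default layout described above could be sketched like this; all of these paths are illustrative and nothing here is in the current spec.

```shell
# Instead of a single /workspace, the lifecycle would mount one
# workspaces/ directory with a subdirectory per declared volume.
mkdir -p workspaces/default workspaces/user-config workspaces/aws-extensions
# A volume with no explicit mapping defaults to workspaces/<volume-name>;
# today's single workspace corresponds to workspaces/default.
ls workspaces
```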
B
There were multiple considerations around this RFC: making it easier for platforms like Tekton to implement it using a single volume, and making it configurable so that the buildpack is not coupled directly to the stack, so you can change the mapping of these volumes based on the stack and what it's compatible with, while still allowing buildpacks to declare a common set of volumes they want to work upon. And right, there's one more extension to this that I've not yet proposed here, but in the future what we wanted to do was mount the app source code as a read-only volume that all buildpacks detect and build upon, with the output kept separate.
B
The idea would be that subsequent buildpacks can have one or more output directories where they can produce outputs, but the pristine input source code is not changed, and it's removed at the end of image creation. So you don't end up with unnecessary garbage in the output image, only the files that the buildpacks chose to put there.
C
A buildpack expects us to be in /app, and that's fine, because I work at a company where we basically control the run image. And I guess, as a buildpack, you could also specify the stacks it supports, right? With stacks maybe going away that's less true, but I guess, for now, before that stuff gets done, you could specify them. Maybe that's less true in the future.
B
Yeah, I had this clause to... again, I've not included a bunch of things, because people were super opposed to buildpacks declaring which directory paths they want to write out to.
B
And then the lifecycle can simply create the appropriate symlinks and directory permissions when things are being built, and the output would still be preserved, because the way the lifecycle would do this is by using the mounted workspaces volume.
C
I don't know if that applies in this particular example you had, or the use cases you had at Bloomberg that are kind of driving this stuff; obviously at Bloomberg you kind of control that, so it doesn't actually matter in practice. But I do think it hurts portability. Whether that matters, I don't know; I don't know how much of a concern that is.
B
I mean, I don't have any strong opinions. In terms of hurting portability, I have yet to see buildpacks being used by someone who didn't know what the buildpack expects. It's either an operator taking a buildpack from someone else and adapting it to their stack, or the buildpack shipping with a set of base images or a builder image that it's compatible with. I've never seen people search the internet for a random buildpack and put it in their current builder or stack or whatever.
B
So I'm not sure how applicable that would be. And in some cases the buildpack would have access to the information, so it could choose to simply not run if it doesn't find the appropriate things.
B
Yeah, it still can fail, I should add. But the exportable volumes part, this specific thing, would be passed to the buildpack during detect and build. So if you declare that it's using the specific...
C
I think it's mitigated, because the run image gets to dictate where stuff goes, yeah. So I imagine, if you did that, yes, you were probably screwing yourself in that case, but you explicitly chose to do that. I think probably the normal world is that you're putting stuff in places that hopefully don't impact rebase, but I guess there is that risk, if the run image chooses to do that.
A
Again, just thinking about that situation where you've got a dodgy buildpack that tries to replace the user-config dir in a subsequent build with something that has different contents.
B
That might be too complicated, because these are supposed to be shared directories that buildpacks can coordinate on together.
B
You can always... I guess I could put one thing in there: if the names are different but the values are mapped to the same thing, it might not make sense.
B
So it should instead be that both say aws-extensions; then they both know that there's a common thing that's valid, and they should be able to collaborate. I could also make these namespaced, which could be something like paketo/aws-extensions. That way you know that all the Paketo... like, these...
B
These are free-form names, so you can put whatever you want in there. We could make it namespaced so that, like...
A
It seems like it kind of would limit at least my usage of it, because I like to try to put all of my dependencies, or as many as I can, in layers, so that I can reuse them and not have to re-download them. Can you just talk through that motivation quickly?
B
The reason I didn't want to treat them as layers is because then you get into the whole restore question: how do you restore a shared directory that multiple buildpacks have worked upon? Each of them would have to do some of its own bookkeeping to figure out whether to delete certain things or add certain things. So it's always easier to restrict all of that logic to the buildpack-specific layer directory, and then, if they find that this common workspace is something they want to copy things out into, they can do that during each build.
A
So this is really to facilitate cases where my hands are tied on where aws-extensions are looked for, that sort of thing. Okay, I got you. I was being a little greedy and hoping to be able to solve some other problems that I'm having, but I don't think it's quite right for those.
B
It's mainly... look, the support-Dockerfiles RFC allows for some of this, but it might lead to some performance degradation, since you're doing run image extensions and all of those things, and it also doesn't handle rebase. The way it handles rebase right now is that we rebuild the base image, reusing some of the older layers or something; the specifics around rebase in the Dockerfiles RFC are still up in the air.
B
So the way it would be controlled is the way the lifecycle currently makes sure that you have access to the workspace, that you have the appropriate permissions for it; all of these would originally be directories under workspaces.
B
Yeah, okay, so that's also why it goes back to that different build-time and runtime user thing, right, where the build-time user is different from the runtime user. Using root as the user during build essentially ensures that any other random runtime user can't modify those things; having different build-time and runtime users with controlled permissions sort of achieves the same thing.
B
For restore, I guess the way it would work is: you compile whatever assets you want to restore, you put them in your layers directory, and then your build process does the job of either copying or symlinking those things over to the app directory. So it's on you, the same way that right now, if you're doing anything with the app directory, it's on you.
A
We already have this; I know our .NET buildpack currently already does it. We have a series of layers, and we co-locate all of the contents of those layers into the workspace using symlinks, so that we can do both. I think it makes sense in terms of a workflow; it's a little clunky, but yeah.
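The symlink co-location pattern described above can be sketched like this; the layer and file names are made up for illustration.

```shell
# A tool lives in a reusable, cacheable layer...
mkdir -p layers/sdk/bin workspace
printf 'sdk contents\n' > layers/sdk/bin/tool
# ...and is co-located into the workspace via a symlink, so the build
# sees one directory while the file stays in the layer for reuse.
ln -s "$(pwd)/layers/sdk/bin/tool" workspace/tool
cat workspace/tool
```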
A
I think I like the general idea. I like the idea of having removal of source code baked into the buildpack process in some ways; I'm a little concerned about having it baked in as remove-all, because a fair amount of the buildpacks that I work on build a compiled artifact. So you don't really need any of the source code at the end, unless the user has static assets that they're referencing, or something to that effect.
A
And so then is it on an individual buildpack level to also have an implementation that would allow a user to say... a good example of this would be: say I have a Node front end that transpiles into static assets that I serve with httpd or something like that. Is it then on the buildpack to basically say: okay, Node and its build process say they want everything, but I don't actually want everything, I only want, you know...
B
So that's where these multiple exportable volumes come in. In that future case, which I'm not yet proposing in this RFC, the idea would be that, let's say, there's a workspace directory for Node, where it copies over everything.
B
If some other buildpack wants to collaborate on that workspace directory, it just says: hey, that's one of the things I want to collaborate on, and puts its output there. If it doesn't want to do that, the Node buildpack can copy over everything it likes into that directory, and this other buildpack can just look at the source code and copy the specific parts it wants into a different workspace directory, and then it depends on which process is invoked.
B
You remember you put in that RFC to have processes launch in a specific workspace directory? So your other buildpack can say: I want to launch my stuff in this directory, with this limited workspace, and the Node buildpack can say: no, my process will launch in this directory with all these files, or something like that. And if they want to collaborate on things together, then they say: okay, the common directory is web-workspace, or something like that.
A
Sure, I think the idea is interesting. I probably shouldn't speculate on it too much, considering there's not actually an RFC written, so...
D
I was gonna say, a couple of things bother me about it. It adds a lot of copying, and an application can be an arbitrary number of files; if I now have to copy those every time, that could add up. If you have an application with a gigabyte of files, that's going to add a couple of seconds to every build, so that's a concern. I do like the idea of having some sort of fixed original copy of things, because there are times where I'm like: oh crap, I deleted that already, I can't get the pom file anymore, or something like that, and it's just inconvenient to work around. So yeah, I do like that. If there were something like a workspaces/original that's always there and always read-only, that you could count on, that would be kind of cool. I could see some advantages.
B
Yeah, I mean, the reason I didn't write that yet is because we were still unsure of how it would be perceived. You could also have an implementation where, as you said, in order to keep things the way they are right now for backwards compatibility, we mount two volumes: one which is read-only and has all the source code, and the same thing as a writable volume in the default workspace, so that the buildpacks that function as they do right now keep working.
B
They'll see that common workspace fully populated and writable, and they'll do modifications as usual; once things get deleted, they won't know what to do for the rest of the process. But the new buildpacks that rely on these two different directories could still reference the original source code in the read-only version, and put things back in the writable directory if they want. So that's the alternative: you mount two things, one is read-only, one is the common writable thing with the same contents. But yeah, this is all still open.
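The two-mount alternative described above can be sketched like this; the paths are illustrative, not part of any spec.

```shell
# A pristine read-only copy of the source alongside a writable working
# copy with the same initial contents.
mkdir -p source workspace
printf 'hello\n' > source/main.go
cp -R source/. workspace/        # writable copy buildpacks may modify
chmod a-w source/main.go         # pristine copy stays read-only
rm workspace/main.go             # a buildpack wipes the working copy...
cat source/main.go               # ...but the original is still available
```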
C
Does this... I know in the past you've had a bunch of RFCs kind of in this vein, and I think we closed a handful of them. Does this address some of the issues you had with the read-only layer stuff, with Python, and with having multiple buildpacks kind of modifying and touching things?
B
This does. This was more to solve the issue where certain applications need to be in certain paths, and I don't want to use image extensions just to have them be there, because it's not that I need root, and it's not that I'm overwriting existing files in the base image; it's an empty directory, I just want to write something in there and export it out.
B
The Paketo Java buildpacks currently do some magic to restore things, like disassembling the home directory, before they can do cache restore; I don't know if it's for anything else. The other thing was settings or files in the home directory for the run image. So, let's say, you're setting certain config values, like the AWS host or something, in the output home directory, under .aws/settings or whatever.
B
So I think, for that stuff... I'm not an expert on Red Hat stuff, I've just seen that pattern, but you'd probably want to do it as an image extension. I don't know if it touches any other files, but that's the other place where I've seen individual files living in some specified subdirectory that does not use /var or something like that.
C
Yeah, I mean, I know for me, and probably some others, something that would be helpful in this RFC, beyond the example you have, would be what the use cases are: practical use cases of things that require or use this feature. Because I feel like this past year for CNB has just been feature creep, and the scope is so large that we're trying to cut all these things out of the API, like, you know, bash and other things.
C
So I definitely think that would help the RFC. It's not that this isn't a useful feature; it's just about being able to enumerate it more explicitly, instead of people having to imagine and create that in their heads. Because, besides the AWS stuff, there are clear use cases, and it'd be helpful to just pass those down.
B
Right, it's somewhat of a similar use case, where something just needs to be under a certain directory tree.
C
Cool. Were there any other RFCs or things that you want to touch on before we close out for the rest of the year?
B
So the idea is that we currently output a bill of materials through buildpacks, and it's buildpack-specific, but we don't have one for the run image, and a buildpack couldn't just scan the whole run image and generate one, because it might be based on some other random things.
B
It might do any number of other things. And then there's also how it plays with the other Dockerfiles RFC, which involves dynamically installing applications: how do you then generate an inventory of all of those applications while the image is being generated, without embedding some weird binary that does it in the output image, or something like that?
B
So the current idea has been that we take the input image; pack has an additional command, called pack attach-sbom or something, which takes an input image reference and the bill of materials that you want to attach to the image as a single file, and then creates that output image, which you can then push and publish and use as a run image.
B
The exact specifics are that pack will put the output SBOM under cnb/sbom plus the SBOM file name, and then store that in a separate layer, the diff ID of which is stored in a label, so that the lifecycle can use it to restore or merge things if it needs to in the future. And the same thing for the Dockerfiles RFC would involve that.
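The mechanics described above can be sketched like this. The layer path and label key mirror what was said in the meeting, but are assumptions for illustration, not a spec.

```shell
# The SBOM file goes under cnb/sbom inside its own layer tarball.
mkdir -p cnb/sbom
printf '{"packages":[]}\n' > cnb/sbom/sbom.json
tar -cf sbom-layer.tar cnb
# A layer's diff ID is the digest of the uncompressed tar; recording it
# in an image label lets the lifecycle find the SBOM layer later to
# restore or merge it.
diffid="sha256:$(sha256sum sbom-layer.tar | cut -d' ' -f1)"
printf '{"io.buildpacks.sbom":"%s"}\n' "$diffid" > label.json
cat label.json
```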
B
So the final idea we had was that you can just put this gen-packages binary in the builder, or on a separate volume that doesn't end up in the build image or the run image. And then this binary can scan the files on disk and output the result somewhere. And then, when the lifecycle is exporting the new extended run image, it can use the same process that pack is doing here to attach the output SBOM file on disk to the run image.
C
For first party? No, just wait for Emily to come back. She texted me last night, so she's alive and well, and she may be great to...
C
Yeah, I do think, if anything, a lot of that should be driven out of this working group, because I do think a lot of the caching concerns come from buildpack authors trying to do things, or, I guess in the case of caching, not do things, yeah.
C
Are your caching concerns shared across all of Paketo, like both the Java and the non-Java side, or are they also different, or haven't you talked about it?
D
I don't want to totally speak for Forest, but I think we have similar implementations of how we download and cache and store things in the images. I highly doubt they're compatible, but they're doing the same thing at a high level, and I think we have similar caching issues too, in that we want to have a goal of: a binary gets downloaded once for a user, and they don't have to download it multiple times because they decided to build two different apps.
A
For sure, not having to download the binary a bunch of times is definitely something that is interesting. The biggest use case that we're worried about is enabling customers in air-gapped environments to continue to use buildpacks; or, you know, customers, users, any of those. So yeah, you're correct, Dan: we do have a very similar implementation, in that we just kind of shove a bunch of binaries, or compressed binaries, into images, and it's pretty ugly.
D
It's hard to share things. If Forest has a buildpack that includes Java, and I need to include Java, those aren't the same layer, and so then things get bloated really fast. There's a lot of concerns like that.
C
In your use case, Forest, it feels like the asset cache thing that Daniel came up with definitely came from that spirit, of trying to address probably the air gap. I feel like air gap was definitely at the heart of that proposal.
A
For sure. I mean, Daniel was working on Paketo before moving into focusing more directly on CNB, and we started that RFC while he was still on Paketo, so it was more focused on air gapping. But I think there are definitely going to be issues in the future; I mean, we're already hitting weird sizes with our builders, and it's only a matter of time before it gets even worse.
A
Builders that don't even have things vendored into them are already in the magnitude of hundreds of layers, and hitting hundreds of giga... not gigabytes, jeez, megabytes.
A
So it will be difficult, and there's not a great way of allowing users to bring their own set of dependencies, either.
B
The other side of this, and maybe it's not something that we as a project have to solve: one other thing that has come up in discussions around this in the past is eStargz layers, the lazy-pulling capability, where you can bundle these layers in the output image, but they're not pulled in until they're read.
B
So your image is still small, with the minimal functionality that is needed, and the assets that you have will only be pulled in when they're read. You can transport the whole image to a registry; it's still air-gapped, and when they pull from it, it will only pull the necessary layers that the build detects.
B
I'm not sure if there are any layer restrictions on this sort of overlay, but since you're not overlaying all the layers at the beginning, I'm guessing you won't end up with 128; that's more of a Unix restriction, I guess, than an overlay restriction. But I'm guessing that's the limit you're talking about, right?
C
Yeah, I didn't mean to start us on that whole thing, and I know we're at time, but I'm looking forward to continuing this in the new year, and hopefully we can actually do something about it.