From YouTube: Working Group: 2020-05-20
Description
* Pack Publish Buildpack: https://github.com/buildpacks/rfcs/pull/75
* API Versions: https://github.com/buildpacks/rfcs/pull/79
* Plan Merging: https://github.com/buildpacks/rfcs/pull/67
* OS Extensions with Root Buildpack: https://github.com/buildpacks/rfcs/pull/77
C: Yeah, given the discussion in the meeting with us, we know at least about some lifecycle changes that are going to come down the pike, and probably some pack changes. When I talked to him in Slack, he's planning a pack release later this week. So that's an informal announcement; does anyone on the pack team, other than Xavier, want to clarify that?
A: Alright, so we went into feature complete last night, or this morning, for pack 0.11. We are going to work through issues, if we find any, for the rest of this week until the release date on Tuesday, at which point we will release pack and start feature complete for lifecycle, I believe, if that's still a thing. Emily, I don't know if that's changed.
C: Cool. I think actually we do want to; we just shouldn't discuss any of them. This was all about movement: making sure that anything that has been sitting around, we understand why. But mostly I'm really worried about the fact that there are four other RFCs that we're not going to get to. Let's see.
B: Team, please take a look if you have comments. Next is the add interface; this is on the list, so I'm going to skip it. Yep, publish buildpack with pack, this is on the list too, cool. Let's get to that. Process-specific environment variables: this is one I was going to take on; we'll get to it eventually. And for the exporter, report.toml is new.
C: I put out a call in Slack today, but I'll put it out here too: as we move into incubation, or as we attempt to move into incubation at the CNCF, we have all of the platforms that currently integrate Cloud Native Buildpacks covered. We have them listed, and the TOC is satisfied with them, but we'd like to fill out the number of end users.
C: So if you are, specifically, not a VMware employee, or not acting as part of your VMware duties, and you're working on individual buildpacks or building applications with buildpacks, please go ahead and let us know. You can either do it privately by DMing me, or you can respond on the thread there. We'd like to put you in the due diligence document that the CNCF TOC is reading, just to fill it out and make sure they understand who's actually using these things.
C: Going over it several times: Xavier did some work to change how the registries are defined. In the pack configuration there would be a registry entry with a type and an identifier, like a name, and the type would define the behavior. If it's github, it's going to create a PR and do all that stuff; if it's git, it's going to do a commit and push. And this leaves the opportunity to have a service type or something like that, where people could, you know, stand up their own service and not depend on GitHub.
C: So the actual structure of the registries kind of reminds me of the way that Maven manages repositories in a settings.xml. In each of these commands I describe the different flows for the different types: when the type is git it does this, when the type is github it does that. I think that's largely the same; it's more about how we control the path through that flow that's changed. I read all the comments that people left, and they're mostly on the alternatives, which seemed fine. The one I wanted to call out is the infer registry type.
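For reference, a registry entry of the kind being described might look roughly like this; the field names and URLs here are illustrative, not the final RFC schema:

```toml
# Sketch of pack's registry configuration with per-registry types.
default-registry = "official"

[registries.official]
type = "github"   # pack would fork the index and open a PR
url = "https://github.com/buildpacks/registry-index"

[registries.internal]
type = "git"      # pack would commit and push directly
url = "https://git.example.com/acme/registry-index.git"
```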
C: I couldn't think of any overlap either. Like, for github.com, it's github.com; nobody's running a service on github.com. For git, it would be the only one that supported a local filesystem path, or we could special-case it. I think for the git thing we could just special-case it, and then whatever service would just be HTTP, something that's not github.com. I don't feel strongly either way. I wonder.
C: Yeah, I would definitely add that. I think the point I was trying to make is that we can default to the web browser flow. Love that idea; I think it's probably the best UX for most people. But if we are going to support this kind of flow, we shouldn't have, like, a personal access token. We should actually have a real OAuth token, right?
B: I actually had one question on the config, under the "how it works" configuration section where you specify schemas. The registry names are allowed to vary; it's like registries is a map, and there's a special name, default, that you set equal to another name, but then the other keys are also names. That schema seems a little unusual, and in other places we've stuck to a pretty strict schema.
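The shape being questioned can be sketched like this; the names are hypothetical, the point is that "default" is a special key living alongside user-chosen registry names:

```toml
# "registries" is a map whose keys are user-chosen names,
# except "default", which points at one of its siblings.
[registries]
default = "personal"

[registries.personal]
type = "github"

[registries.work]
type = "git"
```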
C: Essentially, I'm proposing that the lifecycle will simultaneously implement different versions of the specification, and buildpacks and platforms can indicate which version the lifecycle should use: either by setting a buildpack API in the buildpack.toml of their buildpack, or by setting a platform API environment variable if they're a platform. The goal here is to not put onerous restrictions on the spec. So the spec, pre-1.0, can still make breaking changes and remove things, but the lifecycles are guaranteeing a type of compatibility. And I...
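Concretely, the two declaration points described here look roughly like this; the exact version values are illustrative:

```toml
# buildpack.toml: the buildpack declares which Buildpack API version it targets
api = "0.3"

[buildpack]
id = "example/my-buildpack"
version = "1.0.0"
```

while a platform would export an environment variable such as `CNB_PLATFORM_API=0.3` before invoking the lifecycle binaries, so the lifecycle knows which platform API behavior to use.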
C: With this, we're also hoping to offload some repetitive buildpack logic onto the lifecycle as well. Things like constraint merging, capability merging, and version merging, we're hoping to offload all of that onto the lifecycle and end up with an artifact that you are able to use more seamlessly with the buildpacks. And I think the big hope, one of the big hopes, with this is that by completely constraining the build plan and the buildpack plan, we can increase buildpack interoperability by standardizing how buildpacks communicate with each other.
B: I've caught a couple of things that I really like about this, or that I think are interesting. This starts to make use of... so, we have this build plan contract right now, where early buildpacks can say "I can provide these things" (it's sort of like a source), and then later buildpacks can send messages to the providing buildpacks through requires, which say "I need this thing to be available." And then, for detection to pass...
B: All of the provides must match up with at least one require, and no requires can be there that aren't fulfilled by a provide, if that makes sense. At least one provide: that's the old build plan mechanism. In the past we hadn't used the provide side for anything besides saying you can provide something, and on the require side it was difficult to find the right set of fields that made sense. In the requires there's a version field, but you don't do anything with it.
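The provides/requires contract being described can be sketched as a detect-phase build plan; the names and values here are illustrative:

```toml
# An early buildpack offers "node"; a later buildpack requires it,
# using the version field and arbitrary metadata discussed above.
[[provides]]
name = "node"

[[requires]]
name = "node"
version = "14.x"

[requires.metadata]
build = true   # the kind of build/launch designation mentioned below
```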
B: There's an arbitrary metadata field, and a lot of patterns around designating things as build or launch that were kind of awkward and required a lot of logic on the provider side to merge. This gives the provider more capabilities than just providing the name of things; that's the concept of it. It can say things like: lifecycle, take all the versions from all the things that require the dependency, and, you know, find the common one.
B: One thing to think about: we've talked about this because the build plan entries end up in the Bill of Materials. Even if something is marked build-only, does there need to be a mechanism where build = true is called out as special and gets excluded from the Bill of Materials, so it doesn't look like your image has a build-time dependency on it? Do we need to make that a special extra flag that is functional? Are there... I think...
C: So in Stephen's PR he had sort of three different cases: one was creating a builder with the extensions, the second was extending a builder, and the third was pack build. I moved the creation of the builder and the extending of the builder to alternatives, and actually Stephen and I talked about this, and I think we're just going to remove them from the RFC as a whole and address them independently, because they present a whole bunch of other complexities. So the focus of this is really on pack build.
C: Let's see, yeah. So a buildpack would be defined as a privileged buildpack in its buildpack.toml, and there would be a list of buildpacks. We talked about... actually, I'm showing two lists of buildpacks here, one for the build image and one for the run image, but I think we talked briefly about merging that into a single list.
C: Maybe have a mechanism for, you know, saying "this one doesn't run against the run image" or something, but to the developer it would just be one list of root buildpacks. That list would run before the regular application buildpacks, and then each build would produce a single layer, which you could essentially slice and exclude from, using launch.toml and some new addition.
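A privileged buildpack declaration of the kind being discussed might look like this; the "privileged" flag comes from the RFC discussion, not a released spec field, and the id is made up:

```toml
# Hypothetical buildpack.toml for a root buildpack.
api = "0.3"

[buildpack]
id = "example/apt-installer"
version = "0.0.1"
privileged = true   # runs as root; its build emits a single layer
```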
B: What's the example here... I forget the example... this one, the libpq one. It's trying to install libpq using an apt buildpack. Sometimes it's going to install libpq on top of an image that already has libpq, and therefore it would be much faster than not, and it can control whether it gets that pre-built image or whether it gets a fresh image every time, using a flag that's in its buildpack.toml. I forget where that is in here.
C: I understand why we want to call it a buildpack: just, sort of, fewer domain concepts in the ecosystem. I think one of the things I find confusing about it is that the rules about how it works are in some ways the opposite of a buildpack's. A buildpack must write everything it wants persisted to layers; this one can't write anything it wants persisted to layers. And I wonder if it's different enough that using the same name ends up being more confusing rather than simpler.
C: You can use these somewhat interchangeably in your order, and the result of running one, like putting one of those in your detect phase or your build phase, what happens when it's done: the outputs are still interpreted the same way they would be for us. If you put your regular list of buildpacks in there, the build will fail fast.
C: I think... yeah, it's definitely possible. I think we can help that with proper error messages. To your point, I think that's concerning, but on the actual difference, like "may write to layers" versus "may not," I feel like the differences there are few enough that the abstraction, and what you gain from composability and reuse, makes it worth keeping the same interface and the same name.
C: But, like, half of what's in the buildpack API right now, like what goes into the layer.toml file and what that means for how things are cached, and environment variables, stuff like that: most of that doesn't apply here, right? Or am I misunderstanding?
C: It seems like that's a good place to incubate it, because otherwise we'd have to do a lot of clarifying about which parts of the buildpack API pertain to these or not. Also, it's experimental enough that I think it makes sense to introduce it as an extension and fold it in later, I agree. It's also not something that's obviously going to be supported across every single platform that wants to use Cloud Native Buildpacks.
B: As a counterpoint, this feels like core functionality. The ability to install operating system packages seems like a basic thing that a lot of people ask for, and I feel like platforms should generally be expected to support it. And using kaniko, you know, you don't need a Docker daemon; you can do it all in userspace. There's no trade-off aside from performance, right, that you encounter, especially because the platform administrator could control what buildpacks are available, so they could control that for an individual user of a platform.
C: Yeah, well, originally there were the two lists, which would make it not possible. But if we have one list of buildpacks, then that's fine. But if you start interweaving how they run, I guess... well, in this proposal anyway, as I say, you start having files changed in the app source code that are owned by root, and I'm not really sure what we would do about that.
B: We can even hoist them to the beginning. So if you define a meta-buildpack that uses a root buildpack to install some packages, and then you put a buildpack before it, it could just be a rule that all root buildpacks are taken in order, regardless of what order they're specified in, and hoisted to the beginning before the build starts. It might be a little confusing if someone asks "why isn't it installing this after this?", right?
A: That's at least a concern, right. But at the same time, I do like the idea of, you know, buildpack types. We already have two types, so adding a third type doesn't scare me all too much. What I do think we should do is emphasize the fact that there are different types of buildpacks, and then elaborate on that a little bit more, at least on the docs website; how that translates into the spec would be very similar, I think.
C: Yeah, I agree. I think there's a powerful construct that Stephen talked about, a meta-buildpack having grouped buildpacks, and meta-buildpacks generally. Like, I can totally imagine a world where you were building a buildpack but you don't want to assume something is on the stack image, and maybe, for the rest of the things that you want to transform and run, you put in a meta-buildpack that sets up the rest of your buildpacks for success in that chain, and then it's like one nice package.
B: They're definitely... I think mixins are more about LTS packages that are, you know, ABI-compatible and can be updated out from under the app, right, and this is not about that. This is about a package that could be from a PPA or, you know, something a lot more flexible than what the mixin mechanism provides. So.
A: So I work on a platform, Dokku, that actually implements, you know, OS packages. We don't use buildpacks for any of this stuff, but the operating-system-level packages typically don't come from a PPA. That's an option for our users, but a lot of them just use, you know, packages that come from the base operating system; they're just not installed by default.
C: You could imagine, potentially, to Javier's point, it really being an order: a builder image could provide, like, an apt buildpack, for instance on a Debian or Ubuntu stack, that knows how to parse... like, you define an interface; it knows how to get the packages and install them, and that's the first buildpack that gets run in a builder group. Okay.
A: That makes sense. From my perspective, something like that would work for me, given that I don't think it's possible, or likely, that we would, as a group, all share the same notion of needs for OS packages. There are packages that you might have that are, you know, files in the repository, or there are other steps that you need to take in order to prepare the base image to support them. So I think if it's just a buildpack, that works, I...
A: So it's kind of crappy, I'll admit, but there's a set of, like, ten different files that you can specify. You can specify, like, environment variables that you want to have in scope when you're running a command; that's useful if you're installing, say, SQL Server dependencies. You can specify an apt.conf, you can add new PPAs.
A: You can specify, like, the list of packages that you want to install, just, you know, however, and then you can also specify a directory that has a list of Debian files that you would install onto the operating system. And it works through that in a particular order that, like, makes sense of it as a whole. All of that, I think, we could remap to a buildpack by just moving that shell code into the build code and then reading those files directly from...
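Dokku's actual file names aside, the core of that flow in a root buildpack's build script might look roughly like this; the apt-packages file name and the helper function are hypothetical, and a real root buildpack would execute the resulting command with root privileges rather than just printing it:

```shell
#!/usr/bin/env bash
# Illustrative helper for a root buildpack's bin/build:
# reads a plain-text package list (one name per line, blank lines ignored)
# and prints the single apt-get invocation that would install them.
set -euo pipefail

build_install_cmd() {
  local list_file="$1"
  local pkgs
  # Collapse the list into one space-separated string of package names.
  pkgs=$(grep -v '^[[:space:]]*$' "$list_file" | tr '\n' ' ' | sed 's/ *$//')
  echo "apt-get install -y ${pkgs}"
}
```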
A: From my perspective, I think this is kind of useful, because we do have folks who want to have, for instance, a base operating system that's CentOS-based or something, and I can't imagine that it would make sense for the buildpacks group to standardize how to specify packages across different operating systems, and, like, the minutiae for each individual operating system. Instead, each platform might say: this is a thing that we support for our service, whether it's Ubuntu or what-have-you.
A: So a question from my end is: how does this factor into, like, the mixin landscape, then, for managing packages? Because I noticed there's that paragraph in the motivation saying that mixins already allow buildpack authors to create buildpacks that depend on an extended set of OS packages without affecting build time, and then the next sentence is: however, it is not uncommon for application code to depend on OS...
B: The mixin is a contract between a buildpack and a stack image that says: as a buildpack, I should only be run on stack images that have, you know, ImageMagick, right, built in, because the buildpack logic itself, or the thing that the buildpack does, will always use this, you know, package. And it's only, in the case of the stacks we've defined so far for the project...
B: It's only for LTS packages, you know, like Ubuntu 18.04 LTS packages, that may fall very out of date in terms of their upstream version over time but receive security patches, and can be sort of safely rebased out from under the application. And they're very performant, because it's the stack author's responsibility to keep them in there, right. This is about: you have a Ruby app... like, ImageMagick is a bad example for the previous case; in the previous case it's like you want, you know, curl, right, something very generic.
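The buildpack side of the mixin contract just described can be sketched in buildpack.toml; the stack id and mixin name here are illustrative:

```toml
# A buildpack declaring it may only run on stacks that
# already provide the named mixin.
[[stacks]]
id = "io.buildpacks.stacks.bionic"
mixins = ["imagemagick"]
```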
B: As a Ruby developer, right, you could start with the base image and use the apt root buildpack to say "I just need ImageMagick," and maybe you want to get it from an ImageMagick PPA, like a really new version of it that has the latest image-processing algorithms, right. So it's more about the app developer, and less about the, you know, person creating the environment for app developers.
B: It's more like... because many times that performance advantage of having it all baked in there is worth the trade-off, if that makes sense. I think the outcome is more like: people may be more encouraged to use the base image. Organizations that really care about it, that want the smallest possible run image in the world, right, they're gonna use the base image and then tell their app developers to go figure out what packages they need, or create root buildpacks that install, you know, different sets of packages...
B: ...you know, for the app developers, right. Whereas organizations that want everything to just work, to be fast, and that care about fast iteration might use the larger base images. It gives you optionality, though, because there would be a way to translate between these different ways of providing packages, I think.
C: Because, let's say I have a buildpack that declares a mixin, but there's an apt buildpack that could provide the same thing. There's no way to do both, like, not require it if I'm on a stack that already has it, versus collaborate with the apt buildpack if I'm not, unless we allow the buildpack to modify the metadata of the sort of ephemeral stack that it creates, and then add the mixins after the root buildpacks run. That's kind of what I'm literally suggesting.
A: Mixins are generic, right, so mixins can be used for just about anything, similar to labeling resources in AWS or, more specifically, in CI tools, saying "hey, I want this very specific runner." You're narrowing down exactly what you want from a stack perspective; that's the way I see mixins. And it just happens to be that, you know, in my example we use them to reflect OS packages, because that's one of the RFCs that, yeah, I believe we kind of took on for the buildpacks project.
B: Exactly, a mixin is not necessarily a full package; we just defined them that way for the io.buildpacks.stacks.bionic stack. We actually have a different notation for saying "this is a mixin; it's not an individual package, it's a collection of utilities," right. I think the framing is: mixins are about things buildpacks want, and, you know, OS extensions are about things applications want, right. And then the problem, the real underlying problem, is if we let buildpacks arbitrarily install packages...
B: If we encourage that workflow, right, where every buildpack can just specify a list of packages that need to get dynamically installed ahead of time, we really slow down builds everywhere. Everybody's gonna put tons of packages in there, every single build will have an apt-get step before it, and we will lose a lot of the advantage of the, you know, performant model, where the buildpacks only write individual layers and the thing under them can be rebased out from under them. I do think that risk is inherent in...
B: ...you know, modularizing root buildpacks out. But at least it gives platforms the ability to control, you know, that feature, like what root buildpacks are available, right. A platform could turn that off and just say "hey, you know, everything has to be done through the mixin interface, so things don't get really slow." As long as we keep it as a contract between the app developer and the platform, more or less, right, we take less risk of, you know, slowing things down. Let's...
A: It's still gonna be very frustrating for an app developer if they want to use some buildpack and they need ImageMagick; they have a buildpack they think can install it, but after the fact they find they can't, because we have two ways of describing OS packages. The buildpack has the mixin set, and then they installed the apt buildpack, which also provides ImageMagick, but they're still incompatible.
B: Sorry, okay, just a note for people to look through the launch.toml stuff here, because there's an interesting caching system where excluding things from the image is tied into caching them and being able to get them back more persistently in the next run. There's some complexity there that we didn't talk about that's worth looking into, for people that are interested.