From YouTube: Working Group: August 8th, 2023
A
All right, let us go ahead and get started. Could I get a designated note taker for today?
B
I think this is my first time here. I can start, if this is the right mic and people can hear me. This is Terence; I'm on the CNB upstream project work at Salesforce Heroku and have been doing CNB stuff for a long time now.
C
Hey, first time, long time. I'm James, used to be at VMware, Pivotal, now Google. First time at this meeting, though.
A
Next up we have the multi-arch buildpack, but this is currently drafted. Jericho, you are here. Do you have anything that you want to talk about on this right now, or, sorry, should we just skip over it for this week? No?
A
We'll skip forward then, thank you, cool. And then the last one is this proposal to introduce new GraalVM buildpacks. I don't know what the status of this is. It looks like Dan came back with a rather large write-up. I will...
F
So to give you, yeah, just to give you the basics: Oracle has published new distributions for GraalVM Community as well as Oracle GraalVM, and the licenses have changed. The Oracle GraalVM, so the proprietary one, has a permissive license that probably allows us to distribute it, and basically lets one start having a buildpack using this new GraalVM from Oracle.
F
Well, this proprietary GraalVM distribution, for that matter, is also highly desired by the Spring Native team.
F
Why? Because, apparently, the performance is much better than the Community one, and since, once again, people can be using it because of this new Oracle license, it should totally make sense that people start leveraging that for their Spring Native work or any other Java native scenarios. So where are we at right now?
F
So we are basically discussing whether we need to add new buildpacks, or whether we can just tweak existing buildpacks and, using an environment variable, actually let the user choose whether or not they want to use it.
F
So yeah, packaging, basically packaging is one discussion that is ongoing. And actually, yeah, you can see that yesterday a POC was actually published, just to check what it would look like.
F
So there's that, and most importantly, licensing, Oracle licensing. The Spring people actually told me that they would check with Oracle, and with the Cloud Foundry Foundation at large, how it can work: what is required, what is the required paperwork. So once again, it's a license free to use, but you know there are some caveats along the way. You cannot distribute it along with your software, or something like that; or you can use it to build your image, but you're not supposed to embed it, something along those lines.
F
So there are conditions and restrictions that need to be checked by lawyers, and probably Cloud Foundry lawyers, since, well, Paketo belongs to the Cloud Foundry Foundation somehow. And yes, we are looking into it.
A
Those are the only outstanding RFCs we have right now.
A
All right. After that, we have CNB updates and questions.
B
There was the removal of the CNB target ID, I think, from some of the spec changes from last week, while I was out as well. Okay, that's part of the stack-removal bits that have been coming down the pipe as well.
A
Okay. I don't know how that affects anything that we are currently doing right now, or if it does, so I'd have to check that out, but...
A
I guess, since Terence, you're here: is there anything coming down the pipeline that you feel we should know about, that we might not currently know about?
B
There's been, I guess, just a lot of discussion, both on our side, on the Heroku side, about how various groups are looking at stack removal, I guess, as a whole. I know that's like a big breaking change for a lot of folks, and I think only as of recent, as we've been trying to kind of get these spec PRs through...
B
...have we been getting a lot of feedback on how it will or won't work with folks. So I know David has been pushing back a lot on some of the spec stuff, and I think that's made Natalie a bit frustrated, kind of at the 11th hour, as she's been trying to get some of these spec releases out, because she just wants to get Dockerfile extensions out into people's hands.
D
And the spec PR removing it as well, yeah, I'll dump it in there.
D
No, it's okay. Yeah, so that's the CNB stack ID discussion, and then we had some discussion somewhere on Slack. I mean, this is sort of why I decided to join today, at Jericho's suggestion, and also Terence and James decided to tag along, just because we discussed this a little bit on Thursday in the CNB working group: how do you deal with the removal of stacks, etc.
D
So, I mean, if you want to finish the project updates and stuff, then we can maybe use the open mic for these points, just because we thought, hey, we'd just show up and ask a bunch of questions around how you're planning to deal with some of these things, and what your stances on some of these changes are.
A
I think project updates will be pretty quick. The only main one that I can think of is that, straight up ten minutes before this meeting, I merged the initial implementation of the base UBI builder. There hasn't been a release yet; we'll have to go and do that. But it is, I guess, the first builder we have that's using stack extensions pretty heavily, doing a little bit of a blend of stack extensions and traditional buildpacks.
A
So yeah, that's sort of our stomping grounds and proving grounds for stack extensions. Hopefully it'll be just as easy as me going in and cutting a release, just making a GitHub release for the builder, but I don't necessarily have faith that it will quite be that easy, because it never is for the first iteration of this. But yeah, my goal is to have it out by, you know, the end of this week or something like that.
A
And then, I guess, Michael Dawson and Ozzy can start using it and continue to sort of prove out what kind of feedback needs to come back to us and what kind of feedback needs to go back upstream to the CNB project.
B
You said it uses CNB stack extensions. Is that the Dockerfile stuff, or is that something else? Yeah.
A
I think it's using like a stack-extensions buildpack; I have not been super involved in the proving out of this.
A
Typically someone who would be better equipped to tell you what that is, like Michael Dawson, would be here, but I don't personally know. Tim, you've been looking at a couple of things from Node.js. Do you know off the top of your head?
B
Yeah, I mean, I can also just talk to Ozzy whenever they're back; they hop into the CNB stuff too.
D
So for this discussion, maybe I'll give a bit of a summary of what's happened over maybe the last week or two. Terence asked me to look into, or give feedback on, some of the stack-removal consequences, so like the PRs for the spec changes. And so this community discussion, the #230 link, is maybe worth a read, and also, if any of you haven't read it, at the far bottom...
D
...there's a proposal in there as an alternative to this target ID. The cool thing about the removal of stacks is that you can now, as a buildpack, say you're only targeting a particular distribution name and version, and nothing more specific, right? So the target ID was in the spec, but it would have only been exposed to the buildpack at execution time. You couldn't...
D
...there was actually no mechanism to match against a specific target ID, which is great for portability, because I think, in theory, you know, assuming that the base images for, say, Paketo and Heroku are similar enough, it should be possible that you run, I don't know, the Ruby buildpack or something of Paketo's on top of the Heroku base image, in the Heroku builder, right? And I think that's generally really great, because this is sort of the whole point of CNBs, this kind of interoperability.
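As an editorial aside, the distribution-based matching being described can be sketched roughly like this. This is a hedged illustration, not lifecycle code: the field names (`os`, `distros`, `distro_name`, `distro_version`) and the sample data are assumptions for the example, not the exact spec schema.

```python
# Sketch of target matching once stacks are replaced by targets: a buildpack
# declares distribution name/version pairs, and the platform checks the
# image's distribution info against them. Field names are illustrative.

def target_matches(declared, image_info):
    """Return True if the image satisfies one of the buildpack's declared targets."""
    for target in declared:
        if target.get("os") and target["os"] != image_info["os"]:
            continue  # wrong operating system for this target
        distros = target.get("distros")
        if not distros:
            return True  # no distro constraint: any distribution is fine
        for distro in distros:
            if (distro["name"] == image_info["distro_name"]
                    and distro["version"] == image_info["distro_version"]):
                return True
    return False

# A buildpack targeting ubuntu 22.04 would match any vendor's ubuntu 22.04 image:
declared = [{"os": "linux",
             "distros": [{"name": "ubuntu", "version": "22.04"}]}]
heroku_like = {"os": "linux", "distro_name": "ubuntu", "distro_version": "22.04"}
print(target_matches(declared, heroku_like))  # prints: True
```

This is exactly why the matching is vendor-neutral: nothing in the declared target names Paketo or Heroku, only the distribution.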
B
Do you want to share your screen, maybe, for a while? I don't know how familiar folks are with, I guess, some of the stack-removal changes with targets and stuff, or not. Let me see if...
B
But I think seeing the YAML stuff might help with, oh yeah, yeah, okay, visualization.
D
You can have multiple images? That's pretty cool, I didn't realize that. So, concepts, sorry, stacks are going away, and what you do is you say: I'm targeting Ubuntu, I don't know, 22.04, right? And that's cool in, I don't know, the average case of someone running on the full Google App Engine image, the full Paketo one, the standard Heroku image, right. And I know that there is a difference between the build image and the run image; the run image could potentially be super slim.
D
But let's ignore that for the moment, for the sake of this discussion. Now, obviously, your builds, your pre-compiled Ruby, right, or your PHP or something, they're linking against dynamic libraries, and so one of the things that the stack ID previously provided was a certainty, some assurance, that the libraries that your compiled binaries, like language runtimes, etc., need are actually on the image at build and run time, and then you don't have to do anything.
D
Theoretically, you can just run a buildpack that targets 22.04, say the Heroku buildpack that expresses this, on the Paketo builder, but maybe a library is missing, right? And so, if something fails, you, as a buildpack author, maybe have to ldd all your binaries and shared objects to figure out whether something is missing, and then give an error message to the user, etc. And so I realized, hey, here's an idea.
D
The target ID, right, really should not be some arbitrary string like "full" or "minimal"; it should be namespaced, so that a buildpack can at least say, hey, it's io.paketo.run.minimal, for example, right? And then Google App Engine would have their own version of minimal, with different libraries. And so if I'm the Paketo buildpack and I see as a target the io.paketo one or something, all right...
D
...that's a stack that, you know, because it's still effectively, let's call it a stack, that's something I know about, so I can make some educated guesses about the libraries that are out there, etc. If not, I maybe print a warning to the user, right? If I see com.heroku or whatever, a string I don't recognize, or if the string is missing, I'm saying: hey, you're running on the right operating system at the right version, it's Ubuntu 22.04, and I know that, you know, if you install, I don't know, libssl or libgssapi...
D
...that's going to work. But maybe that library isn't installed in the first place, because you're running some custom minimal image, or something from another vendor. But at least, if we had a target ID that wasn't sort of an arbitrary string, but somehow a reverse-domain notation or something, it would be a bit safer for buildpack authors to code against. And then there were some ideas where I said, hey, you know, you could couple yourself sort of loosely against the CNB target ID.
D
You could just hope that your binaries work and print a warning, etc.; you could do sanity checks. And so then I said: but for any of this to really work...
D
...the CNB target ID should be namespaced in a way like this, and also shouldn't contain a version number, right? Because the version number is sort of encoded in the Ubuntu version number, for example. And so the idea would be that something like io.paketo.images.run.minimal means pretty much the same set, right, the same philosophy towards what is minimal, whether it's Ubuntu 22.04 or 24.04, or, if you have an alternative builder for UBI rather than Ubuntu, it would still sort of be the same libraries.
D
Obviously the package names are different, but as a buildpack you don't really care about that. And so then some blah blah blah around ABI compatibility, etc. And there were some discussions back and forth, and at some point I realized, hey...
D
...this is really still not that great, because if you have a target ID and you code against it, then, for example, a Paketo buildpack that has a simple binary, I don't know, nginx or something, right, and it doesn't really need a lot, so it works on io.paketo.images.run-minimal, but also on run-full, and also on run-supersize, right. And then you introduce a fourth one, and then you have to touch all your buildpacks, but, more importantly, all the buildpacks out there, right?
D
They would all have to encode all these strings. So I said: hey, why don't we make it basically a list of labels, and then you just declare compatibility? And so that's the proposal a bit further down, where instead of saying "hey, this is my ID" and "this is my ID", right. So this is not a real file, but this is effectively what comes out of the run image analysis.
D
You would say: hey, I have a list, because this is what you would have to do, right: is it in one of these, right?
D
But this is not enough, because you also have to check for io.paketo.run.minimal and for gcp.standard and all these things. And it's not great long term, because we would have to chase after every single buildpack author out there if one of us ever adds a variant for these base images; it becomes really hard to introduce new ones, etc.
D
Also, we'd start testing them against the Paketo builder, right. Like, for interoperability purposes, a desirable long-term outcome is that we have at least basic tests for one another's builders and buildpacks. And so, if we instead made it a set of labels, so this would be the full metadata, effectively, right, instead of saying "I have an ID" it's a list of compatibility strings, and the Heroku one would be this, and then, as a buildpack...
D
...obviously you then have to split that env var, because it's exposed as a comma-separated environment variable to the buildpack, but the buildpack can say: hey, is it one of those, right? And as a person who wants to have their own base image, I can say: hey, I upload a base image, and all the packages that are in the standard Paketo and the standard GCP and the standard Heroku...
D
...images, in this case, are in there, right. So I declare all these labels, and immediately Google's buildpacks just work, and Paketo's buildpacks just work.
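To make the proposal concrete for readers: the comma-separated compatibility check described here might look like the following in a buildpack's detect logic. This is a hedged sketch; the environment variable name `CNB_TARGET_COMPATIBILITY` and the label values are invented for illustration, not taken from the actual spec proposal.

```python
import os

# The run image declares a list of compatibility labels; the platform exposes
# them to the buildpack as a comma-separated environment variable, and the
# buildpack checks set membership against the bases it was tested on.

KNOWN_COMPATIBLE = {
    "io.paketo.images.run-full",
    "io.paketo.images.run-minimal",
    "com.heroku.images.run",
}

def compatible_targets():
    """Parse the (hypothetical) comma-separated label list from the environment."""
    raw = os.environ.get("CNB_TARGET_COMPATIBILITY", "")
    return {label.strip() for label in raw.split(",") if label.strip()}

# A third-party base image can declare vendor labels and existing buildpacks just work:
os.environ["CNB_TARGET_COMPATIBILITY"] = (
    "io.paketo.images.run-full,com.example.images.run-custom")
matches = compatible_targets() & KNOWN_COMPATIBLE
if matches:
    print(f"known-compatible run image: {sorted(matches)}")
else:
    print("warning: unknown run image; dynamic libraries may be missing")
```

Note the asymmetry this buys: introducing a new base-image variant only requires the image to declare one more label, rather than every buildpack out there adding one more ID to match against.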
D
Obviously, it's now my responsibility, as the maintainer of that builder and its build image, to ensure that if the Paketo run-tiny image gets a new package, I add that to mine too, but at least it would be possible, right. And similarly, if Heroku ever wanted to add some sort of supersize or minimal version on top, we could just add that as a compatibility label, and all existing buildpacks out there that, let's say, work with the standard Heroku image would continue to work, right.
D
You could easily have debug variants that, to the buildpacks, behave exactly the same. If, I don't know, another large corporation that's in the business of cloud services wanted to embark on the CNB train, right, they could, for a while at least, declare in their builder images: hey, we are compatible with Paketo and GAE and Heroku. And immediately all these buildpacks would start working, right, until they have enough traction to convince the buildpack community to add tests...
D
...and then maybe these checks, right. And so, obviously, in terms of checks, and for a buildpack to know "what exactly am I even running on", this is really only relevant, I would say, in 99% of cases, for binaries that the buildpacks install, right: web servers, language runtimes, other programs that dynamically link. But still, I think it would be a cool thing. So this is what we discussed on Thursday.
F
Did you mention that if, for example, one of the labels is actually, I don't know, shipping a new version, how do you express that? Because there was no version in the compatibility list, as far as I could see.
D
Yeah, but, well, builders always pull the latest build image, right. So if you update a base image and say, let's say, I don't know, at Heroku we add ffmpeg to the stack image, so our base image, right, and then on the next day some buildpack author sees it and says: amazing, finally I can put some stuff in. And so they say: hey, great, if I see it's a Heroku thing, I install an ffmpeg wrapper, whatever. The next time someone picks that up, right, the base...
D
I actually thought about versioning, right, so that you have the compatibility labels and then a slash and then some sort of number. But that makes life quite a bit harder for the buildpack authors, because now you need to parse that as well. But yeah, I mean, that could be a possibility, right. Because, theoretically, let's say at Heroku we see a lot of Paketo buildpack usage, and we just decide that for our builders we don't just declare compatibility with Heroku, but also with Paketo...
D
...I think that should, across vendors, just generally not be a thing that we do. But let's say we did it, right: the moment you at Paketo then added some packages to your base images...
D
So I think it's mostly for buildpack authors, to easily distinguish between different vendors' variants, and for users of non-vendor-based builders and base images to say: hey, I'm building my own. You know, they're probably even basing it on top of the Paketo base image, right, but they say: I need a few more packages, and some other, I don't know, ImageMagick policy change, whatever, right. And then I say: okay, but it is compatible with the vendor's base images, and I'm...
D
So what happened then on Thursday is that an argument was made that extensions really address this, and that we shouldn't do it, because it's a better crutch, but it's still a crutch, just like CNB target IDs, right. And so I then proposed, and that's what ended up happening, that, okay, if that's the stance, then we should, at least for the moment, remove the CNB target ID from the spec, so that buildpack...
D
...authors do not start writing code against it, because I think this would cause all sorts of undesirable tight coupling.
D
But really the question is: do extensions really solve this? And so I've been talking with Terence quite a bit, also with James, about how we see extensions playing out. Because someone, also from VMware, I forget who it was, mentioned that they did already, or plan to do, some sort of squashing of base images with extensions. So if you use extensions as a user, and the extensions install a bunch of system packages, after the build is concluded...
D
...the base image, the run one, right, plus all the extension layers, would be squashed together into a new, unique sort of run image, and those would just be consistently, continuously rebuilt, and then rebasing is not a problem. Because one of the things that I noticed in the spec, and I brought this up for discussion yesterday in the buildpacks channel on the CNCF Slack, is that, and I think this is an unintentional leftover from an older version of this whole Dockerfile thing...
D
...there is something called, something like, layer rebasable, whatever, for the extension. So if an extension produces a Dockerfile for an extension layer and doesn't explicitly set the rebasable label to true, then the whole thing becomes non-rebasable, in the sense that you can't rebase the base image without a full rebuild. Which, to me, and I've tried to find out why that is, because with ABI compatibility and everything, I can't think of a situation where an extension layer can do something that makes it impossible to rebase the base.
B
Because of the way the layers work, if there's anything that's shared, like if you're touching, if you move files, if you delete anything, the markers don't get carried through. So, technically, ABI compatibility does not guarantee image safety, in the sense that if you change a package, like the package list in Ubuntu, and I have an extension that then also touches that file, and you rebase, that info will be different. But it doesn't git-merge it, right; literally just one layer wins.
B
Like this, yeah, a package list, but really any file or marker on the file system. So, technically, it is not safe. In theory, you could make a thing where it's like: yeah, the binary probably runs, because the .so files and things are in the right place and you're not doing anything crazy, but...
B
...if you were to create this image and run it through, you would not get the same set of layers, right; you would not get the same image at the end of the day. So that's why it's unsafe, and why rebasable defaults to off, because it's really any file system changes, and because...
D
...they're essentially layered on top, right. The example, the example that explicitly disables rebasing in the original RFC from a long while ago, or it's on... let me actually, hold on, that's a good link to put in the discussion document, so you can all see it.
D
That example actually explicitly does rm -rf /var/lib/apt/lists/*, I think exactly for this reason, but it still says it's not rebase-safe. And so, the thing, I think I'll put the link in here, in the Google Document, because the thing is: if we get even moderate adoption of extensions across, you know, the CNB ecosystem...
D
...this means that rebasing will be impossible for anybody, because I guess most use cases for these extensions will be OS package installs, and that means no rebases for anyone, ever, anymore.
D
That's issue number one. Issue number two is: you have to put extensions into the builders, right. A user of buildpacks chooses, in their build, or says in their project.toml, "I'm using the following extensions".
D
They might list them or something; a buildpack can't just say "I need the following extensions". The extensions have to be passed into pack if they're not in the builder, or they have to be added to the builder, which means, you know, Heroku would have to add all of Paketo's and all of Google's and other vendors' extensions, and vice versa, which I think is not realistic, because of the possible permutations and the testing guarantees you make to your customers, I think. Because this all works really well...
D
...if you control the infrastructure, right. If you're doing a pack build with, let's say, I don't know, the Paketo builder, the Paketo extensions and the Paketo buildpacks, and then deploying it on your own infrastructure, easy. If you're pushing some code up to Google App Engine or Heroku, where it's supposed to get built, then that gets tough, no matter who controls the extensions or the buildpacks, right. And so, from the discussion on Thursday...
D
...it sounded like your VMware colleagues were hoping for, or were hinting at, the Paketo project moving towards extensions for their binaries. But I wonder how that's going to play out, like, how do these then work well on other builders, right, and on, let's say, platforms like Google's or Heroku's?
H
A few points. So, first of all, in this group today, you're not going to get a lot of feedback on rebasing, because...
H
...yeah, like, I don't know, Jericho, what you all are doing at Rapid7, but looking at who's in the Zoom call...
H
...most of the folks today have not got a lot of experience with rebasing. SAP folks might, IBM folks might, I don't know, but they're not here to speak to that. So for that whole section of the conversation, I don't think you're going to get anything today. Taking a step back: as I'm hearing this, I'm sort of wearing two hats, right. I'm wearing the hat of a stacks maintainer building a stack, and a builder maintainer building builders; I'm also wearing the hat of a buildpack...
H
...author who's kind of interested in the stack. So, first of all, I didn't get to read the RFC yet, or all the discussions yet, so if anything I'm about to say is covered there, just tell me to go read it, because I don't want to waste people's time. But I'm very much in favor of interoperability, and yet I'm having a hard time getting over the high-level idea that we're trying to collapse a dynamic problem into a static description.
H
At build time you can't see it; you can't see it until you're actually running a process, by which time, five months ago, someone already created a TOML file with some data. So that data is static, essentially, while the evaluation is dynamic. We're never going to square that circle, right. You can try to tighten it a little bit, you can try to make some constraints, but ultimately we're going to keep reinventing the two solutions of static identifiers and dynamic mix-ins or extensions, right.
H
So there's two things there, right. First of all, there's two levels of package compatibility. There's the package compatibility required by the buildpack itself. So let's take your example of Ruby: if I pre-compile Ruby, which I do, as a buildpack maintainer I pre-compile that dynamically linked Ruby against the stack that I'm going to be shipping against. So I know, yep, Jammy full stack, I know that it has all the libraries Ruby needs to run.
H
Then there's the second level, which is that the application might itself require dynamic linking to other libraries, for the bundle install, yeah. And that problem, like, I have no idea what's going to happen at runtime, right. Ruby itself will run, because it's dynamically linked to the right version of libssl or, you know, whatever things Ruby actually needs, but you could still fail at build time, or even at runtime...
H
...with whatever the app is dynamically linked to, and you can't encode that in any sort of reasonable list. So even if you just scope it to the packages that the dependencies of the buildpacks need, that's still a dynamic problem, right. Because if a new version requires a new library, yeah, for sure, for my version of Ruby that I'm now pre-compiling, I need to add a package to my stack. But if you're expecting my Ruby buildpack to work on your stack, you're going to have to then go and add that package as well.
D
So one of the things is: let's say we don't have any sort of identifier, right, but we know, as a buildpack, that we're running on something that matches what we declared in our targets, because we said the target distribution is Ubuntu and 22.04, for example, right, and, I mean, the lifecycle picked us, right.
D
So it said: okay, you're compatible with the run and build thing. So I'm running on the builder, I'm pulling down my Ruby, and the bundle install works, which must mean that, at least for the Ruby binary, all the dynamic linking is there. So how do I now know that, at execution time, the run image has, I don't know, libreadline, right, whatever Ruby links against? It could be that it's not there, and so right now we have no guarantees. And at build time...
D
...you can't do anything useful about it, because you can't inspect the run image; it's not there for you, right. Which, I think, if you think about the more classic stacks, there's a sort of promise that is implicit, and this would not have to be codified anywhere in the spec, but it's sort of like: for Heroku, any dynamic library that's in the build image is also there in the run image, right. But others might see that differently.
D
And so let's assume, or let's say, that someone builds a buildpack and they want to target some new vendor, and they are big fans of super minimal run images, and they say: oh, we don't care, the build images are super comprehensive, but our run images are minimal, because we have mostly Go people around building their apps, right, or, I don't know...
D
...some arbitrary fantasy reason that I'm trying to come up with. But so then, as a buildpack, you know, you're just like: okay, the build worked fine, and then the run image crashes.
D
I think it would be desirable if I could say: hey, you know, the target ID, or the target compatibility list, or whatever, that I'm seeing here includes something that I know has all the necessary libraries for my stuff at runtime. Yes, you can't make any promises about your bundle install having pulled in arbitrary gems, and whether they are really there in the run image, right. Like, I think we always have to account for the possibility that the run image will be drastically different.
D
I think that, for those of us in the business of providing run images, or, I mean, right now we're doing stacks, right, it's probably not going to be the norm that the run image deviates drastically from the build image, or something like this, right, because it just reduces our maintenance burden if we say: hey, the Paketo full run image has a bunch of libraries, and you're running on the corresponding full builder.
D
So, you know, there are not going to be any huge gaps, because otherwise people will open GitHub issues every five days and every five minutes, and it's going to be overwhelming. If someone says, well, but I want to bring my own run image, and it's going to be super minimal, and they're like, you know, we have some person who knows Linux, and they did it, and they said it's perfect, and then it crashes at runtime...
D
...you can't even warn the user of these portability issues. And so, I'm not actually saying that we... so we shelved this CNB target compatibility thing together with the CNB target ID for the moment; I'm not saying we want it back in. I'm more wondering: let's presume that extensions are the solution to this, right, or are they, right? Like, on the Paketo side, for example, how do you see this playing out longer term? Like, do you think the stand...
D
...is the norm going to be people using the Paketo full builder and run images for most cases? Because, I mean, you know, most bundle installs, with a bunch of not even super esoteric gems, like even pq or whatever, the pg gem, they're just gonna work, and, you know, it's fine at runtime. And really, one of the points of CNBs is that apps are easy to build; you don't have to mess with Dockerfiles. So for the average, I don't know, Python, PHP also, right, a lot of everything is pre-compiled there.
D
But for Ruby, where you have a lot of dynamic linking, is the long-term goal going to be: yes, we will always have these stacks, whatever we're going to call them now, right, some sort of a full builder and a minimal builder, where the minimal builder is for people who know what they're doing, and the full builder is for people who just want to get on with their lives and build some Ruby apps? Or do...
D
...you think the journey, for everyone, is going to be more towards the extensions solution, where Ruby, yeah, the...
H
You were waving. I don't want to speak for everyone in the community; I just, personally, don't see a world in which we can offer any sort of sensible, holistic experience without providing what is essentially a stack today, right, whether it's like: oh, we have one minimal image and a bunch of extensions.
D
So, I mean, I think the biggest advantage from a user's perspective would be that, I don't know, let's say the Paketo Ruby buildpack, right, has been tested against the Paketo stack and also the Heroku one. And so, if you see, you know, that the compatibility label list, whatever, is either of those, then everything's fine, because you know that there are very diligent people maintaining the quote-unquote stacks at Paketo and at Heroku.
D
So the run image is always going to have the necessary libraries, and if the bundle install succeeds and does its dynamic linking, stuff is going to work at runtime, fine. If you see an unknown stack ID or something else at the end, you might print a warning, right, and say: hey, the builder base image blah-blah is not known.
D
Please take care that if... or if you're running into trouble at runtime, you might be missing some dynamic packages. Or if the bundle install fails outright, with Ruby not finding something dynamically linked, right, you could go and ldd all the executables and .so files and say: hey, the following packages are missing, and it looks like you're maintaining your own base image here, you need to install some packages. Whereas you could skip that, right, if you knew you were running on Paketo's image.
H
Okay, thank you, that's helpful. I'm conscious that there are really only two people in this conversation, so I'm just going to give a quick response, and then I think maybe we should find a new forum for this. So it sounds like the goal of this, like the end goal of removing stack IDs, is to essentially give a slightly looser compatibility, to allow you to maybe run on a thing that the buildpack maybe didn't know about ahead of time.
H
Yeah, yeah, right. Otherwise, 99% of the user experience is going to be the same: Paketo is still going to, like, make stacks and buildpacks, and Heroku is going to make stacks and buildpacks, and they might happen to work together, they might not. At the moment they can't, and the proposal is basically: exactly, they can, and it's kind of the user's problem.
B
Well, I guess, let me... which I think is much better already, yeah, in that, like, I think a lot of buildpacks are using the 'any stack', which is the wildcard, and not tying to a specific stack ID. Like, you might list your thing, but then you do stack equals star, which is any, like, Linux thing. So that's basically saying 'this will run on anything', which in reality is like...
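(Editor's note: the wildcard mentioned here is the `stacks` declaration in a buildpack's `buildpack.toml`; a minimal sketch, where the concrete stack ID is just an illustrative example:)

```toml
# buildpack.toml -- the pre-removal, stack-based declaration.
# The wildcard claims compatibility with any stack:
[[stacks]]
id = "*"

# ...versus pinning to one specific, known stack:
# [[stacks]]
# id = "io.buildpacks.stacks.jammy"
```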
B
If your thing was built for, like, an Ubuntu or Debian-based thing, it, like, clearly doesn't work on a UBI image until you manually add support for it or whatever. And so that 'any stack' was clearly a lie, right? And so the thought process behind removing stacks was like: okay, well, maybe we should move towards... you actually get stuff like the distribution, the distro...
B
D
You can express it for the lifecycle, right? So you don't even get matched against, let's say, Red Hat-based builders if you don't have Red Hat binaries in the Ruby buildpack, which... and so I think this is already a big step, right. But so, mainly, I mean, like, I...
D
...really don't want to, sort of, convince you all that this CNB compatibility ID thing is the right approach. It's more that the outcome of that discussion last week, on Thursday, was: hey, but extensions solve this. And so as I started diving into extensions, I realized: well, if everyone moves to extensions, because extensions are so tightly coupled to the builders, then we are back to square one, where particular buildpacks only work with particular builders again. And then you cannot... like, let's say at Heroku...
D
...if we were to move to extensions for all these libraries, right, for all of our buildpacks, then they just wouldn't work on the Paketo builders anymore, because the Paketo builder doesn't have the Heroku image extensions.
D
And so that's my main concern now. That, plus the fact that an extension can disable rebasing, which for at least us at Heroku, and I presume also for the folks over at Google with Google App Engine, would be catastrophic, because, I mean, we are in the business of rebasing these things for customers, and we have to. We can't rebuild every single app, right: (a) it's not safe, (b) it's computationally, outrageously expensive to rebuild thousands, hundreds of thousands, millions of apps every single time...
D
...we update our base images with just a little SSL security fix. Plus, if extensions themselves are not rebasable, which they are not, right, unless you do some tricks with squashing them or something, which we heard about last week... but, I mean, extensions themselves: if those are really ubiquitous, and a lot of libraries end up being installed through image extensions, those libraries do not get updated as part of a base image rebase, and so you end up with security vulnerabilities that will not get mitigated until the next time.
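(Editor's note: the rebase economics discussed here follow from what rebase actually does: it swaps the run-image layers underneath an app image while keeping the buildpack-produced layers untouched, so no per-app rebuild is needed. A rough conceptual sketch with hypothetical layer names, not the real `pack rebase` implementation:)

```python
def rebase(app_image: list[str], old_base: list[str], new_base: list[str]) -> list[str]:
    """Swap the base layers under an app image for a patched base,
    keeping the buildpack-produced app layers as-is."""
    # The app image must actually sit on top of the old base.
    assert app_image[: len(old_base)] == old_base
    app_layers = app_image[len(old_base):]
    # No rebuild: restack the same app layers onto the new base.
    return new_base + app_layers

old_base = ["ubuntu:22.04", "run-image-pkgs"]
patched = ["ubuntu:22.04", "run-image-pkgs+openssl-fix"]  # e.g. an SSL fix
app = old_base + ["ruby-layer", "gems-layer", "app-layer"]
print(rebase(app, old_base, patched))
# → ['ubuntu:22.04', 'run-image-pkgs+openssl-fix',
#    'ruby-layer', 'gems-layer', 'app-layer']
```

Layers added by image extensions sit above the base, so a plain rebase like this would leave them, and anything they installed, unpatched; that is the vulnerability concern being raised.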
F
Yeah, thank you. Yeah, I'll go first. I have a question that was triggered when you were discussing... I think, Terence, when you were talking about the wildcard, and, you know, no way to actually... I mean, the wildcard being a lie, and, but, you know, it will probably not work everywhere.
F
It just reminded me that in this discussion we haven't talked about the build plan resolver. So what about... I mean, of course, yeah, we have discussed extensions and then labeling the run images, so I totally get that. But there's another widespread solution, I mean, that would solve, I believe, some of the problems that we talked about, which is basically the build plan: saying that my buildpack... I don't know, let's say it's the Maven buildpack: I need a Java...
F
I mean, I'm... I need, in the plan... I require in the plan a buildpack that provides Java functionality. What I'm saying is, what about... why...
F
Why wouldn't this be enough in some of the cases? I'm not saying it's going to work in all the cases, where you provide libraries and whatnot, but sometimes, you know, you just need one binary or whatever... I mean, you know, just something quite simple, and then you can just say: oh, there's another buildpack that I require that will get that for me. So, just saying that the wildcard... if you have the wildcard, as well as, I don't know, a build plan that is granular enough.
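(Editor's note: the mechanism being referred to is the CNB Build Plan: during detection, buildpacks write provides/requires entries, and the lifecycle's resolver only accepts a group in which every requirement is matched by a provider. For the Maven example, very roughly, with an illustrative entry name:)

```toml
# Build Plan entries written during detect (entry name illustrative).
# The Maven buildpack declares what it needs:
[[requires]]
name = "jdk"

# ...and a JVM buildpack declares what it can supply:
[[provides]]
name = "jdk"
```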
B
I think there are different kinds of buildpacks, right? Like, if you're talking about a Maven buildpack, it probably does... you expect Java to be there, and you don't have compiled artifacts, you're not, like, linking against anything on the OS, and so all you really need is to be able to execute it. But that Java buildpack that you depend on probably is not wildcard-stack... or maybe Java... Java probably is, but, like, I think, like...
D
B
I expect the runtime, and it should just work, because it's just JavaScript files at the end of the day.
D
Vice versa, right? Like, let's assume the node buildpack used Ruby or something. But I think that world is pretty far off. Like, you know, like, I don't know if the Paketo Ruby buildpack has a dependency on node, expecting that you could use the Paketo Ruby buildpack but with the node buildpack from Heroku.
D
That's very far down the road, because initially, when this first came up, all of CNBs... we had this vision of, like: oh, imagine you had a node engine from Paketo and the full Node.js buildpack from Heroku. But, I mean, it's not realistic, because even just the decisions would have to be uniform across all vendors: where does the engine buildpack's responsibility stop, what does each little bit do, how are they structured? But so, yeah... but these dependencies, right?
D
So saying 'I need Java' is a bit different from 'I need a library to dynamically link against', I think, because one is: you just execute the binary, and if it's sort of outdated you might not care as much as with the dynamic linking case.
A
Sorry, Jericho.
C
A
E
Fine, yeah. I was just going to say that the stack ID today is really useful, and the star is also really useful, particularly with Python, because Paketo, like, has their builder; they know what they have on that image. And then, you know, if you have the stack ID, and it still works with the Paketo stack, then when you're on the Paketo stack you use those dependencies. You can say: I know I'm on the stack...
E
...let me pull these tar files. But if I'm not on the stack, I'm going to pull down, you know, the source and build from source. So, I mean, getting rid of it is, I think, potentially... like, you're going to have to create some solution that will replace that. So...
D
The idea there would be: instead of using the stack ID, you use 'ubuntu' and the version number, right? Because, theoretically, that should be enough. At least you know that if it links against libssl, lib-whatever, on that Ubuntu 22.04, it's always in the same location, whether it's the GCP or the Heroku or the Paketo version of that Ubuntu. Like, you know, with Ubuntu with other images on top, you just don't know that the library is there, but if it is there, it will be in the right location, and your program will run, so...
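(Editor's note: this "distro plus version instead of an opaque stack ID" idea is essentially what target-based matching expresses. A minimal sketch of such a check; the field names are illustrative rather than the exact lifecycle schema:)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Target:
    os: str       # e.g. "linux"
    distro: str   # e.g. "ubuntu"; "*" means any distro
    version: str  # e.g. "22.04"; "*" means any version

def compatible(buildpack: Target, run_image: Target) -> bool:
    """A buildpack target matches a run image when every non-wildcard
    field agrees, so ubuntu 22.04 binaries are accepted on any
    vendor's ubuntu 22.04 image."""
    def ok(want: str, have: str) -> bool:
        return want == "*" or want == have
    return (ok(buildpack.os, run_image.os)
            and ok(buildpack.distro, run_image.distro)
            and ok(buildpack.version, run_image.version))

needs_jammy = Target("linux", "ubuntu", "22.04")
heroku_jammy = Target("linux", "ubuntu", "22.04")   # Heroku's 22.04 image
rhel_ubi8 = Target("linux", "rhel", "8")            # a UBI-style image

print(compatible(needs_jammy, heroku_jammy))  # → True: vendor doesn't matter
print(compatible(needs_jammy, rhel_ubi8))     # → False: no RHEL binaries
```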
D
Anything compiled on Ubuntu for Paketo will work on Heroku's same Ubuntu version, if all the necessary libraries are there. And so that was the idea behind these compatibility labels. It's saying the buildpack knows that, you know... it's a promise that if Paketo says, hey, it's paketo full, the following libraries are always there, they never get removed, because the spec forbids breakages there at runtime, right? And so that would basically be a more flexible, almost inversion-of-control, coupling and replacement for the stack ID. But obviously, like...
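(Editor's note: a rough sketch of how the compatibility-label check being proposed could behave. The label name and the IDs are hypothetical, since the proposal was still under discussion at the time:)

```python
def check_compatibility(run_image_labels: dict, tested_ids: set):
    """Return a warning string when the run image advertises no
    compatibility ID the buildpack was tested against; None means
    a known-good pairing."""
    raw = run_image_labels.get("io.cnb.compat-ids", "")  # hypothetical label
    advertised = {i for i in raw.split(",") if i}
    if advertised & tested_ids:
        return None  # tested pairing: dynamic linking should just work
    return ("warning: this builder/run image is not known to the buildpack; "
            "runtime failures may mean missing shared libraries")

labels = {"io.cnb.compat-ids": "paketo.full,heroku-22"}
print(check_compatibility(labels, {"heroku-22"}))            # → None
print(check_compatibility({}, {"heroku-22"}) is None)        # → False: warn
```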
E
My two cents is, like: so you guys at Paketo worry about your Paketo stacks and builders and buildpacks, and, you know, Heroku does the same thing, and I'm kind of like: I want to consume your buildpacks, but I'm bringing my own builder. So there are different kinds of users that use these things in different ways. So, whatever we end up doing...
E
...it should still be easy for me to say: well, I don't want to have to pretend I'm your stack, or, yeah, your compatibility thing, just to play along. I mean, if it has to be that way, that's fine, but ideally it would not be something where I'm impersonating builders. If I...
B
B
I linked it into the chat, Jericho, but I did push for the fact that, like, as a buildpack, you can say: I don't care about the distro. So I think that would solve your use case: you would still, if it gets provided... you can use that information to basically do that switch-case you're talking about. So I think the 'any stack' will still be supported, and yeah.
B
B
D
A
D
B
B
I think, like, as a project person, I would love to have just more interoperability between buildpacks and stacks and stuff, and I think that wasn't as true in kind of classic-buildpack land before CNB, and I hope we don't create a future where things are super tied down.
B
But, you know, you've got to do what you've got to do for the business that you work for, at the end of the day. And so, like, if a buildpack requires an extension, it makes it significantly less interoperable, and so I think those are some of the concerns: like, if the Paketo PHP buildpack requires these extensions, it's just never running on, like, the Google stack and the Google platform, and it will potentially try to force vendors to either support that stuff... but, you know, like it... but...
D
D
E
Yeah, and I'll just add to the rebase conversation: I mean, David, you already said it, like, you don't want to have to rebuild every little thing. Yeah, the extension rebase problem is definitely a problem. I wasn't even using rebase until recently, and now that I am, I could not switch. Yeah.
D
A
B
B
D
I mean, that's not about... that's not about the extensions, right? Like, we sort of cut that out of there, or it's not... that's not a thing in that discussion yet. Also, on Thursday, I think we'll hopefully pick up the discussion again from last week's CNB working group, because now I have a few more questions about some of the feedback the VMware folks gave there on the extensions, so...
B
Yeah, we can tag you in for sure.