From YouTube: Working Group: 2020-05-21
Description
* Behavior of pack: https://miro.com/app/board/o9J_ksUII8c=/
* Versioning: https://github.com/buildpacks/rfcs/pull/79
D: Okay, so the original idea is that because we are providing registry credentials to analyze, restore, and export, we want to make sure that the lifecycle running those phases is trusted. There's, you know, a lifecycle that we trust, and because of that we've introduced this concept of a trusted builder, which is discussed here. The idea is: I, the pack user, trust the builder that I'm running, so it's okay for pack to provide my credentials to this builder when running the lifecycle.
D: If I don't trust that builder with my credentials, I can basically rely on pack to use a different image for these three phases specifically — analyze, restore, and export. What was discussed in the RFC is that we would build an image on the fly with the lifecycle from GitHub, and there wouldn't really be anything else in that image. So we can sort of trust it: I'm providing my credentials, but it's okay, it's the official buildpacks-supported lifecycle.
D: Previous versions of the lifecycle required the CNB user to be valid in that image, which is no longer possible, right? If we're publishing something on Docker Hub, we have no idea what the CNB user is going to be when users out there try to use pack. So we needed to remove that constraint in the lifecycle, and as a result we can only have published lifecycle images for 0.7.5 and above.
D: We kind of believe that, and the feedback that we got from Emily is that failing is probably what we want to do, because now that we've introduced the concept of trusted builders, it seems like something we want to honor: users should always be able to rely on the fact that pack is going to do the safe workflow if the builder is not explicitly trusted.
C: That would only apply to — I guess I'm assuming — builders that are not trusted, right? So if we look at the builders that we suggest, which would be the Google builder, Heroku, Cloud Foundry, and Paketo (Cloud Foundry kind of going away now), those would all go into the trusted flow, and so they would be fine, right?
B: Yeah, my instinct is the same, especially since only the builders built before now would be affected — not all the ones that come on the other side of this, you know, based on the latest lifecycle as we get to 0.7.5. So not only do we think it's a small number, it's a number that gets smaller as a percentage over time.
B: I think I would just do a call-out on Slack to find out if anybody is likely to do this, but I think fail is probably correct, in agreement with Emily here. It's a bad user experience, but it's a bad user experience that affects such a small group of people that I think it's worth being secure, even in the face of bad UX.
A: One question about this graph is the "publish pass" node on the flow chart here, because in the daemon case those builds are still getting access to the Docker socket, and I'm not sure that's the thing we want to do if we're creating a trusted/untrusted workflow. But I don't know if it's as bad as the registry credentials, right, because it's usually scoped to a workstation.
F: But no, we will, because we've had trouble upgrading just because there's a lot of clients out there. We have a lot of clients out there that are on older versions of pack, or something else consuming a lifecycle, and we've broken people several times, and so we're extra cautious. I think we're working on some mechanisms that will, like, auto-update the buildpacks.
F: No, I think — I mean, we were talking about a particular case that she was worried wasn't handled well by the original version of the RFC. I think the really good example is: we removed positional arguments and replaced them with environment variables, and Emily's comment was that in the original version of the RFC, where there are no modes, essentially the buildpack author would have to account for both cases, because they don't know what version of the lifecycle they're going to be running on, right?
F: I don't think — I think you still have that problem, because if you're running on an older version of the lifecycle, even if your compatibility version is 1.x or whatever, then unless you're willing, as a buildpack author, to ditch older versions of the lifecycle, you're still going to have to account for that case. So yeah, I don't know if you want to rebut, but I kind of like the first version of the PR better.
A: You know, the lifecycle is the first thing that releases a new API version, and especially with these changes it will be easier for the lifecycle to be the first thing that implements a new API, and for people to pull that in without having to know about it. So I think in some ways this is the easiest path, because there's only one lifecycle, and this should make it easy to upgrade — and then the lifecycle can sort of take on the burden of spreading it around to buildpacks.
F: So, okay, first — this is a separate concern, but I have it in my head, so let me talk through it. Having the compatibility version in the buildpack this way reminds me of the thing we were just talking about: why the Heroku builder was behind on the lifecycle version. It's that there have been times where it's like, in order to release this new builder, it's like —
A: I totally follow why that feels bad. It's always the same: the old buildpack running against a newer lifecycle. And then the question is, is it just being interacted with in the way it specifies in its buildpack API, or is the lifecycle having to figure out what version of the API is being implemented? Maybe one doesn't know about the other.
F
Okay,
so
there's
two
things
that
we
know
one
is:
the
minor
versions
should
be
backwards
compatible
right,
like
we
agree
on
that,
we
agree
that
1.0
should
not
be,
should
be
a
non-event
right
like
when
people
upgrade
to
1.0.
It's
I
think
the
most
important
thing
is
that
it
just
works
and
there's
it's
not
like.
We
release
1.0
make
a
bunch
of
noise
about
it.
Everybody
tries
to
run
pack
and
it
breaks,
because
we
didn't
actually
update
a
bunch
of
the
build
packs
or
something
like
that.
B: We can support both versions of those, and allow buildpacks that only go to 0.3, or 0.4, or 0.7, or whatever it is going to be, at the same time as allowing 1.0, so that the effect on users is nothing. The effect on buildpack implementers is that they have a runway to make the updates they need to make, and for the specification, it is internally consistent and breaking as it moves through its lifetime. Let's go with "lifetime" there, not "lifecycle."
A: It's like, the same as if we had a website that served up two different versions of the API using the same web server: whichever endpoint a client is using, they get the API that they have expected and tested against. In the same way, we're using the buildpack.toml to figure out what API the buildpack expects, and we're just providing it that API.
B: A thing that's been percolating in the back of my mind — it's certainly not in scope for this RFC or any PR related to it, but it is something interesting to think about — imagine a scenario where the specification itself never said anything about compatibility. In fact, it was a single monotonically increasing integer: we had spec 1, and then 2, and then 3, and then 4, and some of those would be breaking and some of them wouldn't be.
F: I'm not sure we're ever going to be confident that every version of that spec is going to work with every other version of that spec, right? The nice thing about having a major/minor is that it does give us an option to say, "oh, we need to change something," where 1.1 is not compatible with 2.0, but —
B
Actually,
the
compatibility
we
guarantee
or
that
we
care
about
is
the
life
cycle,
implementation
compatibility
and
you
could
imagine
an
implementation
which
was
literally
like
you've
got
a
directory
that
has
all
of
the
version
2
implementation
in
it
and
go,
and
then
you
just
sort
of
copy
paste
that
call
it
version
3
and
then
make
the
changes
that
we
need
to
and
have
some
sort
of
switch
statement
that
sits
outside
of
all
of
these
directories.
That
says,
here's
version
2
if
you've
tried
to
come
in.
B
You
offered
me
like
your
build
Pak,
Tamil
says
or
your
build
Pak
Tamil
says:
3.
We
go
into
a
completely
different
code
path
and
that's
how
you
guarantee
compatibility
within
a
life
cycle
implementation
right
that
we
are
going
to
use
the
exact
same
code
that
always
works
like
this.
We're
going
to
make
any
changes,
whether
they
are
breaking
or
not.
In
new
code
going
forward,
I.
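The per-version dispatch being described could be sketched roughly like this — a minimal illustration, not actual lifecycle code; every function name here is hypothetical:

```go
package main

import "fmt"

// Hypothetical frozen code paths: in the scheme described above, each
// buildpack API version gets its own directory of code that is copied
// forward and then modified, never changed in place.
func buildV2() string { return "built with the frozen v2 code path" }
func buildV3() string { return "built with the frozen v3 code path" }

// dispatch is the "switch statement that sits outside of all of these
// directories": it routes on the api value declared in buildpack.toml.
func dispatch(api string) (string, error) {
	switch api {
	case "2":
		return buildV2(), nil
	case "3":
		return buildV3(), nil
	default:
		return "", fmt.Errorf("unsupported buildpack API version: %q", api)
	}
}

func main() {
	out, err := dispatch("2")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The point of the structure is that old buildpacks keep executing the exact code they were tested against, because that code path is never edited again.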
B: And so this is fundamentally problematic, and Emily and I have been sort of kicking around the idea that there is a core "here is latest" thing, and you effectively use migrations to say: okay, you were on version N, and the metadata you kicked out looked like this — how do I migrate that to 2, or 3, or 4, or 5 — and sort of do it transparently to the buildpacks. But it's not clear to me that buildpacks on two different versions of the buildpack specification can actually operate in conjunction with one another.
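That migrations idea could be sketched roughly as follows — a toy illustration with made-up metadata keys, not anything from the actual spec or lifecycle:

```go
package main

import "fmt"

// metadata stands in for whatever a buildpack writes out; the keys used
// below are hypothetical, purely for illustration.
type metadata map[string]string

// Each step upgrades metadata from version v to v+1, so output written
// against an older spec can be brought up to the latest shape
// transparently to the buildpack.
var steps = map[int]func(metadata) metadata{
	2: func(m metadata) metadata { // 2 -> 3: rename a hypothetical key
		if v, ok := m["proc"]; ok {
			m["process"] = v
			delete(m, "proc")
		}
		return m
	},
	3: func(m metadata) metadata { // 3 -> 4: fill in a hypothetical default
		if _, ok := m["labels"]; !ok {
			m["labels"] = ""
		}
		return m
	},
}

// migrate chains the steps from the version the buildpack emitted up to
// the version the platform expects.
func migrate(m metadata, from, to int) metadata {
	for v := from; v < to; v++ {
		if step, ok := steps[v]; ok {
			m = step(m)
		}
	}
	return m
}

func main() {
	m := migrate(metadata{"proc": "web"}, 2, 4)
	fmt.Println(m)
}
```

The open question raised above still stands: migrations fix up metadata shapes, but they don't make two buildpacks written against different spec versions cooperate in one build.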
B: I think this is actually a much bigger risk than we've given it credit for, and specifically it's a different risk than lifecycle compatibility up and down. I think it is totally reasonable that we will one day have a lifecycle that supports effectively every single version of the spec we've ever had, but whether you can use two different versions of the spec simultaneously is a really, really big question.
B
Or
platform
side
as
well
participants
on
the
platform
side
yeah.
The
key
thing
here,
though,
is
and
I
think
and
that's
why
I
say
like
I-
think
it's
sort
of
a
ad
absurdum
argument
to
go
to
this,
like
single
monotonically
increasing
thing,
but
it
really
drives
home.
The
idea
that
compatibility
within
the
spec
is
not
actually
the
thing
we're
optimizing
towards
we're
optimizing
towards
usage
by
application
builders
so
that
they
don't
see
the
discrepancies
in
the
specification
or
the
changes
in
the
specification
and
the
things
that
they're
using
to
build
their
images
with
it.
F: I guess, say there's an environment variable that already exists, that you're already using, and it's enriched with some new something, right? I don't know, maybe there's a case like this, but let me confirm what I'm thinking: if I'm on compatibility level 1.1 and we released that change in 1.2, my buildpack cannot take advantage of that — in your proposal, my buildpack does not get to take advantage of that unless I upgrade my compatibility version. Is that correct?
B: So you, as a buildpack author, are on 1.1 and you're like, "hey, I need labels — I need to go to this other version." Well, you're going to change to the other version, which means that you now have to be compliant with that version of the spec, which has removed some feature, right? That means you're done; you don't get to use that feature. In order to gain the advantage of the new functionality, you also have to come into compliance with everything else implied by that new functionality.
B: They continue to be there, but if you want the Java 11 features, it means you don't get to use com.sun anymore, right? You have to make more than one change: in addition to getting all the advantages of moving up, you have to come into compliance with the things that have been removed on your way there, right?
B: No, but the indication is still there, right? Like the bytecode version at the top of a class file: once it's been compiled down and says, "I'm class file version 52," that's the discriminator that causes the JVM to do what it wants to do. In our case, it's an environment variable, or a key in buildpack.toml. Certainly we have no —
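For reference, the discriminator being described is the top-level `api` key in `buildpack.toml` — roughly of this shape, with illustrative id and version values:

```toml
# buildpack.toml — the api key plays the role of the class-file version:
# it tells the lifecycle which spec version this buildpack was written against.
api = "0.2"

[buildpack]
id = "example/my-buildpack"
version = "1.0.0"
```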
F: — because we're talking about a major version here, and I don't think that's the point of contention. What's really the thing is, like I was saying: from 1.x to 2.0 we can remove things, break things; we can also just have a separate version of the lifecycle that's embedded in it, like you were saying. The thing for me is about the minor versions and how we treat those.
B: So let's use, as a compatible change: from 1.2 to 1.3 we add the ability to write a piece of metadata that puts labels on images, right? So I write my buildpack, I start using this API, I write it to the proper metadata file, and then I use pack — and the output image doesn't have labels on it, even though I followed this change to the specification. That is the indicator to me that I need to go change the API line in my buildpack.toml from 1.2 to 1.3.
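The labels example corresponds to something like a `[[labels]]` table in the buildpack's output metadata — sketched here with illustrative values; a lifecycle honoring only the older API would silently ignore it, producing exactly the "missing labels on the output image" signal described:

```toml
# launch.toml written by a buildpack targeting the newer API.
# An older lifecycle that doesn't know about labels drops this silently.
[[labels]]
key = "maintainer"
value = "me@example.com"
```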
A: We can add — you know, we wanted to move to actually cutting releases of the spec, right? We can write release notes that say what all the new features that came in a spec release are, and then also do the same thing in a lifecycle release when it implements a new API — be like, "implements platform API 0.4," and then list out the features.
F: Right, so what I prefer, as a counterpoint to that, is that regardless of what you specify as your compatibility version — let's say it's 1.1 — it will still run on 1.3, with 1.3 code, but with deprecation warnings. In that way you get everything that's in 1.3, but there will be more explicit warnings about how the buildpack needs to be updated, which I think is a better prompt than needing to actually update.
B: For a second I'm like — I think the problem here, Joe, is you're thinking that there will be a performance improvement between API version 1.1 and 1.2, and that's not the case, right? APIs don't have performance in any way. What you are actually talking about is that the lifecycle, between 0.7.4 and 0.7.5, will have a performance implication. And so users, regardless of what their buildpacks do, can continue evolving the lifecycle, and the lifecycle's performance behavior can change — that is completely orthogonal to the API version.
F: Like, how much confidence do we have that these point versions will actually work with each other? Are we putting ourselves in a position where we have no choice but to bump a major version six months from now, because there's something that we want but we realize it can't go in a minor version?
B
This
is
something
that
specs
deal
with
all
the
time
and
I
think
we
will
need
to
be
stricter
about
it
and
I
think
that
leads
to
compromises
and
that
compromises.
Maybe
we
don't
get
to
make
the
change
in
a
spec
or
maybe
we
don't
get
to
change
it
to
the
optimal
thing,
because
we
balance
the
need
for
compatibility
over
the
need
for
correctness.
F: You're right, it still — yeah, yeah, it's definitely possible. I think it's less likely, because ultimately, just by the nature of the way we're talking about the code executing — there's a thing, it's deprecated, but you're picking up the new changes as you go — whereas with a compatibility version, it feels less likely to run into that situation. But I agree, it's definitely still possible.
A: I am, in a sense, advocating for non-purely-backwards-compatible changes, particularly in the 0.x minor line, which is what we already have now — so I'm not introducing that as a new thing; we're already allowing breaking changes in the 0.x minor line. It's really proposing how the lifecycle will support those in a way that the fact that it's breaking shouldn't matter to platforms or buildpacks — the fact that the APIs are not strictly backwards compatible.
E: Yeah, I mean, I think about how we've done a lot of migrating users from stacks and other things on Heroku — I've personally done a lot of white-glove service, and it's super painful. There's something about not being required to do a bunch of work to get a set of features that I think is an important quality to have, even as, like, a buildpack author out there.
E: In the example Ben gave — getting this feature but then having to change a bunch of code because all these other things changed — there are some yellow flags that raise in my head about that, mostly because it might just mean the buildpack authors won't bother to do the work. It's not their full-time job, right? A lot of these buildpacks in the ecosystem aren't funded by a company; they're things people are doing on the side. Yeah, yeah.
B: And that's the argument to never break — basically, to never have a major beyond one, right? To never break backwards compatibility. The problem I think your suggestion leaves us with, Joe, is: if you want modes for everything pre-1.0, it means 0.2s can't run with 0.3s, and 0.3s can't run with 0.4s, and 0.4s can't run with 0.5s, and that's problematic because it forces buildpack authors to upgrade. Versus saying —
B
We
consider
these
things
to
be
compatible,
even
if
the
spec
isn't
strictly
we're
going
to
put
the
burden
on
the
lifecycle
to
make
it
as
if
they
were
compatible
with
one
another
and
then
once
we
get
to
1
1
1,
1,
2,
1,
3,
1
4
actually
has
to
be
compatible,
reducing
the
burden
on
the
lifecycle.
At
that
point,
and
if
we
ever
have
to
break,
we
have
to
go
to
two
Oh
like
we
better,
really
mean
it,
because
most
build
packs
won't
come
with
us.
E: I guess some of my concerns around modes, and other things with spec versions without those design goals in mind — even if we have them in the back of our heads — is that if you remove stuff from a spec, you don't remember that that's a thing someone's using, because if you're always just looking at master, it's kind of out of sight, out of mind. I think I made a comment on the RFC pre-rework that is now not applicable, but say you go from 0.2 to 0.3 —
E: — and 0.3 is a new mode, right? And there was some API feature or thing that we decided to remove or change — now it's just part of a mode, and you kind of now need to know the context. Say we get to 0.7, right? Are you going to remember — say we support all these modes in the lifecycle, because it does — are we going to remember what people are doing in 0.2?
A: As a lifecycle implementer, that is, to me, an easier problem to solve than every combination of deprecated and un-deprecated features sort of existing together at the same time, regardless of what version is in the buildpack's API line.
B: Yeah, why isn't that just a bug, like every other bug? Obviously we don't want to do that; we would endeavor to create enough testing, and the framework around it, to be sure that we didn't, and if we did accidentally, we'd fix the bug and cut another release of the lifecycle to reinstate anything we accidentally removed. It's not a purposeful thing that we're going to go and remove some piece of functionality; it's an accidental thing that that functionality gets dropped.
B: Because the deprecations didn't actually matter — the APIs stayed forever. As you say, we would have had the exact same outcome if no one had ever marked any of those deprecated. The key takeaway there, I think, is actually that they never broke compatibility, ever. They acted like they would one day, but they didn't, and no one ever stopped using the APIs.
A: The example that I think of, that I want us to be more like, is sort of the migration from Docker v1 to v2. They realized they needed to overhaul the file description of an image, right? So they made a brand new specification, and the Docker v2 spec does not contain within it the entire Docker v1 spec but deprecated — it is different. But then docker, the command-line tool, can run either of those types of images for a long stretch of time, without you having to think about that.
F: The example that makes me nervous — I'm thinking of Docker Compose specifically, and the things that I had to do when there were new major versions; I don't know exactly how they treated it. But I have to bounce — yeah, we should definitely keep talking. Maybe set up a special session for this, because I think it's really important, but —