From YouTube: Implementations Sync: 2020-09-24
Description
- Status Updates
- Release Planning
- Stack Buildpacks
A
Are taking longer than we expected. It's coming along, though; I think we're getting really close. We might be able to submit a PR today. I'm optimistic. Yeah, I think, probably, when we submit the PR there'll be a couple of things to talk over, to see if we like how we're doing it.
B
Sounds good. And then I see stack buildpacks in the agenda. Got specific things you want to talk about, Jesse?
C
Yeah, I wanted to kind of just open up the discussion before I get too far into this again. So, with provides/requires and stack buildpacks: the running assumption right now is that we need at least two build plan outputs, one for the build phase and one for what I'm calling the extend phase, just to put some terminology out there. But I'm kind of wondering: do we think there's value in having a third build plan of just the privileged buildpacks, or do we think that the build and the privileged buildpacks could share the same build plan, if that makes sense? Or do we think they should share the same one? I guess I don't know enough about the internals yet, of where these build plans are used, so I wasn't sure whether splitting it out would be good or bad.
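For reference, the plan a buildpack contributes at detect time is a small TOML file of provides/requires entries; a minimal sketch, with illustrative dependency names:

```toml
# Build Plan written by a buildpack's detect phase
[[provides]]
name = "jdk"

[[requires]]
name = "jdk"

# a requirement can carry metadata for whichever buildpack provides it
[requires.metadata]
version = "11"
```

The question above is whether one such resolved plan can serve the build phase, the extend phase, and the privileged buildpacks, or whether detection should emit separate plans per phase.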
C
So I'm trying to figure out: the detect phase creates this plan that has all the provides and requires, and knows which buildpacks are doing what. Can that same plan be used for both stages of the build, the non-root stage and the root stage, basically privileged and non-privileged?
B
I feel like, for the most part, one versus two doesn't matter in the build image, because what's in the build plan at that point is a set of things that we're going to give to a provider, right? So whether it's in two files or one file, whatever. I think the one place that breaks down is if a buildpack claims it can provide something, but then on the ground decides not to. We have a mechanism to sort of kick that requirement down to the next provider.
B
It said it could provide, and if you just do nothing with that file, none of those requirements will be given to future buildpacks, even if they also claimed they could provide them. But if the buildpack goes and deletes an entry out of that file, it's basically saying "I'm not doing this one," and then it will be given to the next provider.
B
After we've changed a single file? Yeah, okay. We've approved an RFC that changes the mechanism through which this happens. So instead of deleting something out of the file, you write an entry to build.toml and then it gets given to a different provider. But in my mind, whether one or two files makes sense comes down to: are these participating in the same build plan? I think yes.
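Under that RFC'd mechanism, a buildpack that decides not to fulfill a requirement records it in build.toml rather than deleting it from the plan file; roughly (the dependency name here is illustrative):

```toml
# build.toml written by a buildpack during the build phase;
# each unmet entry hands the requirement on to the next provider
[[unmet]]
name = "jdk"
```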
C
They are, yeah, because it's really two, and that's kind of where I'm trying to draw the line, because one trial is now going to output two plans, right? Like, conceptually, one detect trial that succeeds, which is also very confusing. And I mean, I may have to raise a flag for help on how this trial stuff works.
C
Exactly, because there's the retry stuff, and skipping stack buildpacks is complicated, because they can provide extra things that buildpacks can't do today. Anyway, yeah: a single trial outputting basically two plans, a build plan and a run plan, but then also three sets of different buildpacks. You've got the privileged buildpacks, which is basically the stack group that ends up running; then you've got the normal buildpacks, which is just a collection of buildpacks; and then you've got the run phase's collection of buildpacks, the extend phase, and those can be different, because you may have something that only has a run phase. That only extends the run image and opted out of doing anything in the build plan. So you've got a stack pack potentially not in the build phase, but only in the run phase, and not in the actual normal build phase.
B
The part of me that likes simplicity and wants to build in complexity later asks: can't the stack buildpack group be the same for build and run? And then, you know, if the run image is getting passed "build:whatever," it just does nothing. But I actually think we shouldn't do that, because you need to know whether or not to spin up an entire container.
C
Yeah, because someone's going to have to understand those prefixes. And so, right now, one of the questions we posed to Joe (and I think he wrote the RFC, actually, to relax the build constraints, or the provide constraints) is that the build plans will not have those prefixes in them at all. And so, kind of regardless of what happens with the dependency, because we're outputting specific plans for the different phases, we can do that at build-plan creation time, basically, so that no one has to think about those prefixes.
C
The other thing, if that makes sense: all the logging is getting really weird, because we have all this stuff like "resolving plan" and then "x of y buildpacks participating," and then we log out all the buildpacks. But I don't know what this is supposed to look like for things that are only going to run on the extend-phase side, because you still kind of want to know that the stack buildpack is going to run, that it did pass detection.
C
It
did
you
kind
of
want
that
in
your
detection
results,
but
I
think
we
may
need
to
like
change
the
change
the
logging
here
to
be,
like
you
know,
three
or
four
are
gonna
run
in
the
build
phase,
and
one
of
four
is
going
to
run
in
the
extend
phase
or
whatever
in
the
run
phase.
If
you,
because
you
are
outputting,
two
plans
so
like
a
single
stream
of
logs
is
not
really
easy
to
parse.
When.
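A rough sketch of what that per-phase detection logging could look like (the buildpack names, versions, and counts are made up for illustration):

```
[detector] 3 of 4 buildpacks participating in the build phase
[detector] example/node     1.2.3
[detector] example/npm      0.4.0
[detector] example/ca-certs 0.1.0
[detector] 1 of 4 buildpacks participating in the extend (run) phase
[detector] example/ca-certs 0.1.0
```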
C
Maybe we only do it, yeah. Maybe we only do it if it's a privileged buildpack, which is kind of the internal mechanism that's really defining these differences right now: whether it's a privileged buildpack or not. And I don't like that word either; I don't really know how to make it better. Like, stack packs are not really defined anywhere.
C
Okay, yeah, well, I'll play around with the logging. I guess it gets a little weird, because you could have, like, one of seven, and then all of them will just say "extend only," potentially, right? Like, maybe almost all, or none, of them run in the build phase and you only have extend-phase additions, which would be a little weird, but it could work.
B
It's just, I think, the smallest change that would also express the intent, so you don't find yourself not doing logging while we're still in the exploration phase. Okay, because then later, once we get all the functionality down, we can redesign the output if we want.
C
I guess pack is going to have to do some more work once we get further in on this, obviously, to do the initial up-front detection. Right now it looks at the build image to figure out the mixins and the run image to figure out the mixins, kind of up front. And I know now buildpacks can express mixins in buildpack.toml.
C
The dynamic ones, I do think we're going to have to do something there eventually. But for the static ones that only exist in buildpack.tomls now, that are extensions of mixins, because you can provide a requirement in a buildpack.toml array.
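For context, these static declarations live in the buildpack's buildpack.toml, where each stack entry can list the mixins it requires; a minimal sketch (the stack id and mixin names are illustrative):

```toml
# buildpack.toml: stack entries with statically declared mixins
[[stacks]]
id = "io.buildpacks.stacks.bionic"
# "build:" / "run:" prefixes scope a mixin to one image
mixins = ["libgdiplus", "build:git"]
```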
A
Yeah. And then, just thinking about making it reusable logic for multiple platforms, I think it'd be nice for the lifecycle to provide it, at minimum, as a library that pack could consume.
A
I do have a question about, I guess, the current existing lifecycle, or mixin validation, and it being in pack. I don't recall those conversations, to be honest, but is there a reason why it doesn't exist in the lifecycle? Does it just not make sense, or are we just expecting a certain picture of the world?
B
I think it could make sense to redo it in the lifecycle, even with the world we have now, but I think the set of possible errors is smaller when you're not asking for mixins to be installed on the fly. But yeah, for platforms like Tekton, it always would have been helpful if the lifecycle did a sanity check, and we just didn't do it.
C
It
does
bring
up
the
run
image.
You
have
to
know
the
run
image
like
up
front
right,
which
is
probably
why
it
was
done
in
pack.
If
I
had
to
guess
right,
because
before
the
run
image
pack
was
the
only
one,
that's
sort
of
picking
the
run
image
that
it
wants
to
pass
to
the
that
particular
phase.
This
was
long
before
creator
right
so.
B
Yeah,
I
think
we
want
the
first
time
the
run
image
enters.
The
lifecycle
lens
with
the
run
image
is
during
export,
so
I
think
lifecycle
would
always
fail,
wouldn't
fail,
run
image,
mixing
validations
until
export,
which
is
why
doing
it
in
the
platform.
First
is
nice
because
that's
a
long
time
to
wait
for
your
failure-
and
I
don't
think
we'd
want
to
change
lifecycle
inputs
just
so.
We
could
do
that
early,
because
the
client
can
always
provide
the
early
check.
A
Yeah, I would think that, for that case, you'd still want the validation on export, right? Because if you're thinking about the lifecycle as just these individual binaries, that binary should be ensuring that that is true. But yeah, the platform, for the sake of a better user experience, could then leverage similar logic for that validation way up front. Yep, I agree with that.
C
The problem is, you know, Windows support. But yeah, we've been playing around with the idea of having, instead of a single creator, two sort of kaniko-based ones. They could still run on the builder image: you could actually run this kaniko-style creator on the builder image, then wipe the builder image except for the things that you've done with kaniko, and then mount the run image.
C
Only
if
you
need
the
extend
phase
and
then
and
then
at
that
point,
you've
got
the
continuation
of
the
layers
directory.
It
means
sequentially
right,
you're
not
or
you
could
run
a
second
container.
I
guess
if
you
want
to
really
do
it
at
the
same
time,
but
then
and
then
you
go
into
exporter
as
its
own
image
at
the
end
with
just
all
the
layers,
and
it
could
could
make
it
more
dynamic
for
knowing
the
run
image
not
having
to
know
the
run
dimension
until
at
least.
C
Yeah,
we
just
talked
to
somebody
where
folks
were
using
kaneko
for
something
else
as
well
and
they're
doing
similar
things
where
they're
extending
some
images
with
canaco
for
like
ca,
certs.
That's
sort
of
a
use
case
that
we're
playing
with
too
so.
C
Yeah
I
kind
of
wish
creator
was
canon
co-creator,
like
just
can't
code
creator,
even
if
it
wasn't
like
just
as
a
small
scratch
image
that
basically
expanded
the
the
build
image
and
could
you
could
potentially
pass
like
the
the
image
in
a
cache?
So
they
could.
You
know
not
have
to
download
not
pull
from
a
registry,
but.
B
I feel like, having watched kpack performance in different environments, sometimes, depending on what IaaS you're running on, pulling from a registry can be faster than reading from a cache, unintuitively, because if you're running in, like, GCP, and the registry you're using is GCR, they really have great…
C
Cool, all right. Well, that's it on stack packs for now. There are lots of tests to fix and plans to create, but that's the main thing.
B
We just get back to our individual tasks that are taking much longer than we thought they would. I know mine's stretched on for weeks more than I thought it would, but it's not relevant to this group.