From YouTube: Working Group 2021-05-12
A
So first thing on the agenda is introductions. I don't think I see any new faces here today, so we'll move on to release planning and updates.
B
I speak for the implementation team. Sorry, oh, I'm sorry, Dan. Okay, I'll be fast! Yesterday we shipped a patch of the lifecycle to fix a bug where we were retrying to pull image manifests even if the first request was successful. That added a total of 300 milliseconds per manifest and tripled the number of pulls, which is not great, but we patched that, and we continue working on the lifecycle.
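The fix described above can be sketched roughly like this (a minimal illustration with made-up names, not the actual lifecycle code, which is written in Go): retries should only happen when the previous attempt failed, so a successful first request issues exactly one pull.

```shell
# Stand-in for a real registry request; here it succeeds immediately.
pull_manifest() {
  echo "manifest-for-$1"
}

# Retry loop that stops as soon as a pull succeeds, instead of
# unconditionally re-requesting (the behavior the bug introduced).
pull_with_retry() {
  ref="$1"; attempts=0; max=3
  while [ "$attempts" -lt "$max" ]; do
    attempts=$((attempts + 1))
    if out=$(pull_manifest "$ref"); then
      echo "$out"
      return 0  # success: no further requests for this manifest
    fi
  done
  return 1
}

pull_with_retry "example/app:latest"
```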
A
So first thing on the list is from Sam. Actually, I think mostly from Sam.
F
Which was an alternative to this? Like you were saying, we should first fix this issue where it can modify things, and then have a separate RFC to figure out how to share layers between different buildpacks.
A
Do we want to talk about this, or do you want to put it on the agenda for this week? Yeah? Sure. Well, seems like there's decisions to be made. "Add BOM to layer content metadata": this is in FCP. FCP closing, yeah. It says seven days ago, so it's probably today, and the shepherd is Emily.
A
"Setting default command-line arguments that can be overridden by the user": this is blocked on... I think Emily said you're going to open an alternative.
G
I think the open question is (I left a big comment about this) what to do with different API combos, like two buildpacks with two different buildpack APIs and a platform API. Because this new structure involves changing both the buildpack and platform APIs, how do we want to handle all the combinations?
A
Complicated. Next thing is mine: "Guidelines for accepting component-level contributions". This is in FCP, seven days.
A
All right, and that covers our RFC review. Let's jump into the agenda. First thing is: how was KubeCon, what were common questions, and what are your perspectives?
D
It's really not much deeper than that. We're all curious: how did you find the outside world?
I
I stayed very much inside during KubeCon; European time zones plus American time zones were pretty rough. But I thought that turnout went really well. I think it was Tuesday and Wednesday, right? So the office hours were at, I think, 2 a.m. Central both times. I think Sam said we had 160 or so on the first day, and we had like 150 or something on the second day for office hours, yeah.
J
I was there for the second day, and we were presenting a lot of stuff around, you know, how it compares to other build tools, which I think is common. So I think we're getting a lot of people who are very new to buildpacks, or maybe hearing about buildpacks for the first time. And then a lot of the other questions that I remember were around CI/CD integration as well, which has been common from the intro talks that I've done at other KubeCons. I think we've seen that as well, so I don't think we got any fairly advanced questions, if I recall.
H
Yeah, I was definitely pleasantly surprised by the office hours, maybe even the format, right? We did a very quick overview of buildpacks, but I think the fact that it was a very open floor for conversations (and I think that was more or less the expectation) did kind of allow people to really ask questions and have an informal conversation.
D
How did people know about buildpacks before the event? What was your gathering? Was it that they just found out about it that day, or were they just curious?
I
Sure, and yeah, just hypothesizing to answer your question, Anthony: I think maybe it's that we got to incubation, so maybe we get more marketing kind of built into that, and we're on the schedule on the maintainer track, and it's "never heard of this thing before, what is it?" I guess we didn't actually ask questions of the audience; we were really just on the receiving end of answering questions. So honestly, I would love to know the answer to that question, to be quite honest.
A
I think part of it is we're the only CNCF, like, high-profile independent project that tackles container builds, right? There's some small things like, you know, Kaniko, or, you know, Jib, that are either Dockerfile-based or niche language-specific things. But it's not like we're sitting among many other alternatives for solving the problem we're solving, and so if people come to KubeCon and are wondering about building containers, we're just kind of the one set of talks there, you know, as far as CNCF projects go.
I
At least NA is coming up in the fall, and that will probably be an easier time, zone-wise, for you to be there, Anthony. I will say, I guess as a closing comment: I think we were kind of not super excited about the office hours, given our experience from NA, and I was really similar, Javier, like, really pleasantly surprised with just... it was valuable. Versus, I felt like for NA, I felt bad assigning people shifts, because it felt like you were just twiddling your thumbs, kind of, with how they did it last year. So it's a marked improvement from our experiences with the North America one, I think.
A
Awesome. Well, we have quite a few things on the list, so I'll move along: meta-buildpacks and lifecycle extensions.
F
Yeah, I put these items in the reverse order from the past meeting, so that we could talk about some of the other things. This was something that I brought up at the very end of the last meeting, which was around... so, we've talked about possibly deprecating profile.d.
F
So typically, when you have some new RFCs for changes to the lifecycle, or some other common functionality across buildpacks: could we provide hooks or plug-ins into the lifecycle, where people, at the appropriate level of abstraction, can plug in plugin-specific code that acts on the buildpacks, rather than the application, to modify the output?
F
So, for example, you could have this meta-buildpack that, rather than looking at the app dir, looks at each of the buildpacks' layers directories, and then does some modification after the build process or during launch time, to add these extensions. So you could share common functionality across different buildpacks, or possibly prototype RFCs before committing them to the official API.
F
Yeah, and possibly some hooks during the... so that's why I have two parts: one is meta-buildpacks and one is lifecycle extensions. So the other part is having some hooks during launch time, which could be triggered to set up some other things before the application is launched.
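A launch-time hook like the one described could look similar to the lifecycle's existing exec.d interface, where an executable runs before the app process and emits environment variables as TOML on file descriptor 3. A minimal sketch, assuming that interface (the variable name is illustrative):

```shell
# Hypothetical exec.d-style hook: compute something at launch time
# and hand it to the launcher as an env var by writing TOML to fd 3.
exec_d_hook() {
  # compute a value when the container starts...
  started_at=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  # ...and emit it in the TOML shape the launcher expects on fd 3
  printf 'STARTED_AT = "%s"\n' "$started_at" >&3
}
```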
A
Something like operator-specified exec.d, where the same operator can create hooks that run after every buildpack, on all the buildpacks' layers, so they could contribute something that ends up in that final image. Kind of like a .profile in the app directory, but not in the app directory; it's set, you know, for all things on the platform, or something like that, and that ends up in the image.
F
Yes. If you wanted to modify... I mean, I say hooks, but, like, if you wanted to modify some launcher behavior which is not exposed to the buildpack API. Again, this is completely hypothetical; I haven't thought of a lot of cases on the other side, the lifecycle extension side, but I just thought it might be useful to prototype things.
F
For the first one, the clear use cases are something like profile.d, or just reducing duplication across different buildpacks when you want to offer some common functionality to each of them. So, as opposed to each of them using a library and executing the same code, you have this one thing that can modify all of them.
G
I understand the use case. I guess what I would get worried about is API versions. What would this meta-buildpack need to describe? Would it have an API version? Would it need to list all the buildpack APIs for the types of buildpacks whose output it could modify? I don't want to create too...
A
On kind of a similar note, but maybe more philosophically, thinking about the design: I worry about a buildpack writing a layer, thinking "I'm creating this read-only, quote-unquote, layer for the Node.js runtime, for Ruby, for Python, or whatever," and then something else that's in a sanctioned part of the build process actually has the ability to change all that stuff and violate the assumptions that buildpack might make, including code the buildpack created that, you know, intends to access files that it thought couldn't change and are now writable.
G
And could we solve this instead with, you know, more tooling for buildpack authors, like helpers in libcnb, or a library of bash utilities for a bash buildpack author to do some of these common things? I guess you probably have a very specific use case at Bloomberg, and I assume this is very much probably a company-level kind of lever that would get pulled. I can imagine maybe even the Cigna case, of "this is just the security check thing that happens at the end across all the layers," or something along those lines. Is that kind of what you're thinking of for these kinds of meta-buildpacks?
F
Yeah. I guess if you have a family of buildpacks that all sort of share the same thing, I guess one way you could do it is obviously to provide some library or extension that can be used, but...
F
The issue with that is that all the buildpacks have to be written in the same language, or you need equal implementations of that library code for each of the languages you're writing your buildpacks in. So if you have one buildpack written in bash, one written in Go, and one written in Python, you now suddenly have to write the same plugin for all three of them.
A
You care about operator control here, right? It's like, you want to use different buildpacks from different people, and you don't want to have to modify the buildpacks. You want operators to be able to say, "here's a function I'm going to apply to what each buildpack has done." I buy that that is a very valid use case, but it's hard to think more abstractly about how to solve that problem without understanding a little bit more about what you're trying to do. Is that something you can share?
F
Let's see. So one example could be, like, verifying the BOM in some way, or, for each of the buildpacks, figuring out if what they've contributed is correct, or modifying the BOM to add some additional metadata, or setting up some environment variables for each of the buildpacks based on their contents.
F
I guess I wanted it to be part of the build process, so that it ends up in the final labels and it's one build process. Whereas now you have two separate build processes: one that builds the image, then a second one that modifies some labels and things, and that creates a new image.
F
So I wanted to avoid that, for it to be an atomic process. And the other thing was: if I want to set some environment variables or something like that based on each of the buildpacks' layers, or contribute, let's say, some exec.d scripts based on each of the layers, then that becomes a two-step process.
F
The main issue I had with that was just iterating over each of the buildpacks' layers directories in a seamless manner through a final buildpack. If you could solve that last issue, where I could find each of the buildpacks' layers and then at least read that and write it to my own buildpack's layers, for exporting to environment variables, so, like, creating exec.d scripts, that would solve the same thing. But the main issue is iterating through all of these layers.
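The iteration being asked for can be sketched like this, under two assumptions: the /layers/&lt;buildpack-id&gt;/&lt;layer&gt;/ layout from the spec, and a hypothetical "audit" buildpack that is allowed to read the other buildpacks' layer directories and record what it sees in an env file inside its own layer:

```shell
# Walk every buildpack's layer directories under the layers root and
# write the list into this buildpack's own launch-time env file.
audit_layers() {
  layers_root="$1"
  own_layer="$layers_root/audit/report"
  mkdir -p "$own_layer/env.launch"
  seen=""
  for layer_dir in "$layers_root"/*/*/; do
    [ -d "$layer_dir" ] || continue
    case "$layer_dir" in "$own_layer"*) continue ;; esac  # skip self
    bp_id=$(basename "$(dirname "$layer_dir")")
    layer=$(basename "$layer_dir")
    seen="$seen${seen:+,}$bp_id/$layer"
  done
  # exported to the launch environment as AUDITED_LAYERS
  printf '%s' "$seen" > "$own_layer/env.launch/AUDITED_LAYERS"
}
```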
F
Right. I was proposing this as a new thing, because there's no way to do this right now, even.
F
Okay, and I'd have to make sure that this shim is present on each buildpack that's used, so I'd have to take existing buildpacks, modify the binaries that they have created, add that shim, repackage it, and then use it in my final corporate image.
J
So if the BOM contained not just information on what was contributed, but also on the buildpacks (or the BOM, or something like it), such that each buildpack got information about previous buildpacks that had run, maybe like a path to the layers that they've created, in metadata like that. That would then enable a totally separate buildpack to execute against those, like another operator-provided buildpack.
J
I think of build = true more as, like, visibility, in the sense of putting the bin directory on the path and that kind of stuff. It doesn't mean that the files are not there and you can't read them.
F
So if it's launch = true, it executes right after the build process, puts that stuff in its own common layer, or, like, you can have some shim there, so that these layers get exported out with that buildpack that was set to launch = true, and the next time it's built...
A
If that makes sense, there'd have to be something that tells the lifecycle there's an operator-specified buildpack that's going to process this previous buildpack's layer creation, and then blow away the cache and force the layer to get rebuilt. If it's a launch = true, cache = false layer, for instance, right, it can't just reuse the one in the previous image, because you need it locally for your operator-controlled buildpack, and so that'd be a big limitation.
I
Yeah, there is that RFC from Sam. So I'm wondering: do you need the contents in the layer, or can these buildpacks expose stuff? Like, I guess, to Joe's point: is this solvable with better inter-buildpack communication?
F
It depends on whether that buildpack itself is exporting all of that metadata or not. You can't force that buildpack to do X based on its contents.
F
So it's not so much of an issue for buildpacks that the operators themselves trust and wrote, but it's when users get into the world of using their own buildpacks, or once you have something like inline buildpacks: how would an operator make sure that the final image that's created is still compliant?
F
It's not just digests; it's also preventing things that could be potentially harmful.
A
But can't you do that after the image is built? Yes, you do have to have another validation step, but you don't have to pull the whole image and push the whole image or something; you can just write extra labels into the config blob remotely, if you wanted to. You can do that. It's not that hard to do post-processing of an image after it's built. Are you sure that's not an option?
F
I guess, yeah, but then it also gets to the point where, let's say, suddenly the platform API, or the platform they're using, or the buildpack API they're using, changes. That's the same issue with all the image scanning tools and buildpacks right now, right? Because the structure we use is so different, all of the scanning tools, of course, don't work, so you can't be sure that...
A
I want to point out we've got eight, ten... eighteen minutes left, and I know, Sam, I think you have some more things on the agenda also. Is this helpful feedback to continue moving forward? Should we move on to the next agenda item, or do you want to stick on this one?
F
Yeah, sure, we should.
F
...this as a possible discussion with the use cases, so that we can do this offline. Okay: the detect stage limitations. It's, again, the... wait, I should put the link to the discussion. But it's the discussion that I... where is it?
F
It was around that case where you have three buildpacks involved in the process of contributing a dependency, which, again, also falls into that category of inter-buildpack communication. I've tried to think a lot about this, and the only clean way I have thought of is adding an optional resolve stage that, as I said, runs in the opposite direction of the build stage and can make changes to the build plan.
F
So if you had, like, a system buildpack, a go-dist buildpack, and the go-mod buildpack, it would start from go-mod, then go-dist, and then the system buildpack. So the go-mod buildpack would make changes, the go-dist one could make its changes, and then finally it would reach the system one, and if it wants to change something, then the build process will run the other way around, to make sure that all of these changes are picked up.
F
It's roughly the same performance overhead as the current build step. Instead of doing the resolution in the build step, you could opt into having this extra resolve stage that does the processing for you, and it has the same limitations: if you'd fail during the build step because you can't resolve something, the same failures would happen.
F
Detect is all done in parallel, right? So you don't have an order for detect; that's why I had to introduce this. If you are willing to give up the requirement that detect runs in parallel, and it can run things in any order, then you could solve it during the detect stage as well.
F
This buildpack would then modify its build plan and send it to the system buildpack, and then, when it's running the build stage, it doesn't have to do a resolution, because it knows the right things to install; you just resolved everything in one stage before that, and it will only run for the group that was detected. So, I guess, any changes you're making, any failures that happen there, might also happen in the build stage, for example. So that was my reasoning behind having an...
G
Requiring something. So let's say I'm the Go buildpack and I'm requiring system packages, right?
F
Yeah, that's what I mean: you can do that technically right now, depending on how you implement your system buildpack. You could have it resolve in a way that it can look at the requirements and provisions from other buildpacks. First of all, I don't know if a buildpack can look at metadata from buildpacks... like, for dependencies it doesn't provide. Can I look at the entire build plan, or is it given, like, a subset of it?
G
And I think, even when you're saying this, you wouldn't be able to see who had required anything yet. All you could do is say, "I provide go." This is admittedly probably horribly complex: I provide go, and then, when you're saying "I require system packages," you could provide some syntax where you could say, "by the way, the metadata from anyone who's requiring my provision, feed it through here."
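For reference, the provides/requires shapes under discussion look roughly like this in Build Plan TOML (the version constraint is illustrative):

```toml
# go-dist declares what it can contribute
[[provides]]
name = "go"

# go-mod asks for it, attaching its constraint as metadata; today the
# providing buildpack must itself interpret every requirer's metadata
# during build
[[requires]]
name = "go"

[requires.metadata]
version = "1.16.*"
```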
A
The problem is that it's too flat, sort of, right now: you can only get stuff from requires into provides. But then, if there are multiple levels, if C makes a difference in how B works, which makes a difference in how A works, then you can't... there's only one pass of resolution right now, right? Yeah.
F
Like, currently your go-dist buildpack knows all the different ways you can... all the different buildpacks that can specify a Go version, and how to choose between one of them and solve any constraints. Now it needs a way to specify all of those constraints, and that gets passed on to the system buildpack, and the system buildpack now has to resolve all of this. It needs to know that for each downstream buildpack that uses it.
F
I'm fine moving on to the next one. I guess, if you have more thoughts, it would be really nice if you could put them in that discussion. I'm just trying to figure out what would be the good alternatives for solving this problem.
B
Before we jump into the next topic, can I just very quickly do mine, which will take 10 seconds? Part two of the user research share-out is tomorrow, during office hours. So please come to wrap up the discussion we had last week.
A
Awesome, all right. And so we'll jump to the next one, which is the build write flag.
F
...instead of being able to modify previous buildpacks. I guess those were the two alternatives that were prominent last month.
A
Last... kind of, you know, three out of four, with the one Sam has open. There are some unresolved things. We're not going to have office hours tomorrow; it'd be good to schedule a follow-up discussion to resolve some of those things, and if you have time, it could be at a better time for you, time-zone-wise, also. Okay, so let's do that, and then let's move on to separating config layers. Four minutes to go.
G
Let me share this. I'm hoping that this will be uncontroversial, because even if people don't like it, it is basically the only way to do it. All right, so let's say there's two buildpacks and one platform. We want to support the new buildpack API and the old buildpack API for both the new and old platform. We're doing the new platform, where we have the output directory.
A
We're dropping support for... once the buildpacks bump to the new buildpack API, they're no longer able to have their metadata read in their buildpack directory, but the buildpacks on the old buildpack API will still be able to contribute files into their individual layers directories. The old buildpack API will not receive the CNB output dir and will have to continue to write files in their original locations.
G
The controversial part: okay, so if we're using the old platform API, we don't have an output dir. We're saying we don't change things for old APIs, so this has to be called "layers" and be set using CNB_LAYERS_DIR. And I think, because we've actually specified in the API the literal paths to a layer TOML, if someone implemented their own exporter, it should work with a builder of the same platform API.
B
Are we confident that the advantages of doing it this way outweigh the effort involved in implementing it? I remember people were, like, on the fence between this one and number four.
B
Yeah, number... the alternative four, which is essentially where we land anyway with number one.