From YouTube: CNB Weekly Working Group: 2022-02-10
A: All right, any new faces? Anyone joining us for the first time, or one of the first times, who would like to introduce themselves?

Yeah, hi there — A Brown. I'm a fairly new member of the Salesforce Heroku team, and so I'm just kind of... I think this is my second meeting or so, so I'm just kind of joining in to get a...
A: Cool, welcome — thank you.

Hey everyone, I'm Laksh. I'm from India — a second-year student — and this is the second time I'm joining this meeting, so yeah.

Cool, welcome.
A: All right — release planning? Implementation team?

A: Cool. Platform team?

A: Cool. No updates from the distribution team, BAT team? Cool.
A: Alrighty, we're ready to move on to the agenda. First one up is the run-image SBOM — I think that's Sam's. Did you want to pick that one?
A: Should I recall the reviews for the core team — re-request review?
A: Okay — was there any discussion on it, or should we... cool, all right. The next one up is cosign.
B: Yeah, I think it probably goes along with a bunch of the daemon-removal stuff, but I just wanted to bring up — or discuss — what our timeline with all of these things would be, or in what order we're expecting to do these things. If I have that in mind, I can make changes to my RFC accordingly, separately.
B: If we want this while still keeping the daemon mode, and pack can also add some compatibility, I can add a compatibility section where pack has a `pack publish` command, or something, that allows it to... so, while building things it can store all of this extra information somewhere, and then during `pack publish` it can republish some of these things out to the registry.
B: So I don't know what timelines we're looking at for this versus the daemon removal, what we want in the middle, and whether this is still a hard block on the daemon removal or not.
E: ...a pack thing — or, you know, just a separate executable that pack happens to run, that another platform could run as well. I think we could start building something here without changing the API, and then, if we wanted to make that formally part of the lifecycle, we could bring it in afterwards. I think, unlike other things — because it happens last — this is something we could freely experiment on.
B: Even though it's not part of the API — if we're okay doing that, I think we can easily convert it to an extension module that can be developed separately.
E: Just the lifecycle, I guess — instead of changing the exporter API, I think we could experiment more freely if it was separate.
B: Yeah, I'm saying it will be a separate binary. I'm just saying that if it's part of the image, it's one less thing the platform has to import and inject. So it can be a separate binary that the platform runs after the exporter, but it's still just present in the image, and it's going to be run in the secure mode.
D: The only thing I would say — let's say from pack's perspective — is: I don't know how we'd be able to interact with an extension if it's not part of the API.
D: Why not? Because, I guess, we could get into: how do we know whether or not that binary is present in the builder?
D: Okay, I see what you're saying.
C: It could be part of the builder if we made the extension point something like lifecycle/finalizer, and it runs things that could be on the builder or not, or something — because obviously this is just one example of something that can happen after all that's done. And then we still have the discussion of: does this get registry credentials and things like that? Because obviously, for something like this cosign, you're going to need that. Yeah.
B: I mean, it will obviously need registry credentials. It will just be up to the platform to provide those to this binary, the same way it did to the lifecycle. And the platform will have to manage some state between the creator — or the other phases — and this phase: so, some volume where it stores some common things.
B: The only weird thing I find about this is: what happens when the platform API changes? Or, what contract does the platform have with this binary? That's unknown to me. As long as you're just experimenting with it, it's fine, but if you want wider adoption of this, we need some contract between the platform API and this binary, and that's...
D: Because, I guess, if it's a pack-specific thing, then that's different — but it doesn't sound like that's what you want it to be at the end of the day, right? You want it to be something that could be reusable across multiple platforms, and then I think a lot of the same concerns come up about...
D: ...what's the contractual agreement — the interface — between this thing and anything going forward, any other platform?
E: That all makes sense. I feel like there's this tension between this and the conversation we were having last week about what ways we could experiment without having to approve spec changes first, right? But I wonder if, you know, we could get there in phases — like, this is an experimental feature and it's separate, and when it all goes well, we'll put it in the formal API.
B: But I think a precursor to us going down the extensions route was having a clearly defined extensions API that's aware of platforms and whatever APIs they're compatible with. So it's free-form in that it can do additional things, but there's something that runs this — something in the lifecycle, or some translation, right — that runs these extensions.
B: If it's just purely experimental — and we just want to try an experiment where we start off with a component that's just for pack, and see if we can reuse it in other platforms — that's fine with me. I can reword the RFC so that it's just pack-specific for now, but implement it in a way that it's not a hard-coded thing inside pack.
B: If that's okay — we can start with the goal that this is meant for pack, but it can also be reused by other platforms, and then we can see how it all plays out. Does that... okay, so let's keep it pack-specific.
B: Only in publish mode — that'll keep things easier. Then we don't have to worry about the daemon.
B: So pack has some additional flags. If you provide publish mode, those flags are allowed; pack does the validation that those extra cosign flags are only added when you're using publish mode, not otherwise, and we just switch it from the lifecycle to pack, with that caveat. Done.
C: I think we're on to the next one — prepare phase. I don't know who's owning that one.
D: Yeah, I can give a quick overview of this new RFC. I know it's been in draft for some time as we were working through a couple of kinks, so a lot of people have probably already seen it, but some of you may not have — and additionally, certain things might have changed since then — so I'll go ahead and show a little bit of that. Let me resize this real quick. All right.
D: So, let's see — starting at the top. This ended up being something very specific to trying to get the project descriptor utilized within all the platforms.
D: I want to call it an operation instead of a phase, because it's outside of the build operation — right, instead of phases. But essentially, similar to what Sam just brought up about this finalizer idea — something that executes completely separately from the build operation — this is actually on the front end: before a build operation occurs, there's this prepare operation, kind of like a precursor. This is something that's not in this RFC, but a lot of platforms...
D: ...you know, today already have some sort of prepare mechanism. If you look at Tekton, they do something; if you look at pack, they do something; and so forth. So this is more or less trying to standardize that, and also to allow the project descriptor to be part of that specific operation.
D: So, as we walk through here, some of the things that motivated this specific RFC had to do with things that may be specific to just pack — but they were: to allow the CLI to have a different file within the repositories, and also for platforms to recognize the project descriptor and have some sort of idea of how it should be used across different platforms.
D: So what it is, is three things. Moving the io.buildpacks properties — the things that used to be in there, for the most part — to io.buildpacks.defaults, and I'll go into a little more detail on that. A replaceable new phase — it's still a phase at the end of the day, but it's a swappable phase, meaning that platforms could put in their specific prepare operation. And then, last but not least, supporting utilities that the project itself would provide.
D: So the first one is changing the io.buildpacks namespace. The primary reason for this is that, beforehand, when we had just the io.buildpacks namespace, it gave the impression that, as a user, those properties would always be applied, no matter what. By transitioning them over to defaults, it hopefully conveys the idea that these are things that may potentially be applied, but it's really, ultimately, up to the platform to apply them — right, as part of this prepare operation.
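As a rough illustration of the rename being discussed — a hypothetical sketch only, with placeholder keys; the exact table and key names are whatever the RFC settles on:

```toml
# Hypothetical sketch of the proposed project.toml rename (not the final schema).
# Before: keys under [io.buildpacks] read as if every platform must apply them.
[io.buildpacks]
exclude = ["spec/"]

# After: the same keys live under a "defaults" table, signalling that a
# platform MAY apply them but is not required to.
[io.buildpacks.defaults]
exclude = ["spec/"]
```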
D: The reason that got kicked out is that we started defining the criteria for the things that would be within this namespace — the things within this namespace would be inputs to the lifecycle phases — and the builder doesn't meet that criteria, so it doesn't make sense for it to be part of that namespace.
D: Let's see — outside of that, we can go on to the prepare phase itself. The primary responsibility of the prepare phase is to apply the requested configuration, and in this case we're talking about the project descriptor, the project.toml. We have a little bit more guidance on that below, in the proposed spec changes, but for the most part this just declares that it will run before the create operation — or phase — or before the detect phase, depending on how you're operating the lifecycle.
D: Let's see — the next thing would be the supported utilities that the project provides.
D: One of them would be a Go prepare function. In some cases — let's say pack — instead of it having to spin up a container to run this prepare operation, it'd be nice if it could do it natively in some sense, so we'd like to see the lifecycle library provide some sort of functionality there. Along with that, for other platforms such as Tekton and whatnot, it would also be nice if the project provided a prepare binary that does a couple of default...
D: ...things. Like, for instance, downloading buildpacks, which I think is something we've talked about wanting for some time now. It could update the order based on the default groups, and it could also set up the environment variables defined there into the platform environment-variables directory. Last but not least, something I'd like to see is notifying the user of any properties in defaults that aren't being applied — because there are some that may not be applied by default, and so forth.
D: Let's see — there are a couple of alternative suggestions here; you could run through the benefits and drawbacks of each of these. I'm not going to go into too much detail on that. And below here are the spec changes.
D: One of the things that's probably worth calling out is my proposal to move the io.buildpacks namespace to the platform spec — so the schema version would be directly tied to the platform API version, as part of the platform API, and really the project descriptor spec would only be about defining the overall structure of the project descriptor itself, and not so much any project-specific namespaces.
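A rough sketch of how that versioning split might surface in a project.toml — hypothetical; the real key names and version values depend on the final spec:

```toml
# Hypothetical sketch. The "_" table's version would cover only the overall
# structure of the file; the platform-owned namespace would carry a schema
# version tied to a platform API release instead.
[_]
schema-version = "0.2"   # structure of project.toml itself

[io.buildpacks]
# Keys here would be versioned with the platform API,
# not with the project-descriptor spec.
```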
B: I think the only question I had was the same discussion we were having last time, which is: how does this modify flags that were passed by the platform on the fly, from the things that are present in the project descriptor, and give an output that the platform can then pass on to the rest of the lifecycle phases?
D: Yeah, so I purposely left that out of this RFC — although I think it's very valid, I left it as an unresolved question. I think there's a high possibility we could leverage something that I think Jesse brought up at some point, which is a lifecycle config file. If the interface were more in line with a filesystem-based configuration, then we could do that a lot more easily than with what we have right now.
B: Given that this is a new phase — or part of the platform API — we're introducing, is there any reason why this cannot be driven by that lifecycle config file? So, rather than passing everything as command-line flags, you create a file and pass that file to this prepare phase; it modifies it according to whatever it feels like, and that same file is...
B: ...then the platform is responsible for taking that file, deserializing the same struct fields it gave in, and then passing it on to the rest of the lifecycle phases.
D: For me, it's about two things. One is minimizing scope: there were a lot of conversations around the lifecycle config file — about its structure, what things make sense, and so forth — that I feel could become its own rabbit hole of sorts, so I'd like to keep them separate. Two: I think it makes sense to do all or nothing, instead of having one phase work differently from all the other phases.
D: It'd be nice that when we apply this change — where we have a lifecycle configuration file — it works across all of the phases. So, for those reasons, I think it makes more sense to keep them separate and keep it in this... or, you know — ultimately, the other option would be that this is blocked on the lifecycle having a configuration file first, and I wouldn't want to tie them directly, or use this as a driver for that.
B: Because then this, in itself, won't be able to completely replace whatever pack is doing right now to prepare for the lifecycle phases to be executed. Really, it cannot replace pack's current implementation for picking the tags — and does pack support that right now, or does it not support that at all, through a project descriptor?
D: Yeah, and I think what I would be okay with is blocking, more or less, the implementation — or the release this goes in — on a set of specs for the configuration file. But what I don't want to do is balloon this RFC by saying: hey, this is now defining and bikeshedding on the lifecycle configuration file.
A: It just feels like it's making a statement about how you will specify this thing, without, you know, necessarily implying anything about the behavior of a particular platform, or even the lifecycle.
E: Like, we sort of need to go back to Jesse's old RFC about refactoring the platform API filesystem, such that the platform has a place to inject config on a volume, in a way that works better in, like, a Kubernetes situation, rather than in a "pack copies the files into the container" situation.
C: order.toml in the layers volume right now — I added that, though it didn't make it through after this, for what it's worth, for that same reason: so that you could write it to a volume and have it get picked up. So it is possible to load from a layers folder already, but it would be nice, yeah, if we didn't have to do that specific one-off for every file we support in the future.
E: The other thing — the other question that comes to mind here — is: we're talking about making this part of the platform API, but I assume we still want to check project.toml into the root of an app, right? Does that mean you cannot run the same app on two platforms that are using different platform APIs?
D: Not necessarily, right? It means that we now have something — a number that tells us what platform API the data structure is based on, like the schema. The prepare operation could do internal translations, or just know how to read from different platform API versions — different schema versions, I should say.
D: Right — unless I'm not understanding your question — but this input, this namespace, is the input for the preparer no matter what, regardless of the platform API.
B: I mean, if we can get that lifecycle config thing going, this prepare phase could be supported starting at a certain platform API version. We can't fix what's already out there, but we can fix things for the future, and then this works with the typical lifecycle behavior that detects the platform API version from an environment variable: given that environment variable and the config file with all the arguments, it can then translate accordingly.
C: It might be an interesting move to make this prepare read that lifecycle config TOML we're talking about, so you kind of, as a platform...
B: That was, I think, the idea with that file: if it's the same file, then this prepare phase can change it however it wants, given the project descriptor. That way the platform just puts the file somewhere and runs the prepare phase, which modifies it in a way the platform doesn't have to care about — it's still compatible with the same platform; it's just the file you would have passed directly to the creator or the other phases — and then just runs it.
D: Because, at the end of the day, I think it makes sense — or it's easier — to say that the namespace is tied to the platform API, so that we don't have to manage or maintain two different things. But it does become a little bit confusing. The alternative would be to have yet another file, or a different specification — a different schema, right — somewhere, that says "this is the namespace", and that is versioned independently.
D: Sort of — but I guess what I'm trying to get at is, I think we should separate the project-descriptor version from the io.buildpacks namespace stuff, yeah.
C: I'm definitely in favor of it being a different schema version. I still don't know how I feel about it being the platform API version, but I don't think it needs to iterate with the project-descriptor version.
D: ...of these namespace sections.
D: So if we then want to update the project descriptor to 0.3, we would then update the platform API to say: no, it only supports the 0.3 project descriptor.
D: Well, I guess what I'm saying is that 0.3 could look completely different, and it could be a completely different namespace, right? Like, let's say we say "screw it, reverse domain name" — yeah, because that has come up.
E: Like, it's too bad that we now need to state both of these versions — which schema version of the actual data do we support, and which version of this project file, the one that allows you to specify different schemas, do we support — but if we're going to support project.toml natively, I don't know how we get around it.
E: To my mind, having to throw the defaults in there makes the experience of using this file worse, and I don't know that the clarity it adds makes up for the extra, longer name for everything.
A: Yeah — I mean, to your point, "defaults" doesn't really communicate what a platform supports. All it says is it's the Wild West: no telling what'll be supported. And I think what Emily's saying is platforms need to do that anyway — they need to communicate, on their own, what they're going to support and not support.
D: I think schema-version just ends up being a special key, right, and that would work. Yeah — I'm more than happy to be outvoted, I guess, and if the majority says "hey, there's no need for defaults", I'm okay with that.
D: So I'll add a little poll on the RFC with little emojis, and you can vote that way.
D: All right, any other — okay, how about — so that was bikesheddy, I get that. Any other, like, major concerns, I guess? Maybe bikesheddy stuff too.
E: I think, maybe on the same theme, it'd be nice to throw in some examples of what a platform is supposed to do if, for instance, it does not want to allow arbitrary buildpacks to be downloaded — stuff like that. Like, I can make assumptions: oh, before I run prepare, I should delete these things out of the file, or whatever. But just sort of describing some of those workflows would, I think, make this easier to visualize.
D: So it must call an executable of this nature in some form or fashion — whether it's its own, you know, custom-built one, or the one provided by the project; I think that's the choice the platform makes. But it goes back to the finalizer, right? It's: what is the contract for the thing that I plug in to this specific operation? And this is the contract.
E: It feels confusing to me. The part that makes sense is that we're defining the platform's contract with the thing we're providing: here's how you give it its inputs, here's how you get outputs. It feels weird to define the platform's contract with a thing the platform itself is providing. Like, what if I don't want to use anything in there — must I run a prepare phase that does nothing, you know?
B: I think the idea was that, since we wanted the prepare functionality to be consistent across platforms, a platform could easily choose to reuse the same preparer that pack has — or the default preparer from some other place — and not implement it itself. Like, if pack wants to reuse the preparer that kpack has, because you want a stricter version of prepare than what pack supports by default, you can do that. So that was the motivation behind having a standard interface for prepare: portability across platforms.
B: You can create your own one if you want, but if you want the same pre-processing across different platforms, you can use the standard interface, and you can use some other platform's code for it. So, rather than it being tied to the platform, it's about what kind of audience that platform is aiming for.
B: Is it a more secure, locked-down version — in which case you'd want it to run on Tekton or kpack in that exact same way — while the default platform is fully flexible, meant for app developers pulling in random buildpacks? And if you wanted that in, let's say, Tekton, for example, you could do that. We've had requests at some point like: I want to download buildpacks dynamically, like pack does, but I want to do it in some other platform.
D: So it's the portability aspect of it that we're really aiming for, and in order for it to be portable, we need to have some sort of specification around it — some sort of contract around it.
A: We have 10 minutes left. I feel like I haven't had a chance to do a full read-through of this — I think that would be helpful for me — so I'm wondering if we should take a step back, let folks read it, and then revisit it again. I don't know — I mean, y'all tell me if that makes sense.
A: Okay, thanks. The next thing on the agenda is the profile buildpack, and it says "check with author" — I'm not sure what's implied here.
C: I think we just put it on there because it looked like it was updated somewhat recently — but maybe that was just a comment from you. It's only a 10-day-old RFC, and the author may not be here, so we wanted to know if anyone knew anything about this: how to move it forward and what needed to happen.
A: Yeah, it looked like there was activity — I'm tracking it, at least.
A: Yeah, I'll make sure — I mean, I was really glad to see this, because I thought I was going to have to write this RFC, so I'll help make sure that it gets through.
A: I think a couple of us were saying, like, that's a cool idea, but we should punt on that, you know.
A: Okay, yeah, take a look at that. The next one is annotations.
B: The main motivation behind this RFC was that there are two files right now which directly map to these OCI annotations. There's the project metadata, which is provided by the platform — it's not the project.toml; it's something else that injects other metadata about the project, things like the git commit SHA — and there's the project.toml itself. I don't know if this was intentional or not, but almost all of these fields directly correspond to the OCI recommended annotations.
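For reference, the correspondence looks roughly like this — the annotation keys are the pre-defined ones from the OCI image spec, while the project.toml field names shown are illustrative of the 0.2 descriptor and should be treated as a sketch:

```toml
# Illustrative pairing of project.toml fields with the OCI pre-defined
# annotation keys they roughly correspond to.
[project]
version           = "1.0.0"                    # org.opencontainers.image.version
authors           = ["Jane Doe"]               # org.opencontainers.image.authors
documentation-url = "https://example.com/docs" # org.opencontainers.image.documentation
source-url        = "https://example.com/src"  # org.opencontainers.image.source
licenses          = [{ type = "MIT" }]         # org.opencontainers.image.licenses
```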
B: So, things like the git revision, for example: let's say the platform is downloading some git repository and then mounting a subfolder for the buildpacks to look at, which doesn't have the .git folder — there's no way for the buildpack to extract that information and put it somewhere. Same for all of these other platform-level features. A buildpack can potentially read a project descriptor and try to construct all of these, but then we go back to the same issue, which is that now the buildpack has to be aware of all the different project-descriptor versions.
B: So that's why I was like: can this just be a platform thing? The platform already has to process the project descriptor in some way; it already outputs everything to project metadata anyway, which is a platform-specific file, and those map directly to these OCI annotations. And if that was not the intention behind these fields initially — in the project descriptor and the project metadata TOML — then I'm missing some point: why were those fields added in a platform-specific file?
E: When you describe all those things, it makes me feel even more strongly, in some ways, that it should be a buildpack doing this. I think doing those things in the platform API with project metadata was, in many ways, a mistake, because now you're getting variable behavior based on what platform API your platform is using, and it's harder to iterate on. If we could instead push this into a buildpack that reads project.toml...
E: ...it could easily handle multiple versions of project.toml in a way that's less cumbersome than the lifecycle doing it. I think it would be nicer, and we could just try to cut out project metadata. Like, the git rev is maybe something that's a little bit weird, but you could think of ways for the platform to just pass that to a buildpack, if it knows it's there, or, you know, work with a collaborating buildpack. I think it's a classic utility-buildpack slam dunk, yeah.
E: The lifecycle also sets some labels, but the only labels it sets are ones that it functionally needs, in a lot of ways. So if it's a lifecycle-specific concern, I think the lifecycle can still end up setting annotations.
E: But if it's something more cosmetic, like this, I'd love to push that into a buildpack. And then, you know — if it's a utility buildpack, if it's a system buildpack — we can create that atmosphere where it's just always running, but also give people a chance to replace it, or upgrade it, or disable it, in an easy way that doesn't involve a bunch of lifecycle flags that are about magic behavior. The lifecycle should be dumber, I think, and buildpacks should be smarter.
E: Yeah, we definitely do this weirdly and incorrectly in a bunch of places, but this is the vision I want to push us towards, which I think is better than where we started. That's why we just approved all this utility- and system-buildpack stuff: so that we can do this, so we can go in this direction that I think will put us in a better place.
B: I mean, I'm happy to move the annotation stuff out to a buildpack API. I still don't have an answer for how a buildpack will actually set all of these annotations, because it doesn't have access to that data. I'm completely fine with a buildpack setting these values for the things it can read, but there are simply some values that it cannot — like the final base image that's used; that's probably going to be something that the exporter knows.
E: Yeah, for stuff like the base image, I think it has to be the exporter, right — but for things like author and homepage...
C: I don't know — I really like the idea of it being a utility buildpack, but I understand there are just some things you won't be able to do if you implement it that way. And I do also see your point that any buildpack created then suddenly having all of these is nice from a project sort of perspective — every buildpack image has these annotations, which is nice — but I'm not...
C: I don't know, I don't have a strong opinion, but I think I still lean towards it being a buildpack, and just dealing with the limitations — and making those limitations less of a factor in the long term. Like, if there are things we really want to have in that utility buildpack, we use that as a forcing function to improve the API, or make these utility functions have a bit more meat to them, if need be.
A: You know, in an effort to limit the complexity of the lifecycle and things like that, but still give more power to the utility or system buildpacks, or whatever constructs we have.
B: That's where Dockerfiles are actually going — like, there were proposals for the run-image extensions to run in parallel with the build process, because the build process doesn't actually depend on the run image, and then, at the end, when you get to the exporter phase, that's where you block on the buildpack build process and the image extensions, and then run the exporter. Like, for any case where you're doing anything that's platform-level: a buildpack doesn't know it's running in a container; it will never know that.
C: It feels like an extension — like, most likely, if we had a buildpack that did annotations, that's fine, but then maybe, in addition to that, some of our talks around having extension points for the lifecycle apply: before the exporter but after the builder, another optional extension point is provided that takes in all the exporter arguments, so that you can lay down some annotations on the filesystem that then get picked up by the exporter.
A: So, apologies — we're actually past time, so I want to give folks a chance to kind of wrap up the conversation so we can bounce. But we didn't get to the daemon removal.