From YouTube: Office Hours: 2021-07-08
A
So this is my fork of the buildpacks GitHub Actions, and whenever we release the buildpacks GitHub Actions — say version 4.2.0 — what we also need to do right now is manually update the registry index of buildpacks. So I have this fork available here: this is my fork of the registry index, and this is the buildpacks registry index. Right now it's updated — and where do we update it?
A
We
update
it
here,
like
even
this
commit
was
doing
the
same,
so
version
updated
to
4.20
so
right
now
we
need
to
do
this
manually,
like
whenever
the
github
actions
is
released.
We
have
to
fix
the
registry
index
also,
and
we
need
to
change
it
like
from
4.1
to
4.2.
Everything
is
done
manually
with
the
project,
with
the
project
that
I
have
done,
which
is
now
working
on
my
folks
and
will
be
hopefully
working
in
the
original
repos
by
the
next
week.
A
What it does is automate the entire process. How does it do that? Let's go to my fork of the GitHub Actions repo. I have created a workflow — and what does that workflow do? Let me open it.
A
So what does this workflow do? If we look, it's creating the PR — well, not exactly creating the PR; it is actually dispatching an event, and that event is what gets handled over here. So what triggers this workflow?
A
This
is
being
triggered
on
release,
because
when
because
we
will
be
using
this
workflow
on
release
when
whenever
we
will
be
releasing
github
actions,
so
right
now
in
front
of
all
of
you,
I
will
be
releasing
just
a
dummy,
github
actions
on
my
fork
and
we
will
see
what
will
happen
so
what
it
is
doing,
it
is
disposing
an
event
and
that
event
is
being
disposed
where
in
the
repository
registry
index,
that
is
which
belongs
to
my
phone
for
now.
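The release-triggered workflow described here could look roughly like the following. This is a hedged sketch, not the actual file from the demo: the workflow name, the `OWNER/registry-index` path, the `my-event` type, and the `DISPATCH_TOKEN` secret are all placeholder assumptions.

```yaml
# Hypothetical sketch of the dispatching workflow; names are placeholders.
name: notify-registry-index
on:
  release:
    types: [published]
jobs:
  dispatch:
    runs-on: ubuntu-latest
    steps:
      # Send a repository_dispatch event to the registry-index repo.
      - run: |
          curl -s -X POST \
            -H "Authorization: token ${{ secrets.DISPATCH_TOKEN }}" \
            -H "Accept: application/vnd.github.v3+json" \
            https://api.github.com/repos/OWNER/registry-index/dispatches \
            -d '{"event_type": "my-event"}'
```

A personal access token (rather than the default `GITHUB_TOKEN`) is typically needed here, since the dispatch targets a different repository.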
A
So if we look at that: the current state is 4.1.0. Now I will release a dummy version, 4.2.0, and we will see how it triggers this registry-index repository. As soon as we publish the release, a pull request will be created here. Right now there are no pull requests, but one will be created, and that pull request will automatically commit a change that updates this version. So how is this happening?
A
I think we will see after the demo itself. So let's move on to the demo. Right now this is version 4.1. I'm creating a new release, and I will tag it — keeping the same semantics — v4.2.0; the release title will be 4.2.0, and I will publish that release.
A
So
this
will
trigger
an
action
here.
First
of
all
that
action
will
be
create
creating
the
pr
this
this
this
action,
so
this
action
will
be
triggered
in
github
actions,
repository
and
now
another
action.
After
this
see
this
action
is
successfully
run
so
now
there
will
be
another
action
that
would
be
that
would
have
triggered
here
on
my
registry
index.
As
you
can
see,
it
is
visible.
It
has
been
triggered
here,
so
it's
all
had
happening
automatically.
A
It
will
now
create
a
pull
request
here,
as
we
can
see
the
event
that
action
has
run
successfully.
Now,
if
we
go
to
the
pull
request,
we
will
be
having
a
pull
request
yeah.
So
we
are
having
a
pull
request
and
this
pull
request
is
updating
the
version
automatically
so
like
I
released
the
version
4.20
so
now
from
4.10
we
can
see
this
is
4.20,
so
this
hap,
all
this
is
happening.
A
We have seen that workflow. There is another workflow added, in the registry index. What it does is wait for that repository_dispatch event — that is the trigger for it: the trigger for the first workflow was the release, and the trigger for this one is the repository_dispatch. For now I have just kept the event type as "my-event"; we will change it to a better name later on — it's just for testing. And what is the logic we are using?
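The trigger side of that second workflow can be sketched like this (again a minimal sketch under assumptions — only the `repository_dispatch` trigger is what was described, and `my-event` is the temporary event name mentioned above):

```yaml
# Hypothetical sketch of the registry-index workflow's trigger; "my-event"
# must match the event_type sent by the dispatching repository.
name: update-index
on:
  repository_dispatch:
    types: [my-event]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ...fetch the latest release version and rewrite the index entry...
```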
A
We
are
using
like
curl
to
get
the
latest
version.
For
now
we
are
getting
the
latest
version
of
my
fork
with
transistor
mod
394
github
actions
releases.
We
may
later
on,
get
it
off
buildbacks,
then
we
will
be
doing
it
for
the
official
original
repository
and
we
are
using
said,
and
the
this
logic
was
greatly
held
by
himself.
So
thanks
to
him
so
that
he
helped
me
develop
this
logic,
because
I
was
initially
not
doing
in
that
way.
A
I
was
also
using
set,
but
it
was
taking
a
lot
of
time
to
figure
out
to
come
up
with
the
best
logic,
and
this
does
our
job
pretty
easily.
This
creates
the
full
request
and
if
you
merge
it,
then
our
version
will
be
automatically
updated.
So
that
was
the
demo,
and
if
you
have
any
questions
anything
to
suggest,
then
I
would
love
to
answer.
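The curl-and-sed step described above might look roughly like this. This is a hedged sketch under assumptions: the `OWNER/github-actions` path, the `index-entry.toml` file name, and its `version` field are placeholders, and the network call is shown commented out so the transformation itself can be run locally.

```shell
# 1. Ask the GitHub API for the latest release tag (needs network, so it is
#    shown commented out; OWNER/github-actions is a placeholder path).
# latest=$(curl -s https://api.github.com/repos/OWNER/github-actions/releases/latest \
#            | grep '"tag_name"' | sed 's/.*"tag_name": "\(.*\)".*/\1/')
latest="v4.2.0"             # stand-in for the curl result above

# 2. Strip the leading "v" to get the bare version number.
version="${latest#v}"

# 3. Rewrite the pinned version in a (hypothetical) index entry with sed.
printf 'version = "4.1.0"\n' > index-entry.toml
sed -i.bak "s/^version = \".*\"/version = \"${version}\"/" index-entry.toml
cat index-entry.toml        # → version = "4.2.0"
rm -f index-entry.toml index-entry.toml.bak
```

The `-i.bak` form of in-place editing works on both GNU and BSD sed, which matters if the workflow ever runs on macOS runners.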
D
Sorry — one thing that I think you went over really quickly is that the first GitHub Action opens an issue, which triggers the second GitHub Action, which opens a PR. And that was actually, I think, a really clever solution to what we thought was going to be a difficult problem: we thought we were just going to have a GitHub Action open the PR directly, but there's, you know, an account issue there. So by opening the PR from the registry action, it just completely solves that. It's really, really clever.
C
All right, let's go ahead and move on to the next item. I believe Forest brought up discussions around the build BOM. What's going on?
E
So, I've been working on sort of enhancing Paketo's bill of materials, and we were preparing a talk to give at CF Summit about Cloud Native Buildpacks and the bill of materials, and one of the selling points we were trying to demonstrate was this distinction between the build and launch bills of materials.
E
However, that's only for platforms that would support that, and even then you have to go into the container to pull it out. I was just kind of surprised by how inaccessible this piece of metadata was, and I was wondering, I guess firstly, if there's a reason why we should not make it more accessible, and then, secondly, I wanted to have a little bit of a discussion about what we could do to make it more accessible and more user-grabbable.
C
So I guess one of the things I'll throw out there is: I know that one of the possible solutions thrown into the conversation was doing something very similar to the runtime BOM, which is just attaching it to labels. But labels are something we've already got in our sights to remove, and so I think the natural next progression would be adding it as a layer on the image — but for build dependencies.
C
This is now my opinion: putting the build BOM in a runtime image seems like a bit of a smell, an anti-pattern, because you don't want to provide that sort of information in a deliverable artifact. Again, that's kind of just my general gist of where I'm coming from, but yeah.
E
But I still do think it has important value, because in some cases the things that were used to build the container, or build the artifacts, are no longer on that image — but they're still dependencies of that image, right? And not having them in that final running image feels like we're missing out on the opportunity we have when we do bake these bills of materials directly into the image, which is that it's a receipt attached directly to the image throughout the whole process.
F
Personally, I think you generally want to make this information available to end users. We did want to separate it from the launch bill of materials for clarity, so people don't falsely assume, like, "oh, why did you put Maven in my image? I don't want that." It's nice to have the separation. And when we originally had this conversation, the reason we chose not to put it directly in the image was reproducibility.
F
So
it's
like,
if
you
had
like
one
patch
version
of
the
build
tool
change,
but
the
end
artifact
that
was
generated
was
exactly
the
same.
Then
you'd
have
an
image
with
the
exact
same
digest,
I
think,
was
the
idea
we
can
discuss
how
much
that
actually
matters
to
people.
I
think
at
the
time
we
made
that
decision,
we're
taking
that
as
a
very
important
and
desirable
thing.
C
A third option that came up, which I kind of want to throw out here as an idea, was to do something very similar to cosign, where you have associated artifacts that are not actually part of the image but can be retrieved through some sort of association.
C
Right
to
me,
that
sounds
a
little
bit
better
because
I
don't
know
I
feel
like
someone
with
better
security.
Knowledge
should
come
in
here
and
say
like
letting
your
end
users,
anybody
that
can
pull
this
app
image
know
exactly
the
recipe
behind.
It
is
probably
a
bad
idea
right,
but
again,
that's
not
my
area
of
expertise
to
say
that
with
confidence.
G
I think it would be dangerous — or maybe more dangerous — if someone who hosts the image accidentally exposed the bill of materials to the people who were using the application behind it. But I don't see a big drawback when you build an image for distribution: other people are going to use it, and potentially run it themselves, so giving them the transparency — letting them understand exactly what they're hosting —
G
That
seems
like
very
positive,
not
very
many
negatives,
the
so
I
I
do
like,
but
I
do
like
the
cosine
approach
or
like
using
oci
artifacts
saying
when
you
you
know
the
default
when
you
do
a
pack
build
and
then
with
that
publish
or
after
you
or
publishing
into
a
docker
registry,
is
to
produce
two
images.
One
that's
you
know,
has
the
runtime
s-bomb
on
it
and
another.
G
That fits with some upstream goals — putting software bills of materials in the registry in an OCI artifacts format — which we could take a look at as well, so I do like that. Emily, you brought up reproducibility, and whether we need to maintain it: I think the perspective we've always had is to make reproducibility possible, and as easy as possible for end users to achieve.
G
If we wanted the default to be that both bills of materials go on the image, and then there's an easy flag you could pass that says "actually, put the build-time one here and the runtime one here, in different places in the registry" — I'm not going to argue against that. If that makes sense, it seems very reasonable to me.
H
So, cosign recently added support for SBOMs, and they support both CycloneDX and SPDX. That was part of the reason why I've deferred the movement of the BOM from the label to somewhere on a layer, or somewhere else — because I think this happened just last week, or in the last ten days.
H
So
it
might
be
nice
to
just
leverage
what
they
already
have,
because
they've
also
figured
out
a
way
to
sign
that
as
form
along
with
the
image
and
like.
I
think
they
also.
They
took
some
pointers
from
like
a
bunch
of
other
security
and
supply
chain,
folks
to
figure
out
how
to
best
associate
like
signatures
with
the
s
form
the
image
and
all
of
these
things
together
into
like
things
that
work
with
most
docker
registries
today,
rather
than
like
the
oci
artifacts
thing.
That
is
eventually
what
they're
going
to
move
to.
F
Whether we decide that that's a file in the image or this cosign approach — as long as they're separate, and if we think it really is good for them, you know, to be published and travel together, I feel like there's no reason to do two different things.
I
Not — not that that's a short-term thing, just that it shares the same sentiment: here's a separate thing that we don't necessarily want in the image. I think that was one of the original things that came up through this, right? Like, pack does not expose report.toml — "how do I get access to it?" was something that came up through pack, and pack doesn't have to actually do anything with it.
I
Right — it's up to the platform. And I think the cosign thing is potentially a way we can maybe piggyback to expose even some of the report.toml things as artifacts that travel with the image and can get associated with each other, that we'd want to standardize.
F
I don't know if that's metadata that we need to put in the registry and then fetch again, stuff like that. I can imagine using it more for reporting about a particular build, in a way that is not long-lived metadata you want to associate with the image.
I
There's some stuff I was talking to Natalie about at one point: I know on our side, for Salesforce, we want a way for buildpack authors to expose metrics of some kind, or even exceptions or issues. Right now you've kind of got to build it into the platform and figure it out — there's not a standardized way to get that out of the build — and maybe report.toml, or something along those lines, is a way to get that out.
F
I could imagine a world where you can get a whole bundle of reports out — buildpack-generated reports, plus the lifecycle-generated report, which is report.toml — and I think the best the lifecycle can do is make that a file. At the end of the day, it's going to be up to platforms what to do with that, because I don't think we want to introduce this as a standard that we attach to the image forever and ever.
C
Yeah, I think there are certain things that we do want to export as artifacts — the BOM seems like something that naturally fits into that — but for a lot of the other things, we'll have to determine what makes sense to export versus just produce and keep ephemeral.
H
I think something we talked about a few office hours ago was things like detect logs, or profiling the build times or detect times, and putting that in, like, /var/log somewhere — so it's not polluting your current workspace, but it's there if you want it, and I guess it could just be cleaned up after a while.
H
So
I
guess
these
kind
of
metrics
would
make
more
sense
to
just
stay
where,
like
the
platform
is
executing
the
build,
and
but
I
would
imagine,
the
bill
bomb
is
more
important
than
like
the
amount
of
time
it
took
to
build
the
image
or
like
detect
logs.
So
I
guess
report
tomorrow.
Whatever
those
reports
are,
could
stay
at
some
common
log
directory
and
for
pack,
and
these
other
things
should
definitely
be
artifacts
that
are
exported
out,
whether
it's
in
the
image
or
as
separate
artifacts
in
the
registry.
C
So, to circle back and maybe more directly try to answer some of Forest's concerns: does it sound like reasonable action items are, for the short term, for pack to simply allow the exporting of the report.toml file through some, you know, additional flag or means, and then, longer term, to kind of wait and see what happens with cosign and the SBOM to figure out whether or not we want to piggyback on that implementation?
C
Yeah — I mean, I think it's definitely worth waiting a month to see where it lands. I think that'd be worthwhile.
E
No, not at the current moment — but it was definitely just a hole that we found that I wanted to, I guess, surface.
C
Makes sense. Cool — is there anything more here to discuss?
C
And I think Anthony already has a pretty good question there, so just following up on that would be great — to more or less set the expectation of what the user workflow is — and it seems like he might be interested in implementing it.
C
All right, with that said — or sorry, that said — let's go ahead and move on to the next item: project descriptor 0.2, next steps. Who brought this one up?
I
I just put this in because we talked about trying to continue this conversation in office hours — but if there's other stuff, I know this can probably take up the whole rest of the time.
C
That is true. There is nothing else on the agenda. Does anybody else have anything to discuss before we dive into this rabbit hole?
C
All right — I guess, yeah: where do we want to start the project descriptor conversation?
G
The discussion this morning was around, you know, kpack and other platforms being able to read the file — the portability of the file, things like that. I think there's logistical stuff for 0.2 we could talk about, and then there's the larger question: how portable do we want this to be? What are the goals? Do we need to make other changes? You know.
D
Like — there was portability, but there was also keeping kpack in sync with changes to the project descriptor. Is that correct?
F
And I think we've brainstormed, in this group, longer-term ideas for solving some of those issues. Like: platforms don't have to respect every field — that can be their decision; it is an extension spec. And if we provided some sort of translation layer that turned a project.toml in whatever the latest version was into a version that was locked to a platform API, then platforms wouldn't have to worry about maintaining, sort of, a lifecycle of changes in project.toml.
F
I
think
both
of
those
solutions
would
address
some
of
those
issues.
I'm
not
sure
if
there
is
like,
are
we
when
we
keep
when
we're
talking
about
product
descriptor
o2?
My
question
is:
are
we
trying
to
solve
these
problems
before
we
release
o2?
Are
we
trying
to
solve
them
after.
F
It's different because there's no upgradable component that solves it for you, like the lifecycle does — most of these platforms let you sort of dynamically pull in new lifecycles without someone having to ship out a new version of the platform. That's sort of what the whole builder model accomplishes.
D
So I've always been in favor of having — like, I thought a prepare phase could potentially do this — but I feel like, if we agree, the bigger thing we would be agreeing on is that we would push this into a lifecycle phase or binary.
C
So really it's just a simple translator, right? Literally, it doesn't have to be a lifecycle phase or anything like that — it could be something completely external, third party, which is basically: I can pass it any project.toml format, and I tell it I want the output to be v2 — or v0.2 — and then that's the only format I have to parse or worry about. But I don't know that there's a guarantee of losslessness of the data — is that a concern?
G
We can have this kind of ugly, unchanging platform API that just extracts platform-specific features, provided by this lifecycle binary — but it's important that platforms use that part of the lifecycle in order to get that information. Otherwise — I mean, I think if a platform kept very careful track of lifecycle versions to make sure that it translated just right, it could be okay, but I think it's important that they use it.
C
Sorry — the only difference from my description that I heard was that, instead of the platform saying "I want it in, you know, project descriptor 0.2 format," it's saying "I know this platform API, 0.3 — take a project descriptor and give it to me in the format that is encompassed in the platform 0.3 spec." So you're associating the project descriptor with the platform API.
G
Yeah-
and
this
way
we
avoid
a
situation
where,
if
we
bump
the
project
toml
schema
version,
you
don't
have
to
kind
of
carefully
make
sure
that
the
platform
and
the
life
cycle
are
aligned
because
they're
both
going
to
have
to
read
this
file.
If
we
don't
do
this
right,
you
don't
have
to
make
sure
they're.
They
have
this
kind
of
three-way
compatibility
contract
with
the
file.
C
That feels like a slippery slope of tooling being added to the lifecycle, which I envisioned as, like, an engine, right? I don't know — it also seems like we could then start adding a whole bunch of stuff just because it's convenient, as opposed to architecturally right.
F
As a platform, you're going to have to create a container to run a build anyway, from the builder that has the lifecycle in it. Before you start that container you can copy a file out of it — that's a binary — and run it if you want. I guess it's probably compiled for the wrong platform, at which point you might as well just run it in a container. We have something.
C
There's a startup cost, right, in some cases.
I
It's like an intermediate representation — like bytecode, you know, like in compiled languages. You can change basically the front-end part of the language, but at the end of the day you have this intermediate representation that can then be read and interpreted. I assume it's similar.
I
So this would just be part of the IR for the platform API — we would define this IR in the platform API. But then, if a project descriptor added new fields that needed new fields on the IR, those would not be supported by the platform until it updated the platform API.
F
Yeah — they'd have to go into a new platform API, and the platform would upgrade to the new API, which is, you know, not so different from what would happen if the platform had to read it directly. You still need to update to pull in new fields, but it would be in lockstep with the platform API, so the platform wouldn't have to deal with a scattershot of random stuff.
C
Yeah — and I think that answers Natalie's question in the chat; it's basically about why going from project descriptor 1 to project descriptor 2 is not optimal. I don't know that "not optimal" is the right word to use here — it's more that this simplifies the usage from a platform's perspective.
H
I think it's also that the project.toml is exposed to the user — like, an app developer — and they might expect changes to it more often, as opposed to changes in the platform API. If they want simple features, they will have to wait for their platform to upgrade to a newer API. I mean, that's already —
F
If the platform is responsible for implementing the feature, then no matter how we architect it, people are going to have to wait for the platform to upgrade in order to get the feature — unless the lifecycle can do it. And what I'm saying is: let the lifecycle do anything it can, but for the things the platform has to do, let's give them a consistent format for the data.
G
We
can
be
very
additive
with
that
format
so
that
we
don't
break
backwards,
compatibility
in
a
way
that
we
probably
wouldn't
want
to
do
with
project
tamil,
because
we
want
to
be
able
to
bump
the
schema
version
and
iterate
quickly
on
the
interface
right.
We
don't
want
to
have
to
maintain
permanent
backwards
compatibility
for
the
format.
So
that's
one.
G
The other thing about the intermediary representation is that it's also much fewer fields, right? Because we're just talking about things that a platform cares about. It also gives a platform — well, it doesn't have to be an intermediary representation; in some cases it could be the lifecycle expressing buildpacks or builders or whatever to the platform that aren't based on project.toml. It's just, you know, ensuring that there's only one interface between the platform and running buildpacks, and that's the platform API. It doesn't have to track multiple interfaces and worry about different versions of them.
C
So this makes sense, but one of the things I find, maybe, alarming is when you mention that it would be a simplified set of fields — I know that some platforms may want to add their specific information into that descriptor, right? So I'm assuming this would still allow for that capability. Like, in the reverse-domain-name concept: if you had something under salesforce, that would still propagate forward into some sort of salesforce structure in the output.
I
Oh sorry, can you hear me now? No? Yeah — I originally proposed two, and then we decided we didn't want to deal with the complexity of versioning two different things, and so we voted to collapse it into one. I'm happy, if we want, to split that back out into two to support this mechanism — or not mechanism, but whatever this is.
F
Mechanism,
I
think,
fits
well
with
two
for
the
same
reasons
that
two
felt
bad
without
this
mechanism.
It's
like,
oh,
not
only
are
we
gonna
make
platforms,
deal
with
api
versions
of
this
file,
but
we're
gonna
make
them
deal
with
the
build
schema
version
as
well.
If
we
have,
I
think
two
fits
better
if
we
want
to
go
this
direction.
G
It
doesn't
matter
that
much
if
it's
one
or
two,
the
important
thing,
I
think
everybody
would
agree
that
the
schema
version,
wherever
it
is
in
project
tomml,
doesn't
specify
salesforce's
schema
version
right
if
you
have
a
com.salesforce
key
and
project
tunnel,
you're
not
going
to
put
things
in
the
build
packs
back
for
a
schema
version
of
project
tamil.
That
say-
and
this
is
how
you
interact
with
salesforce
in
that
schema
version
right
that,
like
I,
don't
think
anybody
had
that
idea
in
their
head.
G
It's that, you know, project.toml is getting kind of weird, because it's trying to be something that it's not — it's much wider scoped than the project, and that's why we have questions like "should there be one or two schema versions?" Because there's one schema version, and it's not really wider scoped than the project; it's really a buildpacks descriptor that we're trying to make into something more.
D
A schema version for the top-level file can have tables that are undefined or arbitrary, and then those subtables can choose to have, or not have, an API version within them, right? I'm not sure I agree that putting in a single API version means that, like, io.google is fixed at whatever we say it is.
G
I
I
don't
think
you
can
assume
that
that
api
version
applies
to
io.google,
com.salesforce
com.vmware
right
like
because
those
things
would
need
to
be
specified
in
the
description
of
the
schema
and
like.
I
think
we
would
all
push
back
on
putting
platform-specific
fields
in
the
description,
company-specific
fields
and
the
description
of
the
schema
at
the
project
level.
So
you
can
put
it
at
the
top
and
say
it's
at
the
top
and
it
applies
to
the
file
but
nobody's
going
to
agree
that
it
should
apply
to
any
of
the
subfields
of
the
file.
C
It
thinks
about
like
things
as
they're
compartmentalized
right
and
that's
what
the
top
level
schema
version
means
to
me.
It's
talking
about
how
how
things
are
then
organized
on
the
subset
right,
where,
if
you
didn't
again,
let's
say
that
we
change
away
from
the
reverse
name
into
something
else.
That
would
be
the
way
that
we
could
trigger
to
say.
Okay,
this
now
overall
structure
is
different,
and
this
is
how
it.
C
Yes — so this is where I want to push back, right? I feel like there's very little value, if we're trying to future-proof this, in saying that the platform could still read the file beforehand. The reason is that it means I still have to look at that top-level schema version to determine whether I'm going to find the data the way I expect it to be, or whether it could be in some next-version format that I'm not aware of.
C
So
what
I
would
expect
and
propose
is
that,
no
matter
what
the
life
cycle
is
provided
the
project
descriptor
and
any
additional
like
again
those
comp,
you
know
those
small
sections
or
subsections.
Those
could
be
provided
in
that
platform,
api
output
as
necessary,
and
that
is
the
format
that
we
read.
So
if
the
nested
formats
were
to
be
placed
differently
right,
I
would
still
be
able
to
find
them.
However,
I
may
need
them,
but
only
have
to
read
the
file
ones
in
one
format.
D
Wait — the way you phrased that is odd, because this is a discussion that went through an RFC, was introduced to the spec, and was merged, right? But you're framing it like "if we do this" — I feel like we decided this. So — we're running out of time, so I want to try and get some kind of resolution.
D
So if we're going to do that, then I think the only question is: are we trying to do it before 0.2? And then that would be a separate discussion. I think we should do that, or something like it, regardless, but we need to come to a decision on whether it's before 0.2, so we can get it out. Or, you know, backing up a further step: are we uncertain about what we're shipping in 0.2? Those are the really important things to us, I think.
F
That's the thing — it's not a huge deal. I don't think a lot of people are using it. Getting this right in the long term matters to me, but I don't think it's a huge deal for 0.2. I just want to make sure — to Emily's point about the long term — that we're not shipping stuff in 0.2 that we're going to backpedal on in, like, a 0.3. So: as long as we can do the IR and the other things without having to break the world in 0.2.
C
All right, we are over time — three minutes over. I did add an action item; I'll probably add a few more and assign some people to them. Feel free to say "I don't want to do this" and kick back on that — but yeah, otherwise, have an awesome day.