From YouTube: Office Hours: 2021-05-20
A
Whoops, I see that we have some newish faces. Do we mind doing an introduction for some people?
B
Definitely, thank you. Thank you, Javier. So, first of all, I would like to introduce myself. I am Sharma, and I am a junior computer science and engineering student at JSS Academy of Technical Education, Noida, India. Currently I am interning at CNCF Kubernetes through the LFX Mentorship, so my spring mentorship is about to end. I previously also contributed as a GSoC student last year to RTMS, and this year I'm glad that the community has accepted my application; I will be contributing as a GSoC student to Buildpacks as well.
B
So I just wanted to express my gratitude and thanks to the entire community, to everyone, for such a warm onboarding, and for always answering my queries so well, whether it is Joe, Anthony, or anyone from the community. Thank you so much.
B
My project is basically building GitHub Actions for automating the staging tests. I'm sharing the repository, just give me a moment.
C
He has mentioned all the names; even before being a part of the community, I was well aware of everyone, so I'm thankful to him that way. I have also interacted with Javier over Slack, and I had also applied to the LFX mentorship that he is a mentor at. And yeah, I'm currently a final-year engineering student, and I'm also interning at Microsoft as a support engineer.
C
Apart from that, I have worked with cloud and also low-code platforms, and I always wanted to get started with open source. I think this was the start, or the push, that I was trying to get, and that push has been given to me by Mr. Javier. So yeah, I'm really grateful and thankful that I could make it to today's call. I've been wanting to attend the call for a really long time, but since it gets a little bit late here, I'm not always able to make it. But yeah.
A
Awesome, well, welcome, and hopefully, you know, the good stuff you hear, at least, is something that we could prove out. Yeah.
A
All right, let's see. So, without further ado, I guess: is the first topic done, or is there more you'd like to say?
B
It's done, thank you so much again. I've shared the project, and I'm sharing the proposal that I proposed to solve that project as well; just give me a moment. Later on, after this meeting or sometime, I would definitely like other community members to review it or suggest any opinions about the project, if they want to. All suggestions are welcome. Thank you so much; the first agenda item is done. Thank you so much. Awesome.
A
And, to kind of tie this into the whole mentorship side of things, I did hear that we might want to do, like, a quick little virtual social at some point with all the mentees and mentors. So hopefully that's something that we could get done here soon and, you know, get to know each other a little bit more.
A
All right, let's see, moving on. We've got a couple items here, it looks like. BOM related; I'm gonna assume this is Sam. Is that right? Okay.
D
I just wanted to ask people, if they have time, to look at both of those discussions. The BOM one, I believe...
D
Someone from VMware was supposed to make an RFC; I don't know them, though. And the other one was the detect API stuff that we've talked about a bunch of times.
A
Is it Nisha we're discussing? Okay, interesting. I could contact her, but I'm not sure if anybody else was following up on that.
A
Okay, that's interesting. All right, so yeah, she recommended SPDX, right? And then, I don't know, I feel like we've gone somewhat in circles about deciding which one it is. Like, has there been a more definitive answer, other than just random recommendations?
D
It also has better integrations and tools out of the box that generate BOMs for you for specific languages. It already supports Python and Go, Ruby, PHP, like most of the main languages that both Paketo and Heroku target.
D
So that was, like, one of the driving factors for me to consider it, because it would be easy to integrate with the popular languages that existing buildpacks support. It also seems there's a tool, an open source scanner, that ties this SBOM in with CVEs; they maintain some open source database that can scan these SBOMs generated by these tools and then flag potential CVEs for you.
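As an aside for readers, the kind of document these generators emit looks roughly like this. A minimal sketch: the top-level field names (`bomFormat`, `specVersion`, `components`) follow the CycloneDX JSON layout, but the component entry itself is invented for illustration, not output from any real tool.

```python
import json

# Hand-built example of a minimal CycloneDX-style JSON BOM.
# The dependency listed is hypothetical; real BOMs come from
# per-language generators rather than being written by hand.
bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.2",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-lib",  # hypothetical dependency
            "version": "2.3.1",
            "purl": "pkg:golang/example.com/example-lib@2.3.1",
        },
    ],
}

serialized = json.dumps(bom, indent=2)

# A CVE scanner would walk `components` and match each name/version
# (or purl) against its vulnerability database.
names = [c["name"] for c in bom["components"]]
print(names)  # → ['example-lib']
```

A converter to SPDX would map these same component entries onto SPDX package fields, which is why round-tripping between the two formats is plausible.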
F
But I don't think that we should use that to make the decision for what's best for the project, especially if it's easy to convert. But I do think being able to convert to SPDX is something I would see as a requirement.
A
To satisfy that, yeah. All right, so then I guess that leads me to my recommendation. Well, maybe I should ask first: is there already an RFC for this? Because I see the discussion, but I don't think there's an RFC for this, right? So I guess my recommendation would be: should we just start with an RFC that proposes CycloneDX, and kind of specifies a lot of the stuff that you've mentioned, Sam, about integration, and also the transformation or translation capabilities to SPDX to satisfy that need?
A
But then we could talk about its benefits and values as to why we would want to choose it, and I think that would give us a lot more concrete data to actually make that decision final.
G
Yes, a long time ago she said that she might be interested in opening an RFC about it. She's a maintainer of Tern, which is, like, a tool for generating SPDX-formatted things, if that makes sense.
A
Yeah, and I don't know that she has a driving force yet, right, like as to why she would want it; I think some internal stuff, but...
A
So, I don't know. I mean, personally, I don't think I could take it on my plate, although I do find it very interesting. So, are there any volunteers for owning it?
D
I'll try to make an RFC, if I can, by next week. If I can't, we should find someone else to drive this forward.
A
Something I'm not opposed to as well is working through something like HackMD, as a more collaborative, fluid way of piecing stuff together, and I could probably tack some information onto a document like that, versus, you know, a branch and PR and stuff like that. Okay, so, I'm just giving you my perception: if you create a HackMD and just share it, I might add a couple things here and there.
A
Cool, all right. Anything else on that topic? I think there were two lumped together.
A
I know at least for these two, they're discussions. I...
A
I do wonder, if the discussions end up in a place where we do want an RFC to kind of be the outcome, if an issue would be the next appropriate step, right? Like, not a full commitment to producing the RFC, but at least a log of, like, "this RFC would be nice to have."
A
Yeah, I mean, if somebody comes by and says, "why isn't, you know, the BOM specked out?", it's "oh well, we thought about it; here's the issue; do you want to build it out?" So yeah, that sounds like a way to go about it, Sam. Like, these discussions don't seem like there is any real opposition; they just need a little bit more driving force, and that might just be the next step.
D
This was also something I wanted to bring up as something that's been gaining traction. Recently we got a couple of questions about it during KubeCon; kpack recently had an RFC about adding cosign support, and the author, Dan Lorenc, has reached out to us on pack's issues asking if he can provide help integrating cosign with pack. He's also reached out on the kpack side to help with integrations or any advice.
D
...to do the actual signing, so it should actually be fairly simple to integrate. It's not like cosign and Notary are in competition with each other; they're complementary. And I think we should invite Dan to one of the office hours to explain that part, because that's something that I've had a hard time figuring out: where you would want to use each one, and how they are actually complementary. From what I can see, it may have to do with key rotations, like the keys that you use to sign your certificates.
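For reference, the basic cosign flow under discussion looks roughly like this. A sketch assuming a simple keypair workflow, with `registry.example.com/myapp:latest` standing in for whatever image a build produces; exact flag spellings vary a bit across cosign versions.

```sh
# Generate a signing keypair (writes cosign.key / cosign.pub).
cosign generate-key-pair

# Sign an already-pushed image; the signature is stored in the
# registry next to the image, so no registry spec change is needed.
cosign sign --key cosign.key registry.example.com/myapp:latest

# Anyone holding the public key can verify later.
cosign verify --key cosign.pub registry.example.com/myapp:latest
```

Because the signature lives alongside the image in an ordinary registry, a platform can bolt this on after export without the lifecycle knowing about it, which is the point being made below.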
E
Yes, I guess I wanted to look at you, Sam, about this. I was hoping to know if maybe at Bloomberg you had seen, sort of, you know, requests for image signing directly, and, like, a preference for cosign specifically, or any other thing. And the reason I'm asking is because, you know, I know the Notary investigation, or the investigation into the implementation itself, was just...
A
Like, we don't have support for it, but it's something that can be tacked on to the workflow, right? Like, there's not necessarily any need for buildpacks to have any real specific integration. But yeah, I think cosign works slightly differently, in that the lifecycle might have to produce both images, right? Yeah.
D
As for Bloomberg, I don't want to speak on behalf of the company, so yeah.
F
It
could
always
be
added
in
at
a
different
layer.
I
have
to
think
about
exactly
where
is
the
right
place
in
the
process
to
hook
this
in,
but
I
think
it's
worth
investigating
and
it's
something
that
a
lot
of
people
would
want.
A
Cool, I'd be down for that. I know I talked to Steven about cosign after a demo that we saw, because, you know, Notary v2 was something that we talked about, I think, last year or so, and we investigated, and again we came to the conclusion that it could be done at a different layer, I guess, to use Emily's remark. So anyways, cosign does seem very lucrative for solving that problem, and very lightweight and simple, so I think it has a lot of value to it.
G
I think part of the framing for this, or the way I see this, is: you know, not very many people like Notary v1, so people are looking at Notary v2 and looking at cosign. Notary v2 looks really great in some ways and solves all the problems, and kind of also defines a bill-of-materials format (not a format, but a way of specifying that in the registry, at least). But it seems like it might take a really long time to land, because it's a bunch of changes directly to the OCI spec, and so I think a lot of the interest is in cosign.
G
I think people want a solution much sooner, and I think cosign is trying to provide that solution and has a lot of community support. So, you know, we can support Notary v2 in the future when it's ready, and we can support cosign right now, and I think that'll probably make everybody happy. So I don't expect a lot of... like, I kind of only see one path anyways.
F
Notary v2 is defining a custom artifact format, right, which then needs to get approved. But it's not just that the spec needs to get approved; you know, they're trying to work with every registry under the sun, so all the registries then need to implement support for the new format. It's gonna be a really long time.
G
I think it depends on the time frame you're thinking about. You know, right now there's Notary v1, and Notary v1 has a lot of complexity and a lot of restrictions that make it unusable in some use cases, right? So people are, you know, there's just, there's like...
G
...a lot of agreement that we need something else, right? Notary v2 looks great, but it requires changes to so many things that, you know...
G
I hope it lands and everybody's really happy in the future, when it gets adopted everywhere and they can switch to it. But it's going to take a long time, and in the short term there's only one alternative being proposed, and it seemingly does have a lot of community support behind it, which is cosign. So, like, could some people decide that there's another short-term solution?
G
Maybe. Could people put down Notary v2 in the long run and say, actually, there's another long-term solution that people like? You know, that seems possible, because there are definitely decisions that Notary v2 has made where people disagree.
A
Oh, you have to, like, deal with this one method. And I would think that if there are different signing mechanisms, right, being able to configure and choose what makes the most sense for your problem or solution, then you should be able to choose that, right? And so that's more like looking forward. But again, cosign is kind of taking off, so it makes the most sense to incorporate it, if it makes sense on the technical side.
F
I do wonder if this makes it better as a platform concern rather than a lifecycle concern, so that, as things evolve, we don't always have to keep pushing support for a variety of things into the lifecycle. The lifecycle can do its one job, and then platforms can have more opinionated integrations with different types of signing.
D
This also kind of brings back the point I raised last time, which is, like, lifecycle extensions: things you want to try out, but not necessarily maintain in the long term. It would be nice to have that as, like, an add-on, so that if you have four different platforms, all of them don't have to implement the same thing; you could tack this extension onto the lifecycle, and then everyone has signing support.
A
I think that would have been really nice for something like stack buildpacks, right? As opposed to, like, an experimental API, which I think is maybe what's supported right now, an extensions interface or contract might have been better suited there.
F
Yeah, we'd weigh all those options in depth, right. I think what makes it harder for stack buildpacks, versus something like signing or a prepare phase, is that it's so deeply embedded in all of the other logic, and requires a bunch of changes to the core logic to make things work (like, we're going to reorder the phases and do a bunch of other things), which I think makes it a bit inappropriate for an extension. It's not just one extra thing that's happening.
A
Sorry, I was just going to say, I was going to propose the alternative, right: if, like, the lifecycle becomes so, quote-unquote, basic that its concerns are more or less very tightly bound to just executing the phases, and not so much providing support for, you know, X integration...
A
...then we could provide, like, platform-specific tooling, right? Instead of baking everything into pack, just providing small bits and pieces that pack itself consumes, but that you could also integrate into other platforms like Tekton, kpack, and so on.
G
I think we should draw a distinction. Like, there's a need to keep the lifecycle modular and small, and, you know, just doing one thing relatively well, right? You've even broken the lifecycle down into parts that, you know, are well defined. But I think that makes us try to make a decision between "does this feature go in pack or the lifecycle?", when the real answer is, you know, the lifecycle can have other packages that aren't core to it. We can make other repos that, you know, platforms can import.
G
I think you have a wide range of options between those things. It might be interesting to figure out whether those are things for the lifecycle sub-team or the implementation sub-team versus something else, when they come up, but it seems like these will be good discussions.
A
I know at least in the pack repo, Jesse's brought up the idea of, like, being able to download buildpacks, right? Like, that functionality seems to be necessary in a lot of different platforms, and so I think we're working through that, as you know, through experimentation, to figure out exactly how to externalize that functionality into something that others can consume. And that kind of ties into the package refactor that we'll be doing in pack, which hopefully at some point will spit out, you know, functionality in some way or form or fashion.
F
I think it also keeps the API compatibility stuff simpler if some of these things, like, you know, transforming project.toml fields into inputs to the lifecycle phases, are a platform concern. Because then, as a platform, you can say, "I support these versions of the project descriptor API," and, if you as a platform would like to support these versions, now you can use our setup image. But the lifecycle itself doesn't need to say, "I support these project descriptors and this buildpack API, this platform API, and this distribution API." It would be nicer to not try to meet all that in one place, probably, especially if the platform is going to have to interact with some of those components.
D
What I'd like is, like, we already have the lifecycle as separate binaries; what if we could provide more binaries where platforms could just insert them between phases? So, let's say you provide a binary that does project descriptor parsing and modifications to the app directory based on that, or other things; that could just be a separate binary, or a library.
D
...in Go, I guess, in this case, because that's what the rest of the project is implemented in. And that way, platforms that don't want to consume it as a library can just put that binary in front of the other lifecycle binaries and run that phase. So, for example, this cosign thing could be an extension to the export phase, which you just run after the export binary. And then we could define the API for these extensions; like how the lifecycle API has certain inputs and stuff defined, we could have...
A
...you know, a binary that executes after the fact. If, again, very hypothetically, we export to OCI layout, right, and then all of a sudden we have this additional binary that is able to take this OCI layout, sign it, and put it into its final registry; that doesn't necessitate a spec change, right? That's a platform implementation concern, and pack could do it, or any other platform could do something similar.
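The shape of that idea, as a rough sketch: a platform runs the lifecycle phases in order and slots an unspecced extension (here, a hypothetical signing hook after export) between them. Every name below is invented for illustration; this is not the actual lifecycle API.

```python
from typing import Callable, Dict, List

# Simplified stand-ins for the lifecycle binaries a platform invokes.
CORE_PHASES: List[str] = ["detect", "analyze", "restore", "build", "export"]

def run_pipeline(phases: List[str],
                 extensions: Dict[str, Callable[[dict], None]],
                 ctx: dict) -> List[str]:
    """Run each phase, then any extension registered after that phase.

    The core phases never need to know the extension exists; the
    platform just inserts an extra step between binaries.
    """
    executed = []
    for phase in phases:
        executed.append(phase)  # stand-in for exec'ing the phase binary
        hook = extensions.get(phase)
        if hook is not None:
            hook(ctx)
            executed.append(phase + "+ext")
    return executed

# Hypothetical signing hook: consumes the exported OCI layout path
# (a real one would shell out to a signer and push to the registry).
def sign_image(ctx: dict) -> None:
    ctx["signed"] = ctx["oci_layout"]

ctx = {"oci_layout": "/layers/out.oci"}
order = run_pipeline(CORE_PHASES, {"export": sign_image}, ctx)
print(order)
```

The design point this illustrates is that the hook only touches the inputs and outputs already flowing between phases, which is why it can live outside the core binaries.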
D
I think the reason I wanted it as an extension spec is that, let's say the platform API changes, and the inputs and outputs to the lifecycle now change; I want there to be something that says, okay, this is compatible with this API, and if you want to support future APIs, some team can add support for that by taking a look at that spec or something. I don't know if that makes sense, like...
D
If some other organization wants to implement their own prepare phase, they can look at "here's what the lifecycle expects the prepare extension to look like, here's my implementation of it," if they don't want to implement it in Go or whatever. I don't know if that's something we want to go through the effort of doing, or whether it's just, like, some random tooling we provide, which is not specked out.
A
There's the prepare phase that you'd run before, and I think that's another example, right, Sam?
D
For example, this resolve thing that I was talking about: this was a phase, which could be experimental, that, you could say, takes the output from the detect phase and runs this resolve binary from the lifecycle, which in turn runs the resolve binaries in the buildpacks and modifies the output, so that the next phase, like the analyze phase, has the right input for the rest of the execution.
A
But is there, I guess, a clear distinction to answer that, other than just, like, we don't want to take on the burden of specking these out, or, like, there's value in just making utilities? Like, I guess: why are the project descriptor or the builder extension specs, right, but these other utilities, like the prepare phase or signing phase, wouldn't be extension specs?
F
You really need, as a version of the platform, to know that if you're pulling in a newer lifecycle, and you set what version of the platform API you want, you can always expect your inputs and outputs in the same format. When I'm thinking about some of these extensions, maybe this resolve phase would be the exception to that, but the two we were talking about before, the prepare and the signing, I could see as only interacting with the platform API, or only interacting with the platform. So you can always be...
F
You can just ship a version of that with your platform that matches your platform's expectation and, like, evolve it on your own. You don't need this guarantee that you can always pull in newer ones so that you can run newer buildpacks. And then we can give people ones, and they can use our interface if they want to, but I can think of... there's a wider variety of things people are going to want to do, like maybe they've got a different signing mechanism.
D
Yes, like, things like prepare and sign are exceptions because they come at the beginning or the end. As soon as you insert something in the middle, you would need a spec or an API that it conforms to; otherwise...
F
I feel like that's only true if it's interacting with buildpacks, because each phase in the lifecycle has its own API. So, you know, maybe detect makes outputs that it expects analyze to take right now, but if you go in and amend those outputs and change their values, as long as it's still in the same format, analyze can still take it.
D
That is one example. The other example I can think of is that cleanup phase that we've talked about a lot of times, that runs after the buildpacks; that could be inserted after the build phase and before export. So, like, I can think of other examples where people want something, and we're not 100% sure about it, that we don't want to bloat the lifecycle with.
F
Actually, then you run the risk of creating more situations where things can't be interoperable, right? And then, like, do you version these extensions with the buildpack API, you know? Do we have to support what the extension looks like for a variety of APIs? I think it starts to get complicated. In some ways, I almost feel like we have too many APIs, and the compatibility is getting really hard now that we want to make bigger changes. There are some that I wish we could collapse.
F
Could we do the same thing with fewer of them? Because right now we have these overlaps, where the same assumptions basically need to be honored in specs that are supposed to be orthogonal, so it makes it hard to change one without changing the other. And it worries me that if we introduce what is supposed to be, like, an orthogonal concept of an extension, then we're just gonna create a world of pain for ourselves.
F
That's why binaries that we ship, that you can run before or after, that aren't specked, feel like a great lightweight way to test things out, compared to things that are more tightly integrated that you can't just tack on at the end or the beginning.
D
So most of the Paketo buildpacks which are providing some sort of a binary or a tarball have a structure where they say: this is the version of the binary, this is the source, this is the, like, distribution that I'm gonna install, and a couple of other things like that. So that's, like, distributing assets. The other thing that they specify is the stack IDs. And the last thing, which they currently don't, but which I think would be a useful addition...
D
Is
that,
along
with
these
different,
like
tarbles,
if
you
could
specify
environment
variables
which
can
be
overwritten
that,
like,
for
example,
things
like
setting
the
pip
index
url
or
like
your
go
proxy,
so
you.
F
It's not like passing the environment variable on to future things; it's more like, "here's an environment variable that I, as a buildpack, accept, but if you don't pass it, here's the default value."
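A sketch of what that could look like as buildpack.toml metadata. Every key and value below is hypothetical (nothing like this is being quoted from the spec); it just illustrates declaring accepted variables with operator-overridable defaults.

```toml
# Hypothetical metadata: variables the buildpack accepts, with
# defaults an operator could override when repackaging or mirroring.
[[metadata.configuration]]
name = "PIP_INDEX_URL"
default = "https://pypi.org/simple"
description = "index used to resolve Python dependencies"

[[metadata.configuration]]
name = "GOPROXY"
default = "https://proxy.golang.org"
description = "module proxy used during Go builds"
```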
D
Yeah, I mean, even having that ability would be great. Because then, as an operator, rather than having the app developer trying to figure out, "this is the environment variable I need to set for my entire organization, this is the tarball that I'm using in my organization, or this is the stack I'm using in my organization," and rather than, like, using something like bindings to configure all of that, you could repackage the original buildpack so that the functionality remains the same, but you essentially replace these things with their equivalents or mirrors.
D
So
to
say
you,
you
have
a
build
pack
that
you
can
essentially
mirror
and,
at
the
same
same
time,
reuse
without
traditional
configuration
from
the
app
developer
site.
So
it's
like
a
one
time
change
on
the
operator
and
the
app
developer
doesn't
have
to
do
much.
The
other
thing
I
want
to
get
at
is
like
this
in
general
seems
like
a
good
idea
for
buildbacks.
D
Like, I know we have discussed having more metadata in the buildpack.toml that decides things like deprecation dates, which also makes it easier to extract BOMs from the buildpack. So I know, like, the Paketo ones, for example, take the dependency and can output that as a BOM, as well as being able to say, "here are the environment variables that configure this specific buildpack."
D
We can provide a better user experience to the app developers, because things that used to be opaque (because they can't actually inspect the buildpack normally, since you don't know what the structure is) now become transparent to the user, because they can see: okay, here are the environment variables, here are the default values, here is where it's getting its distribution from. It's not some random website...
D
Some
sketchy
website
on
the
infinite,
but
but
something
that
I
trust
or
like
I
think,
just
having
this
level
of
configuration
and
transparency
would
make
it
a
better
experience
for
both
operators
and
authors.
Sorry,
app
developers.
F
I
just
wonder
if
there's
a
better
a
better
answer
than
editing
billbag
tomml
like
to
take
them
one
by
one,
the
stack
id
thing.
I
definitely
understand
why
you
want
to
do
that.
F
There
are
times
I
want
to
do
that,
but
I
think
it's
because
we've
modeled
stack
ids
wrong,
like
the
vilpex
should
be
able
to
describe
what
what
images
it
can
run
on
in
a
way
that
doesn't
make
you
want
to
do
this
for
the
asset
locations
like
have
you
read
to
the
rfc
about
asset
packages,
I'm
wondering
if
that
actually
does
already
solve
that
use
case.
F
It's
I'll
provide
the
link,
it's
only
for
sort
of
like
vendored
assets,
so
you
can.
You
could
vendor
in
assets
from
different
locations
into
like
a
offline
version
of
the
build
pack
using
this
utility.
F
Yes, and that's one of the big motivations: you'd be pulling assets locally and not spending time downloading them, so you can relocate it to an air-gapped environment or something like that, if you want to. But the way...
D
There are a couple of issues with that. One: you package all of this into the builder, so whoever is using it has to download all of these assets, even if they're not using that specific language. So, like, let's say you have the entire slew of the Paketo Java buildpacks and you're not even using Java.
D
But
let's
say
you
want
to
provide
a
consistent,
build
experience
where
you
have
one
builder
that
can
be
used
for
all
of
these
apps.
You
now
have
this
huge
builder
that
each
user
has
to
download,
even
though
they're
not
using
all
these
other
languages.
That
was
why
I
was
not
sold
on
the
idea
of
including
everything
in
the
buildback,
because.
F
So,
yes,
I
think,
there's
still
a
case
for
getting
the
same
asset
from
a
different
url,
but
I
worry
that,
if
you
can
just
edit
like,
could
that
be
solved
by
some
sort
of
better
convention
for
configuring.
These
build
packs,
because
I
worry,
if
you
could
just
edit
anything
in
the
build
pack
table
like
there's,
no
guarantee
that
that
bill
pack
is
gonna
run
correctly
afterwards,
like
you,
don't
know
how
much
those
assumptions
are
baked
into
the
build
pack
and
it's
not
guaranteed
to
be
a
a
thing
that
is
supported
right.
F
Then again, those things can be overly paternalistic sometimes. Like, if it breaks, well, tough luck: you edited it; don't edit it wrong next time.
D
I
mean
as
an
operator,
I
believe
it
should
like,
if
you're
making,
if
you're
going
through
the
effort
of
doing
all
of
this,
I
think
I
would
imagine
the
least
an
operator
could
do
is
verify
that
the
bill
buys
their
editing
and
redistributing
to
their
users
work.
I
don't
think
we
should
be
stopping
them
from
doing
it.
D
Being
overly
cautious
of
of
some
use
cases
where
users
who
don't
know
it
could
potentially
use
it
incorrectly,
I
think
that's
a
far-fetched
idea
because,
as
it
is,
people
find
buildbacks
complex.
If
someone
has
gone
through
the
effort
of
doing
all
of
this,
how
it
is
you
know
at
least
something
like
what
they're
doing
or
they
may
have
tested
it,
so
that
that's
my
assumption
that
if
they're
already
doing
it
regardless
and
it
like-
let's
say
the
bindings
use
case-
that
they
care
to
currently
provide
it's.
D
What's
stopping
you
from
providing
incorrect
values
for
those
findings,
I
guess
in
that
case
it
falls
back
which
ones.
D
So that was my other use case. Like, currently it assumes those Cloud Foundry tarballs, and it calculates the digest off of them. Maybe they have some additional Cloud Foundry data which the buildpack doesn't need (it's just, like, metadata generated by the Cloud Foundry build process), and my digest isn't the same, but I know it's the same thing. Yeah.
F
Because
yeah
there's
a
lot
to
think
about
here
and
I'm
late
for
that
next
thing,
I'd
be
worried
about
sort
of
like
I
think
you
need
to
at
least
be
able
to
explicitly
set
the
digest.
In
that
case,
you
can't
just
download
whatever
right
and
assume
it's
the
thing
you
want.