From YouTube: Working Group: 2021-04-08
B
Cool. And also, to answer your previous questions: I might have some more of these. If I look, this is an example of a proposal from last year, which also happens to be related to the buildpack registry, but this is a good format to use for drafting the proposal. I think we're all, I in particular, but any of us, happy to help guide you on that, like the example I sent.
A
I have sent you the request, and I've sent the issue on the Zoom chat channel. So the issue, if we look at the issue, is that the verify-namespace-owner action should accept a block list. My question is just this: what do we mean by block list, an array of names? Like, there are namespaces in Linux, but I guess the namespace that we are trying to figure out here is different.
B
It's part of, like, a component of the ID. Maybe I'll state this here and then I'll try to capture it in the issue, since those are good questions. The namespace is part of the specification for the buildpack registry, and might actually go beyond that. So we're trying to parse that out of the ID, it's, you know, a component in the ID, and then compare it to a block list, which might include things like cncf or buildpacks or swear words or things like that, and then restrict people from creating those namespaces.
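A rough sketch of the check B describes, in Python for illustration (the names and blocklist contents here are hypothetical, not the actual registry implementation):

```python
# Hypothetical sketch of the namespace/blocklist check described above.
# Buildpack IDs look like "<namespace>/<name>", e.g. "cncf/my-buildpack".

BLOCKLIST = {"cncf", "buildpacks"}  # plus other reserved or offensive words


def namespace_of(buildpack_id: str) -> str:
    """Parse the namespace component out of a buildpack ID."""
    namespace, _, name = buildpack_id.partition("/")
    if not name:
        raise ValueError(f"malformed buildpack id: {buildpack_id!r}")
    return namespace


def is_allowed(buildpack_id: str) -> bool:
    """Reject registration when the namespace is on the blocklist."""
    return namespace_of(buildpack_id).lower() not in BLOCKLIST
```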
B
So I'll try to capture some more of that detail in this issue, just so that you can refer back to it if you forget. Does that work?
A
I'm on the Slack and I've introduced myself in the general channel. Okay, I'm sorry for the long introduction there; I have explained what I have done till now. I was a previous-year Google Summer of Code scholar as well: I contributed to the RTEMS project last year and successfully built their continuous integration solution with EPICS, another open source community. And right now I'm a Linux Foundation mentee with CNCF.
A
And right now I'm contributing to Kubernetes, to the Kubernetes working group for Policy Report, so I'm creating their Policy Report custom resource definitions right now. So that is my experience, and now I am planning to contribute to this community as well. I have started working on this block list issue, so I hope that with today's information I can work on the PR for that and the proposal as well, because time is short, so I will share the first draft of the proposal with the community by tomorrow.
E
Oh, this was something I put in the last office hours, but we didn't get to it. So there's currently a label called io.buildpacks.project.metadata, which is supposed to be filled with, if it's a git repository, the commit of the app directory or whatever, or if it's a tarball, I'm guessing the SHA for that tarball. But because pack currently runs everything locally, I guess that's not set anywhere, or I guess it doesn't detect if it's a git repository and set that label.
E
So the question was whether we want to add support for something apart from a path, whether we want to support some remote fetching options, or is that out of scope for pack? Like, instead of a path you can provide a git URL, and that's just a git URL, or a URL that's just a URL to some tarball on S3 or whatever, and that would potentially resolve this. Even if it's a local path, if you provide a git URL with a local directory it still works.
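The dispatch E is suggesting could look roughly like this (a minimal sketch with made-up heuristics, not pack's actual logic):

```python
# Hypothetical sketch: classify the app-source argument pack receives as
# a local path, a git URL, or a plain URL to a tarball (e.g. on S3).
from urllib.parse import urlparse


def classify_source(source: str) -> str:
    if source.endswith(".git") or source.startswith("git@"):
        return "git"
    scheme = urlparse(source).scheme
    if scheme in ("http", "https"):
        return "tarball-url"
    if scheme == "":
        return "local-path"
    return "unknown"
```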
F
A related issue, right, where really the ask was more that the OCI-specific label was generated with that information, and from there, you know, the discussion started around certain things that we should already be doing. And I believe there's a separate issue that I failed to find right now, but there's an issue where, even from Java binaries, we should be able to extract where the source is coming from, based on some of the information that's available in the metadata within the jar contents.
F
All that being said, I think it's something we desire to do; it's just never reached, you know, the top of anybody's radar to get done. So that's one thing in regards to applying the source to the metadata label.
F
The idea of fetching remote contents, I think, has been something that we might have touched on in conversation at some point, but nothing that has ever been driven out completely. Personally, I don't think I would be opposed to it, but it would require some additional thought for authentication, if need be, and what sort of protocols would be accepted, right. But that's again kind of a bigger conversation.
E
Yeah, I guess that's why I was thinking it could be more of a plug-in thing where, if you want to provide a new kind of fetcher for a particular URI type, you could just plug it in, for git, or for S3 directly instead of HTTP, or something like that.
E
I don't know if you want all of those things in pack directly, or whether it should just be a plug-in system that can accept different remote fetchers that users can just plug in. I can't think of a nice way to design it immediately, but that's sort of what I was thinking: this may be a nice way to enhance pack with not just working on local directories, but also remote content.
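The plug-in idea sketched here could be as simple as a scheme-to-fetcher registry; this is an illustrative sketch only, not a proposed pack API:

```python
# Hypothetical sketch of the plug-in fetcher idea: map URI schemes to
# fetcher callables, so a git or s3 fetcher could be registered without
# baking every protocol into pack itself.
from typing import Callable, Dict
from urllib.parse import urlparse

FETCHERS: Dict[str, Callable[[str, str], None]] = {}


def register_fetcher(scheme: str, fetcher: Callable[[str, str], None]) -> None:
    FETCHERS[scheme] = fetcher


def fetch(uri: str, dest: str) -> None:
    scheme = urlparse(uri).scheme or "file"
    fetcher = FETCHERS.get(scheme)
    if fetcher is None:
        raise ValueError(f"no fetcher registered for scheme {scheme!r}")
    fetcher(uri, dest)
```

A user-supplied plug-in would then be, say, `register_fetcher("s3", my_s3_fetcher)`.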
F
I think the only statement I want to make is that there's a certain level of overhead and complexity to a plug-in system, and unless we foresee the need for this to really be something that we want to enable for other, you know, sorts of methods or formats of bringing in source code, I don't know that it would be worthwhile at this point in time. So I would say creating a proposal that embeds this into pack would probably be the lowest-hanging fruit to just resolve it.
F
If I remember correctly, when I was playing with s2i, it allows you to pass in a remote repository, and it was a really nice ease-of-use feature, right. It made me, you know, feel good, because I didn't have to actually clone it or go through those manual processes. So I think it's beneficial in that sense, but I get where you're coming from, yeah.
E
The other possible use case is that now, if you have a pack container image, you could just run that container image with the git URL, and now you have a standalone way of building images without injecting a git binary into that container image and then doing git clone and whatever. So I think it's useful, but yeah, you could achieve this using other methods.
E
The main reason was, like, twofold. I guess this whole issue has two parts: one is the remote source fetching thing, and the other is the setting of that label metadata.
E
The latter is actually difficult to achieve using a buildpack, because if you're doing it in some subdirectory which does not have .git in it, it never gets mounted onto the workspace, and you can never detect that. So it would have to be something at the platform level that actually sets a label like that.
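A platform-side step like the one E describes could look roughly like this (the JSON shape is illustrative, not the exact schema from the spec; only the label key comes from the discussion above):

```python
# Hypothetical sketch: since .git may never be mounted into the build
# workspace, the platform itself records the commit in the
# io.buildpacks.project.metadata label.
import json
import subprocess


def head_commit(repo_dir: str) -> str:
    """Read the current commit from a local git checkout."""
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout.strip()


def project_metadata(commit: str) -> str:
    # Shape is illustrative only.
    return json.dumps({"source": {"type": "git", "version": {"commit": commit}}})


def labels_for(commit: str) -> dict:
    return {"io.buildpacks.project.metadata": project_metadata(commit)}
```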
F
Yeah, I think you speak to a very specific problem that even pack itself might run into, right, where you can run pack from anywhere within your system and you're passing a path to execute, and that path might be a subdirectory inside of a git repository.
F
So I don't know that there's a very clear way of resolving that specifically, and I don't know if we're gonna make, you know, a hard attempt at doing so. To be honest, I know that the project.toml does have this metadata, so I think looking there first would be better than looking elsewhere. But again, that's another challenge, where we think about the different precedence of where we want to look for this metadata to insert it into the label.
D
You
maybe
don't
have
to
handle
all
the
edge
cases
where
it's
a
sub
der
right
like
then,
you
could
say,
put
your
project
tunnel
in
the
router
and
pass
the
path
to
that
which
will
then
forward
you
down
to
a
different
path.
But
the
reason
I
like,
even
if
we're
not
great
at
handling
all
the
edge
cases
on
the
first
pass,
is
that
not
every
project
has
a
project
demo.
In
fact,
lots
of
very
simple
ones:
don't,
and
it
would
still
be
nice
to
get
this
functionality.
F
I guess, what are the next steps for this? Because I think we've acknowledged that this label exists, right, but we haven't really concretely thought of the implementation from pack's perspective. I feel like we're missing that step somehow.
D
I guess there are two labels, right: there's our project metadata label and then there are the OCI standard labels, and I think the place to start, since we already specced out our project label, is filling that in. And then, if we're talking about the OCI label, there are two ways you could approach that: you could either flesh out a proposal to allow platforms to set arbitrary labels, and then platforms could just set that if they want to, or we could be more opinionated and say, we're not just going to fill out this project metadata label, we're going to make opinions in the lifecycle about when to set some of these standard labels.
F
I was gonna say, just as a quick follow-up to the remote source fetcher: it doesn't seem like anybody's opposed to pack doing such a thing, right, but we'd want to probably, again, think about some of the limitations there. So I'll create an issue for that, and that may require an RFC based on the conversations there.
E
GitHub, yeah. It was also put up by me. So I was trying to see if I could get a private buildpack registry up and running, and I was trying to use the pack registry support for it, but I noticed a couple of issues. One, it seems like it currently relies on GitHub Actions primarily to do some management, like from pack calling the registry to actually creating the right structure in the registry. And apparently there are two modes: there's a GitHub mode and a git mode.
E
I tried the git mode, but I couldn't get it to work properly. I don't know if I did something incorrectly, but in general, are there any plans for some GitHub-agnostic way of managing a buildpack registry, which does not rely on Actions, but is like a makefile or some common binary that you can just run?
B
Yeah, no plans; it's certainly possible. Our expectation was that, for private registries, you would use that git path, but I don't think we're really exercising it much, so it's possible you just ran into a bug or something.
B
So if you can share that in the distribution channel, we'll take a look and figure out whether, I mean, it could be something that we haven't documented well, it could be a bug; I just don't think we're exercising it enough, but that path is what we intended to be the sort of recommended private registry approach. But yeah, setting up the GitHub path is totally possible too, and like, we actually have a private staging registry that we will eventually use to test things. It's kind of a pain to set up and, like, get everything working.
B
The other thing with the git approach is that you don't need to manage namespaces. So when you're doing the git thing, if it's a private registry, you don't have to worry about, like, someone else registering a namespace, so you can just totally bypass that part of it.
G
I actually put it up there, but it is Sam's RFC. I think it's probably that one, and the one below it, that have come up in some past working group meetings too; I put them on the list yesterday. I just figured we have most of the core team folks here, and if no one has other things to chat about, I think Emily and I had some ideas about those too. Hopefully we can try to get those pushed forward also, Sam, if that's okay.
G
You're just opening so many great things; it's hard for us to keep up. I just wanted to make sure that we don't leave you hanging. So 148 is about default command arguments, and Emily and I talked about it. I think it would be good to make the specific change, right: commands have multiple...
G
We had an idea: what if we just drop everything but command and args? Just keep it really simple. Drop the ability to use a shell implicitly; if you want to use a shell, bash -c something, right. You know, just have command and args as the two parameters, get rid of the eval of each of the arguments by the shell, and get rid of profile also.
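The split being proposed here can be illustrated with the two argv shapes involved (a minimal sketch; the function names are made up, and the real lifecycle is Go, not Python):

```python
# Hypothetical sketch of the two launch styles under discussion: the
# implicit-shell form the proposal would drop, and the plain
# command-plus-args ("direct") form it would keep.
from typing import List


def shell_argv(script: str) -> List[str]:
    # Today's shell processes: the lifecycle wraps the command in bash,
    # which evals the arguments.
    return ["/bin/bash", "-c", script]


def direct_argv(command: str, args: List[str]) -> List[str]:
    # Proposed: just command and args, no implicit shell, no eval.
    return [command, *args]
```

Under the proposal, a user who wants a shell asks for one explicitly, e.g. `direct_argv("bash", ["-c", "rake work"])`.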
G
Because we have exec.d now, and profile could still be implemented as a buildpack, because exec.d would let you implement profile if we did want to bring that back. And that way we completely break our dependency on a shell, but with exec.d we have all the same functionality we had before. It's very heretical, it would be a major breaking change, but because we can still implement profile through the buildpack interface via exec.d, it may not be that bad.
G
I would much prefer to do that than to keep the dependency on a shell as part of the core API. I think, just thinking about the whole thing more, direct is very weird, right: buildpacks can all contribute profile scripts, but only some stacks have shells, right. When a buildpack contributes a profile script and the final process says direct, none of the profile scripts run. So we already have this weird expectation that buildpacks...
B
So I don't really care what the mechanism is. I actually really like the idea of built-in buildpacks to add, like, modular behavior; I can actually think of a lot of cases, inline buildpacks or something like that, where we could use that. But it being a default behavior and being built in is what's important to me. And having shell and shell things is, I think, important for the majority of our users.
B
Right, like the shell-less, or direct, is really, I think, targeting, you know, the Go developer and that type of persona that we focused on, but there's a lot of places where we're missing on the broader community, you know, Ruby developers and JavaScript developers who expect and would want those things. So I'm totally in favor of making sure we have the primitives that support the...
B
I don't know how to characterize it other than Go developers, but you know what I mean.
B
At least 70% of people, right. No, no, I mean it's not Go developers, but it is more of a power user or something like that, right, and that totally makes sense. But the average person who's building a buildpack is going to want a shell and is going to want, you know, profile.d or whatever, I don't know.
G
I definitely think we should support that; I just don't think it should be an assumption that the API makes, especially that it's bash specifically, that it's so tied to a very particular shell, right, in the spec. It seems like the functionality could be very pervasive, but modular and disableable if you really didn't want to use it, right: if you had a stack that doesn't have a shell in it, it's not going to do anything anyways.
D
To
have
a
dependency
on
a
show,
because
the
build
pack
says
what
stack
it's
dependent
on
and
knows
characters
into
that
stack
right
now.
Our
life
cycle
has
a
dependency
on
bash
and
it's
not
smart
enough
to
know
when
it's
on
a
stack
where
there's
actually
bash
or
not
and
there's
bad
specific
logic
at
the
life
cycle,
and
that
feels
that
feels
wrong.
G
Most
of
the
containers
we're
building
pro
dot
profile,
doesn't
work
in
them
on
our
side
because
they
don't
have
a
shell
in
that.
You
know
final
container
for,
for
you
know,
kind
of
the
recommended
way
that
we
want
people
to
build
java
apps.
So
I
don't
think
that
goal
of
dot
profile
works
everywhere
is
achievable
because
it's
never
going
to
work
in
a
case
where
your
stack
run.
E
I guess the only other parallel I have to compare this with is Docker, and Docker also comes with both forms of entrypoints: it comes with a shell entrypoint and it comes with a normal, like, direct entrypoint. And I guess the assumptions we're making here also hold true for Docker, where you could have a scratch image which doesn't have a shell, and it still tries to execute as /bin/sh -c.
G
I think, yes, the buildpack can look at the stack ID, and unless it's, like, one of the special any-stack buildpacks, it should totally be able to tell the stack ID is a stack ID that doesn't have a shell at runtime, and, you know, you need to do something else for profile.d. The case that's worse is when that buildpack doesn't contribute the main process type; another buildpack contributes the main process type, and that buildpack sets direct equals...
D
It's complexity, right. Like, as soon as exec.d landed, there were a couple of cases where we needed it in some of these distroless images, for, you know, native images for Java, but at the same time we just moved everything under the sun to exec.d, because then you never have to worry about what's running when and what the compatibility is; it's just simpler. And now that we have a mechanism that works in all cases, should we be encouraging people to use that, or, if it's hard in some ways, creating scaffolding so it's...
G
Easy. If you have a functions service and you're using buildpacks, or you want to build images, that need to start very quickly, having a shell run at launch is expensive also. And so that ability to do a build on any stack that doesn't run a shell at launch is useful in a lot of cases.
G
Well, I think it's more that this one, mixed with the current profile mechanism, with command, where command can be something that's not a list, is really hard; we'd have to resolve a lot of things to get this into the current API. But it'd be very simple; this could almost go in verbatim, minus, you know, taking out some things in the examples, if we, you know, could get consensus on getting rid of, or, like, you know, implementing profile through exec.d and taking the shell-related stuff out of the API.
G
Then this would just make perfect sense, right. And so there could be two RFCs, and that one goes first. We could decide that this idea is too crazy; folks could say, never going to happen, and we could, you know, figure out how to resolve those conflicts in here. But I just wanted to very directly present that idea and see how bad it would be, if that makes sense.
D
We didn't mean to sort of huddle up and then drop a bunch of information verbally without following up with writing stuff down. The main reason we synced up on this stuff is because we felt like we were derailing the working group by arguing with each other, and we should get on the same page, and then we can come back and capture it in words when we have more time.
G
Yeah, if it weren't for exec.d being able to, you know, keep that .profile compatibility at launch, I would have a lot bigger concerns about it. But because we can totally implement that still, right, that's what sold me on it at the end: that we brought in exec.d, and that gives that same functionality, but it makes the shell optional.
C
I do feel like the lack of, not that we need profile, but the lack of users' ability to specify process types is still a big gap in buildpacks today. I think that's the thing people want to do.
G
It's like, we could take this and try to solve a lot of hard problems, but if we can agree on a longer-term, you know, direction, or, like, the kind of different larger picture that's more simple, then we get to bypass all that hard work.
D
But if command was an array of strings, writing a script in an array of strings where you're not prefixing it with bash is weird, especially because I don't think we want to support two types for the same key in our schema, which is what, sort of, Docker ENTRYPOINT is like, where it can be an array or a string.
C
Okay, I guess, to Javier at this point, maybe Anthony was also pushing for this: what are the next steps to help try to move this forward?
B
I'm not sure how to answer that. I think, like, I understand the goal of the spec not having, or, like, being shell-aware or whatever, and if you can rectify that with the default still being that, I can like that. If it's not a backward-breaking thing, or, yeah, it doesn't break, like, existing buildpacks, for example, then yeah, that's definitely something I'm open to, but I'm definitely, like, yeah.
B
Yeah, well, I mean, like, I think probably literally every buildpack I've written contributes a profile.d script, so I'd have to look at them and see. I mean, like, ngrok starts ngrok in the background and sets up the config, because it can't read environment variables otherwise; it does that, you know, at boot.
B
They have the web tty one, which does a lot; that's probably the one I want to look at the most. And granted, these are not widely used, right, I'm probably literally the only person using them, but I don't think that's the point, right, if we want to grow that ecosystem and we want to make this accessible.
G
It's pretty easy to convert, because, like, a profile.d script can pretty easily output its environment afterwards and be converted to exec.d. And we have broken the buildpack API, you know, a couple of times, with the API version bump and migration instructions. Recently it seems like that hasn't been as much of a problem, since we started API versioning, and so, like, before 1.0 is when, you know, if we're going to do it.
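The conversion trick G mentions, running an old profile.d script and turning the environment it produces into exec.d output, boils down to an environment diff. A minimal sketch of that diff step (the real exec.d interface writes TOML to file descriptor 3; here we just build the text):

```python
# Hypothetical sketch: given the environment before and after sourcing a
# profile.d script, emit the changed/added variables in the TOML
# key = "value" form an exec.d executable outputs.
from typing import Dict


def env_diff_to_toml(before: Dict[str, str], after: Dict[str, str]) -> str:
    lines = []
    for key, value in sorted(after.items()):
        if before.get(key) != value:
            lines.append(f'{key} = "{value}"')
    return "\n".join(lines)
```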
B
We
yeah,
it
hasn't
been
a
big
deal
like
when
we
have.
We
do
get
people
internally
at
roku
like
bump
into
stuff,
and
they
get
really
confused
every
time.
There's
like
something
breaking
between
api
versions
like
it
just
came
out
yesterday,
and
so
I
don't
think
it
always
surfaces
in
a
way
that
we
would
see
it.
But
I
to
your
point
I
don't
think
like
the
broader
community
outside
of
like
salesforce
and
vmware
are
running
into.
C
I guess, to answer your question, at least for me, Emily: it is not dead on arrival.
C
Yeah, like, I think it's interesting, probably, to have some thought, whether it's in the RFC itself or not, or some discussion, around what does supporting, I guess, the shell stuff look like, and what is the cost or whatever for these people who want to do that, or if I'm a new user who doesn't have an existing buildpack, right. Because that's, I think, supported pretty well coming from a Dockerfile as well, right; they do support both of these things. So, how?
D
I think this is funny, because I feel like I'm always on team copy-Docker-conventions, and you guys are always telling me that Docker conventions are terrible, but I would say that the two forms of ENTRYPOINT are one of the Docker conventions that I really don't like at all.
C
I don't know if I like the convention; I just think it's, like, this is probably the world that people were coming from. Doesn't mean I have to like it, but I think that's the community and world we live in. So if we can find a better way to kind of support that, I think that would be a huge one for us.
B
Yeah, I worry about that, making, like, I feel like we're already kind of failing at buildpacks working across stacks, or, like, even implementations of bionic. Just some of the questions, like with the registry, of "oh, I want more information on the stack and the builder", make me feel like we're not doing a great job there. And if we start to have, like, oh, this stack has a different shell from that stack, or this one doesn't have a shell and this one does... like, I don't know, maybe, maybe not.
G
Although I think there is a risk there too. Or, like, I know Emily wants to move in the direction of pushing configuration out of the stack ID, so it could be configurable in a different way, per buildpack or something like that. But it seems like this is all part of that big "what do we do about profile and shells" and that kind of early design decision, before...
B
1.0. I mean, if we did it at the stack level, the whole default buildpack thing could maybe be a stackpack.
G
Yeah, yeah, that makes sense. This one felt, especially, like, you know, I felt like I should be very direct about it, because it's such a, you know, it could break a lot of buildpack integrations, it changes major functionality. Sorry, no.
G
Good. So the next one I put up there, 145, just to touch on really quickly, is also what Emily and I talked about. It would also be a large breaking change, but maybe not as controversial as this. So what if we moved all of the arguments to build, you know, bin/build, bin/detect, all that, into environment variables, and then introduced a new configure to put the configuration in? So move...
G
You know, move argv[1] to LAYERS_DIR, right, all caps, an environment variable, and then introduce a new configure, and that's where that goes. And then we can keep passing the arguments for time immemorial, so it wouldn't actually be breaking, and then remove them at, you know, some future date.
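The transition plan G describes (env var preferred, positional argument kept as a fallback) can be sketched like this; the variable name `CNB_LAYERS_DIR` is an assumed spelling for illustration:

```python
# Hypothetical sketch of the migration: a buildpack's bin/build reads an
# environment variable if set, and falls back to the positional argument
# so existing invocations keep working until the argument is removed.
import os
import sys
from typing import Dict, List, Optional


def layers_dir(argv: Optional[List[str]] = None,
               environ: Optional[Dict[str, str]] = None) -> str:
    argv = sys.argv if argv is None else argv
    environ = dict(os.environ) if environ is None else environ
    if "CNB_LAYERS_DIR" in environ:
        return environ["CNB_LAYERS_DIR"]
    if len(argv) > 1:
        return argv[1]  # argv[1] is the layers dir in the current API
    raise RuntimeError("layers dir not provided")
```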
D
But just, we can make the paths for those ones long, because they don't actually get exported; it's fine. And then maybe someday we'd, like...
B
And then, you're saying, this would also be getting away from the positional arguments and doing the environment variables to define these; we've talked about that a bunch, even independently.
E
The only other reason I moved the buildpack workspace to a top-level concept, and layers to a subdirectory underneath it, was that there was this other conversation that came up around things you may want to configure which don't necessarily relate to a specific layer. So, like, we were talking about process-specific environment variables, and the only way to specify them right now is to create a layer and then create the env directory inside it.
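The workaround E describes, a layer created solely to carry a process-specific variable, looks roughly like this (a sketch of the layout being discussed; the `.default` suffix and `env.launch/<process>` path follow the buildpacks env-directory convention):

```python
# Hypothetical sketch: to set a process-specific launch-time variable
# today, a buildpack creates a layer and writes the variable under that
# layer's env directory, e.g.
#   <layers>/<layer>/env.launch/<process>/<NAME>.default
import os


def write_process_env(layers_dir: str, layer: str, process: str,
                      name: str, value: str) -> str:
    env_dir = os.path.join(layers_dir, layer, "env.launch", process)
    os.makedirs(env_dir, exist_ok=True)
    path = os.path.join(env_dir, name + ".default")
    with open(path, "w") as f:
        f.write(value)
    return path
```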
D
This one I'm less opinionated about. I do agree that the path names are long; I feel like we could solve that in the buildpack workspace thing by just naming it "l" instead of "layers". And then also you could set two environment variables as well, for both, if you wanted to use that instead. But I guess that's hacky, so two directories also works.
G
I would be hesitant to change to something that we're not very happy with, because it kind of all doesn't make a huge difference; it doesn't make any difference in the functionality of the, you know, API. And if we're going to make a breaking change, we should at least, you know, make a breaking change that everybody thinks, yes, this is the right interface, before we do the 1.0 thing.
E
The one thing I like about that breakdown between a configure dir and a layers store is that then it makes it clear that the layers dir would actually be exported, like, these are the layers, whereas the configure can be ephemeral and none of that would be exported. Which, I think right now, you have to read the spec to know: okay, these are not going to be exported in the same output path.
C
I am sorry that we didn't give this one as much time, young Samuel, but hopefully we can bring it up again, right.