From YouTube: CNB Weekly Working Group: 2021-11-04
B
Yeah, I could speak a little bit to that. So, pack: we just put out an RC a couple of days ago, that's out there, and we're planning to release sometime next week. A little bit behind schedule, but we're making progress there.
A
The implementation team is hard at work on Platform 0.8 and Buildpack 0.7, which is mostly SBOM stuff.
B
I should probably add, on the platform side, sorry, that we got a contribution for Tekton adding support for Chains, which has to do with signing and cosign integration. We're basically doing all the legwork right now to see when we can release, but it seems like we might have something coming up in the near future.
C
On the libcnb side, we released a new version of libcnb that has exec.d support, and we're planning the next major version of libcnb. There's a separate branch where we've been working with some of the folks over at Paketo to figure out a unified API, so that they can reuse libcnb instead of packit, potentially.
D
Yes, thank you. I just wanted to reiterate our conversation last time. There were three big changes that came out of it. One: instead of putting the SBOM in /sbom at the top level, we're putting it in /cnb/sbom.
D
Another
change
we
decided
on
was
we
wanted
labels
explicitly
for
the
stack
s
bond,
which
the
most
pragmatic
reason
being
for
rebasing.
It
would
be
easy
to
just
pull
the
s-bomb
layer
on
a
rebase
to
basically,
you
know,
keep
it
up-to-date
as
opposed
to
doing
some
sort
of
weird
searching
and
then
the
last
one
was.
D
I
believe
I
believe
we
said
we'll
keep
merging
out
of
scope
merging
you
know,
cycle
dx,
recycling,
dx
or
just
merging
anything
at
all,
we'll
keep
it
out
of
scope
for
this
rfc
and
I
don't
know
just
defer
that
to
till
later,
if
I'm
missing
anything
else,
I
believe
that
that
was
the
outcome
of
the
last
conversation
and
that's
what
I've
updated.
The
rc
with.
D
I'm sorry, I wanted to sneak in: just for parity with this new label for the stack, the buildpack SBOMs that the lifecycle writes would also have their own top-level label. Sorry, now I'm done.
B
I am curious, as far as the extension goes: I don't recall there being a .json at the end. Is that true for the other SBOMs as well?
D
Well, that's certainly how it is in the original SBOM RFC, I think.
A
Actually, I forget: in the original RFC, do we support XML at all, in addition to JSON, or are we saying that we only support JSON formats?
C
I had some questions about, like, I didn't catch these /cnb/sbom paths versus the top level.
D
Okay. Okay, I'm sorry, now I understand; that's a good question. So that one is staying under the layers sbom directory, right. This is a completely different SBOM: this is one made when the stack is made, by the stack operator. And we said this last time: we don't want to put it in layers, because by the time the platform does a build...
D
Yes, it would go in a different place, probably in the same layers directory, the layers sbom directory. This was just for a source SBOM, a source stack SBOM. With no modification, it goes into this layers directory.
A
I might suggest that if we support merging in the future, we merge outside of the image: we support pulling all the individual SBOMs and merging them into something like a cosign image that isn't part of it. That way you don't have to duplicate all the entries, or whatever tool knows how to find the special label can go to the layer and download the fs layer. That could also be the merge at the time. I don't think we have to decide this now.
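A minimal sketch of the out-of-band merge idea above: pull each per-buildpack SBOM, combine them, and attach the result somewhere outside the app image (for example via cosign). The CycloneDX-style document shape and the sample component fields are assumptions for illustration, not the actual CNB format.

```python
def merge_sboms(docs):
    """Combine the 'components' lists of several SBOM documents,
    de-duplicating entries by (name, version)."""
    seen = {}
    for doc in docs:
        for comp in doc.get("components", []):
            seen[(comp["name"], comp["version"])] = comp
    return {"bomFormat": "CycloneDX", "components": list(seen.values())}

if __name__ == "__main__":
    # Two hypothetical per-buildpack fragments that share one entry.
    buildpack_a = {"components": [{"name": "node", "version": "16.13.0"}]}
    buildpack_b = {"components": [
        {"name": "node", "version": "16.13.0"},  # duplicate of the entry above
        {"name": "npm", "version": "8.1.0"},
    ]}
    merged = merge_sboms([buildpack_a, buildpack_b])
    print(len(merged["components"]))  # duplicates collapse, leaving 2 entries
```

The merged document would then be pushed as its own artifact, so none of the entries need to be duplicated inside the app image itself.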
C
The question I have is: what's the plan for compatibility of this SBOM location in the run image with the Support Dockerfiles RFC?
D
Oh, so this is actually supposed to precede that. It was inspired by some comments inside that not-yet-merged Support Dockerfiles RFC, so hopefully it should just carry forward.
A
io.buildpacks.base.sbom, pointing to a filesystem layer that contains the CNB SBOM, becomes the new io.buildpacks.sbom in the Dockerfiles RFC.
C
But how would that work in the other one?
A
In the Dockerfile case, you run the generation again and it regenerates the SBOM location on disk, and then whatever is responsible for repackaging, or for doing the build of the image, should ideally ensure that that file ends up in a separate layer that continues to be referenced by the label, and update the label.
C
Okay. And we also need to make sure that in the intermediate term, where we don't have that Support Dockerfiles RFC, we list out these constraints explicitly and have the tooling to do that. So, like: the SBOM should live in a separate layer in that location, and the label should point to a layer that just contains the SBOM.
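The constraints above could be checked mechanically. Below is a hedged sketch of such a validation, modeling the image as a plain dict; the label name and the /cnb/sbom path come from the discussion, while the digest values and dict layout are purely illustrative.

```python
def check_sbom_layer(image):
    """Validate the two constraints discussed: the SBOM label must point
    at a layer present in the image, and that layer must contain only
    the SBOM files."""
    diff_id = image["labels"].get("io.buildpacks.base.sbom")
    if diff_id is None:
        return "missing SBOM label"
    layer = next((l for l in image["layers"] if l["diff_id"] == diff_id), None)
    if layer is None:
        return "label points at a layer not present in the image"
    if any(not path.startswith("/cnb/sbom/") for path in layer["paths"]):
        return "SBOM layer contains files other than the SBOM"
    return "ok"

image = {
    "labels": {"io.buildpacks.base.sbom": "sha256:aaa"},
    "layers": [
        {"diff_id": "sha256:000", "paths": ["/bin/sh"]},
        {"diff_id": "sha256:aaa", "paths": ["/cnb/sbom/sbom.json"]},
    ],
}
print(check_sbom_layer(image))  # ok
```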
C
I don't know where you'll validate that, but if whatever is generating that label, in this case pack... it would be easier if we just take the Docker image that's already built, the way the run image is currently built in the documentation, and then pack exposes a package command that takes in the container image tag and the location of the SBOM. I don't know where it should live; ideally it should be on disk locally.
B
This sort of feature is really more targeted at enterprises that are going to have some sort of workflow, that are okay with having this two-step workflow. It's not really an end-user-centric feature where we need to provide a lot of tooling to make it as easy as possible.
A
You know, we've talked about this for a while, like a "stackify": some command that helps you create a stack, or now it's going to be a base image, because stacks go away. There's metadata on there that's non-obvious anyway. I think some of it you can add using a Dockerfile, but I think it ties into that work: we should just make it really easy for whoever wants to create a stack to create one using the pack CLI.
A
I definitely don't want the Support Dockerfiles RFC to make it so that users feel like it's much easier to dynamically install packages at application build time versus creating a stack, which could be faster if they're going to use it for multiple apps. So I very much agree that we should make it really easy to add the metadata and attach the SBOM or whatever when you're making a stack image, but it doesn't seem unreasonable to me that it's just one command you run after the Dockerfile.
B
I am curious, as you brought that up, whether or not there are other alternatives. I can't come up with any, but maybe somebody else could. I just don't see how you would be able to create something dynamically and attach it after the fact.
A
Do we need the pack UX as part of the RFC, given that we don't have any standardization right now? I think otherwise people are going to pick their own, or pick a label to go with. It might be good to just standardize what it looks like to have an SBOM at all.
B
I wouldn't think so, because I don't think it should be necessary for pack to be used in this sort of workflow; they should be able to do it with a two-step Dockerfile build process.
C
I think the only thing is that labels carry over in OCI images, especially when you do a FROM whatever. So the problem is: you need to be able to invalidate that label you're setting to identify the SBOM when you add a new layer at the end. That's the only issue; otherwise that works perfectly.
A
If you upload a new SBOM layer that has a new SBOM in it, you just override the label to point at the new thing, even though you didn't delete the previous layer. If the file name is the same, it should work; you don't have to do anything special, you just have to do the same operation again.
A
Like
if
you
I
don't
know
if
this
helps
exactly,
but
if
you're
appending
layers
right
and
then
your
old
s-bomb
is
in
the
middle
and
then
you
run
the
same
operation
again
of
generate
a
new
s-bomb,
put
it
at
the
bottom
and
then
point
the
label
at
the
new
one,
because
the
file
path
is
the
same.
At
least
the
new
one
will
override.
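That flow can be sketched as a small simulation: append a fresh SBOM layer, re-point the label at it, and rely on the identical file path shadowing the old copy. The digests and dict layout here are illustrative; only the label name and /cnb/sbom path come from the discussion.

```python
def replace_sbom(image, new_layer):
    """Append a new SBOM layer and re-point the label at it. The old
    SBOM layer is left in place; because the new layer writes the same
    file path, it shadows the old contents when the image is unpacked."""
    image["layers"].append(new_layer)  # old layer stays in the stack
    image["labels"]["io.buildpacks.base.sbom"] = new_layer["diff_id"]
    return image

image = {
    "labels": {"io.buildpacks.base.sbom": "sha256:old"},
    "layers": [
        {"diff_id": "sha256:old", "paths": ["/cnb/sbom/sbom.json"]},
        {"diff_id": "sha256:app", "paths": ["/workspace/app"]},
    ],
}
replace_sbom(image, {"diff_id": "sha256:new", "paths": ["/cnb/sbom/sbom.json"]})
print(image["labels"]["io.buildpacks.base.sbom"])  # sha256:new
```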
A
We could have a validation check to see if the SBOM layer isn't last, and then complain that additional layers have been added to the image. So: keep the same label, do everything in the RFC, but in the pack UX, if you're doing something and it finds that the SBOM isn't last and there's other stuff after it, it could complain that this is maybe an invalid SBOM. That could be a nice validation.
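The validation suggested above is a one-liner in practice: warn whenever the layer the SBOM label points at is no longer the topmost one. As before, the image model and digests are illustrative assumptions.

```python
def sbom_layer_is_last(image):
    """Return True if the layer the SBOM label points at is the topmost
    layer; anything stacked after it may have invalidated the SBOM."""
    diff_id = image["labels"].get("io.buildpacks.base.sbom")
    return bool(image["layers"]) and image["layers"][-1]["diff_id"] == diff_id

image = {
    "labels": {"io.buildpacks.base.sbom": "sha256:sbom"},
    "layers": [
        {"diff_id": "sha256:sbom", "paths": ["/cnb/sbom/sbom.json"]},
        {"diff_id": "sha256:extra", "paths": ["/opt/tool"]},  # added later
    ],
}
if not sbom_layer_is_last(image):
    print("warning: layers were added after the SBOM; it may be stale")
```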
A
Which is mine: Support Dockerfiles. This was the PoC stuff you wanted to talk about, right? So I think the main thing I wanted to go over for this is the kind of approach we take to implementing it, because I know Charles was looking into implementing this using a combination of user-space and non-user-space, non-privileged-ish operations.
A
That,
like
would
require
username
spacing
or
something
like
that.
I
wonder
if
it's
better,
if
we
focus
on
two
implementations.
First,
that
I
wanted
to
get
people's
thoughts
for
which
one
we
should
prioritize.
So
the
way
I've
thought
about
this
is
we
do
one
entirely
user
space
implementation
that
doesn't
require
any
user
name
spacing
or
anything
just
like
the
build
pack
build
using
conoco.
A
I think it might have been me at some point, yeah. I think that seems right. So I think we wouldn't be able to do a creator: we would always be able to use static images, but we wouldn't be able to do a creator run without two containers. But you can do that in Tekton.
A
I
mean
yeah,
so,
like
you
really,
you
might
be
able
to
do
with
a
single
creator
if
you're
willing
to
use
conoco
to
wipe
out
the
whole
builder
at
the
end
of
the
process
and
pull
the
run
all
the
running
bits
into
the
image
and
then
extend
it
and
export
those
that's
another
option
too.
I.
A
I was imagining that they would still be cached, so I could use kaniko as a library to take a snapshot of the builder, apply the changes, and then take the layers that got generated and store those in a volume, at least in the lifecycle, if we're talking about the pure user-space implementation.
A
Right,
like
everything,
this
whole
thing
could
happen
like
you
know,
especially
if
we
do
that
act
at
the
end,
the
whole
thing
could
happen
in
a
single
container,
even
though
we're
extending
multiple
images,
because
we
could
do
that,
the
build
image
gets
its
extensions,
live,
we
cut
the
layers
off
and
then
at
the
end
of
the
build.
Once
everything
is
ready.
We
wipe
the
whole
builder
image
and
replace
it
with
the
run
image
and
then
run
the
runtime
docker
files
on
top
of
it.
A
That could also use creator in some contexts, where every time a Dockerfile needs to get built, we would reach out to the daemon to build a new image. The big, interesting part of this implementation would be that it would change the image each phase; the phases would be variable. So the build phase would start on an image that we did not know ahead of time. It's something pack could implement, but it's not something you could do in Tekton without access to a Docker daemon or something like that.
A
But
I
think
pac
should
use
that
implementation
because
I
think
it'll
be
faster.
The
cache
will
be
easier
things
like
that
and
we
can
do
the
run
image
build
in
parallel,
even
in
the
creator
case.
A
It
gets
more
complicated
because
pack
also
has
untrusted
builders
which
run
in
the
phase
yeah
and
like
the
fate
individual
phases
as
well,
so
like
pack
won't
always
use
the
creator
flow
locally,
but
I
guess
it'll
still
be
the
same.
Docker-Based
execution
yeah
if
you're
doing
docker
files.
The
creator
phase,
if
you're
using
the
docker
daemon,
is
interesting
because
we'd
want
to
do
the
build
for
the
build
time
image
between
the
detect
and
the
build
step
or
whatever.
A
So
I
then,
it's
just
a
couple
other
phases
right
besides
those
so
like
the
if
you're
doing
a
runtime
docker
file,
I
don't
know
if
the
creator
phase
locally
makes
sense
if
we
go
with
the
daemon
implementation,
if
that's
faster
than
the
other
implementation.
Yeah
creator
really
throws
a
wrench
in
this
because
it
really
only
works.
It's
kind
of
opposite.
Like
creator.
I
only
really
use
locally
today,
like
pack
sort
of
based,
and
we
don't
use
creator
in
our
like
sort
of
tectonish
sort
of
environment.
A
But
then
it's
almost
going
to
be
the
reverse,
where
it'll
be
easier
to
run
creator
in
a
tech
ton
environment.
But
if
you
have
any
extensions,
you
just
won't
be
able
to
really
use
creator
on
the
local
at
least
not
perform
it.
A
I unfortunately have to leave a little bit early. Can I assign somebody else to run through the agenda?
D
All right, peace out; thanks for hosting. Are we done with that? Is there anything else to say, any last comments about the different PoC options for Support Dockerfiles?
B
Yeah, I was just saying it wasn't me, I think it was Steven, but I could, given our audience here. Let me see real quick. I know the question, Joe; you might have been absent for yesterday's conversation, but basically, from his standpoint, I think it was the idea of whether or not we really wanted to go with this buildpacks-centric configuration file, as opposed to the project descriptor.
B
As
like
to
encompass
more
than
just
build
packs
right
and
kind
of
bringing
that
to
the
community
here,
instead
of
just
on
the
core
team.
E
Yeah, I think the discussion I remember from a couple weeks ago also included partial versus complete support. I think there are two questions around the project descriptor. One is: is a platform expected to support it 100%, or is it acceptable to partially support it? And the second is sort of a cosmetic, experience part: whether we continue with this being a more generic project descriptor, or whether we try to make it more pack-native. Well, then, beyond that, there's a whole set of questions.
E
Well, it does, yeah, it does, but I think there are still some questions. I think it does start that; it's sort of a forcing function that we need to answer that. But as it exists today, I don't think the lifecycle can be fully aware. There are things in the project descriptor that need to be addressed at the platform level, I think.
E
Natalie's
rc
is
like
a
conversion
process
right,
so
I
think
it's.
It
is
slightly
different
for
it
to
be
aware,
in
the
sense
that
there's
some
tooling,
that
it
can
use
to
convert
into
a
format
that
it
expects,
but
that's
not
not
exactly
the
same
as
it
being
integral
to
the
to
the
life
cycle.
E
I'm
not
sure
I,
like
a
personal
level
care
the
things
I'm
doing
are
largely
the
things
that
are
I'm
working
on
and
you
know
in
outside
of
the
project.
I
largely
care
about
pack
using
it
right.
I
think
gosh.
I
forget
what
the
original
motivation
was
for
some
of
this
I
mean,
I
think
it
may
have
been
just
to
get
things
out
of
pack
that
were
not
specific
to
pack
and
move
them,
but
they're
yeah.
B
Yeah, I don't know that I would say that's a lifecycle concern, although it very well could be. I think that's a pretty hard discussion to have about exactly how we want to do that, because if you think about it, it doesn't necessarily depend on the project descriptor. If we could somehow pass buildpack URIs to the lifecycle in some other format, a configuration file or anything like that, it could still have that sort of functionality.
B
I
think
what
makes
sense
right
now
is
like,
let's
say,
environment
variables
right.
The
platform
right
now
is
all
it's
having
to
do
is
do
like
a
pass-through
of
the
environment
variables,
and
so,
in
that
case,
we're
having
to
take
something
from
a
config
file
and
put
them
into
the
platform
m
directory.
So
that's
a
sort
of
process
that
seems
almost
mundane
and
the
lifecycle
could
do
it
on
its
own
or
consume
that
file
on
its
own.
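The pass-through described above is mechanical: each variable from the config becomes one file under the platform's env directory, the one-file-per-variable convention used to hand build-time env vars to buildpacks. A minimal sketch (the `BP_NODE_VERSION` variable is an illustrative example, not part of the discussion):

```python
import os
import tempfile

def write_platform_env(platform_dir, env):
    """Write each variable as <platform>/env/<NAME>, with the variable's
    value as the file contents."""
    env_dir = os.path.join(platform_dir, "env")
    os.makedirs(env_dir, exist_ok=True)
    for name, value in env.items():
        with open(os.path.join(env_dir, name), "w") as f:
            f.write(value)

platform = tempfile.mkdtemp()
write_platform_env(platform, {"BP_NODE_VERSION": "16.x"})
print(open(os.path.join(platform, "env", "BP_NODE_VERSION")).read())  # 16.x
```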
B
So for me, I wrote down the three questions: cosmetics, whether it's required for the platforms, and whether this is a pack config or a lifecycle config or something in between. The cosmetics, I think we could talk about in depth, but "required for platforms" is something I really strongly want to advocate for: the idea that this configuration file should be understood by platforms, so that we can carry the user experience and user expectations across all the different platforms that support buildpacks.
A
I feel like we were advocating for that, but then, was it Tekton? I think it was Tekton where we couldn't figure out a way to make it work nicely, or easily, I guess, yeah.
E
You know, like, in a single stage, pre-process the project.toml into... because I know Emily was very adamant that the lifecycle should not be aware of the project.toml schema and all that, but it could take some other intermediate language as input, or intermediate format as input, and so that preprocessor would convert it into whatever other TOML files needed to exist on the system for a Tekton-like platform to consume.
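The preprocessor idea above could be as small as a flattening step: take the parsed project descriptor (the dict you'd get from loading project.toml) and emit the inputs a lifecycle-oriented platform consumes. The field names below loosely follow the project-descriptor schema and are illustrative assumptions, not the exact spec.

```python
def preprocess_descriptor(descriptor):
    """Flatten a parsed project descriptor into platform inputs:
    an env-var map and an ordered list of buildpack URIs."""
    build = descriptor.get("build", {})
    env = {e["name"]: e["value"] for e in build.get("env", [])}
    buildpacks = [bp["uri"] for bp in build.get("buildpacks", [])]
    return env, buildpacks

# Hypothetical descriptor, as if parsed from a project.toml file.
descriptor = {
    "build": {
        "env": [{"name": "BP_JVM_VERSION", "value": "17"}],
        "buildpacks": [{"uri": "docker://example/java-buildpack"}],
    }
}
env, order = preprocess_descriptor(descriptor)
print(env, order)
```

A platform like Tekton could run this as its own step, so the lifecycle itself never has to know the project.toml schema.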
B
Yeah,
so
I
think
there
are
two
there's
like
a
back
end
to
the
configuration
file
and
there's
the
front
end
right.
So
the
front
end
is
the
piece
that
again
I
care
for
the
platforms
being
able
to
support
for
the
user's
sake,
and
so
when
we
talk
about
techton
specifically
the
way
I
envision
it
is,
we
would
make
use
of
the
converter
in
tecton
and
say
this:
is
the
user
provided
configuration
project
descriptor?
E
So maybe it would help, since we're talking about a bunch of different problems, to get really crisp about the problem we're trying to solve, while sort of being value-neutral on the possible solutions.
E
And maybe tied into that is that not all platforms may be capable of supporting all features, like builders, for example. Some platforms may not want to, or be able to, allow custom builders. I don't think we have builders in the project.toml, but I bring it up because it's something that potentially could be in there. Not sure if that's a developer-experience problem, but it definitely is a problem in terms of defining how support for the project descriptor works.
B
Yeah, I think, if you think about the project descriptor, and basically just the whole buildpack system, as an alternative to Dockerfiles: Dockerfiles have a back end and a front end, right? The front end is the file that you write and the back end is the execution of those steps. In my mind, the project descriptor is the format, similar to that Dockerfile, and then the platforms, the back end, the lifecycle execution, is again just that back-end piece. When I take this configuration file and give it to a different platform, I expect the same outcome, similar to when you take a Dockerfile and run it through kaniko versus through Docker or Podman: all of these should yield the same result, theoretically, or at least that should be the goal.
B
It's
something
that
allows
you
to
either
air
out
or
warn
for
certain
things
that
might
be
ignored
or
not
supported,
and
that's
something
that
platforms
could
opt
in
to
leverage
as
well.
So
we
could
do
that,
but
again
the
ultimate
goal,
if
is,
if
you
have
two
platforms,
and
they
both
want
to
support
everything
that
the
project
descriptor
entails,
then
it
should
be
really
easy
to
do
so.
Instead
of
having
to
duplicate
the
logic
and
feature
set
on
each
individual
platform,.
B
Yeah
I
mean,
I
think,
that's
that
could
be
a
that
could
be
something
for
pushing
into
the
life
cycle.
I
think
for
me
right
now,
I'm
barely
at
the
step
of
hey
right
now.
It's
the
project.
Descriptor
is
an
extension
spec
which,
by
its
nature,
says
that
most
platforms
could
just
simply
ignore
it,
and
I'd
want
to
bring
it
into
the
fold
and
say
that
you
should
not
ignore
the
project.
Descriptor
right,
like
platforms,
cannot
ignore
the
project
descriptor
file.
They
have
to
do
something
they
have
to
tell
the
user
hey.
B
We
don't
support
these
features
or
yeah
or
just
do
it
right,
and
then
we
can
make
it
easier
for
the
platforms
if
we
push
into
the
lifecycle.
But
that's
like
you
know
next
steps
like
right
now.
I
think
we're
still
at
the
point
where
we're
discussing
whether
or
not
we
want
to
bring
the
project
descriptor
into
a
required
set
of
the
spec.
E
Something like App Engine, Google App Engine, where buildpacks are an implementation detail of a particular platform.
B
I mean, that is a good question, so I'll throw out a different one: Skaffold. Skaffold, to me, is a platform, because it itself has a configuration, but your application could theoretically have the project descriptor, and it is passed along to pack, because they use pack internally, and it would support all the features you'd expect from an end user's perspective.
B
Yeah, that is interesting, if they essentially leverage the buildpack technology but don't expose any of the configuration to the end user. I think that's a...
A
Yeah, there's also Waypoint, the HashiCorp thing, which I guess is similar to maybe the Skaffold use case: they have their own config thing, and their users probably may not know about all these things.
E
So I would say we're coming close to time, so maybe we should talk about some concrete next steps. I feel like we maybe need a strawman, because there are a lot of different aspects to this that we're batting around, and actually, I think, multiple problems that we're trying to discuss.
E
It doesn't have to be an RFC, but I think it would help to have a one-pager kind of thing on the problem and what somebody's proposing, even if you aren't 100% sure about it. And I might take a stab at that, if I get a chance.
D
Cool, any other comments? I mean, getting through five topics was sort of ambitious for any working group. Any last comments on this one, project descriptor questions?