From YouTube: Working Group: 2021-04-22
A
It does help with the actual process that's happening during detect, and it takes that from the spec to the actual website, but we still need some examples, I believe, either in samples or something that is more complicated than just what we have right now. I don't think we have any complicated examples that use that other kind of detect in the samples repository; we mainly just have one-off, pass-or-fail buildpacks.
B
Yeah, I think with the samples, at least when it started, right, it was more so that we could have something to dogfood and test with, in almost an acceptance or end-to-end test. They weren't really expected to be very high-fidelity examples of very specific solutions.
B
It definitely sounds like that would be extremely helpful. I just don't know that we have any real formal process for those things to actually materialize. It's more like: okay, so we've identified that we want them, right, we need them, but who's actually going to do it, or what's going to be the forcing function that makes us actually create those?
C
This sounds like another thing that would fit really well into the best-practices work. I think at first Heroku had some different ideas about how they were going to use this, and since Paketo has some patterns that seem like they work pretty well now, it's probably a good idea to start capturing some of that.
A
That's just like a system package manager, where you leave it to the user to decide: they can specify the dependency and the package manager can install things, or you individually write a buildpack for special use cases. I've also gotten into a debate about which one is better: whether it's worth the overhead of maintaining individual buildpacks, or whether you should just provide a generic one that does everything. I'd be curious to hear other people's thoughts.
C
I worked on the Paketo project, and I think we came to similar conclusions, right, but I don't know if that's just based on where we started out, or whether that's a pattern you will always follow no matter where you start. That's hard.
B
That's really interesting, and I think at this point we're just talking more generally, right, but it does seem, at least for me personally, like the scope or the domain of the actual buildpack authoring process is something that's absent. As a platform implementer, I know what a buildpack is, and I know some of the API interactions with it.
B
But as a buildpack author, what things could we do in the project to improve that? It seems like that's absent, and I don't really hear about it all too often. I hear it maybe every once in a while, but typically it seems like there's this separation, whether it be the Heroku people that implement the actual buildpacks, or the Paketo project as a whole, or even Google, which I think never really talks to us about their buildpacks and how they work.
B
We don't have that line of communication with those groups, and so I don't know what can be done. But maybe... I don't know. Sam, or I know, Dan, you have some history there, but Sam, have you looked at the Paketo project and interacted with their working groups and all that?
B
Oh, you, Sam, not the absent Sam. Yeah.
A
I've interacted less so with the Heroku ones, but the Paketo ones have been useful, because they're singular buildpacks that do one thing and do it well, so it's easy to plug and play. I think a lot of the RFCs I have been opening have been around my experience as a buildpack author trying to fill in all these gaps around our use cases. I think if we had more buildpack authors attending these meetings, we would come up with even more common use cases. But yeah, so how...
A
I know they do attend at times, when something breaks or something.
A
But that's about it. I haven't interacted with them much, but I helped out with their equivalent of libcnb to get it up to date with the buildpack API. That's pretty much all the interaction I have with them.
C
Yeah, I've gone to some of their syncs in the past, and I could definitely start doing that again, but I think some of the awkwardness there is because we already have a Go bindings library, right, and they have their own, and it's like: well, this is not project-related, we've been doing our own thing, so...
A
In the last working group meeting, the outcome was that there would be an RFC created to either repurpose the distribution team or propose a buildpack authors' team that would then take on the maintenance of libcnb and incorporate the feedback from the past meetings. I think the conclusion was that it was easier to just change libcnb to be similar to packit, rather than pull packit out of Paketo and remove the parts that are Paketo-specific to have something generic.
A
I was going to create an RFC around documentation for buildpacks and how pack can be used to better display the environment variables, files, or dependencies contributed by a buildpack, or how to use it. The Paketo ones already have a lot of this information, like the default values for different environment variables and what they do. That would have been a good source for keeping it true to what detect and build are actually doing, and for making sure that the documentation is consistent with detection, because that's the other thing: where your buildpack is actually doing something else.
A
Also, if you have a builder, you can check the documentation for the included buildpacks and see what it expects and what's going to happen, or which environment variables you can set, or which files it is looking for. I guess the good thing about having POSIX-like conventions is that you can boil it down to environment variables and files most of the time, and if you could just put that in the buildpack's normal documentation, maybe pack could parse it in a nice way.
A
So, let's say a platform owner gives you a builder, and you want to figure out what the buildpack does. The only link you have right now is really just the homepage, so you do pack inspect-builder and you go to the homepage for each buildpack to see the documentation. Most other package ecosystems have inline documentation or something like a readme attached to the package itself.
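For context, the homepage mentioned above is carried in the buildpack's descriptor, and it is essentially the only documentation pointer a platform can surface today. A minimal sketch (the ID, version, and URL are made up):

```toml
# Hypothetical buildpack.toml, sketching where a documentation link
# lives today: `homepage` is essentially the only pointer that
# `pack inspect-builder` can surface for a buildpack.
api = "0.5"

[buildpack]
id = "example/node-engine"
version = "0.0.1"
homepage = "https://example.com/docs/node-engine"

# Free-form metadata like this is one place richer inline docs
# (default env vars, config files read) could conceivably live.
[metadata]
description = "Installs the Node.js runtime"
```

Something like a readme or structured fields shipped alongside this descriptor is what the comparison to other package ecosystems points at.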
C
I think this would be like: we have this inspect-builder command that will list the buildpacks, but then, if you could see the provides and requires and how they could potentially match up in a group, that would also help with this problem you're having around visualizing how buildpacks interact during detection.
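As a rough sketch of what such a view would surface: during detection, each buildpack writes provides and requires entries to a Build Plan, and a group passes when every requirement is matched by some provider in the group. The buildpack names and version hint below are illustrative:

```toml
# Illustrative Build Plan entries written during detect.
# A group passes detection when every [[requires]] entry is matched
# by a [[provides]] entry from some buildpack in that group.

# e.g. written by a hypothetical node-engine buildpack:
[[provides]]
name = "node"

# e.g. written by a hypothetical npm-install buildpack:
[[requires]]
name = "node"

# Buildpack-specific hints may ride along under metadata.
[requires.metadata]
version = "14.x"
```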
A
I think the main issue, at least the one that I have seen around adoption, is that this does something magical, and people are not used to that. They at least want to figure out how to tweak certain things, and the only way to do that is to go to the canonical documentation for a specific buildpack, and oftentimes it's hard to discover that from your platform.
B
I wonder how much this plays in with that RFC we have for pack interact, an interaction mode where you could actually, hopefully, see a little bit more detail. So, for instance, for the provides and requires, you could run detect and, at that point, just inspect that before actually running the next phase.
B
Yeah, totally, I agree. I know there are certain individuals that, I guess, are maybe more down that path, who want to bring in a lot of those app-developer, UX-centric issues and improvements, so that'd be really cool to see.
B
And it's funny how it ties back to the buildpack author, because a lot of this seems like it also helps buildpack authors just build their buildpacks, right? Why is this thing failing? That quick loop of trying it out and getting a little bit more information than just a random error or exit code would be great.
D
Well, it sort of goes back to that principle of: when providing an error, you should also do your best to point to where one might look for the solution.
A
One other issue right now with detect, and this was also something I noticed in the kpack chart, is that the logs from detect only show up when you run it in verbose mode. Is there any history behind that? Why does it only happen in verbose mode? Was it set up that way because you're running all of these things in parallel and you don't want a crazy amount of output to flood the logs?
C
So I think it was the crazy output. One of the things we started distributing, at least for Paketo, was large builders, and so you run detect on every single group in parallel, and then it's like: well, I only care about Python, but I'm seeing ten lines, or ten pages, of Go detection output. So yeah, that was overwhelming.
A
I guess that also causes... I wonder if pack should just put those detection logs somewhere, like a temp logs directory or something like that, instead of just throwing them away and only printing them in verbose mode. Then, if something fails, it can say: detect failed, the detection logs are at this path, go check that out. And if it passes, you can just clean up those logs after everything has passed, I guess after the build has passed.
A
Yeah, but I think the main thing at this point, for me, is mainly about adoption and how the UX around this can be better than what Docker offers. Because with Docker, people can see, okay, these are the commands they copy anyway, so they get some idea of what's happening. For us, it's mostly just the build logs, and then, if your application doesn't match a buildpack, you have no output, no information coming back to you.
A
Maybe the other thing would be, and I don't know if we differentiate between standard error and standard out for detect right now, but if something goes to one or the other, maybe it would be displayed even in non-verbose mode, because that's also something I'm not seeing right now.
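For reference, which stream detect output lands on is in the buildpack author's hands. Here is a minimal sketch of a bin/detect, written as a shell function so it can be exercised standalone; the package.json check and the messages are assumptions, and in a real buildpack detect runs with the app source as its working directory rather than taking it as an argument. Per the spec, exit status 0 passes detection and 100 fails it.

```shell
# Sketch of a buildpack's bin/detect, as a shell function for
# illustration. Assumption: a Node.js-style buildpack that detects on
# the presence of package.json. In a real bin/detect the app source is
# the working directory and only the plan path arrives as an argument.
detect() {
  app_dir="$1"
  plan_path="$2"
  if [ -f "$app_dir/package.json" ]; then
    # Diagnostics go to stderr, so a platform could surface them even
    # when ordinary (stdout) detect output is hidden in non-verbose mode.
    echo "package.json found: providing node" >&2
    printf '[[provides]]\nname = "node"\n\n[[requires]]\nname = "node"\n' > "$plan_path"
    return 0
  fi
  echo "no package.json in $app_dir: detection fails" >&2
  return 100 # spec-defined "fail" status, as opposed to an error
}
```

Splitting diagnostics onto stderr this way is what would let a platform show only the interesting lines when many buildpacks detect in parallel.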
B
Yeah, I definitely recall something in regard to that, and I think people wanted the general logs to be split between standard error and standard out, but not so much based on what the actual issues were. Not that the actual error itself would go to standard error; rather, let's say, verbose output would go to standard error and everything else would go to standard out.
B
I
don't
recall
exactly
what
the
outcome
of
that
was,
but
I
do
recall
it
was
kind
of
controversial
just
because
there
wasn't
a
real
agreement
on
exactly
what
it
should
be.
So
I
think
the
problem
needed
to
be
determined
first
before
the
solution.
If
that
makes
sense.
C
Yeah, just tying this together: I feel like we're getting a lot of feedback from you around this idea of the magical layer, about building trust there, letting people inspect what's happening, and not making it too magical. So there's definitely work to do there.
B
Yeah, totally. So I guess we have the action item of an RFC that will hopefully give this sort of goal to somebody, hopefully the maintainers, to then follow and pursue, trying to satisfy the buildpack-author persona a little bit better. Are there any other action items we could take from this whole conversation?
A
Maybe that, and this other one around putting detection logs in some temporary file. I think that can just be a pack issue.
B
I see the value. I just personally have a backlog of RFCs that I have to do. Are any other people interested in pursuing it? Well...
B
Yeah, yeah, I think we'll definitely want to discuss it a little bit more, and maybe we could take that to the platform sync and then just dig into exactly what we're trying to achieve and how we would maybe even implement it.
B
I know I have a couple of action items in this document. No, I haven't done them yet, but yeah, I do have to do that, I guess, unless you have some free time and want to throw something together, but otherwise...
B
Right, yeah. I think, if anything, it might become like, what is it, the sub-team RFC, just to really specify exactly what protocols and credentials we'll use. But again, it's a really good start for just an issue, and then maybe it doesn't even have to go to that length, if we find that there's a very specific use case we want to solve, like just checking out from GitHub public repos; that's a very easy, trivial thing to do.
B
Well, hey, if you could get two for one. But I do think at that point, that would be a bigger scope, so it would probably require an RFC.
B
Cool. Is there anything else anybody would like to discuss? I know Emily said she was going to try to join the second part, but if there's nothing...
D
Nothing? Yeah, I mean, I came to ask permission to schedule the next two sessions, but I see that we have something next time, so we'll take the two after that. Is that all right? Natalie and I want to do a workshop to share out the user research that we did.
B
Cool, yeah, that'd be awesome, and we'll promote it as well during the working group.
D
All right, great. Sam, I love hearing about your experience here; it's super interesting. I feel like we should get some time to chat, if you'd be open to that. I know you obviously have a full-time job and you're doing us a favor by being here, but...
A
Yeah, yeah, sure. Can I...