From YouTube: Working Group: 2021-01-07
C
Let me share my screen, and I'll just give an update on where we're at with this. As I shared before the break, there is in-flight user research around a couple of different topics relevant to CNB, which I put into this RFC, and the next step in the process has been defining the participants.
C
We're working with... this is a description of how folks will be recruited. We have this survey here, as well as a set of follow-up questions that they ask on a call to learn more about the details of their work, particularly with regard to how well they understand containers. So far that has resulted in this document, which is linked in there. But let me see if I can change the permissions.
C
So
everybody
can
look
at
it
and
yeah.
We
have
about
a
dozen
candidates
in
here,
so
why
don't
I'm
gonna
go
ahead
and
make
a
column
here
for
community
comments.
C
The
client,
I
guess,
is
me
since
I'm
sort
of
directing
things
with
them
and
if
folks
are
keen,
you
are
welcome
to
go
through
this
list
and
see
the
the
who's
on
the
you
know
after
about
a
week
of
looking,
we
expect
this
to
be
a
longer
list,
certainly,
but
who
are
the
possible
people
we
can
talk
to
and
who
some
context
on
what
languages
they
use
like?
What's
the
scale
of
their
application,
how
long
they've
been
working
with
containers
and
their
experience
with
kubernetes
their
job
title?
C
That kind of thing. So please feel free to add a column here if there's anybody you think is an interesting candidate to talk to. Similarly, for this document, the recruiting package, I want to make sure that it's also editable if folks want.
C
Not
sure
I
can
change
the
editability
of
this,
but
you
should
be
able
to
get
a
sense
from
this.
What
are
the
questions
we're
using
to
screen
folks
yeah.
C
We
have
people
in
the
pipeline
and
we
can
start
scheduling
interviews
as
soon
as
next
week
for
the
first
phase
of
research,
which
is
as
detailed
in
the
rfc,
which
is
kind
of
focused
more
around
the
different
segments
of
app
developers
using
build
packs,
as
well
as
docker
images,
and
maybe
what
might
be
their
goals,
challenges
and
motivations
to
switch
to
build
packs
from
other
tools.
C
Cool. Please, other folks, jump in or message me on Slack or anything. Once we start scheduling the first round of interviews, I'll let folks know about that in Slack. And as always, if anybody wants to either help conduct interviews, observe, or take notes, I would very much appreciate any help.
D
I'll give you an overview of what we were going to talk about, but I think it's probably best just to wait for Emily. I mostly wanted Jesse and Steven to hear this. Before the holiday, Emily and I started talking about what the first phase of introducing stack packs would be, and there are a few PRs that capture this, so I'll post them in the chat.
D
There are two spec PRs that start to sketch out this first phase, which is essentially just adding mixins and validation. So we're not even getting to stack packs yet. What we've uncovered is that the detect phase is the ideal place to do this, but it doesn't have all the information from the stack at that point.
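For readers following along: mixin validation here means checking that the stack actually provides what a buildpack declares it needs. A minimal sketch of such a declaration in buildpack.toml; the exact field names below are an illustrative assumption, not quoted from the spec PRs mentioned above:

```toml
# buildpack.toml (sketch; field names are illustrative)
[buildpack]
id = "example/node"
version = "0.0.1"

[[stacks]]
id = "io.buildpacks.stacks.bionic"
# mixins this buildpack needs the stack to provide;
# "build:" / "run:" prefixes scope a mixin to one image
mixins = ["build:git", "run:libpq"]
```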
D
The
first
place
you
might
get,
that
is
the
analyzer.
So
the
the
idea
we
were
sort
of
flirting
with
was
switching
the
order
of
analyze
and
detect,
which,
for
the
most
part,
is
fine,
but
in
practice
and
here's
the
draft
pr
where
I
started
to
work
on
that.
D
I
ran
into
some
problems
and
that's
because
the
analyzer
accepts
or
requires
the
group
tunnel,
that's
output
from
the
detector
to
know
currently
to
know
which
layers
it's
going
to
analyze
and
then
some
decisions
about
api
compatibility
version.
I
actually
think
some
of
what
it's
doing
today
is
wrong,
like
especially
with
regards
to
api
compatibility
version,
because
I
think
it's
using
like
if
you
were
to
switch
versions
between
two
different
builds.
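For context, group.toml is the file the detector writes to record the selected buildpack group, and it's what the analyzer consumes here. A rough sketch of its shape, from memory, so treat the exact fields as an assumption:

```toml
# group.toml (sketch), written by the detector and read by the analyzer
[[group]]
id = "example/node"
version = "0.0.1"
api = "0.4"

[[group]]
id = "example/npm"
version = "0.0.1"
api = "0.4"
```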
D
So
in
switching
the
order,
we
could
replace
group
tomml
with
the
metadata,
that's
on
the
previous
image
and
then
and
the
cache
too
to
determine
which
build
packs
were
used,
and
you,
I
think
the
the
disadvantage
of
that
is.
You
might
restore
a
layer
that
you
weren't
going
to
use
because
that
build
pack
was
removed
from
the
next
build,
but
like
it's
not
that
big
a
deal,
I
think
like
there's,
definitely
there's.
D
Actually
what
I
want
to
realize
is
there's
trade-offs
in
like
regardless
of
which
order-
and
I
think
that's
probably
acceptable,
but
the
bigger
problem
is
we
just?
Don't
have
all
the
information
we
need,
we
don't
have
the
new
version
of
the
build
pack.
So
if
you
had
multiple
versions
of
a
build
pack,
you
can't
simply
like
take
the
latest
one
to
figure
out
its
api
compatibility
version,
all
that.
D
If
we
left
them
in
the
same
order,
detect
and
then
analyze
what
you
know,
what
could
we
do
to
make
that
work,
and
so
something
I
was
talking
to
jesse
about
this
morning
was
maybe
detect,
doesn't
necessarily
finish
the
detection
it's
not
kind
of
like
these
are
the
groups
that
might
work
depending
on
mixins
and
then
a
later
step,
like
analyze,
could
sort
of
finish
up
and
select
a
group
based
on
what
mixins
are
satisfied
or
not
satisfied.
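A hypothetical illustration of that idea (nothing here is in the spec; the file name and fields are invented for this sketch): detect would emit candidate groups annotated with the mixins they require, and a later phase would select the first group whose required mixins the stack satisfies:

```toml
# candidate-groups.toml (hypothetical output of a non-final detect)
[[groups]]
# this group only works if the run image provides libpq
required-mixins = ["run:libpq"]

  [[groups.group]]
  id = "example/node-postgres"
  version = "0.0.1"

[[groups]]
required-mixins = []

  [[groups.group]]
  id = "example/node"
  version = "0.0.1"
```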
D
Yeah
the
I
think
it's
essentially
that
at
detect
you
don't
have
the
run
image
mixins.
I
think
that
I
think
that's
really
the
only
problem.
B
Yeah. Even if we broke it out into an entirely separate step by itself, it could run in parallel to other containers on a real platform. There are two things that would make me want to not put this in detect, Joe, since you're talking about that option. One is that it's helpful to have the detector and builder not do anything OCI-related, and just be good at running the buildpacks. We're using that to run...
B
...v3 builds on Cloud Foundry, for instance, where there are no images, because you can just pull the detector and builder in and then run them on top of the old lifecycle. The other thing is, just security-wise, it feels better to me if it's possible to do a build where the credentials are never involved...
B
When
build
pack
code,
that's
untrusted
could
be
executed
in
the
same
container
right
in
case
there's
some
kind
of
escape,
and
that
would
stop
us
from
doing
that
and
that
for
the
platform
case,
and
so
I
think
I
think
I'd
like
another
step
would
be
my
preference.
D
No, I think that sounds great. I was kind of trying to avoid that as I was thinking through things, but I hadn't thought about leveraging prepare for it. The only problem is it kind of starts to make prepare a required thing versus an optional thing. I don't know if that...
D
Oh, like you can run some subset of it? Yeah, something like that. Yeah, that's fine. I don't mind it being more of a required thing because, like I was saying yesterday, I feel like your platform has to do some setup. Tekton has this phase where it does some things, and I think that's somewhat inevitable. Prepare is sort of an attempt to standardize those things, or at least put them in a single place, so you don't have to just kind of yolo your own setup.
B
You
know
having
the
authentic,
like
having
a
step
between
all
of
the
unauthenticated
steps.
That's
authenticated,
I
feel
like
at
some
point.
There'll
be
some
need.
Well,
you
know
we'll
we'll
need
to
perform
those
operations.
Oh
and
having
creator
as
a
thing
that
runs
through
the
whole
process
right
and
makes
local
builds
really
fast.
Makes
me
a
lot
less
worried
about
introducing
more
steps
in
the
for
the
cloud
builds.
B
There's one thing we can't do with creator, which is extend the run image. At least, it'd be pretty crazy. So that's the only case where, locally, pack would have to run another container in parallel, but it could run in parallel to the normal build, so it wouldn't slow it down.
A
This takes a little bit of... when we're talking about having a config file, like we were talking about yesterday, it gets a little weird, because pack is currently the one that's processing project.toml and creating an order.toml. So if at some point we introduced a prepare that spits out an order.toml, then that would be problematic for pack, right? You would have to not do that.
B
On the topic of swapping, the other option of swapping analyze and detect: I like detect before analyze, because it feels like we should know what buildpacks we're going to use in the build before we start figuring out what we want to know about the previous build, just as an architectural thing. And I don't see a reason for it to happen before. So it also makes me lean towards just saying, yeah, we have another phase now.
D
As
I'm
saying
you're
making
some
compromises
like
I'm
just
gonna
analyze
these
things,
whatever
I
kinda
like
the
way
you
framed
it,
though
steven
it's
like
privileged
or
you
know,
has
creds
doesn't
has
grids,
doesn't
like
that.
Just
makes
a
lot
of
sense
when
that's
like
dropping
in
dropping
out.
B
Have the different stages in the first...
A
I guess right now you wouldn't have to... I don't know, should it be? I could see it being optional unless you're doing stack packs, right, because currently detect does its own thing. You don't have to know the run mixins if you're not going to do... or I guess we want them to eventually, though, so that we get the validation, right?
D
Yeah. So what Emily and I were talking about was that detect would take the output from analyze, which is an analyzed.toml I think, as one of its inputs, and that has all the information it needs. It also has some stuff it doesn't need, so we could have prepare pass some of that along too.
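A sketch of what an analyzed.toml carrying run-image mixins into detect might look like. The mixins entry is part of the proposal being discussed, not an existing format, and the rest of the file's shape is an assumption for illustration:

```toml
# analyzed.toml (sketch; the run-image mixins list is the proposed addition)
[image]
reference = "registry.example.com/app@sha256:..."

[run-image]
reference = "registry.example.com/run@sha256:..."
# proposed: mixins provided by the run image, so a later step
# can select a detected group whose requirements are satisfied
mixins = ["run:libpq", "run:curl"]
```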
A
Yeah, I'm just thinking of the Tekton template today. If it upgrades to the next version of the lifecycle and it doesn't have the prepare step, are there Tekton workflows that are just going to fail without an analyzed list of mixins?