From YouTube: Working Group: 2021-01-14
A
And splitting it into two parts: writing the analyzed.toml in the first part, and then moving the bits that actually analyze those layers to the restorer. So I'd like to kind of open the discussion there and figure out: is that the path we'd like to go? I think it's the least amount of changes, because it means that we still have the same number of binaries.
A
The order in the next platform version, at least, would be swapped and the arguments would be changed, but functionally it would be very similar to how it is today. So before writing an RFC, I wanted to figure out whether I needed to run an RFC to get all the votes, or whether, since it's just sort of moving code around, a spec change is sufficient.
C
I feel like thinking about how the implementation would work. The other thing I want to discuss with this, though it seems like it could be a last-minute detail...
D
My only pushback initially on swapping the two things was the idea that the parts of the analyzer that recover the layer metadata didn't feel like they should happen before detect, before we know what buildpacks we're going to use. So if we're going to move that portion to restore, and we maintain the interleaving of privileged and unprivileged phases and don't put two steps next to each other that have those same requirements, that sounds perfect to me.
D
So just the idea of keeping the names analyzer and detector and swapping the order, as long as the functionality ends up in the right places: that would probably be my preference.
A
Yeah, I had no objections based on what's currently in there. But I do think at some point, if we decided to download buildpacks, or potentially modify the app directory with the project descriptor, like removing files that are excluded by the project descriptor, then I could see the analyzer name no longer fitting. But until that's an RFC, I wasn't going to push for it.
D
Both of those things seem ideal, in my head at least, for that preparer phase, and that's optional, because certain platforms are going to want to exclude things outside of container land, right? Like pack will probably want to exclude things from the upload instead of deleting them in the container, because it's local, and that makes sense. Or kpack, before it uploads a source blob: if you said you wanted to exclude some local credential, you wouldn't want it to accidentally end up in your registry.
D
I think there are strong use cases for that excluding ability being something that is optional in container land and can happen outside, and similarly for buildpacks, right? Some platforms are going to want to construct the builder image with the correct buildpacks by creating a new ephemeral image that has the buildpack layers orchestrated onto it. Some might want to download them dynamically.
D
So even if it's a privileged step sitting next to a privileged step, because that preparer is kind of a platform-specified privileged step, it feels okay to me for that to be separate. I don't know if that changes your opinion about the analyzer.
C
There are a couple of things that enough platforms want that, even if not every platform wants them, like downloading buildpacks, I could see it might make sense just to include them, and then we wouldn't have to specify what the preparer is. But we can leave that namespace open for platforms to inject their own first container.
A
I think that could come later. Moving the analyzer with the current stuff we have makes sense to me, and then if we do decide we want to change it, I think we could have this discussion again and symlink it. But that will be after everyone's already moved it to the first phase and everything, and maybe it's a 1.0 discussion; maybe we'll be closer to understanding what we think extensions of prepare might look like.
C
But I like naming it analyzer and starting with the smaller set of things, so that we can either add more functionality to this phase and keep the name the same, or add a new phase later if we feel so inclined.
A
Yeah, for backwards compatibility, we could rename everything to prepare now, and then keep a binary that points to prepare for now, right, and then just...
C
It's simple in that we keep the number of binaries we're shipping the same, which would be helpful for systems like pack, but it's more complicated in that if you have one entry point, then we have to do forking logic. It's really not that different, though, honestly; the amount of work isn't that much different, but the implementation will look a little bit different.
A
He's not coming today, and he doesn't care what it's named. I was talking to him earlier; he doesn't really care.
C
I feel like we could update this RFC, because moving the phases around is what's in this RFC, like our...
C
I had one extra question about this, actually, while we're looking at it. We have an input file that I think most people don't use now, other than I think kpack uses it, called project metadata, where we include source information. And I think it was the place where we also wanted to copy over some of the metadata values from project.toml, like authors and documentation URL, as a way of providing them to the lifecycle.
C
I believe kpack, if it knows that the source came from a git repo, will provide the commit SHA in project metadata. I think it's something we want pack to do in the long run; I don't think pack does it now. And this information ends up in a label on the image.
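[Editor's note: for context, the kind of metadata being discussed might look roughly like the sketch below. The table and key names here are illustrative, not authoritative; the actual formats are defined by the project descriptor RFC and the platform spec.]

```toml
# project.toml (project descriptor) -- illustrative sketch
[project]
id = "io.example.my-app"
version = "1.0.0"
authors = ["Jane Doe <jane@example.com>"]
documentation-url = "https://example.com/docs"

# project metadata input file -- illustrative sketch of what a
# platform like kpack might provide about the build's source
[source]
type = "git"

[source.metadata]
repository = "https://example.com/my-app.git"
revision = "0123abcd"  # the commit SHA mentioned above
```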
B
Pretty quick: I'm working on this research study. We spoke to two people yesterday, which was great. Natalie joined for one of the interviews, and I got a researcher from VMware to join another one. I have a couple of questions that we're using in the script that I feel are not playing out the way I thought they would, and I was wondering if I could get some feedback from you all.
B
Maybe on how to make these questions a little bit better. So one of these questions: we're speaking to folks who are Docker users today, who may also be users of another, more automated kind of container build system, like buildpacks or Jib or something. One of our questions around Docker is which of the advanced features of Docker they are using. Maybe even thinking of these things as advanced features is not the right way to think about it, but right now these were the ones that I was able to come up with around ways people might be using Docker in a more sophisticated way. Are there others that you all would suggest adding in here?
A
I'd be curious, I guess, to know if they're using Docker-like alternatives as well: using the docker CLI, but with some of the replacements that don't use the... I forget what it's called. It's not Packer, is it?
D
I might phrase the question a little more generically, maybe like: what alternative tooling do you use that either wraps Docker or replaces Docker to do builds? Because I could see it might be interesting if somebody's using docker-compose instead of calling docker build, or it might be interesting if somebody's using Skaffold to do their iteration on top of Docker. Maybe it's just as interesting whether something shells out to Docker as whether it replaces it completely.
A
That wasn't what I was thinking. Yeah, like the ones that maybe don't use the Docker Desktop installation, right, rather than wrapping it.
D
I think people might not know that exactly, so keeping it generic will get all of those answers, as long as we ask them specifically what they're using. I think kaniko might be another one. Which one is it that kaniko replaces? Yeah, kaniko kind of replaces it. Packer?
A
Packer, I think, is a HashiCorp tool for... well, I think you're thinking of podman, right?
D
The two above this question, build caching and run as unprivileged user: I don't know if people are going to understand those. When you use build caching with Docker, the caching isn't very explicit, right? It's just rebuilding on top of layers that already exist, so a lot of people may just think of it as their builds being a little bit faster. There's no build volume you might use to explicitly cache things and restore them.
D
You can design your layers so that they cache more effectively; like, you can hoist your package.json, and we have some slides for converting to buildpacks that are like: you really want to do this in your Dockerfile, versus just having build logic that's organized more naturally. So you could ask it that way, right: how much do you contort your Dockerfiles in order to achieve faster builds? That could be an interesting question.
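[Editor's note: the layer-design pattern described here is the usual Dockerfile caching trick; a minimal sketch for a Node app, where the base image tag and file names are just examples:]

```dockerfile
FROM node:14-alpine
WORKDIR /app

# Hoist the dependency manifests first: the `npm ci` layer is
# reused from cache unless these two files change.
COPY package.json package-lock.json ./
RUN npm ci

# Copy the rest of the source last, so day-to-day code edits
# only invalidate the layers from here down.
COPY . .
CMD ["node", "index.js"]
```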
A
Another one that might be interesting would be how many users use a remote Docker daemon, because I know we use volumes and stuff like that. So understanding how many folks are building their images locally against a remote instance somewhere would be useful information, I think.
B
Okay, and then you guys mentioned that there's a difference in how you would interpret it if someone's using docker-compose versus docker build. The folks that I spoke to yesterday were both using docker-compose. What's the difference, like what's the meaning that you two take out of that if someone's using one or the other?
D
For docker-compose, very specifically, it just means that in your docker-compose.yml, instead of saying deploy this image, you can say build this directory, which has a Dockerfile in it, and deploy it, if that makes sense. And you can't really do that with pack, right: you have to pack build first and then reference the image. So understanding whether everybody in the world isn't using pack because it doesn't fit into their fast, get-it-running docker-compose workflow could be really useful information for us.
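[Editor's note: the compose behavior being described is the `build` key on a service definition; a rough sketch, where the service names and paths are made up:]

```yaml
# docker-compose.yml
services:
  web:
    # Compose can build this directory (which contains a
    # Dockerfile) and run the result in one step...
    build: ./web

  api:
    # ...whereas a buildpacks-built image has to be created
    # beforehand (e.g. `pack build my-api`) and referenced by name:
    image: my-api
```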
B
Okay, so you might be doing a docker-compose and just composing those containers locally, but you might also, when you run compose, if it's in that config file, have it automatically put it up in your dev environment.
D
Not exactly. This is more about... because docker-compose is for running containers: oftentimes people are using docker-compose to run containers locally in their Docker daemon, if that makes sense, and a lot of those workflows involve pointing docker-compose at a directory that has a Dockerfile in it to build, instead of doing a docker build first and then putting the image reference inside of your docker-compose file.
D
Docker, I mean. Did you mean more for runtime?
C
Yeah, I guess I'm curious about what features... I don't know if this is captured elsewhere in the research survey, but I think when we were planning questions it was sort of: what features of Docker are people familiar with, even if they're not using them in a build context? Like, do they use volume bind mounts, the networking stuff, things like that.
A
Like another Docker image that's running that might not be exposed to your normal host; maybe you need to talk to Redis during your build or something like that, right? Or talk to resources that are only available on your corp network, so running your build on the corp network means you would gain access to something like Artifactory during your build, just by virtue of being on the network.
D
One that might be really interesting, some functionality that I don't know if we've implemented in a satisfactory way: Docker kind of recently introduced secrets, where you can mount a file at build time into a specific location that has credentials in it. It's not as powerful as volumes, but it lets you inject something that definitely doesn't end up in the image metadata.
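[Editor's note: the feature being described is BuildKit's build-time secrets; roughly, it looks like this, where the secret id and paths are examples:]

```dockerfile
# syntax=docker/dockerfile:1
FROM node:14-alpine
WORKDIR /app
COPY . .

# The secret file is mounted only for the duration of this RUN
# step; it is not written into any layer or the image metadata.
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm ci
```

Invoked with BuildKit enabled, e.g. `DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc .`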
D
Whether people are using that a lot... I think people don't really know it exists quite yet, even though it's been out for a while; that's my take. But whether people are using it a lot would be good to understand. I think we do have something kind of similar in the platform with build-time environment variables, essentially, but it would be interesting to know. Cool.
B
That's a great one. Okay, yeah, please feel free to keep adding more of these as they come to mind, but I will move on to the next one, unless there's more.
D
Actually, when you say unprivileged user up there, that's one of the ones at the top: there are two really different interpretations of that. One is that they're running the daemon as an unprivileged user, and the other is that they're running the container as an unprivileged user, like: do they use a non-root user inside of the container? I would say non-root, even, because even root inside the container can be namespaced outside.
B
Perfect, perfect, okay. And then I have this question that feels super awkward when I say it. I'm trying to get at this idea that what goes into your container is kind of a very intimate thing with respect to your application; it sort of requires a lot of trust if you're going to trust someone else to generate that environment for you. So we have sort of this one-liner describing the idea of buildpacks.
B
Very, very generally, it's "a self-creating Dockerfile that analyzes your code to build a container image without having to write the file yourself." So yeah, I'm sure you guys take issue with this.
B
Is there a better one-liner you would prefer? And then secondly, I'm sort of getting at how comfortable people are with building their containers using logic that somebody else wrote. I'm using this phrase, "logic that's off the shelf," but I feel like that could be better.
D
I think, on one hand, that first "it's like a self-creating Dockerfile" line is very descriptive. I think it does describe the outcome of the project for a local build, if that makes sense, in a way that I understand is helpful for making people understand it.
D
The problem I have is that there are a lot of projects that automatically generate Dockerfiles by templating things into them, and that description gives you an impression of some limitations that Cloud Native Buildpacks might have, especially around working at scale or replacing individual layers: things one would assume from that sentence that aren't true. They're really a totally different way of building OCI images.
B
Fair, fair. So with that in mind, your preferred one-liner?
B
Okay, so maybe this: "it's a replacement for a Dockerfile that transforms source into an image without having to create that file yourself." Yeah, without...
B
Cool. And then there's that sentence which introduces it, and then there's this question of comfort level with, essentially, this logic: how comfortable are you with build logic that someone else wrote, basically. So I think I kind of like...
B
Can you rank them for us on a scale of 1 to 10 in terms of importance to you? The list of items, which I aggregated after many conversations with SMEs and reading documents and stuff, and Natalie and I worked on this together, is this list. Is there anything that you feel is missing or that we should replace?
D
This looks great. The one thing is "ability to make OS updates as fast as possible": this feels like a pretty specific feature of buildpacks, right, so I might say "ability to patch security vulnerabilities as quickly as possible" to make it a little more generic.
E
Well, I guess "security vulnerabilities" is very generic, right, because that could be app-level. Does that just mean updating an npm module because it is now out of date in my app?
D
We have three bullets under it or something, like: OS packages, question mark, and for language runtimes, or for application dependencies.
E
And then I feel like that terminology, at least, is very Docker-centric, so it would make sense to people, right. It's like base image versus code I own.
D
I almost think the interesting thing to me is whether one and two together would be at the top for the same person. Like, I found some people are really concerned about correctness in building the container: it has a very minimal base image, it looks like this and this. And then some people are very concerned about...
B
And this guy works for a real estate SaaS firm in Texas.
B
So yeah, the fun thing about this is that we'll be able to take an average at the end and have an aggregated score.
B
I think this is kind of a net new, more modern one being created. I think his concern is that at his organization, everybody is responsible for their own Dockerfile for their own app, and he really wants that personal satisfaction of knowing why each thing is in there, so you're not blindsided by something: you really know the purpose of every dependency and command.
E
Yeah, it's interesting: I feel like with Stephen's comment, he cares about being able to easily update and add things, but then size of the image was, I think, seven or something, near the bottom of that list.
E
It almost feels like what you just said would make me think that size would be much higher on his priority list, but it isn't.
B
Yeah, and there's some additional color in the notes about the why of that, but I haven't gone through it yet.
D
It's very interesting data. And what Taran said before, about being able to correlate this with other information about what their use cases look like, would be really helpful too, I think.
B
I'm trying to get that context into the start of the interviews. Great, well, that's all for me for today. I will keep you guys updated and share some more context soon.