From YouTube: GMT 2018-05-03 Containerization WG
A
Yeah, so currently there are two ways. One way is to fetch the image as a tarball. The other way is to talk to the Docker registry: get the manifest first, and then, after we get the manifest, we know which layers we should download, and then we pull down the layers. I think we could support both ways with HDFS.
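The two-step registry flow just described — fetch the manifest, then pull each referenced layer — can be sketched as follows. This is a minimal illustration in Python; the fields follow the Docker image-manifest schema v2, and the sample manifest data is hypothetical:

```python
import json

def layers_to_download(manifest_json):
    """Given a Docker/OCI image manifest (schema v2), return the list of
    layer digests that must be pulled before the image can be assembled."""
    manifest = json.loads(manifest_json)
    return [layer["digest"] for layer in manifest["layers"]]

# Example manifest, trimmed to the fields used above (digests are fake).
manifest_json = json.dumps({
    "schemaVersion": 2,
    "layers": [
        {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:aaa", "size": 100},
        {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:bbb", "size": 200},
    ],
})

print(layers_to_download(manifest_json))  # ['sha256:aaa', 'sha256:bbb']
```

In the tarball case, by contrast, a single archive download replaces both rounds.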
A
So for now we focus on the image tarball first, because I don't know how many people using Docker run the registry protocol on top of HDFS. I am assuming most people just talk to an HTTP or HTTPS registry, whether a local registry or a remote registry. And we want to address the tarball case, because the tarball might be a big file and it can come from different backends like HDFS, S3, etc.
A
We can pull the image from a local tar; otherwise we will try to pull the image from the remote registry. Another option is a URI: we could have an optional URI in the protobuf, whether in ContainerInfo or in the Image protobuf. We could specify an HDFS URI saying where the tarball comes from, and if it is set, we pull it as an HDFS image tarball; otherwise we do what we used to be doing. So both ways sound good to me.
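The optional-URI idea can be sketched like this. The `source_uri` field is hypothetical (it models the proposed optional field, not an actual Mesos protobuf field), and the dispatch mirrors the fallback described above:

```python
# Hypothetical model of an Image message carrying an optional source URI:
# if the URI is set (e.g. hdfs://...), fetch the tarball from there;
# otherwise fall back to the existing registry path.
def select_image_source(image):
    uri = image.get("source_uri")      # hypothetical optional field
    if uri:
        scheme = uri.split("://", 1)[0]
        return ("tarball", scheme)     # e.g. hdfs, s3, file
    return ("registry", "https")       # existing behavior

print(select_image_source({"name": "busybox",
                           "source_uri": "hdfs://nn:9000/images/busybox.tar"}))
# ('tarball', 'hdfs')
```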
A
But right now I am still hesitating to introduce this new API, because we do not support OCI yet and we don't know what's the best way to support it. For now, for example, we could introduce a new image store to support OCI, which means we would have to do some refactoring and unify the cache, so that we have a unified artifact store to manage all the user-specified content in one place. So here, "content" means what the user specifies.
A
For us, the content is the image, or the artifacts specified by URI. So if they are content addressable, we should have a unified data structure to manage those contents. That's one way to support OCI. Another way is to introduce a new containerizer which leverages either containerd or runc.
If we use runc, it means we have to do some refactoring and move the image-pulling code into the new containerizer.
A
So both candidates rely on the same code, the URI fetcher, to pull the image, and then we rely on runc to start a new container once we have a containerizer. Those are decisions we have not made yet, because we are starting to have more concerns right now: we have more and more container runtimes, from different communities and different companies developing their own runtimes. For example:
A
right now we have runc; we have Kata Containers from Hyper and Intel; some people recently announced gVisor from Google, which is a sandboxed container runtime; and other projects like Nomad have their own runtimes as well. For these different container runtimes, it seems to me they have different sets of features. They can all launch containers, but they differ in what they focus on: some of them focus on security, some of them do more on networking. So I start to think about, maybe in the future:
A
we should have a containerizer that allows users to specify different runtimes, whatever runtime they want to use, and then rely on that runtime to do whatever specifics they would like to achieve. The Mesos unified containerizer is going to be refactored as a standalone container runtime, and it will be the default runtime for Mesos, but users get to rely on any other runtime they would like to use, and we make all the runtimes work within one single containerizer.
A
That's the goal we want to achieve in the future, and that's one reason I added another agenda item here to discuss more container runtimes. These topics are all related, really related to how we support image tarballs, or image archives, from HDFS. So right now, the thing stopping me from introducing more APIs is the concern that we might have more runtimes to accommodate.
A
Then if the user uses the image again, we do not need to download the image from the HDFS server again, because we maintain a unified cache; that is just local metadata management in Mesos. There are some limitations on this. It means that when the user specifies an HDFS-based server, they have to put all the image tarballs under one directory tree in HDFS, and if HDFS is damaged, they cannot fall back to a local tarball or a remote Docker registry.
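The unified-cache behavior described here — a second use of the same image is served locally instead of re-downloading from HDFS — can be sketched as a digest-keyed cache. This is a toy illustration, not the Mesos implementation; `fetch` stands in for whatever backend (HDFS, registry) actually supplies the bytes:

```python
class UnifiedCache:
    """Toy sketch of a digest-keyed cache: a second request for the same
    content is served locally instead of hitting the remote store again."""
    def __init__(self, fetch):
        self._fetch = fetch        # function digest -> bytes (e.g. HDFS read)
        self._store = {}
        self.remote_fetches = 0

    def get(self, digest):
        if digest not in self._store:
            self.remote_fetches += 1
            self._store[digest] = self._fetch(digest)
        return self._store[digest]

cache = UnifiedCache(lambda d: b"layer-bytes-for-" + d.encode())
cache.get("sha256:aaa")
cache.get("sha256:aaa")       # cache hit: no second remote download
print(cache.remote_fetches)   # 1
```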
A
This is kind of a big limitation, and once we have a better idea of how we should support OCI, we will definitely refactor this part and allow the user to use different sources, for example a local tar, an HDFS tar, and a remote Docker registry, so that at the API level they can decide whatever they want. So yeah, I would like to hear what you guys think.
D
Sorry, it seems my network was broken. So I feel like there are multiple problems here. One is supporting multiple runtimes; that is one. The second is that, because we need to support multiple runtimes, we might need to support different image formats, container image formats. And the third problem is: how do we serve these image formats from HDFS? Am I seeing the problem correctly?
A
Yes.
D
One quick question: I used to be fairly familiar with the Docker image format, but I forgot we've had OCI for a while, and I don't know what gVisor and the others actually use. Do they all use the same content-addressable format? Are they similar to the Docker image format, basically something content-addressable?
D
I was going to get there, actually. So one project my teammates have been working on is building a p2p distribution engine for content-addressable binary artifacts, no more, no less than that, and it is broken up into roughly two logical components. One is the core part of the p2p distribution; the other is using HTTP as the source for the p2p distribution system. And for the clients, we have been saying that the only supported clients are HTTP clients fetching these content-addressable binary artifacts.
D
If you remember, the Docker image manifest itself is one content-addressable file, a small file. When we read and expand it, it has a lot of metadata, but then all the layers, which again are content-addressable tar files, work in the same system. So it's like two rounds of HTTP: everything gets downloaded through HTTP, and that's it.
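"Content-addressable" here just means a blob is named by the hash of its bytes, which is also how a download is verified. A minimal sketch:

```python
import hashlib

def digest(blob: bytes) -> str:
    """Content-addressable name of a blob: its sha256 digest, the naming
    scheme used for both the manifest and each layer tarball."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

manifest_blob = b'{"layers": []}'
name = digest(manifest_blob)

# Verifying a downloaded blob is just re-hashing and comparing names.
assert digest(manifest_blob) == name
print(name)
```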
A
It is called Jack and Froy, and they asked me if we are interested in taking that component and making it supported on Mesos. Two years ago we had a JIRA, an epic, to support p2p content fetching, but I don't think it was prioritized by Mesosphere, though I do think there is a lot of interest.
B
The main technical problem with proxying everything through a Docker-compatible registry interface is that with that approach you always have to pull down the whole archive; I guess these would probably have to be tarballs anyway. But when you're bringing up a container, you typically don't need most of it. So you can do better if you have a better understanding of the body you are pulling down. Sorry.
D
We put a thin layer in front that acts as a Docker registry. So, exactly like what James described, we have the local p2p agent, and I believe the local p2p agent has an extra interface which acts as a Docker registry interface. All our Docker image references are translated into a localhost-prefixed form, so it serves the images through there.
A
So you guys do p2p fetching between your own agents to deliver each layer as a tar, and then serve them from the local registry; the local registry runs on each of the agents, and the registry puller pulls the manifest and then downloads each of the tars from localhost and consumes the image. Makes sense.
A
We should have had this discussion earlier, because people have been discussing p2p fetching on Mesos for a while, and I believe at some point, when we introduce a unified artifact store, we have to unify the fetcher, and maybe we could introduce p2p fetching in the unified fetcher, with a unified cache, so that we could... yeah.
D
Another comment I'd like to make is just based on my interaction with the team; I sometimes provide some product perspective on these ideas, but I don't work on this directly, so take it as an impression from our work. I think a lot of the optimizations and improvements on p2p are very organization-specific, related both to the usage pattern and to the actual physical network fabric of the organization using it. So, imagining...
B
For our use case, one of the principles is that we want all the images, or the user task artifacts, which can include images, to be reliably stored within the cluster. We could use p2p for distributing things across agents, but we'd also want some guarantee that all the content is reliably stored within a single cluster and we don't have to reach out to S3 or some other external store.
D
Yes, I think that's also our requirement, although in our case all the content is safely stored in some kind of, consider it cold storage; it is S3 actually. I think that's what happens behind the upload paths: I believe our system first ensures that all the content is safely stored, on either S3 or HDFS, and after that there might be optimizations to speed up the distribution, but mostly we hope most of the traffic pulled onto the agents happens through p2p.
A
So the requirements and configuration are organization-specific in nature, and basically I believe it may be possible to have some universal support, with different options for users to talk to different components. But yeah, we could discuss the p2p fetching in the future once we get to the artifact store and design the new fetcher in the Mesos codebase. For now, James: do you think the HDFS proposal, on the Docker registry side, has limitations that cannot fulfill your requirements?
A
Yeah, I think eventually we have to somehow support content-addressable fetching with a unified cache on the agent side. And if we introduce more runtimes, there's a second, separate problem: we may not want to rely on third-party software to pull images, because we already have that support and we don't need to duplicate things. Which means, if we want to support OCI with a different runtime, we may not pick containerd; we may just use the lightweight runc as a separate containerizer and still pull the image by ourselves.
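A runc-based containerizer would hand runc an OCI runtime bundle: a directory containing a `config.json` plus an unpacked `rootfs` that the containerizer itself produced by pulling the image. A minimal sketch of such a config (field names follow the OCI runtime spec; the values are illustrative, not what Mesos emits):

```python
import json

def minimal_oci_config(args, rootfs="rootfs"):
    """Skeleton of an OCI runtime-spec config.json -- the input runc
    expects after the image has been pulled and unpacked separately."""
    return {
        "ociVersion": "1.0.0",
        "process": {"args": args, "cwd": "/"},
        "root": {"path": rootfs},
    }

cfg = minimal_oci_config(["/bin/sh", "-c", "echo hello"])
print(json.dumps(cfg, indent=2))
```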
A
We want a flexible image source, no matter whether from HTTP, HDFS, a tarball, or a registry. And at the point where we understand how the OCI support API should look, it will be easy to decide, and we could refactor this part. So there might be something we are going to be doing for the HDFS side, and I will have you send out an email, and we can figure out later what we can then do with HDFS.
A
Okay, so yeah, we can continue the HDFS discussion in the email thread, and I will send it to the dev list. For the second item: I think this is more important to containerization on Mesos, because it determines what the future kind of looks like, which container runtimes are going to be supported on Mesos.
A
As I mentioned, in the past half year I have seen a lot of other container runtimes become popular. Previously people just used Docker, and then, starting two years ago, Mesos in fact had its own containerizer, with its own container runtime relying on Linux features. Then other projects like Nomad started to announce their own runtimes; Kubernetes, as we've talked about, works with the open-source containerd; and Docker contributed runc to the OCI community, which means they all rely on runc.
A
Runc is seen as the common container runtime by all of them. And like six months ago I heard about the Kata Containers project from Hyper and Intel. Basically they announced it as a lightweight container: it is derived from a conventional VM, and they moved some subcomponents out from the VM and packaged it as a container. Basically it is something between a container and a VM, but it has some VM features, like it can isolate some resources via the hypervisor, which means it's more secure.
A
With gVisor, by intercepting the kernel interface and rewriting some kernel features in Go, they could achieve some security goals. You can use it via docker run, backed by runc; I believe it is supported through runc, and you can specify a different runtime, a container runtime. It is pluggable: it is configurable via both Kubernetes and Docker, so users can specify whatever container runtime
A
they would like to rely on, and the runtime is compatible with those platforms, meaning Kubernetes and Docker. My concern here is that in the future we might have more and more container runtimes, and the different runtimes can all launch container processes, but they might have different sets of features in their own runtime. For example, some users really care about security and would use the Kata runtime or the gVisor runtime, and some users might want some storage-based features, and they can...
A
I would like to see Mesos users have flexible options for different runtimes in the future, because it is hard for us to catch up with every kind of support ourselves. Like, we have been discussing VM containerization, supporting a VM API via a VM or hypervisor, but that is still in the investigation stage; it has been open for a while and we don't have much focus on it. Which means, if we want to support more runtimes, we might rely on third-party software.
A
That does not seem bad to me, because users still have the default on Mesos, and they have the option to use any other runtime they'd like. So the discussion here: I would like to hear what you guys think about supporting multiple runtimes, how we are going to support them, and maybe how to make them compatible within one single containerizer. I think you guys might have some ideas about this.
B
Yeah, I think the direction is great, and I support being able to plug third-party runtimes into Mesos. I didn't realize the OCI interface was so atrocious. In general, my experience is that exec-ing binaries is a terrible API; it will come back to bite you every time. The debugging experience, the error propagation, and the error handling all tend to be terrible. Generally, trying to interact with these things as an API is shamefully, disgustingly horrible.
A
Yeah, so if I remember correctly, containerd runs on gRPC, and I'm assuming the others are also built around gRPC, but that is no problem for Mesos: we already have gRPC support for libprocess, so we can leverage gRPC to communicate with the different components very easily. It should be very simple.
A
Yeah, that's something people have to figure out in the design, because I agree on the concern about the API mapping: I guess there are maybe some features Mesos has that the runtime may not have, and maybe more features runc supports that Mesos doesn't support yet, and there are some features that maybe both the Mesos API and runc support, but they might not be compatible.
A
So we have to deal with each of those cases carefully. Maybe initially we could support some very basic functionality, and then in the future we can add each feature step by step to the API. It will be more like we capture topics on the dev list, and people can get together to discuss whether or not the API design makes sense. There might be things we can move on step by step.
A
Yeah, I think Jie was trying to ask whether someone is interested in this, and if we don't have any opponents here on supporting multiple container runtimes, we can work on this topic together. Once I have time, I can put some stuff in a document, and every time we get a chance we can add more to it, so we can investigate each runtime step by step.
D
No, I don't have too much to contribute here. I think, like James, I support looking into more runtimes. We have potential use cases for them here; they are not high priority, but if an existing plugin is there that we can just take off the shelf and use, we might support that for some customers.
A
I think we could still support both cases. Take the Docker containerizer as an example: it supports both ways. People can build their own custom executors, and the custom executor is going to run inside a Docker container; the other way is to rely on the command executor, and the command executor launches the Docker container. So I think for the other container runtimes we might still have both cases, but I am starting to think:
A
maybe we could do this: have the default executors support the new runtime with the new containerizer, and then people could still specify a custom executor which runs inside the new container runtime. So, for example, runc launches the container, and the custom executor runs inside that container. That's another way to go. I think, yeah, we need to think about that.
D
A comment I'll make is: for people who are not that familiar with Mesos but have some idea about containers and scheduling things, the concept of an executor has always been pretty strange. I think, from a historical point of view, when people started building the first custom executors, containers may not have been as widely adopted as they are now.
D
So there was a lot of custom runtime functionality that was best built into a custom executor. I think as time evolved and patterns like pod, and especially pod or task group, started to mature, people realized more and more of that work can slowly move into those design patterns, rather than involving various custom executors each specifically built for one purpose. So I'm just expressing the impression that maybe sometime in the future...
A
Yeah, basically Mesos does not have the concept of a pod, so it is still executor-based: people launch a task, or tasks, inside an executor. If I add my own understanding: task and executor are different concepts in Mesos, and container is yet another concept. People can run executors inside a container, or people can have the executor running outside the container and have the executor launch the container.
D
Task is really critical, because the entire status-update design and the system reliability, the reconciliation, is built solely around tasks; people grasp that concept pretty well. Container, once you generalize it, maps to its own runtime and system isolations, and it's a piece of work that's actually happening on a machine; people can grasp that concept pretty well too. But the concept of an executor gets pretty vague: what is it actually, what is the piece of code or config we are actually running, and is it actually healthy?
D
From the developer's perspective, for those of us very familiar with the codebase, it's obviously all easier to understand, but there's a pretty big difference for outsiders. If you're asking how to make Mesos widely adopted and easier to use: people expect to work with containers. People build the image; that is what they run in their developer workflow. They say: OK, you have an app now, let's have a container for the task, and then your thing will seamlessly run in the data center, in the cloud. People can understand that.
A
Yeah, I think some of the confusion comes from users who used Docker or Kubernetes, knew those are container-based, and then come to Mesos. Let's say a new user is using some built-in executor, like the command executor or the default executor in Mesos. Our expectation is that they only specify two things: one is what the task looks like in protobuf, and the other is that they have the option to specify an executor.
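The "task plus optional executor" expectation can be sketched as a simple defaulting rule. The field names here are illustrative, not the exact Mesos protobuf:

```python
def resolve_executor(task):
    """If the task carries no ExecutorInfo-like field, fall back to a
    built-in (command) executor, mirroring the expectation described
    above. Field names are illustrative, not the actual Mesos protobuf."""
    if "executor" in task:
        return task["executor"]["name"]
    return "command-executor"   # built-in default

print(resolve_executor({"task_id": "t1", "command": "echo hi"}))
# command-executor
```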
A
So if it is the default executor, if they want to use a built-in executor, they just need to specify the task, so the task is the only user-facing concept. And then there are some advanced users who want to do some customization with their own custom executor, who want to do different things, so they need to specify an ExecutorInfo.
A
And for container, I think those confusions come from some legacy issues, because we didn't have a concept of a container many years ago. We only had executor and task, and the executor only managed the Linux process and monitored the lifecycle of the process. Then we added cgroups, and when we added cgroup support for the task we didn't have the concept of a container yet; and after cgroups v1, when we wanted to add more isolation, we realized... oh, that is the next point.
D
It's both for new users and for the much longer term. If we look maybe one year or two years down the road and honestly rank the different concepts by which things in Mesos matter most to the ecosystem and the users, container, these days, is honestly the most important thing we care about. We also need something task-like, because without task, in the model, a task is just the ID, the status, and the interface around it.
D
What I am arguing, from my experience with these frameworks, is that a majority of that functionality can be modeled through the task, the container, or the task group. You probably do not necessarily need to model these things with a custom piece of code running on the agent, yeah.
A
Yeah, I can help you with patches once you have them. Initially I have been thinking this might not be easy, because right now we are running the first-level container as the executor, and it is deeply embedded; I'm not sure whether changing it would break the UI, and I'm not sure if it would break some other things, but yeah, we could definitely achieve that at some point. Okay, we are running out of time, so, James...