From YouTube: GMT 2018-02-22 Containerization WG
Description
Agenda and notes:
https://docs.google.com/a/mesosphere.io/document/d/1z55a7tLZFoRWVuUxz1FZwgxkHeugtc2nHR89skFXSpU/edit?usp=drive_web
B: Okay, I think we can get started. [inaudible] today, so that's the agenda.
C: In the agenda, the first item is from Sagar, to discuss the network namespace work: configuring per-nested-container network namespaces, a change to the CNI isolator. I think you and Chen have both already reviewed the PR. That'll be like ten to fifteen minutes, and then maybe the rest of the time is for the proposal from Jason and [inaudible].
D: Hey guys, my name is Sagar and I work at Yelp as a software engineer. I work on our distributed computing team, which is responsible for running Mesos, and today I'm going to talk about a change that we propose to the network isolator, the CNI isolator. So basically, at Yelp we have this application called Seagull. What this application does is run things in parallel, so we use it for a lot of different use cases: for example, running tests in parallel, running classification machine learning models, etc., but especially for the testing use case.
D: The application was developed before Mesos added support for the default executor. So basically, how we run containers right now is: we have a custom executor which gets started on a host. It then talks to the Docker daemon and starts a bunch of containers, and we have this testing process which uses these internal microservices, basically for making sure that APIs are consistent, etc.
D
So
we
are
kind
of
trying
to
containerize
the
main
testing
process
right
now
it
runs
on
the
host,
but
we
are
trying
to
get
it
into
a
container
and
because
all
these
docker
containers
run
without
the
knowledge
of
measles,
it
kind
of
becomes
difficult
for
us
to
clean
up
these
containers.
So
we
have
some
solutions
that
we've
developed
in-house.
For
example,
we
have
a
proxy
for
daca
daemon
which
recognizes
that
the
process
that
created
docker
containers
has
gonna
be.
It
then
goes
ahead
and
cleans
of
the
containers
that
were
spawned
by
this
process.
D: We also have out-of-band cron jobs that do the cleanup of these containers, but what we are basically looking for is a way to containerize the testing process and bring the lifecycle of all these Docker containers under Mesos. If we were able to delegate the responsibility of lifecycle management of these containers to Mesos, it would be easier for us to clean up the containers and not run any kind of custom stuff.
D: So we were looking at different solutions, and we came across pods, or task groups, which basically provide you all-or-nothing semantics, which is what we are looking for: we want either all the containers to start together, or we don't want them to start at all. Looking at these solutions, we came across one limitation: all the containers in the nested container group share the same network namespace and mount namespace.
D: This is not ideal for our use case, mainly because the service discovery mechanism used for this testing application requires all the services to have a separate IP, and all of our internal microservices bind to container port 8888. So unless the containers have separate network namespaces, they cannot bind to the same port. So we started discussing this a couple of weeks back and went over a few solutions.
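The port conflict Sagar describes is easy to reproduce: within a single network namespace, only one socket can bind a given address and port. A minimal sketch with plain Python sockets (an ephemeral port stands in for the talk's port 8888 so the sketch runs anywhere; this is illustrative only, not Mesos code):

```python
import errno
import socket

# First "service" grabs a port (stand-in for container port 8888).
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
port = a.getsockname()[1]

# Second "service" in the SAME network namespace tries the same port.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    conflict = False
except OSError as e:
    # EADDRINUSE: exactly why each nested container needs its own netns.
    conflict = e.errno == errno.EADDRINUSE
finally:
    a.close()
    b.close()
```

With separate network namespaces, each container gets its own port space and both binds succeed.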
D: So that's why I created this ticket, MESOS-8534, to basically track the progress of this work. I looked at the CNI isolator code, and it basically looks like we can use the existing code. All containers right now can specify the NetworkInfo protobuf in the ContainerInfo, but we have an explicit validation check in the master which prevents nested containers from having this NetworkInfo.
D: So what this work will entail is basically getting rid of that check, so that if containers in a task group specify NetworkInfos, they'll get the network that they're requesting, which also means they'll get separate namespaces. I posted an initial pass on GitHub where I'm able to do that: containers can have separate namespaces if they want, and this also retains the existing functionality, meaning people can continue to use pods as they are doing right now. This is an opt-in feature.
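The opt-in described above can be pictured as a task-group launch where each task's ContainerInfo carries its own NetworkInfo. The sketch below builds illustrative JSON in Python, modeled on Mesos v1 API field names (`tasks`, `container`, `network_infos`); the network name `my-cni-network` is hypothetical and this is not a validated protobuf:

```python
# Each task in a task group may carry its own NetworkInfo, asking the CNI
# isolator for a dedicated network namespace. A task without NetworkInfo
# keeps today's behavior and shares the pod's namespace.

def make_task(name, cni_network=None):
    task = {
        "name": name,
        "container": {"type": "MESOS"},
    }
    if cni_network is not None:
        # Previously rejected by master validation for nested containers;
        # with the proposed change this requests a separate netns.
        task["container"]["network_infos"] = [{"name": cni_network}]
    return task

task_group = {
    "tasks": [
        make_task("service-a", cni_network="my-cni-network"),
        make_task("service-b", cni_network="my-cni-network"),
        make_task("legacy-task"),  # no NetworkInfo: shares the pod's netns
    ]
}
```

Here `service-a` and `service-b` could both bind port 8888, while `legacy-task` behaves exactly as pods do today.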
B: Sounds great. So, James, I think I saw you had a question. I think some of the folks had some questions around the motivation of this, whether it's a good idea or not. I would be curious what the reasoning is: why you think it's not a good idea, or why we should not do this. I would like to understand the rationale behind those comments I saw, James DeFelice.
E: I kind of echo the thoughts on that. I think the notion of pods is originally about sharing stuff, so this is a departure from that. Internally, in the Mesos isolators, my main concern is that the semantics of nested containers are basically hard-coded in all the isolators in slightly different ways. So it sounds like what you really need is not pods; you need separate containers, top-level containers that are orchestrated into a tree.
B: It's totally up to the isolator to decide, and also up to the API: it's up to the isolator to interpret the API and to decide what the semantics are. For example, we do allow sharing and not sharing the PID namespace right now for containers running inside the same executor. So I think adding support for a network namespace makes sense to me, as just a more flexible way to support different use cases. I don't think we should dictate that everything should follow the same semantics.
B: Like every single nested container sharing the same namespaces as its parent: we never stated that when we designed the pod feature. It's not really "pod"; it's task group plus nested containers. That's my take, yeah. [crosstalk] You said flexible, but I heard complex, right? That's the trade-off, right.
E: Sorry, my concern around this is that I don't feel like Mesos really defines the semantics of nested containers very well today. Without actually going and reading a whole bunch of code, it's very difficult to know what a nested container means, what a pod means, what all the stuff means, and how they work together, and this sort of continues down that path of making it hard to understand. I think the use case is totally reasonable, though.
B: Yeah, I think it's a trade-off. Kubernetes and the other platforms define predefined semantics for all the pods, where we're trying to be flexible, but that also means complexity. Yeah, that's a trade-off, but we do have use cases for this. I think the best thing we can do, and should do, is add documentation to make sure those semantics are clearly documented.
E: Right. I guess the point now is: once you go down this path, then you start thinking, well, is the port-mapping isolator compatible with this? What are the port-mapping isolator's semantics? It has different behaviors. If I want to have a network [namespace] but use a nested container, do they behave consistently?
B: For port mapping, I think it does not support nested containers. So yeah, we haven't documented this, but we should document that the port-mapping isolator cannot be used in conjunction with nested containers. It doesn't work; it simply does not work. Okay.
C: So yeah, I understand the use case: you guys want to launch a group of containers whose lifecycle you can manage by using the default executor, while you also want each of the containers to have its own network namespace, and to make that configurable. That sounds reasonable to me, and some other people might also be able to leverage this feature. My only concern is about the API: are we introducing some inconsistency in the API?
C: We have some namespace control via the LinuxInfo in the ContainerInfo, and for the network namespace, for the standalone container and the nested container, we need to add NetworkInfo in the ContainerInfo. So imagine in the future we have more configurable namespaces for containers: from the API side, users would have to go to different locations to make those namespaces configurable. So yeah, I just want to put that out there.
B: I think for networking it's probably fine, because a network namespace alone doesn't mean anything; you cannot just use the network namespace by itself. You have to assign an IP, set up DNS, things like this. So that's the reason we have the NetworkInfo: to have more information about the network of the container. What you're doing is not just simply a network namespace.
B: It's different from things like the PID namespace, which is a boolean flag saying whether you want to share the PID namespace or not. Network is more than just a namespace; there are also the different things that you want to configure, like DNS and IP address. And that's really helpful, so that the vendor can plug in their solution for the networking.
D: So we already have one framework, and when a user triggers a run, that basically spawns a new framework, and that is responsible for running all the tests, or running all the jobs in the machine learning model, or something like that. But I think, going beyond that: are you suggesting that we should have a scheduler for a single pod?
A: To add to this point: at Uber we actually had some discussion about who should handle the complex logic, whether it gets pushed down to the agent and executor integration point. I think our feeling from our early research is that handling this complexity down the stack is easier, because the scheduler side of the system is already pretty rich; it's already non-trivial.
A: Even when there's only one thing to [schedule], the collocation makes it almost exponentially complex, whereas on the agent the integration semantics are pretty straightforward: it's easier to predict the behavior of the system, and it's scoped to the single agent, to the executor and isolator interaction there.
B: Just trying to recall and refresh my memory to understand all these different use cases. I definitely agree with you that it's already too complex. We should probably document this; I think it's very hard to follow the code logic right now, given that it's been like a year not working on that thing.
F: Okay, so today we're going to talk about container root filesystem size isolation, and this is a problem for many people, I believe. Right now Mesos is able to launch Docker and similar images natively, and right now for those containers, the size of the root filesystems cannot be limited.
F: We also need to consider making it work with the existing isolators, like `disk/du` (which was previously known as `posix/disk`) and also `disk/xfs`, without introducing a new dependency, or at least minimizing that. The target here is task containers, nested containers, and standalone containers.
F: I think this is kind of me initiating the discussion; there are several candidate solutions, it's not set in stone yet, and we're still exploring. And this proposal is not targeting making the whole filesystem [quota-limited]; even though that's totally useful, I think that's a different story from what we're talking about. So the background here is the root filesystem for the containers.
F: We have several ways to provision it, right. For the mutable part, we provision the image layers, and we have different backend storage we can use; there are four of them right now: bind, copy, overlay, and aufs. Now, bind mount is read-only, so we probably don't need to care too much about the size of that, because it's read-only and nobody can write to it.
F: Copy is a little bit tricky: all the contents will be copied onto the [container's] provisioning path. And for overlay, it actually creates the mutable directories, the upperdir and then the workdir (overlayfs is the kernel module), and we provision both of those directories to be under the same volume; this is actually residing under the scratch directory of the agent. For aufs it's similar, yeah.
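The overlayfs layout described here, read-only lower layers plus a writable upperdir and workdir under one scratch directory, boils down to the mount options string passed to the kernel. A hedged sketch in Python (the real Mesos backend is C++, and all paths below are hypothetical):

```python
# Sketch: how an overlayfs mount for a container rootfs is assembled.
# Read-only image layers become lowerdir; the writable upperdir and its
# workdir live together under the agent's scratch directory, so any disk
# quota on that directory bounds everything the container can write.

def overlay_mount_options(image_layers, scratch_dir):
    # overlayfs expects the topmost layer first in lowerdir.
    lower = ":".join(reversed(image_layers))
    upper = f"{scratch_dir}/upperdir"
    work = f"{scratch_dir}/workdir"
    return f"lowerdir={lower},upperdir={upper},workdir={work}"

opts = overlay_mount_options(
    ["/layers/base", "/layers/app"], "/agent/scratch/c1"
)
```

The resulting string is what would be handed to `mount -t overlay` as `-o` options; since the lower layers are read-only and shared, only the upperdir and workdir contribute to per-container disk usage.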
F: So this is basically the background. The `disk/du` isolator keeps track of the usage of the directories, namely the sandbox, by probing the size of it with `du` periodically, and then there's `disk/xfs`, which uses the XFS project quota facility to actually set a quota on it. So we have a few proposed designs here, because right now only the sandbox directory is being tracked.
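The `du`-style tracking mentioned above amounts to periodically summing file sizes under a directory tree. A minimal sketch of that accounting model (the actual `disk/du` isolator shells out to the `du` binary; this pure-Python walk is only an approximation, since it sums apparent sizes rather than allocated blocks):

```python
import os

def directory_usage(path):
    """Sum apparent file sizes under `path`, skipping symlinks,
    roughly what a periodic du-based disk check measures."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total
```

This also illustrates the cost the speakers are worried about: every check rescans the whole tree, which is why quota-based tracking (XFS project quotas, or a fixed-size volume) is so much cheaper than `du`.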
F: So we would also need an extra way to track the new root filesystem of the container, and one way I'm thinking about is exposing this to the isolators with an extra interface, like a "mark the size" method on the existing backends, and having each backend do its own size tracking, without [leaking] the implementation details of the different backends. And there's also the question of how we actually account for the sizes of these directories.
F: Do we want to account for the new root filesystem and the sandbox individually, or do we want to continue accounting for them altogether? And there's something else we also need to discuss: if we do it separately, how do we set up the sizes individually?
F: So that's proposal number one, but we also need to figure out how we actually track them, so there's an open question here: isolators don't actually have direct access to the directories that the provisioner provisions; that's the provisioner's object. So how can we expose that to those backends? Yeah.
B: [We could add a] function they can call to get the rootfs path for each container. But yeah, it will be tricky. I mean, it's also backend-specific, right? I can imagine, if you're using the copy backend, then essentially the entire rootfs is subject to that quota control, because that's per-container; whereas if it's overlay, only the upperdir and the workdir are subject to quota control. Exactly.
B: Well, what I was saying is: if you use the overlay backend, essentially you're doing a mount of some directories. For example, the common base layers are potentially shared between multiple containers. Are you saying you'd have a size limit for that, with the layer actually being a volume itself that has a quota? But I guess it doesn't matter, because with overlay the underlying layers are actually read-only, right. So the only thing the size matters for is the upperdir and the workdir, I think.
C: Yeah, and it seems to me this is related to limiting the root filesystem of a container, but there was another discussion about putting the writable layers, for example in overlay the upper layer and the workdir, into a volume. It doesn't matter what volume it is [inaudible]. But I think this proposal is not a discussion about how to limit the root filesystem exactly.
F: So either we actually track the usage under the tree of that volume, you know, the upperdir, or we have a way to track it using `du`, or we have a way to actually see if that's a volume and then some [quota-backed] way to track it, which is much, much easier, right. And then, after that, whether I have a new [backend] proposed or not:
F: No, not like a whole new overlay backend; it's just kind of a proposed optimization of the existing overlay backend, since right now the size of the overlay is only bounded by the disk. Overlay itself doesn't put any real constraint on the upperdir and the workdir, and they're on the same volume with the lower layers.
F: So, for example, we can set a size of one gig for the volume, and then create the upperdir and the workdir in there, and then we put, for example, the lower layers, the image store, elsewhere and keep them separate. And then the container can only write that one gig at most to the new root filesystem, and after that, no more.
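One common way to realize the fixed-size writable volume sketched here is a sparse backing file that later gets formatted and loop-mounted (the mkfs and mount steps need root and are omitted). Creating the sparse file is cheap because no blocks are allocated until written; the path and the one-gig size below are just the example numbers from the discussion, not anything Mesos prescribes:

```python
import os

# Create a 1 GiB sparse backing file for a per-container writable volume.
# Apparent size is 1 GiB, but almost no disk blocks are allocated until
# the container writes. Formatting and loop-mounting it (mkfs/mount,
# requires root, omitted) yields a filesystem whose total size caps what
# the container can write to its upperdir and workdir.
size = 1 << 30  # 1 GiB
path = "/tmp/container-scratch.img"  # hypothetical path
with open(path, "wb") as f:
    f.truncate(size)  # sparse: sets length without allocating blocks

apparent = os.path.getsize(path)           # reports the full 1 GiB
allocated = os.stat(path).st_blocks * 512  # actual blocks on disk: ~0
```

This is also where the accounting question bites: the apparent size is fixed up front, while the allocated size grows with actual writes, so "deducting one gig from the task's disk" and "charging actual usage" give different numbers.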
B: That's interesting. So basically, I think the downside is: how do you decide the size of that volume? You need to predefine a fixed number, like one gig, and say every single container's upperdir cannot exceed that limit, yeah. But do you actually deduct that one gig from the disk usage of the task or not?
A: From my point of view, I think we should deduct that. Sorry, I didn't read the latest version of the document before the meeting, but from my reading I think we should deduct, because it's about making sure the accounting is correct. Also, I don't think users really care that much about how much space they're using in the sandbox versus how much space they're using in some other location on the disk which is not in their sandbox. They only know we gave them some disk space and they used some amount of it, in this case.
F: Yeah, so I agree; I believe this can be the predominant way for the accounting. On the other hand, I do feel like, don't we need to separate those two in the accounting? And can we actually provide a flexible way to allow them to be accounted separately, yeah?
B: But I think one thing I like about this idea is that it's simple enough that it should be pretty easy to implement, right. I mean, of course it has some downsides, it has some trade-offs. Yeah, I think to get the usage of that upperdir you need to run `du` on it to get that information.
F: So that would be very simple; for example, if we had to use the default overlay backend, we could do that, yeah.
F: One other thing we can do is give operators the option to actually choose between the ways to provision the writable layers: from either a loop device backed by a sparse file, or a logical volume, or a real block device, similar to Docker's devicemapper storage driver.
B: They won't, because it's for different [containers]. For nesting, for example: you have container A and you have a nested container called A.B, and then you will have two top-level container IDs under the provisioner for those containers. So there's no nesting structure under [the provisioner], because there's no nesting for the rootfs, yeah.
F: Sure. And then an alternative way here is also, you know, provisioning the mutable layers inside of the sandbox. That means everything is accounted under the same sandbox directory, and then we can use some existing facility, namely the `disk/du` (posix) isolator or the `disk/xfs` isolator, to do the same. But I think it would impose some difficulties in terms of migration, because [inaudible].
B: I think I do agree. I think, long term, we definitely want alternative two; I think most agree this is the correct long-term solution, to make it more sustainable. I think the only downside is that it's not backwards-compatible, and there's no way we can make it backwards-compatible, so it's a breaking change. I guess we should be fine with a breaking change; we just need to [plan for it] if we want to go with option number two.
B: We might be able to do that; we'd need to have a way to tell whether this is a new container or an old container. I don't know, is there an indicator for that? Maybe there's a way, yeah. So, Jason, I think, yeah: more thoughts, especially around the backwards compatibility. It would be nice to support that without [breaking], so.
F: So actually, for this I don't see really big difficulties. For example, the sandbox directory could actually be a volume itself, a slice, and then, because it's a volume, it could be mounted to [the usual directory], and the underlying [storage] could be anywhere else, and we can keep track of that one, yeah.
B: Yeah, I think we should definitely spend some time thinking about alternative two and what the directory layout of the sandbox would be; especially, I think you'll have some open questions here. I think one thing to highlight is how you handle a nested container, exactly as you mentioned at the end: what's the layout for nested containers? Things like this. Once we figure out that layout, then we can see if we can make it backwards-compatible.
B: So the upperdir would actually now be mounted from the sandbox, right, right. So that might complicate things. I mean, just be very careful with mounts, because I think the mount propagation flag on the sandbox directory is set to shared. So just make sure that when you do that mount you don't create additional mounts that are not necessary. I think there are some issues; I ran into a bunch of issues with the mount propagation stuff that exists.
F: Right, right. So, to add to that, I'm thinking whether we can, you know, set up a lifecycle for all those things, how we kind of tie them together or attach them in some way. So, for example, the scratch directory can share the same lifecycle of the same volume, but [inaudible] the container [inaudible], even though it uses the same [volume].
B: Yeah, I'm kind of less worried about the cleanup, because I think you just unmount when the provisioner destroys, right, so that might actually be okay. The part I'm worried about is: you do a mount from the sandbox when you create the new rootfs, so you have some scratch directory actually in your sandbox, and then, when you actually launch the container, you bind-mount the sandbox into the container. So there's, like, a cycle there.
F: Okay, so do we also want to design some mechanism to actually track the size usages of the potential directories mounted inside of a container? Like, right now we actually have two directories with really special semantics here, right: one being the root filesystem and another being the sandbox mount directory. So they could actually have been one, and then the two alternatives could actually be joined together.
B: I think right now, if I understand you correctly, the `du` isolator (I don't know about XFS, but `disk/du`) actually checks the different disks separately. So if you have a persistent volume, it tracks both the sandbox and the persistent volume, and reports the size and enforces the limit.
B: Well, if you go hit the task statistics endpoint (well, I don't know if it's the statistics endpoint, yeah, I think so), then you're going to see that they do a `du` of the disk per volume. So if you have three persistent volumes, you will see the usage for all of them, one for each, and also another one for the sandbox. So you'd actually see four if you have three persistent volumes.
B: Sure, yeah. I think that's something that we don't have right now: understanding what's underlying, whether it's a dedicated volume. Even for a persistent volume, I think the way we do it is we still use `du`. I don't know if we do that, but I think we'd still use `du` even if we knew that it's a mount volume backed by a single logical volume; we'd still do `du`. I think that's something we can optimize: rather than the current way, we could call `df`, no?
F: Anyway, so I think for the disk statistics, we do have a field for the source, to actually account for the different sources we want to keep tracking, right, so.
B: Yeah, yeah. I think the TL;DR for the feedback here is: definitely flesh out option number two down there, okay, especially the layout of the sandbox. That's super important, to help us understand whether it makes sense and whether it can be backwards-compatible. So that's feedback number one.
B: And then I think option number one is not too bad, like the one you mentioned. If we decide that option number two is too complex, I don't think it's too bad an idea to introduce [option one], with the backend just like a plug-in that people can plug in to use, just like the other overlay: you add a new backend. I think that's not too bad.
B: And also, I like the idea that, since we have CSI right now and we have the way to provision CSI volumes, I like the direction of using CSI volumes for, whatever, the sandbox or persistent volumes. I think we do support persistent volumes, but we don't have a way to support using CSI volumes for the sandbox, or using CSI volumes for scratch [space]. So that's an interesting direction that might need some thought, yeah.
B: Okay, so we still have five minutes, yeah. So do you want to discuss, or, I think, maybe we move that to next week, yeah? Yeah, I think five minutes is far too short for that. Okay, yep, all right, okay. So just a reminder that we did some grooming last week; I just want to make sure those tickets are being followed up. I think I did some additional grooming after that meeting.