From YouTube: GMT 2018-03-08 Containerization WG
Description
Agenda and notes:
https://docs.google.com/a/mesosphere.io/document/d/1z55a7tLZFoRWVuUxz1FZwgxkHeugtc2nHR89skFXSpU/edit?usp=drive_web
A
Hey, morning guys. Okay, let me share my screen. So, for the first agenda item: it's about the Docker daemon hanging issue. This should be pretty quick, like ten minutes, and then we hand it over. Basically, we fixed an epic for the Docker daemon hanging. It is not a bug in Mesos; it is a bug in the Docker daemon that shows up in production.
The symptoms: for example, tasks cannot be killed and stay around forever, and in Mesos you never know whether a task is running or not. Mesos has no idea what the task status is, and eventually the Docker daemon turns out to be the culprit. We have to work around the Docker daemon to deal with the hanging, because this behavior is not consistent. For example, sometimes the whole Docker daemon hangs: it hangs forever no matter what, and no Docker command runs, not even `docker ps`.
No command can be executed at all. In other cases only some of the Docker commands don't work and hang forever, but if you invoke them a second time they might work; the first invocation just hangs there, and if you invoke it a couple more times, some of them succeed and some might not. There's an open Moby issue about this on the Moby repo, and I think it was linked under this epic, but they have not fixed it yet.
So we have to work around this, and I believe we have this epic backported to 1.5 and the other recent release branches, and I believe it will quickly improve the user experience for people who use the Docker daemon in production.
Okay, let's take a look at what exactly the fixes are. The first one is the fix by Chen, and basically this is the issue:
Basically, once the Docker daemon fails to notice that the container has finished, Mesos considers the container still running, and we have no way to send the status update to the scheduler. So instead we implemented a workaround: we use the Linux process reaper to reap the PID of the Docker container directly. Once we have the exit status, we just return the corresponding task status, like TASK_FINISHED or TASK_FAILED, to the scheduler, and then we do the cleanup.
We no longer rely on any other information from the daemon. This mirrors what the Docker daemon itself does, because the daemon also relies on the kernel to learn the container PID. Before this fix we had a very bad situation: if the Docker daemon hung and our scheduler retried killing the task, it could not be killed, because all the behavior relied on the Docker daemon — even when the containers were actually killed or finished, we just never got the status update.
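The workaround — reaping the container's exit status from the kernel instead of asking the Docker daemon — can be sketched like this. This is a minimal Python illustration of the idea, not the actual Mesos C++ implementation; the function names here are invented for the sketch:

```python
import os
import subprocess

def pid_alive(pid: int) -> bool:
    # Probe the kernel directly via /proc instead of calling
    # `docker inspect`, which may hang when the daemon is wedged.
    return os.path.exists(f"/proc/{pid}")

def task_status(pid: int, exit_code=None) -> str:
    # Translate the observed process state into a Mesos-style task status.
    if pid_alive(pid):
        return "TASK_RUNNING"
    return "TASK_FINISHED" if exit_code == 0 else "TASK_FAILED"
```

So even with a hung daemon, the scheduler still receives a terminal status update once the container's root process exits.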
So this is the workaround, and we already backported it. I should list the fixed versions; it should now be in everything from that release up through the latest version. (Just a time check: you have five more minutes.)
Oh okay. The other one is about the fixes we did together — Greg, myself, and Andrei — and maybe I can introduce them all together. The Docker daemon can hang on any command, and we relied on `docker inspect` to get a handle on the container's PID.
Every time we want to get a handle to the container, we need to call `docker inspect`, and `docker inspect` is called in many places. For example, in the Docker containerizer we call it when pulling the image, and in the Docker command executor we call `docker inspect` after `docker run`. So if any of these Docker calls hangs, we have no way to get the handle, and we cannot do anything with the Docker container.
We didn't have any way to interrupt a stuck `docker inspect` process, so we changed the Docker library — this was done by Greg — to support the discard behavior. If the caller of this Docker library calls discard, then in the library we have a callback on discard that handles all the promises for methods like `docker stop`, `docker pull`, or `docker inspect`: we complete them as a discarded future and return, which means we will not have a future pending forever.
We return a discarded future whenever the caller discards. This is a libprocess pattern, and it can be a bit confusing. So we updated the library, and then in the containerizer and the executor we put the corresponding fixes after `docker run` and `docker pull`: every time, after a timeout, we discard the future and then return either a failed status or a discarded status for that particular `docker inspect`, `docker stop`, or `docker pull`. So we no longer end up with the situation on the containerizer side where we launch a container and it just hangs there forever. From the user side, the user can always get a task status update for the container.
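The discard-after-timeout semantics can be sketched with asyncio cancellation playing the role of libprocess future discarding (an analogy only; the real fix lives in the Mesos C++ Docker library):

```python
import asyncio

async def docker_call(delay: float) -> str:
    # Stand-in for a Docker CLI invocation that may hang indefinitely.
    await asyncio.sleep(delay)
    return "ok"

async def call_with_discard(delay: float, timeout: float) -> str:
    # After the timeout, discard (cancel) the pending call rather than
    # leaving its future pending forever, and surface a terminal state.
    try:
        return await asyncio.wait_for(docker_call(delay), timeout)
    except asyncio.TimeoutError:
        return "TASK_FAILED"
```

A fast call completes normally; a hung call is cancelled and reported as a terminal task status instead of hanging the caller.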
That is the second thing. The last thing we improved is from Andrei: we fixed the Docker command executor. Basically it does not change the retry logic of `docker stop` in the command executor; it does not change any semantics, it just improves the experience. For example, if the first `docker stop` hangs, we retry again, and that is very likely to resolve the issue when the Docker daemon only hangs occasionally. Okay, I think that's it.
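The retry-on-hang behavior can be sketched as a generic wrapper (illustrative only; the real executor retries `docker stop` using libprocess timers rather than subprocess timeouts):

```python
import subprocess

def run_with_retries(cmd, attempts=3, timeout=5.0) -> bytes:
    # Give a possibly-hung command several chances, each with its own
    # timeout; a wedged Docker daemon often responds on a later attempt.
    last_error = None
    for _ in range(attempts):
        try:
            return subprocess.run(
                cmd, timeout=timeout, check=True, capture_output=True
            ).stdout
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError) as e:
            last_error = e
    raise last_error
```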
B
Yes, this is great. We actually see quite some Docker daemon instability in our cluster, to the degree that others were on the verge of giving up on this thing, so I think this can increase the quality of what we offer to our internal customers in the near future. A couple of questions. Number one: do these improvements apply to both the Docker executor and a custom executor using the Docker containerizer?
A
Yes — though one of them might not be there yet. It might be the first one, the fix by Chen: I realize that fix is for the built-in executor only, and we should do the same thing for the custom executor. But all the other fixes are on the library side or the containerizer side, so they should be sufficient for the custom executor as well.
B
That would be good. Secondly, I don't think we have time to go over which symptom each fix addresses, but I think it might be a good idea to call out what kind of symptoms people typically observed for the bug before the fix and after the fix, so they have a better idea whether this fix has a good chance of solving their problem. Yeah.
A
I think a blog post would be great. Yeah, I agree; let me think about that. For this epic, in each ticket we identified the root cause for each symptom, so we just need another Google Doc linked on the epic, I think, to let people know what the symptoms look like from the user side.
C
I see, got it. Okay — because I think that part is not well tested; at least we don't have any unit tests for the custom executor path. We used to have one, but I think the code coverage is pretty low on that code path, yeah.
B
I can think of ways to improve the test coverage. So if you file tasks and direct them to me, I can figure out a way to improve the test coverage on them, or I'll find someone else here — we have a couple of engineers interested in contributing to Mesos, and that's a great starting point, in my opinion. Okay.
C
That sounds great. Yeah, definitely shout at me and I can point the way to doing that.
B
So this next item is about resizing persistent volumes. This capability has been there for, I think, more than two years, and Uber is actually running a very big storage cluster using this feature — I think we are running one of the largest Cassandra installations purely on it right now — and it was recently brought to our team's attention that we cannot resize persistent volumes.
What ended up happening is that the storage team allocated a very large volume size for every customer, but in practice that wasted a lot of disk space. Rearranging these things is operationally extremely expensive, and they don't want to suffer the downtime: moving to a smaller volume would both make their framework more complex and possibly bring downtime to the customer. So they were really requesting this.
C
Okay, I see — and then you want to resize that persistent volume. So the reason you want resizing is to make sure that quota enforcement works properly, because otherwise, when they share the same disk, they can easily go over their share. But even that would create problems for allocation, because…
B
I think in the MVP phase we can totally accept that both increase and decrease can only happen when the volume is not being used by any task or executor — I mean, when the volume is offline. I think that simplifies a lot of problems, and, like I explained, from our side, as long as it covers ROOT-type disk we will be happy. I think the work should be easily extendable to PATH disk; if my understanding is right, they don't differ that much. Mm-hmm.
C
Yeah, I think MOUNT disk depends on the backend. For example, decreasing is most likely not possible on many of the file systems — you cannot simply say, "hey, I'm going to remove some of the disk space from that virtual disk." Expanding is possible with something like LVM: you can increase the size of the underlying logical volume and then essentially grow the file system into it. But decreasing is going to be hard.
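The grow/shrink asymmetry being discussed can be summarized as a small capability table (an illustrative sketch based on this discussion, not an actual Mesos API):

```python
def supports_resize(disk_type: str, operation: str) -> bool:
    # ROOT and PATH disks are plain directories, so resizing is mostly
    # bookkeeping. MOUNT disks backed by a real filesystem (e.g. on LVM)
    # can usually be grown online but often cannot be shrunk.
    if disk_type in ("ROOT", "PATH"):
        return True
    if disk_type == "MOUNT":
        return operation == "grow"
    return False
```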
B
So I started a quick working doc to capture what's needed across the stack to make this happen. I think the first thing I want to discuss is what the API should look like. It seems like the reservation API — RESERVE and UNRESERVE — is incremental right now.
For example, what I mean is: if we have one gigabyte of disk reserved on this machine and I want to make the total reservation two gigabytes, I think of it as adding another reservation of one gigabyte, rather than saying I want the total reservation to be two gigabytes. That is why I listed two alternative API directions: one would describe a grow or shrink of the delta, and the other would simply be an update message describing the new total size of the volume.
C
Yeah, I was thinking along similar lines. I was thinking of just adding a new framework offer operation — maybe a resize volume, or a grow volume and a shrink volume — where the operation takes the original volume plus some additional disk resource that the extra volume size is going to come from, and they have to be reserved with the same role; otherwise the master will just reject the operation. Yes.
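An operation of that shape could be validated roughly like this (a sketch with hypothetical field names; it only illustrates the same-role check mentioned above, not the eventual Mesos message format):

```python
from dataclasses import dataclass

@dataclass
class DiskResource:
    size_mb: int
    role: str
    is_volume: bool = False

def grow_volume(volume: DiskResource, addition: DiskResource) -> DiskResource:
    # The master rejects a grow whose additional disk resource is not
    # reserved to the same role as the volume being grown.
    if not volume.is_volume or addition.is_volume:
        raise ValueError("grow takes a volume plus a plain disk resource")
    if addition.role != volume.role or addition.size_mb <= 0:
        raise ValueError("addition must be a positive reservation for the same role")
    return DiskResource(volume.size_mb + addition.size_mb, volume.role, True)
```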
B
I think implementation-wise this should not be hard, especially after the recent post-CSI changes, because the code already accepts that a lot of resources can change. I took a quick look at the code to understand what was necessary; it hinges on the agent side and seems almost trivial, in that the master simply needs to send a new checkpointed-resources message and let the agent apply it — and if we believe the volume is offline at that point, everything is fine.
B
I think the issue is that it's very hard to gauge the size up front. In our case we gave everybody two to five terabytes of volume space, but a lot of products end up much smaller at the end of the day — they probably need maybe 100 or 200 gigabytes in steady state. Cassandra has this behavior where it can temporarily need double the disk space, but what they actually need is still less than 500 gigabytes.
C
Got it, okay. As you mentioned here, one issue I can think of: for the `disk/du`-based isolator it should be okay — it should adapt to the new resources — but for the XFS-based isolator we need to be extra careful; I'm not sure if that's supported right now.
F
One comment here: we should probably think about adding a capability for shrink, which would be determined by the actual implementation backing the persistent volume. Would that be a consideration?
C
This proposal is just for the disk resources provided by the agent directly. Given that CSI is being introduced, and we do have CSI support for local volumes backed by something like LVM: moving forward, if you want to resize a CSI-backed volume, the CSI plugin should tell you whether it supports resize or not. For example, LVM supports resize, so it should support grow.
Whether a backend can also shrink or only grow — I think the question is how we expose that information to the framework: what kind of API do we expose about that capability? Right now we don't have that mechanism and need to think about it, but at the very least the master can reject the operation if the capability isn't there. Mm-hmm.
B
A question about the relationship between persistent volumes and CSI: there is no current plan to change persistent volumes to be backed by the CSI interface yet, right? So persistent volumes will still have their own API in the short- to mid-term future?
C
I mean, the persistent volume API is orthogonal to that. CSI only creates a volume from a pool of storage; whether you then use that volume as an ephemeral volume or a persistent volume is another decision you have to make. So the two are fully orthogonal.
B
Oh, I see, okay.
C
On enforcement: it's possible that a container using the persistent volume goes over quota, because the `disk/du` check is time-based — the check runs later than the writes — so you can still go over quota. Your container will be killed eventually, but the disk space that container used has still gone over its quota, and that will affect other containers' quotas.
Ultimately, I think PATH-based disk is very hard to strictly enforce unless you use something like XFS to enforce the quota. And I don't know — even in this case, how do you reject the shrinking if the disk space used by the container on that particular volume has already gone over the quota? What should we do there?
D
But really, I guess these operations act on the raw resources on top of which persistent volumes are created, right? Mm-hmm. So it kind of feels to me that you have the possibility of doing two kinds of resize to achieve one purpose: if you create your persistent volume on top of LVM, then you have to issue two operations.
C
What you're really describing is that the thing that has the storage is actually the underlying volume, and the shrink and the grow are really operations on that underlying volume. The persistent volume — think of it as a label on top of that volume, so that the framework can get that information and point at the same volume when the task is restarted. If you think about it that way, there's only one backing volume, so there is really only one grow or shrink operation on it.
For an LVM backend, can you have two persistent volumes on one MOUNT volume? If you expose the LVM volume as a MOUNT volume, then you don't have a way to do that. You could also expose it as a PATH volume, and then you'd have multiple entries. — Oh, I see, you're saying for a PATH volume, since it's shared, you have another layer on top of that. I see.
F
From my point of view, I'm not sure; in the long term we should also look at the cardinality for this kind of thing. For example, an LVM volume being used as a persistent volume, and then again being used as a PATH volume and shared.
But then again, this could lead to some sort of conflict, or a reuse of those volumes that's really not in a desirable state, right?
B
Yeah, I will rephrase this in a different way. Especially in a very complex cluster, when there are multiple frameworks and multiple workloads sharing a really large resource pool — which is how Mesos is intended to work — what we see in practice is that it's practically impossible for every framework to do the right thing, once you combine code from different organizations, or different open source projects versus closed-source projects.
They don't always end up compatible with each other at the framework level. So in many cases I feel it might be better for the operator to make pre-arrangements for these things — to simply say: when you launch your framework, you don't need to do any reserve-and-create-volume dance; the volume should be there, ready for you. And if not, instead of interacting directly with Mesos, you interact with a very thin shim of an API server which makes all of these things available for you. Yeah.
C
So
that's
what
we're
doing
this
us,
so
we
in
this
us
we
have
this
centralized
controller
to
actually
responsible
for
creating
those
who
volume
and
then
we
introduced
this
concept
called
profile
which
is
essentially
similar
to
like
Ruby,
nice
storage
class.
You
can
distinguish
multiple
discs
based
on
their
attributes
and
parameters
and
the
friendly
one
will
just
pick
those
tips
based
on
the
profile
name
of
those
disks
yeah.
So
that's
what
we're
doing
these
to
us
and
there's
a
centralized
component
actually
responsible
for
creating
those
volumes.
So
Freimuth
don't
do
anything.
I
think.
E
To address the question about increasing size: I think we can unify the two layers — the backend volume and the persistent volume — because when we receive resources, maybe with a PATH volume, we always need to first compute the amount of disk that is not used.
C
Yeah, I think we should definitely do some design work, because this is a pretty early stage right now and there are multiple ways to do it. What you said is one way; another is to always model the free disk resource as resources, and then when we do a grow you have to specify exactly which additional disk resource the new disk space is going to come from.
C
Since you're quite familiar with that part, maybe you two can design this together. ("Yeah, sure. Okay, sounds great.") I think the follow-up item is: you think on this and come up with a design doc, so that we can discuss it again during the working group meeting in the next month or next two weeks and decide what to do next. Okay — and just a time check.
G
Previously we were using the `disk/du` isolator that we were talking about. The problem we have — and it looks like others do as well; there are tickets mentioned here — is that the current functionality of the `disk/xfs` isolator is that when you set a limit, it (understandably) sets a hard limit. The result is that your application will try to allocate space, and most applications don't actually handle the out-of-quota error correctly.
So they just sit there and do nothing useful. They may be logging complaints, but because you're out of space — well, you don't really know. So there are a couple of ideas we have, and it starts with providing a sort of headroom. With this headroom, essentially whatever the user requests…
If somebody is using Marathon — which is our case — what you would do is: whatever amount people request, make the soft limit that requested amount, and then provide an offset at a global level. It can be something as small as one meg, it can be ten megs, it can be a gig, whatever fits your deployment.
What this enables is that your applications can actually go over the soft limit, which in most cases might be fine, and as a result you can also build a kill switch. If applications go over a bit — if they're teetering along that line — then depending on how you have your quota limits set up, you may not need to worry about this at all.
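Numerically, the headroom scheme works out like this (a sketch with a hypothetical `hard_limit_offset` flag name; the actual POC flags may differ):

```python
def xfs_limits(requested_mb: int, hard_limit_offset_mb: int = 0):
    # Soft limit = what the task requested; hard limit = soft limit plus
    # an operator-chosen headroom. Offset 0 reproduces today's behavior:
    # a single hard limit at the requested size.
    soft = requested_mb
    hard = requested_mb + hard_limit_offset_mb
    return soft, hard
```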
C
How do you mean? For example, from the Mesos standpoint, you have a bunch of disks on an agent, and Mesos is just allocating those disk resources to the frameworks. When Marathon, for example, accepts an offer and wants to use a disk, would the user of Marathon set a soft limit on the disk — say one gig — and set the hard limit to, like, 1.5 gig?
G
If I'm asking for a hundred, I'm possibly getting more than I'm asking for. At the level of providing offers — from a resource perspective — you obviously want to allocate only as much as you have available, but this allows some slack for the applications or containers that are running.
C
Okay, so if I understand correctly: say the user asks for a hundred, then I ask Mesos for a hundred and fifty, the fifty being the headroom, and you enforce the quota at 150 as the hard limit and at a hundred as the soft limit. Does a soft limit like that exist natively in XFS? — Yes.
G
There
is
okay,
so
this
is
something
that's
built
in
functionality,
so
I've
actually
started
a
little
bit
on
a
POC
I'm,
just
trying
to
prove
prove
it
out
and
I
have
I,
have
it
to
the
level
that
I
can
set
soft
limits
right
now,
I'm,
just
using
a
default.
So
in
here
there's
the
idea
for
flags
that
I
have
is
you
have
a
hard
limit
offset
since
I,
just
whatever
they
ask
for
you
know,
default
is
zero
to
provide.
You
know
functionality
as
people
expect
right
now.
So nothing changes from the interface perspective, but if operators want to set it, they can set it higher. Then there are also two other flags related to actually killing the applications that are over their grace period. In XFS there is an ability at the project level to set a grace period — the default is seven days; that's the standard — so this would just be a flag that, you know…
G
Yes, of course. XFS at a base level has three ways you can do quotas: at the user level, at the group level, and at the project level. For Mesos itself we're using project quotas, so that's what applies here. When you set a soft limit and it's exceeded, that starts a timer, and that timer is based on whatever grace period you decided on for your application.
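The soft-limit/grace-period semantics described here can be modeled as a toy state function (a simulation of the behavior for illustration, not an interface to XFS itself):

```python
SEVEN_DAYS = 7 * 24 * 3600  # XFS's standard default grace period, in seconds

def quota_state(used_mb, soft_mb, hard_mb, over_soft_secs, grace_secs=SEVEN_DAYS):
    # Writes are blocked at the hard limit, or at the soft limit once
    # the grace timer (started when the soft limit was exceeded) expires.
    if used_mb >= hard_mb:
        return "blocked"
    if used_mb > soft_mb:
        return "blocked" if over_soft_secs > grace_secs else "grace"
    return "ok"
```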
G
Your app can use all the way up to the hard limit of space, and fortunately, if your app is still growing beyond that for whatever reason, it's not going to fill up your disk, so you're safe in that respect. But if you've used up your grace period and haven't come back under your soft limit, you're now blocked from using more space, and as soon as the watcher…
Well, it is, but then you get into this weirdness: you could technically have the hard limit enabled, but you can never break the hard limit, because it's a hard limit, right? And so then you have to wait.
G
That's
not
always
possible
because
the
problem,
the
problem
with
how
that
is,
is
its
debate.
It's
based
on
how
much
your
application
is
trying
to
educate
at
time
and
if
it's
over,
so
if
you're
at
just
use
simple
numbers,
we're
at
a
hundred
again
and
you
are
using
99,
but
you
want
to
allocate
even
two.
D
I think that customizability matters, because if we have that kind of limitation, you may have set a soft limit and your user application then actually tries to allocate beyond that range. So no matter how large the gap is, something could always try to allocate more than the gap, right? Right.
G
What
was
bigger
than
the
space
that
I
had
requested.
Unknowingly,
like
I,
didn't
know
how
big
my
actual
executable
was
at
the
time,
and
so
I
went
to
try
to
run
it
and
it
would
just
die
and
I
had
no
explanation
for
why
that's
happening
so
this
would
this
would
help
alleviate
some
of
those
sorts
of
issues
as
well,
as
you
know,
as
well
as
others.
D
I have a short comment: if you're the implementer of the framework, you think, well, I can customize it. But ultimately there's a different group writing the end-user application, so the framework won't always be able to set the right limit for the end-user application. Mm-hmm.
G
Everything I've been trying to add here has defaults — the offsets default to zero. So if you don't set anything, it functions as it does currently. ("Okay, so the watcher won't be triggered?") Yeah, the watcher won't be triggered because of this.
G
I started talking with James Peach, so if I can work with him, that would be good, yeah.
C
Right, okay, that's good — thanks! All right, I think we are over time; let's wrap up here and see you guys in two weeks. Thanks, guys. See you.