From YouTube: Kubernetes SIG Node 20180828
B: Great. So I'm going to assume people are familiar with the new ephemeral storage, aka local storage capacity isolation. The implementation currently works by periodically running du — quite literally — on each emptyDir volume and, for that matter, on the writable layer and log dir as well. That has a number of problems. First off, the performance is poor; it's actually consuming enough resources that it's resulted in at least one issue being opened.
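A minimal sketch (not the kubelet's actual code; the path and names here are illustrative) of the du-style accounting being described: every pass walks and stats every file under the volume, so the cost grows with the number of files, which is the performance problem mentioned above.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// diskUsage sums the size of every regular file under root, the way a
// recursive "du" would — one stat per file on every periodic pass.
func diskUsage(root string) (int64, error) {
	var total int64
	err := filepath.Walk(root, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			total += info.Size()
		}
		return nil
	})
	return total, err
}

func main() {
	// Hypothetical volume path, for illustration only.
	used, err := diskUsage("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d bytes used\n", used)
}
```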
B: So, my prototype currently only applies to emptyDir volumes. As detailed in my proposal, there are additional complications with writable layers and log directories, specifically the interaction with the container runtime. Where I am now: I've created a prototype covering all emptyDir volumes. The emptyDir code calls into the quota layer. The quota layer determines whether it can apply a quota to the directory in question by looking at the file system the directory is on, determining whether it's an XFS file system and, if it is, whether it supports quotas. If it does, it applies the requested quota to the emptyDir volume. If, rather, an emptyDir is created without a quota limit — in other words, if there's no limit on the ephemeral storage utilization — it applies essentially an infinite quota, so it can be used for monitoring purposes only.
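A hedged sketch of the quota-layer flow just described — the names here (QuotaProvider, SupportsQuotas, AssignQuota, ApplyQuotaForEmptyDir) are hypothetical, not the prototype's actual identifiers: check whether the backing filesystem can enforce quotas, apply the pod's limit when one is set, and otherwise install an effectively infinite quota that exists purely so usage can be read back cheaply.

```go
package quota

import "math"

// QuotaProvider abstracts the filesystem-specific quota operations.
type QuotaProvider interface {
	// SupportsQuotas reports whether the filesystem backing path
	// (e.g. XFS mounted with project quotas) can enforce quotas.
	SupportsQuotas(path string) bool
	// AssignQuota sets a per-directory limit in bytes.
	AssignQuota(path string, limitBytes int64) error
	// GetUsage returns bytes consumed under path, read from the
	// quota subsystem rather than by walking the tree.
	GetUsage(path string) (int64, error)
}

// ApplyQuotaForEmptyDir mirrors the flow described above: enforce the
// pod's ephemeral-storage limit when one is set, otherwise install an
// effectively infinite quota purely for monitoring.
func ApplyQuotaForEmptyDir(p QuotaProvider, dir string, limitBytes int64) error {
	if !p.SupportsQuotas(dir) {
		return nil // fall back to the existing du-based accounting
	}
	if limitBytes <= 0 {
		limitBytes = math.MaxInt64 // monitoring only: no real limit
	}
	return p.AssignQuota(dir, limitBytes)
}
```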
A: So I don't know — is anyone from Windows actually on today's call? I'd want to get their perspective on what would be possible. I don't know if Patrick's here; I do not see Patrick either. We have very light attendance this week, which is abnormal, so it could be that we get to the details on that next week. But I see, David, both you and Vish had started to comment through the document at a high level.
B
Those
should
be
in
fact
managed
by
the
wrong
time,
so
my
basic
thought
my
basic
thoughts
on
that
matter.
One
is
that
the
the
quota,
the
quota
code
is,
should
be
I,
think
split
out,
probably
as
vendor
code
that
could
be
used
by
multiple
components.
I,
don't
think
we
want
I,
don't
think
we
want
to
implement
over
a
thousand
lines
of
code
separately
in
in
each
client
user
of
quotas.
A: Boundary between the kubelet and the runtime — or, there's no pseudo-kernel boundary between the kubelet and the runtime. So, referring to, like: if a kubelet was configured to run Kata containers, or a kubelet was configured to run gVisor containers, it's not clear to me — without Tim here, or unless maybe someone from the Kata community is on today's call — how, if the kubelet was to tell the runtime what the quota ID would be, that would actually have any meaning for those runtimes.
B: Let me just back up briefly here, the way the quota code is written. When you say virtual, I'm assuming you mean that the runtime and the kubelet may live on different virtual machines, presumably with different file systems. I'm—
A: But let's just say we made this the domain of the CRI — to say that the kubelet will tell a CRI implementer: this is the container I want you to run, and this is the CPU shares you should get, this is the memory limit you should get, and this is the amount of ephemeral storage you should use for your copy-on-write layer. What would you want the behavior of the container runtime to be if it exceeded the ephemeral storage limit?
B: If quotas are obeyed — so, if quotas are available and the container runtime has imposed a quota — then the kernel will simply block any attempts to exceed that. If it cannot impose a quota, then I think the behavior would be the same as it is today, where the container runtime simply reports the usage and the kubelet decides what to do.
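A sketch of what that kernel-side enforcement looks like from inside the container, under the assumption (hypothetical here) that /data is a directory with an enforced project quota: the write that would cross the limit fails with EDQUOT (or ENOSPC on some setups), rather than the kubelet evicting the pod after the fact.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.Create("/data/fill") // hypothetical quota-limited path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	buf := make([]byte, 1<<20)     // write in 1 MiB chunks
	for i := 0; i < 16*1024; i++ { // stop after 16 GiB if nothing blocks us
		if _, err := f.Write(buf); err != nil {
			if errors.Is(err, syscall.EDQUOT) || errors.Is(err, syscall.ENOSPC) {
				fmt.Println("kernel blocked the write at the limit:", err)
			} else {
				fmt.Println("write failed:", err)
			}
			return
		}
	}
	fmt.Println("no quota was hit within 16 GiB")
}
```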
B: Yes, and the reason is that the total ephemeral storage consumption consists of the sum of the consumptions of each emptyDir, the logs, and the writable layer, at least at present. The container runtime doesn't know anything about the — pardon me — it doesn't know anything about the emptyDir volumes, I believe; those are just mounted into the container.
A: Okay. So then, if a component has a disk that's out of space, it probably means many of those components won't be able to gracefully terminate, because they won't be able to write a termination message to a log or anything like that. What I was just trying to figure out is: if this is turned on, are admins potentially left with a set of pods that can never shut down?
B
So
I
afraid
this
has
probably
passed
my
specific
level
of
knowledge,
but
one,
but
one
other
thing
I
should
point
out
is
that
the
quota
the
quota
is
applied
per
directory
so
that
each
each
empty
durval
um--
has
its
own
separate
quota.
Of
course,
that
quota
has
to
be
the
same
as
the
as
the
total
limit,
because
we
can't
we
can't
set.
We
can't
force
the
pod
to.
A: Yeah, I'm just trying to figure out what we can do to avoid trading one set of problems for another set of problems too much. I think the proposal pretty clearly calls out the set of problems with the current implementation, and I was just trying to tease out what the set of problems would be if, as discussed, your application was literally told: no, you can't write anything anymore.
B
That's
it
so
that's
a
problem,
that's
a
problem
with
the
existing
mechanism
to
write
that
this
could
run
out
of
the
disk,
could
hard
run
out
of
space,
and
then
it's
no
longer
able
to
write
anything.
Containers
are
no
longer
able
to
write
anything
at
anyway.
Running
out
of
running
out
of
storage
is
always
a
risk.
That's
taken.
A: I don't know who else, honestly. David, at least in this group it doesn't seem like there's any widespread disagreement that the container runtime should own management of local disk enforcement. I think the next step is we maybe revisit it again next week, when they're here as well to give their perspective if they don't comment before then, and then start to translate this into what the actual material changes to the CRI would have to be.
D: Yeah — it's worth mentioning that we've already come to the conclusion that metrics for disk usage should come from the runtime, so that is something that we don't necessarily need to open up. But as far as enforcement of container limits, it's only the writable layer there — just limiting the writable layer, I mean, on the image file system. So yeah, I think that's something we can discuss, but I don't see there being any huge objections to that.
A: Okay. So why don't we get this also on next week's agenda, to revisit any iterations that have happened on it since then, and make sure — since this will touch some other areas of the kubelet — that the people who traditionally own those get a chance to comment. We'll meet up on that next week. And on that, David: since we're reaching code slush week, are there particular PRs that you're aware of that we need to bring forward to make sure they get reviewed this week?
A: Yeah. So, actually — are you on the call?
E: Yes.
A: So at this point, I think we need to make Jordan comfortable with the API change, and I'd love to see progress on it. I know you've made some updates, which we were talking about before this call, that I have not had a chance to review, so I guess I'd like us to see it land, but I am not certain that it will definitely land this week. Okay, I see.
A: Proposing it as an alpha feature gate with the feature off by default, so I think it should have worked with either Docker or CRI-O or containerd, in theory. But yeah, I think we had originally said in the proposal that multi-runtime support would be a beta criterion, so anything that's there now is just purely alpha, if it lands. Okay.