From YouTube: Kubernetes SIG Node 20210720
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Hello, it's July 20th, 2021, and this is the SIG Node meeting. Welcome, everybody. So, as usual, we'll kick off with this recap of what happened during the last week in terms of PRs. We can see that even though it was a slow week (we hit code freeze, so no new features are being merged), we still have a pretty good turnaround. We merged a few pull requests, we closed all the unnecessary ones, and there is nothing stale or anything; I checked, some were just intentionally closed. And yeah, we're at minus one from last week, which is good. If you're interested in seeing what work is happening, just click these links and you will be up to date. So now, Elana, you have a couple of reminders.
B: Yeah, a couple of reminders. A reminder that today is the docs-reviewable deadline. There isn't a docs deadline yet, but SIG Docs requires everybody to have their work-in-progress docs PRs for any of their enhancements for this release ready to review by the end of today. So if you have not had a chance to get your docs ready, or you maybe forgot, just go and check your enhancement and make sure that your docs are up to date so SIG Docs can review them, because today is that deadline as well.
B: I have a link there for the 1.22 burndown, so this is everything left for SIG Node in the 1.22 milestone. There are some test failures that existed from previous releases, so they're not regressions, but I think we have at least one release blocker, that pod-stuck-in-ContainerCreating bug, and possibly another failing test that I think Clayton has a PR he's working on. So keep an eye on those things, and if you are pinged, make sure that you get the chance to review them.
C: Oh, so is that part of the existing test failures and all those kinds of things we needed to figure out? I think some of this is related to that change, right? So, you are the reviewer for that PR; can you also pay extra attention to those failures? Okay, thanks.
A: Yeah, this release went very well. We don't have too many release-blocking or urgent fires to deal with, just pod-related things. We know that pod lifecycle is hard, so let's pay attention. Yeah, and last time we discussed that we want to do some KEP retrospective for 1.22.

We had 24 KEPs scheduled for 1.22. Quite quickly we got rid of some of them, so I think we've been tracking 17, primarily, for this release, and I created this small document with a little bit of a retrospective. The font may be too small, so let me try to increase it.

So what I tried to do is copy all the KEPs from the documents that the release tracking team is tracking. This is the KEP numbers, the enhancement names, and some notes, mostly from the PR reviews. And what I wanted to say is: I want to see what we implemented.

I think an exception is one of the signs that we had something almost done, and we can discuss it somehow; or we can discuss what went well and what didn't go well for the other KEPs.

So we can either go one by one, or, if somebody wants to speak up before we go deep, let me know.
B: I think with a lot of these there was difficulty getting the API reviews done, and I know some of them weren't flagged early enough for API review, so they didn't have anybody assigned. That also sort of happened for the configurable grace period for probes.

So we requested and got an exception for that one, and there were just these small API review comments, because it didn't get an API review until the last day, and then we needed to fix them and we just didn't have enough time. So, when I was going through a lot of the implementation PRs, I found a bunch of them that had API changes but weren't marked for API review, so nobody was assigned until very late in the release.
A: I just wanted to recognize that dynamic kubelet config deprecation was the same way. I spent time getting approval from approvers and then went through another cycle of API reviews, and it was very late in the game. Alright, so yeah, last one on that; go ahead.
D: Actually, yeah. So I think, when we are doing the KEP, maybe we can get the API portion, including the comments, in front of reviewers then; maybe if we could somehow get it reviewed at that point, it may help avoid such issues. I don't know, that's what I thought.
C: I don't know that we used to do that. But when you do the implementation, you may change your API. Yeah, and the reviewer pool is too small, so it's more challenging, right? I think they already did their best. And also, from the API perspective, for example for the VPA feature, sometimes it's quite different from the SIG's perspective.

So there's always some back and forth, not just on the API; a lot depends on the reviewer. Sometimes they have a lot of back-and-forth debate on the design. So, I can see, I like what Elana said, that we have a process to make sure we avoid unnecessary latency. But sometimes our features, especially a lot of the node features, are pretty complicated, so the reviewers pay extra attention, and also not all the API reviewers understand a lot of the node features.

So from the API perspective there are certain requirements, but on the node side there is also some complexity, so yeah.
B: So for the pod-priority-based graceful node shutdown in particular, I think that's a good example to go with. That PR was opened quite early in the release cycle: it was opened June 16th. And at some point I was going through and reviewing all of the PRs, and I noticed that it was missing an api-review label and it didn't have an API reviewer assigned, but it was changing files that needed API review.

So I think around the last week of June I added the api-review label, but the API reviewer still had not picked it up. At some point, let's see, I had pinged folks in the channel and still didn't hear back from them; then finally, about a week before the deadline, I was able to get an API reviewer assigned.

So that's the sort of thing where, in theory, we could have marked that one as api-review right off the bat, and that would have reduced the risk of missing it, because we just didn't have an API reviewer looking at it early enough.
B: Yeah, that one, I think we requested an exception very early and... oh wait, no, I'm thinking of... yes, yeah, that one requested an exception really early. It missed the initial deadline, but there was a bunch of stuff missing, so we didn't make it then.
A: So I wanted to mention what went well. I think this release we've been asking for all the KEPs implemented to have end-to-end tests in the same PR, so we ensure the quality all together, and it helped us prevent situations where something that is supposed to be working gets merged, but then, implementing the end-to-end tests, we find issues.

I think that's a good thing. An example of that is gRPC probes; it's one of those examples. I think we found quite a few issues just by requiring tests and making sure that it all comes as a single PR.
F: I can talk about that one. Yes, for that one we required two PRs, basically one for the CRI changes and then one for the kubelet-level changes. I think the CRI changes we made a lot of good progress on, and we had that almost ready to go. However, the kubelet-level changes we needed a little bit more time to iterate on, to get tests and all that. And so we were about to merge the CRI changes, but we were a little concerned about getting that merged without the kubelet changes, because then we can't really validate the whole thing end to end. So we requested an exception for that, but unfortunately it was denied.

So I think we'll just pick it right back up in 1.23, but we made a lot of good progress here, and I think it'll be ready to go pretty early on in the next cycle.
B: Specifically from this one, one possible learning: one thing that I noticed is that a lot of the implementation PRs were marked as work-in-progress until the final week before the deadline. If those things are ready for review before then, we need to take the work-in-progress flags off of them, because otherwise they won't get looked at; we have pretty limited reviewer and approver bandwidth. So I think that's an area where we could also probably do better.
F: Yeah, I agree. Definitely the main reviews came in kind of late for both the CRI and the kubelet-level changes. And I think another learning in general is that if you're making CRI-level changes, it's a little bit more difficult, because you usually have to follow up with some of the kubelet-level changes, and also the container runtimes: the actual container runtimes have to implement those changes. So there's additional complexity due to that.
C: Actually, on this one I also have a question. When I saw that exception request sent, I was a little bit shocked. That's why I didn't comment; I think I support it, but I thought that wasn't quite right. So when I saw that exception request, I felt maybe this one is giving us a lesson.

We don't treat the deadline right; we feel like, okay, given the deadline, we just try an exception, and then, if it's granted, I'm lucky to get the extra one week of development time.

So that gave me a signal, because before the deadline I had already looked at that PR. So I'm just sharing my observation from doing that that night.
F: Yeah, I think that makes sense. For this one specifically, I think a lot of work got done, and Peter helped a lot; I don't know if Peter's on the call, but yeah, he helped a lot on this one. But there was kind of a lot of movement left in the last week. So definitely, I think, just maybe a little bit more planning and work ahead of the deadline would help this one.
A: I think, if we move on to the other KEPs, a good learning is that we're doing quite well on small KEPs, like this one and sizable memory-backed volumes. They were merged very early; we went through the process very smoothly and it just went great. So that's working very well.

I think we do well if we start early in the process. Maybe the KEPs that migrate, that move to the next milestone, if they keep the pace and try to merge early, will do as well.
B: So I think one thing that may have been new for some folks this cycle, since it only became mandatory last release, was the production readiness review (PRR). Did folks find that it helped with some of the implementation, or with ensuring that there weren't any upgrade surprises or things like that when implementing this cycle?
A: Yeah, I just remembered about this seccomp-by-default thing, and I was surprised that PRR didn't catch the issue. So this KEP suggested enabling seccomp by default everywhere, and it could have broken some workloads, so PRR is supposed to catch this kind of issue early and not let it slip through. But it was during implementation, during the PR stage, that the issue was caught, and the KEP essentially was changed to enabled-by-default only when a flag is set.
A: Yeah, but my point is that PRR is supposed to catch the situation where we have a feature like this enabled by default on upgrade, so that after the upgrade our customers may not realize that their workload will be broken.
B: Thinking about it, the only one that jumps out is possibly the pod-priority-based node graceful shutdown.
A: And I don't think that's quite it; there are just so many desired KEPs that, unfortunately, we didn't have enough bandwidth to do, or that just didn't make it in time.

Okay, I think that's enough; I took a lot of notes on action items. I will switch to the agenda. I'm not sure what the name is here.

G: Hey, yeah! This is Deep. I put that in the agenda.

A: Do you want to be a co-host to present something?

G: Sure, that would be great, if I can share my screen.
G: All right, hopefully everyone can see my screen. So hey, I'm Deep Debroy, and I'm looking at this concept called runtime-assisted mounting of persistent volumes.

Basically, I'm at the phase where I spoke briefly about it with SIG Storage in their last meeting, and there are a few things to consider on the runtime side as well, which is why I wanted to sort of socialize the idea here. And then, if there are no major objections, kind of go ahead and file the issue and go ahead with the official KEP and the whole process.
G: So let's get into the overall concept. This is kind of like a very high-level overview of all the things that happen in a node as a pod that is trying to mount a persistent volume, either inline or through a PVC, gets scheduled on that node. A node might have different runtimes configured; for example, it could be a micro-VM runtime like Kata, along with runc as the default.

As the kubelet prepares to bring up the pod, it needs to talk to something called a CSI node plugin in order to set up what are called the staging and publish paths for a volume. Those are eventually passed to the CRI runtime through CreateContainer, and eventually into the OCI runtime through the OCI mount specs, which specify where exactly the filesystem mount is on the host that the container needs to be looking at.

So in the case of something simple like runc, the flow works like this: the kubelet first talks to the CSI node plugin and says, please stage the volume. As part of that, what the CSI plugin typically does in the case of a block-backed volume is mount it with a filesystem like XFS. Then it will typically bind-mount that staging path into something called a publish path, which is pod-specific, and the pod-specific publish path gets sent through CRI, as I mentioned, over to runc. In the case of runc, the path is presented within the sandbox as a simple bind mount to the publish path, and that's pretty much it.
G: In the case of something a little more complicated, like micro-VMs, what ends up happening is a few more steps. So something specific to Kata, for example, would be that Kata looks at the publish path and creates its own bind mount to sort of manage that path. Next, it works with an agent within what's called the micro-VM guest OS to set up virtio-fs to project the publish path into the guest OS, and then finally uses the virtio-fs FUSE mechanism to make that filesystem available, through the kernel of the guest, to the final container, and eventually get the data path going between the container and the block-backed volume. Something very similar happens also for something shared like CIFS or NFS: instead of a block device and a filesystem, you just have, say, NFS being directly mounted as the publish path.
G: So, as you can see, the Kata path, the micro-VM path on the left, is quite complicated, and one of the main problems with this approach is that, although virtio-fs is pretty awesome at projecting the volumes, there are various issues. As the pod is running, the actual filesystem that's mounting the volume stays mounted on the host. That is not ideal from various security points of view. In terms of performance, it is not ideal for various I/O patterns. And the other main problem is that the full fidelity of the filesystem that's used to mount the volume is lost by the time it gets through the virtio-fs layer all the way to the containers, so certain things like inotify, etc., do not quite work the way they are supposed to work with the real filesystem. So, here is one of the enhancements that we will be proposing as part of the KEP.
G: It involves quite a few changes to the different components, but what you can see on the left, compared to the previous state, is that a micro-VM runtime, Kata in this case, uses something like virtio-blk to basically project the block device up to the guest OS, and then mounts the block device directly within the guest OS with the filesystem that is specified.

So, instead of the CSI node plugin mounting the filesystem at the host level, it's the runtime performing the mounting of the filesystem within the guest. In this case the container gets the full fidelity of the filesystem: things like inotify, etc., just work; performance characteristics are much better; and weird corner-case issues involving the filesystem, which may be virtio-fs limitations, do not show up. So those are kind of like the three main advantages.
G: One of the changes, from the first phase onwards, is that the CSI node plugin needs to present a mechanism for users to specify that, hey, this needs to be enabled on a certain persistent volume. CSI plugin authors have various ways to specify this: it can be a storage class parameter, or it can be a PVC or a pod annotation.

With the storage class parameter, basically all PVs in that storage class have an option set that specifies to the CSI plugin that, hey, we are going to use a specific runtime as part of the pod to mount this PV, so make sure the CSI plugin does not perform the mount and instead yields the mount to the runtime.

With the PVC annotation it's the same idea. It's not as much of a recommended thing from a CSI perspective, which tries to be container-orchestrator agnostic, but the idea is that the PVC that the PV binds to has an annotation saying a specific container runtime needs to perform the mount. What that means is that the PV now basically requires that the container runtime be able to mount it, no matter which node the mounting pod gets scheduled to, which might be considered a little sub-optimal.

So the most flexible solution is specifying that this mount deferral be done through pod annotations.
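To make the opt-in surface concrete, here is a minimal Go sketch of the PVC-level and pod-level variants. The annotation key is purely hypothetical; the talk leaves the actual key (and whether it is a storage class parameter instead) up to the KEP and the individual CSI plugin authors.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical annotation key for illustration only; the real key would be
// defined by the KEP or by each CSI plugin.
const deferMountAnnotation = "example.csi.vendor.io/runtime-assisted-mount"

func main() {
	// PVC-level opt-in: every pod mounting this claim expects the runtime
	// to perform the mount, regardless of which node it lands on.
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "data",
			Annotations: map[string]string{deferMountAnnotation: "true"},
		},
	}

	// Pod-level opt-in: the PV stays independent of how it gets mounted,
	// and only this pod asks for the mount to be deferred.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "app",
			Annotations: map[string]string{deferMountAnnotation: "true"},
		},
	}

	fmt.Println(pvc.Name, pod.Name)
}
```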
G: In that case the PV is independent of how it gets mounted; it's up to the pod to specify, through an annotation that the CSI node plugin author specifies, that this needs to be mounted with the runtime deferring the mount. So, based on any of these three mechanisms (it's completely up to the CSI node plugin authors to advertise how they want the specification to be done), a set of fields gets specified in what's called the CSI NodePublishVolume response, which allows the CSI node plugin to signal to the kubelet that the mounting is being deferred and that it expects the runtime to perform the mounting.
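As a rough sketch of what that signaling could look like: today's CSI NodePublishVolumeResponse carries no fields at all, so everything below is a hypothetical illustration of the proposal, written as a Go struct rather than the actual protobuf.

```go
// Hypothetical fields for the CSI NodePublishVolumeResponse (which is empty
// in the current CSI spec). The names and shapes are illustrative only; the
// real definition would be settled in the KEP and the CSI spec.
type NodePublishVolumeResponse struct {
	// MountDeferred signals to the kubelet that the plugin did not perform
	// the mount and expects the container runtime to do it.
	MountDeferred bool

	// FsType and MountOptions describe the mount the runtime should
	// perform, e.g. "xfs" and ["noatime"].
	FsType       string
	MountOptions []string

	// Source is what the runtime should mount: a block device path or an
	// NFS/CIFS export, rather than a host bind-mount path.
	Source string
}
```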
G: So that basically covers the communication between the node plugin and the kubelet, with changes to the NodePublishVolume API. Next up, the kubelet needs to deliver this information to the CRI runtime. This is kind of the most critical aspect that I want SIG Node's input on. One way we have come up with is expanding the Mount message that is passed as part of the CRI CreateContainer RPC with two new fields: one is the mount options and one is the mount type. Additionally, for the second field, which used to be the host path, we are suggesting that it be renamed to source, where the source can be a host path, or it could be a block device, or an NFS or CIFS server export.
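Written out as a Go struct for readability (the real definition is protobuf in the CRI API), the proposed Mount message might look roughly like this. The existing fields are paraphrased from the current API, while the rename and the two additions are the proposal under discussion, not anything that exists today.

```go
// Sketch of the CRI Mount message with the proposed changes.
type Mount struct {
	// Existing fields, paraphrased from the current CRI Mount message.
	ContainerPath string // path of the mount inside the container
	Source        string // proposed rename of HostPath: a host path, block device, or NFS/CIFS source
	Readonly      bool

	// Proposed additions so the runtime can perform the mount itself.
	Type    string   // filesystem type, e.g. "xfs" or "nfs"
	Options []string // filesystem mount options
}
```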
G: With the exported mount path, one thing you might realize is that the OCI mount spec is very similar, in that it allows the specification of options and a type (basically the filesystem mount options and the filesystem type) as well as a generic source. But for some reason it seems CRI did not offer this flexibility: the field was always a host path, and the expectation was that it would always be a bind mount.
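For comparison, the OCI runtime spec's mount structure (from github.com/opencontainers/runtime-spec/specs-go) already expresses this; for example, a real filesystem mount rather than a bind mount can be written as below, with illustrative values.

```go
package main

import (
	"fmt"

	rspec "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// An OCI mount entry expressing a direct filesystem mount instead of
	// the bind mount that CRI assumes today.
	m := rspec.Mount{
		Destination: "/data",    // where the container sees the volume
		Type:        "xfs",      // a real filesystem type, not "bind"
		Source:      "/dev/vdb", // a block device instead of a host path
		Options:     []string{"noatime"},
	}
	fmt.Printf("%+v\n", m)
}
```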
G: So that is, I think, one of the major things from the SIG Node perspective that I'm trying to get across: this is going to be one of the major changes that allows the communication between the kubelet and the CRI runtime.

The CRI runtime is then going to deliver the same options to the OCI runtime. As I explained, the OCI mount spec already has fields to specify the filesystem type and the mount options. So basically the OCI runtimes will need a set of enhancements to take the mount options and type from the CRI side and process them appropriately. And finally, a micro-VM OCI runtime will need to process those fields, take care of setting up, say, virtio-blk or whatever mechanism it chooses, and perform the final mount.
G: This pretty much covers the flow of setting up the pod. Beyond the pod being set up, there could be other filesystem-oriented APIs coming in from the kubelet as part of filesystem management. That might involve getting volume stats, and NodeExpandVolume, where the volume is being resized in an online fashion.

These are all optional capabilities; the CSI plugin can say, hey, I do not support them in this scenario. But if it does want to support them, one of the additional changes being proposed is that the kubelet have the ability to send down to the CSI plugin an opaque sandbox ID that it obtains from the CRI side.
G: This is just opaque and purely optional, and it's going to be mainly used by the CSI node plugin to talk to an interface that the runtime surfaces, potentially through a CLI, that says: this is how you obtain the filesystem stats, or resize the filesystem, for a filesystem that the runtime has mounted. So that pretty much covers all the aspects of this.

I guess what I wanted to ask is whether you all think this is something worth pursuing, and if so, I guess the next step is going to be filing the enhancement issue or the KEP PR. So, are there any sort of major objections?
H: This is Lantao, and I remember a while back Kata supported something like this, using Kubernetes raw block volumes. So basically, if customers mount their volume as raw block on the host, then the Kata shim will identify that and, instead of mounting it into the VM, it will directly expose it into the VM; that logic is on the other side, in the Kata shim. I remember they did that one or two years ago, if I remember correctly. I mean, I don't know whether you know about that, whether that happens, and whether that works for you.
G: So this was actually inspired by the Kata community. I was working with Eric Ernst and others from there, and basically the feedback was: let's try to pursue this as a KEP and sort of get to this phase where the kubelet and CRI are more natively involved, making this general-purpose, because other micro-VM runtimes beyond Kata might be able to leverage this as well. So yeah, the Kata community is sort of involved.
H: Yeah, because I'm curious about whether that approach works, and if it does, what's the current status, and if it doesn't, what's the problem, so that maybe you can better evaluate whether we should make the change in the kubelet and Kubernetes instead. Because I remember previously they tried to do that purely underneath, just based on the Kubernetes raw block volume, to implement this.
G: Sure, sure. One of the main things that this also enables is, although this diagram presents the raw block scenario, this can also work for NFS, in that the NFS mount might need to happen inside, and the specific filesystem and the mount options that are involved are basically parameters that the CSI node plugin can control and pass over to the runtime through this approach. That may not be possible with raw block, because in the case of raw block it's just the block device that's getting passed over.
C: This is my first time reading this one. Just like what Lantao said, there are many similarities to the previous proposal, so I at least need more time to understand this one. And also, have you talked to SIG Storage yet?
G: Yes, there was that presentation on Thursday, as part of the last SIG Storage meeting. Basically, I went over an overview like this; I didn't have this specific diagram there, but essentially they did want to follow up through kind of like the official KEP and more design-focused sessions.
H: Yeah, to me, since luckily the Kata community has already experimented with the previously proposed approach, and since, given that we're having this discussion, it seems that that one doesn't work very well, I feel it's useful to understand which part doesn't work. Then we know better which part of the abstraction we should move up, and we understand the problem better. Because if everything can be solved outside of Kubernetes, why do we add more abstraction to Kubernetes for this?
D: Okay, so this also reminds me of conversations I've had recently with folks from Kata, where they wanted more information being passed as part of RunPodSandbox, so Kata can behave better compared to how it does today; some examples were around mounting devices and so on. So is there an intersection with the problems over there or not? That would be useful to understand.
C: I like what Mrunal and Lantao suggested: let's have the doc include what the previous proposal from the Kata community was and figure out where the bottleneck is. And what you mentioned earlier, the performance and the complexity and all those kinds of things, actually makes sense to us.

This is why, in the past, we tried looking into how to improve the overall performance and also the overall usability for the Kata use cases; that was a couple of years ago. But the obvious thing is that it did not meet the other requirements, and Mrunal also mentioned that. In your presentation earlier you also mentioned that certain information needs to be exchanged between the several entities or components involved, for example the sandbox ID, all this kind of information, right?
C: Yeah, I think it's worth the time to pursue, but "worth pursuing" doesn't mean this is the final design. I just want to say it's definitely worth pursuing, right? We want to understand what the problem is, what kind of problem. We definitely know this is a problem from the user's perspective. We just want to see, with the previous approach, what the problem was: why it couldn't be pursued, why it didn't satisfy customers' use cases.
G: Great, thanks a lot for the inputs. I'll follow up with more details in the doc and go from there. Thanks. All right.
A: Thank you, Deep. I think the next one is non-graceful node shutdown, and I'm sorry, I don't know the name.
E: Hi, yeah. So I'm working on this one, the non-graceful node shutdown KEP. Basically, when a node shuts down, after five minutes by default the pods will get into this Terminating state, but then they will get stuck, because there are still volume attachments that are not deleted. So the KEP proposes to clean those things up, clean up the pods and clean up the volume attachments, so that the pods can run on a different node that is running.

The problem is that when a node is not ready, we don't really know whether that's really because it's shut down or it's just a network partition. So I see that you have the graceful node shutdown feature, and you have this node shutdown manager introduced there, which actually knows exactly whether the node is being shut down.

So I'm just wondering if it's possible that we do something there, like add a taint over there. Because right now, I think, from outside, from other controllers, we can only see that the node is not ready, but we don't know whether that's really because of a shutdown, right?
F: Yeah, on the node's not-ready state there's a message applied for a shutdown specifically.
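A minimal sketch of how an external controller could use that: scan the node's Ready condition for the kubelet's shutdown message. The exact substring matched below is an assumption; as noted in the next reply, the real string should be confirmed against the kubelet's node shutdown manager before relying on it.

```go
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// nodeReportsShutdown checks whether a Node's Ready condition carries the
// kubelet's shutdown message. The "shutting down" substring is an assumed
// placeholder; verify the actual message in the kubelet source.
func nodeReportsShutdown(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionFalse &&
			strings.Contains(cond.Message, "shutting down") {
			return true
		}
	}
	return false
}

func main() {
	// A node with no conditions set reports false.
	fmt.Println(nodeReportsShutdown(&corev1.Node{}))
}
```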
E: Okay, so, okay, that's awesome. So I will check the code and see exactly what string that is, and if I have questions maybe I will reach out to you, David. Nice to meet you. Okay, looks like that part is probably solved, then.

So, yeah, there's another part of this. The non-graceful node shutdown KEP initially was trying to address both the shutdown case and also a network partition case, where you don't really know whether it's really shut down.
E: But I think the problem is that there are some review comments, or people are concerned, that if you don't really reboot the machine, like if this is on bare metal, then it will cause corruption.
E: So that's why I was actually thinking about whether we can narrow down the scope of the KEP to just address a real shutdown first. Because otherwise, right now we're kind of stuck, because you don't really know, right? It could be either way: it could be really shut down or not shut down. And initially we didn't really check, because we just look at how long the node has been not ready, and then it actually relies on the storage system to determine whether it is safe to do a force detach.

So that's another problem, right? Because not every storage system can do that. I think initially the KEP was really meant for all providers, but even for cloud providers we're not sure whether everyone can actually make that check. So yeah, there are still some gaps there.

But I'd like to start somewhere. If we can narrow this down, at least we can start to address this problem, because this one I can actually reproduce all the time. You know, if I just go shut down the VM, I have this problem: I have a StatefulSet, and the pods just get stuck; they're not going anywhere until we apply this change. At least that can solve this problem.

So that's why I want to have a way to really know whether the node is really shut down or not.
E: We had a meeting on this before, right? Quite some time ago. Yeah, it's a long-running problem, but I'd like to make some progress at least. Because I know that even if it's a real shutdown, right now, if it's running a StatefulSet, I can't get the pods moved to a node that is still running. So at least, you know, if I can get this subset of the problem solved, hopefully, then we can move forward, and then maybe we can come back to the real network partition problem.
F: Yeah, I think that makes sense, because I've also seen this KEP around for quite a while; I've also been tracking it a little bit. It makes sense to start with a narrower use case, where you already know the shutdown is actually happening, and go from there. I think that makes sense.
B: Yeah, I think it's a great idea to check the reason, similar to the graceful node shutdown. I would caution against using a taint; I've looked at taints proposed in other enhancement proposals.

I think the problem with taints is that they're just potentially unreliable, and a lot of the time we try to leave them up to the cluster administrator to decide whether they want to set them. So, for example, if we are passing this through in a reason, then, if somebody wanted to also add a taint, they could do that; but if we apply a taint, we don't give them any choice.

It can also be a bit fragile: I think the kubelet can only apply a taint once, at startup, and then it can't ever do it again.
E: I think the reason... another thing is that if the shut-down node comes up again, we want to be able to stop it from getting any pods scheduled on it; we want to clean it up first. So in that case, I think this not-ready reason is probably not enough, right? So that's why I'm thinking we may still need a taint, because if the node comes up, I think the status will change, right?
E: Okay, all right, sure. Maybe, okay, that's an idea; we can take a look at that. So if the previous status reason is "shut down", we wait, clean up, and then change it back. Okay, okay, I'll take a look. So would that also be in the... oh, that's probably not in the shutdown manager anymore? I think it's probably handled in another part.

Okay, great, all right. So yeah, I will update the KEP. I think this is, you know, the KEP is owned by SIG Storage, but I think SIG Node is participating.
A: So I'm looking at Lantao. Thank you, Lantao, for updating the comment from the previous action items, from the previous discussion with Deep; it's great. Anybody else have any agenda items? We have six minutes left, so we can either use them or take them back.