From YouTube: Kubernetes SIG Storage 20190131
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 31 January 2019
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.trrf65a1ive9
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:07:28 From hemant : https://github.com/kubernetes/sig-release/blob/master/releases/release-1.14/README.md#enhancements-freeze
A: All right, today is January 31, 2019. This is the meeting of the Kubernetes Storage Special Interest Group. On the agenda today, we're going to review the planning spreadsheet, and then, if there's anything else you'd like to discuss, please feel free to add it to the agenda. Looks like we don't have anything at the moment for planning.
A: We have the enhancement deadline this week; it just passed on Tuesday. The enhancement declaration process was changed slightly this quarter to require a KEP. I think this came as a surprise to some of us, so some of the features didn't end up getting KEPs created in time, and if that is the case, one of the options we have is to file for an exception.
A: I think Michelle has been reaching out to folks to coordinate that. So if you have a feature here that doesn't have a KEP, it's probably been removed from the milestone, and if that's the case and we still want it as part of the milestone, please make sure that the KEP exists and is ready to merge, and file an exception. So let's go ahead and get started with the status review, unless somebody has a question.
A: Okay, well, the status update, as far as I understand, is that the larger CSI migration plan was blocked waiting for the CSI CRDs. There was a question about how those CRDs would get installed. Today we're using an add-on, but that process is not scalable, and we're looking for a more robust solution. The immediate migration plans for alpha this quarter are not in jeopardy because of that, though.
D: The KEP has been merged on time, so we're looking good there. I got some feedback post-merge from Jordan; I should have that wrapped up today. I have a PR with his recommendations and suggestions to the KEP, and then I'm trying to wrap all that up so I can focus on the technical implementation. Cool.
E: Yeah, this one, I did not catch the email that said we need a KEP, and furthermore, I was under the impression that, because this was already in the last release, nothing needed to be done process-wise, so that's totally my fault there. I've proposed a KEP as of last night. Michelle said she was going to ask you to review it. Okay, I think.
E: One can get an exception, because this is one of those features that's already merged; we just want to move it from alpha to beta, which shouldn't make anyone feel weird. I do have one small issue to bring up: someone whose name on Slack is "dark owls" (I don't know who that is in real life) posted a PR to add raw block support to the host path driver, which is one of the things that needs to be done here. So I was reviewing his patch.
E: I noticed that the host path driver is stateful, but if it restarts, it loses all of its state. So you could never use the host path CSI driver in production and expect the right thing to happen. Is that intentional? Is it understood that the CSI host path driver is only for testing? Yeah, okay.
C: I think Jan made a change a while back to the host path driver. It used to just create a volume out of the container rootfs; I remember Jan made some change to actually create the volume out of /tmp or something. I don't know if that would help in terms of making the host path driver more persistent.
E: I mean, there are approaches, but I posted a bunch of code review comments along the lines of: this is not idempotent, this is not idempotent, this is not idempotent. Maybe we don't care if it's only for testing, but I was assuming that you would want to have a hardened implementation even in the host path driver. Yeah.
A: I mean, the intent is for the host path driver to be used for testing; it's supposed to be a simple kind of mock driver. But if there are deficiencies in terms of the tests that we're trying to run, then the driver should be improved. If somebody has a use case for production, then we need to take a step back and talk; that's not good. Yeah.
A: Yeah, go ahead and file an exception. Take a look at that email that was sent out, I think on kubernetes-dev, about this. There's, I think, a burndown email group that you can send the message to; CC me and Michelle, and I will go ahead and +1 it as "yes, SIG Storage approves this." So do you need a review?
B: I think that doesn't really matter, because the online resizing path also calls into the volume resizing code. So we can focus on that; we can reduce the scope of it. But just one small thing: the online resizing code is separate, and it only changes the time and the place from where the actual RPC call will be made for resizing. So it's not all that different; that's what I'm trying to get across, I guess. Okay.
C: This is also blocked by the CRD install decision. We're going to discuss this at today's SIG Architecture meeting. SIG Architecture is working on a long-term solution for the CRD install, but what I want to discuss today is what short-term solution we should pursue so that we can deliver this in this release.
A: If anyone's interested, tune in to the SIG Architecture meeting today; I think it will be in a couple of hours. Next up is redesigning the scheme for per-node volume attach limits. This is design-only, and it is required for the CSI migration. Any updates on this? I think you were looking for an owner last time. Yeah.
G: The issue here is, we will not be doing it as a conformance suite, but we probably would be doing it as a validation suite to start with. So I expect tests that would be promoted into the validation suite. Eventually, at some point in time, when conformance decides on profiles or whatnot, it will then become conformance. Part of the process is to break up the table-driven tests into individual tests, so there is some duplication in tests.
H: So the KEP has been submitted; it's not been merged. I've been addressing issues with the submission process. It's my first KEP, so it's been a learning process. There's also a work-in-progress PR for it that's up and available right now; I'll edit the description here afterwards to get you that. It's currently in flight. I think it's just pending the merging of the KEP, which I believe we may have to file an exception for, if I understand correctly.
A: The fsGroup solution that we have is very hacky. It requires us to go in and recursively iterate through the entire volume to chown every single file and directory, which has a number of issues, including just the fact that it takes so long. It can take hours to complete on a very large volume.
B: And one more thing just to throw in: people want to use secrets and such as SSH keys, and because these keys have an fsGroup, they naturally have group-readable and writable permissions. That means they cannot be used as SSH keys and things like that. So that's another motivation for letting users use UID for permissions rather than just fsGroup.
A: We had an issue somewhere, but what I want to do is take a step back from a specific implementation and say: all right, what are all the problems that we're trying to solve here, and what is the best way of solving them? It might be that the UID solution is the best answer we have, but if somebody could evaluate what the current state is and what options we have, that would be awesome, I think.
A: Okay, next up is the refactor of the kubelet driver registration to a reconciliation model. There's somebody in the community who actually stepped up to work on this. She submitted a PR as a temporary workaround for CSI and is interested in implementing the larger solution. I will have to dig up that issue and PR and post them here.
A: That is excellent progress, cool. Yep, so we'll take a look at that PR. Next up is provisioning capacity reporting for generic topology, which is required for local volume dynamic provisioning. This was a design item for this quarter. We didn't have an owner for it; I think we're still looking for an owner. Yeah.
A: Okay, so if anyone's interested, please reach out to Michelle and take a look at the brainstorming doc that she has around this, and then hopefully we can find an owner. A quick summary: the item here is to make Kubernetes aware of the capacity that it has available for local storage, and then use that to actually be able to provision from, essentially, a pool of storage, dynamically generating volumes instead of having to go in and pre-provision volumes.
C: So right now, the major use case is local volumes. However, the CSI spec is generic enough to work with not just local volumes but any other type of driver. So the challenge here is: how can we represent available capacity in a generic way that's not just specific to local volumes? I think that's the main challenge that this design is trying to work out.
C: Steve, reach out to me. I don't want to share it globally right now, because it's just sort of brainstorming ideas, so reach out to me and I'll share it with you.
A: Okay, so maybe it's worth having a larger discussion on this. If there's enough interest, maybe we can set up a separate one-hour meeting to talk through the use cases. I think Ben raises a valid point of what we really, absolutely need, and if it becomes too difficult to move this design forward in a more generic way, maybe it's easier to just unblock local PVs rather than come up with a more generic solution.
C: I don't think there's an owner for this, yep, so it might just be punted. Okay.
B: Matthew moved the existing feature design from the old docs into the community repo; I put it into a KEP yesterday. It needs to be merged, and it also has to go through the exception process. We have at least one item worth discussing: whether we want to keep it opt-in or make it the default. Michelle and I were talking a bit about whether it should be opt-in, like via a storage class parameter. So yeah, that's the one kind of open item.
A: Next up is auditing CSI PV/PVC code for issues related to #72347. We have nobody assigned to this work. There was a serious blocker bug found in local PVs, and we wanted to do a broader security audit off the back of it. We were looking for an owner; I believe, Michelle, this is still something that we're looking for an owner for.
C: This is for local volumes. Yeah, I think I put this as one of the items that we need to do before local volumes goes GA. Okay.
B: The current design is blocked on the idea that it assumes all resizing will be online, and that's a bit of a problem, because that doesn't work all the way. The author of the design has not gotten back to us, so I don't know what to do. It could be solved, but I think we need some feedback from the author to make it work. Yeah.
A: All right, thank you very much for the update. That is all I have for the 1.14 planning spreadsheet. Looks like there are no PRs that need to be reviewed for this, and no design reviews. The last item here is from Jerry: Vault, the Linux Storage and Filesystem conference. Do you want to talk about this? Yeah.
F: In theory... I met with Michelle over a year ago, and we had a flex driver POC for the Andrew File System (AFS). I had even left the company, and I'm just back at the company, and haven't really worked on this at all in the last year. But in theory, I'm going to have a CSI driver to show at Vault, and then shortly after that there's a high-energy physics conference, and I'm struggling a little bit. If anybody could spare some time to help out, that'd be great.
A: The way that topology works for CSI is a two-step process. One is when the plugin is registered: when the driver is registered with kubelet, it basically gives kubelet a set of labels to apply to that node. So if, at that point, you have some way to discover that the components you're looking for are available on this node, you can set some labels to say, you know, this is a schedulable node for your CSI driver.
A: The challenge, though, is it will still require a user to set up some sort of affinity on their pod to basically use that. We have no way for the scheduler to say: I know that this driver is currently only available on these nodes and not on those nodes. It requires user intervention to complete the loop, though, I think.
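The two pieces above can be sketched in a few lines (the label keys and node names here are made up for illustration): the driver hands kubelet labels to put on the node, and the user's pod carries a required node affinity that the scheduler matches against those labels.

```python
def schedulable_nodes(nodes, required_labels):
    """Return the names of nodes whose labels satisfy every required
    key/value pair, mimicking a required node-affinity match."""
    return [
        name
        for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in required_labels.items())
    ]

# Labels that the driver (hypothetically) asked kubelet to apply:
nodes = {
    "node-1": {"example.com/csi-driver": "installed", "zone": "us-east1-a"},
    "node-2": {"zone": "us-east1-b"},  # driver components absent here
}
# The affinity the user currently has to write by hand on the pod:
affinity = {"example.com/csi-driver": "installed"}
print(schedulable_nodes(nodes, affinity))  # ['node-1']
```

This also shows the gap being discussed: nothing here is automatic; if the user forgets the affinity, the scheduler happily considers node-2 too.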
F: I don't want to say "schedule me anywhere"; it'll just be "schedule me on a pod that's near, network-wise, the file server." Or actually, we can do something even wackier: we could say "schedule me anywhere that has room for it," and in the background backfill the data, and when it's done, switch over from remote to local (you know, local in the network sense). Okay, so.
A: So, assuming you don't move the volumes around, assuming that they are sticky to a given node once they're provisioned, this use case is supported by the existing topology feature. Somehow, some way, you need to basically label every node when the node is registered with your CSI driver, which is the node registration call in CSI.
A: That's going to generate a bunch of labels on your node object, and then, when you dynamically provision, you don't have to specify any new parameters, because your provisioner automatically takes care of figuring out where to provision. But once the volume is provisioned, your CSI driver should return the topology fields on that volume to say that it is sticky to whichever node, you know, node foo. Then the scheduler automatically says: oh, I know that this volume is only accessible from node foo, and it can make the appropriate pod scheduling decisions.
F: For example, say I want to run a job, and the data is not even in the Kubernetes cluster; it's in my file system, far, far away. We have a global namespace, and I say I want to provision this pod, and I don't want it to be scheduled until that volume has been migrated into the cluster so that I can locally access it. We just find some place, some node, that's a good place to put that data, and then we want to say: now run that job.
C: You might be able to do that if you delay provisioning, right? You can delay provisioning so that the scheduler can tell your driver which node it's chosen, and then, when you provision the volume, you can go migrate the data at the same time. So then, once the data is migrated, the PV object actually gets created.
A: And the PV automatically even gets bound to the PVC. So basically, what you're doing is delaying your provisioning: a provision request comes in, a CreateVolume request, and you hold off on completing that request until you are able to make a determination about where it should be placed, which node. Then, once you have that information, you complete the request, and, as part of the response, you say: I have a new volume, the name is this, and oh, also, it has these topology labels, which say it belongs to that node.
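That delayed-provisioning flow might look roughly like this (a sketch under assumptions: the function names and topology key are invented, and a real CSI driver would implement this inside its CreateVolume RPC handler):

```python
def create_volume(name, chosen_node, migrate_data):
    """Complete a provision request only after placement is known: do the
    (possibly slow) data migration for the chosen node, and only then
    answer with the volume plus its topology, so the PV that gets created
    is sticky to that node and the scheduler can act on it."""
    migrate_data(chosen_node)  # e.g. backfill the dataset onto that node
    return {
        "volume_id": name,
        # The topology returned here ends up as node affinity on the PV,
        # telling the scheduler the volume is only accessible from there.
        "accessible_topology": [{"kubernetes.io/hostname": chosen_node}],
    }

migrated = []
resp = create_volume("vol-1", "node-foo", migrated.append)
print(migrated, resp["accessible_topology"])
```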
A: Yep, so you can have lots and lots of storage classes. There has been a request to allow changing storage class parameters per PersistentVolumeClaim. We've been very hesitant to do that, because it causes us to introduce implementation details for a storage system into the PVC object, and the PVC object is supposed to be portable. Right now, the best way to do it is through the storage class.
F: So as long as I pre-create a storage class corresponding to that data set and whatever other requirements, then okay, cool. And is that for administrators or users? Administrators.