From YouTube: Kubernetes SIG Storage 20200409
Description
Kubernetes Storage Special-Interest-Group (SIG) Workgroup - 09 April 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.2e66yd1ccuyj
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
So we have this planning sign-up sheet for 1.19; we'll just go over it item by item, find out the owners, and see who will be working on each one. The first one is CSI migration, next steps. Do we still need this item? Mushara, I see you added a few individual items for each cloud provider.
Actually, okay, so who will be the owner? Is it beta or alpha — are we trying to go beta? If we can get CI, then we want to get it to beta. If we can't, then of course it has to be alpha, but we are trying to get it to beta. I don't know who I should put on as the owner — oh, just leave this one for now, but I can help give you an update.
So I think Andy from Microsoft has been the main person working on this. Oh sorry, wrong column.
So we've been trying to discuss this in the CSI Proxy meetings, but we have not yet gotten the right folks to discuss it with from, you know, the Windows side of things. So it's like a group effort at this point, and we have not really finalized a path forward, so maybe either me or they will keep updating it.
Right, sorry, I was saying that for item number 4 there's a manual workaround that works. I mean, on the last call we agreed, and Jing had some concerns, and we'll schedule another call. Okay, yeah, there's a chance we say that, okay, this is not something we need to fix, because the manual workaround is enough — and the quota system would be broken, and all these considerations. So it's probably fine to have item number 4 as P2, in that sense.
I know Jing was working on a couple of bug fixes around this area. I can follow up with her to see if she's able to continue working on it. Okay, so.
The rescheduling of pods — that is now fixed in 1.18, or will be fixed in external-provisioner 1.18, and we can now come back to the KEP. It is aimed at increasing the chance that the scheduler actually picks a good node where the pod can run. The part where I'm uncertain is how to handle ephemeral volumes; we have another line item here, further down, about CSI.
CSI ephemeral volumes going GA: when I started thinking about this — you've probably seen the mail on the SIG Storage mailing list — I'm not sure whether it's a good idea to really keep adding more special cases onto CSI ephemeral inline volumes for storage capacity tracking. We currently don't have a size field, for example, so my current KEP proposes to add that, but that's just another crutch on top of other crutches, in my opinion, and I would rather like to see it rethought a bit.
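For context, a minimal sketch of what an inline CSI ephemeral volume looks like in a pod spec, built as a plain dict in the shape the API accepts. The driver name and the `size` attribute key are hypothetical examples; the point is that the `csi` volume source has no first-class capacity field, only opaque `volumeAttributes` — the gap discussed above.

```python
def inline_csi_volume(name, driver, attributes):
    """Return a pod .spec.volumes entry for an inline CSI ephemeral volume."""
    return {
        "name": name,
        "csi": {
            "driver": driver,
            # Free-form, driver-specific parameters. There is no dedicated
            # capacity/size field here; a size request, if supported at all,
            # has to be smuggled through these opaque attributes.
            "volumeAttributes": attributes,
        },
    }

volume = inline_csi_volume(
    "scratch",
    "example.csi.vendor.io",  # hypothetical driver name
    {"size": "1Gi"},          # hypothetical, driver-specific key
)
pod_spec_fragment = {"volumes": [volume]}
```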
So I think Louis brought this up on the SIG Storage mailing list at some point. I think the way that we use it, we auto-generate a random volume ID when we do a mount call — basically the NodePublish call — and I think the spec implies that that ID should be something that comes from the storage system itself, but in this case there is no provisioning step.
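A minimal sketch of the scheme being described: because there is no CreateVolume step to produce an ID, the volume handle is synthesized at NodePublishVolume time from pod-scoped inputs rather than coming from the storage system. The exact format below (a `csi-` prefix over a hash of pod UID and volume name) is an assumption for illustration, not the literal kubelet code.

```python
import hashlib

def ephemeral_volume_handle(pod_uid: str, volume_name: str) -> str:
    """Derive a volume handle for an inline ephemeral volume.

    No provisioning step exists, so the handle does not come from the
    storage backend; it is derived from pod-scoped data. The 'csi-'
    prefix and hash inputs are illustrative assumptions.
    """
    digest = hashlib.sha256(f"{pod_uid}/{volume_name}".encode()).hexdigest()
    return f"csi-{digest}"

# Deterministic per pod + volume; the storage system never issued it.
handle = ephemeral_volume_handle("1234-abcd", "scratch")
```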
So it may be that we're okay with that, but it's something to kind of consider as we drive towards GA for this feature. And I also agree with Patrick that there's a lot of other functionality we need to add. I think we were talking to Jen yesterday, and she mentioned how — or was it Michelle — that we don't do...
The design is good for that purpose, but there are other useful use cases for it — for example, PMEM storage, or just LVM storage, that's provisioned by a normal CSI driver, for scratch space for applications, right? It's very easy to specify that an application needs such volumes if it can be inline in the pod spec, because that removes all of the need for higher-level logic — for, you know, an operator which creates volumes for specific pods, and all of that. So it's good — it's a good API!
My proposal is that we keep the CSI ephemeral volumes' current design as it is, but we keep it in beta, and the reasoning is that we just don't know yet how to do the other things that ephemeral volumes in general would be useful for. Therefore I think it's too early to just say we are done with this, that we know this is exactly what it should be. Perhaps we'll come back to it and may have to add a size field, for example.
Generic or generalized — both work — generic ephemeral volumes. The goal has to be, in my opinion, that this works with a standard CSI driver, without changes in the CSI driver, or perhaps even with any other storage system. It might even be an in-tree storage system that provisions the volume, but the handling — the semantics — should be ephemeral, and it should be inline, because that's the useful thing for scratch space: you don't need to manage a PVC separately from the pod which uses it.
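One way the generic ephemeral volume idea sketched above could look in a pod spec: an inline volume whose lifecycle is tied to the pod, but backed by an embedded claim template that any standard CSI driver can provision. The storage class name is a placeholder; note that, unlike the inline `csi:` source, the claim template carries a proper size request.

```python
def generic_ephemeral_volume(name, storage_class, size):
    """Pod .spec.volumes entry using an inline ephemeral claim template."""
    return {
        "name": name,
        "ephemeral": {
            "volumeClaimTemplate": {
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "storageClassName": storage_class,  # placeholder name
                    # Unlike the inline `csi:` source, a real size request:
                    "resources": {"requests": {"storage": size}},
                },
            },
        },
    }

volume = generic_ephemeral_volume("scratch", "example-lvm", "1Gi")
```

The PVC created from this template would share the pod's lifecycle, so no separate object has to be managed by the user.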
If we follow what we did in-tree, then it makes sense for ephemeral volumes only to be defined inline, not through the persistent volume infrastructure, because their lifecycle is tied to the pod. And if the lifecycle is tied to the pod, having an object like a PV exist, but the underlying storage not exist because the pod was deleted, would be very odd.
The real concern is that we need to decide if we want ephemeral volumes to go GA before we finish volume pools — or storage pools — because if storage pools, and with it monitoring of the available capacity, goes beta with proper support for ephemeral volumes, or if we come up with a solution for ephemeral volumes together with storage pools, then we can... You know, we need to get a consistent design before ephemeral volumes goes GA.
But that's why I came up with this generic storage — generic ephemeral volumes — because the solution that was originally envisioned was to have a special extension for ephemeral volumes for the storage capacity tracking, and I just really don't think that this is the right thing. So, but yeah, let's discuss this storage pool idea together with ephemeral volumes in some upcoming meetings, yeah.
The simplified plan for storage pools and capacity tracking would be to ignore ephemeral volumes for now — it would just be in alpha anyway — but I think it's worthwhile to target some progress already now, and I think alpha with the API as in the KEP that is pending is mostly the same. We can just exclude ephemeral volumes for now and make it clear that this still needs to be solved before beta.
Agreed, and I'll just, like, put it out there: I think the right solution, in my opinion, would be to change the current ephemeral volume support, because we do have that gap with ephemeral volumes, that they try to get scheduled on a host that doesn't have the capacity, and then we have, you know, stuck pods. Okay.
Okay, next one is — okay, what, in-house? Yeah, so this one, we are, I think, getting close to getting the CSI spec back in; then we will continue. So this one, yes, should still be alpha, because we got a KEP in last time, but we have not finished the implementation, so it's still alpha. Sonique is the KEP lead; let me put myself as reviewer. Yeah, so I think this is fine, we'll just...
I'm just — storage API — oh, I think we should probably change it early. I think all of this... the status should not be... Should we also move everything? Maybe that doesn't matter, it's the status, but okay. So this one — I see that there are design meetings every week, so Jeff, you are still the lead for this, for Windows?
And the next one is PVCs that would not be auto-removed — Christopher said... I don't know if he's on today; he's probably not on the call. I will check with him. Last time he said he's interested, so I'll ask him if he's still interested in this for 1.19, so just keep his name here for now, and I can still be the reviewer.
So this came up in the last SIG Arch meeting. Basically, in 1.17 we moved the mount library out of the kubernetes repo into kubernetes/utils, and in 1.18 we had a number of regressions happen due to changes made in the other repo that were not caught quickly, because we don't run Windows builds, we don't run e2e tests against these changes. So the ask from SIG Arch is to, so...
The other problem is that this k8s.io/utils has a lot of other things besides mount, and whenever we have to cherry-pick a fix back, updating to the new k8s.io/utils dependency means we end up potentially pulling in a bunch of unrelated changes. So SIG Arch wants us to revisit this pushing of the mount library out to k8s.io/utils, and sort of think about a better way to be able to isolate the changes that we make, and also have better testing for it. So they suggest having it in its own repo.
If anybody really wants to touch a lot of the core code and move a lot of the core code around, this is your opportunity right here. The mount library is used by all of core Kubernetes and a lot of the CSI drivers, and so anyone who wants to really get involved with the Kubernetes code — this is your opportunity. Travis worked on this last, and I think it would be a good experience. So if you're interested, please reach out to any of us, and we'd be happy to have you work on this.
I need to jump out for another meeting, so, Jing, I'll let you carry on.

Sure.