From YouTube: Kubernetes SIG Storage 20200130
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 30 January 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.cfull7vwinc
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
N/A
A: We recently had the feature freeze deadline for 1.18, so we want to keep track of which items made it and which did not. If there are any PRs you want to discuss or any designs you want to review, please feel free to add them to the agenda, and then we'll go over the miscellaneous items. If there's anything else you want to talk about, feel free to go ahead and add it to the agenda doc. So, jumping straight into the spreadsheet, we're going to start from the top.
B: So the two PRs in flight are for bringing in the latest CSI changes. There's another PR that does not require any KEP, which is about calling node expand on all the nodes it got mounted on; I'll make a fix for it. And then there was a third issue about recovering from resize failures, which, as you know, people were keeping track of.

I agree; I opened the enhancement pull request, but it did not get merged in time for the enhancement freeze, so I think we're going to miss it.
B: The reason it took so much time for me is that it took time to realize that truly fixing the issue, like shrinking the volume while keeping the PV/PVC size consistent in three places, is almost impossible. We don't know the true size of the volume. In the CreateVolume RPC call itself the size is optional, and in the response the capacity is optional. So when a volume is created, we don't know the true size of it.
B
We
only
know
that
what
use
it
requested
the
other
problem
is
like
is
that
node
expand
volume
has
has
the
capacity
like
input
capacity
as
optional
and
output
capacity
as
optional
so
setting
the
value
is
coming
from.
These
RPC
calls
in
PVC
status,
which
capacity
could
lead
to
situations
where,
where
the
you
know,
like
the
the
plug-in
reports
FS
size,
which
is
smaller
than
the
the
user
requested
size,
it
will
lead
into.
You
know
like
resize
loop,
where
it
will.
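A toy model (not actual controller code) of the loop being described: if the PVC's status capacity were populated from the plugin-reported filesystem size, and that size is smaller than what the user requested, the controller would see the resize as forever incomplete.

```python
def resize_needed(spec_size: int, status_size: int) -> bool:
    # The resize controller triggers an expand whenever the requested
    # size in spec is larger than the size recorded in status.
    return spec_size > status_size

GIB = 1024 ** 3
spec = 100 * GIB                 # user asked for 100 GiB
reported_fs_size = spec - 4096   # plugin reports the (smaller) FS size

# If status were set from the reported FS size, every reconcile pass
# would conclude another expand is needed -- an endless resize loop.
assert resize_needed(spec, reported_fs_size)
assert not resize_needed(spec, spec)  # no loop if status echoes spec
```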
B: There are other race conditions too. So yeah, I proposed a PR that doesn't 100% fix it, but it allows the user to recover from the resize failure and try resizing with a lower size, at the cost that the quota will remain at the highest value. So if you expanded the volume from 10 to 200GB and that failed because, you know, the provider ran out of capacity or something, and then you tried with 20 and that succeeded, your quota will still be calculated at 200. So that's what my PR fixes.
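A toy model of the trade-off described here (not real quota code): the user can recover by retrying at a smaller size, but quota stays charged at the high-water mark of what was requested.

```python
def charged_quota(requested_sizes_gb: list[int]) -> int:
    # Quota is charged for the largest size ever requested on the
    # claim, even if a later, smaller resize is the one that succeeds.
    return max(requested_sizes_gb)

# Expand 10 GB -> 200 GB fails (backend out of capacity), then a retry
# at 20 GB succeeds, but the namespace quota still counts 200 GB.
assert charged_quota([10, 200, 20]) == 200
```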
C: Late last night I was able to get the disk working for the first time with everything integrated. There are PRs required in the node registrar, and there are a few PRs required in the kubelet CSI code, basically around path translations for Windows; that's the main theme of these changes. Other than that, it's more that we have to unroll some of these workarounds and make sure that CSI proxy has all the fixes that have been added to make it work, and then we'll have a stable thing.
A: Cool, thank you. The next item is improving CSI metrics; that was already completed. The next item is issues related to assuming volumes are mount points, and this was a bug fix that I believe we were looking for an owner for. I think the last status was that Jing may have started to work on this. Michelle, do you know, do you have any more on this?
E: So the KEP got merged. It is a trimmed-down version, so it's smaller than what we had planned for, but it has about two-thirds of what we wanted, because it has the controller and the agent part, and the CSI changes are in there too. The API changes are removed for now, so it's just event reporting for now; next is to submit an issue.
A: The calculations in the scheduler were based on the information that was being proposed, and there were a bunch of edge cases. Instead of trying to hack something together, the feedback we got was: let's come up with something more robust, maybe look into a reservation-type system. So we're going to go back and look at that for the remainder of this quarter.
G: So I wrote up the KEP, and we started to go into a rat hole around exactly how the data populator should work. So we scaled back the KEP to just focus on an alpha feature gate to open up the dataSource field so it can be any object, and that KEP was merged. So the plan is we'll implement that feature and then use the next quarter to, you know, try out various ways of doing data populators and see if we can find one that everybody likes.
A: So for the drivers, initially we were planning on just removing these, because we didn't have any owners for them. Michelle had folks reach out to her offering to help pick up the drivers and actually start implementing them. So the new plan is to keep them and get them into a better state.
H: The Kubernetes... I forgot which SIG is doing it, maybe the security SIG, but they're looking at rebasing all of the controller binaries to use distroless images, and what that means is that the controller manager can no longer shell out to things. So that's going to break the controller part of flex drivers, so I think that part, that functionality of flex, may get deprecated.
I: The KEP currently talks about a library similar to the external storage provisioner library, and the feedback we got was that a CSI approach would be better. So John Cope on our team wrote up a CSI-like provisioner for simple S3 bucket provisioning, with the interfaces following the CSI approach, and we have that working right now. We were then ready to schedule a community meeting to discuss the KEP, but learned that Andrew at Google has also written something that is very similar.
I: It sounds like it, but we haven't seen it, so Saad and I are going to try to talk about when we could meet and what we do next. So I don't really even know, because I'm not sure it makes sense to review our KEP if you guys are about to open something that I haven't seen a KEP for yet. I don't know if what you guys are doing would be an implementation of the high-level concepts of the KEP or something different. So we're kind of in limbo right now, I think.
A
The
end
goals
for
Andrew
are
the
same.
Let's
just
set
up
a
meeting
to
talk
kind
of
what
the
API
he
has
proposed
versus
the
one
that
you
have
to
code
and
figure
out
what
the
overlap
is
and
what
the
differences
are
and
if
the
I'm
gonna
guess
the
differences
are
probably
relatively
minor,
because
the
end
goal
is
identical
and
then
it's
a
matter
of
merging
the
two
proposals
into
a
single
proposal
and
moving
that
forward.
So,
let's
set
up
a
meeting,
that's.
A: So if there's no response, then what we can do is have a new owner open up a new KEP and then point a reference from this one to the new one to say: hey, since this one hasn't had any activity, we forked it and we're going to continue work on it there. If the owner of the original one ever comes back into the fold, they're welcome to help continue to drive it. Cool, thank you.
E: This one is getting complicated. So she has done all the work; there are two parts: one is the API changes, the other is the controller change. So I've been reviewing the API changes. Actually, Stella implemented that based on the merged KEP, and then Kim reviewed the code, and Tim has some comments which would basically go in a totally different direction.
E: Well, actually, what he said is reasonable, but we actually already went through those during the previous meeting; we actually had many meetings when we were designing this. There are mainly two directions. The first one, which was actually what he proposed here in the initial KEP, was to add this hook in, like a lifecycle struct inside the container, and the second option is to do the CRD with an external controller. So those are the two main directions, but there are variations of each.
E
In
the
cap,
but
then
before
moving
from
alpha
to
beta
we're,
going
to
evaluate
and
look
at
a
movie
machine
to
cubelet,
but
Tim
will
say
if
you
will
have
to
make
all
this
change,
wait
before
you
beta
bishops,
just
do
it
now,
so
you
need
to
decide
what
is
the
high
level
the
direction
right?
It's
totally
different
direction.
If
we
go
with
cubelet
direction,
then
that
means
we
don't
really
even
need
this
excursion
hoop
cook
repo.
We
shouldn't
even
created
this
repo
in
the
first
place.
So
first.
A: Because I didn't get... well, I guess the KEP is here, but yeah. Okay, that makes sense. Thank you, Shing, and sorry for all the back-and-forth. I think the interesting thing now is that all the low-hanging fruit for this SIG has been picked, and all the problems that we have left are the big, thorny, difficult ones. There are always multiple ways of doing it, and these are going to be the hard ones that will take time to get in. All right, moving back to the agenda: it looks like there are no PRs or design reviews.
A: Okay, the next item is KubeCon EU, which is coming up very quickly at the end of March / beginning of April, and the question is: are there any sig-storage-related activities there? There are a number of storage-related talks, of course, and we have a sig-storage intro session that has been approved. Beyond that, I am not aware of any additional activities. If anybody is interested in organizing something, let me know and let the sig know, and we can get something going.
G: OK, I guess I will point out one thing that came out of the discussion I had with him on, I guess it was Tuesday. I hadn't realized that the problem he was trying to solve was a different problem than the one I saw in this, and it turns out that my problem was the wrong problem and he was solving the right problem, but his solution doesn't actually solve the problem.
G: So if you resized to a ridiculously large size, your quota would be that huge size forever. And the way you actually get out of that situation, if you end up in it and you don't want to be in it, is you have to change the PV to Retain, then delete the PVC, make a new PVC of the correct size, and bind it back to the PV. And that's how...
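The escape hatch described here can be sketched roughly like this (a hypothetical example; the PV/PVC names and sizes are placeholders, and the exact rebinding steps can vary by provisioner):

```yaml
# 1) Keep the underlying volume when the PVC goes away:
#      kubectl patch pv example-pv \
#        -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# 2) Delete the over-sized claim:
#      kubectl delete pvc example-pvc
# 3) Clear the stale claim reference so the PV can be re-bound:
#      kubectl patch pv example-pv --type json \
#        -p '[{"op":"remove","path":"/spec/claimRef/uid"}]'
# 4) Re-create the claim at the size you actually intended,
#    pre-bound to the retained PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi     # the intended size
  volumeName: example-pv # bind back to the retained PV
```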
G: ...you know, the "my PV is way larger than it should be" situation. So as long as you have that escape hatch to address accidental resizes to the wrong size, I don't know if it's super valuable to have all the other stuff, because that was the problem I was most worried about: you know, "I meant to resize it to 100 gigabytes..."
A
Yeah
I
think
my
concern
is
just
even
with
the
new
proposal.
There
is
not
a
good
clean
solution
for
the
quota
issue,
so
I
think
the
it
solves
part
of
the
problem,
like
you
said
so,
a
user
that
hits
this
can
fix
their
own
volume
and
get
their
volume
into
a
good
state.
But
the
problem
is
that
the
quota
for
it
for
that
user,
for
that
namespace
is
still
screwed.
Yes,.
A: It's like addressing the symptoms, not the root cause. I think the root cause goes back to what Michelle is saying, which is, you know, the drivers are returning the wrong thing, and the CSI spec doesn't really require them to return anything at all, all of which combines to make it impossible for Kubernetes to ever know what the true capacity of the volume is. Yeah, but I think that...
G
All
of
those
issues
are
tangential
to
the
quota
issue
because,
because
the
quota
is,
is
always
going
to
be
what
the
court
has
never
related
to
how
big
the
volume
actually
is.
The
code
is
always
related
to
how
big
you
wanted
the
volume
to
be,
because
you
can
ask
for
a
one
gig
volume
and
the
system
can
give
you
a
10
gig
volume,
and
that's
that's
legal,
but
you
don't
get
charged
ten
gigs
quota
for
that.
G: I think it's nice to know the actual size of the volume, sure, but that doesn't get us closer to addressing the quota problem, which is just that the Kubernetes quota system doesn't deal well with trying to modify the quota of an existing object. It wants to set it at creation time and then only increase it, yeah.
H: ...can reconcile: in the case where you accidentally bumped it up and then you decreased it back, instead of keeping that accidental too-big value, you can actually reconcile and be like, hey, I shrunk it back and the actual volume size is the lower one, so I can actually reduce the quota again.
G: The idea is we had an allocated-capacity field, and it was like a one-way ratchet that only gets larger, and that's fine, because it prevents all the malicious use cases where a malicious user attempts to steal more space without getting charged quota for it. But you do have this other problem where sometimes the system just gives you more space than you asked for, and that's not your fault, and...
A
So
what
if
we
let
the
system,
in
addition
to
returning
the
true
value
of
the
volume,
also
return
a
value
which
is
what
should
the
quota
be
like
how
much
quota
should
be
charged?
And
so
then
you
let
the
storage
system
decide
whether
that
is
equal
to
the
real
value
of
the
storage
or
it
could
be
equal
to
the
requested
storage.
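As a toy sketch of that idea (hypothetical numbers and rounding rule): the backend reports both the real size it provisioned and the size it considers chargeable against quota.

```python
def provision(requested_gib: int) -> tuple[int, int]:
    # Hypothetical backend that rounds volumes up to 10 GiB chunks but
    # only charges quota for what the user actually asked for.
    actual = ((requested_gib + 9) // 10) * 10
    chargeable = requested_gib
    return actual, chargeable

# Ask for 1 GiB: the system hands back a 10 GiB volume, but the
# quota charge stays at the requested 1 GiB.
assert provision(1) == (10, 1)
```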
G: That was what I tried to get out of him, and he said any attempt to do that is going to cause API races that are unresolvable, and that's why he said it can't ever shrink, because the quota system just doesn't like that. And that's why I think maybe the better answer is: just get a whole new PVC in there and delete the old one. The quota...