From YouTube: CSI Community Sync 20210303
Description
Community Sync up Meeting for Container-Storage-Interface (CSI) - 03 March 2021
Meeting Notes/Agenda: -
Find out more about the CSI here:
https://github.com/container-storage-interface/spec
Moderator: Michelle Au (Google)
B
All right, today is March 3rd, 2021. This is the Container Storage Interface meeting. I think we will just go down our agenda for today. First topic is Jan's.
C
There is no easy way to get the physical size of the file system, i.e. how big the file system thinks it is. So we are scraping the output of the dumpe2fs utility from e2fsprogs, which is, well, possible, and we were parsing xfs information, which is ugly. And the XFS engineers told us that there is a special ioctl for XFS to get the size, and there is a simpler command-line utility, included in the PR, to get the value without too much parsing.
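The dumpe2fs scraping mentioned here can be sketched as follows. This is a minimal illustration, assuming the header output of `dumpe2fs -h`, whose `Block count:` and `Block size:` fields multiply to the size the filesystem believes it has; the helper name is made up for this example.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fsSizeFromDumpe2fs extracts the size an ext2/3/4 filesystem believes it
// has, in bytes, from the header output of `dumpe2fs -h <device>`: the
// product of the "Block count:" and "Block size:" fields.
func fsSizeFromDumpe2fs(out string) (int64, error) {
	var blockCount, blockSize int64 = -1, -1
	for _, line := range strings.Split(out, "\n") {
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue
		}
		n, err := strconv.ParseInt(strings.TrimSpace(parts[1]), 10, 64)
		if err != nil {
			continue // not a plain numeric field; skip it
		}
		switch strings.TrimSpace(parts[0]) {
		case "Block count":
			blockCount = n
		case "Block size":
			blockSize = n
		}
	}
	if blockCount < 0 || blockSize < 0 {
		return 0, fmt.Errorf("Block count / Block size not found")
	}
	return blockCount * blockSize, nil
}

func main() {
	sample := "Block count:              262144\nBlock size:               4096\n"
	size, err := fsSizeFromDumpe2fs(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(size) // 262144 blocks * 4096 bytes = 1073741824
}
```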
B
I guess one question is: in Kubernetes we only have things for xfs and ext4, and we don't really support other file systems. So if a driver is supporting a different file system, does that mean they need to sort of come up with their own utility library?
C
And well, but then we didn't get an answer whether this is safe, and how safe it is.
B
Okay, I guess, do we have a conclusion on this one (or do we need a conclusion on this one) if there's some other file system that can't actually get this information?
E
I actually implemented this in a couple of my drivers, and I noticed while I was implementing it that there are a few different options. But things like xfs_growfs and resize2fs are both idempotent: you can just call them over and over again, and if the volume is already the size of the underlying block storage, nothing happens. So I would imagine that any other file system would conform to those constraints.
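The idempotency described here is why a NodeExpandVolume retry can simply call the grow tool again. As a minimal sketch (the helper name is invented for illustration), a driver can also skip the exec entirely when the filesystem already spans the device:

```go
package main

import "fmt"

// needsResize reports whether an online grow is still required, given the
// filesystem's current size and the size of the underlying block device.
// Because resize2fs and xfs_growfs are effectively no-ops once the
// filesystem already spans the device, calling them again on a retry is
// harmless; this check merely avoids the extra exec.
func needsResize(currentFsBytes, deviceBytes int64) bool {
	return currentFsBytes < deviceBytes
}

func main() {
	// Device grew to 2 GiB, filesystem still reports 1 GiB: resize needed.
	fmt.Println(needsResize(1<<30, 2<<30)) // true
	// Filesystem already spans the device: a retry is a no-op.
	fmt.Println(needsResize(2<<30, 2<<30)) // false
}
```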
C
Oh yeah, but then there is a huge warning in the man page, or at least there used to be one. The man pages for xfs say to be very careful when you call xfs_growfs. Or it was somewhere else, I don't know, I don't have the documentation at hand.
E
Well, I was gonna say that that's one implementation. You can also call xfs_growfs in such a way that it just outputs the current size, and then you can compare that to what you think it should be and then do something. So xfs has a bunch of commands that dump the current file system size without trying to change anything, yeah.
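For example, `xfs_growfs -n` prints the current geometry without changing anything. A hedged sketch of scraping the size from that output (the layout below is what recent xfsprogs prints; the function name is made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// Patterns for the "data" geometry line printed by `xfs_growfs -n`, e.g.:
//   data     =            bsize=4096   blocks=262144, imaxpct=25
var (
	bsizeRe  = regexp.MustCompile(`bsize=(\d+)`)
	blocksRe = regexp.MustCompile(`blocks=(\d+)`)
)

// fsSizeFromXfsGrowfs returns block size times block count from the data
// line, i.e. the size the XFS filesystem currently believes it has, in bytes.
func fsSizeFromXfsGrowfs(out string) (int64, error) {
	for _, line := range strings.Split(out, "\n") {
		if !strings.HasPrefix(line, "data") {
			continue
		}
		bs := bsizeRe.FindStringSubmatch(line)
		bl := blocksRe.FindStringSubmatch(line)
		if bs == nil || bl == nil {
			continue
		}
		bsize, err := strconv.ParseInt(bs[1], 10, 64)
		if err != nil {
			return 0, err
		}
		blocks, err := strconv.ParseInt(bl[1], 10, 64)
		if err != nil {
			return 0, err
		}
		return bsize * blocks, nil
	}
	return 0, fmt.Errorf("data geometry line not found")
}

func main() {
	sample := "meta-data=/dev/sdb1 isize=512 agcount=4, agsize=65536 blks\n" +
		"data     =                       bsize=4096   blocks=262144, imaxpct=25\n"
	size, err := fsSizeFromXfsGrowfs(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(size) // 4096 * 262144 = 1073741824
}
```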
E
I think the downside to calling ioctls is porting them across architectures.
B
All right, yeah, I think that sounds good, given that right now we mostly know about xfs and ext4, and we have solutions for those.
A
So is the plan to have this merged and just say drivers need to do this, and then we are thinking about adding this to expand volume in Kubernetes sometime in the future?
C
The mount utilities are imported by many CSI drivers, so the resize with the proper size should be there soonish, I hope, in the next release.
B
Okay, next topic.
D
For the intent of the mount, what should happen to the new files? Originally my concern was that it is too specific to just say that new files should be readable and writable by the volume mount group, and it does prevent chown and chmod use, because clearly you cannot just chown and expect the new files to be group writable; they will generally be user writable.
D
It does seem to fulfill what we want for Azure File and CIFS mounts. So I kind of included that wording back in the spec, like what James suggested, and I think he hasn't had any... yeah. So I kind of went back on that and I thought, okay, fine, I'll just put it there, and I think in the future, if there are volume drivers that can support that concept, they apply the volume...
D
...group identifier, like the volume mount group, but only on mount. Basically, there could be a driver that applies the mount option the same way chown behaves internally, where on each mount the permissions are mapped, but when you write a new file, it doesn't automatically get group read and write permissions.
D
Yeah, if that happens, it's not good enough. You cannot use that capability, because the capability is limited to "new files should be group writable and readable". So that was my main concern, but with the proposal being alpha, maybe it's fine too. I don't know what other folks think, but it's fine to have that wording. It prevents chown and chmod.
D
Yeah, but it doesn't support the hypothetical use case that I was talking about: if there is a volume type or file system type that applies the permissions on mount, so that when you mount a volume you get group ownership, group readable and writable, but if a workload creates a new file, the new file doesn't automatically inherit that.
E
On that concern, I say let's get it merged, you know, and if there are questions about it, maybe we can specifically address those questions somewhere. Or, I like the idea of having a test that basically says: this is what we expect to work. If you pass the test, you've cleared the bar. Tests are much harder to misunderstand than English words, right? You either passed the test or you didn't.
D
Yeah, okay, so I have updated the wording for that, and I think it's fine for the implementation that we're proposing there. One more thing that I have changed, which James was looking into and had some concerns about, was this: originally my proposal said that if you NodeStage the volume with one group, then you cannot NodePublish with a different group, and I put "must" wording there.
D
If you remember, the wording was that NodePublishVolume must use the same group that was used during stage. I changed it; I relaxed the "must" to "may", actually. Well, I know, it's not exactly relaxing, but...
D
No, I dropped that thing altogether, because I realized that many drivers may not actually perform a mount during stage. I know Chris's driver does not perform any mount on stage. So if the actual mount happens at publish time, then that means the same volume cannot be NodePublished with a different GID.
E
I think I'm agreeing with you, but I'm confused, because I thought that what we had agreed to when we last met was that the only thing we want to guarantee will work is if it's the same, right? If you use the same one for NodeStage and NodePublish, we guarantee that will work; if you use different ones, whether it works or not is up to the driver. And yeah, that's the way I would communicate it.
E
So if you don't support it, there's no other option than to return FAILED_PRECONDITION. Is that what you're getting at, Michelle? Yeah, what we're saying is: if you do support it, you should succeed; if you don't support it, the way you communicate that is by returning a FAILED_PRECONDITION error. And I guess the "may" emphasizes the fact that you don't have to support it, but you can. But yeah, I don't know, it's a pretty subtle wording issue.
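A minimal sketch of the semantics being discussed, assuming a driver that only supports one mount group at a time: the names and the plain code type here are invented for illustration, and a real driver would return the gRPC FAILED_PRECONDITION status (google.golang.org/grpc/codes) from its NodePublishVolume handler instead.

```go
package main

import "fmt"

// Code stands in for the gRPC status codes a CSI driver would return.
type Code int

const (
	OK Code = iota
	FailedPrecondition
)

// checkPublishMountGroup models the proposed wording: a plugin MAY support
// publishing with a mount group different from the one used at stage time;
// if it does not, it signals that by returning FAILED_PRECONDITION, and the
// caller should not retry until it publishes with the staged group.
func checkPublishMountGroup(stagedGroup, publishGroup string, supportsDifferentGroups bool) Code {
	if publishGroup == stagedGroup || supportsDifferentGroups {
		return OK
	}
	return FailedPrecondition
}

func main() {
	// Same group as stage: always OK.
	fmt.Println(checkPublishMountGroup("1000", "1000", false) == OK) // true
	// Different group on a driver that cannot remap per publish: fail.
	fmt.Println(checkPublishMountGroup("1000", "2000", false) == FailedPrecondition) // true
	// Different group on a driver that opted in: OK.
	fmt.Println(checkPublishMountGroup("1000", "2000", true) == OK) // true
}
```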
D
Yeah, I think I originally had more verbose wording there, but I kind of reduced it down. So we are saying that, okay... we could say "if the plugin supports NodePublish..."
D
Yeah, in this case the CO cannot really do anything. The error recovery behavior says the caller must not retry until NodePublishVolume uses the same value as the one that was used during NodeStage. But in the real world this just basically means the CO will retry with exponential backoff.
D
Does that sound fair, Michelle? Or did you take a note in the...
D
Okay, yeah. So a similar thing we have to put for NodePublish. There's NodeStage and NodePublish, and for NodePublish it's: a plugin may support NodePublishVolume with a different volume mount group than the one that was used during stage.
D
Okay, I could update the wording. James also said that he needs to review the error conditions. His last comment was: "We need to review the error conditions and I need to think about the error cases a bit more. I'm a bit concerned about the complexity here; I would rather err on the side of caution, meaning more conservative, sacrificing flexibility for simplicity."
B
I guess this is James referring to these two clauses here about having the different mount groups.
B
Okay, yeah. I think once you change the wording in the description, then we can reflect it here. I don't think it will make it any simpler, though. I think what James was maybe asking for... was he asking whether we can simplify the error conditions?
D
I don't know his intent, because it was very brief, and I haven't talked to him after that. But looking at the message, I can only interpret it as: when we say NodePublish could use a different mount group than NodeStage, or multiple NodePublishes could use different mount groups from each other, we are being very flexible about how this is done. Maybe he's saying: could we make it more strict?
B
I
guess
if,
but
if
you
look
at
something
that,
like
a
plug-in
that
doesn't
support
node
stage,
then
theoretically,
they
should
be
able
to
do
different.
Mount
groups
on
node,
publish.
G
It can work, but should we require that it must work? The whole thing was that we wanted to leave the door open: if people are able to implement it, sure, go ahead and implement it. But we only wanted to require supporting a single group at a time to be able to claim the capability.
G
Well, that's where it gets tricky: we want to make it clear that even if you don't have NodeStage, if all you can do is one group, it's okay to fail here. And that's what brings in all the extra error handling, the complexity that we may or may not want. I mean, I'm fine with supporting it, but I don't deny that it's a little more complicated.
D
So basically we are saying that, for the same volume, there's a bunch of permutations and combinations when we are looking into NodePublish, and there are certain operations that are supported. If the target path is the same, then obviously it is supported: you could use the same mount group on the same target path, that's fine. But if you change the target path, then you've got to use...
D
We,
if
we
restrict
the
wording
that
all
note,
publish
and
notes
note
publish,
should
use
the
same
mount
group.
We
still
have
to
kind
of.
D
Then the ALREADY_EXISTS error indicates that the volume corresponding to the volume ID has already been published at the specified target path, but is incompatible with the specified volume capability or read-only flag. So we cannot use the ALREADY_EXISTS error for publishing the volume on a different target path, as far as I understood.
D
I
I
yeah,
I
don't
know,
I
think
I
will
make
the
wording
change
that
we
discussed
today
to
say
if
supported,
but
I
don't
know
the
error
handling.
What
james
means
by
that,
and
how
do
we
address
that?
Like
the
the
comment
that
I'm
referring
to
is
the
last
one
michelle
if
you
pull
that
one
and
this
thing,
so
it's
it's
just
at
the
bottom
of
the
pull
request.
B
All right, any other topics? Anything else?
A
This open PR by Patrick, right? That's also... I don't know if we got everything resolved yet.
A
Yeah, my only question is about that existing field, available capacity. Are we not going to do anything with it, just leave it? That seems pretty weird.
I
Hey, I just joined. Sorry.
A
I was just wondering whether every driver can report this maximum volume size, and if they don't know how to report it, just report available capacity. I guess I just have some questions, I mean, if we are switching to this. But if we are still staying in alpha, I think that, well, yeah, maybe that's fine. I think that gives us some time to figure it out.
B
Yeah, I guess we'll need to see on the Kubernetes side if it intends to still use this field. It might fall back to it, because I think it's an optional field. So it'll look at the maximum volume size first, and if that's not set, then it'll fall back to the other one.
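The fallback described here can be sketched as follows. This is a minimal illustration, assuming the two fields from CSI's GetCapacityResponse (maximum_volume_size is the new optional field, available_capacity is the pre-existing one), with a zero value standing in for "not set"; the function name is made up for this example.

```go
package main

import "fmt"

// volumeSizeLimit models the consumer-side fallback: prefer the optional
// maximum_volume_size field when it is set, otherwise fall back to the
// older available_capacity field. Zero means "not set" here; the real
// field is a wrapped optional value in the CSI proto.
func volumeSizeLimit(maximumVolumeSize, availableCapacity int64) int64 {
	if maximumVolumeSize > 0 {
		return maximumVolumeSize
	}
	return availableCapacity
}

func main() {
	// Driver reports an explicit maximum: use it.
	fmt.Println(volumeSizeLimit(10<<30, 100<<30)) // 10737418240
	// Maximum not set: fall back to available capacity.
	fmt.Println(volumeSizeLimit(0, 100<<30)) // 107374182400
}
```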
D
Okay, so this was open for a while as well, actually, and I think it's pretty straightforward. We obviously have to do some work in Kubernetes to consume it or make it accessible, but as far as the CSI spec is concerned, it should be okay to do it, right? I mean, unless... I think it's fine.
B
Okay, I guess this one seems pretty straightforward, right? It's just adding another secret for a call.
B
All right, so yeah, we have these four PRs, and I think most of them are ready to go. I think the only one that needs a little bit more discussion is the volume mount group one, but the other three, I think, should be ready.
I
And do we want to hold off for the volume mount group one, or proceed?
B
I guess we're mainly waiting on James to sort of clarify the comment that he had. I think, Ma..., if you can try to ping him today, that would be great. But do we want to set a sort of deadline for when we need to get all these PRs merged, so we can do a cut?
I
Yeah, I think it takes at least... let me check. I think it's at least three days, so we've got to do it like tomorrow if we want to be on track for getting this ready by Monday.
D
Well,
I
think,
james
by
the
way
and
the
one
more
thing
was
like
the
vr.
Our
bills
will
be
broken
on
on
golang
1.16
by
the
way,
but
I
think
that's
not
important
right
now.
We
have
to
some
point
in
future:
fix
that.
I
Yeah, okay, so it's two business days. So if we do it this afternoon, we can cut it by Friday afternoon.
G
I
I
say:
let's
get
these
pr's
merged
today,
get
the
rc
cut
end
of
day
today,
target
the
release
friday,
and
then
you
got
monday
tuesday
to
see
if
we
can
make
the
updates
and
in
the
meantime
we
can
use
the
rc
build
for
testing
and
getting
the
kubernetes
changes
ready.
B
All right, sounds good. Any other topics?