From YouTube: 2021-11-16 Rook Community Meeting
A
So we've got, let's see, so the main one is more of a feature, really, than a backport fix: the migration from FlexVolume to the CSI driver. That tool is getting really close to being done; hoping to have that out in, well, the next week. Really, we need to get at least the first version of that tool out.
A
Yep. Is there anything anybody wants to discuss specifically for 1.7?
A
We've still got the same plan in place, where we've got the beta on December 1st. We've got our next community meeting in two weeks, November 30th, so next week we'll finalize: are we ready on December 1st, or the next day, to create the branch and then prepare? At least, I don't see any reason not to be on that schedule. I think that still sounds like a good plan.
B
Yeah, I will. I'm planning on this just before the first beta, I guess. Yeah, exactly, within a week or so I'll get to it.
A
Well, with that, let's see, we have a couple of things on the agenda today. Normally we've frequently had an agenda item for the CI; I think the CI is just stable and working well with GitHub, so nothing really to update there. So I guess we'll get right into the NFS topic that Blaine has there.
C
Yeah, do we want to talk about Alexander's topic first, maybe, I think?
D
Yeah, we stumbled upon this warning in the Ceph docs for the 16.2.6 release, which is, well, at least to the customer, quite unsettling. We had a customer contact us because they always keep track of the latest Ceph version, and they were like, hey.
C
Do we use bluestore_fsck_quick_fix_on_mount?
D
Yeah, I don't think we're using it directly in Rook, but if someone has issues with Ceph, at least, I assume, with an OSD or something, they might use this tool. So it's not something necessarily for Rook to say, hey, there's a danger. But let me just quickly check if it's in the code. I just wanted to say: hey, here's a warning. I haven't looked too much into whether it's affecting Rook there.
D
Yeah, so the reasoning is that a customer approached us and was like: hey, we have a cluster where we have a few OSDs crashing now. We haven't started the investigation there yet, but it's more of a: we have a customer which says, hey, this seems like something. They haven't used the ceph-bluestore-tool, as far as I am aware, and I don't think they have. Well, they have set bluestore_fsck_quick_fix_on_mount manually.
D
Is it more of a, I don't know, yeah: is it probably better not to update yet, till 16.2.7 is out?
C
Well, I've been trying to figure out: do we know if this is turned off by default?
C
So I found a question online of someone asking what the consequences were of setting that to false.
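For reference, the option under discussion can be inspected and pinned with cluster configuration commands. A minimal sketch, assuming the standard `ceph config` CLI on a Pacific-era cluster (verify the option name against your release's documentation before relying on it):

```shell
# Show the effective value for OSDs (reports the release default if unset)
ceph config get osd bluestore_fsck_quick_fix_on_mount

# Pin it off explicitly until running a release containing the fix
ceph config set osd bluestore_fsck_quick_fix_on_mount false
```

Setting it to false only skips the quick-fix conversion at OSD start; it does not repair damage already done.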
D
I think I should mention that, if I remember correctly, on the customer cluster where the issue occurred, at least, they have a few OSDs crashing. Thankfully it is just a test cluster. I think it was initially created with Rook 1.1 first, or maybe 1.2, I guess, so it has come a long way to get to where it is.
A
Yeah, let's definitely investigate that: as of what version did that become true, or become false by default? When did that change, and what's the latest, so we're clear on what versions. At least for an announcement, I want to be clear on what the effect is, instead of scaring people into thinking they need to do something when they might not need to. Alexander?
D
Yeah, it's just one customer so far, and it's more of a: hey, they had multiple OSDs, from what they said, fail at the same time. So it's more of a: if it happens, it seems to be, well, kind of cluster-breaking, I would say.
D
I still need to get more information from them, but as far as I'm concerned, multiple OSDs of their, thankfully, test cluster failed, and more or less their question is: they have other clusters, and they even have production clusters updated to the same Rook and, especially, Ceph version.
D
They are simply worried it could occur there as well. And, like Blaine pointed out, the wording is a bit of a: if at any time you upgrade, like from version 15 to version 16, you're already affected. Or, yeah, it's more of a.
D
As far as I can tell from the logs they've provided, from the log snippets, it seems to be that issue.
D
Sure, I'll go ahead and create a ticket. Should I simply point at the Pacific 16.2.0, or, well, 16.2.6, upgrade warning, or?
A
Yeah, an issue sounds good. Go ahead and link to that issue. I mean, if people are tracking issues, they could become aware of it too.
C
Yeah, related to NFS: so I know we don't really have plans to keep NFS up to date right now, and I did find, through some other research, there is an NFS server provisioner that provisions exports on persistent volume claims. So this appears to be not specifically a CSI provisioner, but it is a provisioner that responds to storage classes and, excuse me, persistent volume claims, in much the same way a CSI provisioner would.
C
Yeah, if we wanted to, we could suggest that users consider that provisioner, as well as the NFS CSI driver. Those are, as far as I know, provisioners that are part of the upstream Kubernetes ecosystem, in order to get similar functionality to what the Rook NFS operator was intending.
C
The only caveat, I think, is that I'm not sure how well this provisioner actually supports multiple NFS servers running to serve the same content.
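To make the comparison concrete: a non-CSI external provisioner like the one described above is consumed the same way a CSI one is, through a StorageClass that names the provisioner and a PVC that references the class. A minimal sketch; the provisioner string and object names below are illustrative placeholders, not the project's defaults:

```yaml
# StorageClass pointing at an externally deployed NFS provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: example.com/nfs   # must match what the provisioner was deployed with
mountOptions:
  - vers=4.1
---
# PVC that the provisioner satisfies by creating an NFS-backed PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  storageClassName: example-nfs
  accessModes:
    - ReadWriteMany            # NFS supports shared read-write mounts
  resources:
    requests:
      storage: 1Gi
```

From the workload's point of view this is indistinguishable from a CSI-provisioned claim; the difference is only in who watches the PVC and creates the PV.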
E
So, real quick: I mean, from a brief look, yeah, this seems to be one of those legacy provisioners. At some point in the past, the Kubernetes folks wanted to get away from the in-tree provisioners, already before people started seriously talking about CSI drivers as the implementation for the functionality, and they originally created this kubernetes-incubator space with external-storage. So this is where this came from.
E
Apparently it was moved from there under kubernetes-sigs, and because everybody seems to want to get away from the legacy provisioners and move everything to CSI drivers, I'm a little bit... I was just surprised. This is why I also mentioned it to you, Blaine: seeing this kind of legacy provisioner moving forward. And the repo seems to be active, right? So it is, I mean, at least somewhat active.
C
Yeah, I did ask: the primary person who wrote most of the code in here said that he's no longer involved directly, but the other people in the OWNERS file are committed to upkeep. And yeah, it did have another release a few days ago.
E
Right, so who's the main contributor? Is it Matthew?
A
Right, because, yeah, the reality is, though, with Rook NFS nobody's contributed to it really in any meaningful way since it was created, so it doesn't make sense for Rook NFS to continue. We should probably deprecate it, unless we get someone maintaining it, or, if we find this other one really replaces it, then yeah, great, let's point people to it, since it does have a more active community.
E
Yeah, fine, I mean, but this is a very important conceptual point, right? Would it be good to have an operator with proper CRDs to manage NFS exports and an NFS server, and maybe a provisioner or CSI driver on top of that if you want to use those exports inside of OpenShift, but also with the option to directly use that operator with its CRDs for other purposes? Or is it better to always have a CSI driver, or whatever, maybe a legacy provisioner driver?
E
My reaction is that we probably want to put that kind of management of the NFS servers and exports in an operator, because these might give us greater flexibility for those cases where we don't need the CSI or PVC layer on top. But yeah, we can of course discuss that; it's also not mutually exclusive, right?
E
You can always layer a thin provisioning layer on, if you want to have those PVCs, on top of an operator. But for external consumers that are not coming from inside Kubernetes, you might want to have additional flexibility. I don't know; that's why I'm still kind of intrigued by the concept of an operator in the middle, yeah.
E
That is, you could call it an application that is managed by an operator, and then you could put a CSI driver on top if you want to use NFS as the method for mounting PVCs in Kubernetes, right? So, just to compare: this is exactly what we did with the SMB stuff, right? We have a storage class, and a CSI driver below that, in our case.
E
Then we have an operator to manage all sorts of aspects of this application, of the shares, and then we have a provisioner on top for those use cases where we're going to consume it inside OpenShift. So you could combine all, or two, of these layers, sure. The question is: is there benefit in keeping them separate?
C
I suspect the provisioner would act alongside either the in-tree NFS driver or the new CSI NFS driver, for them to be able to mount that storage. So I think this is a separate use case that already exists in the ecosystem.
C
I think it's good to consider what an operator's strengths are and where it is useful. To me, an operator isn't as necessary if... it is really about the complexity of the thing being managed, and I don't have a super strong understanding of how difficult it is to manage an NFS server. But the server external provisioner here is at least going by the assumption that all that's really needed to run and keep an NFS server running happily is persistent storage underneath and a Deployment.
C
The other kind of thing an operator might be good for is: do we need to limit certain configurations? Would, what do they call it, an admission webhook, would something like that be necessary to help users configure it, or not? But yeah, I don't know. I mean, to me, seeing the external provisioner here...
C
I don't know that I have a strong view that there's a technical reason we need to have an operator.
B
It's a really opinionated deployment; I'm just not sure how they would manage upgrades and stuff like this. I think they just assumed that whatever deploys the CSI driver is responsible for upgrading the NFS version, or will do it with new images, which normally is done by an operator. But yeah, I guess we can.
E
Yeah, so we should carefully review this, and also see: okay, they're saying, hey, this is for Kubernetes 1.14 plus, but what is really the target? Are they keeping it alive for those older versions? Because I want to check with the storage SIG, or all those people: they really want to get away from non-CSI drivers, so let's not just bet on a dead horse here, or something, right, that they're keeping up only for older releases of Kubernetes.
C
Yeah, I mean, even if we don't choose this solution, we may still choose something that is similar. You know, I think, at the end of the day, we probably want to choose a solution where we have as few moving parts as we need, and an operator is definitely another moving part that we would have to maintain; it might be easier if we don't.
E
Well, yes and no, right: we would have an additional CSI driver anyway to be managing, and there are moving parts there. So the CSI driver itself, the provisioner part, would correspond to some extent to the operator, right? But then you also, I mean, you spawn it up, you also have the node plugins. Okay, those, I mean, if it's a CSI driver, I'm thinking CSI architecture, right: you have the node plugins, and those would be NFS kind of mounting pods, or something like that.
E
So there are components as well. Then the other thing is, what I would like to think about: who are we doing this for? Are we doing this exclusively for those consumers that want PVCs inside of Kubernetes? Fine, no-brainer: if it's simple enough, let's use the CSI driver approach. But let's, just before we decide this, think about those external consumers, right? And our motivation here is certainly to first, and initially at least, primarily target those use cases.
E
I mean, why we are talking about this here in the community is driven from what you want to do in the products, right? So we are looking at those external consumers, and so the admin of the Kubernetes cluster would be the one creating those NFS exports for those consumers, and the admin needs to have an easy and convenient way of communicating the export's URL out to those customers, or to those consumers.
E
It's not the same service that we have inside Kubernetes, so that also needs to be easy. If it's easy enough to get it all out of the PVC and kind of communicate it over, fine, right; maybe it's not making things more difficult. I just want to make sure that it doesn't.
E
Exactly, at least for our purposes, right. For other purposes, yeah, maybe they need NFS to make non-distributed storage available on other nodes. Right, I mean, if you have local storage, put NFS on top; in principle you can mount it on other nodes. So that's a use case for NFS with a non-Ceph backend. But if you have a CephFS backend, fine, you would directly use the CSI driver inside Kubernetes. I fully agree, yeah.
A
And really, I think, it comes down to two questions for me. So, first of all, does it make sense? What's the right implementation, right? Is this new thing better? Do we like the operator approach better? What's the right approach? But then, ultimately, you know, on the Rook project: are we going to put resources into it, or who from the community is going to maintain Rook NFS?