From YouTube: 2021-07-13 Rook Community Meeting
A
B
A
So my thought on 1.6 is that we're still having bug fixes. Of course, I think we're in an okay state where we can wait until next week for a patch release, but we could do one this weekend if needed. Is there anything this week that people feel they need in a release?
A
So a few things are in progress. The GPT partitions issue is still open, and we're not ready for that one yet, so fingers crossed for next week; maybe we can get there. We can talk about that later in the agenda.
B
A
I made a pass through the 1.6 board as well and tried to remove things that obviously weren't going to get into 1.6, or the smaller feature requests. I moved them up to the 1.7 board so we can think about getting them in there. There are a few bug-fix type things left here that also need review; some of them we might certainly move out.
A
Do I have to stop sharing so I can let people in? I'm gonna try that real quick. Okay, so four people in the waiting room... there we go. Okay.
A
B
A
B
A
Yeah, exactly. I think it's just following the cadence, and this will be about three and a half months since 1.6, so yeah. Since we do backport so much, sometimes it doesn't feel like there's a lot in a new release, and then we end up backporting a lot to that next release. So, right.
B
C
I would like to add the one we were working on, the migration thing. So we are planning that, and you guys planned some demo, so maybe we will see the migration demo for the flex RBD at least, and if time allows we can see it for the in-tree RBD too. But this is the plan we had. I talked about this presentation with Sébastien and Travis.
C
I gave an idea about this.
A
I missed that. So with 1.7 we do want to basically have this initial migration tool and then mark the flex volume as deprecated, so that in 1.8 we'll actually remove the flex driver, because the tool will give us a path to migrate from flex to CSI, finally. That'll be really nice. So how is that looking? Since I missed that, say it again.
D
So mainly, we have a basic idea of how to proceed with RBD, and Shubham and I both have a demo planned. For CephFS, we need to figure out the approach for it, and Madhu is also along with us, helping us out.
D
So we have a basic idea about RBD, and we have a background approach for how it will work. Before proceeding to the implementation part, we just wanted to share our approach with everyone and get everyone's thoughts on it via our demo.
E
A
E
In 1.23, I think they are removing the in-tree drivers; they won't be in it anymore, 1.23.
A
C
E
C
Most of the migration steps in the procedure for RBD, for both in-tree and flex, are nearly the same, so yeah. It is a good thing that the steps are the same, at least.
D
Sure, and maybe we can do the demo at the end, or now, whenever you prefer.
A
B
A
Or maybe we'll have more questions when we see the demo. Okay, great. So as part of 1.7, then, I think we'll have documentation in Rook that says what our plan is for this migration, even if some of those migration tools aren't done yet. I think we can at least document in 1.7 what the plan is.
E
Yeah, this is only for RBD; CephFS is a completely different problem, because CephFS earlier was just a path and now it's sub-volumes, so the migration is a bit different over there. We cannot just do a rename and all; there should be a tool to mount both volumes and migrate the data. So we need to talk to the CephFS team about how to do this one.
A
All right, so we'll come back to the demo; that'll be great, I think. Just a few other topics here, which may be quick. To follow up from our last meeting, where we talked about the FOSSA license scanner, which has been showing as failing forever on our main GitHub repo: I've gone back and forth with the FOSSA support team to try and get access to clear those up.
A
B
A
F
I don't want to go too technical; this is still a pretty elusive issue. I think the distillation at this point is that the Linux kernel observing an Atari partition seems to happen from a more relaxed specification than even the tool parted uses for managing partitions on disks, disk labels, and such; parted doesn't even really recognize the partitions as valid.
F
We'd like to develop a metric for actually figuring out when there's an Atari partition and when to skip it. Our recent attempts have been to use GPT support only, but ceph-volume refuses to prepare disks for use as Ceph BlueStore OSDs if the disk has a GPT partition label on it. So that is one avenue to look into: do we need to make a change to ceph-volume to support this case? But I think it's probably better if we can develop a case where we don't also have users upgrade to the latest Ceph version and wait for the latest Ceph version to come out. So I'm going to investigate more what information this BlueStore tool can give us about disks, and just try to figure out if there's a way we can determine which sectors of a disk are in use as BlueStore OSDs.

F
And if we can do that, then we might be able to disregard some partitions as being available. But we do want to be able to support raw disks as well as raw partitions for use as OSDs, and the partitions make it especially hard, because if you use a partition, that partition can also end up having other partitions created off of it.
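As an aside, the disagreement described above between the kernel, parted, and the actual on-disk label can be observed directly. A minimal sketch with standard util-linux and parted commands, using a hypothetical device `/dev/sdb` (not a tool from the meeting):

```shell
# What partition-table type does libblkid detect on the whole disk?
# Prints e.g. "gpt", "dos", or "atari" depending on the signature found.
blkid -p -o value -s PTTYPE /dev/sdb

# parted's stricter view of the disk label and its partitions.
parted --script /dev/sdb print

# What the kernel actually exposed as partition devices.
lsblk /dev/sdb
```

Comparing the three outputs shows whether a partition the kernel reports is one parted considers valid.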
A
F
You know, a through z, and don't match partitions that have a numeral coming after, so that's something that users have a workaround for now. But yeah, what we're trying to do is prevent that.
F
A
Is this issue one of our pinned ones? I think so; I'm not sure.
B
A
And then the last topic we have for discussion before the demo: the last couple of weeks I've been talking to Sage and the dashboard team from Ceph about how to integrate with local storage, and we don't have a lot of time to go into detail here. But the overall question really is: when you need local storage, you know, either local storage that you want the OSDs to be backed by, or local storage that you want to use for something else.
A
So,
like
the
idea
is
you
can
install
a
basic
cluster
with
no
osds,
just
the
manager
in
a
single
mon
and
then
run
the
dashboard,
and
then
the
dashboard
would
that
you
would
show
you
what
your
inventory
of
hardware
is
and
then
you
click.
You
know.
Click
create
osds
here
and
here
and
here
and
then
you
know,
maybe
use
these
other.
You
know
some
of
the
disks
for
some
other
application
and
some
for
ceph,
et
cetera
and
so
the
some
of
the
requirements
there
include
like.
A
So
the
question
is
kind
of:
is
there
an
operator
out
there
that
would
fulfill
this
name
because
there's
lso,
there's
topo
lvm
there's
there
are
others
out
there,
but
but
they
haven't
found
one
that
really
works
end
to
end.
So
the
question
is:
would
it
make
sense
to
bring
our
crew
one
and
rook
that
works
and
then
for
the
scenarios
that
that
we
really
need
for
the
for
this
whole
thing?
B
A
B
D
I think I have it; I'll try to share the screen. Okay.
A
D
Okay, so this presentation is regarding migration from flex volume drivers, as well as in-tree drivers, to CSI. I'll be proceeding with the flex volume driver migration, and Shubham will be continuing with the in-tree driver migration. To begin with, I'll just give a brief overview of what the in-tree and out-of-tree drivers are.
D
So before the introduction of CSI and flex volume, all volume plugins were mainly in-tree, meaning they were built and shipped with the Kubernetes binaries. Now the Kubernetes community does not accept new in-tree volume plugins and recommends using CSI plugins instead, the reason being that they are too tightly coupled with Kubernetes: if any volume plugin crashes, it crashes the Kubernetes components with it.
D
Before CSI, flex volume was an out-of-tree plugin used in Kubernetes since 1.2, and GA since 1.8. It required the flex volume driver binaries to be installed on the host machine, and Kubernetes performed some predefined commands against the driver on the host. So, coming to the main topic of why we are thinking about migrating from flex to CSI: the first reason is that it's deprecated, or rather it's still maintained, but new functionality is added only to CSI, not to flex volume.
D
Also,
the
storage
community
suggests
implementing
csi
driver
because
it
has
some
shortcomings
like
it
requires
root
access
to
its
own
host
machine
to
install
driver
files.
Also,
it
assumes
that
mount
dependencies
are
available
in
the
host
and
there
are
some
issues
linked
with
it,
which
I
have
put
in
the
talk,
the
efforts
that
are
going
on
to
migrate
this
and
they
are
tracking
the
issues
now
proceeding
further
I'll,
just
quickly
go
through
the
migration
procedure.
D
First
and
we'll
look
at
the
predicted
later
on
so
coming
to
the
migration
procedure
on
how
we
are
planning
to
migrate
from
flex
volume
rbd
to
csi.
So
first,
we
are
planning
that
we
will
first
scale
down
the
part.
So
what?
What
are
the
ideas?
The
main
reason
behind
the
issue
we
were
facing
in
migrating
flex,
volume
images
to
csi
images,
csi
images
have
certain
amount
of
metadata
involved
for
tracking
the
packing
their
images
for
getting
the
information
on
them,
and
that
is
not
present
when
we
create
a
volume
via
flex.
D
So
what
we
wanted
was
we
don't
build
metadata.
So
suppose
we
have
pvc
created
for
flex.
We
have
its
pv
and
we
have
its
back
back
end
image.
So
now
what
we
want
to
do
is
we
want
to
delete
the
pvc
created
by
flex
and
retain
the
pv
and
the
image
so
that
we
can
reuse
the
pv
and
the
image
we
will
use
the
image,
mainly
while
we
migrate
to
csi.
Now.
D
What we'll do is create a PVC in CSI with the same name, so that all the metadata around it gets created. Then, to deceive CSI, we delete the empty image created by CSI and rename the flex image to the name of the CSI image, so that CSI will think it was created by CSI. Then we proceed to delete the PV created by flex. This process will be clearer when we proceed to the video demonstration.
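The approach just described can be sketched as a handful of manual commands. All names here (PVC, pool, and image names) are hypothetical placeholders; in practice the CSI image name comes from the volume handle of the newly provisioned PV:

```shell
# 1. Delete the flex PVC, keeping its PV and RBD image
#    (the PV's reclaim policy must be Retain for this to be safe).
kubectl delete pvc my-app-pvc

# 2. Re-create a PVC with the same name against the CSI storage class,
#    so CSI provisions a fresh, empty image plus all of its metadata.
kubectl create -f my-app-pvc-csi.yaml

# 3. From the Rook toolbox: remove the empty CSI image, then rename the
#    flex image to the CSI image's name, so CSI believes it created it.
rbd rm replicapool/csi-vol-0001
rbd rename replicapool/pvc-0001-flex replicapool/csi-vol-0001

# 4. Finally, delete the PV left over from flex.
kubectl delete pv pvc-0001-flex
```

This mirrors the steps shown in the demo; the planned Go binary would automate the same sequence.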
D
Yes, so I'll start the demo. Basically, what we have done is start the Rook cluster and, since flex volume is not enabled by default, enable it.
D
Now
we
will
proceed
to
create
a
pod,
because
we
want
to
write
some
data
to
it
so
that
we
can
verify
whether
migration
is
successful
or
not.
It
is
proceeding
to
create
a
pod
as
soon
as
it
gets
created.
We
just
go
to
the
mount
path
and
write
some
data
on
it
so
that
we
can
verify
later
on
written
the
data.
Now
this
process
comes
with
a
downtime
because
we
cannot
migrate
a
pvc
which
is
in
use,
so
we
scaled
on
the
pod.
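The write-and-verify check being described could look like the following; the pod, deployment, and mount-path names are made up for illustration:

```shell
# Write a marker file through the flex-backed volume before migrating.
kubectl exec demo-pod -- sh -c 'echo "sample text" > /mnt/data/marker.txt'

# Scale the application down: the PVC must not be in use during migration.
kubectl scale deployment demo-app --replicas=0

# ...perform the migration, then bring the application back up
# and confirm the marker file survived.
kubectl scale deployment demo-app --replicas=1
kubectl exec demo-pod -- cat /mnt/data/marker.txt
```

If the final `cat` shows the marker text, the data on the renamed image is intact.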
D
So we just go ahead and delete the image created by CSI; it's deleted. Now, to deceive CSI, we rename the image created by flex to the name that was given by CSI to its image. Now we can see we have only one image in the pool, and it looks like it was created by CSI.
D
Now
we
come
out.
We
see.
Okay,
the
pvc
is
still
bound,
but
we
still
want
to
verify
that.
Okay,
in
the
back
end,
whatever
data
we
wrote
is
present,
so
we'll
go
ahead
and
create
another
card
and
execute
it
to
the
pod,
go
to
the
mount
path
and
verify
that
okay,
we
sales,
we
still
see
our
file
and
we
can
say
that
okay
data
was
successfully
migrated
and
now,
since
we
have
an
extra
pv
that
was
created
by
flex,
so
we
can
now
go
ahead
and
delete
it
because
we
don't
need
it
anymore.
B
Just want to be clear: you mentioned data migration. As far as I understood, there is no data migration, right? That's...
E
D
Sure. If we want to go ahead with the in-tree migration, I'll stop my sharing, yeah.
D
So
mainly,
we
are
planning
to
have
a
complete
co-binary
of
it
and
go
project
separate,
so
that
will
do
the
complete
migration
automatically
and
this.
This
is
the
manual
steps
just
for
demonstration
purpose
and
sharing
what
what
approach
we
are
taking
and
getting
and
getting
everyone's
eyes
on
it
and
see
what?
If
we
are
good
to
go
with
this
approach,
then
we
will
proceed
on
implementation,
automating.
The
complete
process.
B
D
As far as I've seen, and I might be wrong, flex and CSI can coexist at the same time, so I don't think that it will be an issue. The flex driver can go ahead and provision its volumes, and CSI can do its job too, I think, yeah.
B
But
what
I
mean
is
that
if
we
really
want
to
complete
complete
this
transition
at
some
point,
we
have
to
disallow
the
creation
of
new
volumes
and
be
attached
and
be
consumed
by
applications
with
the
old
drivers.
Like
flex,
you
mean
yeah
right
yeah,
so
we
we
can
get
like.
Let's
say
a
consistent
point
in
time
that
we
have
a
given
list
of
volumes
and
this
list
is
not
going
to
change.
Otherwise,
we
keep
playing
catch
up
forever.
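Getting that consistent list could be done by filtering PVs on the flex driver in their spec. A sketch, where the driver string shown is Rook's flex driver name and is an assumption on my part:

```shell
# List the names of all PVs still provisioned through the Rook flex driver.
kubectl get pv -o json \
  | jq -r '.items[]
           | select(.spec.flexVolume.driver? == "ceph.rook.io/rook-ceph")
           | .metadata.name'
```

Freezing this list before running the migration tool gives the fixed set of volumes to convert.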
E
B
A
B
C
A
C
Okay, so I'll just skip the intro part. As you said earlier, in-tree is more tightly coupled with the Kubernetes release, and it makes it difficult for Kubernetes developers for testing and all. Also, with in-tree we have to give the same privileges and permissions to the in-tree driver as the kubelet and kube-controller have.
C
So these are the reasons why the Kubernetes people stopped supporting this, and also it is deprecated. We have the same prerequisites as for flex: it requires a Kubernetes version greater than 1.16, CSI drivers to be enabled in Rook, volumes to be dynamically provisioned, and the volume should be in Bound state and not in use. The migration steps are all nearly the same, so I will just go to the recording; I pasted the link to the recording, so it will show this.
C
Let me know if it is visible.
C
So, on the right side I have the toolbox installed, and I'm inside the toolbox, and I have the in-tree setup, so I have the RBD image for that. On the top right we can see we have a pod in use, the application basically, and as a sample for testing I have written a text, "sample text", inside a file.
C
So, yes, I have deleted the pod of the application, and then I will change the reclaim policy of the PV from Delete to Retain. Yeah, let's verify that in the demo.
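The reclaim-policy change mentioned here is a one-line patch on the PV (the PV name is hypothetical); with Retain, deleting the PVC no longer deletes the PV or its backing image:

```shell
kubectl patch pv pvc-0001 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```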
C
So now we will create a new storage class and the PVCs, and the names should be changed according to the steps mentioned. Earlier, by mistake, I created a PVC directly before creating the storage class, so first I will create the storage class with the updated name, then I will create the PVCs, yeah.
C
We can see that our PVC was created, so I will go into the toolbox and see that we have two images: the earlier one that the in-tree RBD driver created, with the dynamically provisioned name it was given, and the new one that CSI created, the CSI volume. Following the same steps as for flex, we will just remove the CSI-created image and then rename the older image that we had from in-tree; this command is doing the same, so yeah.
C
We have one image now, so we'll just verify that our data is safe there. We'll bring the application back, go inside the pod, and verify that our data is present. I will just wait... yeah, it is running. The application is running; I'm going inside the pod, and I will just print the data to verify that it is there at the mount path.
C
A
E
B
A
B
E
B
E
D
Yes, I forgot to mention that the PVC name will be exactly the same, so for a user there will be no difference at all.
E
Okay, yes, this is for dynamically provisioned PVCs; for static it's a different problem, because in the PV spec you have an option to provide the RBD image name and all. That's a different problem for in-tree: where it's statically provisioned, we need a lot of work over there, because in the volume definition itself you mention the monitor IPs, keyrings, and the full RBD image name.
E
A
Right, and yeah, I guess, since we have that document, a Google doc: how about opening an issue, or maybe even a PR with a design, that says "here's our approach"? And maybe on that issue or PR we can discuss where it lives, or if we need a new repo, or if it even makes sense, or where it makes sense to have it.
A
B
The only thing so far is just removing support for flex. All the migration will be handled by that tool and will really be outside of Rook's scope anyway.
E
A few things tie it to Rook, because it needs to get the cluster ID and other information; as we know, Rook can support multiple clusters too, right? So a user needs to provide the target cluster, target storage class, or whatever it is, and then that tool needs to get the monitor information and the keys to connect to the Ceph cluster.
A
F
Yeah, I think something that is interesting to me to note is: if we consider where things come from, the flex volume is part of Rook, and so I think migrating from flex volume to CSI is something that makes sense to have in Rook. But the in-tree driver is more related to the Ceph project itself, and is something that maybe more appropriately belongs in Ceph.
F
A
F
I think there will be some users who might want to use the tool to migrate from in-tree drivers to CSI who aren't using Rook, but that doesn't mean that we don't want to have the tool as part of the Rook project, although I do feel like the Ceph project might be the more appropriate location for that particular tool. I think we can take a lot of this offline, but it's worth noting, to me, kind of where the original source comes from.
A
E
B
A
Going once, going twice... all right. Well, thanks for the demo, everyone, and for all the discussion. We'll stop the recording; have a good day.