From YouTube: Ceph Orchestrator Meeting 2021-06-29
B
Okay, sure, am I heard of this question? Probably.
B
Okay, sounds good. So, my name is... I work on the UI aspect of OCS. I can briefly let you know how the OCS operator works, or what the workflow is from the OCS UI perspective: starting from, as Sebastian was saying, how you create the PVs out of the local disks, and then, after those PVs are created, how you create PVCs out of those PVs that can be consumed by the OCS applications.
B
I think I'll just give a brief demo of it. I created a setup some time back; that will probably give everyone a clearer picture of how this happens from the OpenShift UI. So, as you can all see right now... I hope. Yep.
B
Is it... yes. So this is the OpenShift UI, and what I have done is install the OCS operator. I am going to show you how you can create PVs out of the disks that are there, and how, on top of those PVs, the OSDs are created, using the Rook operator internally; but the abstraction layer on top of it is OCS, which does all that stuff.
A
Sorry, I said you weren't muted. All right.
B
Okay, so yeah. As you can all see, I am inside the OCS operator; OCS stands for OpenShift Container Storage. When I come to this page, it asks me to create a StorageCluster, which is the custom resource name for creating the storage, basically for the OCS storage cluster. So I'm going to click on it, and there are multiple options right now: one is internal, one is internal attached devices, and another is external. So, just a brief introduction.
B
Internal is used if you want to connect to your cloud provider using a cloud-based storage class. For example, if you are on an AWS setup, the cloud provider already provides the storage class gp2; if you are on VMware, it provides you the thin storage class. So if you want that workflow, you can use this internal mode.
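(For reference: internal mode boils down to a StorageCluster CR whose device sets claim PVCs from the cloud storage class. A minimal sketch, with illustrative names and sizes, assuming the gp2 class mentioned above:)

  apiVersion: ocs.openshift.io/v1
  kind: StorageCluster
  metadata:
    name: ocs-storagecluster
    namespace: openshift-storage
  spec:
    storageDeviceSets:
      - name: ocs-deviceset
        count: 1          # one set of...
        replica: 3        # ...three OSDs
        dataPVCTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            volumeMode: Block
            storageClassName: gp2   # cloud-provided class (thin on VMware)
            resources:
              requests:
                storage: 512Gi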
B
But if you are on a bare metal system, which is the use case from the Ceph perspective, or the Ceph orchestrator perspective, that I'm going to briefly show you today, we can click on this mode, and you can see we have a five-step wizard flow. The first step is where you discover all the disks on the nodes; by default there is no discovery job running on the disks.
B
You discover all the nodes. Because there are some resource constraints, if there are, say, a thousand nodes, it cannot run the discovery job on all thousand nodes for you. So the LSO approach is that they want you to select the nodes on which you want to run this discovery. I can select particular nodes, or I can say all the nodes; I want to select all the nodes. I just want to take questions if there is any confusion up to this step.
B
Okay, okay, sounds good. So I'm going ahead and clicking on next. By clicking on next, you can see that I have three nodes available and three disks available with me.
B
If I click on this, you can see that each node has one disk available, of 100 gigs of capacity; that's what I'm going to use. This is the local volume set name that I'm going to provide; I'm going to say "demo". And there are filters provided by the Local Storage Operator: you can say what kind of volume mode you want, block or filesystem, and whether you want to select the disk or the partition.
B
You can unselect it, and you will see that the capacity now shows zero over there. So I can put it back, and you will see that the selection is made. Also, you can set the minimum and the maximum disk size: if I set the minimum to 101 gigs, the disk gets unselected, but if I set the minimum size to one, it gets selected. I'm not going deep into that, so I'm going to say next and have it create me a storage class.
B
What this is doing in the background is creating a storage class for me, and on top of the disks that were available with me, the three disks that you saw, it is creating the PVs. On top of those three disks, till now it says nothing found; basically, there are no PVs created on them yet. It takes around a minute or so for this process to complete.
B
So at the end of this flow, the PVs will be created on top of the disks that were there. I am ideally going to wait 30 more seconds or so, until I see this over there. This view has been updated now, and in 4.9, the version that is coming now, you will see a loading screen over there that will say how many PVs were created out of the disks. Sorry for the UX flows over there.
C
I just wanted to mention that, in the background, this creates a LocalVolumeSet object that will continue to match devices against these criteria and create PVs. It's not just a one-off operation.
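(For reference: a minimal sketch of the LocalVolumeSet the wizard creates here, reflecting the filters shown in the demo; the names, sizes, and node list are illustrative:)

  apiVersion: local.storage.openshift.io/v1alpha1
  kind: LocalVolumeSet
  metadata:
    name: demo
    namespace: openshift-local-storage
  spec:
    storageClassName: demo       # the storage class being created
    volumeMode: Block            # the volume-mode filter from the wizard
    deviceInclusionSpec:
      deviceTypes: [disk]        # disk vs. partition filter
      minSize: 1Gi               # the min/max size filters
      maxSize: 100Gi
    nodeSelector:                # only the nodes selected for discovery
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: [node-1, node-2, node-3]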
B
So, from the UI, there is an abstraction that we have done. As Rohan mentioned, we are ideally using two or three different CRs: one is the LSO-based discovery; second is the creation of the storage class using the LocalVolumeSet; and another is the StorageCluster CR creation that we are doing. So everything is abstracted behind the UI, and now you can see that we have an available capacity of 300 gigs and we are going to create three-way replicas.
B
So this is the total available capacity that the user is going to have, and these are the nodes on which the disks are there. So I'm going to go ahead and click on next. Basically, OCS provides encryption functionality; I'm not going to go deep into that. And at the end of step five, you will see that three nodes are selected.
B
The CPU or the memory is not enough; I'm not going to go into that, because I have not created big enough nodes, but that is the workflow. If I click on create, all the pods will start coming up on those nodes, the OSDs will be created, and I think it will take five minutes for the OCS cluster to be created at the end of this process.
D
Okay, so just to summarize: those first three steps are basically deploying the LSO operator and creating the storage device discovery, or whatever it's called, the discovery object, which goes and discovers PVs for all nodes. And then in the last step, when you deploy Rook itself, you specify a device set with the storage class of whatever class you just created, and you set the count to however many devices you counted.
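(For reference: that last step corresponds roughly to this fragment of the Rook CephCluster CR; the set name is illustrative:)

  apiVersion: ceph.rook.io/v1
  kind: CephCluster
  metadata:
    name: rook-ceph
    namespace: rook-ceph
  spec:
    storage:
      storageClassDeviceSets:
        - name: local-devices
          count: 3                 # however many devices you counted
          volumeClaimTemplates:
            - metadata:
                name: data
              spec:
                accessModes: ["ReadWriteOnce"]
                volumeMode: Block
                storageClassName: demo   # the class the wizard just created
                resources:
                  requests:
                    storage: 100Gi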
D
Exactly, yep. Okay, maybe it'd be helpful to frame what the high-level goals are for this, because we've gone around in circles, because there are so many different ways to approach this. So first of all, the goal is that whenever we deploy Rook, we want to use PVs exclusively.
D
We don't want to have to make use of the bare metal support in Rook, so we want to use LSO or some other operator that lets us bind to a PV, so that there's some transparency about what devices we're using. LSO is what we're looking at right now, but ideally, I think, we want to make sure the door is open to other sources of PVs. So maybe, if there are manually created PVs and storage classes, it'd be nice to be able to consume those in whatever form we can, or if there are other dynamic provisioners, like EBS.
D
But at the same time, we want to be able to show the user information about the bare metal device: which host it's attached to, what the device path is, what the vendor is, what the serial number is, all that stuff, which is only going to be available sometimes, I guess.
D
All right, sorry, logistical issues this morning. Right, so what we want to be able to do is, through the Ceph interfaces, see: these are the hosts, these are the bare metal devices that are attached to them, and be able to create OSDs on those devices. And obviously, just having manually created PVs doesn't really do that, because you can't tell what the device model is.
D
Are
sealed
or
you
don't
know
what
kind
it
is
all
you
really
see
is
the
pv
size
and
name
which
file
isn't
going
to
tell
you
much,
but
in
the
case
of
lso
you
have
this
discovery
thing.
So
that's
good.
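(For reference: the discovery thing is driven by a LocalVolumeDiscovery CR, and LSO then writes one LocalVolumeDiscoveryResult per node with device metadata such as path, model, serial, and size. A minimal sketch that discovers on every node:)

  apiVersion: local.storage.openshift.io/v1alpha1
  kind: LocalVolumeDiscovery
  metadata:
    name: auto-discover-devices
    namespace: openshift-local-storage
  spec: {}    # no nodeSelector: discover on all nodes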
D
Yes, yeah. So the drive groups support all these filters, where we filter based on the model, the serial number, the path, the size, a bunch of stuff. It's more selective than what is possible with Kubernetes labels. And so what that's sort of pushing us towards is: even if all the labels were present on the PVs, I think the PVC label selection and filtering is not sufficient to express what the drive groups can do.
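(For reference: a drive group spec of the kind being discussed, as a cephadm-style OSD service spec; the filter values are illustrative:)

  service_type: osd
  service_id: default_drive_group
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      rotational: 0          # only non-rotational devices
      model: MZ7KM1T9        # substring match on the device model
      size: '1TB:2TB'        # size range filter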
D
So I think what should happen instead is that the implementation, in either Rook or the manager module, I don't think it matters which one, will basically look at the available devices, look at the drive group, do its own matching, and say: oh, I want to create an OSD on this particular device that matches these criteria; and then go tell Rook to create that particular OSD, and then repeat the process. The problem with that...
D
So on the LSO side, we have a bunch of sort of minor gaps that prevent us from being able to do that. There's the discovery result, which has a dump of all the metadata about the devices, but there's no unique identifier for a device except the device ID, which is only sometimes populated, so it's not actually unique; on a VM it's just blank. So that's sort of the first problem. The second problem, or maybe it's the same problem, is that there's nothing in the discovery...
D
...result that would let you create a PVC that will bind to that particular device. If you go look at the PVs, they have a name identifier, and you can bind a PVC to a particular PV name, but that name isn't included in the discovery results. I think the only thing you can currently do is get the discovery result, and then also go enumerate.
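(For reference: binding a PVC to one specific PV is done by name, which is exactly the field missing from the discovery result. An illustrative sketch:)

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: osd-data-0
  spec:
    accessModes: ["ReadWriteOnce"]
    volumeMode: Block
    storageClassName: demo
    volumeName: local-pv-8f6c19e2   # pins the claim to that one PV
    resources:
      requests:
        storage: 100Gi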
D
I'm wondering if there's a technical reason why there's no identifier in the discovery results that we can just directly use, or is that just a fact of life, that we have to go through that extra...
C
Yeah, that could be. We could just converge the discovery with the Rook discover daemon, and that would make it much easier.
F
Oh yeah, Joseph, so at the moment the deal is that there's no unique device ID if a disk doesn't have a serial number. The issue was that in the VM, the disks that I was attaching were virtual disks, and they didn't have a serial number. So if the serial number for a disk is missing, there's no unique device ID; if there is one, then there will be a unique device ID. But I think the PV thing in the discovery results should still happen.
D
Yeah, yeah, which I think is just a fact of life, right; there's no real technical way around that, yeah. What does LSO do if the device disappears after a reboot or something like that, or just on a refresh, and the device goes away? Does it go and delete the PV?
C
At the moment it does nothing. I think Santosh is working on some metrics that will, you know, alert the user if that happens.
D
Okay, well, in any case, I think closing that gap will solve that part of the problem, which sort of brings us back to the bigger questions about who's responsible for what. For the OCS operator and the dashboard workflow that you guys just showed, I think that makes perfect sense for OCS, where it's all in one. But for Rook itself...
D
I'm not sure that we want Rook to be responsible for deploying LSO, or Ceph to be responsible for deploying LSO, because it seems like we shouldn't be opinionated about what you have to use. We should take advantage of something that's smart if it's there, but we should fall back to some reasonable behavior if it's not. So I'm wondering, then...
D
...if we want to have, in the Rook examples directory, where all those YAMLs are that you're usually pasting in to create the operator, create all the RBAC stuff, whatever, one in there that's like "deploy LSO", that has a default CR that discovers all devices on all hosts and creates a storage class called "local" or something like that. Or something, I don't know; I'm not quite sure what we'd like.
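(For reference: a sketch of what such an example manifest could look like; no such file exists in Rook today, and the "local" names are hypothetical:)

  # hypothetical examples/local-storage.yaml in the Rook repo
  apiVersion: local.storage.openshift.io/v1alpha1
  kind: LocalVolumeSet
  metadata:
    name: local
    namespace: openshift-local-storage
  spec:
    storageClassName: local
    volumeMode: Block
    # no nodeSelector and no deviceInclusionSpec filters:
    # match every eligible disk on every host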
G
At this stage, we're actually looking at consuming LSO as a library instead of deploying it. So we're really hoping to get what we have from the Rook discovery and then include some of the nifty stuff, I guess, instead of having to deploy it. I've written a small prototype today, and I'll work a bit more on it tomorrow, but right now that's where we are.
G
It's still, like, under heavy design, yeah, yeah.
G
It sort of melts there right now.
G
We've spent enough time on it, really, and now we're discussing all the options with Travis, but we still need to discuss that a little bit more. But I feel like we're kind of on board with the idea, and it feels like the right path for us to move forward, which should also kind of simplify our transition from the old-fashioned way of deploying on bare metal to the all-on-PVC setup, even if you run bare metal. So yeah, that's where we are, and I guess this is also some kind of a warning, as in...
D
Well, I mean, that sort of makes me wonder if what we really want to do is get away from looking at the LSO discovery result and instead just enumerate PVs. Like, if LSO just put all the information from the discovery result as annotations on the PVs, the vendor, model, serial, all that stuff; if all that stuff was there, then from the Rook or the manager, the Rook/Ceph perspective...
D
...you could just say: this is the storage class to look at for local devices, and then we can show whatever information we see and understand. Because if that were the case, then regardless of whether you deploy LSO manually and it creates those PVs and you point it at the right storage class, or you embed it as a library and you end up with those same PVs, there would just be one storage class where we list all the PVs, and that would be the stuff we'd show in, like, "orch device ls".
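(For reference: a sketch of the wished-for PV with the discovery metadata copied onto it. These annotation keys are hypothetical; LSO does not set them today:)

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: local-pv-8f6c19e2
    annotations:
      discovery.local.storage.openshift.io/vendor: ATA          # hypothetical keys
      discovery.local.storage.openshift.io/model: SAMSUNG_MZ7KM
      discovery.local.storage.openshift.io/serial: S2HTNX0J
      discovery.local.storage.openshift.io/path: /dev/sdb
  spec:
    capacity:
      storage: 100Gi
    accessModes: ["ReadWriteOnce"]
    volumeMode: Block
    persistentVolumeReclaimPolicy: Delete
    storageClassName: demo
    local:
      path: /dev/disk/by-id/wwn-0x5002538c00000000
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values: [node-1]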
D
I would say devices and PVs; I'm kind of equating the two, I guess, right now, and maybe that's part of the problem, because...
G
It already... so, I think for now it's more like: how do we present it? Because today you're looking at a CR; it could be a ConfigMap, it could be something else, but yeah.
G
That we have applied. So it's like four steps, decoupled, but yeah, yeah.
I
Okay, the OCP UI workflow that we saw, I mean, why would it have to be any different from a workflow perspective? And what are we missing there, based on the conversation? I know we don't have OCS, so we're going to have to do something else to do some of the automation for that workflow. But is there any reason we wouldn't have that same look and feel in the dashboard, from a high-level UI perspective, I mean?
G
I think the constraint is more around upstream versus downstream. Upstream, there is no OCS; there is nothing that will deploy other operators for you.
G
I think it's the overall experience as well. We want to drive everything from Rook and Ceph, where, if you have LSO in the picture, then it's yet another component that you have to deploy and maintain. It's like this really large prerequisite that we don't think we really want to have included.
G
Does that make sense to you, Jeff?
I
Well, again, I think you want to be less prescriptive in this world. Having to know what those devices are and controlling those devices is going to be a requirement, so that will have to be worked out. I think the workflow is what you want, and you need to make sure that it's implementable. So I'd love to see it similar to what we just saw.
I
You know, if we can. And, you know, I just don't get the subtleties of why it can't be, I guess.
G
Well, honestly, I guess I'll just put it differently: if we had what I just described, the wish, the goal, the workflow would be much simpler. Downstream, we have to go through three pages to select all devices and things like this; this could be much, much easier, but that's...
G
I think this is just bad timing, honestly, that we have that current workflow, and don't get me wrong, I wish we could really homogenize what we want to work on upstream and ultimately have that implemented downstream.
G
When you say that, do you mean, like, bringing your community around Ceph, or upstream?
G
Honestly, and this is just me, I guess I'm not a big fan of building a new operator for anything and everything, and I really think that what we're trying to do right now should be in Rook, and could have been in Rook from the very beginning. We have to stop that pattern at some point: oh, I want to do something, okay, let's write a new operator. No, no, no; there are things that could be controlled by existing operators, and I think that is one of them.
C
But, I guess, the counterpoint to that would be: does every storage vendor, local storage being one of them, have to have its own operator? Because eventually you're going to need to converge with other solutions upstream, and that might even prevent Rook from, you know, colocating with other upstream providers of local storage.
D
I mean, the thing that concerns me is that it seems like the management of the hardware devices on the nodes needs to be somewhat separable from Rook, because you may have consumers of those devices that aren't Rook. You might have Rook consuming half the devices on the node, while other devices are being consumed directly by pods or whatever else.
G
If you can manage PVs through Rook, it's kind of colliding with other LSO-ish solutions.
G
I said that it's not because we could have this embedded in Rook that it means it will collide with existing solutions like LSO, for example; both could just coexist, yeah.
G
But you have code for this already, the one we just did.
C
It looks at very few, specific things.
G
Yeah, but I guess it's not like LSO is super standard anyway; there are other local storage operators out there too. We're just biased because we're Red Hat, but there are also other operators too, and that's also a kind of argument. But yeah, I lost track, but yeah. I guess, if it was in LSO, like, if you inject multiple CRs...
G
What's the guarantee that some PVC won't be picked up, and everything? And we already discussed that topic, I think, and we also had that issue, right? But yeah.
C
Sorry, I mean, two LSO PVs won't use the same device. It could pick up something that's used by something else, but it does have a bunch of checks for that.
G
Okay, okay. I was thinking about the discovery; at some point we were discussing things like: if you run two discoveries, they could pick up the same devices and create PVs twice, and maybe that's why we have the locking in place now. So, okay, never mind.
D
I mean, I guess this whole little problem area is why I was sort of assuming we'd make the way that we interface with PVs or devices as generic as possible, so that we can consume PVs from any storage class, not just specifically the LSO one. But if it is LSO, or some other operator that we are familiar with, we can look at the annotations to present the user with as much information as we can find, right? So, if you had...
C
So, yeah, would we want to present the available options to the user, or does the user just provide a rule that says: consume all of these that match this criteria?
H
One thought: it seems that the problem we have in the orchestrator is, what if we decide to use, let's say, LSO? We are going to have to make a decision, and then be forced to use one specific thing, okay? So maybe we should include or create a new interface in the orchestrator in order to get the information about physical devices, and this interface would have the possibility to communicate with different kinds of operators, for example LSO.
H
Let's say LSO, or another one, okay, in order to get this information. Then we can use this information to create, for example, a storage class, use this storage class to create PVs, and provide these PVs to Rook in order to create the OSDs. That way, I think we are more independent of the particulars of getting the information from the hardware, and it is a solution that I think we can use upstream.
D
I mean, that's kind of where we're at right now; the orchestrator already has an interface for physical devices, like, that's the only interface that's there. What's missing is a dynamic one. But what I was leaning toward was basically: in the manager Rook module, you would tell it what storage class to look at, or maybe a list of storage classes, and it could just go, okay...
C
Wouldn't that lock in LSO as the way to go? I don't know, well...
D
I mean, it wouldn't, in the sense that you could give it any storage class from anywhere. You could give it a TopoLVM storage class, or you could give it a storage class where you manually created PVs; any of them would work. You just might not see all the metadata about the device. But if it says, oh, it is LSO, then I can show you all this stuff; maybe if it's an OpenEBS PV, then we can add support to do whatever introspection.
G
Rohan, how do we know it's an LSO storage class? Because I thought the provisioner was always the same when you do local PVs: no provisioner. So it's always the same.
C
Right, yeah, but you can look at the owner-kind label; it will say LocalVolumeSet or LocalVolume.
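(For reference: the labels in question on an LSO-created PV look roughly like this; the exact keys should be checked against the LSO version in use:)

  metadata:
    labels:
      storage.openshift.com/owner-kind: LocalVolumeSet   # or LocalVolume
      storage.openshift.com/owner-name: demo
      storage.openshift.com/owner-namespace: openshift-local-storage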
F
Yeah, sorry, just quickly: for example, with OpenEBS. The way LSO works is that it creates a storage class, creates PVs, and adds them to that storage class. The way OpenEBS works is that it doesn't actually create any PVs preemptively; it only creates PVs when you actually make a PVC, and then it'll go and create the PV, which is how I think it's supposed to work. So, for example, if you try to use "ceph orch device ls" and you're...
D
Yeah, I mean, I guess maybe the way that code would work is it would look at the storage class that you told it to look at, and it would say: oh, this is an LSO storage class, I'm going to enumerate devices the LSO way; if it's an OpenEBS storage class, it'll do the OpenEBS way; and if I don't understand the storage class at all, it's just unknown to me, then I'll just... why...
F
There's a way of doing it. They have their own CRs: for every device that you have, there's a BlockDevice custom resource that I think has some metadata about that device. I didn't look too deep into it, but there is a way. I think it does discovery automatically, on all the nodes that it's running on, for all devices; there are no filters or anything.
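(For reference: a trimmed sketch of an OpenEBS NDM BlockDevice resource of the kind described; field values are illustrative:)

  apiVersion: openebs.io/v1alpha1
  kind: BlockDevice
  metadata:
    name: blockdevice-91ef0eb8d72c    # NDM-generated name
    namespace: openebs
    labels:
      kubernetes.io/hostname: node-1
  spec:
    path: /dev/sdb
    capacity:
      storage: 107374182400           # bytes
    details:
      model: SAMSUNG_MZ7KM
      serial: S2HTNX0J
      vendor: ATA
  status:
    claimState: Unclaimed
    state: Active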
D
Yeah, I mean, we're talking about LSO and OpenEBS, but I think another goal here, one goal, is to just have some fallback behavior that'll work in weird environments that we don't know about. And I think the simplest would be the user manually creating a storage class and manually creating PVs, which happens to be what teuthology does, because it already has a disk carved up into LVs, and we want to just consume those PVs as different devices.
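(For reference: the manual fallback amounts to a no-provisioner class plus hand-made local PVs, something like the following; names and paths are illustrative:)

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: manual-local
  provisioner: kubernetes.io/no-provisioner
  volumeBindingMode: WaitForFirstConsumer
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: node-1-lv-0
  spec:
    capacity:
      storage: 10Gi
    accessModes: ["ReadWriteOnce"]
    volumeMode: Block
    persistentVolumeReclaimPolicy: Retain
    storageClassName: manual-local
    local:
      path: /dev/vg_osd/lv_0     # one of the pre-carved LVs
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values: [node-1]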
G
I feel like there are, like, two... it's...
G
It is possible, yeah. But by the time you're bootstrapping the cluster, you will be enumerating the filters, like, either you want to take all the drives, or say, okay...
G
Yeah, and I guess what I'm saying is that the discovery doesn't have to be part of LSO or Rook, but it can be, for example... Rook has a discovery thing already that can be used, and once we kick it off, we report for each node what the disks are and whether they are available, too.
D
Right, but that brings us back to where we started, though. If you have an existing storage class, then you need to be able to enumerate the devices, so you know which PV to consume, so you can apply the drive group filters and all that stuff. The discovery doesn't help if you have an external storage class.
D
It seems inevitable that we have to support the case where there's an external storage class managed by somebody else, and we're sort of forced into a world where those storage classes are going to be managed differently. It might be an LSO storage class, it might be a user-managed storage class, it might be OpenEBS, it might be something else, and so we're going to have to think of them in different ways.
E
Can Rook actually become a creator of the PVs and manage that whole lifecycle, so that we can get a better integrated experience? And I feel like, even after all those discussions, to some degree it still feels like: well, could we really use LSO as a library, or does it really need to be a separate operator? I still don't feel clear on that.
D
I don't know. I mean, it feels like they should be separable, because over the lifecycle of a cluster you might change your mind about whether you're consuming the local devices directly, or consuming them with Rook, or whatever.
G
Yeah, but it doesn't; Joseph said that you don't get the PVs out of it, because they only get...
D
Well, I mean, it could, right? My high-level suggestion is that there's basically one configurable, and that's: this is the storage class for local devices. And then you have the code; when you want to get an inventory, you look at the storage class, see what type it is, and then you invoke some bespoke method based on that, with some basic fallback, right? So we...
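(For reference: what that code would fetch and branch on. Note that the provisioner field alone cannot identify LSO, since local classes all use no-provisioner, so detection would have to rely on something like the owner labels mentioned earlier; the label key below is a hypothetical detection key, not a documented one:)

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: demo
    labels:
      local.storage.openshift.io/owner-name: demo   # hypothetical detection key
  provisioner: kubernetes.io/no-provisioner         # same for LSO and hand-made classes
  volumeBindingMode: WaitForFirstConsumer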
D
The fundamental problem is that there isn't a standard way to deal with local hardware in Kubernetes, right? There isn't a standard for LSO or for whatever, and so I think, no matter what, we're going to be in a situation where we have to support multiple methods: write code to support the most common methods, with some sane fallback, yeah.
G
But it feels like we're working on the ten percent, right? I think, right now, but...
G
A little, like, overall, the design looks like we're really targeting the 10 percent exception that we're going to get, and not really trying to make it work for ninety percent of the use cases.
G
Well, the ten is when we think about all the corner cases, yeah, and the ninety percent is like: okay, we can get something working, where we know that 90 percent of the users, for example with Rook, are going to go with, let's say, the local storage provider of Rook by using LSO as a library, and maybe 10 percent of the time you will get someone that has their own local storage product, no?
G
Just like you said, there is no standard to manage storage with Kubernetes, and it's not because LSO has one way of doing it that it's necessarily the right one; we might just be biased, because it is the way that Red Hat does it today.
G
I think, yeah.
D
Well, what if we go away and write a more concrete proposal for how it would fit together, and then we can have a separate, concrete proposal? Because I'm curious: I don't really understand how the LSO embedding plan would work. I don't know if you guys wrote anything down, but maybe that would be something to go back to.
G
I guess I'll keep thinking on the use case where we don't know the PVs behind the storage class, yeah, because if we can accommodate this, then, well... assuming that this is only true for bare metal, not cloud, because when it's dynamically provisioned you never know; here you have a limited capacity.
C
Just the child partitions, if partitions are enabled in the deviceInclusionSpec; but if a device has child partitions, we never pick up the parent.
C
I guess we could emit an alert or something, a metric.
D
Yeah, I mean, it almost makes me wonder if the OpenEBS approach, where they create the PVs dynamically based on the PVC and maintain their inventory separately, makes more sense. But, yeah, you end up with this thing where every operator has its own bespoke method of representing what the inventory looks like.
G
Just real quick: you said, like, a list of drives, but how do we know, again, which hosts we should be looking at, or do we know at all? Are we expecting...
G
The question, I guess, was more for Sage or Joseph, on how the orchestrator interface would know how many hosts are available.
D
Yeah, I'm not even sure how they were considered. Well, the orchestrator has a "get hosts" thing that enumerates hosts; I'm not sure the current implementation tells you everything.
G
Okay, no, no! I'm just thinking out loud, I guess, but yeah. If we look at all the hosts, then we might be looking at hosts that are not storage hosts. So we might have to be a bit more accurate on the list of hosts we're looking at, I guess, or looking into, because if...
G
Yeah, because really the entry point, regardless of whether it's Rook or OCS, is the node label, yeah. So the label is, like, a storage node label, for example, and then you really get the list of hosts that will be becoming storage nodes. Yeah.
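(For reference: OCS marks storage nodes with the label shown below; for plain Rook, any agreed-upon label would do the same job:)

  apiVersion: v1
  kind: Node
  metadata:
    name: node-1
    labels:
      cluster.ocs.openshift.io/openshift-storage: ""   # marks a storage node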