From YouTube: Ceph Orchestrator Meeting 2022-02-01
A
Yeah, I guess we can start doing that. I don't know if there are going to be any objections to that. You know, if we had any new big features, we could try to add doc notes when they came in.
A
Yeah, maybe we can start doing that. I guess you'd probably want some notes then for things that we already have in, like the agent. When that got added... right now it's going to be in Quincy, maybe with a backport. So I guess a note would be about that, and then asyncssh; when we changed to that, maybe we should have a note for that too. Those are the pretty big ones, other than that.
B
Yeah, so to give a little bit more color: originally this partly started as a request for go-ceph to wrap these APIs, but Blaine from the Rook team noted that, while Rook does not want to require the manager module to be enabled for Rook to work, they do want to use these API calls, and today that just fails. If you don't have the manager module enabled and you try to use even the most basic thing like export ls, it fails.
B
So I did a few minor patches around this area as I was reading the code, and I do have a work-in-progress branch where I've kind of arranged it so it doesn't require the dependency to be there, but at the moment this only works for things that have already been set up.
B
In other words: you already had cephadm orchestration enabled, you created an NFS cluster, you created a couple of exports, then you turned off cephadm. That's where my work-in-progress branch is at this point. What I want to raise here is kind of the UX, the workflow around it.
B
You know, I think it's good to have, especially with cephadm orchestration turned on, but for Rook or other people... in fact, I believe there was a mailing list post this morning; I didn't know if I should reply or not. Sorry, I'm missing something important: all these commands take a cluster id. The cluster id roughly maps to a RADOS namespace inside of the .nfs pool, which is used to configure Ganesha and so on.
B
My two thoughts were: somehow extending the existing nfs cluster create and cluster remove commands. That would work, but you'd essentially have a big if statement in each function, which would say something like: if orchestration is there, try it and set up the orchestration bits; if orchestration's not there, skip them. And then the other option would be an additional command-line boolean switch, something like unmanaged or unorchestrated, something along those lines. I believe that's what Blaine proposed in the go-ceph issue.
B
I wasn't sure; in my own mind I didn't really like the UX workflow for that. So the other one I was considering is a new set of commands, which would just be: register this cluster id, unregister this cluster id. These are for things that you're managing on your own, outside of orchestration.
B
So you'd have basically a pair of commands for when you're running with orchestration and a pair of commands for when you're running without. I kind of like that approach, because the jobs of the commands are more orthogonal.
B
One is dedicated to setting up the cluster; one is just saying, hey, there's a cluster that should be here. And a bunch of the code underlying them can be shared.
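
As a rough sketch of the two shapes being discussed (the function and helper names below are hypothetical, not actual Ceph nfs module code):

    # Hypothetical sketch of the two UX options; all names are illustrative.

    def deploy_ganesha_service(cluster_id: str) -> None:
        """Stand-in for the orchestrator call that deploys Ganesha daemons."""
        print(f"orchestrator: deploying ganesha for {cluster_id}")

    def record_cluster_id(cluster_id: str) -> None:
        """Stand-in for the shared bookkeeping both paths need."""
        print(f"recording cluster id {cluster_id}")

    # Option 1: extend the existing create command with a boolean switch,
    # giving one command with a big if on orchestration.
    def cluster_create(cluster_id: str, no_orchestration: bool = False) -> None:
        if not no_orchestration:
            deploy_ganesha_service(cluster_id)
        record_cluster_id(cluster_id)

    # Option 2: a separate, orthogonal register command that only records
    # a cluster managed outside of orchestration (e.g. by Rook).
    def cluster_register(cluster_id: str) -> None:
        record_cluster_id(cluster_id)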
B
So now I'll stop rambling; let me know what your opinions on that are.
A
I mean, the first thought I always have when I hear about this is: the reason I think the orchestrators or backends were required before is that you need that stuff to actually set up these clusters. There's no way to deploy daemons without either Rook or cephadm to sort of do it for you, and we really want to get all of the commands to be working.
A
Without that, you almost need this manager module to then gain the ability to deploy daemons and things like that, which I don't think it was originally intended for. That'd be a pretty big sort of project to get...
B
All of that in here, yeah. I mean, from what I can tell, the existing nfs module does in fact deploy things: it uses the orchestration stuff to create new instances of services for NFS, Ganesha and so on. Again, we can double check with the Rook team, but I believe Rook's intention is to not require that; they'll be setting up...
B
You know, Ganesha pods or whatever in Kubernetes, but they don't necessarily want anything beyond export add, remove and delete. So the commands that start with nfs cluster, I don't think they're interested in those, except again maybe as a way to say... because right now all these commands that take a cluster id validate it; the first thing each one does is ask: is this a real cluster id?
D
The question I had is: what's the use case for this? I mean, this Rook use case where they want, like, totally separate APIs to list exports of clusters that are not deployed by an orchestrator, be it Rook or... yeah. You know, what's the use case?
B
Yeah, I hope I'm representing them correctly: they want to use the same APIs for creating and deleting exports, just not for managing the resources that back those. So again, using Kubernetes mechanisms, not Ceph orchestration mechanisms, to run Ganesha instances.
E
Hey, I'm glad I popped in at a good time. Just... yes.
A
We were... yeah, we were talking about...
A
There was that mailing list post again; I don't know if you saw the message. So John was sort of trying to explain what we want to do moving forward there, and I guess we were trying to clarify what exactly your use case is, and the requirements and things. I don't know if you want to try to go into that: what exactly do you guys want the manager nfs module to be able to do for you without having an orchestration backend?
E
Essentially, we just want to be able to manage the exports. You know, nfs cluster create and remove with no orchestration... I mean, I don't know what that would really do for us. I'm sure it could create the pools and things that are needed, but that's pretty low-level stuff.
E
Mostly it's that the exports are all created in a pool that has a defined name, and the namespace in the pool where they're located is deterministic based on the cluster name, and those are all things we have to pass to the nfs export create command.
D
So my question is how the NFS cluster is going to be deployed for this use case. It wouldn't be through the orchestrator module; Rook could separately deploy it and have its own way of storing the RADOS export objects, in a different pool rather than the one that we use right now, which is hard-coded.
E
Right, yeah. I did a lot of work two or three months ago to make sure that the integration with Rook using the new pools is all good. So Rook uses the default pools, the .nfs pool, and it uses the namespace that is the same as the name of the NFS cluster.
B
Yeah, so from what I understand, Rook is going to deploy the necessary pieces, Ganesha pods, etc., and you want to reuse the export commands that are already there in the nfs module, for say a CSI driver, to create exports. So when I was looking at the code... I guess you might have heard me say this; I'll just repeat it, because you might have joined a little late. Breaking the dependency of listing and creating exports on an orchestration module doesn't seem too hard.
B
The trick is knowing what cluster ids are valid. So I was proposing we could either add new commands, like nfs cluster register and unregister, to say: these are cluster ids I recognize but which are not going to be managed by orchestration. Or we do something where we use the existing cluster create and remove commands but add a --no-orchestration option or something.
E
Yeah, I guess my thought for how that would work, and I'm open to other considerations as well, is that the register creates a...
E
Although I suppose that's a concern for what we're doing anyway, right, in some respect. Oh, let me get my bearings again.
E
The way that I figured would be easiest to determine whether this NFS cluster exists or not would be to find out whether there is a namespace in the .nfs pool that contains that cluster id.
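
A minimal sketch of that existence check with the Python rados bindings; the function name and structure are illustrative, not Rook or Ceph code:

    # Does any object in the .nfs pool live in a namespace equal to the
    # cluster id? If so, treat the cluster as existing. (Illustrative.)
    import rados

    def cluster_exists(conn: rados.Rados, cluster_id: str) -> bool:
        with conn.open_ioctx(".nfs") as ioctx:
            ioctx.set_namespace(cluster_id)
            return any(True for _ in ioctx.list_objects())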
E
Well, yeah, I mean, as Rook handles it: when it creates this NFS cluster, it makes sure that the NFS namespace is created.
B
That is exactly what I was thinking the register command would do; that way we'd have all the code in one place. But if Rook's already doing that, it could be something we add in the future; we wouldn't necessarily need it. I guess the only concern I would have is...
B
If things change in the future, now Rook is kind of doing it and the manager module is doing it. I'm okay with that; it does make for less work in these orchestration changes, because it's basically my prototype cleaned up a little bit. So yeah, we could live without doing it for now.
B
I just don't know... what do others think about just skipping that, if Rook's already doing it? Basically forcing, well, forcing may be the wrong word, but you know, requiring that if anyone's going to set up clusters outside of orchestration, they have to manually create the required RADOS objects and namespaces.
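
A minimal sketch of what that manual step could look like with the Python rados bindings; the conf-nfs.<cluster-id> object name is an assumption based on the naming convention discussed here, not a documented procedure:

    # Hypothetical manual registration: write an empty common config object
    # into the cluster's namespace of the .nfs pool so the cluster id is
    # discoverable. The object name is assumed, not taken from Ceph docs.
    import rados

    def register_cluster(conn: rados.Rados, cluster_id: str) -> None:
        with conn.open_ioctx(".nfs") as ioctx:
            ioctx.set_namespace(cluster_id)
            ioctx.write_full(f"conf-nfs.{cluster_id}", b"")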
C
And how do you distinguish between this case and the case where somebody has just forgotten to configure their orchestration?
E
From the Rook side of things, it's also totally fine if we have to supply a flag that says, like, skip-orchestrator or something like that. I don't think that's necessary, but that is an option. It does mean that most users who are just going by the Ceph documentation will, you know, encounter an error if there is no orchestrator hooked up, right? Mostly, I guess, I'm focused on automating it from the CSI driver and...
D
Yeah, I mean, the question I had, going back: so Rook sets up this cluster, you know, by just... you let the user apply an NFS cluster manifest or something like that? Is that how it starts out?
E
Yeah, so the CephNFS custom resource will create an NFS server cluster and configure it to use the backend, but it stops at that point. It does not do any export management.
D
Okay, and you made sure that the way the cluster is set up is exactly the same as what the manager nfs module does? With respect to not just creating the pool and setting a namespace for the particular cluster, but also setting up the recovery database for the particular cluster and, you know, the other information that's required to set up Ganesha. So all of that work that the manager nfs module does for the NFS cluster is done by this CRD?
E
Yeah, yeah, and I spent a lot of time two or three months ago verifying all of that, and verifying that... I think it's released now, but it was a pre-release of Ceph 16.2.7... to make sure that Rook would create pools appropriately, and we actually tested that we could, you know...
E
We do support multiple NFS clusters running in our configuration, yes.
B
Yeah, so verifying the cluster id is just a matter of... again, in the prototype code, instead of what it does today, which is loop through results from the orchestration service list command...
B
...we could just change it to a RADOS object list and grab the namespaces. The prototype code just creates a set of every returned object's namespace and uses that as the validation. So if you have a namespace foo and pass it cluster id foo, everything works. If you don't have a namespace bar and you pass it cluster id bar, it will just tell you this cluster id is not valid.
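
A minimal sketch of that set-based validation using the Python rados bindings; the function name and structure are illustrative, not the actual prototype:

    # Collect the namespace of every object in the .nfs pool and treat that
    # set as the known cluster ids; validation is then a membership test.
    import rados

    def available_cluster_ids(conn: rados.Rados) -> set:
        with conn.open_ioctx(".nfs") as ioctx:
            ioctx.set_namespace(rados.LIBRADOS_ALL_NSPACES)
            return {obj.nspace for obj in ioctx.list_objects()}

    # e.g. "foo" in available_cluster_ids(conn) is True iff namespace foo exists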
B
I mean, that makes sense to me: simply drop the original proposal, since Rook doesn't need it, because they're already doing this setup. And in the future, if people want a command or a way to register new cluster ids separately, we can do that as a separate effort.
A
Does that work for you, Blaine, just dropping those sorts of requirements from the export commands?
E
Yeah, I don't think those requirements really become necessary unless registering a cluster starts to become more complicated, like if registering a cluster requires creating a configuration object that has stuff in it, but...
A
All right, that all sounds good to me. And also, John, I'll try to review your cleanup test stuff.
B
Yeah, thanks. It's all basically like: oh, that's not too pythonic, fix that; and there's like a spelling error.
A
All right, I guess that covers that topic. As I mentioned at the beginning, I know you had the topic you brought up in the IRC. Do you want to talk about that one here, the natural sorting stuff?
C
Oh yeah, I mean, it's a minor issue, but basically... I don't know who is behind it; I think it came from the IRC. So basically, in the original patch they want to use natural sorting for the ceph orch ps command, because right now it seems to be using the normal Python sorting, which uses alphanumeric order.
A
Although I wouldn't say that's a big concern, because if it did break the tests, we could just change the tests. That's happened before: we change the output of things, say add a field or remove a field, and you have to go update the tests to get them to work.
C
From the initial bug or the issue, it seems that they created a lot of OSDs, and for whatever reason it was easier for them to list and sort them in a natural way, because they were using some counter or something like that.
A
Oh, he even mentions the natsort package in there, yeah. So for me, whether or not we should do this for multiple commands, that's not really a big question. I guess the only question is finding out which of those commands it is necessary for. So orch ps is the obvious one; orch ls probably also wants to be in there.
A
...lists everything, yeah. The issue we have with that is just the OSD ordering, or if you have hosts that are ordered a certain way, like with a numerical id at the end, because it's alphabetical. Like they said, the example from the tracker was: you have osd.989 and then osd.99, and next to that osd.990.
C
That's happened before, okay. So we just implement the change for the relevant commands.
C
My concern was not whether we want to do it for all the commands in one PR; it was more about the consistency, from the point of view of the user.
A
If you really want to organize it, you could make one tracker for just this in general and then have the other ones as subtasks. Okay, okay. Whether or not you want to do it in one pull request with different commits, or in different pull requests, I'm going to leave that up to you. I thought the bigger question going into this was going to be whether we want to try to do some sort of implementation on our own or... yeah, and the other...
C
The other question was whether we want to add this natsort package dependency. I mean, in my case, if I had to implement it, I would say yes, if we'll be using it in multiple places. This way we already have the dependency, and if in the future we have new commands with similar requirements, we already have the support. It's more trusted than trying to do it ourselves with regular expressions or something similar.
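
For illustration, the ordering difference with the natsort package looks like this:

    # Plain sorted() compares character by character, so osd.99 sorts after
    # osd.989; natsorted() compares the numeric suffixes as numbers.
    from natsort import natsorted

    names = ["osd.990", "osd.99", "osd.989"]
    print(sorted(names))     # ['osd.989', 'osd.99', 'osd.990']
    print(natsorted(names))  # ['osd.99', 'osd.989', 'osd.990']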
A
I mean, I can see that. Yeah, if we're going to be using it in a bunch of spots, maybe we just use the package, yep. Hopefully it's one that's a bit easier; I know we've had different difficulties with different ones. Like, I remember with CherryPy it really wasn't that hard; I just had to put it in a couple of different spots and make sure they named it correctly. But I remember with asyncssh it was a whole thing where you had to add it to some special copr repo and everything.
A
That's also a factor. If it's something that you just kind of put in some file somewhere, make a one-line change, and it's good, I'd say there's no reason not to. Okay.
A
Okay, great, thank you. All right, good, and that was our last topic on the etherpad, so I don't think I have anything else. There was a whole thing with the centos containers, but it seems like that should be fixed now, so we don't really have to worry about that. Yeah, so does anybody have anything else to bring up here?