From YouTube: Ceph Orchestrator Meeting 2022-06-14
Description
Join us weekly for the Ceph Orchestrator meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A
All right, it looks like kind of a quiet week here. I added one thing to the other pad, which is just a note about Paul's disk rescan feature. We talked about it before as a possible replacement for the ceph-volume inventory stuff, and there is now actually a pull request open. It's been open for a while, but I haven't even looked at it myself yet. So if people want to take a look and give some direct feedback on there, we might talk about it next week.
A
There's some interesting feedback on there already. I know a lot of the problems people had with it before were about, maybe, why are we doing it in containers, or why can't it be in ceph-volume? I guess those are sort of the same problem, because I think one of the main reasons it's not going to be in ceph-volume is that we run ceph-volume in the container.
A
That's the only topic we actually have on there. Hopefully we'll have more to talk about next week; this last week's been sort of quiet. Ramana, since you're here, do you want to give an update on the proof-of-concept work for the HA NFS stuff for OpenStack?
B
Yeah, so Francisco mainly worked on it. He was able to get it to work, as in basically manually copy over the scripts from an existing NFS Ganesha service deployed by cephadm and then edit them. We did not run the ingress service, but instead wrote up the keepalived config manually and then had Ganesha bind to it, and we were able to get that to work. And he had some code changes, Francisco.
B
Quite some code changes in cephadm to do it. So I've asked him to share the Google doc where we have documented these steps, and we'll clean that up and link it to the tracker ticket.
B
So there's just one Ganesha instance that was manually deployed, and a manually run keepalived, with keepalived attached to that... sorry, with Ganesha getting bound to that keepalived IP.
A
All right, that's good that it's actually working. If you guys have got a proof of concept working, we should be able to get it into cephadm, which wouldn't be too bad. I'll look forward to seeing that doc and what's in it. Maybe that'd be something else we'll talk about next week, once it's cleaned up and I've had a chance to look at it a little bit, and how we want to move forward with that.
A
That was kind of all we had for this week. I know, John, you are going to start looking at the cephadm refactoring stuff, the work that... you know, you're taking that over, but I assume you haven't had a chance really to look at it yet, given that it just sort of started yesterday.
C
Correct. I was reading the build.sh script that Mike mentioned as kind of unpleasant and unpythonic, and I was just saying to myself: oh, instead of calling Python into zipapp (zipapp's a module), we can write our own build.py that calls the zipapp function, and instead of doing a bunch of string manipulation and other goofy stuff in shell, just bring it all into Python. I'm assuming that's what he was thinking, so I was going to prototype that, but I first need to kind of pull the branch locally, run
C
what's there, and, you know, just kind of get a feel for the workflow that the PR is implementing. And yes, I haven't done anything practical other than skim code so far, yeah.
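As a rough illustration of what that could look like, here is a minimal build.py sketch that drives the zipapp module directly instead of shelling out from build.sh. The source directory, output name, entry point, and filter are all assumptions made up for this example; none of it is taken from the actual PR.

import zipapp
from pathlib import Path

def include(path: Path) -> bool:
    # Filter callback: skip caches and tests when packing the archive.
    return "__pycache__" not in path.parts and "tests" not in path.parts

def main() -> None:
    source = Path("src/cephadm")   # hypothetical source layout
    target = Path("cephadm.pyz")   # hypothetical output name
    # create_archive does the packing that the shell script previously
    # stitched together with string manipulation.
    zipapp.create_archive(
        source,
        target=target,
        interpreter="/usr/bin/env python3",  # prepends the shebang line
        main="cephadm:main",                 # hypothetical entry point (assumes no __main__.py in source)
        filter=include,
        compressed=True,
    )

if __name__ == "__main__":
    main()

The point is just that everything the shell script did with string handling can live in one small Python script built around zipapp.create_archive.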
A
Yeah, it doesn't seem like we really have much to talk about this week. A lot of things are just sort of starting up or are getting to a point where there may be some discussion, but right now there's a lot of interim things.
D
I added a new point, I just remembered it. It was started by Ernesto last week, basically a request for some support for balancing the static placement.
A
Yeah, if I remember correctly, this was just about the default, where we're just using the count-based placements. Right now I don't think it's random; I think it just does the first host every time, and we just want it to not be random, or rather, yes, to be evenly distributed.
A
Yeah, I mean it's not super high priority, just because I think people who have big, proper clusters don't use the count option very much.
A
Most of the time, I think in an actual production system, they're going to want to say "I want the daemons here", not just "I want two anywhere". But yeah, it is still something that would be nice to do. I have noticed that when you set that up, you do get everything on the first host. I think part of that issue was just that when you're bootstrapping, it puts everything on that first host and then it doesn't want to move things.
A
It's always going to prefer the hosts that things are already on. So I am sort of interested in how we're going to work with that aspect of it, like how much do we want to actually move daemons from that initial host to other ones based on the sort of hashing we do here? Because if we don't move anything, then technically we'd still have a hotspot on the first host, but I'm not sure if that's really the case he's concerned about, or if it's more about later on.
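To make the tradeoff concrete, here's a toy sketch contrasting a "sticky" count-based placement that prefers hosts a daemon already runs on with a hash-ranked placement that spreads daemons but can move them when the host list changes. This is not cephadm's actual scheduler; the function and host names are made up for illustration.

import hashlib

def sticky_placement(hosts, existing, count):
    # Prefer hosts that already run the daemon, so nothing ever moves,
    # but the bootstrap host stays a hotspot.
    preferred = [h for h in hosts if h in existing]
    others = [h for h in hosts if h not in existing]
    return (preferred + others)[:count]

def hashed_placement(hosts, service, count):
    # Rank hosts by a hash of (service, host): different services land on
    # different hosts, but adding or removing a host can reshuffle daemons.
    def rank(h):
        return hashlib.sha256(f"{service}:{h}".encode()).hexdigest()
    return sorted(hosts, key=rank)[:count]

hosts = ["host1", "host2", "host3"]
print(sticky_placement(hosts, existing={"host1"}, count=1))  # always ['host1']
print(hashed_placement(hosts, service="grafana", count=1))   # may pick a different host

The discussion above is about where to sit between those two extremes: how much daemon movement away from the bootstrap host is worth it to avoid the hotspot.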
D
I had some discussion with him, and I think he had some practical use case where, after some installation, they end up with the daemons on the first host and they have to go and manually move them.
A
If the static placement sort of changes... but that's also something we really don't want to be doing, moving daemons around all the time whenever we add a host. So we'd have to sort of balance that a little bit, yeah.
A
Yeah, I'm just saying, if there's any case besides the one where you're first bootstrapping and adding the hosts, then it should be easier to fix. If it is that case, then it's a bit trickier, because we have to strike that balance of how much we're actually going to move things around, yeah.
A
Right, cool. Yeah, if you ask them about that stuff and get more clarity on the exact use case they want solved, that could maybe help, and we could even include it...
D
In the tracker issue, yeah. And I'll update the tracker with the new information.
A
Okay, the one thing we do have is that, because we specify what images we're using, we can sort of decide which version we're going to have by default. And because we want to default to this thing being off in general anyway, it might be okay to still go forward with it, and then we just have to be careful whenever we change our default monitoring stack images.
D
Right now, the way I'm going with the implementation, it's just setting a variable to true or false to enable or disable the feature. But I think, probably for a real deployment, we should make it more explicit, instead of just having one variable change the whole stack to other options, so that the user is more aware of the effects.
A
Yeah, and again, that's okay for an experimental feature at the beginning, and then we'll try to work on it a little bit. All right, yeah, it's something we'll have to watch out for, but hopefully it shouldn't block anything for now, as long as we're careful that the versions we have by default for the monitoring stack daemons are ones that have the experimental feature, in the way that we're testing it. Yeah, all right, that's fine. Something good to be aware of, I guess.
A
All right, any other topics, I guess?
A
Okay, all right, cool, thank you. Then, other than that, I think we can sort of call it here. We'll have, I think, a little bit more to talk about next week; I want to bring up the RGW stuff, the RGW multi-site stuff, next week as well. Yeah, I think we'll end here and I'll see you all next week. Great, see you next week. Bye. Thanks.