From YouTube: Ceph Orchestrator Meeting: 2022-05-24
Description
Join us weekly for the Ceph Orchestrator meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A: Basically, the idea was that the ceph-volume inventory is a bit slow gathering the disk information, so we thought maybe he had some way of doing it faster, and so we put together this secondary etherpad basically explaining this whole thing.
A: It goes into some detail on what it's doing and why it's better, and so I just wanted to bring it up here as an idea and give my thoughts on it. Or maybe we have it as an optional alternative, kind of like how we have the agent as a way to refresh things: maybe something you can turn on to do your device refreshing instead of the normal ceph-volume one. We could have teuthology tests for everything and make sure it's stable before we really do anything with it.
A: Yeah, I mean, I guess we might have to read through this a little bit to get a feel for it, because there is a decent amount of information there. Does anyone have any initial thoughts on it, or issues with changing over from ceph-volume, or anything? By the way, this would just be for the refreshes; it wouldn't be for deploying the OSDs, like the batch commands or anything, just for refreshing the device information.
A: I don't remember all the details. I know one of the reasons was that ceph-volume inventory runs inside of a container, and this would not, because right now ceph-volume is part of our Ceph packages, which we don't necessarily distribute as packages themselves anymore; we have them in the container. So we have to start up a container that then runs the ceph-volume stuff, and right now it's using some hacky stuff with nsenter to run commands on the host directly.
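(For context, the nsenter approach mentioned here looks roughly like the following; the exact flags and command cephadm uses may differ, so treat this as an illustrative sketch rather than the actual implementation.)

```shell
# Illustrative sketch only: run a command in the host's namespaces from
# inside a container by entering the namespaces of PID 1. Requires a
# privileged container that can see the host PID namespace.
nsenter --target 1 --mount --uts --ipc --net --pid -- lsblk -o NAME,SIZE,TYPE
```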
A: But I know that was one of the problems, and I know he did try to mess with it, but I think he said he would have had to redo enough of it that it would have been...
A: I don't know, almost like a full overhaul of how they do it. There are a lot of other built-in things, like the way they do the class setups and everything in there, that would make it really hard to change the inventory without also messing up everything else that ceph-volume is doing that we don't want to touch.
A: Yeah, that's why I was thinking maybe the way to go about it would be similar to how the agent was set up, where right now we have tests in teuthology that branch off, like one run of the test with the agent on and one with it off, to make sure they both pass.
A: But maybe do something like this, where you have a few tests with an option, like a config option, to turn this on so it uses this instead of ceph-volume, and then we have that in teuthology. We make sure that it's passing and everything, and then we give people a chance to maybe try it and use it, and only then consider trying to turn it on by default, just like with, say, something like the OSD memory auto-tuning.
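(As a sketch of what that opt-in could look like: cephadm exposes its module settings through `ceph config`, so a toggle of this sort would presumably be flipped along the following lines. The option name here is made up purely for illustration.)

```shell
# Hypothetical option name, for illustration only: enable an alternative
# device-refresh path instead of the ceph-volume-based inventory.
ceph config set mgr mgr/cephadm/use_alternative_inventory true

# And to fall back to the default ceph-volume-based refresh:
ceph config set mgr mgr/cephadm/use_alternative_inventory false
```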
A: It was off by default for a really long time, and what we ended up doing was turning it on for new clusters only, by having it turned on in bootstrap. It's not turned on if you, like, upgrade or anything. That way you're not breaking any old clusters that people have built up, in case there is a bug, and new clusters eventually actually try it, because it's turned on by default for them.
A: You know, start with that: just having some teuthology tests for it, and then, you know, have the option and everything, and then eventually turn it on just for the new builds, and go from there, seeing if people report any bugs popping up.
A: Yeah, so that is there, though; there's a lot of information in the other pad that was put together.
A: You know, I'd request people, if they have time, to look through it and see what they think about it. I know it's kind of hard in this meeting to just suddenly say, oh, we have this other pad, read through it now, because it's, you know, short notice, but at least people can be aware that it's there, and we can get some initial thoughts on any immediate issues people have.
A: Yeah, so I guess I'll just leave it on the agenda again for next week, and maybe people will have a bit more time to digest it by then, because it is a lot of information to go over in, like, 90 seconds. So we can just move on for now.
A: The next topic we have: add a Backport tracker to the Redmine project.
D: So this one is kind of a rehash of the question of whether we want to keep doing batch backports or we want to create backport trackers. There have been a few members of our team who are used to the backport tracker script, which is that script, or the documentation, I posted there in the pad. It seems to be pretty common for a lot of the other subcomponent teams, including a lot of the manager and dashboard teams, and so when they come in, they try to create a backport using the script.
D: They cannot do it, because we don't have a Backport issue type associated with our Redmine project. Obviously there's a couple of pros and cons here. Doing the batch backports to the N-minus-one release is awfully convenient and reduces the noise, and then we've been cherry-picking to the releases beyond that for other backports, which has worked well for us in the past. However, we've also missed a few backports on occasion, and that's caused some grief. I guess it's just kind of a re-raise.
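(For reference, the tracker-based workflow being described is roughly the one below; the script name and invocation are from memory and may differ, so check the documentation linked in the pad.)

```shell
# Rough sketch of the Redmine-based backport workflow:
# 1. On the parent tracker issue, set status to "Pending Backport" and
#    fill in the Backport field, e.g. "quincy, pacific".
# 2. Run the script from the ceph source tree so it creates one Backport
#    child issue per named release (issue number is a placeholder):
src/script/backport-create-issue 12345
```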
A: Does having this component there make it so you can't do it any other way, so you have to use this procedure?
A: I know there was one thing in a batch backport that I know had a Backport component open, because it was not cephadm-targeted. I forget which one it was, but it did some ceph-volume changes and some other things, and so it was backported ahead of our batch backports, and I just marked on that tracker that it was going to be in there. Yeah.
D: It's only if the component is an orchestrator component. So if it's, like, the top-level Ceph project in Redmine, then the script does work, because that has a Backport tracker associated with it.
A: Yeah, I'm definitely at least open to having the Backport issue type available; it should definitely be possible for people to do that. If they want to use the backport script for orchestrator stuff, I don't see why we would prevent it.
A: I don't know. I mean, I think we definitely should at least open this up in the project, so it's possible to use the backport script, and I guess we'll have to start playing around with it and see how much of an impact it has. But I think, definitely at least for the first step of opening it up in the project, I don't see any reason not to.
A: Cool. Does anyone else have any opinions on this? Does anyone else here use that backport script often, or anything?
A: Yeah, so, like, I know on our trackers we'll just put in that backport section, like "quincy, pacific", and then we just update those trackers later: we'll move them to pending backport and then to resolved later.
A: There would be a tracker for, like, the thing in master, and the backports would each have a different tracker, and so the idea is that you can track it better: which things have been backported and which things haven't. If you have a separate tracker issue, you can go look through the backports to see which things have been backported and which things are missing.
A: This is the way that the other teams typically do it: they just use this script. I don't know how much of that is just because I think we were backporting more heavily than a lot of the other teams.
A: Because cephadm was sort of new, like, Octopus was its first release; it's not like it's been around as long as some of the other things, and we were backporting super heavily. Maybe, as that slows down, this starts to become a better idea.
D: No, the backport label on the tracker.
A: Oh, everything's done through the tracker, which we're already doing a lot anyway. Like, I know I've set a bunch of them to say pacific or quincy or whatever; we were doing that already. Well, I guess you had to do that manually, but then you use the fact that you set that to have it create the backports and the other trackers for you, instead of just having that one tracker that you're tracking yourself and making sure your backports happen. That would always help to not miss things that need to be backported.
A: I mean, there's the link in the tracker, in the other pad there; it goes over the script there, it looks like. Yeah, I see.
A: Okay, especially now with the fact that I think Pacific backports are going to start slowing down a little bit, and it's going to be, well, it'll be a lot of Quincy stuff; pretty much everything is going to be going to Quincy for the time being. But it should be slowing down, so it should be a bit easier to handle it and stuff, so I'll definitely at least test it out. So we need to open up this issue type, or the tracker project, for the backport stuff.
A: Yeah, I guess overall it's just: we should open up the project, and then we'll start playing around with it, and we'll see how it goes from there. But I think overall it seems like it would be good to at least try it and, you know, see if it works for us. Yeah.
C: Yes, hey, this is Goutham, and I work on OpenStack Manila, and Ramana brought this up here in this meeting last week. So I was just trying to fill in some details that you sought last week, and trying to see if, you know, we can get some assignee, or discuss if the timelines don't make sense, and stuff.
A: Yeah, I remember; it was put in the other pad, I think a couple weeks ago, along with when you wanted it. Yes, yeah, it's like in August. I know I talked a little bit with the OpenStack team on Thursday about what was going on with this; I think they said Ramana was working on a proof of concept.
A: I don't see Ramana in here. Francesco?
G: Yeah, so basically Ramana is working on this PoC. We don't have any progress at the moment; I'm not sure about the status. Well, we reviewed the solutions, and we came up with the tracker and the corresponding Bugzilla. I think we need Ramana to talk about the progress on this PoC, because I have some questions as well on this tracker. Like, I see we're going to support, at first, one Ganesha and one virtual IP. We need to manage the failover for the hosts, especially if you have multiple virtual IPs and multiple Ganesha instances, and you need to keep consistency between virtual IPs and Ganesha instances, right?
G: So if we remove the proxy layer, which is one of the points of this solution, I was wondering if we need to remove keepalived as well, because keepalived can help in this scenario with the failover of the virtual IP, especially if you have multiple virtual IPs. This logic can be complex to implement in cephadm, and you're going to reinvent the wheel, because you already have this solution based on keepalived that works.
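(For reference, the keepalived role described here, moving a virtual IP between hosts on failure, is typically configured along these lines; the interface name, addresses, and health check below are placeholders, not values from the deployment being discussed.)

```shell
# keepalived.conf fragment (illustrative placeholder values): two hosts
# share VIP 192.0.2.100; the BACKUP node takes it over if the MASTER,
# or its ganesha health check, fails.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_script check_ganesha {
    script "/usr/bin/pgrep ganesha.nfsd"   # fail if ganesha is not running
    interval 2
}
vrrp_instance VI_0 {
    state MASTER            # BACKUP on the standby host
    interface eth0
    virtual_router_id 50
    priority 100            # lower priority on the standby host
    virtual_ipaddress {
        192.0.2.100/24
    }
    track_script {
        check_ganesha
    }
}
EOF
```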
G
So
I
I
want
to
hear
opinions
on
this
on
this
kind
of
question
regarding
which
layer
we're
going
to
remove
I'm
okay
with
the
j
proxy
or
the
issues
that
this
that
we
came
up,
but
I'm
not
sure
with
kpop
d,
so
that
that
should
this
is
my
question
for
romana,
but
is
not
here.
So
I'm
not
sure
we
want
to
go
ahead
with
this.
A: Yeah, as you said, it's a bit hard without Ramana here; he was the one who was sort of looking at it the most, I think.
A: Ideally it would be nice if we could just reuse the keepalived stuff instead of having to reimplement it. I mean, I'm not sure exactly what Ramana is looking at in terms of how he's setting up this proof of concept, or whether his proof of concept can use keepalived; we would have to ask him about it.
G: Yeah, in OSP we have Pacemaker, and basically we're doing the same thing, a similar thing, with keepalived instead. So in theory this kind of solution can fit at least one Ganesha and one virtual IP. The challenge would be extending the same solution to support multiple virtual IPs and multiple Ganesha instances. We need some logic in cephadm to make sure we have consistency for clients, at least.
G: We need to reach the right Ganesha instance, which is tied to a specific virtual IP, and I think this is one of the questions for Ramana, whether he has specific ideas on how to solve this kind of challenge.
G: I'm not sure, Goutham, if you have opinions on this thing. Yeah, let's forget about this part; I'm looking for opinions.
C: No, absolutely. So yeah, I know he is working on the PoC, but that was to kind of replicate the issue we have currently, which is that when you have client restrictions in the exports, you're not able to, you know, mount these shares on these CephFS volumes, because the client source IP is appearing differently and stuff. So I think he was going down this path of not using the ingress service, but having the NFS Ganesha server have a stable IP, which means that if the Ganesha server is down and needs to be restarted somewhere else, we move the IP, you know, to that node and stuff. So that was, that's...
C: I think that's what the initial approach would be, and that would bring us back to parity with what we currently have with OpenStack Manila, where we are, like you said, having Pacemaker manage the virtual IP, and there is only ever one active Ganesha server, and the virtual IP moves along with that server wherever it's incarnated.
C: So that's the initial solution we're shooting for, and, of course, if we wanted to extend that for scalability's sake, we would, I mean, be interested in the multiple-Ganesha and multiple-virtual-IP solutions.
C: So that's what this tracker is all about, I think, and he was at least telling me that he was planning to get somebody assigned to take a look at, you know, the timelines, as to when that could be possible. Right now, I suppose you can bind Ganesha to some address, but that address is not really going to move; I mean, there's nothing, you know, orchestrating that address to move along with Ganesha.
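(For context, the ingress service mentioned above is the layer cephadm deploys in front of an NFS service today; a spec along the following lines is roughly what gets applied, with placeholder names and addresses. The discussion here is about replacing or slimming down this layer.)

```shell
# Roughly how the existing cephadm ingress layer (haproxy + keepalived)
# is placed in front of an NFS service; service id and IPs are placeholders.
cat > ingress.yaml <<'EOF'
service_type: ingress
service_id: nfs.mynfs           # placeholder service id
placement:
  count: 2
spec:
  backend_service: nfs.mynfs    # the NFS (ganesha) service to front
  virtual_ip: 192.0.2.100/24    # placeholder virtual IP
  frontend_port: 2049
  monitor_port: 9049
EOF
ceph orch apply -i ingress.yaml
```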
A: And we can also try to talk with him a little bit about the details and what the plan is. I would say definitely the first thing we should do is exactly that, get back to the Manila parity thing, because this solution is really being made for OpenStack, and so we can at least have it so we're not having a regression there. That would be ideal to start with, absolutely, yep. But yeah.
A
Let's
talk
with
ramon
on
exactly
what
he's
doing
and
what
logix
fdm
would
need
to
manage
this.
I
guess
basically
do
I
guess
what
the
pacemaker
was
was
doing
for
you
guys
or
it
sounds
like.
C: Yeah, and so Ramana is off the entire week, but I can, like, you know, send him a note, and we'll probably add a topic to next week's meeting, if it helps.
A: I guess we're going to have to bring this up again next week, but that's fine. So, some good discussions today. Does anyone have anything else they want to talk about in here, any other topics that aren't on the pad?
A: Yeah, he mentioned this to me as well, because we have one pull request already open for doing this, for keepalived and haproxy, and it's been open for a while, actually. I think the reason it had never been tested and merged or anything was because I know we don't hit those rate limits in our teuthology testing, which means that there must be something being done so we're not actually pulling from Docker, and so I was trying to figure out why, and what we're doing there exactly.
A
I
never
got
around
to
doing
that
and
figuring
out
sort
of
what's
happening,
so
I
wanted
to
actually
test
if
the
point
that
image
itself
worked,
but
I
can't
test
until
I
figure
out
how
it
actually
works
and
there,
but
then,
in
the
general
sense
yeah.
I
guess
there's
no
reason
really
to
keep
using
the
docker
images
if
we
have
the
option
to
mirror
them
on
clay,
because
it
is
just
going
to
hit
problems
with
the
rate
limiting
and
stuff.
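(For reference, the default monitoring-stack images are configurable in cephadm, so pointing at mirrored images should not require code changes; something along these lines should work, though the quay.io paths below are placeholders rather than real mirrors.)

```shell
# cephadm lets you override the default monitoring container images;
# the quay.io paths below are placeholders, not actual mirrors.
ceph config set mgr mgr/cephadm/container_image_prometheus \
    quay.io/example/prometheus:v2.33.4
ceph config set mgr mgr/cephadm/container_image_node_exporter \
    quay.io/example/node-exporter:v1.3.1
ceph config set mgr mgr/cephadm/container_image_alertmanager \
    quay.io/example/alertmanager:v0.23.0
```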
A: I think some of the other monitoring stack images, like Prometheus, the node exporter, and stuff, might still be on there, and then I don't remember if the SNMP image is also Docker; it might be.
A: Yeah, so if it's possible to mirror those, we could. I don't know how much work it actually is to do that; I'd have to ask him how realistic it is to do all of them like that, but I know he was the one who was interested in doing it to begin with, so maybe it is possible. I'd want to see. I know we don't deploy all of those a lot in teuthology; I think that's why it's sort of just been ignored. It's really mostly an issue for our CI.
A: If you're only pulling the image once, you know, it shouldn't be a huge deal. Yep, so we'll have to see how much of a maintenance task it actually is to do that, and whether it's worth it just to avoid that issue, I guess.
A: All right, I don't have any other topics to bring up here.