From YouTube: Ceph Orchestrator Meeting 2022-03-08
Description
Join us weekly for the Ceph Orchestrator meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A
Yeah, so the first thing I have on here is about downgrades. I don't remember entirely, but I think at some point we did support downgrades of minor versions. I can't remember how far back it went, but I was looking into it recently for some downstream things, and right now it's broken entirely in a bunch of spots, so I put down some of the ones that I know are messed up right now.

So one thing is the way we handle our migrations. It doesn't work if you add a new migration and then you try to downgrade to a version from before that migration existed. Then the stored migration_current value is too high, and it just loops forever trying to finish the migrations and never finishes. It expects to eventually get to a number that's lower than where it's at, but there's no way for it to go down. So you can't do any migrations in that direction, and then there's the upgrade state.
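A rough sketch of the migration-pointer problem described above, assuming a simple "run migrations until the stored pointer matches what this binary ships" loop; `LATEST_MIGRATION` and the function names are invented for illustration, not the actual cephadm identifiers:

```python
# Illustrative model, not the real cephadm code: an older binary ships
# migrations up to LATEST_MIGRATION, while the cluster's store holds a
# migration_current pointer left behind by whatever version ran last.
LATEST_MIGRATION = 3

def run_pending(migration_current: int) -> int:
    """Apply every migration this binary knows about, in order."""
    while migration_current < LATEST_MIGRATION:
        # ... real code would run migration number `migration_current` ...
        migration_current += 1
    return migration_current

def migrations_complete(migration_current: int) -> bool:
    # The module only considers itself done when the stored pointer
    # matches the set of migrations this binary ships.
    return migration_current == LATEST_MIGRATION

# Normal upgrade path: the pointer catches up and the check passes.
assert migrations_complete(run_pending(1))

# After a downgrade, a newer version may have left migration_current = 5.
# No known migration can raise or lower it, so the check never passes and
# the module retries forever, unless the stored value is lowered by hand.
assert run_pending(5) == 5
assert not migrations_complete(5)
```

This is also why simply lowering the stored number, as mentioned later in the discussion, is enough to unblock the loop.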
A
I don't remember the exact fields, but because we added fields there that weren't in the previous versions, the older versions can't load the upgrade state out of the JSON properly. So downgrades don't really work at all in situations where those things are different. Basically, I just have a general question here: do we want to try to support these downgrades? And if so, I guess we're going to have to keep track of which things could possibly break it, and try it every once in a while before the minor releases go out. First, we'll start with that: does anybody have any thoughts on whether we should still be trying to support this? Obviously, major version downgrades we're not going to support, but across minor versions, like two minor versions or something?
A
I don't think so. I was looking at this for some downstream thing, and in order to get the downgrade to work, really all I had to do was lower the migration_current number to the right value. Maybe it had done some migration work, but usually the things a migration does don't hurt you if you put them back. There might be some cases where it does, and we have to be careful of that, but at least with the ones we have so far, I don't think they'll kill you.
A
It's really just that migration_current being too high makes it never finish the migrations. Okay, that one could at least be fixed fairly easily; we just have to lower the number.

The things we can't actually control are like the last bullet point I have on there. The problem is that we can't really do anything about the versions that are already out there. You can't say, oh, we're going to fix downgrading to 16.2.5, because it could already be broken, and you can't change 16.2.5 at this point. So it's something for the future; maybe across Quincy versions we can try to do that.
A
I don't know; maybe once we get a couple of things in, maybe for the first Quincy major release, we just support it across the Quincy minor versions. And I think it's migrations and the upgrade state that have been the blockers so far; I don't think anything else has been a huge deal.
A
For a point of reference, in Rook we do, I guess, allow downgrades, in that there's nothing we do that prevents a user from doing it, but it is something that we expressly don't support. I mean, being able to roll back is part of the Kubernetes philosophy, but with storage and stuff, sometimes it's very challenging. So yeah, I don't know that it's anything we'd explicitly go out and advertise, like "you can do this," but it would be nice, because if there was a bug, you'd say, oh, I want to get away from this bug, I want to go back a version.
A
So I'm kind of in favor of trying to figure it out: getting the couple of things that are broken right now working, and then seeing if we get that into Quincy, and just across Quincy versions see if we can handle it. I don't think it's going to be doable back to Pacific; with all the versions already out, it is what it is. But we have a chance with the new major release to maybe support this.
C
The trick with downgrading, in my book, has always been: are your persistent things, like on-disk formats and wire formats, the things that are shared between different code bases, stable enough or not? So generally, for x.y.z versions, it helps if you have a good policy of saying, oh, we don't change the on-disk format within a z stream, so it's safe to downgrade to an older z. But it helps to know, and if the project is big or people don't communicate, then the next thing you know, if the code is really naive, it just opens the file, pulls in the content, and interprets it completely wrong.
A
Yeah, from what I've seen testing it, it doesn't break anything extreme. It's really just cephadm itself that becomes inoperable: you have to manually modify a bunch of these sort of behind-the-scenes config things where we're storing data structures, to get it to an okay spot. It basically has to remove a few things, and then it can sort of fix itself. But I don't know, they're minor things that I think we could fix up.
A
Yeah, so I don't know; I feel like, I guess, we kind of go with trying to support it across minor versions if it's possible, but there might end up being instances where it's too tricky and it's not high enough priority to block certain things for it. But I feel like right now we're in a spot where it's possible; it actually mostly works, with only a couple of things that are broken. So I think we can look at it for Quincy, at least unless we add something new, across some minor versions.
D
So you know, I think that not only affects cephadm but all the other modules as well, because everybody must be aware that we can go back to older data structures. As John pointed out, you have to be very careful, because you could install something that has some new fields or whatever, and then when you go back, you go back to recover, but you could end up in a worse state of the cluster.
A
Yeah, I mean, generally what cephadm does when it does this works the same way as an upgrade: it fully redeploys the daemon with the new container. So a lot of the time it'll still sort of work on that front, and the problems end up lying with us rather than other places. But I guess that could also be a reason why we don't want to explicitly support it. I know at some point we tried to, and I think we kind of forgot about it.
C
Yeah, I mean, it's kind of a joke, but you could say, oh well, yes, we allow downgrades as long as you pass --i-know-this-might-break-my-cluster-but-i-don't-care, kind of thing. Yeah, I'm not against fixing individual bugs that prevent it at the cephadm layer. It's just one of those things where, if the system isn't designed from scratch with the knowledge that this might be attempted, you can kind of shoot yourself in the foot from time to time.
A
I
could
maybe
try
to
figure
out
the
versions.
I
don't.
I
don't
think
we
do
that
right.
Now
we
try
to
actually
find
the
version
of
the
streets.
I
don't
remember,
we
could
try
to
do
that
and
require
a
force
flag
if
we
know
they're
going
back,
that's
risky.
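A minimal sketch of that force-flag idea, assuming simple dotted-integer versions; `check_upgrade_target` and the flag semantics are hypothetical, not an existing cephadm interface:

```python
# Hypothetical helper: compare the running version against the requested
# target and refuse a downgrade unless the user explicitly forces it.
def check_upgrade_target(current: str, target: str, force: bool = False) -> str:
    cur = tuple(int(p) for p in current.split("."))
    tgt = tuple(int(p) for p in target.split("."))
    if tgt < cur and not force:
        raise ValueError(
            f"target {target} is older than running {current}; "
            "downgrades are risky, pass --force to proceed"
        )
    return "downgrade" if tgt < cur else "upgrade"
```

So `check_upgrade_target("16.2.7", "16.2.5")` would refuse, while passing `force=True` lets the operator take the risk knowingly.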
C
Yeah, I think that might be a nice compromise. It just says, you know, I'm taking the risk upon myself; Ceph itself doesn't promise this will work. But, you know, if I was on x.y.z and I want to go to x.y.(z-1), that might work, because there's usually a small delta between those. Yeah.
A
All right, I don't want to spend too long on that topic. The next one is sort of related, actually: the upgrade state. I've had a couple of people ask about putting more things in here. One of the comments, I guess, is actually not really about the upgrade state; it's about upgrade history, which is a totally different thing. But generally: what things are most useful to put in the upgrade state? Right now, I think we have a list of completed services.
A
I know, again, there have been a few people who asked to improve this. I think somebody wanted all the daemon types listed out, and instead of just having completed services, have it split into two groups: these are the ones that are done, and these are the ones that are not, instead of just not listing anything for the not-completed ones.
A
There's
also
the
issue
of
persistency,
I
think,
because
we
do
the
failovers
at
the
beginning
and
I
think
the
other
cases
we
need
to
restart
the
managers
as
well.
When,
like
the
monitors,
change
and
things
and
some
of
those
times,
the
fields
will
get
like
reset
to
nothing
like
temporarily,
because
I
don't
think
we
persist
all
of
them
the
way
we
do
certain
ones,
and
so
it's
a
question
of
whether
we
want
to
do
that.
Then
that
also
ties
into
the
downgrade
thing,
because
we
have
to
worry
about
that
there.
A
But
basically,
this
is
more
again
a
general
topic
of
what
do
you
think
is
worth
putting
the
upgrade
field?
Is
there
anything?
We
should
still
add
that
we
don't
have
things
like
that.
A
I
kind
of
like
the
idea
of
having
the
services
we
still
need
to
upgrade
be
in
there
and
I
kind
of
want
it
to
persist
to
some
degree
when
it
finishes
but
like
when
the
upgrade
is
over,
have
messages
complete
instead
of
just
having
in
progress
be
false.
I
thought
that
would
be
a
nice
thing
to
have
there.
A
People
who
are
just
checking
that
it
would
require
persisting
the
upgrade
state
after
the
upgrade
is
over,
which
requires
some
changes
there,
but
I
kind
of
like
the
idea
that
be
able
to
say
like
not,
instead
of
just
not
having
anything
listed
there.
If
it's
over
just
have
an
actual
message
saying
like
it's
done
now,
maybe
like.
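As a sketch of what that could look like: a status report that keeps a terminal message around instead of going empty once the upgrade finishes. All field names here are invented, not the actual upgrade-state schema:

```python
# Hypothetical status shape: while an upgrade runs, report progress;
# after it finishes, keep a persisted record with an explicit message
# rather than reporting nothing at all.
def upgrade_status(state: dict) -> dict:
    if state.get("in_progress"):
        return {
            "in_progress": True,
            "target_image": state["target_image"],
            "services_complete": state["services_complete"],
            "services_remaining": state["services_remaining"],
        }
    # Terminal record persisted after completion.
    return {
        "in_progress": False,
        "message": f"Upgrade to {state['target_image']} complete",
    }
```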
A
Yeah, it doesn't sound like anyone has any strong opinions on this, so I'll probably just go forward and propose a couple of things. I'll put up a PR and see people's opinions there.
C
Sure, yeah. This was originally kind of riffing on a downstream bug someone saw, where one of the various commands with --format xml doesn't actually return XML; it just spits out JSON. --format yaml spits out JSON. No matter what you ask for, it's going to give you JSON, and they were reporting this issue for one particular orch command.
C
If you give it a format, it should either convert to that format or tell you, no, I don't support this format. Actually, right before the meeting, I was drafting some dummy pseudocode. Did I paste it into the etherpad?
C
There's
no
commit
this
is
this?
Is
me
playing
around?
Is
it
koshered
for
me
to
just
paste
something
into
the
ether
pack?
I
don't
think
there's
any
strict
policy
for
the
either
pads
all
right.
So
if
no,
no
one
mines
I'll
just
drop
this,
I
don't
know
what
just
happened
jump
to
the
bottom,
hopefully
yeah
maybe
got
posted
there.
I
can
see
it.
C
Okay
yeah,
so
that
was
my
kind
of
like
again.
This
is
five
minutes
before
the
start
of
the
meeting,
so
it's
not
well
actually,
during
the
I'll
I'll
be
I'll,
I
was
doing
it
a
little
bit
at
the
during
the
meeting
too
anyway.
This
is
my
silly.
You
know
off
the
cuff.
Basically,
the
idea
is
that
for
functions
that
opt
into
this
new
style
and
so
the
old
ones,
don't
all
need
to
be
converted
right
away.
C
We just have a decorator that is ultimately responsible for returning the response from the manager to the client. If it sees --format, it says: your function is required to return a Python dictionary, and once you do that, it will handle the rest. For JSON, it'll call json.dumps; for YAML, it'll do the yaml equivalent; and then for something a little bit more flexible, like a text format where you might want to walk over the fields and print them line by line, you would provide your own writer callback. And since, you know, from the NFS work, some of the objects are reused across some of the calls, you could reuse your callbacks.
A
Yeah, at the very least, I like the idea of a decorator, or at least finding out whether the command supports the given format type. I like this layout you have here, where it just lists the formats we have and whether each is supported or not. As far as actually having the decorator responsible for any of the conversion work, that's a bit trickier, yeah, but at the very least, I think the format validation with the decorator would be really good.
C
Yeah,
I
think
it
would
be
a
lot
nicer
just
to
you
know
if
they
do
dash
dash
form
an
xml
or
dash
dash
format
tommel
or
make
up
something.
It'll
just
say
I
don't
know
what
you're
talking
about,
rather
than
turning
them
json
or.
A
I think in general, yeah, I would support sort of a verifier.
A
All right, does anybody have anything else they want to say on the format flag, or should we go to the HA NFS stuff?
A
All
right
so
who
added
this
topic
here,
the
general
questions
about
nfs.
This
was
me
yeah.
This
is
mostly
me
just
trying
to
information
collect.
I
I
know
that
there
it
at
least
was
still
ongoing
conversation
in
seth
adm
or
in
in
the
likes
of
edm
and
aj
cent
world
around
how
to
handle
nfsha.
A
We
are.
I
have
been
looking
at
rooks
like
nfs
handling
a
lot
more
closely,
and
it
is,
I
don't
know
it's
sort
of
we
don't
have
in
that
world.
We
certainly
do
have
parallel
scale
out
and
we
have
in
the
sense
that,
if
an
nfs
server
like
dies,
the
pod
will
be
restarted,
but
we
don't
have
like
an
active
fast,
an
active,
passive,
failover
sort
of
scenario
yeah.
So
I
guess
I'm
I'm
just
trying
to
kind
of
understand
like
what.
A
Maybe
what
what
have
you
done
in
sep
edm
or
like
what
kind
of
conclusions
have
you
come
to
just
to
help
me
kind
of
understand?
More
more
of
the
domain
of
this?
I
guess
all
right,
there's
only
been
a
current
topic.
We've
been.
I
think
this
very
recently
so
we'll
see
the
things
that
we
already
have
so
for
one,
the
actual
demons
themselves.
A
Failing
on
the
host
that's
covered
by
just
sort
of
systemd,
we
just
kind
of
rely
on
some
data,
restart
things
that
fail
there
and
then
the
work
that's
been
going
on
recently
has
been
mostly
about
for
hosts
that
go
down
rather
than
the
demons
themselves.
A
So the idea is that there's a whole fencing thing around it that Sage implemented a while ago. Essentially, there's a sort of major and minor rank for the NFS daemons, and if the one with the highest rank goes down while the other ones are up, we'll make sure that one gets put somewhere else ahead of a minor one. Essentially, we'll move the NFS daemons around if a host goes down, so that they're in a good spot and they can keep going. And really, the big challenge with that is not moving the NFS daemons; it's finding that the host is offline fast enough that you can move the NFS daemon and get it started within 90 seconds, so that the grace period hasn't ended already.
A
And yeah, we have a few things open for that. We actually had a general issue with SSH connections getting these super long timeouts if a host didn't shut down properly, like an unclean cutoff where the network cable came out or the host just lost power; it wasn't working properly.
A
So
we've
been
trying
to
fix
that
and
then
and
it's
generally
detecting
hosts
offline.
There's
we've
been
involved
today,
sort
of
thread
that
just
is
intended
for
that
purpose,
basically
just
to
check
if
those
are
online
every
I
think
it's
20
seconds
or
something
I
only
certain
hosts.
I
guess
that
have
the
nfs
demons
on
them
basically
and
there's
been
some
pull
requests.
A
That's
been
very
long
open,
that's
actually
responsible
for
doing
the
moving
the
nfs
daemons
if
they're
on
the
offline
host
to
an
online
host
that'll
keep
aha
up
that
way.
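The rescheduling step being described could be sketched like this; the helper name and data shapes are invented for illustration, and the real orchestrator tracks ranks and runs the host checks (roughly every 20 seconds) internally:

```python
# Simplified sketch of the redeploy decision: move each NFS daemon off
# an offline host onto a free online host, if one is available.
def place_nfs_daemons(placements: dict, offline: set, candidates: list) -> dict:
    """placements: daemon rank -> current host
    offline:    hosts detected as unreachable
    candidates: hosts labeled as allowed to run NFS daemons
    """
    new = dict(placements)
    in_use = {h for h in new.values() if h not in offline}
    free = [h for h in candidates if h not in offline and h not in in_use]
    for rank, host in sorted(new.items()):  # lowest (most important) rank first
        if host in offline:
            if not free:
                # No spare host: this rank stays down (the unsolved case
                # discussed later in the meeting).
                continue
            new[rank] = free.pop(0)
    return new
```

For example, with daemons on hosts `a` and `b`, host `b` failing, and a spare labeled host `c`, the daemon on `b` would be redeployed onto `c`; with no spare host, it simply stays down.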
A
There's that, and also, I guess, we have the whole thing with the ingress service, where you can have the haproxy and keepalived daemons deployed along with them. That part has actually been implemented; it's already in there, and it's been there for a while; it's even in Pacific.
A
I
mean
it's
all
about,
I
think
it's
mostly
just
for
demons.
Failing
it's
system
d.
We
have
the
ingress
stuff
around
it,
and
then
we've
been
working
on
offline
hosts
moving
the
demons
around
and
there
is
a
fencing
system
that
sage
implemented
for
knowing
which
demons
are
like
most
important
and
where
they
have
to
go
and
we're
working
on
our
offline
host
detection
stuff
be
able
to
move
the
demons
from
or
onto
the
correct
hosts
afterwards.
So
I
think,
that's
generally.
A
Failover
correct
yeah,
I
think
so
it
doesn't
help
with
the
demon
itself
beyond
like
a
dead
host
right
ip
stuff.
All
right
mike.
Do
you
want
to
comment
on
this
at
all?
I
know
you've
been
testing
a
lot
of
this
stuff
recently.
B
Yeah, that's a pretty good summary. The only other kind of difficult issue I've noticed is that we have to have more nodes available to reschedule onto during failover than there are deployed NFS head ends.
B
So, by way of example, if I have three head ends and three nodes and one of the nodes fails, that completely blocks all Ganesha clients whatsoever, which seems related somehow to how grace and the consistency work in Ganesha core. So until that rank is redeployed on another node, all the clients are essentially stuck, which is unfortunate, because we have two of the three head ends still available.
A
Right now, it's not implemented at all to co-locate those. Is NFS one of the services that allows that, and we just...
B
No, I think it was for the failover case, because it really didn't make sense.
A
Correct
yeah,
I
wonder
if
we
maybe
should
allow
some
sort
of
co-location
in
just
for
that
purpose.
Almost
like
you
could
put
one
of
them:
nfs
teams
on
the
same
host
as
the
other
one
just
so
that
it
stays
up.
A
But
I
feel
I
don't
feel
like
it
should
work
just
in
general
without
having
me
to
do
that.
I
need
to
look
for
how
nfs
works
like.
Why
can
we
not
use
it
if
the
rank
two
one
is
down
the
rank,
zero
one
one
are
fine,
so
we
can
deploy
it
with
just
not
doing
a
rank
two
one.
It's
like
we
need
the
third
one,
but
if
we
do
deploy
it,
then
it's
broken.
It
goes
down.
A
No,
it's
actually
it's
like.
I
think
it's
our
sort
of
next
big
thing
we're
gonna
clean
up
like
there
was
a
lot
of
work
on
it
last
year
and
it
got
some
of
it
done
like
ingress
stuff
and
then
like
the
like
mapping
like
the
ranks
and
everything
the
nfs
demons.
That
stages
put
a
lot
of
time
into,
but
as
far
as
the
the
case
where
the
host
goes
offline,
we've
never
fully
gotten
that
handled.
A
That's
sort
of
the
next
thing
we've
been
working
on
last
week,
like
I,
was
mentioning
before
the
problem
with
the
ssh
timeouts
and
the
detecting
the
offline
hosts
faster.
That
was
like
the
first
step
of
that.
We
had
to
be
able
to
protect
them
fast.
In
order
to
be
able
to
do
anything,
and
now
I
guess
we
have
to
clean
up
what
we're
actually
doing
in
those
cases,
because
right
now
all
is
implemented
is
if
you
do
have
enough
hosts
to
put
it
on.
A
Let's
say
you
had:
you
gave
us
like
you,
put
a
label
on
like
five
hosts
and
you
only
want
two
nfs
demons
and
then
there
you
pick
two
of
the
five
hosts
that
put
them
on
and
then
one
of
those
hosts
fails.
We
will
move
them,
so
it
will
that
actually
will
work.
A
But
it's
in
the
cases
where
there
is
no
new
host
to
put
the
demon
on
the
that
was
on
the
failed
host
that
not
doing
anything.
A
You
need
to
solve
that
gotcha
yeah,
I
I
guess
mostly
just
thinking
aloud.
I
I
wonder
if
fencing
will
be
something
we
have
to
handle
and
rook
as
well,
and
I
wonder
if
we
will,
because,
like
certainly
it
is
possible
that
you
know
we,
we
may
have
multiple
nodes
in
a
kubernetes
cluster,
but
they
all
might
not
be
like
labeled
as
available
for
the
dnfs.
A
So
I
wonder
if
some
of
those
things
are
also
still
a
concern,
I
think
other
other
questions
that
are
kind
of
follow-ups
that
I
have
are
like
how
how
are
the
like
scale
out?
Nfs
servers
handled
that
like
in
adm
or
is
it
generally
assumed
that
there's
like
an
export
per
server
or
is
it
like
configured
some
other
way.
A
No,
it
is
nfs
export.
Some
of
this
is,
I
think,
just
my
kind
of
ignorance
of
the
domain,
showing.
A
Yeah
some
some
reading
that
I
have
done
has
kind
of
indicated
that
a
good
strategy
for
like
having
multiple
active,
active
nfs
servers
is
to
have
like
an
export
or
or
maybe
like
a
subdirectory
served
by
each
server.
That's
I
mean
something:
that's
pretty
complicated,
so
I'm
trying
to
understand
like
if
there
are
other
ways
of
having
multiple
active
nfs
servers
in
a
way
that
is
useful
without
really
going
to
that
fine
grained
configuration.
B
I've
never
heard
of
the
deployment
like
that,
at
least
for
most
of
our
exports
on
a
common
rados
object.
It's
shared
between
all
the
daemons,
so
it's
just
a
include
directive
in
the
com
file
for
each
ganesha
damon.
So
each
naming
gets
like
a
segup
and
just
reads
from
that:
rados
object
and
they
all
have
symmetric
import
or
exports.
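For illustration, a per-daemon Ganesha config along these lines points every head end at one shared RADOS object; the pool, namespace, and object names below are made up, and the exact file cephadm generates may differ:

```conf
# Per-daemon ganesha.conf sketch: everything export-related lives in a
# shared RADOS object, so every head end serves an identical export list.
RADOS_URLS {
    # cephx user this daemon authenticates as (illustrative name)
    UserId = "nfs.mycluster.0";
    # watching this object triggers a config reload on change
    watch_url = "rados://.nfs/mycluster/conf-nfs.mycluster";
}
# Pull in the common export definitions shared by all Ganesha daemons
%url "rados://.nfs/mycluster/conf-nfs.mycluster"
```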
A
Okay,
sorry,
I'm
just
trying
to
take
notes
here
is
there?
Is
there
special
handling
that
has
to
be
configured
to
keep
that
client
server
connection
right,
because
I
know
the
the
connection
is
stateful
to
the
server.
As
far
as
I
understand
is
there
special
handling
that
needs
to
be
done
to
make
sure
that
the
client
was
always
connecting
to
that
particular
server.
B
I
believe
this
is
why
we
did
the
ranking
thing
and
why
we
have
that
other
issue
expressing
itself
I'd
have
to
look
deeper
into
that
one.
I
do
know
like
as
a
matter
of
fencing.
What
we
do
is
we
do
a
key
ring,
a
suffix
key
ring
for
daemon,
and
then
we
remove
that
key
ring
when
we
redeploy
the
ring
somewhere
else.
So
therefore,
that
essentially
fences
the
old
node
and
each
daemon's
deployed
with
a
new
key
ring,
but
that
key
ring
is,
you
know,
has
an
affinity
with
that
rank.
A
That
cover
you're
asking
me
yeah.
I
think
the
last
question
I
had
is
whether
there
are
special
nfs,
like
configuration
options
that
are
are
deployed
for
for
aha.
A
I
don't
think
so.
I
think
it's
just
is
like
an
ingress.
Well,
you
can
pass
with
making
the
cluster
and
that
that
puts
down
the
ingress
demons,
I
think
other
than
that
we're
just
kind
of
handling
nfs,
assuming
it's
almost
always
apha
like
quarter
gun,
we're
just
moving
them
around
like
that
like
where
we
redeploy
them
off
the
offline
host,
the
right
ones.
Always
we
don't
like
handle
it,
there's
no
like
flag
to
say:
oh,
we
need
to
do
this
or
not
he's
always
doing.
A
Okay
yeah.
If,
if
there
is
like
a
a
like
baseline
nfs
canasha
like
config,
that's
like
plopped
down,
it
would
be
helpful
for
me
just
to
kind
of
look
at
that.
I
think
I'm
wanting
to
make
sure
what
we
have
in
rook
is
still
an
appropriate
default
that
we're
not
setting
things
we
shouldn't
be
setting
or
missing
things
that
we
should
be
setting.
A
I
had
one
question
and
I
kind
of
lost
it
in
a
bunch
of
other
thoughts,
but
yeah
I
mean
this
is
all
really
helpful
and
I'm
just
trying
to
kind
of
start
scratching
the
surface
and
understanding
the
domain
of
it
more.
A
All right, anybody have any more comments they want to bring up real quick about the HA NFS stuff?
A
All right, in that case, I will see you all next week. Yeah, bye, great.