From YouTube: Ceph Orchestrator Meeting 2023-02-07
Description
Join us weekly for the Ceph Orchestrator meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contrib...
What is Ceph: https://ceph.io/en/discover/
B
Haven't heard, yeah, whether the other build errors have been resolved yet.
A
No, I think that's still going on. I saw there's a thread about it in the Sepia Slack, and there was a pull request somebody opened, linked in the chat; I opened that one, which I think was intended to fix it, but I see that make check still failed. It hit the same error, but I think it was something like that, where it was a problem with... because the libboost package, I think, is special, yeah.
A
Yeah, I have to wait on that. I can't merge; there's a bunch of Pacific backports that can't be merged because of that. They're done, they're approved, teuthology has passed; it's just the make check and API tests. I'm gonna go there.
B
So the infra issue has kind of continued, I guess.
A
I know it came up briefly a few weeks ago, when they did that GitHub thing that broke the SHA1s for all the packages, but that got resolved and then it seemed to have just gone away, because now it's back. I don't know what's different this time or what's going on, and because it's an internally hosted thing, I'm not really sure where to find out about it.
A
I don't really have anything; I haven't looked at any new stuff. I'm trying to look into the keepalived stuff, setting it up, or considering setting it up. Maybe something we could talk about a little bit, especially once Fredo is here; he said he was going to be late because he's picking somebody up. But we were talking about having the manager with a virtual IP over it, the idea being that for external Prometheus instances we give them an IP, or a URL I guess, they can attach to, to scrape stuff from the manager.
A
But whenever we do a manager failover, that changes; in fact, during an upgrade, for example, that could change over a few times. And so the idea was to use keepalived on its own to put up a stable IP to connect to the manager.
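A rough sketch of what such a keepalived setup could look like; the interface name, router ID, password, and virtual IP below are placeholders, not something cephadm actually generates:

```conf
# One VRRP instance floating one virtual IP: keepalived elects a MASTER
# among the nodes running this config, and only the MASTER holds the VIP.
vrrp_instance MGR_VIP {
    state BACKUP             # start as BACKUP; the VRRP election picks the MASTER
    interface eth0           # placeholder interface name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass examplepass
    }
    virtual_ipaddress {
        192.0.2.100/24       # the stable address external Prometheus would scrape
    }
}
```

External Prometheus would then scrape 192.0.2.100 regardless of which host currently runs the active manager.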
A
But I was trying to figure out, because I haven't looked too closely at it yet, whether we need the manager to be binding to that IP for this to work.
A
That's... I'm not sure. I'm not sure if it binds to all IPs, or if it binds to just a specific IP on those. I don't remember: how do we do it? Yeah.
B
Well, worst case scenario: if it had to change... say, and I don't even know, because I haven't looked at the docs or anything, but if it did say, "oh, you need to bind to a particular IP for this to work," and that's not the current state of the world, that would be a manual upgrade step, I think, right? If you want to use this feature, you need to XYZ. Because it's not gonna... is it planned that this is deployed by default, or is it just an extra thing?
B
We could do that, but then you'd have an additional restriction. Again, only if you found that's the case, but you could simply say: hey, if you're going to deploy this additional service for, you know, a persistent IP for the manager, you may need... yeah.
A
Yeah, I also don't know what happens then if the keepalived VIPs go down; maybe still have it bind to the normal host IP as well, as like a fallback.
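The fallback idea can be sketched with plain sockets; this is a minimal sketch assuming a CherryPy-style server that can accept several listening sockets. The function name, port, and addresses are illustrative, not the actual cephadm code:

```python
import socket

def bind_listeners(host_ip, vip=None, port=8765):
    """Bind the normal host IP, and additionally the virtual IP if keepalived
    currently holds it on this machine; otherwise keep only the fallback."""
    listeners = []
    primary = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    primary.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    primary.bind((host_ip, port))  # the normal host IP: always bound
    primary.listen(5)
    listeners.append(primary)
    if vip is not None:
        extra = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        extra.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            # This bind only succeeds while the VIP is assigned to this host.
            extra.bind((vip, port))
            extra.listen(5)
            listeners.append(extra)
        except OSError:
            extra.close()  # VIP not here (yet): serve on the fallback only
    return listeners
```

If the VIP later moves to this host, the server would have to retry the extra bind, which is the rebinding question discussed below.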
A
Like proper load balancing, and so the haproxy would have to only have the IP of the active manager in it to forward to, and then we'd have to update that every time there's a failover.
A
Things would go down temporarily; well, we'd have to redeploy the proxy with the right config, yeah, unless we could have a way to redirect the standby manager stuff to the active manager again.
A
Yeah, I think Perry was still looking at that stuff, but I haven't heard any updates in a while.
B
That's... it's one of those that might still be in the hypothetical, you know, research phase. I don't know how much it's been talked about upstream yet, so I'm not going to speculate much more.
A
Yeah, that's kind of what we wanted to do eventually, or for now: try to see if we can get keepalived over the manager, and, again, whether we could have the redirects set up anyway. Even if they're not using a proxy, redirects from the standby to the active one would be good. I know the dashboard actually does that, Roberto told me before, so it's not like it's a new thing no one's done before. And then, whether we had the proxy or not, we could likely still get the keepalived set up.
A
The idea is that it'll do that, and also, on top of fixing the Prometheus piece, it would make the dashboard able to, I guess, sit on a floating IP as well, which is useful. Although for the redirects, maybe it's not as big of a deal for them unless the manager moves entirely off of the original host; like, it would still maybe have...
C
Let's have a look at that code, actually. Basically, there are a lot of calls to this get-manager-IP function, so I'm not sure if it's a good idea just to go and modify this function to read some configuration to get the floating IP, or...
C
Oh yeah, that's very, very dangerous, because there are a lot, a lot of places.
A
I think, because we... I assume... well, the way I was thinking this would happen is it wouldn't do it by default. There'd be a keepalived spec you can apply over the manager, an ingress with just a keepalived-only flag or something, but I haven't set all that stuff up yet. And then, when you do that, the manager would be like: oh, this is here now, I need to bind to this IP as well. And then it would have to do, like, a failover or something after the keepalived is deployed, and then try to bind to that when it can.
A
Yeah, I don't want to risk breaking anything, so I don't want to have it not bind to the place it used to. It would be nice if we could bind to both things, yeah.
C
Yeah, the thing is that dashboard and cephadm and Prometheus bind to this IP, and if the IP doesn't exist, the binding will fail. So we need to discuss, Adam, doing it somehow dynamically.
A
Yeah, so that's why dynamically would be so much easier: if you could just check whether the IP works and then bind to it if it's there, just add that, say, in the serve loop somewhere, then it's pretty simple.
A
You
just
deploy
the
service
and
as
part
of
the
server
we
just
check
and
then
find
there
when
we
can,
but
if
it
has
to
be
done
on
Startup,
then
you
have
to
deploy
them
like
I
guess,
you'd
have
to
then
like
check
later
and
then
once
you
check,
you
then
have
to
do
a
failover
or
something
it
ends
up
getting
a
little
bit.
Uglier
I
think
it
gets
to
it
that
way,
but
that
might
be
how
it
has
to
work.
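The serve-loop check described here could look roughly like this. `vip_is_local` and `serve_loop_tick` are hypothetical names, and in the real manager the "rebind" would be a server restart rather than a list append; this only sketches the decision logic:

```python
import socket

def vip_is_local(ip, port=0):
    """Return True if `ip` is currently assigned to this host (i.e. keepalived
    has brought the VIP up here), using a throwaway UDP bind as the probe."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind((ip, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

def serve_loop_tick(current_binds, vip):
    """One pass of the hypothetical serve loop: if the VIP has appeared and we
    are not bound to it yet, record that a rebind is needed this tick."""
    if vip not in current_binds and vip_is_local(vip):
        current_binds.append(vip)  # the real mgr would restart its HTTP server here
        return True                # a rebind happened this tick
    return False
```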
A
It would just be an additional bind, and then there'd be a keepalived service that's maintaining the virtual IP we're binding to. That's all I think we really want to do.
C
Yeah, and we can decide; it could be that if, at some point, we need to have another port open for whatever service, we can use this same high-availability mechanism, which isn't limited to only this one endpoint.
A
Yeah, okay, but yeah, so we'll always keep binding the old one. I don't think that's going to change at all; that's too risky to change. And then we'll have this optional extra one that, hopefully, we can bind to. And, as I said, that's given that we can get it all to sort of work, because if it ends up being too painful to get that to work properly, I think maybe the proxy...
C
Yeah, and probably, Adam, we will need to provide some way to disable this, because I think during an upgrade it will be good to disable the HA, do the upgrade, then re-enable it a bit later, because having this mechanism active during the upgrade could probably lead to some weird behavior.
C
Yeah, sorry, I think maybe this is something that we can just add to the documentation: okay, in order to upgrade, you have to disable this mechanism if it is active, or whatever it has to say. But we definitely shouldn't just...
C
It would be hard to imagine all the scenarios: keepalived being upgraded, and the manager with HA; that's a combination for weird bugs that would be hard to figure out, so probably...
A
That's maybe even another reason we'd want to go for putting the proxy back in, because that setup is, you know, something we already use for rgw and stuff. We know there aren't really any issues there; I hardly know of anything.
A
That is... I don't know, because that at least has been tested pretty heavily: haproxy with keepalived. And haproxy we know should be okay; there'll be a second where they're both down, but we know they come back up, and there's usually not been any issue.
A
We were running keepalived only, but it depends how difficult it is. As I said, we need to find out how the binding works. One, whether we can do it dynamically or not is important; and then, I guess, the main thing is how that binding is going to happen, because if we know how to deploy a keepalived that attaches to, or sets up, a virtual IP, that part's easy enough.
C
As for the dynamic binding, I would say we should be able to do that, because I remember fixing some bug where, if you change the port of Prometheus, you have to restart the server, and this code just reads whatever IP you have, whatever port, and just restarts the server. So if we just provide the new IP in this new bind, we should be able to do that, I think; not in the current code, but as part of the secure monitoring stuff.
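A minimal sketch of the restart-on-address-change behavior described here, using the standard library's `socketserver` in place of CherryPy. The class and method names are invented for illustration; the point is only that rebinding means tearing down the old listener and bringing a new one up on the new address:

```python
import threading
import http.server
import socketserver

class RebindableServer:
    """Stand-in for the mgr's CherryPy server: on an address change, stop the
    old listener and bring the server back up on the new (ip, port)."""

    def __init__(self):
        self._httpd = None
        self._thread = None

    def restart(self, ip, port):
        if self._httpd is not None:
            self._httpd.shutdown()      # stop serve_forever()
            self._httpd.server_close()  # release the old bind
            self._thread.join()
        handler = http.server.SimpleHTTPRequestHandler
        self._httpd = socketserver.TCPServer((ip, port), handler)
        self._thread = threading.Thread(
            target=self._httpd.serve_forever, daemon=True)
        self._thread.start()
        return self._httpd.server_address  # (ip, actual_port)
```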
C
Maybe the dashboard could be more complicated, but they also have this server-address parameter that, if it changes, I think requires them to restart with the new one. So I would guess they should also be able to restart dynamically, yeah, because they have configuration parameters to change the address.
A
All right, so I'll take that into account. That means it's at least all our stuff, so this won't break anything else, because it's just our servers we're talking about here.
C
Yeah, yeah; the main worry is just whether it's binding to something like 0.0.0.0, or...
A
It could be that it's okay for it to be like this, where it breaks the old IP, because you have to explicitly make this spec, the ingress over the manager, and give us an IP. It's not like we're just breaking something all of a sudden.
A
Now, what I'm saying first is, you know, like the `ceph mgr services` command: it outputs, for the Prometheus module itself and for the dashboard, it gives you, like, the URL with...
C
As far as I remember, I think we don't, because probably we'd need to publish this somewhere; as far as I remember, it only reports the dashboard one, yeah.
A
So we should be able to, easily, because I know the agent actually did at one point, an agent endpoint, and then we removed it, because we figured this is just an internal thing and there's no reason the user would need to see it. But that means we definitely could do it, so we'll probably have to do something like that.
A
We'll probably want... I'll probably have to end up documenting this as: you have to deploy this, add this keepalived spec, and then, once you have the spec down, we'll rebind to this IP once it's available, and you can check `ceph mgr services`, which we'll set every time we set up or restart the CherryPy server, and we can publish that so they know what URL they have to bind to for that. And then we'll have to see.
A
If
there's
some
way,
we
can
maybe
change
the
dashboard
URL
as
well
to
to
also
use
that
yeah,
and
so
at
least
those
Services
can
be
maybe
bound
to
those
IPS.
A
Yeah
well,
I
imagine
we
would
probably
handle
it
from
our
side.
We
probably
use
this
like
set
address
thing.
You're
talking
about
I.
Think,
like
exception,
is
enough
to
trigger
this
I.
Don't
know
if
the
dashboard's
going.
We
want
dashboard,
acting,
keep
alive,
IPS
and
hiding
to
them
on
its
own,
because
it's
not
handling
keep
live
itself.
C
That was my first question: whether we are going to have this as a configuration. Because, that way... well, anyway, even if you put it as configuration, as far as I know the configurations are local to modules. So even if you have such a configuration, it wouldn't be accessible from the dashboard.
A
Yeah
we
had
to
set
their
configuration
option
I.
Think
yeah
after
we
set
up
our
stuff
like
we'd,
have
to
find
all
our
stuff.
There
make
sure
it's
working
and
then
tell
the
dashboard.
This
is
the
new
server
address,
find
here,
yeah
and
then,
hopefully
their
logic
will
already
be
set
up.
If
it's
set
up
correctly,
where
you
change
the
server
address,
it
will
rebind
everything
and
it'll
publish
that
instead
manager
services,
and
so
then
everything
should
show
up
there
and
the
user
will
know
like
okay
everything's
on
this
virtual
IP,
now
yeah.
A
If it does all that stuff when we change the address, then it's fine; we just have to change the address. That's all we'd have to do, which should be pretty easy: just, you know, a manager module command in the background from our side. That part would be not that bad if we get it working on our side first, with the virtual IP and everything. Yep, so we'll try that. Well, one of the things we should figure out first is... so, normally with keepalived with haproxy...
A
It
has
a
script
that
checks
the
health
of
the
proxy,
but
we
probably
don't
want
that
right
for
the
manager,
because
we
don't
want
it
doing
anything.
I
think
kind
of
I
think
if
we
need
to
check
the
health
of
or
like
check
if
the
thing's
active
or
whatever,
what
we
need
to
keep
a
live
view
to
be
doing.
In
the
background,
all
right,
I'm
still
not
your.
C
Actually, we should probably add some new endpoint in the cephadm health endpoint, the cephadm HTTP server, just for this purpose. And this way we check whether it is active or standby, and do not rely on the dashboard or anything else.
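A keepalived check script along these lines could probe such a health endpoint. The `/health` path and the port are assumptions; only the exit-code contract (0 = healthy, non-zero = failed) is keepalived's actual check-script convention:

```python
import urllib.request
import urllib.error

def mgr_is_active(url, timeout=2.0):
    """Treat any HTTP 200 from the (assumed) cephadm health endpoint as
    'this mgr is active'. A standby with nothing listening simply refuses
    the connection, so the probe fails there."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def keepalived_check(url, timeout=2.0):
    """Exit code keepalived expects from a vrrp_script check:
    0 = healthy (active mgr), non-zero = failed (standby or down)."""
    return 0 if mgr_is_active(url, timeout) else 1
```

keepalived would run this on each node, so the VIP follows whichever host's check succeeds, i.e. the one colocated with the active manager.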
A
Yeah, so that's the way I was thinking of it before, with the haproxy ones: I think you deploy a keepalived with every haproxy, and then they check whether the haproxy is up, the one that's on their host with them, and then, based off that, they sort of set which one is the active one. Or something like: if the proxy of the one with the active keepalived goes down, one of the other ones becomes active. Yes.
A
It might be okay if, say, the endpoint for the standby just doesn't do anything, like it's unresponsive, and only the active one has a working one. Because that way it'll just... it'll assume everything else is down, and whichever one is with the active manager it will consider up, and then they'll make the keepalived that's with that manager, the active one, the active keepalived.
A
We
needed
to
be
doing,
but
I
think
it
would
be
that
I
think
it
would
be.
It
has
to
check
the
help
of
the
the
the
manager
that
it's
with,
and
maybe,
if
I've
checked
it
out,
there
really
is
it's
checking
if
it's
the
active
one
by
checking
that
endpoint
yeah
we'll
have
to
add
in,
and
if
we
do
that,
and
then
we
can
dynamically
bind
by
restarting
the
cherry
pie
server,
and
we
can
tell
the
dashboard
to
rebind.
We
could
theoretically
at
least
have
those
services
on
there.
A
That
would
mean
that
this
wouldn't
work
necessarily
for
like
they
can't
use
this
for,
like
anything
they
want
to
do
with
the
manager.
They
can't
take
another
way
because
we're
only
finding
our
stuff
to
it.
A
Yeah, yeah, I guess my point is it's not like a proper, actual HA manager if it only works for our specific little things in the manager, because the virtual IP wouldn't do anything except for, like, ours. Because the manager failover obviously works on its own; it already does sort of some level of this, because it fails over and starts with a new one.
A
If,
as
long
as
there's
multiple
deployed
and
with
the
virtual
IP
as
well,
we
basically
have
our
bases
covered,
whereas
like
as
long
as
there's
one
multiple
managers,
even
if
one
goes
down
a
new
one,
will
pop
up
and
it'll
be
on
this
IP.
And
then
everything
will
work.
A
But
then
that
only
worked
for
things
that
actually
bind
to
that
IP,
which
is
going
to
be
our
stuff.
But
I
wouldn't
consider
this
to
be
like
an
actual
manager
implementation.
A
It
would
be
almost
like
h,
a
Stephan
and
dashboard
Plantation,
just
sort
of
a
weird
way
to
think
about
it,
but
that's
kind
of
what
it
is
because
for
the
only
ones
who
actually
can
actually
access
through
that
that
IP
yeah.
A
Yeah, it's not very lucky at all. Yeah, I think that's probably the way forward, then. We'll have to check the binding stuff, but if we can get it so the CherryPy server restart can bind on any IP we want, then we need some way to check that the IP is actually working. I guess we can just... we'd probably do, like, a try/catch sort of thing, where we try to bind to the virtual IP.
A
If
we
can
see
that
it's
defined
because
say
we
have
an
Ingress
spec
over
the
manager
and
then,
if
it
fails,
we
go
back
to
the
the
old
one
like
that.
But
we
have
to
know
when
the
re-trigger
that
that
attempt
I
guess
there's
a
little
details
there,
but
I
other
than
that.
I,
like
the
sort
of
idea,
makes
sense
to
have
the
cherry
pie.
Server
restart
with
the
new
IP
will
trigger
the
dashboard.
C
Yeah, for the dashboard and for Prometheus, because both of them are separate modules, so they'll need the same.
A
So those will probably be done similarly, yeah. As soon as we get the CherryPy bind sort of working on the new IP, we can just tell them both to use that IP, and then we have all the `ceph mgr services` things being advertised on some virtual IP that should move with the manager; it should stay the same regardless of where the manager moves.
A
I
guess
they
can
be
accessible
and
that's
sort
of
what
we're
going
for
in
the
end
is
a
stable
IP
that
people
can
connect
to
the
manager
with
regardless
of
whether
it
moves
around
it
does
do
that
sometimes.
C
Yeah, we need some endpoint to check the health of the manager and return whether it's active.
A
As I said, we'd just add one new path to one of our cephadm things.
C
Yeah, maybe you are right: if we don't want the standby to return anything, the connection will just fail, because nobody is listening there.
A
Yeah,
that's
basically,
that's!
Basically
what
we
end
up
wanting
is
you
wanted
to
just
think
those
ones
are
dead
so
that
it
always
puts
the
make
sure
they
keep
lived
with.
The
active
manager
is
the
active
keep
alive,
so
we
could
have
it
also.
They
have
the
same
by
return,
some
random
thing
that
says
like
oh
actually,
I'm
I'm
dead
or
something.
C
Either way it should work; probably the solution with less code is, in general, the better one.
A
Yeah
I,
don't
think
we
need
anything
for
the
standby
right
now,
I
think
we'll.
We
should
be
okay
with
just
the
active
just
an
active
endpoint
that
returns
to
health,
make
sure
the
thing
binds
to
the
the
virtual
IP
once
it's
available
and
then
tell
the
dashboard
and
the
Prometheus
to
find
that
IP
as
well
make
sure.
C
In this case it's straightforward, because if we put the standby endpoint in cephadm, it actually has nothing listening when it's standby, not like the dashboard, because the dashboard does the redirection. So it should be straightforward: just a new path in the CherryPy server.
A
That would make it more tricky in this case, because the redirect would break that aspect; they would all look like they're alive. But without the redirect, because we haven't implemented that, we can just do it this way right now: we can just have that be unreachable for the standbys, and the keepaliveds will just be configured to check the IP of the active manager, not the...
C
It just checks the localhost, so you'd have to put the IP of the localhost.
A
What we could maybe do is have only the one for the service-discovery communication.
B
Well, I'll just repeat what I said earlier: I think you've got a lot of good questions, and it's going to take some research to verify some of them.
A
Yeah, I think I'll do this, try this, and I'll just go back, because there's still the keepalived over NFS, which isn't done yet. That has to be in before we do keepalived on its own anyway, so I'll just go back and make sure that's working first. I thought about this; I think even that flow might need some work, because I don't think I modified the backend script for the keepalived, with the checking there, so right now it just tries to check for an haproxy.
A
Yeah, I'll probably be modifying it, though, because I think I might need to change the backend script for the keepalived for NFS: if it's keepalived-only, it shouldn't be checking for an haproxy being alive, which is what it's doing currently. I need to make sure I modify that, because now that I think about it, it doesn't work currently. Oh, weirdly enough...
A
It
worked
fine
when
I
found
exports
to
the
virtual
IP,
even
though
the
people
ID
would
Theory
think
everything's
down
all
the
time,
but
it
somehow
worked.
C
Yeah, maybe we can create some new branch revisiting that, to keep what you already have originally for NFS working without...
C
Somebody called me on the phone, and the headset connected to both of them, so I kind of missed it. Okay, that's what happened before that.
A
What I was thinking we could maybe use as a workaround is: technically the agent's one does not bind to the virtual IP, so you can just leave it as it is.
A
We already configure all the IPs; it all works fine, and basically it's internal, so we don't need to worry about using a virtual IP for it. And we could have it bind there, at least; then we could put a path on that that's a health check, which would only work on the active manager, and it would use not the localhost but specifically the IP of the manager. So we could check on that; we can make contact with that. That should be doable.
A
I
think
that
we
can
maybe
reuse
the
agents
cherry
pie
server
for
that
they
can
sort
of
become
the
de
facto
like
internal
stuff
server.
If
you
want
to
do
anything
there,
if.
B
So if there's a script plugin, you could try to talk to the traditional manager API, like the manager-command protocol, the lower level. If you're stuck talking to an HTTP endpoint, then it might be tricky.
A
It's
super
flexible.
Actually,
it
just
I,
don't
know
pretty
much
like
that.
Whatever
you
want,
yeah.
C
You can put whatever command you want there, and, depending on the return code, whether it's success or not, keepalived does its check.
A
It's a question of what the keepalived can actually run. Well, I guess if it can run manager module commands, if it's able to do that, then it would be super easy: we could just output something that can check which one is the active one. But I don't know how we're going to get the keepalived daemon those permissions. We could do it, but it seems, I don't know, weird to be deploying the keepalived daemon with some high-level keyring or something; doesn't seem right.
A
I'm
going
to
try
to
note
down
the
questions,
we've
come
up
with
here,
a
little
bit
but
I,
don't
know,
I
feel
like
we've
sort
of
talked
about
as
much
as
we're
going
to
sort
of
clap.
The
thing
I
think
we're
sort
of
hitting
walls
that
should
just
be
looked
up.
A
Or
module
I
don't
know,
I,
don't
have
a
concert
setup
right
now
to
check
with
I,
don't
know
what
else
manager
could
be
found
to
that.
We
could
check
so
I
think
I
think
well
after
you.
In
fact,
one
allowed
for
if
we
could
check
it
with
like
lower
permissions
or
something
maybe
if
we
could
do
it
as
well,
but
we
would
need
I,
don't
know
what
I
want
to
avoid
is
having
to
deploy
like
a
key
ring
for
the
people.
C
Anyway, even if we... I mean, for the scripts, we should always check the local IP, not the virtual one, even if we're listening on both of them.
C
At that point, the new endpoints are providing the health.
A
Right, I guess that's always an option.
A
Well,
that
was
a
try.
It
out
gotta
write
some
of
this
stuff
down.
C
All right. The only drawback I see to that is, like, if you have some customer that wants to listen on some specific IP or interface, because they have some special VLAN dedicated to management and control.
A
And I wrote down some of the questions. So the other ones we had are: whether we're going to do keepalived-only or haproxy, which was the high-level one, and I think we're probably going for keepalived-only. I have: how we're doing the binding, whether we can bind to all the IPs, and whether you can dynamically configure what IP our CherryPy server is bound to. And the other one is...
A
All right, at least we have a list of questions; that's something, I guess. But over the course of the next week or so, try to look into some of these, and then next week we can actually answer the questions.
A
All right, in that case we'll call it here. As I said, we'll talk about this again next week, when it's a bit more, I don't know, when we've talked about it more, I guess. I'll see you guys all later.