From YouTube: Ceph Developer Summit Quincy: Orchestrator
Description
00:00 - Dashboard integration: Facilitate initial deployment tasks and Day 2 ops.
28:38 - Cephadm resource scheduling: autotuning osd, mon, mds memory
30:23 - cephadm host draining
43:07 - cephadm-exporter
46:34 - cephadm HA
Full agenda: https://pad.ceph.com/p/cds-quincy
A
Again, welcome to the Orchestrator CDS session, and let's start with the dashboard demo.
B
We can see the list of hosts and the services running on each of these hosts. Over here we can click on any of the hosts, and we can edit or delete it, put that host into maintenance, and so on.
B
So one of the next features is the maintenance mode, putting a host into maintenance mode. For that we need to select a host, click the dropdown and select "Enter Maintenance", and it will show a warning like this: removing these RGW daemons can cause clients to lose connectivity.
B
So in this case we are forcing this host to go into maintenance. When I click Continue it will process it like this, and if it is successful it will put the host into maintenance mode, and we can see the status over here.
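For reference, the dashboard flow described here maps onto the orchestrator CLI. A sketch (the host name is a placeholder, and flags may vary between releases):

```shell
# Put a host into maintenance mode; --force skips the safety checks
# that would otherwise block the operation (e.g. mon quorum warnings).
ceph orch host maintenance enter host1 --force

# Bring the host back out of maintenance afterwards.
ceph orch host maintenance exit host1
```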
B
Like you know, removing these RGW daemons can cause a loss of connectivity, and there is an alert like "not enough remaining mons": it is not safe to stop the mon at this time, because there won't be enough mons left to keep quorum, and things like that. So this won't be possible, and we cannot put this host into maintenance unless we clear these errors.
B
Create, and it will create another host, sf03. And another thing I'm working on right now is the ability to create a host in maintenance mode from the dashboard side. So if I create a host like this, check this box and create it, that host will be in maintenance mode from the moment it is created, so I know none of the services will be added to it at creation time. And we can also delete this host.
B
Yes, and this is all from the host side, the features that overlap between cephadm and the dashboard. One thing we could add over here is the ability to attach labels while we are creating the host itself. So that's one of the things we may need to add over here too.
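On the CLI side, adding a host together with its labels is already a single step. A sketch (host name, address and labels are placeholders):

```shell
# Add a host and attach labels in the same command.
ceph orch host add host2 10.0.0.2 mon rgw

# Labels can also be managed after the host exists.
ceph orch host label add host2 osd
```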
A
It might be possible that we don't have name resolution within the cluster, and if that's the case, we need both the host name and the address.
B
I think Alfonso went over this section briefly yesterday. We can see all the services running over here, and for each service the daemons of that service running on this host, same with this NFS. We can see it from over here; we can also create services if we want.
B
Even, if you want, we can delete the services. So those kinds of things are possible from the services side.
D
And one thing worth mentioning here in the services is that right now there is the possibility to create a haproxy service. I guess that might change, right, with the recent changes, or planned changes, for cephadm.
A
It might change; Sage is working on something like that. Yeah, yeah.
F
Just to comment that the services dialog box, I think, is too generic, okay? When you, for example... yes, okay, that's the dialog box. We have something that is common to every kind of service, but I think we need a special section in order to specify configuration parameters, which is possible now, and also specific attributes for each of the services. Okay.
F
So I think that, in order to do that, we probably need to provide the dashboard with some kind of schema describing which attributes the user can use for each kind of service, because at the moment it is too generic. So in most cases it's going to deploy something that is just the defaults, and probably not... well, we need to use spec files.
A
Can you try to add an RGW? Can you use the RGW type? Right, and from here you have the port and the SSL checkbox, so some things are missing, but I think most of it is already there.
D
Yeah, that's one of the barriers of this approach. Right now every service has a hand-crafted HTML page, so to say. If we had this exchange of schemas, we could avoid the kind of out-of-sync situations where some change happens in cephadm but we don't notice it in the dashboard. So probably, if we can agree on that kind of schema for the specs, we could use that definition instead of hard-coding the different inputs.
F
Even from the command-line interface side, I think that is going to be useful too, because the documentation doesn't cover, for all the services, all the possibilities that you have in the spec file. So having some kind of command that provides at least a schema, with the description, the possibilities, the attributes of each service, is something that is going to be useful for the command-line interface as well.
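The spec files mentioned here are YAML service specifications applied with `ceph orch apply -i`. A minimal sketch for an RGW service (the service id, placement and field values are illustrative, and exact spec fields vary by release):

```yaml
service_type: rgw
service_id: myrealm.myzone
placement:
  count: 2
  label: rgw
spec:
  rgw_frontend_port: 8080
  ssl: false
```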
E
Yeah, maybe if there's just an example you can point us at of how to do it, that would be... I think one other question I noticed here: if you go by service type, the types include mon and manager. Those are singleton services, they don't even have an id, so you can only create one of them. I wonder if they should be grayed out if they already exist.
E
Unless this is a conversion, an adoption conversion from a non-cephadm cluster or whatever; then you would create it. But even then, do you have to do this through the dashboard? You could just do that through the CLI. So we could just leave those out entirely from the create path, for mon and manager.
E
Yeah, this is the same as the "orch ls" output. It says "mixed" if they aren't all the same, but we also just removed it entirely even from "orch ps", because it's usually a digest and it's not really interesting or useful. I wonder if we should just drop it from here too.
A
But for both here and in this list, yeah.
B
Yeah, so, okay, I'll move on. OSDs Alfonso showed yesterday. Over here I have configured an NFS, and so we can create an NFS export with this form. Currently we don't support NFSv3, it's only v4. And I have also configured the file systems, so they can be seen over here, and the details can be expanded. Object could be the other thing.
B
Yep. So, Alastair, you are taking notes of this, right?
E
Thinking out loud here: when you do the create-service flow, there's a section of it that's related to the placement. I wonder if it makes sense to have a little widget or pattern that's always used for defining the placement, which we use both for this create flow and also for the edit flow, because the way that you describe the placement, the placement spec, is a somewhat complicated thing: either you specify explicit hosts, or you can do labels, or you can do...
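The placement spec being discussed supports several alternative forms. A sketch of the main variants (host names, labels and the pattern are placeholders):

```yaml
# Explicit hosts:
placement:
  hosts:
    - host1
    - host2

# Or by label, optionally with a count:
placement:
  label: rgw
  count: 2

# Or by host pattern:
placement:
  host_pattern: 'node-*'
```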
E
When you expand one of these services and see the daemons, can you do that for a second? I wonder if it makes sense to have action buttons also for things like delete, restart, redeploy. There are a bunch of per-daemon actions that you can do.
A
Also for services: you can restart the whole service, you can stop the whole service. It works both for services and for daemons.
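A sketch of the corresponding CLI actions (service and daemon names are placeholders):

```shell
# Service-level actions apply to every daemon of the service.
ceph orch restart rgw.myrealm
ceph orch stop rgw.myrealm

# Daemon-level actions target a single instance.
ceph orch daemon restart rgw.myrealm.host1.abcdef
ceph orch daemon redeploy rgw.myrealm.host1.abcdef
```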
B
So I think that's all from me. Is there anything else you want to add?
D
No, well, there were some open discussions regarding, for example, the ceph-ansible case: there we got the configuration for the RGW stack, the API keys and secrets. That's something that is currently not in cephadm, so users, after bootstrapping the cluster, have to do these actions manually, probably from the CLI, because right now there is no way of doing that from the UI. So yeah, I was wondering if we have the possibility to include that in cephadm or in the orchestrator.
D
Perfect, thanks. I think that's all, right?
H
Yeah, my only small topic was that we had a previous discussion, Sebastian and me, about the development environment. For example, the one that I am using in order to have several hosts is based on a virtualization project, well, an upstream project that runs on top of libvirt, which is kcli.
H
This
is
the
upstream
project.
The
the
fact
is,
this
a
little
bit
heavy
weight
to
have
these
virtual
machines
and
it's
not
trivially
set
up
and
and
unless
you
can
make
it
work,
and
we
thank
juani
for
providing
appropriate
documentation
in
order
to
make
it
work.
But
I
was
wondering,
is
it
was
a
way
for
having
a
kind
of
isolated
environment
to
test
better
theft,
ibm
not
relying
on
only
be
start.
Is
there
a
way
to
do
a
bootstrap,
but
not
messy
map
not
messing
up
with
the
laptop.
H
I mean, I've been trying to do a containerization of the bootstrap, sorry, of cephadm. But this would need Docker-in-Docker or the like, and it's not quite easy to make it work. So I was wondering about an alternative: how can we make it easy to provide an environment where we have the possibility to add some hosts, or more lightweight virtual machines? I don't know.
E
I think currently, and it's not the best solution, but currently the best way to do this is using vstart. That means that on your machine you have to have all the dependencies installed, and compile and build Ceph, but if you run vstart with cephadm, it will start up cephadm. Yeah.
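A sketch of that workflow from a built source tree (environment variables and flags vary between versions; `--cephadm` is the relevant option here):

```shell
# From the ceph/build directory, after compiling:
MON=1 MGR=1 OSD=3 ../src/vstart.sh -n -d --cephadm
```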
H
That is what we are using in kcli, yeah; Juani made some steps, so we are following this. But the fact is, right now we are using virtual machines. I think what you're pointing at is more like running the cephadm bootstrap on your physical laptop directly, right? Yeah, with the minimal dependencies installed on your system.
H
Yeah, what I mean by isolation is, for example: if we do a cephadm bootstrap on my physical machine, will it be using my ssh folder?
E
Yeah, I mean, the only thing that bootstrap should be doing is adding the ssh key that it generates to root's .ssh/authorized_keys. It shouldn't be touching the rest of your ssh configuration.
H
We can set up another meeting to discuss this, and we can show some ideas.
G
And besides that, we need to document all the different options, with, you know, very clear steps to get to that development cycle in each of the options, if there's more than one, because it's going to... I...
F
It's important to know what the options are, okay? Because probably there is one option that is very good, and most of us are working with our own environments, probably spending some time in order to get the environment needed to develop. So maybe we can create or schedule a new meeting just to discuss this part, and to decide to use the same environment for everybody, because that is going to be a good thing for everybody.
A
I mean, we have a Rook session right after this one, so I wonder if we should move all the mgr/rook-specific ideas and thoughts into the next session, because that's more Rook-related than cephadm/orchestrator-related.
A
So my thought was that we should think about the different themes we should work on. For example: looking at reducing the bug count; making the developer experience better by looking into a better development environment; or refactoring the cephadm binary, which is really necessary at this point. Scalability is a thing too.
E
There are a couple of items on the agenda that I think we should touch on. There's the resource-scheduling piece: auto-tuning osd, mon and mds memory.
E
I think there are still a few decisions to make there, as far as exactly how to structure that and how it should work, whether the product of the cephadm scheduling would be config options on the respective daemons, or whatever; exactly how, I don't know. I didn't prepare anything beyond what we were talking about several weeks ago, so maybe there's not a lot to say there, except that this should be one of our deliverables.
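One possible shape for that output, sketched as config options (this is an assumption about the design under discussion, modeled on the existing `osd_memory_target` option):

```shell
# Let the orchestrator size OSD memory from available host RAM ...
ceph config set osd osd_memory_target_autotune true

# ... or pin an explicit target on one daemon instead (bytes).
ceph config set osd.3 osd_memory_target 4294967296
```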
E
Yeah, let's talk about host draining, though. That one, I think, is probably pretty easy to address, and something that we need to get in pretty quickly, because right now you can't effectively remove hosts from the cluster.
A
Yeah, the current way to remove a host is to remove all the daemons that you can remove easily, except for maybe the node-exporter and the crash daemon, because it's super awkward to remove those; then remove that host, and then clean up the different things later on, like manually removing the node-exporter and then manually removing the rest. That's a bad workaround, right. That's...
F
I have played a little bit with that, okay, and we have the host maintenance mode. I think we can use the host-in-maintenance flag in order to avoid this problem. Even at this moment, if you try to do the removal of the host using the host maintenance mode, it is easier, because of, for example, the cephadm exporter or the daemons that are problematic.
A
Yeah, it's not quite the same. I mean, if we're setting a host into maintenance mode, we are expecting it to be restarted at a later point, which means that we do not want to redeploy the daemons living on that host onto a different host. But if you're removing a host, and you say "I want to have three MDSs", and by chance one of those MDSs is on that host...
A
Then you want to make sure that you have three MDS daemons at all points, on a different host, before removing it. So I...
E
And if you think about OSDs too: it should trigger the OSD removal sequence for those as well, where the OSDs are drained and then removed, and then all the daemons are removed one by one. You shouldn't have to do any of that manually. It should basically be a single command, and then, like an hour later, it's ready, and maybe the host is even immediately removed, or maybe it's just marked as removable.
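A sketch of the single-command flow being proposed (the `drain` verb is the shape under discussion here, modeled on the existing `ceph orch` verbs, not something that existed at the time):

```shell
# Drain: stop scheduling to the host and remove its daemons,
# triggering the OSD removal sequence for any OSDs on it.
ceph orch host drain host1

# Watch the OSD drain progress.
ceph orch osd rm status

# Once empty, the host can be removed from the cluster.
ceph orch host rm host1
```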
A
So I think by now we can actually start to tackle that problem. A year ago that was even more problematic, because we were missing some foundations needed to implement it, but I think by now we are more or less ready to start looking into the problem.
A
That's very similar to automatically removing daemons from a failed host. I mean, we could start automatically...
A
Okay, and then we also have to discuss features, right? What features do we want to implement, or what do we want to focus on?
A
That would be awesome. I think having cephadm be properly...
F
One thought about features, okay. I don't know how widely it is used in the world, but I think that probably, my view is that the list of features we need to implement at this moment is all the features that we need in the dashboard in order to make things easier for the end user.
F
I think that, probably, for me, that is the most important thing. So, for example, to have a wizard in order to set up the cluster in the dashboard, and with the support in the state it is in, we are almost there, okay, in order to make it easy to install the cluster. I think there is definitely room to improve, for example, the OSD management in the dashboard, and to provide the tools to do this very easily.
F
I think that is another point that is important, okay: to improve the creation, the editing and the maintenance of the services in the dashboard. So I think that probably, my view is that we must focus on the highest layer, the dashboard, okay, and start to work downwards in order to provide all the functionality that we need to make it very easy to install and to work with a Ceph cluster using the Ceph dashboard.
A
That's one reason I'm a bit reluctant to try to add as many features as possible to cephadm: it makes the user experience harder. It's going to make it harder to achieve a good user experience if we add tons of features to cephadm.
E
I guess on my list there are two things I want to cover. One of them is just really quick: I think the cephadm exporter is going to be one of the themes over this next cycle, making it work reliably, and probably the default model for how cephadm is scraping information.
E
Right now, can we call it an agent? Isn't it just a per-node agent, right? Yeah. Or we could call it cephadm-lite or something, I don't know.
A
Paul was very insistent on calling it "exporter", but yes, as we now have that daemon running on each host, we can use it for more things than just exporting some information.
A
I mean, why not, right? Why not use this cephadm daemon to listen to the mon database, and let the cephadm exporter detect "I need to deploy a daemon on my host"? That would completely remove the need to run ssh connections to the remote host, and let the cephadm exporter do it by itself.
A
I don't know how important that is, but it would certainly make it possible to deploy a daemon on all hosts simultaneously.
A
But we don't do that. Imagine if you have a thousand hosts and deploy the crash daemon, for example, on all hosts at the same time.
E
I think it sounds good, but for me it seems like that's going to be a key item for Quincy, though there are a few gaps that we need to fill first, particularly around the HA, just to get sort of...
E
Oh, I hesitated to say "feature complete", but at some level, sort of feature completeness, before we tackle that big scalability jump. Yep, okay. Well, the other big item that I wanted to talk about, and this might bleed into the Rook discussion, is what the orchestrator interface for HA should look like.
E
So this started when I looked at the ha-rgw service a couple of weeks ago, and I didn't like the way it was coupling haproxy and keepalived into one blob: you always had to deploy them together as a pair, it only worked for RGW, and there were a bunch of other issues. In my case I wanted to deploy only haproxy for some reason, and I could imagine situations where you only wanted a virtual IP.
E
So I separated those out into two separate services. But then, once I had basically finished that, I had doubts about whether that's actually the right abstraction, because it doesn't really map onto what Rook does. In the Rook case, the Kubernetes case, the quote-unquote "service" in Kubernetes is all the virtual networking and the load balancing, and the external IP is always there.
E
First of all, it's sort of combined with this: it's a property of the RGW service, or whatever it is. And so I wonder if the orchestrator interface should do something similar, where for any service, at least any service where it makes sense, there's an optional property like "external IP" (I guess this is really just the external load balancing), and then, if that's set, cephadm will deploy haproxy and/or keepalived as needed in order to expose that service.
A
Different daemons on different systemd service units.
A
Yeah, different systemd service units, so we need... And I also like the idea of having services being able to control more than one daemon per host. I mean, we already have that with co-located daemons, right.
A
And what do you think of indeed using this kind of public-IP property on the RGW service, and then letting...
A
The orchestrator create some kind of deployment, similar to a Kubernetes Deployment, and then using that extra layer of the data structure to separate the implementation, the haproxy and keepalived, from the user interface?
A
Yeah, the deployment layer. Tyler implemented a way to deploy any kind of daemons.
E
So I guess in the RGW case, I mean, whether we have this intermediate deployment layer is sort of an implementation detail. I wonder if it makes sense first to just settle on what the orchestrator API should look like for this, and then we can figure out the most elegant or expedient way to implement it. In the RGW case, it seems like the first thing that we would want is... in the Kubernetes case, for Kubernetes and Rook...
E
And then, even though in the Rook case you can't set it, in the cephadm case you have to provide that.
A
I think we have existing use cases that deploy their own, yeah, because all previous Ceph deployment tools did not, or not all previous deployment tools provided that, so there are existing solutions, and I think users need some cycles.
E
So that means that at the orchestrator layer you have a couple of different modes, right? You have "deploy the RGW cluster", and in the Rook case it will unconditionally include the HA, but in cephadm you might not have, or you might want, the HA.
E
But in the cephadm case you have to provide the IP, whereas in the Kubernetes/Rook case you don't provide, and can't provide, the IP. And in the RGW case you might want to deploy with or without haproxy: you might want active/passive, where it's just a virtual IP; or you might want active/active, in which case it's a virtual IP plus haproxy; or you might have active/active but without the virtual IP, where you just have multiple haproxies and maybe use round-robin DNS, or something like that. There are a couple of different ways you could do it.
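Those modes could be expressed as a service spec. A hedged sketch of what a combined HA front-end spec might look like (field names are assumptions in the spirit of this discussion; a design along these lines later took shape as the `ingress` service type):

```yaml
service_type: ingress
service_id: rgw.myrealm
placement:
  count: 2
spec:
  backend_service: rgw.myrealm   # the service being exposed
  virtual_ip: 10.0.0.100/24      # keepalived-managed VIP (active/passive)
  frontend_port: 443             # haproxy listening port
  monitor_port: 1967             # haproxy status page
```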
F
I am with the option to start with something simple, like adding the virtual IP to the specification of the service, and trying to use that to install haproxy plus keepalived, okay, and provide only one kind of configuration. Because, for example, in the case of RGW, what we have done is just try to mimic what we have at this moment in ceph-ansible.
F
Okay, and the same parameters that we have in the ceph-ansible playbook are the parameters that we have in the specification for, let's say, ha-rgw, okay, and it seems that was the thing that was needed. So what I think is that probably what we can improve is to separate the HA part, and use keepalived and haproxy in other services. But maybe the first step is to polish this thing, to make it easy for the final user to use. So using a virtual IP is...
F
I think that is the only thing that we need, and the user is going to be very happy with that, okay. And after that, in the background, we just deploy haproxy and keepalived. It is going to be more complicated, and it is going to be the layer that you do not want to expose. But I think there is no other possibility.
E
Okay, I think that's probably fine, and if there's actually a need for somebody to not have haproxy for some reason... A quick question: haproxy can do the SSL termination, right? Yeah, okay, so that's actually good anyway, because it combines all that logic; then we know not to configure RGW with SSL, and it papers over all those details. I think the other key use case, though, is NFS, and that's the other key deliverable...
E
That I think we need to get into cephadm and supported as soon as possible, backported to Pacific. And in that case, I think, at the very minimum we need a virtual IP. But my understanding is that ceph-ansible is also deploying haproxy in order to balance NFS specifically.
E
Let me see if I can find the link; I was asking about this the other day. There is a link he provided that I haven't looked at yet.
E
Yeah, it's using keepalived, and that's RGW. Never mind, that's the wrong thing. Maybe he was confused; actually, maybe he was thinking of RGW.