From YouTube: Ceph Orchestrator Meeting 2021-05-04
A: So I'm a bit dependent on you coming up with topics, but I guess that's okay. So the first thing I have, once again, is: for Quincy, we should look into making the cephadm exporter, or a cephadm agent, or a cephadm daemon, a bit more well engineered, a bit more... yeah, looking into it again, right.
A: My idea is to just look into that thing and then see what needs to be done, or what should be done, in order to make it work, dissecting this task a bit more, and then come up with a more detailed plan of what needs to be done here.
C: Yeah, one thing about that, just to propose how to do the thing: I think that it could be very interesting for everybody to investigate and to analyze how to implement this feature. Okay, but the important thing is that maybe what we need is just to provide the team with a good description of the feature.
C: That is, more or less, the guidelines that we need to follow, or the things that we need to study in order to do this in the right way; and then just let the team think about that, and have one of our meetings so that everybody together can start to dissect it, split the feature into several tasks, and discuss how we can implement this feature in a collaborative way. So yeah.
A: I think it totally makes sense, right. We don't need to do that for every feature, for everything that we need to care about, but I think this one would be an example where we can actually do that. Yeah, that's a big one: things like refactoring the existing code, things like looking into different architectures for how the communication between the manager and the daemon is going to look, stuff like that, you know.
A: Do we want to schedule a different meeting for that, or do we just want to reuse the orchestrator meeting? I have the impression that it would probably make sense to have a different meeting, because the audience is maybe not a complete fit.
C: I agree with that. Just to share one thought: okay, let's open another pad in order to define this kind of thing, let's say a third pad, but I think that's in order to define the guidelines and the things that we need to implement.
C: Well, I think that Sage and Sebastian could have the lead voice here in order to define the main guidelines. Okay, and after that we can use that third pad in order to ask more things, okay; and in one week, two weeks, I don't know, maybe after, probably after the downstream release, we can start to implement that, okay, to split it into tasks. Okay, sounds good.
D: So yeah, we have continued with this, well, trial from the dashboard team, just to see if we can come up with a containerized cephadm, because right now the way that we are testing that is with kcli, which is basically libvirt.
D: So it's a VM running cephadm, and that's not very performant; and we also wanted to have this, if possible, in the API tests in Jenkins, so a containerized deployment would probably work better than a VM-based one. So right now I managed to do that with a docker-in-docker approach.
D: I started trying podman-in-podman. Well, I think there are still many missing, well, loose ends to do this in podman, and we also tried different approaches, like mapping the socket for the docker, but that doesn't work, because, well, you have to share the file system between the different levels of the containerization. So I think docker-in-docker is probably the cleanest approach right now, I think, compared to podman.
D: I recently checked a presentation from some Red Hat folks, so they are also pushing on this, on having podman-in-podman, so we might have something sooner. But right now the only way I managed to do that was with docker-in-docker and, of course, the true Docker, not rootless. So you need to run the daemon and the service and everything, so that's good, and yeah. But I was running some tests; it failed during the host add, but everything else worked.
D: Yeah, that's fascinating. It's a systemd container, CentOS-based, and it also has a Docker CE running on it, so you need these three pieces if you want to run that. I'm not sure if we could try to make the cephadm requirements more lightweight, so as not to require systemd, maybe chrony or so. Okay.
B: I think it's yes, but at what cost, right? Like, it sprinkles like three files, maybe four files, in /etc: there's a systemd unit file, there's a logrotate file, and there's some stuff in /var/log and /var/lib. But they're all tagged by the UUID, and there's one cephadm command, rm-cluster, that will wipe it all out. And so, yes, it puts stuff on your host, but it also cleans it up for you, giving you a completely isolated environment.
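For reference, the cleanup being described is roughly the following; the fsid is a placeholder:

    # Remove one cluster's daemons and its per-fsid state on this host
    cephadm rm-cluster --fsid 11111111-2222-3333-4444-555555555555 --force
    # The files it wipes are all tagged by that fsid, e.g.:
    #   /etc/systemd/system/ceph-<fsid>@.service   (systemd unit)
    #   /etc/logrotate.d/ceph-<fsid>               (logrotate config)
    #   /var/log/ceph/<fsid>/ and /var/lib/ceph/<fsid>/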
D: I think it might make it easier to run this, and the cleanup is probably going to be simpler: just by removing the container you get rid of all the stuff that is there. And also, if we manage to mount more volumes, then instead of having to build containers for every change in master, we could directly test our changes by mounting volumes. It will be a bit complex to mount the volume inside the different containers, but if that works... we were also doing that for kcli.
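A hypothetical docker run for that dev flow, mounting the local mgr source into the container instead of rebuilding the image per change; the image name and host path are assumptions:

    # Overlay the host checkout's mgr modules on the installed location
    docker run -d --privileged \
        -v ~/ceph/src/pybind/mgr:/usr/share/ceph/mgr:ro \
        cephadm-test-image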
D: So it's just this Dockerfile, pretty simple right now; you will have to add sshd as well, but that's mostly it, and the line for running this is a docker run. I'm planning to do this in docker-compose, because I will probably have to create volumes, so yeah. It requires --privileged, and it will probably require some caps, but yeah, I think it's not that complicated so far.
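A rough sketch of the kind of image being described, a CentOS systemd container that also runs dockerd and sshd, started privileged; the base image and package names are assumptions:

    cat > Dockerfile <<'EOF'
    FROM centos/systemd
    RUN yum install -y docker openssh-server
    # systemd as PID 1 supervises dockerd, sshd and the cephadm-deployed daemons
    CMD ["/usr/sbin/init"]
    EOF
    docker build -t cephadm-dind .
    docker run -d --privileged cephadm-dind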
A: I mean, if it really simplifies dealing with multiple hosts...
A: Okay: removing container images from service specs.
B: Yeah, we talked about this a few days ago. I can't remember... I think what we decided was that we wanted to have a single way to specify images, and for Ceph daemons that's the config option, yeah. And so I think, for all Ceph daemons, we can remove the image property; but for the monitoring daemons...
B: It's a property on the manager, though; it's a mgr config option, yeah, as opposed to a config option on that particular instance of the service or daemon, yeah. Which is fine; it just means it's global, as opposed to granular, but I don't think we need granular images on Prometheus daemons; we're not going to run multiple Prometheus containers. So I think, all right, I think I can just go and remove all those from the service specs.
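For the monitoring daemons, the image then comes from the mgr-level options, for example (the image tag is illustrative):

    ceph config set mgr mgr/cephadm/container_image_prometheus \
        quay.io/prometheus/prometheus:v2.18.1
    ceph config get mgr mgr/cephadm/container_image_prometheus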
A: No, they don't have to, but Rook doesn't use the config values of Ceph for the container, right? They're different: Rook uses the custom resources to store the container images, compared to the config options. So if we would completely remove the container_image_prometheus config from the module, then we could leverage the service spec exclusively, I guess.
B: I have a feeling that we won't want to do it without cephadm anyway, whatever. Okay, so should I go ahead and remove all those, then?
A: There is one container image that's the custom container service, yeah. Well, we have to make it part of the service spec, yeah; that's okay, because it's just an abstracted invocation of podman, more or less.
A: Okay, while you're doing that...
B: Okay, that's good. Okay: NFS.
B: I would lean towards the orchestrator, but actually what we also need to do is make it work. There needs to be something simple, carrying just the right information, that the dashboard can use to deploy, to create an NFS cluster, and that needs to work with Rook too. But on Rook there is no support for the ingress thing yet for NFS, because on Rook NFS is mostly... is only usable from within Kubernetes.
B: So I think what we want is... I also don't understand yet: if you're doing this correctly in Kubernetes and you use the ingress route, whatever thing, do you give it an arbitrary public IP, and you provide that to Kubernetes, and then it sets up all the routes? Or does Kubernetes dynamically assign something out of a pool of public IPs that it has? Or is it optional, like one or the other?
B: Yeah, I mean, inside Kubernetes there's the service, which is basically just a way to name it, as far as I can tell, and there's some funny virtual-IP thing going on, virtual networking inside the Kubernetes cluster, whatever container network system, but that's only usable by other pods. So there's actually no point to using NFS inside Kubernetes, unless it's like a Windows container or something random that doesn't speak CephFS.
B: It's of very limited value, I think. I don't know... yeah, that's sort of incomplete. So I guess I wonder if... I mean, maybe the thing to do is: the dashboard would have, like, "create NFS cluster", and there's a check box that says "make this externally accessible with this virtual IP", and that also then deploys ingress on top; and if it doesn't do that, then it'll just deploy the cluster without ingress. That's what it seems like.
B: A question... I think that's the right... Okay, so I think that's the right way to do it: make it so that you deploy NFS with or without ingress, basically, which, I think, logically captures all the use cases; it's not super complicated. The dashboard can be sort of opinionated when it pre-checks the box for you, or doesn't.
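On the CLI that maps to something like the following; the exact --ingress / --virtual_ip spelling was still settling at the time of this meeting, so treat it as an assumption:

    # Plain NFS cluster, no ingress
    ceph nfs cluster create mynfs "2 host1,host2"
    # The "externally accessible" checkbox: the same, plus ingress on top
    ceph nfs cluster create mynfs "2 host1,host2" --ingress --virtual_ip 10.0.0.100/24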
A: We have exactly the very same problem already in the volumes module; I think it's called fs new, or fs volume new. Also, how do you make... and that is creating pools.
B: I think, if we really want to capture the custom data pool / metadata pool thing, we should just add a thing that lets you pass in existing pools: add arguments to the volume create call, because that's...
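The non-opinionated route today is to create the pools yourself and point the filesystem at them; names are placeholders:

    ceph osd pool create myfs_metadata
    ceph osd pool create myfs_data
    ceph fs new myfs myfs_metadata myfs_data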
C: One question: when the NFS module is executing this set of commands... well, probably it is going to be easier, instead of using several orchestrator commands, to create a spec with the ingress definition and do just one command, okay, in order to avoid this kind of thing.
B: So that's... that's basically what I'm thinking. Like, it feels like the dashboard shouldn't be using a CLI that has opinions; it should just be using the actual thing, like, this is...
B: ...specs. And I mean, the networks thing isn't even really needed; the only thing it really does is pin a pool, and I'm removing the need for the pool, so that opinion goes away. All you really need to do is specify a non-default port for the NFS if you're going to have ingress, so that the ingress can run on the regular port.
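As service specs, that looks roughly like the sketch below (names, ports, and IP are illustrative; field names per the Pacific-era ingress spec): the NFS daemons sit on 12049 so the ingress haproxy can own 2049:

    cat > nfs-ingress.yaml <<'EOF'
    service_type: nfs
    service_id: mynfs
    placement:
      count: 2
    spec:
      port: 12049
    ---
    service_type: ingress
    service_id: nfs.mynfs
    placement:
      count: 2
    spec:
      backend_service: nfs.mynfs
      frontend_port: 2049
      monitor_port: 9049
      virtual_ip: 10.0.0.100/24
    EOF
    ceph orch apply -i nfs-ingress.yaml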
C: One question from the dashboard side: is it maybe just needed to use the NFS command, nfs cluster create, with the parameters, okay? And is the NFS module the one that is going to be responsible for creating the spec file? Is that what you are proposing?
B: It creates the pool and it creates the empty config file. But I just made the orchestrator do that, actually, so that you can call the orchestrator directly; but I guess, yeah, we could refactor that so it just calls a helper in the NFS...
C: I want to start to work with that. Okay, once we have things clearer in cephadm, I want to start being more focused on the module, yeah, and also on the Rook path. So...
C: Sure, there's that, yeah. So I think that we need resources taking a look at that continuously, because at this moment there is a big gap in functionality and a big gap in stability, okay, because we don't even know if everything is working or not there. So...
D: One more comment on NFS. Alfonso had to leave, but he wanted to bring in a discussion; he's been talking with Varsha, because of some discrepancy in behavior, which I think is Rook-related, when creating this module option for the dashboard: the ganesha pool name with the namespace, etc. So I'm not sure if there is any difference in the way that's handled by the orchestrator, or cephadm versus Rook, or what their issue is. But...
B: I can't think of any reason why we'd ever want to use a custom pool for the ganesha daemons, or any of that; just making it... I mean, I think there's still a property on the spec, but all the CLI commands would just assume the default, and so you would never change it, unless... you could even remove the ability to change it. But basically making it so that there's an NFS ganesha pool that always is the pool that has the ganesha configs in it, and each daemon is in its own little namespace of that pool for its configs.
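A sketch of that layout: one shared pool, one RADOS namespace per cluster for its config objects (pool name per the current default; the cluster name is illustrative):

    # List one cluster's ganesha config objects in its namespace
    rados -p nfs-ganesha --namespace mynfs ls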
B: The only sort of bump I hit is that I wanted to name it .nfs, or .nfs-ganesha, but the current default is nfs-ganesha, and there are already like a million places in the test code and other places where it assumes that name; and so if I rename it, then I have to change it in a bunch of places. It's not clear to me if we should make it a dot pool, because it's sort of an internal thing, kind of like the .mgr pool and the .rgw pools.
B: Because we can, if it's a permissions thing, restrict the ganesha daemon so that it only has access to that namespace of that pool, but they can still be kept isolated.
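That restriction can be expressed with OSD caps; a sketch, with the entity and names illustrative:

    ceph auth get-or-create client.nfs.mynfs \
        mon 'allow r' \
        osd 'allow rw pool=nfs-ganesha namespace=mynfs'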
B: Okay, and I'm inclined to kick the Varsha thing to later.
B: In the... well, no, because... well, for OSDs, yes, because the OSD specs are all weird, yes; but for other things, that's that.
B: I think the simplest implementation is going to be basically the first three patches of the current pull request; drop the resources thing that I added in the last patch, and then basically say we add a config option that says autotune, true/false, for the OSDs. And if it's true, then cephadm will basically look at the host memory, look at the number of OSDs, and it will set a config option.
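The knob being discussed corresponds to something like the following plain config option (spelling as it later shipped; treat it as an assumption for this point in time):

    ceph config set osd osd_memory_target_autotune true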
B: The config supports scoping things that map to specific crush things, so I could just do it by host; and then, when you do a ceph config dump, you could see all the different OSD limits set on a per-host basis. It should be not too verbose, and you could actually see what's happening. And then the normal reconciliation loop in cephadm will look at what the running pod's memory limit is and what the config option says the memory limit should be, and if they're different, then it'll just restart the pod.
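The per-host scoping uses the crush-location masks in the config database, so the result stays visible in the normal dump; host name and value are illustrative:

    ceph config set osd/host:host01 osd_memory_target 4294967296
    ceph config dump | grep osd_memory_target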
A: We need to split the memory target equally across all the daemons on that host, yeah. So if we have our...
B: The thing I want to avoid is that you have five OSDs in the server, you deploy the sixth one, and it has to go restart all the other five, right? And with memory, because we're divvying it up, we either have to restart the containers with new limits, or we can just adjust the config option.
B: It'll just work. But if we have just, like, "each OSD should use no more than four cores", or they belong to a shared cgroup on the server, they're all in one bucket, or something like that; that would, I think, make more sense. But I guess I'm inclined to just deal with the CPU thing separately and come to that later.
B: Yeah, yeah, yeah, yeah, because I'm guessing what we actually want is like one cgroup on the server for all OSDs; then you set that big bucket to have, you know, 16 cores for the whole server, for all OSDs, and they all belong to the same pool, which I think you can do with the systemd weirdness, whatever. I think it's possible, but I've never done it before.
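With plain systemd, the "one bucket for all OSDs" idea would look something like a slice with a CPU quota; a purely hypothetical sketch, where the slice name and the wiring of the OSD units into it are assumptions:

    # 16 cores' worth of CPU for everything placed in this slice
    cat > /etc/systemd/system/ceph-osd-shared.slice <<'EOF'
    [Slice]
    CPUQuota=1600%
    EOF
    systemctl daemon-reload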