From YouTube: Ceph Orchestrator Meeting 2021-03-11
A
Oh yeah, welcome to the latest orchestrator meeting. Let me, do we have one? Second, we have mainly topics regarding requests from [unclear]. Yep, do we want to dive into RGW?
B
Sure, yeah. Just right before this I had a call with Juanmi and Alfonso and Ernesto and Daniel and [unclear], talking about this.
B
I guess here's what I'm thinking: I still think that cephadm should stay out of the zone setup. I think it's important to think about who the users are and what we're prioritizing, what we're actually focusing on. So there's the end user, the customer who wants to set up multi-site, and there are going to be multiple ways you can set it up, right?
B
You can pre-write this complicated bit of YAML that's declarative and does everything all in one go. I think for an end user that actually isn't a great experience, because if you get it wrong, to iterate you have to tear down the whole cluster and start over again, and it's slow, and it's YAML, which isn't that friendly, whatever. I think that's actually not the best experience for an end user. Or you could have some documentation...
B
...that says: run these CLI commands to create the realm and the zone, and this is what it means, explaining it as it goes. Doing that is sort of a little bit tedious.
B
But at least it's understandable, and if you make a mistake you can back up and redo it. Or you could have a GUI, a nice wizard that sort of guides you through it interactively: "create zone", and a thing pops up, and whatever, all that stuff. I think that's the nicest, but also obviously the furthest out.
B
I think for an MVP, for new users and new clusters setting up multi-site, just using the CLI to set up the realm and zone configuration before allowing the cluster to deploy the daemons is, I think, totally reasonable.
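The CLI-first MVP flow described here maps onto real commands. A minimal sketch, assuming placeholder names like myrealm and us-east (flag spellings vary a bit across releases):

    # Create the realm/zonegroup/zone before any RGW daemons exist.
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=us --master --default
    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --master --default
    # Commit the period so the configuration takes effect.
    radosgw-admin period update --commit
    # Only then let cephadm deploy the daemons for that zone.
    ceph orch apply rgw myrealm.us-east --realm=myrealm --zone=us-east --placement=2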
B
It's not like the greatest experience, but this is a one-time thing that happens when you set it up, so it doesn't seem like it's that big a deal, and eventually we can do the rest. For us, I actually think the YAML specification, the declarative "this is the whole thing,"
B
"do it all at once" form is actually most important for us, because we'll use it for QA testing and for CI and for everything else, because we're repeatedly setting up and tearing down clusters all the time.
B
I think we should still do a declarative form, but we're going to be the primary user, not an end user. Like, if BofA wants to set this up, they're not going to go, I don't know, whatever; I think YAML isn't going to be the thing that's most helpful for them. Which I think partly means that it doesn't need to be super friendly or even necessarily robust. It just needs to be sufficient.
B
The YAML mode of deploying all this stuff is analogous to what Rook does. Rook went through this whole process too, about a year ago, when we were trying to figure out how to do multi-site with Rook, and what we basically decided was that the whole concept of desired state and reconciliation loops and controllers is just incompatible with how multi-site works. Because all the complexity in the multi-site realm stuff exists because you need to be able to have differing, versioned views of what the state of the system is. So right now this cluster might be the master, and this one might be the slave.
B
But this one maybe goes down, and so you switch the other one to master, and you have this whole period concept with time progression and reconciliation and so on. And if you have multiple Kubernetes clusters, each with their quote-unquote desired state, reconciling that is just never going to work.
B
Right. And so, in terms of the priorities for Pacific, I think they should be: moving forward with removing the realm fiddling out of cephadm, and making sure that the documentation is accurate and complete and understandable for what the CLI commands are to set up the multi-site config before you deploy the zones, which I think we're mostly there on.
B
So we need to support the same set of options, either mapping them directly to ceph config options, and making sure that we can run the daemons on the same ports and all that stuff, so that we can have an orderly transition for an existing multi-site deployment that has petabytes of data and is in live use. Like, we need to be able to upgrade it and not totally make it blow up. And I think that's actually the thing that's most scary to me, and where I think most of the effort should be spent in the short term.
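As a sketch of the kind of option mapping this implies, with placeholder names and an arbitrary port: the existing deployment's frontend port has to be expressible either through the service spec or directly as a config option:

    # Keep adopted daemons on the port the existing deployment already uses
    # (8080 is an arbitrary example).
    ceph orch apply rgw myrealm.us-east --realm=myrealm --zone=us-east --port=8080
    # Or map it straight onto the config option the daemons actually read:
    ceph config set client.rgw.myrealm.us-east rgw_frontends 'beast port=8080'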
C
Just a couple of things, okay. I think that it's a good plan, okay, so I'm okay with that, but we need to start the work as soon as possible. Okay, so, well, I think that Daniel is going to start doing the tests to deploy a cluster using ceph-ansible, okay, with our RGW multisite, and see what the things are that we need to do in order to adopt the configuration into a new cephadm cluster, as part of that.
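The test being described presumably follows the documented legacy-conversion flow, roughly sketched below; daemon names are illustrative, and RGW daemons get redeployed by the orchestrator rather than adopted in place:

    # Adopt the core daemons in place, one at a time, on each host:
    cephadm adopt --style legacy --name mon.host1
    cephadm adopt --style legacy --name osd.0
    # RGW daemons are then redeployed under cephadm rather than adopted,
    # reusing the realm/zone configuration that already exists:
    ceph orch apply rgw myrealm.us-east --realm=myrealm --zone=us-east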
D
Hey everyone. Yeah, I kind of agree on deploying minimal things and then providing documentation for the multiple-step commands that you need to provide to end customers in a more complex scenario. So that's okay. We are still working on this declarative approach: having the spec trying to describe as much as possible, translating the parameters defined in Director or TripleO, and providing a valid spec for the cephadm deployment, basically. So I'm kind of okay with this kind of solution.
D
But there are many more things, like the networking stuff we were talking about a couple of weeks ago, that are kind of mandatory for us, yeah, because people have this kind of information from Ansible or another orchestration layer on top of them. So it's okay having the networking information, these kinds of things, and using it to build a load balancer, basically.
B
Just one follow-up here: I think another user of the declarative form would be an opinionated installer. I don't know exactly what's in scope for Director, but if Director has a thing where it automatically sets up two clusters that are pre-configured to talk to each other...
E
No, no, no. We are planning to do that ourselves. No, no, go ahead, John, sorry. I mean, it's not for RGW multi-zone, so what you were saying about that doesn't affect us at the moment, and you've clarified it doesn't change the networking stuff we talked about last week. So this all sounds fine to us so far. We just need that networking stuff like we talked about, and it's still on the table, so that's good. And I guess...
D
We're okay with that, right, because in TripleO we are trying to handle just day one. So we're going to deploy the first cluster with the basic services that we're going to support. Then the plan is to export the status, the big picture of the cluster, which is the spec, the YAML file you can export with the orchestrator CLI, and then everything else should be a day-two operation. So setting up something different from the basic daemons that are supported, and that were supported in ceph-ansible, is day two. So yeah. Yep, okay, thanks.
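The export step described here maps onto the orchestrator CLI, roughly:

    # Day one: capture the deployed cluster as a declarative spec...
    ceph orch ls --export > cluster-spec.yaml
    # ...then day-two changes are edits to that spec, re-applied:
    ceph orch apply -i cluster-spec.yaml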
B
In progress, okay. We're working on the scheduling piece; the next step after this is the port-mapping piece, so it'll assign ports to different daemons, and then the step after that will be to choose the IP to bind to based on the subnet list. But it's all roughly what we talked about before: the spec will have a list of subnets to choose from, and it will pick an appropriate IP out of those subnets. Eventually we'll get down to that one.
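A sketch of how a subnet list could appear in a service spec; this mirrors the networks field that eventually took shape in cephadm, so treat the field names as illustrative at the time of this discussion:

    # Illustrative spec: bind each daemon to an IP picked from the
    # listed subnets, on the given port.
    cat <<EOF | ceph orch apply -i -
    service_type: rgw
    service_id: myrealm.us-east
    placement:
      count: 2
    networks:
    - 10.0.0.0/24
    spec:
      rgw_frontend_port: 8080
    EOF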
E
What we're probably going to do on the Director side, or at least OpenStack, is... it's milestone three, so depending on where that is, we might have our Wallaby release support cephadm RBD deployment in full, and still use ceph-ansible if someone needs the RGW stuff, and then, when the other stuff lands in X, we can do the full conversion in X. Okay, we'll see, but that's something we might do, I'm guessing.
B
Yeah, I think we should look at what Rook did, because we basically went through this whole process of figuring out the minimal set of CRDs we should define for Rook in order to allow you to bring up a multi-site configuration for day one, and nothing else. I think it might be helpful to go look at how that worked.
B
Yeah, I mean, I think my first sort of impulse is to just do that, and then the orchestrator thing that we create would just map one-to-one onto the Rook one. And so for the Rook orchestrator it would basically create these CRs, everything would just pass through, and it would work, basically. But I don't know, yeah, we should probably think about it.
B
We should probably look at how much code Rook actually has to do this stuff, because I think, in the case of like the realm, if you do a pull endpoint it runs some commands to go pull from the remote cluster, and there are secrets involved to bootstrap the whole thing, and we should decide whether that's actually what we want to implement in cephadm or not.
B
I think, if so, then it's kind of nice, because we're going to jump right to parity with Rook, which would be kind of nice without having to go redo everything again. So hopefully that's the case, but anyway.
B
I think in the meantime, for the purposes of our testing and reproducibility, we can just have "run these four shell commands to create the zones and realms and then apply the rest of the spec that deploys all the RGW zones." That would probably be sufficient for, like, our QA tests and repeatability and whatever else.
A
Yeah. Right now the obvious downside is that you cannot assemble one big cluster spec and then apply it at bootstrap time. You have to split it up into bootstrapping with a part of the spec file, then running a shell command, and then applying the remaining spec file.
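Spelled out, the three-step split would look something like this sketch (file names, IPs, and realm/zone names are placeholders):

    # 1. Bootstrap with the part of the spec that can apply immediately.
    cephadm bootstrap --mon-ip 10.0.0.1 --apply-spec base-spec.yaml
    # 2. Run the shell commands that create the realm/zone state.
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=us --master --default
    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --master --default
    radosgw-admin period update --commit
    # 3. Apply the remaining spec that deploys the RGW daemons.
    ceph orch apply -i rgw-spec.yaml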
B
Because for us, I mean, it's cool, but I don't think a user would ever do that; it would be a test that would do that, right? And so having a three-step test, bootstrap, run shell commands, and then another apply, isn't really that much harder as far as the test goes.
A
It also helps us to build up a consistent user experience. If everything is doable using one big cluster specification, we are making sure everything, even within cephadm, is sane, like preventing deadlocks, also.
B
Yep, yeah. But I guess that end experience could wait until we define these new CRD equivalents for realms and zones and stuff; that's the end goal, and in the meantime, just for our own testing or whatever, we can do the three-step instead of the one-step.
A
I don't like it; it's too generic. It's a bit too generic for my thinking. Yeah, I agree. Okay.
B
Okay, so there's that, and then there's also this idea of... I think we need to understand what the default behavior should be, because I think, if you have a two-node cluster and you say "ceph orch apply mds foo" with a count...
B
In this case, let's say a count of, I don't know, like eight or something, some sort of reasonable number, maybe four, right. But then, if you also did "ceph orch apply mds foo label:mds", like that, I think there should be an implicit num-per-host of maybe one or two, right, like something smaller. Like, we don't want to go deploy 16 MDS daemons right out of the gate.
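To make the two placements concrete, a sketch with placeholder service names:

    # Explicit count: eight daemons spread across whatever hosts exist,
    # co-locating when there are fewer than eight hosts.
    ceph orch apply mds foo --placement="8"
    # Label only: the implicit cap under discussion would deploy one or
    # two daemons per matching host instead of, say, 16.
    ceph orch apply mds foo --placement="label:mds"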
A
Yeah, that's getting super complicated, hmm. Yes, if we are using the RGW thing of saying max per host... although, I guess.
A
If you think about it in terms of allow-colocating-services, yes or no, then suddenly that makes sense, right? If we default allow-colocating-services for MDS to true, if we set it implicitly to true, then "ceph orch apply mds foo count:4" will co-locate services implicitly; that makes sense. If you say "ceph orch apply mds foo label:mds", then it will deploy as many daemons as you have hosts, implicitly. That perfectly makes sense.
C
One thing about that: I think that we are trying to specify the placement just with a number, okay? And well, if we do that, I think the way to do it is, well, we need to distribute the daemons evenly between all the hosts that are available, okay? If the user wants to deploy more than one or two different daemons on one specific host, he should use a more complicated placement clause.
C
Okay, so maybe, I think that trying to introduce a new variable, max-per-host, in order to say what the maximum number of daemons is that we can put on one host, is making things complicated. It's just: if you are going to use one number only as the placement, what we are going to do is distribute the daemons evenly between all the hosts available. If you want to do a different thing, then specify it using a more complex placement. That's what I think.
A
"count: 4" would then default to only placing one daemon per host, and if you want to allow co-locating, then you would have to say "allow_colocation: true", and then it would spread out the four daemons.
B
Right, but I mean, if you did "orch apply mds foo count:10", like, that should co-locate by default; you shouldn't have to say, you know, "count:10 colo:true" or whatever. Yeah, we can do that by type; that's not a big problem. Okay, because if we do it by type, then I wonder if it needs to be part of the placement spec at all, or can it be an intrinsic property of the type and not of the placement spec?
B
Right, you would use count-per-host in combination with labels, filters, etc. So like count-per-host plus "label: rgw", and then...
B
...or like "host_pattern: foo" or whatever it is, the host filter, what is it called? I don't know, whatever.
Oh
yeah,
whatever
it'd
be
like,
if
you
know
you
could
do
something
like
that,
or
you
would
have,
you
know
account
for
count
10
for
like
an
mds,
but
you
would
never
say
count
ten
count
for
a
host
three.
This
would
be
an
error.
B
A
B
I think basically, in order to make that work, the manager standby mode has to be disabled, so there could be a test setting that does that. And one way to do it would be that, in cephadm, if that setting is on, then the per-service-type behavior is that co-location is okay, and if that setting is off, then the behavior is that co-location is not okay. Like, we could still get away without putting it in the placement spec, if we wanted to.
A
I mean, I don't really find it an optimal solution, but I think it works for now.
A
Like, say I want to have, I have three MDS hosts and a file system, and two of them are serving one file system and the other two are serving a different file system; the max-per-host is then global for the host and doesn't really make sense per service.
B
So for now, then, we can go with count-per-host, which is used in combination with patterns and labels and stuff, but not with count; and in the count case we just have an intrinsic property of the type, hard-coded, that says whether we co-locate or not. That sounds okay. That sounds good. Okay, okay.
A
We have this one here that adds it to the NFS service specification, like we talked about yesterday. And only fixing it in the documentation has its downsides. Like, for example, there is a feature of last resort: we are able to change the Jinja2 template for the ganesha.conf when deploying NFS daemons.
A
One
way
would
we
suggest
let
the
user
replace
that
table,
let
that
the
template,
without
touching
the
code
and
the
other
alternative,
would
be
to
add
the
project,
call
to
the
server
specification
and
that's
a
full-blown
feature
then,
and
we
have
to
somehow
decide
between
those
two
options.
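As a sketch of what "let the user replace the template" would mean in practice; the config-key path here is illustrative, not a settled interface at the time of this discussion:

    # Hypothetical: stash a user-provided Jinja2 template that cephadm
    # would use in place of its built-in ganesha.conf template...
    ceph config-key set mgr/cephadm/services/nfs/ganesha.conf -i my-ganesha.conf.j2
    # ...and redeploy so the NFS daemons pick it up.
    ceph orch redeploy nfs.mynfs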
A
And
the
other
one
is
that,
right
now
the
dashboard
supports
nfsv3
as
a
as
a
selection
in
the
in
the
dashboard
when
creating
an
fhs.
I
am
okay
with
dropping
the
full-blown
nfs
support
in
cfdm
and
just
let
the
user
replace
the
template,
but
that
would
be
a
regression
in
the
dashboard.
A
They start with a few configuration options and try to be as opinionated as possible, and then, after a while, a new use case pops up, and then you need to add the possibility to modify this specific config option, and then, after a while, another config option needs to be added. And then, after a while, the Helm chart is getting really complicated.
B
This
is
why
I
like
the
sort
of
like
a
pass
through,
but
you
can
specify
arbitrary
extra
stuff,
the
stuff
in
the
config
file,
because
it's
like
it's
sort
of
a
compromise
between
the
two
like
most
of
the
time.
You
don't
need
it,
but
if
you
do
have
something
that's
sort
of
off
in
the
leads,
then
you
have
a
path
forward.
B
Yeah,
well,
I
think
if
in
this
case,
we're
already
specifying
oh
because
we
already
saved
protocols
for
and
we
want
to
have
to
be
able
to
add
others
in
there
hold.
B
I guess I would lean towards this one, since it's pretty easy, and then next time add something that lets you just specify extra key-value data to stuff in the...
F
It does, yeah. That's the other part of it: v3 doesn't really support multi-head very well, and so, you know, there's a risk here. If you do like an active-passive thing you're okay, coming from, like, prior to Octopus, but Rook deployments are completely multi-head, using the Ganesha grace db. So that's what you want. V3-only is more like a legacy support thing; like, maybe there's some Windows NFS or something that somebody needs.
F
And the dashboard problems Sebastian mentioned. So maybe it's a good opportunity to just go NFS v4 only; maybe that's what we ought to do here, and say everything's multi-head.
F
I think that's the problem: we keep contriving use cases around this. But one of the things is, if we think about where we're moving with cephadm, maybe we want an HAProxy, maybe we want this multi-head eventually, kind of like what we're doing with RGW, and adding this protocol gets in the way of doing that. I've had another challenge...
A
Overriding the template is a feature of last resort; like, it's possible to do that, and it's super flexible.
B
Then, I don't know. Yeah, I don't really like the first pull request... sorry, the second one, the one adding it into the spec. I mean, I think that's the right way to do it if we want to support it, but I don't think we should support it. I think we sort of want...
B
Should we... that ports pull request, Sebastian: is that on top of my other one, or is it? I can't remember.