From YouTube: 2019-06-03:: Ceph Orchestration Meeting
Okay, in my case, yes, it is to resolve some doubts that I have about the API. For the stateless services it is fine, but I found that it probably depends on what implementation is used, because I don't think it works outside of Rook, okay. So there are some things that, for me, are not clear, or are difficult to understand, if you are not using Rook, where Kubernetes is doing the backing. Okay.
The first thing that I have found is that it seems that we are supposed to have a multi-cluster setup with this master realm, okay, but that is intended to be configured as the default: a multi-cluster with several realms. The thing is that what I have seen, what I have found in the Rook documentation, is information that says that while this is the default configuration, we are not ready to manage a multi-cluster environment.
But basically, what for me is a little bit strange is that we start, for example, with a cluster without the RGW service, okay, and we are going to start from scratch, and, well, the first thing that for me is a little bit strange is that we cannot set up the configuration using the parameters that the final user wants to use if we are going to use RGW.
What we are going to create is a multi-zone RGW service, with the parameters that we deem as the most reasonable for the default, you understand. Basically, a final user does not have the possibility to choose what parameters they want to use in order to create the RGW service when you deploy it. Okay, that's not a problem, okay, we can start with a default configuration. I mean, what exactly is missing?
And you can see that, well, it is only in the case of, for example, adding new RGW services that we should specify the name of the zone, the number of new RGW services that we want to have, and the hosts where they are going to run. Okay, so I think that, well, we can start with this, okay, but obviously, for the final user, I think that definitely, as you said, they could have more control over what is going to be created. Yeah.
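The parameters discussed here for adding RGW services (zone name, daemon count, placement hosts) could be sketched, very roughly, as a service spec. `RGWSpec` and `add_rgw` are hypothetical names for illustration, not the actual ceph-mgr orchestrator API:

```python
# Hypothetical sketch of an "add RGW service" orchestrator call, based
# only on the parameters mentioned in the discussion above. Names are
# illustrative; the real orchestrator interface may differ.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RGWSpec:
    zone_name: str                     # name of the RGW zone
    count: int = 1                     # number of RGW daemons to run
    hosts: Optional[List[str]] = None  # placement; may be optional (Rook)
                                       # or required (Ansible/DeepSea-style)

def add_rgw(spec: RGWSpec) -> str:
    # A real orchestrator would validate the spec and schedule daemons;
    # here we just render what would be requested.
    placement = ", ".join(spec.hosts) if spec.hosts else "<orchestrator-chosen>"
    return f"deploy {spec.count} rgw daemon(s) for zone '{spec.zone_name}' on {placement}"

print(add_rgw(RGWSpec(zone_name="default", count=2, hosts=["node1", "node2"])))
```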
Perfect, perfect. Another thing that I wanted to discuss is that, for example, we have a couple of methods that are called update. I think that these two methods are well thought out for Rook, okay, but difficult to understand in an orchestrator that is different from Rook, okay, because with update you can do some kind of operation that is frequent in Kubernetes-like environments.
So what I am proposing is that, okay, I think that it's okay to have all the possibilities in the orchestrator interface, or in the API, but I think that it probably makes no sense to implement all of the API in all the orchestrators. Maybe, if we have the functionality, I think that it is completely valid to use add or remove in order to manage services with Ansible or with DeepSea, and to use update with Rook.
Imagine that you are going to use the update call in order to increase the number of RGW services. With Kubernetes, you can issue the command giving, for example, just the new size, okay, and this is all. But for the rest of the orchestrators, for DeepSea and for Ansible, you will need to add a list of hosts, and this is imperative, so you need to know, in the dashboard, what your cluster looks like in order to issue the command.
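The contrast being described, a declarative size change versus an imperative host list, can be sketched as follows. Both functions are hypothetical illustrations, not real Ceph or Kubernetes calls:

```python
# Sketch contrasting the two scaling models discussed: declarative
# (Kubernetes/Rook: state only the desired count) versus imperative
# (DeepSea/Ansible: the caller must supply the concrete host list).
from typing import List

def scale_declarative(service: str, count: int) -> dict:
    # Rook-style: the orchestrator decides where the daemons run.
    return {"service": service, "count": count}

def scale_imperative(service: str, hosts: List[str]) -> dict:
    # Ansible/DeepSea-style: the dashboard (or user) must already know
    # the cluster layout in order to build this host list.
    return {"service": service, "hosts": hosts}

print(scale_declarative("rgw", 3))
print(scale_imperative("rgw", ["node1", "node2", "node3"]))
```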
So I think what I'm hearing is that you're really optimized for the storage provider or not, or for the orchestrator. Maybe we need a concept of whether or not an API or a method is implemented by an orchestrator. So, if Ansible doesn't implement the update function because it's too complicated...
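One minimal way the "is this method implemented by this orchestrator?" idea could look, as a Python sketch (the class and method names here are invented for illustration):

```python
# Sketch: each orchestrator either overrides an operation or inherits a
# not-implemented default, so callers (e.g. the dashboard) can check
# support before issuing the call. Illustrative only; the real ceph-mgr
# orchestrator interface differs.

class OrchestratorBase:
    def update_rgw(self, count: int):
        raise NotImplementedError("update not supported by this orchestrator")

class RookOrchestrator(OrchestratorBase):
    def update_rgw(self, count: int):
        return f"scaled rgw to {count}"

class AnsibleOrchestrator(OrchestratorBase):
    pass  # deliberately does not implement update_rgw

def supports(orch, method: str) -> bool:
    # True only if the orchestrator overrides the base implementation.
    return getattr(type(orch), method) is not getattr(OrchestratorBase, method)

print(supports(RookOrchestrator(), "update_rgw"))     # True
print(supports(AnsibleOrchestrator(), "update_rgw"))  # False
```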
Exactly, the same that we are using, okay, but we are going to have the same problem in other cases, and basically the problem is: with Rook, you don't need to specify hosts, it is just handled by Rook, and otherwise you need to specify on what hosts you are going to install what services. And this is something that is a common problem in all of the API: you need hosts, unless you are using Rook. This is the real problem.
Maybe, or maybe we add the concept of parameters and types to the orchestrator API calls, and then the dashboard, for a given call, could interrogate what parameters, and what types of parameters, were taken by each call, and could then use that to generate some UI pieces. And so, if we had different orchestrators, you know, one orchestrator has a method that requires a list of hosts and another orchestrator has the same method but doesn't require the hosts...
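The introspection idea, the dashboard asking a given call which parameters and types it takes and building form fields from the answer, might be sketched like this; `add_rgw` here is a hypothetical orchestrator call, not the real API:

```python
# Sketch: derive UI form-field descriptions from a method's signature
# (names, type annotations, whether a default exists). Illustrative only.
import inspect
from typing import List, Optional

def add_rgw(zone: str, count: int = 1, hosts: Optional[List[str]] = None):
    """Hypothetical orchestrator call."""

def describe_params(fn):
    # One dict per parameter: the dashboard could render a required text
    # field for 'zone', an optional number field for 'count', etc.
    sig = inspect.signature(fn)
    return [
        {"name": p.name,
         "type": str(p.annotation),
         "required": p.default is inspect.Parameter.empty}
        for p in sig.parameters.values()
    ]

for field in describe_params(add_rgw):
    print(field)
```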
In a Rook context it actually can make sense to specify the hosts, if you want to, yeah. So it's nothing that is impossible in one orchestrator; it's just strictly optional in one orchestrator and mandatory in the other orchestrator. So there is some kind of similarity, even in the orchestrator case.
If that makes sense, it could be, I don't know, but probably, well. This is the case in, again, the Ansible world or the DeepSea world: I know that probably I just only want to remove one service on one host, not all of the zone. I know that, again, this seems to be designed to be used only with Rook, because you have the update call in order to change the number of daemons without providing this host information in the service.
By default, Rook also removes the pools that are backing that, or, there's been some discussion about how we retain that data if we only want to remove the RGW services, the daemons. That's related to whether or not Rook owns the pools: like, if Rook shouldn't create the pools and only deploy the daemons, then, yeah, that's the issue we already mentioned previously, I guess.
On the Rook area, just a few things to be aware of; I don't know that there's a lot to discuss, but there is an updated governance PR, finally opened last night by Jared. I haven't had a chance to review it yet, but the idea around this is that, you know, more people would have push access to Rook.
But yeah, the idea is to add more people with push access; we call them owners, and then there's approvers, who can approve the pull requests. But the list of maintainers, there's four of us, wouldn't actually change right now. There will be a future update to the governance, which will just take a few more weeks, probably, to figure out. We create something called...