From YouTube: 2020-04-20 :: Ceph Orchestration Meeting
A: Let's get started with today's weekly meeting. Okay, topics: I do have one topic and a half. Regarding cephadm, I did a bunch of backports for 15.2.2: lots of fixes for bugs that users are currently running into, plus lots of documentation improvements ported over, which is also really helpful, because right now upstream users are running into issues, things that we haven't expected, or usability issues, and we need to fix them as soon as possible.
A: You have to make sure that no daemons are running on a given host and then remove the host; otherwise you are left with hosts that are still part of the cluster but no longer known to cephadm, which is not a great experience. And while doing that, I discovered that our scheduler seems to work at least a bit unexpectedly, from my point of view. So let's say you have a placement specification with a service type, a mon, for example, and a placement where you say hosts is just my one host and count is three: you want to have three monitors, but you are only specifying one host.
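Written out as a service specification, the example would look roughly like this (the hostname is invented for illustration):

```yaml
service_type: mon
placement:
  hosts:
    - host1    # only one candidate host listed...
  count: 3     # ...but three monitors requested
```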
E: Okay, yeah, I mean, that makes sense to me, but I can see how you would think it should behave differently.
D: It even seems that hosts and count are maybe not compatible, okay, because you can produce incompatible, or just not good, placement specifications with very simple things.
Okay, it seems sensible that if you are using hosts, you are going to put in the list of hosts, and then there is only the possibility of deploying one daemon per host. So if you put hosts, the only possibility is to deploy on the hosts that you explicitly typed; if you use count, count is a different thing.
B: They are compatible, but the count needs to always be lower than or equal to the number of hosts that you have specified, no? Then, if you use the two together, count acts as a sort of limiter: if you specify a host list of ten, but you only say count three, then it will randomly pick three of those ten hosts. That kind of makes sense.
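The semantics being discussed, with count acting as a limiter over an explicit host list and too-large counts rejected, can be sketched like this. This is a hypothetical helper for illustration, not the actual cephadm scheduler code:

```python
import random

def pick_hosts(hosts, count=None):
    """Sketch of the discussed placement semantics."""
    # An explicit host list with no count: one daemon per listed host.
    if count is None:
        return list(hosts)
    # The problematic case from the discussion: more daemons requested
    # than there are candidate hosts; reject instead of under-deploying.
    if count > len(hosts):
        raise ValueError(
            f"count {count} exceeds the {len(hosts)} candidate host(s)")
    # Otherwise count acts as a limiter: pick a random subset.
    return random.sample(hosts, count)
```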
B: So I could see that this has some application, but it is probably the rare case, I think. The other way around, though, is kind of awkward; I think it probably should be rejected.
B: Yeah, I mean, the user expects to have three, right? That is why he used count three. And then only two will be spawned, and he needs to investigate himself; he will eventually find out that there are only two hosts with the label mon, and then he needs to adapt the spec. I think we can avoid this manual investigation step by just stopping the application of the spec.
B: This should definitely also be printed in the health checks dictionary that we pass to Ceph, and if this is not being executed by a user, it should definitely end up as a Ceph health warning. But if the spec is passed on the command line, I think it is a good option to bail out, and not apply it and not save it. Probably; at least that is what I think right now.
A: If you are using labels or a pattern, then we are really putting a burden on the MDS autoscaler: it first has to query the list of hosts and apply the filter of a possible placement specification manually, just to discover that there are not enough hosts, and then not execute the change. So we are simply handing the burden of testing that to the MDS autoscaler, for example.
B: I mean, there could be an interactive and a non-interactive mode. So if scaling up or down is being done by a script, you could do some sort of best-effort scenario and print a warning saying that the MDS autoscaler does not have enough resources: please add more labels to your hosts. And if this is done by a user, we could either do the same thing and just print the warning, or stop.
D: Well, what I think is that we are going to deploy all the daemons and we are going to have an error in the placement of one of the pods, because there is no place to deploy the additional daemon. For example, suppose I deploy four RGWs and you only have three nodes: three nodes are going to be running the RGW daemon, and you are going to have another RGW pod in an error state, okay, because there is no place to deploy this pod.
A: Going with an interactive flag, I think, does make sense, right? If you specify such a wrong placement, and you know it right away because you are still synchronously executing that command from the CLI, I think we can bail out. It is just that if the scheduler is running and constantly trying to apply the current specifications, I think we should do as much as we can, as long as it is safe to do.
D: And just a comment: we have started to look at the high availability not only of the dashboard, okay, but also of the whole monitoring stack, Prometheus, Grafana and the Alertmanager, okay, and even the node exporter that we have, in order to make all of this highly available. And I think that maybe we can... yesterday I drafted a proposal for this, which...
D: If we have a problem in one of the Prometheus servers, the HAProxy is going to switch the active Prometheus server, okay. And if we have a problem with the whole host, for example the host is down, then what we are going to have is just the manager failing over to another active manager, and we are going to be on a different server with a Prometheus server running. Okay, and for the Alertmanager and Grafana, I think that in this case these are stateless elements.
A: Even with a load balancer, with HAProxy, we will still have to reconfigure Grafana, or reconfigure the corresponding daemons, whenever one becomes unavailable; we still have to reconfigure things, like reconfiguring Prometheus to point to the new manager module. So I wonder if we really should focus on the monitoring part when it comes to HAProxy, because we can do lots of stuff without HAProxy, at least for the internal services.
D: With this approach, basically, what we are going to have is the whole monitoring stack on each of the servers where the dashboard is running. So I think that we can avoid the reconfiguration of the daemons, because on each of the hosts, what you are going to have is all the pieces on the same host.
D: Okay, and what we are going to do with the HAProxy is just switch the Prometheus server connection in case you have any kind of problem; but for the dashboard, the address of the Prometheus server is always going to be the address of the HAProxy on localhost.
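As a sketch of that idea, assuming two Prometheus instances on invented hostnames, the local HAProxy frontend the dashboard talks to might look roughly like this; it is an illustration of the concept, not the actual proposal:

```
frontend prometheus_in
    bind 127.0.0.1:9095
    mode http
    default_backend prometheus_servers

backend prometheus_servers
    mode http
    option httpchk GET /-/healthy
    server prom1 host1:9090 check
    server prom2 host2:9090 check backup
```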
D: We do not need to have the Alertmanager and Grafana running on each of the hosts, okay, because they are easy to start; but for Prometheus it is different. With this approach I think that we can avoid the loss of data in the time series, okay, and we also avoid problems, for example with the dashboard not being responsive or not resolving the Grafana box, okay.
D: In this case it is just a matter of starting a new container with HAProxy and the right configuration, and since it is a stateless service, it is easy to start. Obtaining monitoring data from this HAProxy itself, okay, is a different thing, but I think that if we can use version 2.0 or above of HAProxy, there is a Prometheus exporter, and all the metrics are available.
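For reference, in HAProxy 2.0 and later (when the binary is built with the exporter), the built-in Prometheus endpoint can be exposed with a couple of configuration lines, roughly:

```
frontend stats
    bind *:8404
    mode http
    http-request use-service prometheus-exporter if { path /metrics }
```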
A: I think that's okay for today. Any other thoughts on the topic?
A: Okay, we have the Rook section again. Travis, is that something from you? Yeah.
C: In the area of removing or replacing an OSD, there is an effort open now which makes it simpler to run Ceph commands via the toolbox. Right now it is just a pod that you have to connect to, and you get an interactive prompt, or you can execute things, right; but for the case where you just want sort of a one-time action, or maybe a whole script, or a script you run periodically or something, it seems useful to be able to run the commands as a one-shot job. So it's just kind of an FYI.
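The one-shot pattern described here would look roughly like a Kubernetes Job wrapping the toolbox container. Everything below (names, the image tag, the chosen command) is illustrative, not the actual change being discussed:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ceph-toolbox-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: toolbox
          image: rook/ceph:master   # illustrative tag
          # a one-time action instead of an interactive shell,
          # e.g. checking cluster state after replacing an OSD
          command: ["ceph", "status"]
          # the real toolbox also mounts the cluster's ceph.conf
          # and keyring; omitted here for brevity
```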
D: Okay, because the suite is using the latest master image of the Ceph container, okay, and the modification that I did, in order to have the 'available' attribute properly reported in the list of devices, that pull request has been merged, okay; but I don't know why this is not appearing properly in the output of the command.
A: Nothing? Okay, then see you again next week.