From YouTube: Ceph Orchestrator Meeting 2020-06-22
A: And we have... okay, let's start with the state of things right now. We were really affected by the Sepia long-running cluster mishap that happened last week, which means that I really couldn't get any pull requests merged; the container registry builds failed. I hope that, as of today, things will start working again.
A: It's just another way of doing things, I think. And, yeah, I actually think we can talk about the YAML-based communication with cephadm. So the problem is that there are two ways to communicate with cephadm: one is the CLI, the ceph orch apply one, and the other is a service specification, which you also apply. And that's error-prone; at least that's my experience, it's error-prone.
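For reference, the two styles being contrasted look roughly like this in Octopus-era cephadm (the count and file name are invented examples, not from the meeting):

    # CLI style: everything passed as command-line arguments
    ceph orch apply mon 3

    # Spec style: the same service described in a YAML file
    cat > mon-spec.yaml <<EOF
    service_type: mon
    placement:
      count: 3
    EOF
    ceph orch apply -i mon-spec.yaml

Both end up creating the same service specification inside the orchestrator; the difference is only in how it is expressed.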
A: And I have two reasons for that. One is that, well, it turns out that if you forget a parameter and you're passing parameters by position, then it's not trivial to actually detect that and make sure we raise an error; and, in general, users will just mess up their service specifications and have a very hard time recovering from that. And the other reason is that the apply command turned out to be actually not easy.
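As a sketch of the positional-argument problem: some orch apply subcommands of that era took their parameters by position, so a forgotten value can shift the remaining arguments into the wrong slots instead of failing loudly. The realm, zone, and placement values below are invented, and the exact signature is assumed:

    # Intended: realm, zone, then an optional placement
    ceph orch apply rgw myrealm myzone '2 host1 host2'

    # Forget the zone, and the placement string can be silently
    # consumed as the zone name; no immediate, obvious error:
    ceph orch apply rgw myrealm '2 host1 host2'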
B: So, yeah, I'm not sure what I should think about this, because there is, I mean, this "do you really, really want to do this?" kind of prompt that Ceph itself introduces all over the place, and I'm not sure if we should follow that pattern, because I kind of find it stupid, yeah.
B: Yeah, I tend to be on your side. I guess, I mean, we could get very sophisticated and say: if a certain number of services, representing a certain percentage of the services you're running in the cluster, are changing, then we could ask, well, is that really what you want to do, and then, if yes, pass a flag or whatever. But, I mean, as a default behavior, I don't think we should do that. Oh, I just wanted to raise it.
C: I'd generally argue that for most services that are stateless this is fairly harmless; if you accidentally scale down NFS, you can always scale it back up. Where we may want to think about this is anywhere that you could lose data, so maybe there should be a minimum threshold on the mons, and there should be a confirmation for that if you go below three.
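A hypothetical shape for that confirmation; no such guard existed in the orchestrator at the time, and the flag spelling is only borrowed from the convention other Ceph commands use:

    # Scaling the monitors below a minimum threshold would be refused:
    ceph orch apply mon 1
    # Error: this would leave the cluster with 1 monitor (minimum: 3).
    #        Pass --yes-i-really-mean-it to proceed.

    # ...unless the user explicitly confirms:
    ceph orch apply mon 1 --yes-i-really-mean-it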
B: Yeah, but even for gateways: you could definitely not lose data, but if you have a high load on your cluster and a lot of different connections, removing a gateway will drop connections and will definitely introduce bottlenecks, performance issues, whatever. So, I mean, it's not a huge problem, right, but still.
D: No, no, I'm happy with that. I think that these decisions should be taken by the final user, okay? So if the final user wants to have only one monitor, that is something the final user should be able to do. I think, okay, we can go on: we can add a parameter in order to ask, are you really sure that you want to do this?
D
Okay,
but
I
think
that
in
any
case,
we
should
take
that
issue
by
the
final
user,
because
we
do
not
know
what
is
the
lab
in
the
situation
where
the
final
user
is
going
to
deploy
a
disaster
or
what
he's
trying
to
do
with
the
test.
Okay,
so
I
think
that
to
take
the
officials-
and
that
requires
a
lot
of
information
about
the
cluster
in
order
to
do
the
right
thing.
Okay,
so
maybe
but
I
think
that
it's
good
to
have
warnings
Matt
not
to
avoid,
for
example,
to
have
less
than
three
monitors.
A: So I tend to agree: it must be possible for a user to scale down to one monitor, even though that's really going to be an edge case, at least for non-trivial clusters. But, on the other hand, I think it does make sense to warn users and make sure they really know what they're doing if they scale down.
B: I don't know if this comes down to educating the user, or if it is just uncommon or unexpected, or just due to a lack of documentation, because I think it is fine as long as you kind of understand what will happen, so you're not caught off guard; because the nice thing about having automated deployment is that you just specify your OSD spec once and everything is declarative.
B
It
yeah
I
will
say:
I
will
also
tend
to
thing.
We
should
keep
the
current
behavior
of
automatically
deploying
things,
but
explicitly
telling
this
in
the
documentation
having
sentences
like
this
will
be
automatically
pick
up,
anything
that
matches
to
the
specified,
filter
and
the
specs,
and
if
you
do
not
want
to
have
this,
there
is
this
unmanaged
lag
that
prevents
the
spectrum
automatically
placed
or
deployed
or
where.
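The unmanaged flag is a real service spec field; a minimal example of the kind of documentation snippet being proposed (the service_id is invented):

    # cephadm stores this spec but will not act on it:
    # matching disks are NOT turned into OSDs automatically.
    service_type: osd
    service_id: example_osds
    unmanaged: true
    placement:
      host_pattern: '*'
    data_devices:
      all: true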
A
To
the
state,
when
you
just
add
a
new
disk
to
the
class
itself,
what
about
zapping.
B
Do
have
a
tracker
issue
that
let
me
try
to
find
that
that
speaks
about
the
possibility
to
use
the
volume
to
blacklist
a
device
or
blockless
sorry,
and
so
that
I
mean
it's
not
implemented,
but
using
that
volume
to
specify
LVM
tags
or
whatever
mechanism
to
tag
certain
discs,
but
that
they
won't
be
picked
up
is
maybe
a
good
idea
and
that
could
just
be
attached
to
the
Zap
command.
So
after
the
disk
acept,
you
could
automatically
add
it
to
a
list
of.
B
You
know
unwanted
discs,
and
this
could
also
be
helpful
for
users
that
want
to
that
that
they
already
know
that
a
certain
disk
could
not
be
added
to
to
the
cluster
and
for
whatever
reason
they
can't
exclude
it
in
the
the
drive
spec.
So
I
do
SD
spec,
and
maybe
that
will
solve
some
problems.
But
this
is
not
implemented
and
probably
needs
were
thought.
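For reference, zapping is already an orchestrator command, while the exclusion list is only the idea being floated here; the host and device names are invented:

    # Wipe a device so it can be reused (this exists today):
    ceph orch device zap host1 /dev/sdb --force

    # There is no blacklist yet; the closest workaround is to
    # narrow the OSD spec's filters so the disk never matches:
    # data_devices:
    #   rotational: 1
    #   size: '2TB:'   # only disks of 2 TB and larger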
D: Okay, well, just to comment on a couple of things. Personally, this is no surprise to me, because it seems that everything is related, okay? And it starts with the concept that we have of cephadm, okay. Well, what is cephadm: is it just a setup tool, or is it a tool that is going to manage, on its own, the entire cluster?
D
Okay,
so
I
think
that
probably
we
are
having
problems,
because
this
the
situation-
okay,
for
example,
example
about
other
universities.
We
are
working
in
a
declarative
way.
Okay,
we
we
have
a
installer
tool
that
tries
to
work
in
a
declarative
way
and
probably
phenol.
Users
are
not
understanding
this
because
they
understand
that
with
the
ADM
they
are
going
to
start
a
cluster
not
to
manage
running
cluster
and
to
solve
situations.
You
know
in
a
in
a
running
cluster.
So
what
I
think
that
we?
We
need
to
work
more
in
this,
but
a
part
of
that.
D
Another
thing
that
I
think
that
what
we
really
need
is
the
templates
for
the
different
services
or
to
have
a
good
description
of
what
is
a
service
and
to
create
the
service
using
this
kind
of
fight,
for
example,
for
monitoring.
Okay,
you
can
define
other
monitoring
service
with
more
or
less
the
spec
file
that
we
are
using
in
this
moment.
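A monitor service spec of the kind being described is already just a small YAML file, which makes it a natural candidate for the syntactic and semantic checks proposed next; a minimal sketch:

    service_type: mon
    placement:
      count: 3   # a validator could warn here if count < 3,
                 # before the spec is ever applied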
D
Okay
and
I
think
that
this,
what
is
probably
that
we
avoid
a
lot
of
problems
if
we
are
using
a
template
that
we
can
check
syntactically
and
semantically
if
it
has
sense
or
not
a
sense
about
the
monitoring
service,
not
just
deploying
one
time
with
with
one
monitor
and
I
will
also
with.
In
relation
with
this,
we
have
the
the
thing
about
to
modify
and
to
change
settings
automatically,
because
we
want
to
have
a
declarative
work
in
behavior
of
fpdm
and
we
are
checking
the
situation.
D
This
is
the
one
the
main
the
main
point
in
my
in
my
my
comment.
Ok
and
is
related
with
one
of
example
in
in
code
that
is
putting
them
in
the
path
and
I
think
that
what
until
we
have
in
this,
this
intelligence
included
inside
Cafe
diem
about
go
to
manage
in
a
proper
way.
The
cluster
we
saw
in
our.
We
should
avoid
tool
to
make
the
kind
of
modifications
okay.
So
in
the
case,
for
example,
this
example,
this
is
a
setting
that
is
called
config
dashboard
when
complete
tasks
work.
Why?
What?
A: That's, though... the scheduler we do have works, but it's extremely simple. In my opinion, that's independent of whether cephadm itself is actually capable of being a deployment tool versus being an installer tool, because cephadm itself is based on day-two operation, and not...
D: Yes, well, I think that we can, okay, we can do a lot of things, okay, and in the future it could be something pretty awesome, with a lot of functionality for managing Ceph clusters, even with artificial intelligence and a lot of things included. Okay, that is possible, and that is for the future.
D
Okay,
but
I'm
saying
is
that
maybe
in
this
situation
and
taking
into
account
that
we
are
going
to
replace
production
elements
like
deep
sea
or,
for
example,
an
seaboard,
we
are
going
to
replace
these
tools
tested
in
production
with
something
that
is
new
okay
and
taking
into
account
that
the
stabilization
of
the
of
the
system,
just
with
the
functionality
for
his
stunning,
when
I
think
that
we
have
lot
of
work
to
do.
I.
D: I think that maybe we should focus on this first step, to deploy a Ceph cluster, and try to deploy the Ceph cluster in the best condition possible, using the preferences of the final user, in order to start the cluster and to do this perfectly. Okay, I think that it can be much better, okay, better than, for example, ceph-ansible or DeepSea.
D: A perfect tool to provide defaults, okay. But then what we are doing, for example, with this Grafana URL setting is just changing the setting that the user decided is the good one back to a default one, because we think that we know more than the final user. I think that this is not correct. Yes.
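The setting in question is the dashboard's Grafana URL, which the user can set explicitly; both commands below exist, and the URL is an invented example. The complaint is about cephadm later overwriting such a value with its own default:

    # The user points the dashboard at their own Grafana:
    ceph dashboard set-grafana-api-url https://grafana.example.com:3000

    # Inspecting the currently configured value:
    ceph dashboard get-grafana-api-url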
D: So there is no problem in using the default when you are deploying the service, but do not change this default afterwards; okay, do not change it back to the default again if the user wants to use another, different setting. That's what I am saying about this setting and about all the settings: using the defaults when we are deploying daemons or services, that is okay, that is perfect.
D: Well, in this case, I think that it is easier for Grafana and for every daemon. What I'm saying is just to deploy new daemons using the default settings, or the settings that you think are adequate. Okay, this is okay, this is perfect; but if, after that, the user decides to change one setting, we cannot revert the decision of the user automatically. Exactly, exactly, so we are on the same page now. Yes.
A: Okay, iSCSI is still ongoing, so, you know, if we have existing iSCSI tests, which I doubt, that's going to be a bit of a problem.
D: Aside from that, the dashboard team, okay: this is about things that can affect or modify behavior in a way that could somehow affect the dashboard, okay. So if we open a PR like that and we think that maybe it could be interesting to inform the dashboard team, well, that would just be to use the dashboard label, okay? Because I think that there is a dashboard group in GitHub, that is, people from the dashboard team, like Ernesto and Alfonso, I think; and the decision is just to use this to register it.
A: When doing any changes, we are often interfering with different components. For example, if we modify the deployment of RGWs, then it somehow also touches RGW, right? If we change the deployment of an MDS server, then it's directly also related to CephFS. So, in the past, I was a bit hesitant to actually add component labels to cephadm PRs, because it would just have generated a lot of noise if we had added them.