From YouTube: 2019-09-23 :: Ceph Orchestration Meeting
B
That's already possible, and it's also possible to make it so the CRUSH map sees their actual size, but the weight-set override is zero, so the balancer can slowly ramp it in. There's a section somewhere in the docs that describes how to do that. There are like two settings they have to set. Basically, that's sort of that mode.
B
So yeah, it's possible. It's not super well documented; I'm not sure whether we should make that the default behavior or not, but as far as...
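A minimal sketch of that workflow with the compat weight set, assuming the crush-compat balancer mode is in use; osd.7 and the timing are illustrative:

    # Create the compat weight set used by the crush-compat balancer mode.
    ceph osd crush weight-set create-compat

    # The OSD keeps its real size in the CRUSH map, but its weight-set
    # override starts at zero, so no data is mapped to it yet.
    ceph osd crush weight-set reweight-compat osd.7 0

    # Let the balancer ramp the override up toward the real weight.
    ceph balancer mode crush-compat
    ceph balancer on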
A
Okay, the other thing here is creating drive groups in the dashboard. Because the dashboard will need to create drive groups for the orchestrator, we need a proper ceph-volume inventory from Rook. For creating these, there is little way around it, basically.
A
We need to have the ceph-volume inventory; otherwise we have to do a special hack for Rook in the dashboard. That would go into a different code path with some reduced functionality, and that would be kind of weird.
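For reference, a sketch of the inventory the dashboard would consume, assuming the standard ceph-volume CLI and the Nautilus-era orchestrator command naming:

    # Per-host disk inventory as JSON; this is the data drive groups
    # are built from.
    ceph-volume inventory --format json

    # The same data surfaced cluster-wide through the orchestrator module.
    ceph orchestrator device ls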
A
Summarizing it here, I've just summarized what's actually in there right now: we have three ways of deleting OSDs from Rook. The first is removing a complete node from the CephCluster custom resource. That works as far as I know, but it's kind of dangerous.
A
The second way is a half-automated version: removing the OSD with ceph osd purge, then waiting for Rook to remove the deployment, and then manually deleting the drive from the CephCluster custom resource. The last method is to do everything manually. My proposal is to automate basically the second way of doing things from within the Ceph orchestrator, because it's inherently imperative: the user has to do something explicit in order to remove OSDs, and it's not done implicitly. We can still support the existing functionality from within Rook, and we could share code with the SSH orchestrator.
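A sketch of that half-automated flow, assuming a Rook cluster named rook-ceph and osd.3 as an illustrative id:

    # 1. Imperative, user-initiated: take the OSD out, wait for its data
    #    to drain, then purge it from Ceph.
    ceph osd out osd.3
    ceph osd purge 3 --yes-i-really-mean-it

    # 2. Rook notices the OSD is gone and removes its deployment.
    kubectl -n rook-ceph get deployment rook-ceph-osd-3   # should disappear

    # 3. Manual today, proposed to be automated: delete the drive from the
    #    CephCluster custom resource so the operator does not recreate it.
    kubectl -n rook-ceph edit cephcluster rook-ceph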
A
So in the dashboard, for example, you click on "remove this OSD". Then the Ceph orchestrator starts by calling ceph osd out on that OSD, and then Ceph empties the OSD, basically. Then Rook discovers: oh, this OSD is safe to destroy, and removes it. And then the Rook manager module again discovers that, oh, the deployment is gone, I can remove that device from the CephCluster custom resource.
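The "safe to destroy" check in that flow maps onto a real Ceph command; a minimal sketch of the polling loop Rook (or a human) might run, with osd.3 illustrative:

    # Take the OSD out so Ceph migrates its data away.
    ceph osd out osd.3

    # safe-to-destroy exits 0 only once the OSD can be removed without
    # risking data; until then it returns an error (EBUSY/EAGAIN).
    until ceph osd safe-to-destroy osd.3; do
        sleep 10
    done

    # Now it can be purged without risk.
    ceph osd purge 3 --yes-i-really-mean-it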
D
Like, the disk doesn't necessarily have to be in the custom resource. You're right, though: if that disk does exist there, it needs to be deleted.
D
Can you remind me? I know, I think we've had this conversation before. The OSD reports that it's safe to destroy; does that mean that removing that OSD from the CRUSH map then won't incur a penalty? Is that right?
B
I don't know; I would have it remove it. That's gonna work better.
D
I mean, you know, when there is a failure, there's going to be data movement; we can't really change that. But there's some risk if Rook is able to make decisions about something that might cause unnecessary data movement, because, like, Rook doesn't have the...
B
It depends on what kind of removal it is. If it's a device that fails in place, and then the cluster heals, and then you decide that you don't want it at all and you want to delete it, then yes, you'll have data moving again, but I'm not sure that's common. If you're removing a whole node, then that's something else: basically, if you mark something out, it offloads, and then you remove it, then data will move again.
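One way to avoid that second movement in the replace-in-place case is ceph osd destroy, which keeps the OSD's CRUSH entry so a replacement disk can take over the same id and weight; a sketch, with the id illustrative:

    # Replacing a failed disk in place: destroy keeps the CRUSH entry, so
    # the replacement backfills into the same position (one data movement).
    ceph osd destroy 3 --yes-i-really-mean-it

    # Removing it for good: purge also drops it from the CRUSH map, which
    # remaps PGs and moves data a second time.
    ceph osd purge 3 --yes-i-really-mean-it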
A
Doesn't seem so. One small thing: we're aiming to discuss the Rook CI long-term in this week's community call.
A
Yeah, let's try that. Yeah, I will do a Doodle then for the... for the whole week.
A
Okay, Travis, do you want to continue? Yeah.
C
Yeah, with Rook, just one overall thought: we had 1.1.1 ship on Friday, so there's just a whole bunch of fixes in that release.
C
Good stabilization things. I don't have a lot to say about that in this context, but it's good to have that out.
E
No, I think we're good; it's good that we have it. I'm sure 1.1.2 will follow soon, as I'm working on fixes already. And yeah, I built the version of... I sent a PR on OperatorHub to bump to 1.1.1 as well; it's pending on the operator.
E
Yeah, nothing. I'm working on something else for downstream at this point. Remember the bug we have where we've locked devices and everything? They're working on this one, so it's not really related to orchestration, general orchestration.
C
Okay, yeah, I think that's all I've got.
B
For podman, so I suspect that is just going to take some experimentation.
B
I kind of like the idea of piping the script to standard input and running it that way, but I don't... I think I'm not super opinionated. I think we just need to figure out whatever works and go from there.
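A sketch of the stdin idea, under the assumption that the SSH orchestrator pipes a local script into a shell inside a container on the remote host; the hostname, image tag, and script name are all illustrative:

    # Pipe a local script over SSH into a shell running inside a podman
    # container on the remote host, instead of copying the file over first.
    ssh root@node1 \
        'podman run --rm -i --entrypoint /bin/sh ceph/ceph:v14.2.4 -s' \
        < provision_osd.sh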