From YouTube: 2020 02 03 :: Ceph Orchestration Meeting
A
Okay, [unclear] is not going to join. Let's start. Welcome, everyone, to the Ceph orchestrator meeting.
C
You pass the orchestrator a drive group specification, as a file or whatever, and that gets directly translated to a ceph-volume call, and that will then eventually be executed on the target host, or on the three hosts.
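(For illustration, a minimal Python sketch of that flow: a drive group rendered into a ceph-volume call. The `DriveGroupSpec` and `to_ceph_volume` names echo Ceph's python-common module, but the data shapes and signatures here are assumptions, not the real API.)

```python
# Hypothetical sketch only: names echo Ceph's python-common drive group
# code, but the shapes and signatures are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DriveGroupSpec:
    """Declarative description of which devices should become OSDs."""
    host_pattern: str = "*"                            # glob matched against host names
    data_devices: Dict = field(default_factory=dict)   # e.g. {"rotational": True}


def to_ceph_volume(spec: DriveGroupSpec, data_paths: List[str]) -> List[str]:
    """Render already-resolved device paths into one ceph-volume invocation.

    ceph-volume's `lvm batch` subcommand is real; wrapping it this way
    is the assumption.
    """
    return ["ceph-volume", "lvm", "batch", "--yes", *data_paths]


spec = DriveGroupSpec(host_pattern="node*", data_devices={"rotational": True})
cmd = to_ceph_volume(spec, ["/dev/sdb", "/dev/sdc"])
print(" ".join(cmd))  # ceph-volume lvm batch --yes /dev/sdb /dev/sdc
```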
A
For ceph-volume to actually accept drive groups would now be a bit easier, as we just have to add a dependency on the python-common library itself, or we could make use of a lot of the existing translate functionality, which is already packaged and easily installable. So there are plenty of ways to make the translation usable for Rook.
D
The thought I have around this challenge is the difference between declarative and imperative orchestration, because Rook is all about declarative and cephadm is about imperative. In Rook, you know, we don't really want the imperative ceph-volume commands. I mean, we could probably figure out how to make that work, there's always a way to make things work, but if we're generating ceph-volume commands directly from the orchestrator, then we'd have to pass those around through the cluster.
A
I don't know, but as long as we store the drive groups in the CRDs, then either Rook needs to translate them to imperative commands, or we need a kind of wrapper tool to translate them into imperative ceph-volume commands. In any case, storing ceph-volume commands in the CRDs doesn't really make sense, in my opinion.
C
Yep, and that's really a very, very small portion of the code. The translation from the drive groups to an actual call is like 50 lines of code, so that's a really small portion. The big portion is translating the declarative drive group to the actual mapping of the devices. That's a big portion, and that's universal for all orchestrators.
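(A sketch of what that universal device-mapping step could look like: filtering a host's inventory with the attribute filters from a drive group. The inventory field names loosely mirror `ceph-volume inventory` output but are assumptions here.)

```python
# Illustrative device mapping: select devices whose attributes satisfy a
# drive group's filters. Field names loosely mirror `ceph-volume inventory`
# output; treat them as assumptions.
from fnmatch import fnmatch


def match_devices(inventory, filters):
    """Return paths of available devices matching every filter (globs allowed)."""
    selected = []
    for dev in inventory:
        if not dev.get("available", False):
            continue  # skip disks that already carry partitions or LVs
        if all(fnmatch(str(dev.get(key, "")), str(pattern))
               for key, pattern in filters.items()):
            selected.append(dev["path"])
    return selected


inventory = [
    {"path": "/dev/sdb", "available": True,  "rotational": "1", "model": "ST4000NM0023"},
    {"path": "/dev/sdc", "available": True,  "rotational": "0", "model": "MZ7KM480HAHP"},
    {"path": "/dev/sda", "available": False, "rotational": "0", "model": "MZ7KM480HAHP"},
]
print(match_devices(inventory, {"rotational": "1"}))   # ['/dev/sdb']
print(match_devices(inventory, {"model": "MZ7KM*"}))   # ['/dev/sdc']
```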
B
Is that something that we can request as a feature of ceph-volume, to accept drive groups as input? I think that sounds like the most straightforward way to do things. That way there doesn't need to be a third-party library to do something; it's using that shared library that we intended, to be able to share code and build on it.
E
One question, or my concern about that: I think that, basically, the drive group is a tool that has been created to make it easy to specify a set of devices to be used as OSDs, and it's basically a selection that is made by the user at a certain moment. So probably the definition that the user is going to use the first day is not going to be valid on the second day or the third day.
E
Because we have selected part of the devices and these devices are used for OSDs, on the second day it's a different set of devices that is going to be selected. Or even if we change or modify the infrastructure in the cluster, possibly the drive group that was used by the user on the first day is not right anymore.
E
But if on the second day, for example, the user has added a new disk, it could be possible that this disk will be automatically used, because in the specification this disk is included. What I mean is that if you are going to use any kind of abstract definition of a set of devices, what you are going to have is something that is true at a certain moment in time, and you don't know if it is going to be true the next day.
C
That's correct, yes, there is currently no locking. So if you deploy your OSDs on the first day, and then on the second day you add a new disk that matches your drive group specification, it would go out and, for example, deploy the new OSD; there's no locking in place. Maybe it should do something like that, but yeah.
D
That's the design of drive groups, right. If the dashboard wants to treat it as imperative, then maybe the dashboard module, or however we implement it, would remove it from the spec after it's done. Yeah, Blaine had that in his PR design as well: make it look imperative.
C
Correct, so that's the whole idea, that we do not use path names at all. We just use the properties and attributes of disks, and that way we describe what our setup should look like. On that topic, I added docs, there are follow-up docs, and I added that to the etherpad. Maybe that makes it a bit clearer when we read this. I would strongly recommend against using static path names altogether, because that's kind of defeating the purpose of drive groups and the whole declarative approach that we're taking.
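(For example, a drive group keyed purely on device properties rather than /dev paths might look like the following, shown as a Python dict; the real spec is YAML/JSON, and the exact field spellings may differ by Ceph version.)

```python
# Property-based drive group, as opposed to static path names. Keys mirror
# the documented drive group fields (host_pattern, data_devices, db_devices)
# but should be treated as illustrative.
drive_group = {
    "host_pattern": "osd-node-*",   # which hosts this spec targets
    "data_devices": {
        "rotational": True,         # any spinning disk ...
        "size": "2TB:",             # ... of at least 2 TB
    },
    "db_devices": {
        "rotational": False,        # put the DB on solid-state devices
        "limit": 2,                 # but claim at most two of them
    },
}
```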
E
Well, my concern is just the same: if we use a declarative way to define a set of devices, what we have is that this declarative way is going to produce different results depending on what the infrastructure is, for example if you remove a host node. So I think that, well, it has a certain utility, but I think there is a risk.
C
And that's currently correct, although it only adds things and doesn't remove things. So if your cluster layout changes, it would not magically remove and realign the OSDs; it would only add new ones currently, because I think the automatic removal or rearrangement is a bit scary currently, yeah.
D
So your concern really is about whether the user unexpectedly gets OSDs when maybe he wanted to use his devices for something else, or wasn't ready to add them to his cluster. Yeah, I guess if we're going to design it to be declarative, which is also good for the Rook side, then the dashboard should somehow make that visible as part of the UX: hey, you've selected all devices of this kind; in the future we'll add them automatically for you. However the dashboard is designed.
B
Is there anything in the drive groups to specify, like, if you wanted to keep some disks in reserve, or have some ability to add more disks and prepare for expansion? Like do some different configuration to have the drive groups hold off and create some number of OSDs, where it's like: okay, after this.
E
Yeah, another possibility is just to present more options: if you don't store the drive group itself, if you only get a drive group specification, translate it into ceph-volume commands, execute these commands and do the provisioning of the OSDs, then what you are going to have is exactly what the user wants in that moment. So if we forget the drive group specification at that moment, no new or collateral effects are going to be present. So this is a possibility.
C
I don't know; you have to be aware, or we have to make users aware, that when they add new hardware that matches this kind of specification, it will result in new OSDs, because I think that is actually what users want. They usually have nodes that have a certain amount of disks.
B
So I think part of the question related to this is whether the orchestrator module is planning to remember drive groups that it has used for nodes, because it sounds like that's something that is kind of desired, something that remembers the drive groups used. I mean, given the proposal that we have and the direction that we're moving, I think Rook will be able to remember the drive groups used, but I'm talking about orchestrators as a broad topic.
B
Okay, I have a follow-up question. It sounds like probably there should only ever be one drive group per node, then.
C
There is a host_pattern key in every drive group where you can use glob expressions to map one drive group to a number of hosts, right. But if you have non-uniform hosts, you probably want to have multiple drive groups. So in a simple case where you have like five nodes and all nodes look the same, you can.
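(A minimal sketch of that host_pattern fan-out, assuming the glob semantics reduce to fnmatch-style matching:)

```python
# One drive group fanning out to every host whose name matches its
# host_pattern glob; a minimal sketch assuming fnmatch-style semantics.
from fnmatch import fnmatch

hosts = ["node1", "node2", "node3", "node4", "node5", "gateway1"]
host_pattern = "node*"  # taken from the drive group spec

targets = [h for h in hosts if fnmatch(h, host_pattern)]
print(targets)  # ['node1', 'node2', 'node3', 'node4', 'node5']
```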
B
So if ceph-volume supports drive groups, if ceph-volume accepts the drive group as input and the drive group doesn't apply to that node, shouldn't ceph-volume just say: oh okay, this node doesn't apply, I'm going to report success and continue? That means Rook would then run drive groups on all nodes, and then it actually doesn't have to have any logic whatsoever. It just has to run all the drive groups on all the nodes during OSD provisioning.
C
But yeah, I would certainly, yep. [unclear]
E
I think that ceph-volume was born as a tool to make it easy to create OSDs on each of the hosts. So if we follow the principle of making simple tools that do simple things, we should try not to put too many features or too many abilities into this tool. Apart from that, well, I think that this really is not a problem.
A
And that would be kind of an alternative to Blaine's approach. Basically, we only have drive groups in the dashboard, and then they are transferred to the manager's Rook orchestrator module, and there the drive groups are translated to the list of drives, and the drives are then stored in the CephCluster CRD.
A
I just don't know if I like the idea of having a translation from the user interface to a different structure for Rook, and then back to ceph-volume.
E
With cephadm you are going to do exactly the same: you need to translate the drive group specifications in the orchestrator layer in order to get ceph-volume commands, and use these ceph-volume commands to do things on external hosts. So basically the translation between drive groups and ceph-volume commands is always in the orchestrator layer.
E
Yes, but this layer, this library, is called in your cephadm module, I think. Yes, what I'm saying is that between the external orchestrators and the dashboard, for example, we have one layer that is the manager orchestrator module, and in this layer what we should do is just translate between drive groups and ceph-volume commands, and then send these ceph-volume commands to the external orchestrator in order to do the things.
A
That's everything that lives in the Rook operator; all the Go code that lives in the operator lives within the manager, alongside the cephadm module. So there is no real external orchestrator.
E
The result is that with cephadm you are going to do more things than we will need to do in the Rook module, because the provisioning part is something that is going to happen in the external orchestrator, in Rook in our case, and not with cephadm. But basically what I am saying is that the translation between drive groups and ceph-volume commands is something that is going to be in the orchestrator module.
C
Currently, the only advantage that I see, that I can remember off the top of my head, is that once different orchestrators use different tools and not ceph-volume, this logic would need to be implemented in a different tool. Currently we're just saying cephadm uses ceph-volume and Rook also uses ceph-volume, so it's kind of fine, we send out the same commands. But once, for example, Rook uses a different tool, then we could generate those different commands and send them over to Rook.
C
I think we also decided against it because we needed a way to get the actual disks that are the result of the drive group in the dashboard, for example, and I think the dashboard does not have access to ceph-volume, not natively. It has access to ceph-volume inventory, right? Not natively, I mean, it would have to call it.
C
It would have to pass ceph-volume the drive group specification and then reach out to every host, which then does its computation and sends it back to the dashboard. But if it's implemented the way it currently is, in Ceph's python-common, then we would only need the ceph-volume inventory output of all the nodes, which can be cached, and then the computation can happen natively inside that library, and then probably the representation in the dashboard is quicker and more native.
C
So I personally don't see too many advantages in having the drive groups being parsed in ceph-volume. I think where it currently is, it's probably fine. I mean, I'm not too sure how exactly it handles the OSD creation, but from a very general, high-level overview it should be fine how it currently is.
B
You know, Sebastian mentioned having a preference for having them in the CRD, and where we are now is, you know, we should track what Rook is doing in the CRD, even if it adds it and later removes it. But if we have something declarative, I think we should use that, compared to giving Rook a number of ceph-volume commands to use.
C
I had a discussion a couple of days ago, also with Sage, and the argument for zapping is this: the concern from people about why we shouldn't zap was that there is potential data loss, but due to the approach we take, the removal process means the OSD is completely clean and has no PGs on it anymore. And we also have the ok-to-stop and safe-to-destroy commands before this.
C
So the probability of losing data while zapping is close to zero, and in my experience people actually expect that the disk is clean, that it has no LVs on it anymore and no partitions. So that's why we added zapping on top of it, but I'm not entirely sure if that's the right plan, if that's the right thing to do, I mean.
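(For reference, the gated removal flow described here could be scripted roughly as below. `ceph osd ok-to-stop`, `ceph osd safe-to-destroy`, `ceph osd purge`, and `ceph-volume lvm zap` are real commands; the wrapper itself and its ordering are a sketch, not the orchestrator's actual implementation.)

```python
# Sketch of a zap-after-safety-checks removal flow. The Ceph CLI commands
# are real; this wrapper and its ordering are illustrative only.
import subprocess


def run(cmd):
    subprocess.run(cmd, check=True)  # raise if any safety check refuses


def remove_osd(osd_id: int, zap: bool = False):
    # Fails unless stopping this OSD keeps all PGs available.
    run(["ceph", "osd", "ok-to-stop", str(osd_id)])
    # Fails unless the OSD no longer stores any PG data.
    run(["ceph", "osd", "safe-to-destroy", str(osd_id)])
    # Remove the OSD from the cluster map.
    run(["ceph", "osd", "purge", str(osd_id), "--yes-i-really-mean-it"])
    if zap:
        # Wipe LVs and partitions so the disk reads as clean and reusable.
        run(["ceph-volume", "lvm", "zap", "--destroy", "--osd-id", str(osd_id)])
```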
D
So with the declarative approach, I guess the question I have, even outside of Rook, is: if it's a declarative approach and the drive still matches the drive group spec, is the orchestrator going to turn around and re-add that OSD after it's removed? Or does the drive group spec allow us to blacklist an OSD so it doesn't do that? How do we avoid adding them back in the declarative case?
C
The thing is, at least the workflow for cephadm is, I mean, you do have a declarative drive group specification, but you have to actually issue a command to deploy these. So if an operator decides that he wants to remove the OSD with ID number one, he can do that. But then he shouldn't call the OSD deployment command again, because then he will end up with the same OSD again, right.
C
I can see it having an advantage; some people definitely want to have it zapped automatically, but for the Rook case I definitely see the problem here. So if we just add an option and default to not zapping in Rook, I think that should be fine, because blacklisting is difficult, right. If it's a completely declarative approach, what do you want to blacklist?
A
This relates to the glob argument for the target of the drive groups, though. If we have different means of specifying to which hosts the drive group should apply, labels or any other means, then we would maybe have better control of that blacklist on the whole host for that drive group. Like, if you are removing a whole node, we could automatically remove the host from being a target for that specific drive group.
B
That might cause greater problems, though, like when you want that drive group to apply with the exception of one OSD. I mean, that seems like something that would be good for an interim state: okay, I removed these disks, now pause orchestration on that node while I go remove the disk. But if someone forgets to remove the disk, or whatever, that seems like it might cause problems, with users asking why their node is not getting updates, and so on.
C
What we actually could do is this: the disks that are flagged as available, and hence are used by the drive groups, or by ceph-volume in the end, depend on the available flag that ceph-volume sets. So we could leverage ceph-volume to set LVM tags saying that this should not be used, or that this should be flagged as unavailable.
C
Yeah, because we're already using LVM tags for basically everything that ceph-volume is creating itself, maybe that's the right approach.
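(A sketch of that idea: since ceph-volume already records its state in LVM tags, an extra tag could mark a device as off-limits. The tag name `ceph.unavailable` is invented for this example, not an existing convention; lvchange and lvs are standard LVM commands.)

```python
# Marking a logical volume as off-limits via an LVM tag, and checking it.
# The "ceph.unavailable" tag name is invented for this sketch; lvchange/lvs
# are standard LVM tooling.
import json
import subprocess


def flag_unavailable(vg: str, lv: str):
    """Tag an LV so a later inventory pass could skip it."""
    subprocess.run(["lvchange", "--addtag", "ceph.unavailable=1", f"{vg}/{lv}"],
                   check=True)


def is_flagged(vg: str, lv: str) -> bool:
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_tags", f"{vg}/{lv}"],
        check=True, capture_output=True, text=True,
    ).stdout
    tags = json.loads(out)["report"][0]["lv"][0]["lv_tags"]
    return "ceph.unavailable=1" in tags.split(",")
```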
B
Yeah, my plan was to leave replacement until after add and remove are implemented. I thought that add and remove are kind of critically necessary, and replace is kind of a nice-to-have. The last thing I have is something that is probably more Rook related, but I thought it would be good to bring up here anyway: whether the Ceph manager module should be, quote unquote, opinionated, whether it should just sort of by force take over the storage configuration section of the Rook CephCluster CRD once it's managing OSDs via drive groups.
B
So this could mean that, you know, user configuration is a little bit forked by using the dashboard, but it does mean that the dashboard is sort of taking charge and being opinionated, saying: I'm going to be the way to manage OSDs now. So, since drive groups are sort of applied to clusters at a time and not to individual nodes, it's one of my questions whether the manager module should only take over a single node at once, or whether the approach is more all-or-nothing.
B
I think it's more like: if the user has specified something in the storage config and then the user starts using drive groups, should the dashboard or the CLI module automatically remove those pre-existing user configs, assuming that the new drive-groups-based configs are the preferred way of doing things?
B
I think that's fair. I think this is also a question that we can decide later; it doesn't have to go along necessarily with the proposals that I've been preparing for Rook. We can make a decision and change it. My interim decision would probably be to do as little change to what the user has configured as possible, as makes sense, and then decide later if we actually should be doing that, or add options to do that in the dashboard or something. Yep, if that makes sense. Although I don't want to add a whole bunch of specific options to the dashboard if that's not necessary.