From YouTube: 2019-07-15 :: Ceph Orchestration Meeting
A
Yeah, first things first: we have a nice etherpad for the orchestrators. I would propose to use it a bit more often; we have the opportunity to do it, so let's just make use of it.
B
I guess the idea was that if you deploy Ceph, then at that point you might want to have nodes that are uniform. And if you don't, then it's probably, I mean, I'm gonna say it's a bad idea: if you have nodes that are not uniform, with different disks and types of disks, then basically everything is a mess and you're not going to have a very performant cluster. So I guess the idea was to do the selection across all the hosts of the platform and then just report something that will be uniform across all the machines.
D
It's not so much whether you need it, it's whether it's useful to avoid getting, like... if you just have a drive group that says "use all hard disks and create OSDs out of them" and you apply it across a bunch of nodes, and some nodes have, like, a broken disk or whatever, you won't notice, because they're all locally making different decisions. But if you define the drive group as saying "I want exactly seven hard drives of this vendor and this size", then it'll work, right?
D
Well, it'll work in the sense that one of those drive groups will fail. I guess I'm not really sure what the feedback is, but I think that was the original motivation; that was the idea, right? You want some feedback if you had uniform hosts but then one of them isn't actually uniform, yeah.
B
Yeah, but for this you have to have this tool which has the global view of all the disks, makes the decision and then passes that info around, maybe as drive groups. And that is, I guess, the missing piece at this point. Yeah, I guess we all agree about how things are going to be handled locally, by ceph-volume, yeah, via, I guess, drive groups; but it's more the collection, the analysis and the decision-making, yeah, that is missing.
D
Now, my only concern there is: I think the SSH orchestrator follows, like, the illustrative example. Is there something that we need to implement in order to pass those drive group descriptions through the SSH orchestrator, or is choosing to put it in ceph-volume basically adding a burden onto the orchestrator implementations, I think?
E
I think, I think the advantage of using a drive group there is just that it's, I think, much easier to understand for a human user. You know, if some debugging happens in there, you can look: "oh, okay, I want, you know, I don't know, five drives on this particular host used for this drive group", much easier than "I have, like, data devices: a bunch of drives", and, you know, then you...
E
Though the intended workflow in DeepSea is: we basically expose the inventory command to the user as well, so they can run inventory across all nodes, and, I mean, normally a user has a rough idea of how their nodes look. And basically, I mean, in the simplest case you have all uniform nodes; you write one drive group and that's it, right? You have, yeah, like, one type of OSD. And basically, conceptually, drive groups are supposed to map to, like, the types of OSDs that you want. So you might want some standalone.
E
You might want some that have external WAL/DBs, you might want some that are encrypted, and that is basically supposed to make up a drive group. It gets more complicated when you start thinking about "okay, I only want 14 of those", but I think the rough mapping still holds, so...
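That mapping from "OSD type" to a device filter can be sketched in a few lines. Everything below (the spec keys, the inventory fields) is a hypothetical illustration of the idea being discussed, not the actual DeepSea drive group format:

```python
# Hypothetical drive-group-style specs: one entry per OSD type.
# Key names are illustrative only.
SPECS = {
    "standalone": {"data_devices": {"rotational": True}},
    "external_wal_db": {"data_devices": {"rotational": True},
                        "db_devices": {"rotational": False}},
    "encrypted": {"data_devices": {"rotational": True}, "encrypted": True},
}

def select_devices(filter_spec, inventory):
    """Return the devices from a host inventory matching a device filter."""
    return [dev for dev in inventory
            if all(dev.get(k) == v for k, v in filter_spec.items())]

# Example host inventory: two HDDs and one SSD.
inventory = [
    {"path": "/dev/sda", "rotational": True},
    {"path": "/dev/sdb", "rotational": True},
    {"path": "/dev/sdc", "rotational": False},
]

data = select_devices(SPECS["external_wal_db"]["data_devices"], inventory)
db = select_devices(SPECS["external_wal_db"]["db_devices"], inventory)
```

Here the "external WAL/DB" type would put its data on the two spinning disks and its WAL/DB on the SSD.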
E
We have it in DeepSea already, we have it in DeepSea right now, and we basically, we analyze those drive groups, match that to our inventory and then pass out a huge ceph-volume command. And since there is some interest in, you know, the orchestrators also picking up that concept, it would make sense to make ceph-volume aware of it, a little more formalized.
E
Putting this into ceph-volume, I see this mostly as just another interface to the batch command. I mean, we're not going to get rid of the command-line flags; if, you know, some implementation rather uses those, fine. But we can also just put a whole lot of this code, and that's mostly the stuff that only needs local knowledge, in there; we might as well put this in, so we have that available to everyone.
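"Another interface to the batch command" can be pictured as a thin translation layer that turns a drive-group-like description into a `ceph-volume lvm batch` invocation. The sketch below is an assumption about how such a mapping could look, not the implementation under discussion, though the flags used (`--db-devices`, `--dmcrypt`, `--osds-per-device`) do exist in `ceph-volume lvm batch`:

```python
# Hypothetical sketch: translating a drive-group-style description into a
# `ceph-volume lvm batch` command line.
def batch_argv(data_paths, db_paths=(), encrypted=False, osds_per_device=1):
    argv = ["ceph-volume", "lvm", "batch", "--yes"]
    argv += list(data_paths)                      # data devices, positional
    if db_paths:
        argv += ["--db-devices", *db_paths]       # external WAL/DB devices
    if encrypted:
        argv.append("--dmcrypt")                  # encrypted OSDs
    if osds_per_device != 1:
        argv += ["--osds-per-device", str(osds_per_device)]
    return argv

argv = batch_argv(["/dev/sda", "/dev/sdb"],
                  db_paths=["/dev/nvme0n1"], encrypted=True)
```

An orchestrator could then run that argv on the target host, while other implementations keep constructing the flags themselves.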
C
Maybe just a question from me, to make sure I'm clear: so the drive groups ultimately are a more advanced way to specify some batch mode, or what we configure locally on, on a node, right? And ceph-volume owns what we configure locally on a node for OSDs. Though I think it makes sense, if we need this flexibility for this more advanced case, to have those settings within ceph-volume.
B
But I think it's... we already discussed that, and I think Blaine's gonna pick this one up. Oh yeah, I'm assuming, based on the discussions we had, when it comes to just refactoring the way we bootstrap OSDs and deprecating the old-fashioned OSDs from Rook too. So that's, that's one step, two steps from there. Oh.
B
No, but I think it would be nice to keep in mind that this is still missing the big picture, like: what is the entity that aggregates all the inventories, and just generates, like, a drive group based on that, and distributes it to all those nodes according to rules that have been given by, by administrators? That is still actually missing; it's being done manually, that's the thing.
B
But I think the idea is not to do this per host, but to just apply, like, one pattern, because we know that hosts are... normally they should be uniform, and the configuration should be identical, yeah, because we know that there should be only a single drive group. That's the easiest case, and this is the one I guess we should be aiming for. But then we're still missing the interface that actually describes that and can actually apply it to all the nodes.
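The missing global piece, collecting the inventories, comparing them and flagging the hosts that break the "single drive group" assumption, could start as a simple uniformity check. A minimal sketch, under the assumption that each host's inventory is reduced to a list of (model, rotational, size) tuples; all names here are illustrative:

```python
from collections import Counter

def uniformity_report(inventories):
    """inventories: {hostname: [(model, rotational, size_gb), ...]}.

    Returns (reference_layout, outliers), where outliers maps each host
    whose disk layout differs from the most common layout to its layout.
    """
    layouts = {host: tuple(sorted(devs)) for host, devs in inventories.items()}
    # Take the most common layout across hosts as the intended one.
    reference, _ = Counter(layouts.values()).most_common(1)[0]
    outliers = {h: l for h, l in layouts.items() if l != reference}
    return reference, outliers

inv = {
    "node1": [("ST4000", True, 4000)] * 7,
    "node2": [("ST4000", True, 4000)] * 7,
    "node3": [("ST4000", True, 4000)] * 6,   # one broken/missing disk
}
reference, outliers = uniformity_report(inv)
```

This would surface exactly the "one node has a broken disk" case raised earlier, instead of each node silently making a different local decision.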
A
...the Kubernetes API, and synchronizing that with an in-memory representation in the orchestrator module, and...
A
Everything... that's not really extremely fast; it's too slow for the dashboard, because the request goes from the browser to, to the ceph-mgr daemon, then to the Kubernetes API, then back to the ceph-mgr daemon and then back to the browser, and within the ceph-mgr we have to return really fast.
C
Yeah, yeah, just a few quick things. So, just as a note: v1.0.4 was released on Friday; it's a critical RGW fix, using the correct options with Beast, which Sebastian got ready. So that's out there. I'm thinking about the 1.1 release; the current proposal that we're gonna bring up in the community meeting tomorrow is that mid-August is, like, the date for feature complete we'd like to shoot for, and then end of August, the 1.1 release. I think that, yeah.
C
Then one thing to note, for the minimum Ceph version: I think we said that the 1.0 release was the last release where we support Luminous. Before... I opened the PR this weekend for setting the minimum Ceph version to Mimic; 13.2.4 is what I proposed in the PR. That gives us ceph-volume support, so that all new BlueStore OSDs being created would be using ceph-volume, you know, instead of our old partitioning scheme; for Rook, that was my goal.
C
But at least forcing them off Luminous and onward, and making sure we're on ceph-volume. At the same time, for the OSDs, we still need to support the existing OSDs, of course, without requiring some data migration; anyway, new OSDs all on ceph-volume. Any other thoughts around the minimum version there?
B
I think it's nice to do it this way, so we really encourage people to upgrade faster. This is something we have been doing pretty well, I think, for the past years, so we should continue to do it; we never actually had any major issues with that. Great, so, again: we should really encourage people to update their clusters.
B
I was just thinking that, because it's, like, summer break, a couple of people are gonna go on PTOs. So if we freeze, like, if we freeze at half of August, then people might not really have time wrapping up their patches, I think. So perhaps freezing at the beginning of September, and then releasing at half of September, something like this, could give people more time polishing their PRs, because, yeah, people obviously are going to take PTOs during summer, I think. Yeah.