From YouTube: 2020-03-02 :: Ceph Orchestration Meeting
C: I was hoping Sebastian would be here; I just have one item. We talked about rook last week in the community meeting (thanks, Juan Miguel, for adding that), and we talked it through, so we'll sync with Sebastian afterwards. But basically the conclusion was that we'll create a new repo in rook for the Python client, and then we'll move that Python client code from the ceph repo, where it was just created, over to rook, so we don't maintain it in the ceph repo anymore.
C: And I'm happy to start talking through the work items that we're on. So we need to do some testing with the octopus RC in rook. We actually already have a PR open just to do some testing against octopus; let me go find that PR. Basically the PR just updates our integration tests to run against octopus using the devel latest tag, so we pick up the latest octopus image.
D: Okay, yeah, I've got a long list of stuff. I guess we could start with rook, so good to hear that that's coming. I think I just need to sit down and figure out how to run it locally. I've got kube installed on my local boxes, so I need to figure out how to run rook on that cluster from my dev build branch or whatever; I must figure out how to do that.

D: How to do that, anyway. Okay, okay, but...
D: I was just gonna say, I think we've got the rook code there, all these interface changes that we've been making, and hopefully we made all the right fixes, but I don't know if anybody's actually tested it and knows what the current status of the set of orchestrator commands is, like how many of them actually work in rook. We really need to be able to play with it and just see what works and what doesn't.
E: I've been able to create an equivalent cluster just as you were saying, with kubeadm, but at this moment I am blocked with that. I cannot get the cluster started because of the monitors: it is impossible to connect to the monitors, and the operator, which is trying to run the quorum status command, times out every time. That's what I have.
E: And on the other hand, this past week we started with the integration tests for the orchestrator. There is an open pull request just to check whether I was on the correct path, and it seems that it's okay. At the moment I have three or four more tests covering the orchestrator API; let's see if I finish with those tests, and then I'll continue with the rook integration tests.
D: Okay, all right, cephadm. Sebastian isn't here, but Josh, you're here, so okay. The apply thing merged; that's the main thing, the big change. It means that cephadm now behaves a lot more like Kubernetes: you basically give it a spec for a service, like "I want three monitors and five managers" or whatever, and it persists that, and then in the background it has an operator.

D: Basically, that sits there and makes that true, so it looks and feels a lot more like Kubernetes now. So that's merged. The next one is the scheduler one, which I tested out several times over the weekend; it's rebased and ready to merge. I was gonna wait for Sebastian to look, since he had a comment, but I think since he's sick we can just dismiss it and go ahead and merge it.
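In other words, something like this minimal Python sketch: apply only records intent, and a background pass converges the running daemons toward it. All names here are hypothetical; the real cephadm mgr module is far more involved.

```python
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    service_type: str   # e.g. "mon", "mgr"
    count: int          # desired number of daemons

class MiniOperator:
    def __init__(self):
        self.specs = {}      # persisted desired state, keyed by type
        self.running = {}    # daemon counts actually running, keyed by type

    def apply(self, spec: ServiceSpec) -> None:
        """'apply' only records intent; it does not deploy anything."""
        self.specs[spec.service_type] = spec

    def serve_once(self) -> None:
        """One pass of the background loop: nudge reality toward intent."""
        for svc_type, spec in self.specs.items():
            have = self.running.get(svc_type, 0)
            if have < spec.count:
                self.running[svc_type] = have + 1  # deploy one daemon
            elif have > spec.count:
                self.running[svc_type] = have - 1  # remove one daemon

op = MiniOperator()
op.apply(ServiceSpec("mon", 3))
op.apply(ServiceSpec("mgr", 5))
for _ in range(5):
    op.serve_once()
assert op.running == {"mon": 3, "mgr": 5}
```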
B: I mean, for upgrades it kind of makes sense, right, because they could potentially be very, very long-running. But all the apply-service kinds of things, and OSD removals, they're not that long-running. I mean, the longest it will actually take is on the first deployment, and even that is not longer than, I don't know, at max half an hour if the cluster is really, really big. So I don't know if it makes sense.
D: The longest delays I've seen are just... I don't know why; sometimes I see it being slow, but I haven't looked at the logs or paid close attention. Okay, the other thing was: the service description right now doesn't have a reference to the spec. I think we should replace that field you'd added, whatever it was called, with the spec itself, so that we can enhance the output of the ls command and you could see what the placement was.
D: ...and put the placement description in there, which is what that pull request does. There's something else, actually... oh right, the other thing was: the service description has this weird rados_config_location field that nothing seems to use right now. But you could imagine that other things will want to look at type-specific details about the service that are all in the spec, so it makes more sense to just reference the spec than to add a bunch of extra random fields to the service description, it seems.
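For illustration, the shape being suggested might look roughly like this; the field and function names are hypothetical, loosely modeled on the mgr orchestrator types, not the actual code:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceSpec:
    service_type: str
    placement: str = ""                         # e.g. "label:mon"
    extra: dict = field(default_factory=dict)   # type-specific details

@dataclass
class ServiceDescription:
    service_name: str
    running: int
    size: int
    # Instead of copying assorted fields (placement summary,
    # rados config location, ...), hold a reference to the spec:
    spec: Optional[ServiceSpec] = None

def format_ls_row(d: ServiceDescription) -> str:
    """An `ls`-style row that can show placement via the spec."""
    placement = d.spec.placement if d.spec else "<unmanaged>"
    return f"{d.service_name}  {d.running}/{d.size}  {placement}"

desc = ServiceDescription("mon", 3, 3, ServiceSpec("mon", "label:mon"))
print(format_ls_row(desc))  # mon  3/3  label:mon
```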
D: So you can have the spec basically tell it to move a daemon from one place to another, which I'd started to do before the whole apply thing, but I think it needs to just be redone on top of it. Is that something that you want to look at, Joshua?
D: So basically, what it does is: when it's doing the node placement, the spec is what the actual user intent is, and then after you go through this node assignment, it modifies that same structure and populates the hosts field, and then that's what the rest of apply-services uses. So if you look at the spec before, it might have no hosts, because you have, like, "label foo", and then afterwards hosts will be populated, so you have both "label foo" and all the hosts. I think maybe we...
B: I think, in my opinion, it should not modify the specs at all, right? So if you pass in a placement spec with a host, you should also save it with the host. But if you just save it with only a count or a label specification, you should only ever get that back and only save that.
D: Exactly, yeah. I think basically that just means a pretty simple pass over the node assignment class to change it so that it just returns the list of hosts as a return value instead. It should be a pretty quick change; there's only one caller now.
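A sketch of the change being proposed, with hypothetical names: the scheduler resolves the placement and returns the hosts, leaving the saved spec untouched.

```python
from typing import Dict, List, Set

class PlacementSpec:
    def __init__(self, label=None, count=None, hosts=None):
        self.label = label
        self.count = count
        self.hosts = hosts or []

def assign_nodes(placement: PlacementSpec,
                 all_hosts: Dict[str, Set[str]]) -> List[str]:
    """Resolve a placement to concrete hosts WITHOUT mutating the spec.

    all_hosts maps hostname -> set of labels on that host.
    """
    if placement.hosts:                  # explicit hosts win
        return list(placement.hosts)
    if placement.label is not None:      # e.g. label "foo"
        matched = [h for h, labels in all_hosts.items()
                   if placement.label in labels]
    else:
        matched = list(all_hosts)
    if placement.count is not None:
        matched = matched[:placement.count]
    return matched

hosts = {"host1": {"foo"}, "host2": {"foo"}, "host3": set()}
spec = PlacementSpec(label="foo")
print(assign_nodes(spec, hosts))  # ['host1', 'host2']
assert spec.hosts == []           # the persisted spec is unchanged
```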
D: The other thing was: there's this placement argument that is on every command now. Part of that pull request that's about to merge fixes all the CLI commands so that, instead of taking a count, a node list, and a label, there's just one argument called placement, and it goes through that string-parsing helper thing or whatever to do it. But it's currently a list of strings, and I wonder if we should just make it a single string argument.
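As a rough illustration of folding count, hosts, and label into one placement argument, the string-parsing helper could do something like the following; the token syntax here is an assumption for illustration, not the actual ceph CLI grammar:

```python
def parse_placement(placement: str):
    """Split one placement string into (count, label, hosts)."""
    count, label, hosts = None, None, []
    for token in placement.split():
        if token.isdigit():
            count = int(token)                  # e.g. "3"
        elif token.startswith("label:"):
            label = token[len("label:"):]       # e.g. "label:mon"
        else:
            hosts.append(token)                 # e.g. "host1"
    return count, label, hosts

print(parse_placement("3 label:mon"))        # (3, 'mon', [])
print(parse_placement("host1 host2 host3"))  # (None, None, ['host1', ...])
```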
D: Okay. Yeah, maybe. I have another one open, too, for some failures that I was seeing, and it basically boils down to this: when the Ceph testing code or whatever is injecting socket failures, sometimes the CLI reconnects and resends the command. The monitor and manager commands in general are always written to be idempotent, so you can always resend the same command multiple times and it'll still succeed, but the cephadm commands...
B: On that note, that is only true for the non-drive-group-based deployments, because if you deploy something without a drive group, it actually uses a different path: it uses ceph-volume prepare and ceph-volume activate and create, while the drive-group-based one uses ceph-volume lvm batch, and batch is idempotent. Oh well... yeah, it is.
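Loosely, the distinction being drawn is the one in this toy sketch (illustrative only; not ceph-volume's actual behavior): an idempotent handler can safely be re-run when the CLI reconnects and resends a command, while a one-shot create cannot.

```python
created = set()

def idempotent_create(name: str) -> str:
    """Safe to resend: 'create or keep' converges to the same state."""
    created.add(name)
    return "ok"

def oneshot_create(name: str) -> str:
    """Not safe to resend: a retry after a dropped reply fails."""
    if name in created:
        return "error: already exists"
    created.add(name)
    return "ok"

print(idempotent_create("osd.0"))  # ok
print(idempotent_create("osd.0"))  # still ok after a resend
print(oneshot_create("osd.1"))     # ok
print(oneshot_create("osd.1"))     # error on resend
```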
D: Okay, I'm just wondering... basically, I think we need to figure out how we're gonna persist the drive groups and what that's gonna look like. So maybe the...
B: But anyways, I mean, obviously the create command especially needs a bit more fanciness, in that it actually will tell you what would happen if you apply this drive group, and it also needs to be persistent; there's some room for improvement. But I agree, ultimately it should be added to the service specification.
D: Well, I mean, we could even separate it out into two commands, the way we did with the other stuff, where there's an apply osd, where you give a drive group specification that's persisted, and you name it and all that stuff; like you could have an osd.ssds and an osd.big-hard-drives or whatever.
D: You can name them and have match models and all that stuff, and you would actually see that when you do orch ls. And then, if we do have a daemon osd create command, it does sort of the one-shot "create the OSD on this device right now" and doesn't have any persistent state surrounding it. Yeah.
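For illustration, the two named specs mentioned might look roughly like this; the field names are assumptions loosely modeled on drive group specs, not verified cephadm syntax:

```python
# Two persisted, named OSD specs: selection criteria live in the spec,
# so `orch ls` can show osd.ssds and osd.big-hard-drives as services.
ssd_group = {
    "service_type": "osd",
    "service_id": "ssds",
    "placement": {"label": "osd"},
    "data_devices": {"rotational": False},       # match SSDs
}

hdd_group = {
    "service_type": "osd",
    "service_id": "big-hard-drives",
    "placement": {"label": "osd"},
    "data_devices": {"rotational": True, "size": ">1TB"},  # match big HDDs
}

for spec in (ssd_group, hdd_group):
    print(f"{spec['service_type']}.{spec['service_id']}")
```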
D: All right, the next thing is dependency updates... though actually, I guess, the last thing first. I think we have all the monitoring pieces merged: node-exporter, alertmanager, Prometheus, and Grafana. I think they're all there, so somebody who knows how this works just needs to make sure they all deploy and all get configured properly; I think that's the final step.
D: We'll leave that one for Hosmer, who is back from vacation; maybe he can look at it today, we'll see. But sort of the last annoying thing in all this is that most of these have dynamically generated configs, so the order that you deploy them in matters, and you have to reconfigure them after other things get deployed. I think we need to add some metadata to... well.
D: You don't know that you have to reconfigure it just from the config. So I think what we actually need is this: when we deploy a service, we have to remember what all the dependencies were at the time we did it, and record that somewhere, so that later on, if we come back and the dependency set has changed (new node-exporters added, or node-exporters removed, or something), we trigger a reconfig.
D: I was gonna add a method, probably to ServiceSpec or something, that says what the dependencies for a given service are, and it would return a list, and then base everything on that. For now we just have this relatively small set of dependencies; if that changes in the future and gets more complicated, then... I don't know that there's anything the user would need to specify.
D: That would have an extra... like, these dependencies aren't always the same... I'm just thinking out loud here for a sec. Yeah, but that's okay. So, with that...
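A minimal sketch of that idea, with hypothetical names (the dependency rules, the bookkeeping dict) chosen only to illustrate the record-at-deploy-time, compare-later, reconfigure-on-change flow:

```python
from typing import Dict, List

def get_dependencies(service_type: str,
                     running: Dict[str, List[str]]) -> List[str]:
    """Return the daemons a given service's generated config depends on."""
    if service_type == "prometheus":
        # Prometheus scrapes every node-exporter and every mgr.
        return sorted(running.get("node-exporter", [])
                      + running.get("mgr", []))
    if service_type == "grafana":
        return sorted(running.get("prometheus", []))
    return []

deployed_deps: Dict[str, List[str]] = {}  # deps recorded at deploy time

def maybe_reconfig(service: str, service_type: str,
                   running: Dict[str, List[str]]) -> None:
    """Reconfigure the service iff its dependency set has changed."""
    deps = get_dependencies(service_type, running)
    if deployed_deps.get(service) != deps:
        deployed_deps[service] = deps
        print(f"reconfiguring {service}: deps changed to {deps}")

running = {"node-exporter": ["host1", "host2"], "mgr": ["host1"]}
maybe_reconfig("prometheus.host1", "prometheus", running)  # first deploy
running["node-exporter"].append("host3")                   # new dependency
maybe_reconfig("prometheus.host1", "prometheus", running)  # triggers reconfig
```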