From YouTube: 2019-09-04 :: Ceph Developer Monthly
A: We could probably start; there are only really two things on the agenda. Now, we originally had someone who was going to talk about the wandering blob stuff, but, because he's in Israel and what time it is there right now, that wasn't gonna work so well. And then Eva was going to talk about the stuff he's been doing with GitHub actions.
A: The reason is that we have to record this in the PG log entry, or in the dup op entry, so that ops are idempotent: if they get resent, you still get the same result and return codes. So far that has been sufficient, but there are a bunch of cases where, for class ops in particular, and in RGW (I had a couple of use cases that I don't actually know what they are; maybe they're in the Trello card), this would be really useful.
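The replay mechanism being described can be sketched roughly like this toy model (illustrative only, not Ceph's actual C++ implementation; the names and the recorded-output idea are simplified stand-ins for the PG log / dup op machinery):

```python
# Toy sketch of idempotent op replay: record each op's return code (and,
# per the proposal above, its output data) keyed by the request id, and
# replay the recorded outcome when a duplicate of that request arrives.
class ToyPG:
    def __init__(self):
        self.store = {}
        self.dup_ops = {}  # reqid -> (return_code, outdata)

    def apply(self, reqid, key, value):
        if reqid in self.dup_ops:           # resent op: replay recorded result
            return self.dup_ops[reqid]
        self.store[key] = value
        result = (0, b"generated-id-123")   # rc plus a class-op output payload
        self.dup_ops[reqid] = result
        return result

pg = ToyPG()
first = pg.apply("client.1:42", "obj", b"data")
resent = pg.apply("client.1:42", "obj", b"data")
assert first == resent   # the duplicate sees an identical rc and output
```

The point of recording the output data, not just the return code, is that a class op whose reply payload matters (like a generated ID) stays idempotent across resends.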
A: That's basically it, I think. The only thing that worries me about this, well, there are two things. The first thing is that right now, when the OSD does ops, it can call into all kinds of random code in the OSD or in classes, and that may or may not be a write; and then, when it's all said and done, it explicitly just clears the output buffer.
A: If it's a write, just to make sure that no data is sent back to the client. But it's possible that there are class ops out there, or possibly even code in the OSD, that is populating that output buffer and then it's getting cleared; and so, if we just allowed it to be bounded instead of cleared, it might be filled with random gunk.
My thought is just to merge something that changes that to an assert, merge that to master, and let it sit there for a couple weeks, just to make sure that nothing that any of the QA runs trigger is actually populating the output buffer (and, if so, fix it), and then add the change that actually lets you populate it. So that seemed like a reasonable way to address that.
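That staged rollout could look something like this toy model (hypothetical names and bound; the real change would live in the OSD's C++ write path):

```python
# Toy model of the two-step plan: step 1 asserts the output buffer is
# empty at the end of a write (instead of silently clearing it); step 2,
# once QA shows nothing trips the assert, allows a bounded output buffer.
MAX_OUTDATA = 4096  # hypothetical bound on returned data


def finish_write_step1(outdata: bytes) -> bytes:
    # Merged first and left to soak: any op populating outdata on a
    # write would fail QA loudly here rather than being masked.
    assert not outdata, "some op unexpectedly populated outdata on a write"
    return b""


def finish_write_step2(outdata: bytes) -> bytes:
    # Follow-up change: writes may return data, but only a bounded amount.
    if len(outdata) > MAX_OUTDATA:
        raise ValueError("write outdata exceeds bound")
    return outdata

assert finish_write_step1(b"") == b""
assert finish_write_step2(b"pos=17") == b"pos=17"
```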
D: That's a way to find more information in the trailer, but I don't think there is any yet. That's the only thing that's gonna make me nervous: what are we actually gonna shove there? As long as it's limited, as long as we can make sure that it is limited, I think that should be fine. It's not a bad thing to do at all, but yeah, just getting the use cases right and setting the expectations correctly, this is what it's meant to do, would probably be the first thing to do.
B: What do you mean? Well, if you want to return a position there, or you want to return a position and some other piece of alignment information, yeah, in the future. So it's enough for this use case, I suppose, but generally speaking, piggybacking stuff in the PG log is gonna be weird if you expand it. So this is from...
A: Okay, and the second topic is RBD mirroring and making it really simple. The driver for this was that we're trying to get Rook to basically bring up RBD mirroring, and there's gonna be a whole bunch of crazy stuff on the Kubernetes side, as far as, like, mapping PVs to RBD images and the claims to PVs and making all the CSI stuff work; but even just getting the RBD mirroring set up in the first place is, like, a million steps in the docs, and so just wanting to make that super simple.
A: Ideally, you would have, like, the GUI open for two clusters; on one of them you'd click a button, it would give you something that you copy, and then you paste it into the other one, and, assuming the network connectivity is okay, they could just, like, join and set everything else up themselves. I think that's the level of simplicity, or whatever, that we want. So, I haven't looked at this yet, Jason, since you made notes.
E: But just from the starting point, let's say on cluster A you would run the command, the rbd mirror pool peer bootstrap, like the action of bootstrapping a given pool, and that would just output a base64-encoded JSON blob. I'm just saying base64 because that's easy to copy and paste; you don't have to worry about, like, oh, it didn't select the formatting, or whatever, yeah.
E: So yeah, that will give you your blob, which you can then paste on the other side, which would then be, I'm gonna say, cluster B. I'm still saying that you should keep the rbd mirror pool enable because, right now, we have two modes; Rook is gonna default to basically only do image mode. So that's really... it has to execute two CLI commands versus one CLI command. I don't really think that's that big of a deal; it's more about the key exchange, that was the awkward part.
E: So then I was just proposing the new command, the rbd mirror pool peer import, with an optional one-way flag. So if you only wanted to import it... and maybe one-way is even bad terminology for it; maybe it should be, like, receive-only, because if you've imported it, that basically means, like, you're just taking somebody else's keys, and that's just so you can read stuff from their cluster. So maybe, yeah, rather than read-only it would be receive-only; that'd be a better term for it, again.
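The token exchange being proposed might look like the following sketch (illustrative only; the JSON field names and the direction values are assumptions, not the final CLI design):

```python
import base64
import json


def bootstrap_create(site_name, mon_host, key):
    """Cluster A: bundle what the peer needs into one copy-pasteable blob."""
    token = {"fsid": "hypothetical-fsid", "site_name": site_name,
             "mon_host": mon_host, "key": key}
    # base64 so the blob survives copy/paste without formatting worries
    return base64.b64encode(json.dumps(token).encode()).decode()


def bootstrap_import(blob, direction="rx-tx"):
    """Cluster B: decode the blob and record the peer, optionally receive-only."""
    token = json.loads(base64.b64decode(blob))
    return {"peer_site": token["site_name"], "direction": direction}

blob = bootstrap_create("site-a", "10.0.0.1:6789", "AQB...secret")
peer = bootstrap_import(blob, direction="rx-only")
assert peer == {"peer_site": "site-a", "direction": "rx-only"}
```

The direction flag captures the receive-only idea from the discussion: importing somebody else's keys only lets you read from their cluster unless you also hand yours back.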
E: This idea, yeah; I was trying to avoid having to do the pool enable, because then, in theory, once you import it, it could just copy the settings between them, like make sure that those pools are in the same mode, right, so you wouldn't see one in image mode and one in pool mode. Oh right.
A: Yeah, yeah. Well, when I was originally thinking about this, I thought it was gonna have to be this, like, weird handshaky thing where cluster A would write something into an object in cluster B, and cluster B would see it and perform something. But I forgot that the thing you're running on cluster A can actually just talk to cluster B if it needs to, so that makes it so much simpler, I guess.
E: I don't think it really needs to do a bootstrap user, because it would just... it's only for the remote side to talk to your local side. If you're using admin, it's gonna create it, because there's already a profile out there called the rbd-mirror profile, like profile rbd, but...
E: It doesn't need one, though, because the JSON file would have the key and caps for the rbd-mirror profile, and it would just directly import it; and then cluster B, when it wants to inject something on the other side, would use that cap that it was given to talk to the other cluster. Right, okay, okay; it could create the key, so I guess, yeah.
The only thing you would test would be that, if you did an auth get-or-create key, you're doing that in the same process; so that'd be, like, you issued a mon command and it gave you a bogus key back on the same cluster. You know what I mean? Yes, you can log into the same cluster using that key to verify that it works, prior to giving it over to cluster A, yeah.
A: I guess I'm just thinking, like, the next step here is looking at the status of the rbd-mirror daemons and verifying that they're up and happy. Is that just a matter of looking at, like, ceph -s on those clusters? You should see rbd-mirror with, like... is it, like, daemon status or something like that? What's...
E: The service status tells you that the daemons are running, but then there's the service dump. I always mess it up, yeah; it's, like, the status or the dump, but one of them. Sorry, status, 'cause that's the one that then shows you, like... the rbd-mirror daemon injects, like, a little JSON-formatted status into it, saying, like, hey, for this given pool I'm replicating this many images.
A: Yeah, no, but we could make it... I mean, we usually have, like, lots of information crammed into one concise string in some, like, goofy ad hoc way; but maybe, instead of just saying the daemon is running, say which pools are okay, warning, or error, or something like that. I do think it would be nice if you could just type ceph -s and you would see, like, they're okay.
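The per-pool rollup being suggested could work roughly like this (a toy of the service-map idea; the JSON shape and the daemon/pool names are invented for illustration):

```python
import json

# Toy service map: each rbd-mirror daemon injects a small JSON status blob.
service_map = {"rbd-mirror": {}}


def daemon_report(daemon_id, pool_status):
    service_map["rbd-mirror"][daemon_id] = json.dumps(pool_status)


def summarize():
    # What a ceph -s style rollup might show: per-pool OK/WARN/ERR instead
    # of just "daemon is running".
    out = {}
    for blob in service_map["rbd-mirror"].values():
        for pool, st in json.loads(blob).items():
            out[pool] = st["health"]
    return out

daemon_report("a", {"rbd": {"health": "OK", "image_count": 5}})
assert summarize() == {"rbd": "OK"}
```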
E: Yeah, well, so on the topic of rbd-mirror: as part of this snapshot mirroring that we're working on right now, right now the status is kind of like, you're only looking at the status of your side, like, how are things looking on your side. So one of the tweaks, just as part of this, is then to basically mirror the status on both sides, so you can say, like, yeah...
E: Yeah, that'd be good, 'cause then that'd be something... there's already a ticket open that, after this all gets in, we then update the dashboard to also expose that, where you can really show the status of the individual images, saying, like, yeah, it's up and replaying, and the other side is up and replaying it, it's syncing and, you know, 54% behind, or what have you.
C: One thing that came up recently is the respawn thing. I didn't realize this, but apparently the manager respawns itself if the active modules change. That seems kind of surprising, yeah.
A: It's because I couldn't get Python to tear down properly without, like, crashing or getting stuck; I can't remember what it was. Basically, the CPython stuff just didn't clean up in a nice way.
A: I think that, well, I think, yeah, the trick, the thing we're also probably gonna hit, is that if we do this upgrade orchestration through the orchestration layer, then the manager will actually get upgraded first, and then it'll be running this module that's orchestrating the upgrade; and as soon as all the monitors get upgraded, they're probably gonna, like, enable all the new always-on modules.
A: I am... okay, yeah, I think in principle it should be okay, but it'll be a little bit, it'll be slightly tricky, because we need to make sure that that cleanup is consistent with whatever the trim object path is doing; and right now trim object is the only way that clones ever get removed, so we'll just have to make sure we don't miss anything.
A: I have one short topic that we could discuss. One of the things that's been on my list forever is unifying the ceph tell and ceph daemon commands. I think, Josh, you brought this up the other day. Yeah, I went and looked at it today, and my recollection was that the MMonCommand and MCommand messages were all mixed up, and they are, in basically two ways.
A: MMonCommand is a paxos service message, so it has to go to the monitor, and it, like, blocks on quorum and all that stuff, and it's used for CLI commands. MCommand is used for all the other daemons, and it's mostly used for tell, except in the manager case: CLI messages can also get sent via an MCommand. I just pushed the first pull request in the cleanup, which basically adds a new MMgrCommand that is intended to be used for CLI commands; if a CLI command goes to the manager module, then it goes through an MMgrCommand that way.

MCommand will be used exclusively for tell, when you're sending it to an explicit daemon in sort of a lossy, one-shot sort of way. That would be the first part. And then the second part is that, right now, when you do tell on a monitor, it's just, like... it's not totally broken, but it's, like, super... it's not really very good.
A: It basically makes the mon client, like, try to drop all the other connections to monitors and only connect to one, and it sends those tell commands as MMonCommands, which is also sort of wrong. I think the fix is going to be to have, like, a different path for tell to the monitor that actually sends a different type of message and hopefully doesn't disrupt all the other monitor sessions; and that's probably not a super high priority. I don't know how important the mon tell commands are.
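The three-way message split being described can be caricatured like this (message and target names are simplified stand-ins, not the real C++ types' interfaces):

```python
# Toy router for the proposed split: MMonCommand for CLI handled by the
# monitor, MMgrCommand for CLI handled by mgr modules, and MCommand only
# for tell-style one-shot commands aimed at an explicit daemon.
def route(command_kind, target):
    if command_kind == "cli":
        return "MMgrCommand" if target == "mgr" else "MMonCommand"
    if command_kind == "tell":
        return "MCommand"  # lossy, one-shot, sent straight to the daemon
    raise ValueError(f"unknown command kind: {command_kind}")

assert route("cli", "mon") == "MMonCommand"
assert route("cli", "mgr") == "MMgrCommand"
assert route("tell", "osd.3") == "MCommand"
```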
A: Yeah, my initial thought, and I only just started to look at this today, was basically: there's the admin socket class that's attached to your CephContext; basically just add a method on there where you just pass it an MCommand, and it just dumps it into the same work queue that's doing your other admin socket commands, and so it's, like, one thread that's processing those.
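That single-worker idea, funneling tell-style MCommands into the same queue the admin socket already drains, can be sketched in Python (hypothetical structure, not the real AdminSocket class):

```python
import queue
import threading


class ToyAdminSocket:
    """One worker thread processes both asok and tell (MCommand) requests."""

    def __init__(self):
        self.work = queue.Queue()
        self.results = {}
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        # Single thread draining one queue: commands from either source
        # are serialized, so handlers never race with each other.
        while True:
            name, fn = self.work.get()
            self.results[name] = fn()
            self.work.task_done()

    def queue_asok(self, name, fn):
        self.work.put((name, fn))

    def queue_mcommand(self, name, fn):
        # Proposed addition: tell commands enter the very same queue.
        self.work.put((name, fn))

sock = ToyAdminSocket()
sock.queue_asok("perf dump", lambda: "{...}")
sock.queue_mcommand("tell status", lambda: "ok")
sock.work.join()
assert sock.results == {"perf dump": "{...}", "tell status": "ok"}
```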