From YouTube: Ceph Code Walkthroughs: Ceph Manager
All right, well, we'll start off with this code walkthrough of the Ceph manager with a little bit of background, I suppose. In general, the manager was originally created just to offload some work from the Ceph monitors, which were handling a lot of statistics collection from the OSDs and MDSs and other daemons, and that could cause them to become overloaded in a large cluster.
It's a service that runs independently and can persist things into the monitor or RADOS if it would like, but originally it was basically designed as a place to run anything that required a cluster-wide view, didn't require persistence, and could easily be done in Python.
So overall, the manager is written in a combination of a C++ core and Python modules. The core component is similar to other Ceph daemons: it uses all the same infrastructure for talking to other servers on the wire, and it acts like a Ceph client, connecting to the monitor and subscribing to some maps so it can get information about the rest of the cluster.
So today it supports many modules, from disk failure prediction to orchestration to internal cluster tasks like balancing PGs and managing long-running operations for RBD or CephFS.
So the entry point for the ceph-mgr daemon in general, if we start looking at the code a little bit, is, like other Ceph daemons, in the ceph_mgr.cc file.
It goes through the typical daemon setup that you'd see in other Ceph daemons: parsing arguments and environment variables, initializing everything as a Ceph daemon, picking out which network to bind to, and then going ahead and starting things up.
Another property of the manager as it stands today is that it's designed to run in an active/standby mode, where a single instance of the manager daemon is the active one. It continuously sends beacons to the monitor to let it know that it is still alive, so that if the active manager stops responding, the monitor will tell one of the other standby managers to become the active manager.
We also have some extra infrastructure to receive a callback whenever there is a configuration change. In particular, we're checking for any config keys that are set with the mgr prefix, and we'll handle those within the Python module registry. Any modules that have their own options have them stored under the mgr prefix in the monitor's config-key structure, and modules can register interest in these config options changing and get notified when they do.
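As a rough illustration of that mechanism from the module side, here is a minimal Python sketch of a manager module declaring an option and reacting when it changes. The option name and log message are invented for this example; MODULE_OPTIONS, get_module_option, and the config_notify hook are the standard mgr_module.py pieces being described here, though the details vary a bit between releases.

from mgr_module import MgrModule, Option

class ExampleModule(MgrModule):
    # Options declared here end up stored by the monitor under the
    # mgr/<module>/<option> config prefix discussed above.
    MODULE_OPTIONS = [
        Option(name='poll_interval',   # hypothetical option for this sketch
               type='int',
               default=30,
               desc='how often the example module wakes up'),
    ]

    def config_notify(self):
        # Called when one of our declared options changes on the monitor.
        interval = self.get_module_option('poll_interval')
        self.log.info('poll_interval is now %s', interval)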
And then it keeps repeating itself by scheduling another event after the mgr tick period, which I believe is five seconds by default, to keep doing this while in standby mode.
So you can see where we become the active manager by looking at how we handle the MgrMap. When the monitor notices that the existing active manager isn't responsive, that it hasn't sent beacons within a long enough period of time, the monitor will choose a new active manager and go ahead and put that into a new version of the MgrMap, which all the standbys are subscribed to. When the standby that was chosen as active receives that new map, it'll go ahead and create a new Mgr instance with the existing pieces that were already created (the MonClient, the maps, the CephFS client and the Objecter), which were already initialized in standby mode, and start using that new class to handle things. So let's take a look at the active class.
You can see that we have a number of handlers for different types of maps: handle_mgr_digest, FS maps, OSD maps, central log messages, service maps, mon maps and mgr maps. Those are almost all the kinds of maps that you can have in a Ceph cluster; they're just subscribed to by the manager here and received from the mons as they update.
You can see, when the manager is becoming active, when it's first getting initialized, it subscribes to a bunch of these extra maps, in addition to different prefixes in the key/value config space for the manager.
It also does a little bit of loading of some of its persistent state that has been added over the years, things like the metadata about the different devices in the cluster that it stores in the monitor's config keys.
Before we dive more into the modules, though, let's take a look at the DaemonServer instance and what that looks like. If you recall, that's one of the key pieces of the active manager: it's initialized when the active manager is created, and it handles a lot of the message processing and data gathering from the rest of the cluster.
And then it's got its own references to the messenger, the MonClient, and the internal state of the manager, connecting the rest of the cluster to the Python modules, and so on.
And then, again, it starts scheduling a function to be called periodically, which is a common pattern across Ceph daemons. So every five seconds, or whatever the mgr tick period is, it will send a report and adjust PGs.
So in send_report we're collecting all of that together and sending off the current snapshot of the PG state to the monitor.
We're also including a few other pieces in there: any health checks that manager modules or other services have added, and progress events from the progress module, or that may have been added by other modules. We include all of those in this report we're sending to the monitor, which will essentially be cached by the monitor and used to respond to your standard CLI commands.
This includes things like I/O rates, recovery rates, the amount of data that was compressed, all that sort of thing.
And then, finally, we have a whole bunch of metrics that we would like to report to the monitor as well. These would be things like slow operations or other sorts of health metrics; I think slow ops is the main one.
A
It
can
be
slow
offs
from
the
monitor
flaws
from
the
osds
slow-ops
from
mds.
Almost
every
service
reports
lost
at
this
point.
It assembles that report and sends it off to the monitors. That's kind of the core of the active manager: getting all these statistics off the wire from the OSDs, getting health checks and slow-op reports from all the different kinds of services and daemons in the cluster, summarizing them, and reporting them to the monitor.
Secondly, it's adjusting PGs in the same loop. This is how Ceph handles large changes in the number of PGs in the cluster: it makes them happen over time. So when you set pg_num or pgp_num from the command line, internally this ends up setting values called pg_num_target and pgp_num_target, and the manager, in this function, will slowly go through and adjust the actual values to get closer to those targets until they're reached.
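For reference, the command-line side of this is the usual pool command, for example ceph osd pool set <pool> pg_num <n> (and similarly for pgp_num); the pool name and value here are just placeholders.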
It takes a look at the current ratios of PGs that are in states we probably don't want to do anything with, things like PGs that are inactive, or under recovery for some reason, or that don't have anything reported about them because they're unknown; you don't want to make too many changes to a cluster at once.
And if the actual value isn't equal to where it should be, the target, it will increase or decrease the PG count accordingly, a small interval at a time. For increasing PGs it has a limit around here, so it doesn't increase by too many at once, and for decreasing PGs it only decreases a single PG at a time, since when you're merging PGs it's a much slower process: you first need to move the two PGs that are going to be merged together onto the same OSD and then do the merge.
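To make the shape of that logic concrete, here is a tiny Python sketch of the step-toward-target idea just described. It's only an illustration of the behavior, not the actual C++ code in DaemonServer; the function name and the step limit are made up.

def step_toward_target(actual, target, max_step):
    # Move the actual PG count toward the target a little at a time per tick.
    if actual < target:
        # splitting: may step by several PGs at once, up to some limit
        return min(target, actual + max_step)
    if actual > target:
        # merging: one PG at a time, since merges are much slower
        return actual - 1
    return actual

# e.g. actual 64, target 256, max_step 32:
# 64 -> 96 -> 128 -> 160 -> ... -> 256 over successive ticks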
A
So
in
general,
if
you
want
to
change
how
the
fpga
behavior
in
terms
of
how
the
number
is,
is
changed
in
ceph,
this
is
where
you
would
do
it,
and
you
have
similar
logic
for
ptp
num
here,
which
is
the
pg
numbers.
That
is
the
logical
number
of
pgs
and
pgpn.
Is
the
fpg
placement
number
the
number
that
I
actually
used
for
placing
pgs?
You could have, let's say, a thousand PGs, but with a pgp_num (a placement number) of one, which means they're all placed together on the same set of OSDs.
As the comment says, this class is our registry of modules. It doesn't do everything; it's purely about setting up the environment for the modules to run in, like the Python thread, the main thread for the Python interpreter. In general, the way that the manager uses Python is a little bit unusual.
It's using the CPython interpreter, but it's using it with separate sub-interpreters, so that each module runs in its own main thread and doesn't have any kind of crossover with the interpreters for other modules, which means that they don't share any state. It also means they can't have bugs that affect each other's interpreters, or clashes with global variable names or that sort of thing, global state in general.
This was added basically to deal with upgrades and to make sure that, if there are certain dependencies that always need to be there, those get turned on additionally.
Modules generally have two broad categories of things associated with them: the first is a list of commands that they respond to, that they implement, and the second is a list of options.
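On the Python side those show up as class attributes on the module. Here is a minimal sketch of the commands half, using the classic dict-based COMMANDS list plus a handle_command method; the command name and output are invented, and the exact handle_command signature has varied a bit across releases.

import errno

from mgr_module import MgrModule

class ExampleModule(MgrModule):
    COMMANDS = [
        {
            'cmd': 'example status',   # hypothetical command for this sketch
            'desc': 'show example module status',
            'perm': 'r',
        },
    ]

    def handle_command(self, inbuf, cmd):
        if cmd['prefix'] == 'example status':
            # return (return code, stdout, stderr)
            return 0, 'everything looks fine', ''
        return -errno.EINVAL, '', 'unknown command'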
These start off as pretty generic, empty structures to begin with, and then we'll add things into them.
This is setting up all the generic base module and base object classes for Python, before the module actually gets started and has the thread that it's going to run in yet.
It loads dependencies, and then finally loads the actual module itself, which is the subclass of the generic mgr module.
This indicates, if I recall correctly, whether it has to be synchronously polled; I can't remember the exact implementation here, but in general we're looking at the general pieces, the commands. Let's take a look at what this looks like in an example. All of the manager modules are under src/pybind/mgr.
This is just the interface, not the implementation; you can see we have some of these with typing defined.
All of these starting with an underscore are generally used by the internals of the manager, not by the modules themselves.
You can see, similar to the commands, it's walking through this list of options, and each option ends up being registered.
And we end up populating all these options into the C++ code that's tracking them, so that they can be set and gotten through the standard Ceph CLI APIs for configuration, with the mgr prefix.
There's also an option to have a standby class, where you can have a version of the module that runs in standby mode and a version that runs in active mode, and those have different behavior. I don't believe this is used by very many modules, but the idea is there.
And we end up asking the module whether it can run.
That will include things like constructing the actual class; it'll have the methods that you'll see within the Python modules, plus things like notifying the module about different types of updates, whether those are from different types of maps or elsewhere.
The next thing to mention here is the PyFormatter. This is commonly used across the manager code to construct Python versions of objects: similar to how we have a JSON formatter or an XML formatter for dumping objects, we just have a Python version that nests things as lists and dictionaries, the native Python types essentially, that the manager modules can interact with directly.
This is called running a remote method. This can be used by a few modules, for example to create progress events, which are handled by the progress module, and for other kinds of inter-module communication.
It will call the method that you specified on the other module, with the args and keyword arguments that you passed in.
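From inside a module that looks something like the sketch below; 'progress' is a real module, but the method name and arguments are invented here, so check the target module for its actual interface.

from mgr_module import MgrModule

class ExampleModule(MgrModule):
    def poke_progress(self):
        # Call a method implemented by another manager module.
        return self.remote('progress', 'some_method', 'arg1', key='value')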
Here's how we're passing commands to a module.
So the basic module base class is implemented within C++; there are a few of these Python classes that are actually implemented in C++ here.
We have a generic way of describing CLI commands in Python.
You can see this is the base class from which all the modules inherit. So this is the Python version, which inherits from the C++ BaseMgrModule as well as the manager module logging mixin.
You covered BaseMgrModule, but I don't think you covered ActivePyModules, with an 's' on the end, different from ActivePyModule. That's where a lot of the actual implementations of the functions that are seen in the ceph module, or in the Python mgr module, are located.
Yeah, so you can see this has a number of methods that are used by a lot of different sorts of modules, like getting and setting things in the key/value store, handling commands, and so on. So let's look at an example of one of these pieces.
It has the config map from the config store, the cluster state, references to the MonClient and all the other pieces you need to talk to the cluster, the DaemonServer, the registry of Python modules, and this is where the progress events are kept internally within the manager as well.
So this is essentially what the manager uses to handle all sorts of things, because this class handles tracking which modules are which and calling into them to handle whatever needs to be handled. So for get_health_checks, we're just getting all the checks from all the modules: we loop through them all and call the get_health_checks method on each individual one, which accumulates the checks into this health_check_map_t structure.
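The module-side counterpart, as far as I recall, is set_health_checks, which is how a module contributes entries that end up in that accumulated map. The check name and texts below are invented for the sketch.

from mgr_module import MgrModule

class ExampleModule(MgrModule):
    def raise_example_warning(self):
        self.set_health_checks({
            'EXAMPLE_MODULE_STUCK': {          # hypothetical check name
                'severity': 'warning',
                'summary': 'example module noticed something odd',
                'detail': ['first detail line', 'second detail line'],
            },
        })

    def clear_example_warning(self):
        # Passing an empty dict clears the checks this module raised.
        self.set_health_checks({})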
Similarly for handle_command, this is where we're calling into the function that we were just looking at. We look for the module that the command is directed towards, and if it exists, we'll go ahead and call its own handle_command method.
I believe these are actually collected in DaemonServer itself, but they're exposed to the modules through ActivePyModules here.
So, the general entry point into the module after it's initialized. The initialization runs when it's first created, but the main entry point is the serve method, which runs in a loop, generally with a condition like this: while it's supposed to be running, do what it needs to do.
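In a Python module that pattern usually looks something like this sketch; the 30-second interval is arbitrary, and the run flag plus threading.Event is just the common idiom, not a requirement.

import threading

from mgr_module import MgrModule

class ExampleModule(MgrModule):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.run = True
        self.event = threading.Event()

    def serve(self):
        # Main entry point: loop until shutdown() tells us to stop.
        while self.run:
            # ... do the module's periodic work here ...
            self.event.wait(30)    # sleep, but wake early on shutdown
            self.event.clear()

    def shutdown(self):
        self.run = False
        self.event.set()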
And, to use a good example, I guess another thing that's common among modules is the self_test method, which is run by the test functions just to verify that things are working as expected in an environment where the manager is fully set up and running.
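A self_test typically just exercises a few of the manager APIs and raises if anything looks wrong, reachable, if I remember right, through the selftest module's remote calls. A sketch, with the specific assertion as only an example:

from mgr_module import MgrModule

class ExampleModule(MgrModule):
    def self_test(self):
        # Confirm we can pull cluster state through the manager internals.
        osdmap = self.get('osd_map')
        assert 'epoch' in osdmap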
So I think that's about all the time we have for today. I think there's a lot more to cover as well, so hopefully we can get Sage to come back in the future and give us a bit more detail on different aspects of this.
No problem. All right, thanks, everybody. I'll see you for a future manager walkthrough at a later point in time, to be determined. Thanks, everyone.